Saturday, June 30, 2007

IIS 7.0

Be Extensible and Manageable. The vision for IIS 7.0 was to take the speed, reliability, and security of the IIS 6.0 codebase and turn it into a highly extensible and manageable Web server platform powerful enough to run Web applications of the future.

At the core of the IIS 7.0 release is a completely modular Web server, composed of more than 40 features that can be assembled into small-footprint Web servers optimized for the desired role in your application topology. These features are built on top of a new extensibility layer that allows developers to extend or replace virtually any aspect of the server, in native code or with the Microsoft® .NET Framework. IIS 7.0 offers extensibility throughout its runtime, management, and operational features to help you build end-to-end solutions for your specific needs.

On top of the core platform, IIS 7.0 tackles many of the problems associated with the manageability and operation of the server. It features a brand-new configuration system that enables fully delegated management of sites and finally makes xcopy deployment of Web applications a reality. The new management APIs and diagnostic features make deployment, administration, and troubleshooting of the server significantly easier and more convenient than ever before.

The centralized configuration store of previous IIS releases, known affectionately as the metabase, is gone. IIS 7.0 features a new delegated configuration system based on a hierarchy of distributed XML configuration files. This hierarchy consists of a global applicationHost.config file, which contains server-level configuration defaults, and distributed web.config files within the application's directory structure.

In the past, IIS application settings had to be explicitly configured in the machine-level metabase repository before the application could function correctly. With distributed web.config files, applications encapsulate the required server configuration within their directory structure. This dramatically simplifies deployment, allowing self-contained applications to be simply copied to the target server's application directory and thus be up and running immediately with the desired settings.

IIS 7.0 continues to support existing setup code that uses the Admin Base Object (ABO) APIs to write to the legacy metabase, as well as scripts that use the higher-level Active Directory® Service Interfaces (ADSI) and Windows Management Instrumentation (WMI) objects to configure IIS. It does this by providing a compatibility layer that emulates the ABO APIs, on which all the other legacy configuration APIs are layered, allowing such scripts to read and change configuration just as they did in previous releases of IIS.

Remote Management Tool. IIS 7.0 provides a tool that enables delegated management, letting application owners manage their applications remotely without having administrative access to the server computer. Server administrators, of course, have complete control over what management features are delegated to application owners.

Microsoft.Web.Administration, the new .NET Administration API, makes it easy for managed code applications to programmatically provision IIS sites and applications, access important state and diagnostic information, and otherwise configure the server.

In IIS 7.0, ASP.NET comes in two versions: Classic mode and Integrated mode. Classic mode works exactly the same way as it did in previous versions of IIS. Integrated mode, the new platform default, uses a brand-new engine to provide unparalleled integration with the IIS Web server.

A few other points worth noting:

  • IIS 7.0 on Windows Vista is limited to 10 concurrent requests.
  • The modular nature of the server allows administrators to remove unneeded server features, saving memory and CPU during request processing.
  • The new IIS Output Cache supports reusing responses to expensive dynamic pages on the server, so the costly rendering and database work doesn't have to be repeated just to return the response to the client.

From IIS 7.0: Explore The Web Server For Windows Vista And Beyond by Mike Volodarsky

Monday, June 11, 2007

Can SQL Server Run Entirely in RAM?

Database pages that have been accessed are stored in the buffer pool. (Pages are read into RAM only when needed, and SQL Server™ doesn't re-read pages that are already cached.) These pages are kept in RAM as long as possible—that is, until memory becomes a problem. So let's say you've configured SQL Server with an appropriate memory footprint and you've run some queries that touched every page in the database. The entire database would then be in RAM, and it would stay there until you ran out of buffer pool space to hold new pages, or until other processes on the system caused physical memory pressure.

During each checkpoint, any modified pages are flushed to disk, so committed changes are taken care of. Log records are not cached in the same way: as soon as a transaction commits, its log records are flushed to disk. So even if dirty pages were still only in RAM when SQL Server stopped, a restart will replay the log records. Don't take this to mean you should rely on the log in this way, though; a huge log could mean a very long recovery period.

So pages stay in RAM until space is needed; a least recently used (LRU) algorithm is applied so that hot pages stay longer. Most of the time, you won't need the entire database in the cache anyway. So you shouldn't buy a ton of RAM to hold pages that probably aren't touched too often—you likely won't see any performance improvement.
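
To see how much of each database is actually sitting in the buffer pool, a query along these lines against the sys.dm_os_buffer_descriptors DMV (SQL Server 2005 and later; requires VIEW SERVER STATE) gives a rough picture. The column aliases are only illustrative:

-- Approximate buffer pool usage per database (buffer pool pages are 8 KB)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;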

From Running in RAM, Agent Jobs, and More by Nancy Michell.

Guidance On Forest Trusts

The following documents provide valuable guidance on creating forests and implementing trusts:

From Running in RAM, Agent Jobs, and More by Nancy Michell.

Monitoring SQL Server Health

See "Troubleshooting Performance Problems in SQL Server 2005" and "Tips for Using SQL Server Performance Monitor Counters"). The key to running traces in the most efficient manner is to run the traces locally on the machine running SQL Server and pipe the output to a local file.

In addition to capturing and analyzing performance counters and profiler information, you may want to identify where the engine is spending its time waiting, which tells you whether you are I/O constrained, CPU constrained, memory constrained, and so on. For more on this, see "How do you measure CPU pressure?".
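
As a sketch of that idea, the sys.dm_os_wait_stats DMV exposes cumulative wait statistics since the last restart; a query like the following shows the top waits. The short list of benign wait types excluded here is only a starting point:

-- Top waits since the last restart (or since the stats were last cleared)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;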

You should also consider implementing Microsoft® Operations Manager (MOM) 2005 as part of your SQL Server 2005 monitoring plans. The SQL Server 2005 Management Pack is quite useful, but even without it you can get a good understanding of what's going on with your servers just by using the MOM Operator Console and reporting warehouse, along with the associated reports that expose counters such as CPU, memory, disk queuing, and so on.

From Predicates, Stored Procedures, and More by Nancy Michell

Saturday, June 09, 2007

Rebuilding Indexes in SQL Server

At the simplest level, indexes are rebuilt by constructing a new copy of the index and then deleting the old one. This means that essentially there are two copies of the index for a short period of time. Constructing the new index could require as much database file space as the original index, and if the rebuild requires a sort operation, then an additional 20 percent of the index size is required for the sort.

So the worst case is that rebuilding an index requires 1.2 times the space of the old index. If the database file does not have enough free space, the file will have to grow as the operation proceeds. If autogrow is not enabled, or there is not enough space on the disk volume, there may not be enough free space available and the rebuild operation will fail.

Whether the operation fails or not, the extra space allocated to the database file is not freed up after the rebuild operation completes. The assumption is that the space will be used for regular database operations.
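
If the extra space required in the user database is the main concern, one option in SQL Server 2005 is to push the rebuild's sort work into tempdb instead; the table and index names below are hypothetical:

-- The roughly 20 percent of sort space comes out of tempdb
-- rather than the user database file
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (SORT_IN_TEMPDB = ON);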

Using DBCC INDEXDEFRAG (or ALTER INDEX … REORGANIZE in SQL Server™ 2005) has the advantage that it uses almost no additional database file space, but it can take longer and generate a lot more transaction logging than an index rebuild. DBCC INDEXDEFRAG is always fully logged, regardless of the recovery model in use, whereas in the simple recovery model an index rebuild will be bulk-logged. There are a variety of pros and cons to each method, and they are explained more fully in the SQL Server Index Defragmentation Best Practices whitepaper.

Before deciding how to fix fragmentation, first decide whether to fix it at all. Depending on the type of operations the index is used for, fragmentation may have no effect on performance, in which case fixing it is a waste of resources. The whitepaper covers this in detail.
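
As a starting point, sys.dm_db_index_physical_stats lets you measure fragmentation before deciding whether to act at all; the thresholds and the table name below are illustrative only, not hard rules:

-- Check fragmentation in the current database (LIMITED is the cheapest scan mode)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;  -- small indexes rarely benefit from defragmentation

-- Reorganize in place (minimal extra space, always fully logged) ...
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;
-- ... or rebuild, which needs the extra space discussed above
-- ALTER INDEX ALL ON dbo.MyTable REBUILD;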

From Rebuilding Indexes, Disk Queue Length, and More by Nancy Michell

EXECUTE AS Clause (T-SQL)

In SQL Server 2005 you can define the execution context of the following user-defined modules: functions (except inline table-valued functions), procedures, queues, and triggers.

By specifying the context in which the module is executed, you can control which user account the SQL Server 2005 Database Engine uses to validate permissions on objects that are referenced by the module. This provides additional flexibility and control in managing permissions across the object chain that exists between user-defined modules and the objects referenced by those modules. Permissions need to be granted to users only on the module itself, without having to grant them explicit permissions on the referenced objects; only the user that the module runs as must have permissions on the objects it accesses.

USE Sales;
GO
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'CompanyDomain\SqlUser1'
AS
SELECT user_name();
GO
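
As a rough usage sketch (the grantee name is hypothetical), a caller then needs only EXECUTE permission on the procedure; the statements inside it are permission-checked as CompanyDomain\SqlUser1:

GRANT EXECUTE ON dbo.usp_Demo TO SqlUser2;
GO
-- When SqlUser2 runs this, the SELECT executes in the context of
-- CompanyDomain\SqlUser1 and returns that name.
EXECUTE dbo.usp_Demo;
GO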

From SQL Server 2005 Books Online

Perform highly privileged activities in SQL Server

  • Define the minimal permissions necessary to perform the action.
  • Define a database role that contains those permissions.
  • Assign a user to that role.
  • Create a stored procedure with the EXECUTE AS clause that performs the action, as sketched below.
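
A minimal sketch of that pattern follows; every name in it is hypothetical, and the privileged action chosen for illustration is truncating a staging table (TRUNCATE TABLE requires ALTER permission on the table):

-- 1-2. A role holding only the minimal permission required.
CREATE ROLE StagingMaintenanceRole;
GRANT ALTER ON dbo.StagingData TO StagingMaintenanceRole;
GO
-- 3. A loginless user assigned to that role.
CREATE USER StagingMaintenance WITHOUT LOGIN;
EXEC sp_addrolemember 'StagingMaintenanceRole', 'StagingMaintenance';
GO
-- 4. The procedure executes as that user, so callers need only
--    EXECUTE permission on the procedure itself.
CREATE PROCEDURE dbo.usp_TruncateStaging
WITH EXECUTE AS 'StagingMaintenance'
AS
    TRUNCATE TABLE dbo.StagingData;
GO
GRANT EXECUTE ON dbo.usp_TruncateStaging TO SomeLowPrivilegedUser;
GO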

From Best Practices Analyzer, Multi-Core Processors, and More by Nancy Michell

T-SQL to find the total free virtual address space and the largest free block

-- Summarize the process virtual address space (VAS) using the
-- sys.dm_os_virtual_address_dump DMV: regions with a non-zero allocation
-- base address are reserved, regions with a zero base address are free.
WITH VASummary (Size, Reserved, Free) AS
(
    SELECT
        Size     = VaDump.Size,
        Reserved = SUM(CASE (CONVERT(INT, VaDump.Base) ^ 0)
                       WHEN 0 THEN 0 ELSE 1 END),
        Free     = SUM(CASE (CONVERT(INT, VaDump.Base) ^ 0)
                       WHEN 0 THEN 1 ELSE 0 END)
    FROM
    (
        -- Reserved regions, grouped by allocation base address
        SELECT CONVERT(VARBINARY, SUM(region_size_in_bytes)) AS Size,
               region_allocation_base_address AS Base
        FROM sys.dm_os_virtual_address_dump
        WHERE region_allocation_base_address <> 0x0
        GROUP BY region_allocation_base_address
        UNION
        -- Free regions
        SELECT CONVERT(VARBINARY, region_size_in_bytes),
               region_allocation_base_address
        FROM sys.dm_os_virtual_address_dump
        WHERE region_allocation_base_address = 0x0
    ) AS VaDump
    GROUP BY Size
)
SELECT SUM(Size * Free) / 1024 AS [Total avail mem, KB],
       CAST(MAX(Size) AS INT) / 1024 AS [Max free size, KB]
FROM VASummary
WHERE Free <> 0;

From Best Practices Analyzer, Multi-Core Processors, and More by Nancy Michell

Saturday, June 02, 2007

Using GC Efficiently

Maoni Stephens's blogs:

Using GC Efficiently – Part 1
Using GC Efficiently – Part 2
What’s new in the CLR 2.0 GC?

Rico Mariani's blog:
Mid-life crisis

Friday, June 01, 2007

GC Performance Counters

The .NET Memory performance counters are updated only when a collection occurs; they are not updated according to the sampling rate used in the Performance Monitor application.

The % Time in GC indicates the percentage of time spent in the GC since the end of the last collection. If you see a really high value (say 50 percent or more), then you should look at what is going on inside the managed heap. If, instead, % Time in GC does not go above 10 percent, it usually is not worth spending time trying to reduce the time the GC spends in collections because the benefits will be marginal.

If you think your application is spending too much time performing garbage collections, the next performance counter to check is Allocated Bytes/sec. This shows the allocation rate. Unfortunately, this counter is not highly accurate when the allocation rate is very low. The counter may indicate 0 bytes/sec if the sampling frequency is higher than the collection frequency since the counter is only updated at the beginning of each collection. This does not mean there were no allocations; rather, the counter was not updated because no collection happened in that particular interval.

If you suspect that you are allocating a lot of large objects (85,000 bytes or bigger), check the Large Object Heap (LOH) size. This is updated at the same time as Allocated Bytes/sec.

A high allocation rate can cause a lot of collection work and as a result might be reflected in a high % Time in GC. A mitigating factor is if objects usually die young, since they will typically be collected during a Gen 0 collection. To determine how object lifecycles impact collection, check the performance counters for Gen collections: # Gen 0 Collections, # Gen 1 Collections, and # Gen 2 Collections. These show the number of collections that have occurred for the respective generation since the process started. Gen 0 and Gen 1 collections are usually cheap, so they don't have a big impact on the application's performance.

As a rule of thumb, a healthy ratio between generation collections is one Gen 2 collection for every ten Gen 1 collections. If you are seeing a lot of time spent in garbage collection, it could be that Gen 2 collections are being done too often.

You may find that % Time in GC is high but the allocation rate is not high. This can happen if many of the objects you allocate survive garbage collection and are promoted to the next generation. The promotion counters—Promoted Memory from Gen 0 and Promoted Memory from Gen 1—can tell you if the promotion rate is an issue. You want to avoid a high promotion rate from Gen 1.

High values for the Gen 1 and Gen 2 heap sizes are usually associated with high values in the promotion rate counters. You can check the GC heap sizes by using Gen 1 heap size and Gen 2 heap size. There is a Gen 0 heap size counter, but it doesn't measure Gen 0 size. Instead, it indicates the budget of Gen 0—meaning how much you can allocate in Gen 0 before the next Gen 0 collection will be triggered.

In all scenarios where you use a lot of objects that need finalization—for instance, objects that rely on COM components for some processing—you can take a look at the Promoted Finalization-Memory from Gen 0 counter. This tells you the amount of memory that cannot be reused because the objects that use the memory need to be added to the finalization queue and therefore cannot be collected immediately. IDisposable and the using statements in C# and Visual Basic® can help reduce the number of objects that end up in the finalization queue, thus reducing the associated cost.

It's also worth checking if the application is inducing too many collections. The # Induced GC counter tells you how many collections have been induced since the process started. In general it is not recommended that you induce GC collections. In most cases, if # Induced GC indicates a high number, you should consider it a bug. Most of the time people induce GCs in hopes of cutting down the heap size, but this is almost never a good idea. Instead, you should find out why your heap is growing.

Adapted from CLR Inside Out: Investigating Memory Issues by Claudio Caldato and Maoni Stephens.