Wednesday, June 17, 2009


Building Web Apps without Web Forms by Chris Tavares
Cutting Edge: ASP.NET Presentation Patterns by Dino Esposito
The Life And Times of an ASP.NET MVC Controller by Scott Allen
RESTful Services With ASP.NET MVC by Aaron Skonnard
Building Testable ASP.NET MVC Applications by Justin Etheredge


Sunday, June 14, 2009

Programming with FileStreams in SQL Server 2008

Programming with FileStreams in SQL Server 2008 by Bob Beauchemin.


Saturday, June 13, 2009

Build Better Data-Driven Apps With Distributed Caching

Build Better Data-Driven Apps With Distributed Caching by Aaron Dunnington.

Although this article is about Velocity, a future Microsoft product, the idea of classifying data as reference data, activity data, and resource data can be applied anywhere.


Monday, June 08, 2009

Anti-Patterns To Avoid In Entity Framework N-Tier Applications

Anti-Patterns To Avoid In N-Tier Applications by Daniel Simmons

Memory Usage Auditing For .NET Applications

Memory Usage Auditing For .NET Applications by Subramanian Ramaswamy and Vance Morrison.

Monday, April 13, 2009

Optimizing A Large Scale Software + Services Application

Optimizing A Large Scale Software + Services Application by Udi Dahan.

Sunday, February 15, 2009

Two Good ASP.NET Articles

Performance: Scaling Strategies for ASP.NET Applications by Richard Campbell and Kent Alstad
ASP.NET Workflow: Web Apps That Support Long-Running Operations by Michael Kennedy


Monday, September 15, 2008

LINQ: deferred execution and lazy evaluations

Deferred execution and lazy evaluation are techniques for getting maximum performance from LINQ queries over large data sets while keeping the UI responsive.

Read more details on Increase LINQ Query Performance by Jared Parsons.
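Here's a quick sketch of deferred execution (my own illustration, not from the article): the query object is only a description of work, and nothing is actually filtered until the sequence is enumerated.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        // No filtering happens here -- "evens" is just a description of a query.
        IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

        // Mutating the source before enumeration affects the results,
        // which proves execution was deferred until the foreach below.
        numbers.Add(6);

        foreach (var n in evens)
            Console.WriteLine(n); // prints 2, 4, 6
    }
}
```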

Tuesday, July 29, 2008

The Open Closed Principle

Patterns in Practice: The Open Closed Principle by Jeremy Miller

Build Scalable Systems That Handle Failure Without Losing Data

Build Scalable Systems That Handle Failure Without Losing Data by Udi Dahan


Friday, July 04, 2008

OutOfMemoryException and Pinning

OutOfMemoryException and Pinning

Some fundamentals of memory

Some fundamentals of memory

Thursday, July 03, 2008

Large Object Heap

For LOH, though, because compaction is expensive the CLR team chose to sweep them, making a free list out of dead objects that can be reused later to satisfy large object allocation requests. Adjacent dead objects are made into one free object.

If I don't have enough free space to accommodate the large object allocation requests, I will first attempt to acquire more segments from the OS. If that fails, then I will trigger a generation 2 garbage collection in hope of freeing up some space.

During a generation 2 garbage collection, I take the opportunity to release segments that have no live objects on them back to the OS. Memory from the last live object to the end of the segment is decommitted. And the free spaces remain committed though they are reset, meaning the OS doesn't need to write data in them back to disk.

A garbage collection occurs if one of the following conditions happens:
  • Allocation exceeds the generation 0 or large object threshold (this is the most typical case).
  • System.GC.Collect is called.
  • The system is in a low-memory situation; this happens when I receive the high-memory notification from the OS.

LOH Performance Implications

Allocation costs. The CLR makes the guarantee that the memory for every new object I give out is cleared. This means the allocation cost of a large object is completely dominated by memory clearing (unless it triggers a garbage collection). If it takes two cycles to clear 1 byte, it takes 170,000 cycles to clear the smallest large object (85,000 bytes). For a 16MB object on a 2GHz machine, it will take approximately 16ms to clear the memory.

Collection costs. LOH and generation 2 are collected together. If either one's threshold is exceeded, a generation 2 collection will be triggered. If many large objects are allocated on a very temporary basis and you have a big SOH, you could be spending too much time running garbage collections; not to mention that the allocation cost can really add up if you keep allocating and letting really large objects go.

From Large Object Heap Uncovered by Maoni Stephens.
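A tiny sketch of the threshold described above (the 85,000-byte cutoff is documented; the rest is just my illustration):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Objects of 85,000 bytes or more (including object overhead)
        // are allocated on the Large Object Heap; smaller ones go on
        // the small object heap.
        byte[] small = new byte[84000]; // comfortably under the threshold
        byte[] large = new byte[85000]; // lands on the LOH

        // Because the LOH is collected together with generation 2,
        // GC.GetGeneration reports LOH objects as gen 2.
        Console.WriteLine(GC.GetGeneration(small)); // typically 0 for a fresh SOH object
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```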

Tuesday, June 24, 2008

LinkedIn Architecture

Friday, March 07, 2008

Streamed vs. Buffered in WCF

Tuesday, March 04, 2008

TreeView Binding in WPF

Binding a TreeView to a DataSet

Thursday, February 21, 2008

DataContractSerializer.MaxItemsInObjectGraph in WCF

Several settings control how large a list can be returned from a WCF call. DataContractSerializer.MaxItemsInObjectGraph, Binding.maxBufferSize, and Binding.maxReceivedMessageSize are the ones most commonly used.

According to the SDK, DataContractSerializer.MaxItemsInObjectGraph = the maximum number of items in an object graph to serialize or deserialize.
NetTcpBinding.MaxReceivedMessageSize = the maximum size for a received message that is processed by the binding.
NetTcpBinding.MaxBufferSize = a value that specifies the maximum size, in bytes, of the buffer used to store messages in memory.

The default value for all three settings is 65,536. You can raise these numbers to support large returned lists.

One thing you should know is that TheNumberYouSet != MaxListCountExpected. TheNumberYouSet should be far bigger than MaxListCountExpected, because every object and sub-object in the graph counts toward the limit, not just the top-level list items.

To set DataContractSerializer.MaxItemsInObjectGraph, you create an endpoint behavior (rather than a service behavior) and add the behavior to the endpoint (not at the service level).
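Here's a sketch of what that configuration might look like. The service, contract, and behavior names are made up; the element names follow the standard system.serviceModel schema.

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <!-- Endpoint behavior, not a service behavior -->
      <behavior name="LargeGraphBehavior">
        <dataContractSerializer maxItemsInObjectGraph="1000000" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <bindings>
    <netTcpBinding>
      <!-- Raise the message/buffer limits alongside the graph limit -->
      <binding name="LargeMessageBinding"
               maxBufferSize="10485760"
               maxReceivedMessageSize="10485760" />
    </netTcpBinding>
  </bindings>
  <services>
    <service name="MyService">
      <endpoint address="" binding="netTcpBinding"
                bindingConfiguration="LargeMessageBinding"
                contract="IMyContract"
                behaviorConfiguration="LargeGraphBehavior" />
    </service>
  </services>
</system.serviceModel>
```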

Wednesday, February 20, 2008

SQL Server Remote Connection between XP server and Vista server

I got the following link from a coworker.

The Microsoft UI Automation Library

The Microsoft UI Automation library is included in the .NET Framework 3.0 as part of Windows Presentation Foundation (WPF). Read Dr. James McCaffrey's article The Microsoft UI Automation Library for more details.


Sunday, November 25, 2007

WCF Binding

In WCF, the communication concepts (the transport, the message encoding, and the suite of protocols) are handled by the channel stack.

It’s the binding, however, that provides the recipe for building the channel stack needed to properly process the messages. When you load a service, WCF follows the instructions outlined by the binding description to create each channel stack. The binding binds your service implementation to the wire through the channel stack in the middle.

Within the WCF programming model, bindings are represented by the System.ServiceModel.Channels.Binding class. Figure 2 summarizes the built-in binding classes that come with WCF out of the box.

Unlike the HTTP bindings, the various "Net" bindings were not designed for interoperability. In fact, each was designed for optimizing a different communication scenario when you can safely assume you have the Microsoft® .NET Framework 3.0 installed on both sides.

From Service Station: WCF Bindings In Depth by Aaron Skonnard.
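To make the "recipe" idea concrete: a CustomBinding is literally an ordered list of channel-stack ingredients. The pairing below is my sketch of roughly what BasicHttpBinding assembles for you.

```csharp
using System.ServiceModel.Channels;

class BindingDemo
{
    // A binding is a recipe for the channel stack: message encoding
    // on top, transport at the bottom. This hand-built binding
    // approximates the stack BasicHttpBinding produces.
    static Binding CreateTextOverHttp()
    {
        return new CustomBinding(
            new TextMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}
```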

Friday, September 28, 2007

Scott Berkun on Politics

According to Scott Berkun, politics is the skill of managing people and organizations.
  • Politics is not a dirty word.
  • All leaders have political and power constraints.
  • The ratio of power to responsibility is constant.
  • Politics is a kind of problem solving.

Any political or management action that takes place, no matter how stupid or evil it seems, is always one of a limited number of possible choices managers have. The alternatives might be even worse for the project than the choice that was made. Without understanding something of the constraints, judgment will always be based more on venting frustration than on the reality of the situation.

Monday, September 24, 2007

Serializing and Encoding in WCF

Whereas serialization defines how .NET objects map to XML Infosets, the encoding defines how the XML Infoset is written out to a stream of bytes.

The serializer is considered part of the service contract because it directly impacts your code. The encoding isn't considered part of the service contract, but rather a configuration detail since it doesn't impact your code—you control the encoding by configuring the endpoint's binding.

The separation of serialization from encoding makes it possible to build your applications on a consistent data model (the XML Infoset) while providing flexibility in representation (the encoding).

Windows Communication Foundation supports three serializers: XmlSerializer, DataContractSerializer, and NetDataContractSerializer. DataContractSerializer is the default.

WCF currently supports the following encodings: text, binary, Message Transmission Optimization Mechanism (MTOM), and your own custom encodings.

From Service Station: Serialization in Windows Communication Foundation by Aaron Skonnard.
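A small sketch of the separation described above (type and field names are mine): the serializer defines the Infoset, and swapping the writer changes only the encoding, not the contract.

```csharp
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
class Person
{
    // Only members marked [DataMember] are serialized, and unlike
    // XmlSerializer, private fields are allowed.
    [DataMember] private string name = "Ada";
    [DataMember] public int Age = 36;
}

class SerializerDemo
{
    static void Main()
    {
        var serializer = new DataContractSerializer(typeof(Person));
        using (var stream = new MemoryStream())
        // Same serializer, different bytes on the wire: swap in
        // XmlDictionaryWriter.CreateBinaryWriter(stream) to switch
        // from the text encoding to the binary encoding.
        using (var writer = XmlDictionaryWriter.CreateTextWriter(stream))
        {
            serializer.WriteObject(writer, new Person());
        }
    }
}
```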

Monday, September 17, 2007

Security in WCF


WCF supports five client credential types: None, UserName, Windows, Certificate, and IssuedToken.

Default Security in Standard Bindings

basicHttpBinding supports the WS-I basic profile. This particular binding doesn't provide confidentiality, integrity, or authentication (CIA) by default like most of the others. The most popular way to secure this binding is simply to run it over HTTPS.

wsHttpBinding uses message security by default, with WS-Security and WS-SecureConversation. The default client credential type is Windows. One of the most common security tweaks to use on this binding is to switch it to use TransportWithMessageCredential.

netTcpBinding encodes each SOAP envelope using a proprietary binary encoding of the XML Infoset instead of the traditional angle-bracket encoding. By default this binding uses transport security with negotiated Windows authentication, and it is very efficient. If you want raw speed for Web services on a Windows-based intranet, you should seriously consider using this binding.

Discovering Client Identity

By far the simplest way to discover a client's identity is to leverage Thread.CurrentPrincipal. You can see which groups the user belongs to by calling WindowsPrincipal.IsInRole. For security reasons you really should specify the fully qualified group name, which includes the domain or machine on which the group is defined.

WCF doesn't always set Thread.CurrentPrincipal. It does that only if PrincipalPermissionAttribute is used or if the configuration says it should. Instead of relying on Thread.CurrentPrincipal, you should use ServiceSecurityContext.Current.WindowsIdentity to get the client's identity if one is available.

If the client is using an issued token credential to authenticate, you'll need to use ServiceSecurityContext.AuthorizationContext to pick up those details. If the client is authenticating with a certificate not mapped to a Windows account, ServiceSecurityContext.Current.PrimaryIdentity can be used.

Transparency is important when you're building a secure system. If the security features you build into the system aren't relatively transparent, users will be put off and will generally try to avoid using them.

From Security Briefs: Security in Windows Communication Foundation by Keith Brown.
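A minimal sketch of the identity-discovery guidance above (names are illustrative):

```csharp
using System.ServiceModel;

class IdentityHelper
{
    // Inside a service operation, prefer ServiceSecurityContext over
    // Thread.CurrentPrincipal, which WCF populates only when
    // PrincipalPermissionAttribute or configuration asks for it.
    public static string GetCallerName()
    {
        ServiceSecurityContext ctx = ServiceSecurityContext.Current;
        if (ctx == null || ctx.IsAnonymous)
            return "(anonymous)";
        if (ctx.WindowsIdentity != null)
            return ctx.WindowsIdentity.Name;   // Windows caller
        return ctx.PrimaryIdentity.Name;       // e.g. a certificate caller
    }
}
```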

Saturday, September 01, 2007

WCF Addressing

You can specify an absolute address for each endpoint or you can supply the ServiceHost with a base address and then specify relative paths for each endpoint. Specifying absolute addresses is a little easier to understand, but the base address technique typically makes things easier to manage.

The base address technique is mostly a convenience to reduce the number of places you’ll have to make changes when modifying the locations of your endpoints. Windows Communication Foundation also uses the base HTTP address by default to expose metadata when GET retrieval has been enabled (via the serviceMetadata behavior). You can, however, change the retrieval location using the behavior’s httpGetUrl property.

It’s also possible to specify the base addresses in the configuration file along with the endpoints themselves by listing the base addresses within the host element for each service.

IIS Addressing Considerations

When hosting in IIS, you simply map a .svc endpoint to your service class, configure the service endpoints and behaviors in web.config, and let WCF manage the process of creating and configuring the ServiceHost instance at runtime.

In this hosting scenario, the base HTTP address is determined by the IIS virtual directory housing the service along with the name of the .svc file. If you want to change the base address for your endpoints, you’ll need to move the service to a different IIS virtual directory.

Not only does IIS control the base address, it forces all of your endpoints to actually use the same base address (unlike self-hosting). As a result, it really only makes sense to use relative addresses when hosting in IIS.

Suppose you have a file named calc.svc and you place it in a virtual directory that corresponds to http://localhost:8080/calcservice. The base address for this service will be http://localhost:8080/calcservice/calc.svc. If a relative address "secure" is provided, its address becomes http://localhost:8080/calcservice/calc.svc/secure. Remember that calc.svc is part of the base address, so it has to work this way.

Logical vs. Physical Addresses

WCF refers to the logical address as "Address" or "Endpoint Address" and the physical address as "ListenUri".

From Service Station: WCF Addressing In Depth by Aaron Skonnard.
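For self-hosting, the base-address technique looks roughly like this (my sketch; the contract and paths are invented):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
interface ICalc
{
    [OperationContract]
    int Add(int a, int b);
}

class CalcService : ICalc
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        // One base address; each endpoint supplies only a relative
        // path, so relocating the service means changing a single URI.
        Uri baseAddress = new Uri("http://localhost:8080/calcservice");
        using (var host = new ServiceHost(typeof(CalcService), baseAddress))
        {
            // Resolves to http://localhost:8080/calcservice/basic
            host.AddServiceEndpoint(typeof(ICalc), new BasicHttpBinding(), "basic");
            // Resolves to http://localhost:8080/calcservice/secure
            host.AddServiceEndpoint(typeof(ICalc), new WSHttpBinding(), "secure");
            host.Open();
            Console.ReadLine();
        }
    }
}
```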

Friday, August 10, 2007

The InstanceContextMode Property

You can apply the ServiceBehavior attribute with the InstanceContextMode property set to the following:
  • InstanceContextMode.PerCall - Per-Call Services.
  • InstanceContextMode.PerSession - Per-Session Services, default.
  • InstanceContextMode.Sharable - Shareable Services, not included in the current release.
  • InstanceContextMode.Single - Singleton Services, with sessions or without sessions.

A service that implements a session-aware contract requires that all the endpoints exposing the contract use bindings that support a reliable transport session. One exception to this rule is the named pipes binding.

A shareable service behaves much like a per-session service, with one important additional aspect: the instance has a unique ID, and when a client establishes a session with a shareable service, the client can pass a logical reference to that instance to another client. The second client will establish an independent session but will share the same instance. Also, each of these sessions may use different inactivity timeouts, and expire independently of any other session.

WCF Essentials: Discover Mighty Instance Management Techniques For Developing WCF Apps by Juval Lowy.
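A quick sketch of the attribute in use (types are mine): with PerCall, a fresh instance services every request, so per-instance state doesn't survive between calls.

```csharp
using System.ServiceModel;

[ServiceContract]
interface ICounter
{
    [OperationContract]
    int Increment();
}

// PerCall: a new CounterService is created for each request, so from
// the caller's point of view Increment() always returns 1. Switch to
// InstanceContextMode.PerSession (the default) and the count advances
// across calls within a session.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class CounterService : ICounter
{
    int count;
    public int Increment() { return ++count; }
}
```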

Tuesday, August 07, 2007

Some Basic WCF Concepts

WCF unifies the existing suite of .NET distributed technologies into a single programming model that improves the overall developer experience through a consistent architecture, new levels of functionality and interoperability, and all the extensibility points you could want. WCF was designed according to the tenets of service orientation.

A service is a piece of code you interact with through messages. Services are passive. They wait for incoming messages before doing any work. Clients are the initiators. Clients send messages to services to request work.

Services expose one or more endpoints where messages can be sent. Each endpoint consists of an address, a binding, and a contract. The address specifies where to send messages. The binding describes how to send messages. And the contract describes what the messages contain. Clients need to know this information before they can access a service.

Services can package up endpoint descriptions to share with clients, typically by using Web Services Description Language (WSDL). Then clients can use the provided service description to generate code within their environment capable of sending and receiving the proper messages.

WCF Programming Model

With WCF, you're either writing services that expose endpoints or you're writing clients that interact with endpoints. Hence, endpoints are central to the WCF programming model and infrastructure.

When building a WCF service, you typically start by defining a .NET interface definition to serve as the service contract. Then you implement the service contract in a .NET class, known as the service type, and configure its behavior. Next, you define the endpoints the service will expose, specifying the address, binding, and contract for each one. Finally, you host the service type in an application using the WCF hosting infrastructure. Once the service type is hosted, clients can retrieve its endpoint descriptions and begin integrating with it.

When building a WCF client, you first need the description of the target endpoint you want to access. The endpoint description can be used to dynamically create a typed proxy. Then you can write code against the typed proxy to access the service by sending the appropriate messages to the target endpoint.

Service Contracts and Dispatch Behavior

WCF uses the information found in the service contract to perform dispatching and serialization. Dispatching is the process of deciding which method to call for an incoming SOAP message. Serialization is the process of mapping between the data found in a SOAP message and the corresponding .NET objects used in the method invocation. This mapping is controlled by an operation's data contract.

Data Contracts

The way WCF serializes .NET classes depends on the serialization engine in use. The default serialization engine is known as DataContract, a simplified version of XmlSerializer, the default serialization engine used in ASMX today.

With DataContract, only fields marked with DataMember will be serialized. And you can serialize private fields.

Attribute List

ServiceContract, OperationContract, DataContract, DataMember, MessageContract, MessageBody, MessageHeader, ServiceBehavior.

From Distributed .NET: Learn The ABCs Of Programming Windows Communication Foundation by Aaron Skonnard.

Saturday, July 21, 2007

Performance Implications on Collections

When you want additions, removals, and lookups to be very quick, and when you are not concerned about the order of the items in the collections, your first inclination should be to use a System.Collections.Generic.Dictionary.

On the other hand, with a List, inserting and removing items can take a variable amount of time. (List stores items in an underlying array that maintains order.) If your usage pattern requires few deletions and mostly additions, and it is important for you to keep the collection in order, you may still want to choose a List.

Using the new LinkedList collection can potentially help you improve performance in scenarios where you need to maintain order yet still achieve fast inserts. Unlike List, LinkedList is implemented as a chain of dynamically allocated objects. In comparison to List, inserting an object in the middle only requires updating two connections and adding the new item. The downside of a linked list from a performance point of view is increased activity by the garbage collector as it has to traverse the entire list to make sure objects were not disposed of. Additionally, performance problems can arise with large linked lists due to the overhead associated with each node as well as where in memory each node lives.

SortedDictionary uses a balanced tree implementation as the underlying data store; this provides relatively fast lookups and maintains items in sorted order, but insertions will most likely be slower.

From CLR Inside Out: Collections Best Practices by Inbar Gazit.
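The trade-offs above in one small sketch (values are arbitrary):

```csharp
using System;
using System.Collections.Generic;

class CollectionsDemo
{
    static void Main()
    {
        // Dictionary: near-O(1) add/remove/lookup, no ordering guarantee.
        var index = new Dictionary<string, int>();
        index["apples"] = 3;

        // List: ordered, but inserting near the front shifts every
        // later element in the underlying array.
        var ordered = new List<int> { 1, 3, 4 };
        ordered.Insert(1, 2); // O(n) shift

        // LinkedList: ordered, and insertion is cheap once you hold a
        // node -- it just relinks two references.
        var linked = new LinkedList<int>(new[] { 1, 3 });
        linked.AddAfter(linked.First, 2); // now 1, 2, 3

        // SortedDictionary: balanced tree, keys stay sorted,
        // insertions cost O(log n).
        var sorted = new SortedDictionary<int, string> { { 2, "b" }, { 1, "a" } };
        foreach (var kv in sorted)
            Console.WriteLine(kv.Key); // prints 1 then 2
    }
}
```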

Generic Counterparts to Non-Generic Collections

Sunday, July 08, 2007

Separating Administrative Events and Operational, Analytical, or Trace/Debug Messages

If there's one thing an IT pro really hates, it's an application that makes a lot of noise in the application event log, drowning out the really important messages that well-behaved applications produce from time to time. You shouldn't be writing operational, analytical, or trace/debug messages to the application log! These are more appropriate for a custom application log.

From Security Briefs: Improve Manageability through Event Logging by Keith Brown.

Localization Strategy Used in the Windows Event Log

The Windows® event log was designed with localization and efficiency in mind. For example, a well-designed event found in the Application log should contain just enough prose to explain a problem to the administrator and provide a suggested course of action. The vast majority of this prose should be localized for the reader, who may very well be on a remote machine in a different country. To support reading the log in your own native language, the localizable prose in the message is not actually stored in the log; rather, it is merged with the log locally as you read it. The actual prose is stored in a localized resource DLL on the reader's machine. This also has the effect of keeping the log files smaller and reducing the amount of network traffic required to read them.

Before the .NET Framework came along, this was all very clear to developers because you had to create and register these resource DLLs yourself. A typical message in a resource DLL might look like this: "Failed to connect to database %1 using connection string %2. Check to see that the database is available."

In this example, the only data about the event that would be stored in the event log would be the event ID, source, category, and a few other bits of metadata, along with the arguments supplied when the event was logged: the database name (%1) and the connection string (%2). When you use a tool such as Event Viewer to read an event like this, Windows will load the message resource DLL installed on your machine, look up the localized message based on the event ID, then merge the arguments from the log in order to display the message to you.

The way Windows figures out which resource DLL to use is by looking at the name of the event source in the log and mapping it to a registry key. This registry key contains the path to the resource DLL that should be used to display the message.

In version 1.x of the .NET Framework, the simple call to CreateEventSource wires your event source up with a default message file. This file is called EventLogMessages.dll, and it contains 65,536 event descriptions, each of which consists of the string "%1", a placeholder for whatever string you want to write. While this may have encouraged a few more people to use the event log, it defeated the purpose of having message files in the first place: to allow localized viewing of messages in the reader's language, and to keep the size of the event logs in check.

Fortunately, version 2.0 of the .NET Framework remedies this and allows you to gain the full power of the event log by registering a message file when you create your event source. This lets you support localized categories and messages, and it reduces the size of your messages in the log.

From Security Briefs: Improve Manageability through Event Logging by Keith Brown.
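A sketch of the 2.0-era registration Keith describes (the source name and DLL path are invented; you'd still have to author and install the message resource DLL itself):

```csharp
using System.Diagnostics;

class EventLogSetup
{
    static void Main()
    {
        // .NET 2.0 lets you register a real message resource DLL
        // instead of inheriting the catch-all EventLogMessages.dll.
        var data = new EventSourceCreationData("MyApp", "Application")
        {
            MessageResourceFile = @"C:\MyApp\MyAppMessages.dll",
            CategoryResourceFile = @"C:\MyApp\MyAppMessages.dll",
            CategoryCount = 2
        };
        if (!EventLog.SourceExists("MyApp"))
            EventLog.CreateEventSource(data);
    }
}
```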

Sunday, July 01, 2007

Write Code Once For Both Mobile And Desktop Apps

Differences in the tools, the languages, the libraries, and the runtime engine affect the goal of writing cross-platform code.

The .NET Compact Framework achieves 80 percent of the relevant functionality found in the .NET Framework with only 20 percent of the footprint. Major areas of the .NET Framework that are absent from the compact version include: ASP.NET, CLR hosting, code access security (CAS), binary serialization, Reflection.Emit, and CodeDOM. Windows Presentation Foundation and Windows Workflow Foundation are not going to be supported any time soon.

System.Data.SqlServerCe, System.Net.IrDA, Microsoft.WindowsMobile.DirectX and Microsoft.WindowsCE.Forms are four device-specific assemblies that are in the .NET Compact Framework and not in the .NET Framework. So the .NET Compact Framework is not a strict subset of the .NET Framework.

IL opcodes that are not supported by the .NET Compact Framework are: jmp, calli, refanyval, refanytype, mkrefany, arglist, localloc, unaligned, and tail.

The implementations of the garbage collector and the just-in-time (JIT) compiler are different in the .NET Compact Framework than they are in the .NET Framework.

Note that the public key token for the .NET Compact Framework assemblies starts with 9 whereas the desktop equivalents start with B. You can use that information to quickly determine whether what you are deploying or referencing is a desktop or a device framework assembly.

Implementation approaches: retargeting, conditional branching, or a mix of both.

A runtime check is not enough because of JIT compiling.

Share the Code, Not the Binary.

There are two limitations with the retargetable approach: First, you can’t use any desktop-only types or methods in your code. They simply will not compile. Second, you must be careful not to load any device-specific assemblies on the desktop as they will fail with run-time exceptions.

From Share Code: Write Code Once For Both Mobile And Desktop Apps by Daniel Moth

Saturday, June 30, 2007

IIS 7.0

Be Extensible and Manageable. The vision for IIS 7.0 was to take the speed, reliability, and security of the IIS 6.0 codebase and turn it into a highly extensible and manageable Web server platform powerful enough to run Web applications of the future.

At the core of the IIS 7.0 release is a completely modular Web server, comprised of more than 40 features that can be assembled into small-footprint Web servers optimized for the desired role in your application topology. These features are built on top of a new extensibility layer that allows developers to extend or replace virtually any aspect of the server, in native code or with the Microsoft® .NET Framework. IIS 7.0 offers extensibility throughout its runtime, management, and operational features to help you build end-to-end solutions for your specific needs.

On top of the core platform, IIS 7.0 tackles many of the problems associated with the manageability and operation of the server. It features a brand new configuration system that enables fully delegated management of sites and finally makes xcopy deployment of Web applications a reality. The new management APIs and diagnostic features make deployment, administration, and troubleshooting of the server significantly easier and more convenient than ever before.

The centralized configuration store of the previous IIS releases, known affectionately as the metabase, is gone. IIS 7.0 features a new delegated configuration system based on a hierarchy of distributed XML configuration files. This hierarchy is comprised of a global applicationHost.config file, which contains server-level configuration defaults, and distributed web.config files within the application's directory structure.

In the past, IIS application settings had to be explicitly configured in the machine-level metabase repository before the application could function correctly. With distributed web.config files, applications encapsulate the required server configuration within their directory structure. This dramatically simplifies deployment, allowing self-contained applications to be simply copied to the target server's application directory and thus be up and running immediately with the desired settings.
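A minimal example of such a distributed web.config carried inside the application's directory (the defaultDocument module is just one illustration; element names follow the IIS 7.0 schema):

```xml
<configuration>
  <system.webServer>
    <!-- Server configuration travels with the app, so xcopy
         deployment brings the settings along. -->
    <defaultDocument>
      <files>
        <add value="home.aspx" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```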

IIS 7.0 continues to support existing setup code that uses the Admin Base Object (ABO) APIs to write to the legacy metabase or scripts that use the higher-level Active Directory® Service Interfaces (ADSI) and Windows Management Instrumentation (WMI) objects to configure IIS. It does this by providing a compatibility layer that emulates the ABO APIs, on top of which all other legacy configuration APIs are based, allowing such scripts to read and change configuration just like they did in the previous releases of IIS.

Remote Management Tool. IIS 7.0 provides a tool that enables delegated management, letting application owners manage their applications remotely without having administrative access to the server computer. Server administrators, of course, have complete control over what management features are delegated to application owners.

Microsoft.Web.Administration, the new .NET Administration API, makes it easy for managed code applications to programmatically provision IIS sites and applications, access important state and diagnostic information, and otherwise configure the server.
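For instance, provisioning a site programmatically might look like this (site name and path are made up):

```csharp
using Microsoft.Web.Administration;

class IisDemo
{
    static void Main()
    {
        // ServerManager is the entry point of the new managed API.
        using (var manager = new ServerManager())
        {
            manager.Sites.Add("MySite", @"C:\inetpub\mysite", 8081);
            manager.CommitChanges(); // writes applicationHost.config
        }
    }
}
```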

In IIS 7.0, ASP.NET comes in two versions: Classic mode and Integrated mode. Classic mode works exactly the same way as it did in previous versions of IIS. Integrated mode, the new platform default, uses a brand new engine to provide unparalleled integration with the IIS Web server.
  • IIS on Windows Vista is limited to 10 concurrent requests at a time.
  • The modular nature of the server allows administrators to remove unneeded server features, saving memory and CPU usage during request processing.
  • The new IIS Output Cache provides support for re-using responses to expensive dynamic pages on the server, alleviating the need to perform the expensive rendering and database transactions in order to return the response to the client.

From IIS 7.0: Explore The Web Server For Windows Vista And Beyond by Mike Volodarsky

Monday, June 11, 2007

Can Sql Server Run Entirely In Ram?

Database pages that have been accessed are stored in the buffer pool. (Pages are placed in RAM only when needed, and SQL Server™ doesn't preload pages.) These pages are kept in RAM as long as possible—that is, until memory becomes a problem. So let's say you've set SQL Server to the proper memory footprint and you've run some queries that touched every page in the database. The entire database would then be in RAM and it would stay there until you ran out of buffer pool space to hold new pages, or until other processes on the system caused physical memory pressure.

During each checkpoint, any modified pages are flushed to disk, so committed changes are taken care of. Log records are not cached per se. As soon as a commit is issued, the records are flushed to disk. So even if all dirty pages are still in RAM, a restart of SQL Server will replay the log records. Don't take this to mean you should rely on the log in this way. A huge log could mean a very long recovery period.

So pages stay in RAM until space is needed; a least recently used (LRU) algorithm is applied so that hot pages stay longer. Most of the time, you won't need the entire database in the cache anyway. So you shouldn't buy a ton of RAM to hold pages that probably aren't touched too often—you likely won't see any performance improvement.

From Running in RAM, Agent Jobs, and More by Nancy Michell.
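If you're curious how much of each database is actually sitting in the buffer pool, SQL Server 2005's DMVs can tell you. A rough query (assuming the standard 8KB page size):

```sql
-- Approximate buffer pool usage per database (SQL Server 2005+).
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb   -- 8KB pages -> MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```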

Guidance On Forest Trusts

The following documents provide valuable guidance on creating forests and implementing trust:

From Running in RAM, Agent Jobs, and More by Nancy Michell.

Monitoring SQL Server Health

See "Troubleshooting Performance Problems in SQL Server 2005" and "Tips for Using SQL Server Performance Monitor Counters". The key to running traces in the most efficient manner is to run the traces locally on the machine running SQL Server and pipe the output to a local file.

In addition to capturing and analyzing performance counters and profiler information you may want to look into identifying where the engine is spending its time waiting, which tells you if you are I/O constrained, CPU constrained, memory constrained, and so on. For more on this see "How do you measure CPU pressure?".

You should also consider implementing Microsoft® Operations Manager (MOM) 2005 as a part of your SQL Server 2005 monitoring plans. The SQL Server 2005 Management Pack is quite useful. Even without it you can get a good understanding of what's going on with your servers just by using the MOM operator console and warehouse and associated reports that expose counters like CPU, Memory, Disk Queuing, and so on.

From Predicates, Stored Procedures, and More by Nancy Michell

Saturday, June 09, 2007

Rebuilding Indexes in SQL Server

At the simplest level, indexes are rebuilt by constructing a new copy of the index and then deleting the old one. This means that essentially there are two copies of the index for a short period of time. Constructing the new index could require as much database file space as the original index, and if the rebuild requires a sort operation, then an additional 20 percent of the index size is required for the sort.

So the worst case is that rebuilding an index requires 1.2 times the space of the old index. If the database file does not have enough free space, the file will have to grow as the operation proceeds. If autogrow is not enabled, or there is not enough space on the disk volume, the file cannot grow and the rebuild operation will fail.

Whether the operation fails or not, the extra space allocated to the database file is not freed up after the rebuild operation completes. The assumption is that the space will be used for regular database operations.

Using DBCC INDEXDEFRAG (or ALTER INDEX … REORGANIZE in SQL Server™ 2005) has the advantage that it uses almost no additional database file space, but it can take longer and generate a lot more transaction logging than an index rebuild. DBCC INDEXDEFRAG is always fully logged, regardless of the recovery mode in use, whereas in simple recovery mode an index rebuild will be bulk-logged. There are a variety of pros and cons to each method and they are explained more fully in the SQL Server Index Defragmentation Best Practices whitepaper.

Before making the decision on how to fix fragmentation, first decide whether to fix fragmentation at all. Depending on the type of operations the index is used for, fragmentation may have no effect on performance and so fixing it is a waste of resources. The whitepaper has great detail.
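To make that decision in practice, you can measure fragmentation first and pick the cheaper remedy when it suffices. A sketch for SQL Server 2005; the 10 and 30 percent cutoffs are common rules of thumb rather than hard limits from the whitepaper:

```sql
-- Inspect fragmentation in the current database, then choose a remedy.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;

-- Under ~30% fragmented: ALTER INDEX ... REORGANIZE (little extra space, more log).
-- Above ~30%: ALTER INDEX ... REBUILD (needs up to 1.2x the index size).
```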

Rebuilding Indexes, Disk Queue Length, and More by Nancy Michell


Defining Execution Context in SQL Server 2005

In SQL Server 2005 you can define the execution context of the following user-defined modules: functions (except inline table-valued functions), procedures, queues, and triggers.

By specifying the context in which the module is executed, you can control which user account the SQL Server 2005 Database Engine uses to validate permissions on objects that are referenced by the module. This provides additional flexibility and control in managing permissions across the object chain that exists between user-defined modules and the objects referenced by those modules. Permissions must be granted to users only on the module itself, without having to grant them explicit permissions on the referenced objects. Only the user that the module is running as must have permissions on the objects accessed by the module.

USE Sales;
GO
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'CompanyDomain\SqlUser1'
AS
SELECT user_name();  -- returns CompanyDomain\SqlUser1 when the procedure runs
GO

From SQL Server 2005 Books Online

Perform highly privileged activities in SQL Server

  • Define the minimal permissions necessary to perform the action.
  • Define a database role that contains those permissions.
  • Assign a user to that role.
  • Create a stored procedure with the EXECUTE AS clause that will perform those functions.
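The steps above can be sketched in T-SQL. All object and principal names here are hypothetical; the point is that callers get EXECUTE on the procedure only, while the privileged action runs under the owner's context:

```sql
-- Step 4: a procedure that performs the privileged action as its owner.
CREATE PROCEDURE dbo.usp_TruncateStaging
WITH EXECUTE AS OWNER
AS
    TRUNCATE TABLE dbo.Staging;   -- requires ALTER on the table, which callers lack
GO

-- Steps 1-3: a role carrying only the minimal permission, and a member.
CREATE ROLE StagingOperators;
GRANT EXECUTE ON dbo.usp_TruncateStaging TO StagingOperators;
EXEC sp_addrolemember 'StagingOperators', 'SomeUser';
```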

Best Practices Analyzer, Multi-Core Processors, and More by Nancy Michell

T-SQL to find the size of total and free virtual address space

With VASummary(Size, Reserved, Free) AS
(SELECT
    Size = VaDump.Size,
    Reserved = SUM(CASE (CONVERT(INT, VaDump.Base) ^ 0)
        WHEN 0 THEN 0 ELSE 1 END),
    Free = SUM(CASE (CONVERT(INT, VaDump.Base) ^ 0)
        WHEN 0 THEN 1 ELSE 0 END)
FROM
(
    SELECT CONVERT(VARBINARY, SUM(region_size_in_bytes))
        AS Size,
        region_allocation_base_address AS Base
    FROM sys.dm_os_virtual_address_dump
    WHERE region_allocation_base_address <> 0x0
    GROUP BY region_allocation_base_address
  UNION
    SELECT CONVERT(VARBINARY, region_size_in_bytes),
        region_allocation_base_address
    FROM sys.dm_os_virtual_address_dump
    WHERE region_allocation_base_address = 0x0
)
AS VaDump
GROUP BY Size)

SELECT SUM(Size*Free)/1024 AS [Total avail mem, KB],
    CAST(MAX(Size) AS INT)/1024 AS [Max free size, KB]
FROM VASummary
WHERE Free <> 0

Best Practices Analyzer, Multi-Core Processors, and More by Nancy Michell

Saturday, June 02, 2007

Using GC Efficiently

Maoni Stephens's blogs:

Using GC Efficiently – Part 1
Using GC Efficiently – Part 2
What’s new in the CLR 2.0 GC?

Rico Mariani's blog:
Mid-life crisis

Friday, June 01, 2007

GC Performance Counters

The .NET Memory performance counters are updated only when a collection occurs; they are not updated according to the sampling rate used in the Performance Monitor application.

The % Time in GC indicates the percentage of time spent in the GC since the end of the last collection. If you see a really high value (say 50 percent or more), then you should look at what is going on inside the managed heap. If, instead, % Time in GC does not go above 10 percent, it usually is not worth spending time trying to reduce the time the GC spends in collections because the benefits will be marginal.

If you think your application is spending too much time performing garbage collections, the next performance counter to check is Allocated Bytes/sec. This shows the allocation rate. Unfortunately, this counter is not highly accurate when the allocation rate is very low. The counter may indicate 0 bytes/sec if the sampling frequency is higher than the collection frequency since the counter is only updated at the beginning of each collection. This does not mean there were no allocations; rather, the counter was not updated because no collection happened in that particular interval.

If you suspect that you are allocating a lot of large objects (85,000 bytes or bigger), check the Large Object Heap (LOH) size. This is updated at the same time as Allocated Bytes/sec.

A high allocation rate can cause a lot of collection work and as a result might be reflected in a high % Time in GC. A mitigating factor is if objects usually die young, since they will typically be collected during a Gen 0 collection. To determine how object lifecycles impact collection, check the performance counters for Gen collections: # Gen 0 Collections, # Gen 1 Collections, and # Gen 2 Collections. These show the number of collections that have occurred for the respective generation since the process started. Gen 0 and Gen 1 collections are usually cheap, so they don't have a big impact on the application's performance.

As a rule of thumb, a healthy ratio between generation collections is one Gen 2 collection for every ten Gen 1 collections. If you are seeing a lot of time spent in garbage collection, it could be that Gen 2 collections are being done too often.

You may find that % Time in GC is high but the allocation rate is not high. This can happen if many of the objects you allocate survive garbage collection and are promoted to the next generation. The promotion counters—Promoted Memory from Gen 0 and Promoted Memory from Gen 1—can tell you if the promotion rate is an issue. You want to avoid a high promotion rate from Gen 1.

High values for the Gen 1 and Gen 2 heap sizes are usually associated with high values in the promotion rate counters. You can check the GC heap sizes by using Gen 1 heap size and Gen 2 heap size. There is a Gen 0 heap size counter, but it doesn't measure Gen 0 size. Instead, it indicates the budget of Gen 0—meaning how much you can allocate in Gen 0 before the next Gen 0 collection will be triggered.

In all scenarios where you use a lot of objects that need finalization—for instance, objects that rely on COM components for some processing—you can take a look at the Promoted Finalization-Memory from Gen 0 counter. This tells you the amount of memory that cannot be reused because the objects that use the memory need to be added to the finalization queue and therefore cannot be collected immediately. IDisposable and the using statement in C# and Visual Basic® can help reduce the number of objects that end up in the finalization queue, thus reducing the associated cost.

It's also worth checking if the application is inducing too many collections. The # Induced GC counter tells you how many collections have been induced since the process started. In general it is not recommended that you induce GC collections. In most cases, if # Induced GC indicates a high number, you should consider it a bug. Most of the time people induce GCs in hopes of cutting down the heap size, but this is almost never a good idea. Instead, you should find out why your heap is growing.
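The counters discussed above can also be read programmatically rather than through Performance Monitor. A minimal sketch, not from the article: the category and counter names are as they appear in Performance Monitor, while the instance name "MyApp" is a hypothetical process name you would replace with your own.

```csharp
using System;
using System.Diagnostics;

class GcCounterSample
{
    static void Main()
    {
        const string category = ".NET CLR Memory";
        const string instance = "MyApp";   // hypothetical process/instance name

        using (var timeInGc = new PerformanceCounter(category, "% Time in GC", instance))
        using (var induced  = new PerformanceCounter(category, "# Induced GC", instance))
        {
            // Remember: these values update only when a collection occurs,
            // not at the sampling interval.
            Console.WriteLine("% Time in GC: {0}", timeInGc.NextValue());
            Console.WriteLine("# Induced GC: {0}", induced.NextValue());
        }
    }
}
```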

Adapted from CLR Inside Out: Investigating Memory Issues by Claudio Caldato and Maoni Stephens.