Concurrency: What Every Dev Must Know About Multithreaded Apps
I just finished reading Vance Morrison's article Concurrency: What Every Dev Must Know About Multithreaded Apps. It's excellent. Here are my notes.
Use the lock statement (which releases the lock even when an exception escapes its body) only if:
- No cleanup is needed in its body
- Everything its body does is read-only
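The article is about .NET's lock statement, but the same hazard exists in Java, whose synchronized block also releases the lock when an exception escapes. A sketch with an illustrative Account class (the class, fields, and invariant are mine, not from the article): if an exception fires between the two writes, the lock is released while the invariant (checking + savings stays constant) is broken, and other threads can observe the half-finished update.

```java
// Illustrative only: shows why automatic unlock-on-exception is unsafe
// when the lock body performs a multi-step update with no cleanup.
class Account {
    private final Object lock = new Object();
    private int checking = 100;
    private int savings = 0;

    void transfer(int amount) {
        synchronized (lock) {          // like C#'s lock statement
            checking -= amount;        // first half of the update
            if (amount < 0)
                throw new IllegalArgumentException("amount must be non-negative");
            savings += amount;         // second half never runs on exception
        }
        // On exception, the lock is released here with the invariant broken.
    }

    int total() { return checking + savings; } // should always be 100
}
```

A safer body would either validate before touching any state or restore the invariant in a finally block before the lock is released.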
Rules and good practices for using locks properly:
- For a lock to provide mutual exclusion for a region of memory, no writes to that memory can occur without entering the same lock. The programmer should document it!
- Check whether any of the regions protected by different locks overlap.
- It is best to have few locks that protect large regions of memory and only split them when lock contention is shown to be a bottleneck on performance.
- Assume that everything changes unless locks are used to prevent it.
- The general rule should be that all program memory should fall into one of three buckets: thread exclusive, read-only, or lock protected, with one exception as follows.
- Finally, in certain specialized cases where the program invariant is relatively weak, it is possible to perform updates that can be done without locks. Typically, specialized compare-and-exchange instructions are used. These techniques are best thought of as special, lightweight implementations of locks.
- Most reusable code (like container classes) should not have locks built into it because it can only protect itself, and it is likely that whatever code uses it will need a stronger lock anyway. The exception to this rule is when it is important that the code works even with high-level program errors. The global memory heap and security-sensitive code are examples of exceptional cases.
- In cases where threads read shared structures frequently, but write to them infrequently, reader-writer locks such as System.Threading.ReaderWriterLock can be used to keep the number of locks in the system low.
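The compare-and-exchange point above can be sketched in Java with AtomicInteger (the article discusses the .NET equivalent; this counter class is illustrative). The classic pattern is a retry loop: read the current value, compute the new one, and attempt to publish it atomically, retrying if another thread got there first.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A lock-free update via compare-and-exchange: in effect a special,
// lightweight implementation of a lock, as the note above puts it.
class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            // Succeeds only if no other thread changed value in between;
            // otherwise we loop and retry with the fresh value.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    int get() { return value.get(); }
}
```

This only works because the invariant is weak: each increment is a single independent update, so no multi-step region needs protection.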
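The reader-writer point maps directly onto Java's ReentrantReadWriteLock (the article names .NET's System.Threading.ReaderWriterLock; this table class and its methods are illustrative). Many readers may hold the read lock at once; a writer takes the exclusive write lock only for the infrequent updates.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-mostly shared structure: one reader-writer lock instead of
// many fine-grained locks keeps the lock count in the system low.
class SharedTable {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private final Map<String, Integer> map = new HashMap<>();

    Integer lookup(String key) {            // frequent: readers run concurrently
        rw.readLock().lock();
        try { return map.get(key); }
        finally { rw.readLock().unlock(); }
    }

    void update(String key, int value) {    // infrequent: exclusive access
        rw.writeLock().lock();
        try { map.put(key, value); }
        finally { rw.writeLock().unlock(); }
    }
}
```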
Deadlocks:
Deadlocks are generally prevented in one of two ways. The first (and best) way to prevent deadlock is to have few enough locks in the system that it is never necessary to take more than one lock at a time. If this is impossible, deadlock can also be prevented by having a convention on the order in which locks are taken. Under this convention, each lock in the system is assigned a "level", and the program is designed so that threads always take locks only in strictly descending order by level.
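A minimal sketch of that leveling convention in Java (the ThreadLocal bookkeeping and class names are mine, not from the article): each lock records its level, and acquiring a lock whose level is not strictly below the lowest level the thread already holds fails fast instead of risking deadlock.

```java
// Illustrative lock-leveling checker. A real implementation would also
// enter/exit an actual monitor and track a stack of held levels.
class LeveledLock {
    private static final ThreadLocal<Integer> lowestHeld =
        ThreadLocal.withInitial(() -> Integer.MAX_VALUE);

    private final int level;

    LeveledLock(int level) { this.level = level; }

    // Returns the previous lowest level so release() can restore it.
    int acquire() {
        int prev = lowestHeld.get();
        if (level >= prev)
            throw new IllegalStateException(
                "lock-order violation: taking level " + level
                + " while holding level " + prev);
        lowestHeld.set(level);
        return prev;
    }

    void release(int prev) {
        lowestHeld.set(prev);
    }
}
```

Because every thread is forced through the same descending order, no two threads can each hold a lock the other is waiting for.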
The cost of locks:
Another reason to avoid having many locks in a system is the cost of entering and leaving a lock. The lightest locks use a special compare/exchange instruction to check whether the lock is taken, and if it isn't, they enter the lock in a single atomic action. Unfortunately, this special instruction is relatively expensive (typically ten to hundreds of times longer than an ordinary instruction). There are two main reasons for this expense, and they both have to do with issues arising on a true multiprocessor system.
The first reason is that the compare/exchange instruction must ensure that no other processor is also trying to do the same thing. Fundamentally, this requires one processor to coordinate with all other processors in the system. This is a slow operation and accounts for the lower bound of the cost of a lock (dozens of cycles). The other reason for the expense is the effect interprocessor communication has on the memory system. After a lock has been taken, the program is very likely to access memory that may have been recently modified by another thread. If this thread ran on another processor, it is necessary to ensure that all pending writes on all other processors have been flushed so that the current thread sees the updates. The cost of doing this largely depends on how the memory system works and how many writes need to be flushed. This cost can be pretty high in the worst case (possibly hundreds of cycles or more).
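The "single atomic action" that the lightest locks are built on can be sketched in Java with AtomicBoolean (a toy spin lock of my own; real locks fall back to blocking rather than spinning under contention, and the compareAndSet call is exactly the expensive compare/exchange instruction discussed above).

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock: entering the lock is one compare-and-exchange.
class SpinLock {
    private final AtomicBoolean taken = new AtomicBoolean(false);

    void enter() {
        // compareAndSet(false, true) succeeds for exactly one thread;
        // everyone else loops until the holder releases.
        while (!taken.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU while we wait
        }
    }

    void exit() {
        taken.set(false);          // a plain write releases the lock
    }

    boolean isHeld() { return taken.get(); }
}
```

Even when the lock is uncontended, every enter() pays the full cost of the compare/exchange, which is why the text recommends keeping the total number of lock acquisitions low.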