Deciding the critical section of kernel code

Hi, I am writing kernel code that does process scheduling and multi-threaded execution. I've studied locking mechanisms and how they work. Is there a rule of thumb for deciding which data structures in a critical section should be protected by locking (mutexes/semaphores/spinlocks)?

I know that wherever there is a chance of concurrency in a piece of code, we need a lock. But how do we decide? What if we miss one and the test cases don't catch it? Earlier I wrote code for system calls and file systems where I never cared about taking locks.


Is there a rule of thumb regarding what sort of data structure in a critical section should be protected by locking?

Any object (a global variable, a field of a structure, etc.) that is accessed concurrently, where at least one of the accesses is a write, requires some locking discipline.

But how do we decide? What if we miss one and the test cases don't catch it?

Good practice is to add an appropriate comment to every declaration of a variable, structure, or structure field that requires a locking discipline for access. Anyone who uses that variable reads the comment and writes the corresponding access code. The kernel core and its modules tend to follow this strategy.

As for testing, ordinary testing rarely reveals concurrency issues because of their low probability. When testing kernel modules, I would advise using KernelStrider, which attempts to prove the correctness of concurrent memory accesses, or RaceHound, which increases the probability of concurrency issues occurring and checks for them.


It is always safe to grab a lock for the duration of any code that accesses shared data, but this is slow, since it means only one thread at a time can run significant chunks of code.

Depending on the data in question, though, there may be shortcuts that are both safe and fast. If it is a simple integer (and by integer I mean the native word size of the CPU, i.e. not a 64-bit value on a 32-bit CPU), then you may not need any locking: if one thread writes to the integer while another reads it, the reader will get either the old value or the new value, never a mix of the two. If the reader doesn't care that it may get the old value, then there is no need for a lock.

If, however, you are updating two integers together, and it would be bad for the reader to get the new value of one and the old value of the other, then you need a lock. Another example is a thread incrementing an integer. That normally involves a read, an add, and a write. If one thread reads the old value, then a second thread reads, adds, and writes the new value, and then the first thread adds and writes its new value, both threads believe they have incremented the variable, but instead of being incremented twice, it was incremented only once. This needs either a lock or an atomic increment primitive to ensure that the read/modify/write cycle cannot be interrupted. There are also atomic compare-and-swap primitives, so you can read a value, do some math on it, and then try to write it back, where the write succeeds only if the variable still holds the original value. That is, if another thread changed it since you read it, the compare-and-swap fails; you then discard your new value and start over, reading the value the other thread set and attempting the compare-and-swap again.

Pointers are really just integers, so if you set up a data structure and then store a pointer to it where another thread can find it, you don't need a lock, as long as you fully set up the structure before storing its address in the pointer. Another thread reading the pointer (it must be careful to read the pointer only once, i.e. store it in a local variable and use only that to refer to the structure from then on) will see either the new structure or the old one, but never an intermediate state. If most threads only read the structure via the pointer, and any that want to write do so either under a lock or with an atomic compare-and-swap of the pointer, this is sufficient. Any time you want to modify a member of the structure, though, you have to copy it to a new one, change the copy, and then update the pointer. This is essentially how the kernel's RCU (read-copy-update) mechanism works.


Ideally, you should enumerate all the resources in your system, the related threads, and the communication and sharing mechanisms during design. Determining the following for every resource, and maintaining a proper checklist whenever a change is made, can be of great help:

  • The duration for which the resource will be busy (utilization of the resource) and the type of lock
  • The number of tasks queued on that particular resource (load) and their priority
  • The type of communication and sharing mechanism related to the resource
  • Error conditions related to the resource
If possible, it is better to have a flow diagram depicting the resources, their utilization, locks, load, communication/sharing mechanisms, and errors.

This process can help you determine the missing scenarios/unknowns and critical sections, and also identify bottlenecks.

On top of the above process, you may also need certain tools that can help with testing and further analysis to rule out any hidden problems:

  • Helgrind, a Valgrind tool for detecting synchronisation errors. It can help identify data races and synchronization issues due to improper locking, lock orderings that can cause deadlocks, and improper POSIX thread API usage that can have later impacts. Refer: http://valgrind.org/docs/manual/hg-manual.html
  • Locksmith, for determining common lock errors that may arise at runtime or that may cause deadlocks.
  • ThreadSanitizer, for detecting race conditions. It displays all the memory accesses and locks involved.
  • Sparse can list the locks acquired and released by a function, and can also identify issues such as mixing pointers to user address space with pointers to kernel address space.
  • Lockdep, the kernel's runtime locking correctness validator, for debugging lock usage and detecting potential deadlocks.
  • iotop, for determining the current I/O usage of processes or threads on the system, by monitoring the I/O usage information reported by the kernel.
  • LTTng, for tracing race conditions and possible interrupt cascades (a successor to LTT, combining kprobes, tracepoint, and perf functionality).
  • Ftrace, a Linux kernel internal tracer for analysing and debugging latency and performance issues.
  • lsof and fuser can be handy in determining which processes hold locks and the kinds of locks held.
  • Profiling can help determine where exactly the kernel is spending its time. This can be done with tools such as perf and OProfile. strace can intercept and record the system calls made by a process and the signals it receives, showing the order of events and all the return/resumption paths of the calls.
