Mutex implementation and signaling

When a mutex is already locked by T1, and T2 tries to lock it, what is the process for T2?

I think it goes something like this (rough code sketch below the list):

- T2 tries to lock, fails, maybe spins a bit, then calls yield...
- T2 is scheduled for execution a couple of times, tries to lock, fails, yields...
- Eventually T1 unlocks, T2 is scheduled for execution and manages to lock the mutex...
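
In code, my mental model is roughly the following - just a sketch of what I imagine, using pthread_mutex_trylock and sched_yield, not a claim about how any real mutex is implemented:

```c
#include <pthread.h>
#include <sched.h>

/* What I imagine T2 doing: spin briefly on trylock, then yield and retry. */
void t2_lock(pthread_mutex_t *m)
{
    for (;;) {
        for (int i = 0; i < 100; i++)       /* short spin */
            if (pthread_mutex_trylock(m) == 0)
                return;                     /* got the mutex */
        sched_yield();                      /* give up the CPU, try again later */
    }
}
```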

Does T1 unlocking explicitly signal to the scheduler or other threads that the mutex is unlocked? Or does it just unlock and leave the scheduler to schedule blocked threads whenever it sees fit (i.e. the scheduler has no notion of blocked threads and doesn't treat them as special)?


In short: yes, perhaps...

This is an implementation detail, and it's rather difficult to say without at the very least knowing which operating system you are talking about. Typically, unlocking a mutex will only mark the waiting thread as "runnable", but not (necessarily) invoke the scheduler at that time - and even if the scheduler is called, it doesn't mean that T2 will be the next thread to run.

In Linux, the code enters mutex_unlock(), which checks whether there is any waiting task (by checking if the lock count is less than zero - it starts at 1 for unlocked, a single lock request takes it to zero, and any further attempt to lock makes it negative). If there is a waiting task, it takes the "slow path unlock", which, via a couple of redirection functions that hide implementation details, ends up in __mutex_unlock_common_slowpath. A few lines further down there is a call to wake_up_process, which eventually ends up in try_to_wake_up - and that essentially just queues the task as "ready to run" and then calls the scheduler (via several layers of functions!).
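
The same fast-path/slow-path split can be sketched in userspace with a futex. To be clear, this is not the kernel's mutex code (which uses the counter described above); it's a minimal illustration, in the style of Drepper's "Futexes Are Tricky", of the key point: unlock only asks the kernel to wake a waiter when the state says someone may actually be blocked, and the woken thread is merely made runnable.

```c
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative sketch only.
 * State: 0 = unlocked, 1 = locked, 2 = locked with (possible) waiters. */
static long futex(atomic_int *addr, int op, int val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

void sketch_lock(atomic_int *m)
{
    int c = 0;
    /* Fast path: 0 -> 1 with one atomic compare-and-swap, no syscall. */
    if (atomic_compare_exchange_strong(m, &c, 1))
        return;

    /* Slow path: advertise contention (state 2), then sleep in the kernel. */
    if (c != 2)
        c = atomic_exchange(m, 2);
    while (c != 0) {
        futex(m, FUTEX_WAIT, 2);        /* blocks only while *m is still 2 */
        c = atomic_exchange(m, 2);
    }
}

void sketch_unlock(atomic_int *m)
{
    /* Previous state 1: nobody waiting, just store 0 and skip the syscall.
     * Previous state 2: ask the kernel to wake one waiter.  The kernel marks
     * that thread runnable; when it actually runs is the scheduler's call. */
    if (atomic_exchange(m, 0) != 1)
        futex(m, FUTEX_WAKE, 1);
}
```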


It depends on your operating system. I've seen just spinning, spinning with yield, general purpose condition variables in the kernel, userland controlled scheduling, and specialized locking primitives with kernel support.

Spinning and spinning with yield have terrible performance. Theoretically, userland controlled scheduling (see Scheduler activations) should have the best performance, but as far as I know no one has ever made it work properly in all cases. General purpose condition variables in the kernel and specialized locking primitives with kernel support should perform more or less the same, with futex in Linux being the best example of the latter.

There are situations where spinning can have better performance. In Solaris, some locking primitives in the kernel have an adaptive mode where the lock spins as long as the lock holder is running on a different CPU; if the lock owner sleeps or gets preempted, the waiter also goes to sleep. In other kernels there are classes of locks where the lock owner can't be preempted or sleep while holding the lock, so in those cases spinning works well too. In general though, especially in userland, spinning has such horrible degenerate cases (the spinning thread spins until it gets preempted, and only then does the lock owner get to run) that it's very bad for performance. Notice that specialized locking primitives like futex can implement optimizations like this, something that general purpose condition variables normally can't.
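
Userland code can't easily tell whether the current lock owner is running on another CPU, but the flavor of that adaptive approach can be sketched as a bounded spin that falls back to blocking. This is a simplified illustration, not Solaris's actual adaptive lock:

```c
#include <pthread.h>

/* Simplified "adaptive" acquire: spin a bounded number of times hoping the
 * owner (presumably running on another CPU) releases soon, then block. */
void adaptive_lock(pthread_mutex_t *m)
{
    for (int i = 0; i < 1000; i++)
        if (pthread_mutex_trylock(m) == 0)
            return;                 /* acquired without sleeping */
    pthread_mutex_lock(m);          /* still contended: block in the kernel */
}
```

glibc's PTHREAD_MUTEX_ADAPTIVE_NP mutex type does something along these lines internally.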


Say we have the following scenario:

 1. T1 got M1. M1 locked.
 2. T2 tries to get M1 and gets blocked as M1 is locked.
 3. T3 tries to get M1 and gets blocked as M1 is locked.
 4. ...some time later...
 5. T1 unlocks M1.*
 6. T2 got M1.
 7. T3 is unblocked and tries to get M1 but is blocked again as T2 got M1 first.

*The unlock call should notify the tasks/processes/threads that are blocked on the mutex's lock call. They are then scheduled to execute - which does not mean they run immediately, as something else may already be executing. As others state, how that is done depends on the implementation. If you really want to learn this well I'd recommend this book
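
A small pthread demo of the scenario above (thread names and sleep durations are just for illustration): three threads contend for one mutex, and which blocked thread gets it after the holder unlocks is up to the implementation and the scheduler, not the order in which they blocked.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    const char *name = arg;
    pthread_mutex_lock(&m1);       /* T2 and T3 block here while T1 holds M1 */
    printf("%s got M1\n", name);
    usleep(100 * 1000);            /* hold the mutex briefly */
    pthread_mutex_unlock(&m1);     /* wake a waiter (if any); the scheduler decides when it runs */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, worker, "T1");
    usleep(10 * 1000);             /* give T1 a head start so it grabs M1 first */
    pthread_create(&t2, NULL, worker, "T2");
    pthread_create(&t3, NULL, worker, "T3");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    /* The order of the "got M1" lines for T2 and T3 is not guaranteed. */
    return 0;
}
```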
