C++ shared mutex

boost::shared_mutex or std::shared_mutex (C++17) can be used for single-writer, multiple-reader access. As an educational exercise, I put together a simple implementation that uses spinlocking and has other limitations (e.g. no fairness policy); it is obviously not intended to be used in real applications.

The idea is that the mutex keeps a reference count that is zero if no thread holds the lock. If > 0, the value represents the number of readers that have access. If -1, a single writer has access.

Is this a correct implementation (in particular with the minimal memory orderings used), free of data races?

#include <atomic>

class my_shared_mutex {
    std::atomic<int> refcount{0};
public:

    void lock() // write lock
    {
        int val;
        do {
            val = 0; // Can only take a write lock when refcount == 0

        } while (!refcount.compare_exchange_weak(val, -1, std::memory_order_acquire));
        // can memory_order_relaxed be used if only a single thread takes write locks?
    }

    void unlock() // write unlock
    {
        refcount.store(0, std::memory_order_release);
    }

    void lock_shared() // read lock
    {
        int val;
        do {
            do {
                val = refcount.load(std::memory_order_relaxed);

            } while (val == -1); // spinning until the write lock is released

        } while (!refcount.compare_exchange_weak(val, val+1, std::memory_order_acquire));
    }

    void unlock_shared() // read unlock
    {
        refcount.fetch_sub(1, std::memory_order_relaxed);
    }
};

(I'm using cmpxchg as shorthand for the C++ compare_exchange_weak function, not the x86 cmpxchg instruction).

lock_shared definitely looks good: spinning on a plain read and only attempting a cmpxchg when the value looks available is much better for performance than spinning on cmpxchg. Although I think you were forced into that for correctness anyway, to avoid changing a -1 to 0 and thereby unlocking a write lock.
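To make that correctness point concrete, here is a hypothetical, deliberately broken member (my own illustration, not part of the original class) that increments unconditionally instead of spinning on a read first:

    void lock_shared_broken() // illustration only, do NOT use
    {
        // An unconditional increment turns a writer's -1 into 0, the
        // "unlocked" state: the write lock is silently released while the
        // writer still believes it has exclusive access. Checking the old
        // value afterwards is too late, which is why the real lock_shared
        // spins on a plain load and only attempts the cmpxchg once it has
        // seen refcount != -1.
        int old = refcount.fetch_add(1, std::memory_order_acquire);
        (void)old; // old == -1 would mean we just clobbered a write lock
    }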

I think unlock_shared should use mo_release, not mo_relaxed, since it needs to order the loads from the shared data structure to make sure a writer doesn't start writing before the loads from the reader's critical section have happened. (LoadStore reordering is a thing on weakly-ordered architectures, even though x86 only does StoreLoad reordering.) A release operation will order preceding loads and keep them inside the critical section.
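A minimal sketch of unlock_shared with that change applied (same member as above, only the memory order changes):

    void unlock_shared() // read unlock
    {
        // release: the loads done inside this reader's critical section are
        // ordered before this store, so a writer that later acquires the lock
        // (and synchronizes with this store) cannot overlap with them
        refcount.fetch_sub(1, std::memory_order_release);
    }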


(In lock(), the write lock): // can memory_order_relaxed be used if only a single thread takes write locks?

No, you still need to keep the writes inside the critical section, so the cmpxchg still needs to synchronize-with (in C++ terminology) the release-stores from unlock_shared. The ordering that matters here is between the writer and the readers, not between two writers, so having only one writer thread doesn't remove the need for acquire.
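To illustrate how the orderings pair up, here is a small usage sketch (my own example, assuming the class above with the two fixes discussed: release in unlock_shared, acquire kept on the cmpxchg in lock):

    #include <thread>
    #include <cassert>

    my_shared_mutex m;
    int data = 0; // protected by m

    void reader()
    {
        for (int i = 0; i < 100000; ++i) {
            m.lock_shared();
            int a = data;      // critical-section loads ...
            int b = data;
            assert(a == b);    // ... must complete before any writer's stores
            m.unlock_shared(); // release-store: publishes "this reader is done"
        }
    }

    void writer()
    {
        for (int i = 0; i < 100000; ++i) {
            m.lock();   // acquire cmpxchg synchronizes-with unlock_shared
            ++data;     // so this store cannot race with the readers' loads
            m.unlock(); // release-store pairs with lock_shared's acquire cmpxchg
        }
    }

    int main()
    {
        std::thread r(reader), w(writer);
        r.join();
        w.join();
    }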
