Thread: boost::mutex vs criticalsection

  1. #1
    Registered User
    Join Date
    May 2006
    Posts
    630

    boost::mutex vs criticalsection

    Hello..

    I wonder if a CRITICAL_SECTION is faster than boost::mutex? I guess it is, but is there much difference in performance?

  2. #2
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    What library are you looking at, other than boost?

    A critical section is the piece of code in your multi-threaded program that must only be executed by one thread at a time. In other words, any resource that doesn't allow threads to access it concurrently. It is a concept, not a method.

    Boost::Mutex is a way to implement a locking mechanism for your shared resource(s). The type of mutex and the locking mechanism created around it will determine whether your resource is a critical section or not.
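
    For illustration, a minimal sketch of guarding a shared resource with boost::mutex using the scoped_lock idiom (the counter and the worker function are made up for the example):

    Code:
        #include <boost/thread/thread.hpp>
        #include <boost/thread/mutex.hpp>

        int g_counter = 0;            // hypothetical shared resource
        boost::mutex g_counterMutex;  // guards g_counter

        void worker()
        {
            for (int i = 0; i < 100000; ++i)
            {
                // The lock is acquired here and released automatically when
                // 'lock' goes out of scope; the increment is the critical section.
                boost::mutex::scoped_lock lock(g_counterMutex);
                ++g_counter;
            }
        }

        int main()
        {
            boost::thread t1(worker);
            boost::thread t2(worker);
            t1.join();
            t2.join();
        }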
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  3. #3
    Hurry Slowly vart's Avatar
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,788
    Quote Originally Posted by l2u
    Hello..

    I wonder if a CRITICAL_SECTION is faster than boost::mutex? I guess it is, but is there much difference in performance?
    It depends on how often lock conflicts occur.

    Windows specific:
    -------------------------

    If conflicts occur rarely - the critical section is better.

    If the probability of a conflict is high - the mutex beats the critical section.

    The reason for this is how the OS processes the conflict.

    With a mutex - the thread changes its state immediately and starts waiting for the mutex to be released (changing the state takes several processor cycles).

    With a critical section - the thread first goes to sleep for about 10 processor cycles and then checks the state of the section again.
    If the section is available - the thread continues execution; if the second check fails - only then is the thread suspended (which, as I said, takes more processor cycles than just a 10-cycle sleep)...

    So when conflicts occur rarely - the critical section may never require changing the thread state, and because of this - it beats the mutex.

    When the probability of a conflict is high, the critical section performs exactly like the mutex, but has an additional waiting loop before entering the suspended state, which the mutex version does not have.
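
    Windows also lets you tune that waiting loop explicitly. A minimal sketch, assuming the Win32 API (the spin count of 4000 is just an example value, not a recommendation):

    Code:
        #include <windows.h>

        CRITICAL_SECTION g_cs;

        int main()
        {
            // Spin up to ~4000 iterations on contention before falling back to a
            // kernel wait; on a single-processor machine the spin count is ignored
            // and the thread blocks immediately.
            InitializeCriticalSectionAndSpinCount(&g_cs, 4000);

            // ... EnterCriticalSection / LeaveCriticalSection as usual ...

            DeleteCriticalSection(&g_cs);
            return 0;
        }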
    All problems in computer science can be solved by another level of indirection,
    except for the problem of too many layers of indirection.
    – David J. Wheeler

  4. #4
    Registered User
    Join Date
    May 2006
    Posts
    630
    Quote Originally Posted by Mario F.
    What library are you looking at, other than boost?

    A critical section is the piece of code in your multi-threaded program that must only be executed by one thread at a time. In other words, any resource that doesn't allow threads to access it concurrently. It is a concept, not a method.

    Boost::Mutex is a way to implement a locking mechanism for your shared resource(s). The type of mutex and the locking mechanism created around it will determine whether your resource is a critical section or not.
    I had in mind function(s) InitializeCriticalSection, EnterCriticalSection..
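
    In that case, the usual lifecycle looks roughly like this (error handling omitted; the shared variable is just for illustration):

    Code:
        #include <windows.h>

        CRITICAL_SECTION g_cs;   // protects g_shared
        int g_shared = 0;

        int main()
        {
            InitializeCriticalSection(&g_cs);

            EnterCriticalSection(&g_cs);
            ++g_shared;                     // the protected ("critical") region
            LeaveCriticalSection(&g_cs);

            DeleteCriticalSection(&g_cs);
            return 0;
        }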

  5. #5
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.

    A critical section only spinlocks on SMP systems, i.e. multi-CPU or multi-core systems. It never "puts the thread to sleep for about 10 cycles", because actually putting the thread to sleep already takes several hundred cycles (it's done in kernel mode), not to mention the several hundred more to decide what thread to execute instead. On SMP systems, on a locked CS, the thread will do busy waiting for a bit, hoping that the other CPU will release the CS. On non-SMP systems, it will immediately block.
    The thing about the CS is that it doesn't switch to kernel mode just to check whether it is locked and to lock. If the CS is free, it uses atomic operations to immediately lock it without ever entering kernel mode. That's what makes it fast.
    Only if it is blocked does it enter kernel mode to sleep until it is released.
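
    A very rough conceptual sketch of that fast path - not how CRITICAL_SECTION is actually implemented, just an illustration of taking a lock with an atomic compare-and-swap in user mode:

    Code:
        #include <windows.h>

        volatile LONG g_lockFlag = 0;   // 0 = free, 1 = taken (illustrative only)

        void lock_fast_path()
        {
            // Try to flip 0 -> 1 atomically. If the lock was free, we own it now
            // without ever entering kernel mode.
            while (InterlockedCompareExchange(&g_lockFlag, 1, 0) != 0)
            {
                // Contended: a real CS would spin a bit (on SMP) and then arrange
                // a kernel wait; this sketch just busy-waits for simplicity.
            }
        }

        void unlock_fast_path()
        {
            InterlockedExchange(&g_lockFlag, 0);   // release
        }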

    A kernel Mutex, on the other hand, enters kernel mode unconditionally. This greatly decreases performance. Because the switch to kernel mode far outweighs whatever other cost the Mutex/CS has, Mutex won't be measurably faster even if every single acquisition blocks first. The only advantage the mutex has is that it works across process boundaries. And can be waited for together with other objects using WaitForMultipleObjects or a related function.
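
    For comparison, a minimal sketch of a kernel Mutex - every one of these calls crosses into kernel mode, and because the object can be named, another process can open and wait on the same mutex (the name here is made up for the example):

    Code:
        #include <windows.h>

        int main()
        {
            // Named kernel object; visible across process boundaries.
            HANDLE hMutex = CreateMutex(NULL, FALSE, TEXT("Global\\ExampleMutex"));
            if (hMutex == NULL)
                return 1;

            // Blocks in the kernel until the mutex is acquired.
            WaitForSingleObject(hMutex, INFINITE);
            // ... protected work ...
            ReleaseMutex(hMutex);

            CloseHandle(hMutex);
            return 0;
        }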

    I believe Boost.Threads.Mutex is implemented using a CRITICAL_SECTION, so it would be nearly as fast - the difference would be negligible.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  6. #6
    Registered User
    Join Date
    May 2006
    Posts
    630
    Quote Originally Posted by CornedBee
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.

    A critical section only spinlocks on SMP systems, i.e. multi-CPU or multi-core systems. It never "puts the thread to sleep for about 10 cycles", because actually putting the thread to sleep already takes several hundred cycles (it's done in kernel mode), not to mention the several hundred more to decide what thread to execute instead. On SMP systems, on a locked CS, the thread will do busy waiting for a bit, hoping that the other CPU will release the CS. On non-SMP systems, it will immediately block.
    The thing about the CS is that it doesn't switch to kernel mode just to check whether it is locked and to lock. If the CS is free, it uses atomic operations to immediately lock it without ever entering kernel mode. That's what makes it fast.
    Only if it is blocked does it enter kernel mode to sleep until it is released.

    A kernel Mutex, on the other hand, enters kernel mode unconditionally. This greatly decreases performance. Because the switch to kernel mode far outweighs whatever other cost the Mutex/CS has, Mutex won't be measurably faster even if every single acquisition blocks first. The only advantage the mutex has is that it works across process boundaries. And can be waited for together with other objects using WaitForMultipleObjects or a related function.

    I believe Boost.Threads.Mutex is implemented using a CRITICAL_SECTION, so it would be nearly as fast - the difference would be negligible.
    Thanks for that great explanation.

  7. #7
    Hurry Slowly vart's Avatar
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,788
    Quote Originally Posted by CornedBee
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.

    Only if it is blocked does it enter kernel mode to sleep until it is released.
    So think about what happens when conflicts occur so often that the probability of entering kernel mode is the same as for the mutex...
    All problems in computer science can be solved by another level of indirection,
    except for the problem of too many layers of indirection.
    – David J. Wheeler
