boost::mutex vs criticalsection


  1. #1
    l2u
    Registered User
    Join Date
    May 2006
    Posts
    630

    boost::mutex vs criticalsection

    Hello..

    I wonder if criticalsection is faster than boost::mutex? I guess it is, but is there much difference in performance?

  2. #2
    Mario F.
    (?<!re)tired
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,467
    What library are you looking at, other than boost?

    A critical section is the piece of code in your multi-threaded program that must only be accessed by one thread at a time. In other words, any resource that doesn't allow threads to access it concurrently. It is a concept, not a method.

    boost::mutex is a way to implement a locking mechanism for your shared resource(s). The type of mutex and the locking mechanism built around it determine whether your resource is a critical section or not.
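
    For example, a minimal sketch of the usual Boost.Thread pattern (the Counter class and its members are just made up for illustration):

    Code:
    #include <boost/thread/mutex.hpp>

    // Illustrative only: a counter whose value is shared between threads.
    class Counter
    {
        boost::mutex mtx_;   // guards value_
        int value_;
    public:
        Counter() : value_(0) {}

        void increment()
        {
            // scoped_lock acquires the mutex here and releases it automatically
            // when 'lock' goes out of scope (RAII).
            boost::mutex::scoped_lock lock(mtx_);
            ++value_;
        }

        int get()
        {
            boost::mutex::scoped_lock lock(mtx_);
            return value_;
        }
    };
    Whether that makes your resource a critical section is determined by whether every access to it consistently goes through the lock.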
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  3. #3
    vart
    CSharpener
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,484
    Quote Originally Posted by l2u
    Hello..

    I wonder if criticalsection is faster than boost::mutex? I guess it is, but is there much difference in performance?
    It depends on the frequency of conflicts.

    Windows specific:
    -------------------------

    If conflicts occur rarely, the critical section is better.

    If the probability of a conflict is high, the mutex beats the critical section.

    The reason for this is how the OS handles the conflict.

    With a mutex, the thread changes its state immediately and starts waiting for the mutex to be released (changing the thread state itself costs a number of processor cycles).

    With a critical section, the thread first goes to sleep for about 10 processor cycles and then checks the state of the section again.
    If the section is available, the thread continues execution; only if the second check fails is the thread suspended (which, as I said, costs more processor cycles than just a 10-cycle sleep)...

    So when conflicts occur rarely, the critical section may never need to change the thread state, and because of this it beats the mutex.

    When the probability of a conflict is high, the critical section performs exactly like a mutex, but has an additional waiting loop before entering the suspended state that the mutex version does not have.
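
    If it matters for your workload, the length of that waiting loop can be tuned. Just a sketch - the spin count value below is arbitrary, not a recommendation:

    Code:
    #include <windows.h>

    CRITICAL_SECTION g_cs;

    void Init()
    {
        // Spin up to 4000 times in user mode before falling back to the
        // kernel wait. 4000 is only an example value.
        InitializeCriticalSectionAndSpinCount(&g_cs, 4000);
    }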
    The first 90% of a project takes 90% of the time,
    the last 10% takes the other 90% of the time.

  4. #4
    l2u
    Registered User
    Join Date
    May 2006
    Posts
    630
    Quote Originally Posted by Mario F.
    What library are you looking at, other than boost?

    A critical section is the piece of code in your multi-threaded program that must only be accessed by one thread at a time. In other words, any resource that doesn't allow threads to access it concurrently. It is a concept, not a method.

    boost::mutex is a way to implement a locking mechanism for your shared resource(s). The type of mutex and the locking mechanism built around it determine whether your resource is a critical section or not.
    I had in mind the functions InitializeCriticalSection, EnterCriticalSection...
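
    Something like this (just a sketch - the names are made up):

    Code:
    #include <windows.h>

    CRITICAL_SECTION g_cs;   // protects g_value
    int g_value = 0;

    void Init()              // call once, before any thread touches g_value
    {
        InitializeCriticalSection(&g_cs);
    }

    void Increment()
    {
        EnterCriticalSection(&g_cs);   // only one thread at a time gets past this
        ++g_value;
        LeaveCriticalSection(&g_cs);
    }

    void Shutdown()          // call once, after all threads are done
    {
        DeleteCriticalSection(&g_cs);
    }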

  5. #5
    CornedBee
    Cat without Hat
    Join Date
    Apr 2003
    Posts
    8,893
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.

    A critical section only spinlocks on SMP systems, i.e. multi-CPU or multi-core systems. It never "puts the thread to sleep for about 10 cycles", because actually putting the thread to sleep already takes several hundred cycles (it's done in kernel mode), not to mention the several hundred more to decide what thread to execute instead. On SMP systems, on a locked CS, the thread will do busy waiting for a bit, hoping that the other CPU will release the CS. On non-SMP systems, it will immediately block.
    The thing about the CS is that it doesn't switch to kernel mode just to check whether it is locked and to lock. If the CS is free, it uses atomic operations to immediately lock it without ever entering kernel mode. That's what makes it fast.
    Only if it is blocked does it enter kernel mode to sleep until it is released.

    A kernel mutex, on the other hand, enters kernel mode unconditionally, which greatly decreases performance. Because the switch to kernel mode far outweighs whatever other cost the mutex/CS has, the mutex won't be measurably faster even if every single CS acquisition has to block first. The only advantages the mutex has are that it works across process boundaries and that it can be waited for together with other objects using WaitForMultipleObjects or a related function.
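
    For comparison, the kernel mutex route looks something like this (sketch only, names made up) - note the system call on every acquisition, contended or not:

    Code:
    #include <windows.h>

    HANDLE g_mutex;   // created once elsewhere: g_mutex = CreateMutex(NULL, FALSE, NULL);
    int g_value = 0;

    void Increment()
    {
        // User-to-kernel transition here even when the mutex is free.
        WaitForSingleObject(g_mutex, INFINITE);
        ++g_value;
        ReleaseMutex(g_mutex);
    }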

    I believe Boost.Thread's mutex is implemented using a CRITICAL_SECTION on Windows, so it would be nearly as fast - the difference would be negligible.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  6. #6
    l2u
    Registered User
    Join Date
    May 2006
    Posts
    630
    Quote Originally Posted by CornedBee
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.
    Thanks for that great explanation.

  7. #7
    vart
    CSharpener
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,484
    Quote Originally Posted by CornedBee
    I'm not sure about what Vart said. In my opinion, the CRITICAL_SECTION is always faster than a kernel Mutex.

    Only if it is blocked does it enter kernel mode to sleep until it is released.
    So think about what will happen when conflicts occur so often that the probability of entering kernel mode is the same as it is for the mutex...
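
    The only way to settle it for a given workload is to measure it. A rough starting point (illustrative only - it times the uncontended case, and a real comparison would also need contending threads):

    Code:
    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        const int N = 1000000;              // arbitrary iteration count
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        CRITICAL_SECTION cs;
        InitializeCriticalSection(&cs);
        QueryPerformanceCounter(&t0);
        for (int i = 0; i < N; ++i)
        {
            EnterCriticalSection(&cs);
            LeaveCriticalSection(&cs);
        }
        QueryPerformanceCounter(&t1);
        printf("CRITICAL_SECTION: %f s\n",
               (t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
        DeleteCriticalSection(&cs);

        HANDLE mtx = CreateMutex(NULL, FALSE, NULL);
        QueryPerformanceCounter(&t0);
        for (int i = 0; i < N; ++i)
        {
            WaitForSingleObject(mtx, INFINITE);
            ReleaseMutex(mtx);
        }
        QueryPerformanceCounter(&t1);
        printf("Kernel mutex:     %f s\n",
               (t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
        CloseHandle(mtx);

        return 0;
    }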
    The first 90% of a project takes 90% of the time,
    the last 10% takes the other 90% of the time.
