Thread: synchronization and volatile

  1. #16
    C++まいる!Cをこわせ!
    Join Date: Oct 2007
    Location: Inside my computer
    Posts: 24,654
    What happened to edit and multi-quote? I think you've just broken a record here - no less than 5 posts in a row!
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  2. #17
    Kernel hacker
    Join Date: Jul 2007
    Location: Farncombe, Surrey, England
    Posts: 15,677
    Following the discussion here, I'd say:
    1. volatile has its uses, but it doesn't GUARANTEE access safety between threads, since the processor may still re-order memory accesses without the compiler's knowledge [and the only way to avoid that is to use processor-specific memory barrier instructions]. So unless the compiler ALSO issues a memory barrier when accessing a volatile variable [1], there is no guarantee that the value written is actually visible to another thread before some other data has been read by the current thread.

    2. If you use locks properly, you shouldn't need volatile, and using locks is the only way to prevent problems with threads.

    [1] I'm not aware of any compiler that does this. In the implementation of the Linux lock mechanisms, explicit memory barrier instructions are added so that the order of reads and writes is forced. This is "expensive", because it means that a write operation needs to complete before the next read operation, so we don't really want to scatter memory barrier instructions all over the code unless they are actually meaningful in that place.
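
    To give a flavour of what such a barrier looks like, here is a rough sketch of publishing a value to another thread with an explicit full barrier. It uses the GCC-specific __sync_synchronize() builtin and is an illustration only, not the actual Linux code:
    Code:
    /* GCC-specific sketch: publish data to another thread with an explicit
       full memory barrier. Illustration only, not the Linux kernel code. */
    int shared_data;
    volatile int ready;
    
    void publish(int value)
    {
        shared_data = value;
        __sync_synchronize();   /* full barrier: the store above must be
                                   visible before the flag below is set */
        ready = 1;
    }
    Without the barrier, the compiler or processor is free to make the write to ready visible before the write to shared_data, which is exactly the re-ordering problem described in point 1.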

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #18
    Registered User
    Join Date: May 2006
    Posts: 1,579
    Hi Codeplug,


    Sorry to bother you again about this topic. :-)

    I have taken some time to read through,

    http://www.hpl.hp.com/personal/Hans_.../user-faq.html

    as you recommended.

    It seems there is no need to add volatile to a thread-shared variable even if it is not accessed in a synchronized code segment. Is my understanding correct? (See the section quoted from that link below.)

    But what does "Data races on volatiles are disallowed along with other races." mean? It sounds as if we still need to add volatile to thread-shared variables. I am confused and doubt the consistency of this section. Please feel free to correct me if I am wrong. :-)

    --------------------
    So should I use volatile to identify variables modified by another thread?
    For C++0x, no. Currently, the official answer for pthreads is also no. Data races on volatiles are disallowed along with other races. However, it appears to be the only currently available standard mechanism for notifying the compiler that an asynchronous change is possible, and thus some optimizations, such as the one on the switch statement above, are unsafe. Thus in cases in which a race cannot be avoided due to the absence of a suitable atomic operations, and locks are too slow, it seems by far the safest option.
    If you do use volatile remember that its detailed semantics vary dramatically across platforms. On some platforms, two consecutive volatile stores may become visible out of order to two consecutive volatile loads in another thread. On other platforms, that is explicitly prevented. Thus platform-dependent mechanisms for memory ordering are usually also needed. (The atomic_ops package uses volatile in platform-dependent ways internally, but adds fences to enforce requested memory ordering.)

    It is also important to remember that volatile updates are not necessarily atomic. They may appear to be carried out piecemeal. Whether or not they actually are atomic depends on the platform and alignment.
    --------------------



    Quote Originally Posted by Codeplug View Post
    >> When you start turning to articles entitled "Volatile: Multithreaded Programmers Best Friend" ...
    In regards to that article, one should read more than just the title before assuming that volatile has anything to do with MT programming.
    From: http://groups.google.com/group/comp....6be8f0b18bd62d
    I agree with post 54 as well, in regards to that article.

    >> There is nothing saying that the compiler HAS to re-read flag_ in this code
    The only way the compiler can NOT re-read flag_ is if it can guarantee that calling Lock() and Unlock() (and Sleep) will have no side effect on flag_. In most cases, these will eventually call into something like EnterCriticalSection() or pthread_mutex_lock() - which the optimizer cannot analyze, so it cannot make any assumptions about their side effects.

    I read in a news group posting somewhere that "there are no compilers that hoist variables into a register across opaque function calls", due to the difficulty of guaranteeing there are no side effects. I don't know if that's true, but I do know that if Pthreads synchronization is used then the compiler can not perform this optimization and still be POSIX compliant.

    I would also say that it's very safe to assume this optimization would not be performed when using Enter/Leave CriticalSection - or any of the Win synchronization primitives. Otherwise, a *lot* of code out there would break.
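
    To make the flag_ discussion concrete, here is roughly what the pattern looks like, written as a pthreads-flavoured sketch (not the article's actual code; the names and sleep interval are illustrative):
    Code:
    #include <pthread.h>
    #include <unistd.h>
    
    static int flag_ = 0;            /* shared; protected by the mutex, no volatile needed */
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    
    static int read_flag(void)
    {
        int done;
        pthread_mutex_lock(&mtx);    /* opaque call: the optimizer must assume
                                        flag_ may have changed behind its back */
        done = flag_;
        pthread_mutex_unlock(&mtx);
        return done;
    }
    
    static void wait_for_flag(void)
    {
        while (!read_flag())
            usleep(1000);            /* another opaque call the optimizer can't see into */
    }
    Because the optimizer cannot see into pthread_mutex_lock()/pthread_mutex_unlock(), it cannot legally cache flag_ in a register across those calls.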

    I found a great paper that talks about the deficiencies and pitfalls of MT programming. It also talks about other compiler optimizations that can cause problems in an MT environment.
    Threads Cannot be Implemented as a Library
    Author: Hans-J. Boehm, HP Labs
    (PDF file)

    Also interesting:
    Programming with Threads: Questions Frequently Asked by C and C++ Programmers
    Authors: Hans-J. Boehm, HP Labs & Paul McKenney, IBM


    gg

    regards,
    George

  4. #19
    Kernel hacker
    Join Date: Jul 2007
    Location: Farncombe, Surrey, England
    Posts: 15,677
    The point is that volatile in itself is not going to solve data races. Adding volatile to a variable only means "the compiler must not optimize away reads/writes from/to this variable, as it may change at any time".

    But bear in mind the other points too:
    1. The order of reads/writes between different volatile variables is still undefined at the processor level - that is, the processor may well write out the data and read other data in a different order than the source code [or assembler/machine code] indicates. So if you rely on the order of read/write operations, you will HAVE to put extra instructions in to force the reads/writes to be performed in that order.
    2. One volatile variable may be updated in more than one operation. There is no "global lock" applied to volatile variables, so if the variable is updated by multiple operations, then so it is - see the sketch below.
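
    To illustrate point 2: even a plain int increment is a read-modify-write, so two threads incrementing the same volatile variable can still lose updates. It is the lock, not volatile, that makes the update safe. A rough pthreads-flavoured sketch (illustration only):
    Code:
    #include <pthread.h>
    
    volatile int counter;        /* volatile, but still racy if updated without a lock */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    
    void unsafe_increment(void)
    {
        counter++;                   /* load, add, store: not a single operation */
    }
    
    void safe_increment(void)
    {
        pthread_mutex_lock(&lock);   /* the lock, not volatile, makes this safe */
        counter++;
        pthread_mutex_unlock(&lock);
    }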

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #20
    Registered User
    Join Date: May 2006
    Posts: 1,579
    Thanks Mats,


    From your reply, all I can gather is that it is quite uncertain when to use volatile and when not to. :-)

    Maybe that is the fault of missing comprehensive, easy-to-understand compiler/CPU documentation, not the developer's fault.

    Well, I think the only definite answer from you is the one below: if we are using locks properly, there is no need to use volatile. I think you mean that inside a synchronized section there is no need to add volatile to a thread-shared variable, and that outside a synchronized section we had better add volatile to a thread-shared variable, like the MSDN sample, to avoid wrong optimization from the compiler/CPU. Is my understanding correct?

    It is always interesting to read the word "properly". What do you mean by a properly used lock? Any general simple guidelines? :-)

    Quote Originally Posted by matsp View Post

    2. If you use locks properly, you shouldn't need volatile, and using locks is the only way to prevent problems with threads.
    --
    Mats

    regards,
    George

  6. #21
    Registered User
    Join Date: May 2006
    Posts: 1,579
    Thanks Mats,


    Two more comments,

    1. I think volatile has nothing to do with instruction reordering, at either the compiler level or the CPU level. It only means that reads/writes are not optimized away into registers. But your comment (1) below seems to suggest that you think volatile could stop the compiler from reordering?

    Quote Originally Posted by matsp View Post
    But bear in mind the other points too:
    1. The order of reads/writes between different volatile variables is still undefined at the processor level - that is, the processor may well write out the data and read other data in a different order than the source code [or assembler/machine code] indicates. So if you rely on the order of read/write operations, you will HAVE to put extra instructions in to force the reads/writes to be performed in that order.
    2.

    I am confused about your comment below. What do you mean by "may be updated in more than one operation" and "then so it is"? This question is about the relationship between synchronization and volatile; does your comment below relate to this question? :-)

    If I missed any of your points, please feel free to correct me.

    Quote Originally Posted by matsp View Post
    2. One volatile variable may be updated in more than one operation. There is no "global lock" applied to volatile variables, so if the variable is updated by multiple operations, then so it is.

    --
    Mats

    have a good weekend,
    George

  7. #22
    Kernel hacker
    Join Date: Jul 2007
    Location: Farncombe, Surrey, England
    Posts: 15,677
    1. I don't think the C/C++ language guarantees that volatile enforces ordering, but certainly the Microsoft documentation indicates that THEIR compiler enforces ordering, and my experience is that other compilers also do this.

    Gcc isn't quite as clear. Have a look at this page: http://gcc.gnu.org/onlinedocs/gcc-4....Volatiles.html

    2. A typical example would be a 32-bit processor accessing a volatile 64-bit integer, or some other type that is not a basic machine type. If you read a 64-bit value on a 32-bit processor [using standard instructions], it will be performed as two reads. Some processors, such as x86, have SPECIAL instructions that allow certain types of 64-bit operations to be made atomic, e.g. cmpxchg8b [aside from such things as MMX and SSE instructions that can also perform single reads of 64- or 128-bit values].

    Assuming you have two threads doing something like this:
    Code:
    #include <stdio.h>
    
    volatile long long int x;    /* 64-bit value shared between the two threads */
    
    void thread1(void)
    {
        for(;;) {
           if ((x & 0xFFFFFFFF) == 0) {            /* lower 32 bits are zero */
               printf("%ld\n", (long)(x >> 32));   /* print the upper 32 bits */
           }
        }
    }
    
    void thread2(void)
    {
        for(;;) {
           x++;
        }
    }
    You could conceivably get "half-updated" values, where the lower half has been updated but the upper half hasn't been updated yet, so for example the above code may print:
    0
    1
    1
    3

    [Although it's quite unlikely that it will happen that quickly].
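
    For completeness: on x86, one way to read such a 64-bit value in a single atomic operation, without taking a lock, is a compare-and-swap. This GCC-specific sketch (illustration only) uses a builtin that maps onto lock cmpxchg8b on 32-bit x86:
    Code:
    /* GCC-specific sketch: atomically read a 64-bit value on 32-bit x86 via
       compare-and-swap. Illustration only; needs -march=i586 or later. */
    static long long atomic_read64(volatile long long *p)
    {
        /* CAS with old == new == 0 never changes *p, but returns its
           current contents as the result of one atomic operation */
        return __sync_val_compare_and_swap(p, 0LL, 0LL);
    }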

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
