Is the use of a buffer appropriate for sufficiently avoiding threads, here?


  1. #1
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498

    Is the use of a buffer appropriate for sufficiently avoiding threads, here?

    I'd eventually learn about threads when I get a bit of free time.
    Meanwhile, can I do the following with a buffer without running into pitfalls?

    Suppose an object of a class Device wants to send some packets of data to another such object.
    If they were running simultaneously, then one would send and the other would wait for the packets, until receiving them.

    What I'm doing now is keeping a std::deque of packet maps (each containing a packet and the address of another device) as a member of the class Device.

    When all the required packets are in the buffer, a static function bool connect()
    connects the two devices (which it takes as arguments), i.e. it iterates over the buffer of the sender and passes all the packets mapped to the receiver's address to the receiver's receive buffer.

    Am I taking the right approach?
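    For what it's worth, a minimal single-threaded sketch of the scheme described above might look like this (Device, Packet, send, and connect are illustrative names, not the OP's actual code, and the "map" is simplified to a deque of address/packet pairs):

    ```cpp
    #include <cstddef>
    #include <deque>
    #include <string>
    #include <utility>

    struct Packet { std::string payload; };

    class Device {
    public:
        // Queue an outgoing packet addressed to a particular device.
        void send(Device* to, Packet p) { outgoing_.push_back({to, std::move(p)}); }

        // Move every packet addressed to `receiver` from the sender's outgoing
        // buffer into the receiver's receive buffer. Returns true if anything
        // was delivered. Safe because everything runs on one thread.
        static bool connect(Device& sender, Device& receiver) {
            bool delivered = false;
            for (auto it = sender.outgoing_.begin(); it != sender.outgoing_.end();) {
                if (it->first == &receiver) {
                    receiver.received_.push_back(std::move(it->second));
                    it = sender.outgoing_.erase(it);
                    delivered = true;
                } else {
                    ++it;
                }
            }
            return delivered;
        }

        std::size_t pending() const { return outgoing_.size(); }
        std::size_t received() const { return received_.size(); }

    private:
        std::deque<std::pair<Device*, Packet>> outgoing_;
        std::deque<Packet> received_;
    };
    ```

    Since connect() runs to completion before anything else touches the buffers, there is no window in which a half-written packet can be observed.
    
    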
    Manasij Mukherjee | gcc-4.8.2 @Arch Linux
    Slow and Steady wins the race... if and only if :
    1.None of the other participants are fast and steady.
    2.The fast and unsteady suddenly falls asleep while running !



  2. #2
    Registered User
    Join Date
    Jun 2005
    Posts
    6,336
    Not if your program executes distinct threads for any key operations affecting the devices.

    While, generally, it is a good idea to minimise the number of threads (or processes), a buffer in isolation is insufficient to achieve that unless all operations on your devices execute sequentially (one after the other) on one (and only one) thread. If there is any degree of concurrency (e.g. more than one thread accessing a particular buffer), your code potentially suffers race conditions.

    If any buffer (or other area of memory) is potentially accessed by two (or more) threads, it is necessary to synchronise access to that buffer. For example, using a mutex or critical section.
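    To illustrate the mutex option, a guarded buffer along these lines might look as follows (a sketch assuming C++11's std::mutex; GuardedQueue and its members are hypothetical names):

    ```cpp
    #include <deque>
    #include <mutex>

    // A deque that serialises all access through one mutex, so a producer
    // thread and a consumer thread cannot corrupt its internal state.
    class GuardedQueue {
    public:
        void push(int value) {
            std::lock_guard<std::mutex> lock(m_);  // released when lock goes out of scope
            q_.push_back(value);
        }

        // Returns false if the queue was empty at the time of the call.
        bool try_pop(int& out) {
            std::lock_guard<std::mutex> lock(m_);
            if (q_.empty()) return false;
            out = q_.front();
            q_.pop_front();
            return true;
        }

    private:
        std::mutex m_;
        std::deque<int> q_;
    };
    ```

    The point is that every read and write of the underlying deque happens while the mutex is held, so no thread can observe it mid-modification.
    
    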
    Right 98% of the time, and don't care about the other 3%.

  3. #3
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    If you did do this with the threads, it would be a "producer-consumer" scenario and subject to the caveats grumpy mentions.

    As to whether not doing that is sufficient, it depends on why you think it could be a problem. You say:

    If they were running simultaneously, then one would send and the other would wait for the packets, until receiving them.
    Is that important? I.e., does it matter whether the receiver receives the packets piecemeal ASAP, or has to wait a bit but then receives them all at once? For example, if the device were a printer and the packets were pieces of a document to be printed, it probably does not matter whether the printer receives and prints a paragraph at a time or just prints the whole thing at once after a few moments. However, if the "device" were a realtime application, you may want to see the results in real time.

    Even if it is because you think piecemeal processing is important, you still do not necessarily have to use threads. Just deal with one packet at a time, start to finish, at a granularity appropriate to the goals of the "real time" receiver.

    I think the only real reason to thread here would be because it might speed the process up, if the relationship between the two sides is potentially unequal or imbalanced (one side takes less time than the other). To do that, you would decide which end is more processor intensive -- the producer/transmitter or the consumer/receiver -- and have multiple threads for that side. The producer(s) still place(s) packets into a queue, and the consumer(s) take them from there. If it turns out the queue is always empty, you want more producers. Otherwise, you want more consumers, to keep the queue as short as possible but not constantly empty.
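    A sketch of that shared queue, assuming C++11's thread facilities (WorkQueue is a made-up name; a condition variable lets consumers sleep instead of spinning when the queue is empty):

    ```cpp
    #include <condition_variable>
    #include <deque>
    #include <mutex>

    // Any number of producer threads may call push(); any number of
    // consumer threads may call pop(), which blocks until work arrives.
    class WorkQueue {
    public:
        void push(int v) {
            {
                std::lock_guard<std::mutex> lock(m_);
                q_.push_back(v);
            }
            cv_.notify_one();  // wake one waiting consumer
        }

        int pop() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });  // sleep until non-empty
            int v = q_.front();
            q_.pop_front();
            return v;
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::deque<int> q_;
    };
    ```

    With something like this in place, tuning is just a matter of how many threads you start on each side of the queue.
    
    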

    Simply having one thread for each device -- one sending and one receiving -- will not, I think, make much difference unless the system is otherwise completely idle. So the only reason to do that would be if the receiving device takes a long time to do its thing and you want to free the main process to move on. If that is your idea, IMO you would be better off using completely separate processes; i.e., you write a small "daemon" program that manages the receiver device and waits on a socket. Then your main process sends to that. On a multi-core system, this should have the exact same advantages as threading, but probably use fewer resources, esp. if they both link to a shared library for shared code, and be free from the concurrency issues that would complicate a singular multi-threaded process.
    Last edited by MK27; 07-08-2011 at 05:38 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  4. #4
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498
    I'm actually trying to simulate a group of computers on a network, as closely as possible.

    Your idea to use different processes is a good one. I'd do that when I have the different modules working properly and in a position to arrange parts of the code into clients and servers.

    Also, I can't deal with one packet at a time because "piecemeal processing" is not necessarily important and, sometimes, a single packet may not make sense at all.

    What I meant is: without any sort of concurrency, is my approach a good one?
    Last edited by manasij7479; 07-08-2011 at 05:47 AM.



  5. #5
    Registered User
    Join Date
    Apr 2006
    Posts
    2,045
    No, that approach will not work. What happens if you're half way through adding an element to the queue, and the other thread comes in to read it? You get a garbled packet. More importantly, if you're trying to append and pop members off the queue at the same time, the deque's internal size counter could get messed up.

    Now, with parity checks or fancier fault detection such as would be present on a real network, you can detect a garbled packet and deal with that, but you still can't use an unprotected deque.
    It is too clear and so it is hard to see.
    A dunce once searched for fire with a lighted lantern.
    Had he known what fire was,
    He could have cooked his rice much sooner.

  6. #6
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498
    and the other thread comes in to read it
    That is why I'm doing everything in a single thread, for now.



  7. #7
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by manasij7479 View Post
    That is why I'm doing everything in a single thread, for now.
    Well, you know that that works, right? So maybe your question is really: would this be better threaded? Because the answer is not automatically yes. Threading is useful for certain things, but it is not necessarily a better solution than a single process and can easily be worse.

    I would say, unless concurrency is necessary, or you think there are significant performance gains to be found -- which is not as simple as it sounds* -- let it be. More often than not, when I've implemented threads not out of necessity but because I think it will improve performance, I've been disappointed or horribly wrong. Hopefully each time I learn a little about what's appropriate...

    Anyway, IMO threads are a nifty and interesting tool, but like a lot of nifty interesting things, easy to abuse and to create crazy (as in, strange and impractical) things with. But YMMV.

    * the easiest way to see this is to write some simple programs that do the same producer/consumer oriented task, one single and one multi-threaded, and judge for yourself.

  8. #8
    Registered User
    Join Date
    Jun 2005
    Posts
    6,336
    The main performance gains from threading come in programs which rarely (preferably never) require data to be passed between threads. As soon as data is to be passed between threads - directly or indirectly - it is necessary to protect that data from concurrent access. That generally means some synchronisation scheme - either the threads producing data need to wait until another thread has consumed the data, or vice versa. Multiple consumers (those with read-only access) generally don't need synchronisation, but that changes if there are multiple producers (each modifying the same data store) or if the consumers "clear" the data (which is a modification).

    It doesn't matter if you do that with mutexes to synchronise threads, intermediate files on a hard drive (where the operating system, access controls, and the hard drive itself are, effectively, used to enforce synchronous access to the data), or some middleware layer that arbitrates between producers and consumers. The need for synchronisation does not change: only the mechanism by which it is achieved (and also whether it is explicit in the code).

    Then, even in cases where there is no data sharing between threads, there are the overheads associated with the existence of the threads themselves: the resources the process or operating system must allocate for each thread, and their scheduling. That overhead is relatively small, but is non-zero for each thread, and grows with the number of threads.


