Greetings to all. This is my first contribution here. I have been doing multithreaded work for years on a variety of platforms, most recently on PlayStation 3/Cell BE and Linux for Sony; that matters only insofar as none of it is Windows. One of the trickiest things we have come across is troubleshooting logic problems in heavily multithreaded applications. For those not immediately wincing: often the only tools you really have, due to platform or other constraints, are the dreaded printf() or writing to a log file. Both approaches work to an extent, but the real issue is that the I/O they perform causes the kernel to generate unwanted, artificial context switches, so the app behaves one way with debug statements and another way without them. How many times have you had some nasty bug that you finally trap with printf() or log messages, fix the bug, build in release mode, and suddenly the app behaves completely differently? This is what I refer to, for lack of anything more original, as the "observer effect".
I have spent a lot of time thinking about what could be done for this problem, and I think I have come up with the simple germ of an idea to help with it. The system is by no means complete; it is more a proof of concept right now, but I think it has promise, and I will put it to the test on the next real-world system I work on. So how does it work?
In short, the idea is a passive logging server that gives each out-of-process client a unique memory block to play in, used as a circular queue. Since each client only ever pushes data (log messages) into its own slot of memory, no synchronization is needed. Better still, the messages themselves are tokenized so that each one occupies a predictable block of memory, and the write can be accomplished with a single memcpy(). How they are tokenized is beyond the scope of this post but is spelled out in the article on my website (link to that and the demo code at the bottom of this post). The server itself is never instantiated directly; rather, the first client created instantiates the server, and when the last client goes out of scope (in theory, at the close of the application) the server is destroyed. As each successive client is created, the server allocates a block of memory for that client to use. As part of destruction the server iterates through the allocated blocks, converts the tokenized logs into textual messages, and writes them all to a disk file, aggregated in timestamp order.
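To make the shape of the design concrete, here is a minimal single-process sketch of the lifecycle described above. Everything in it is hypothetical (the names LogServer, LogRecord, attach(), and so on are mine, not from the actual source, and the real system hands out shared-memory blocks to out-of-process clients and flushes to a disk file rather than stdout), but it shows the core moves: a reference-counted server created by the first client and torn down by the last, a private ring buffer per client written with a single memcpy() per message, and a timestamp-ordered merge at shutdown:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Fixed-size tokenized record: one memcpy() per log call. (Hypothetical
// layout; the real tokenization scheme is described in the linked article.)
struct LogRecord {
    uint64_t timestamp;  // monotonically increasing stamp, used for ordering
    uint32_t client_id;
    uint32_t token;      // message id + packed arguments in the real system
};

class LogServer {
public:
    // First client in creates the server; last one out tears it down.
    static LogServer* attach() {
        if (!instance_) instance_ = new LogServer();
        ++instance_->refcount_;
        return instance_;
    }
    void detach() {
        if (--refcount_ == 0) {
            flush();
            instance_ = nullptr;
            delete this;
        }
    }

    // Allocate a private ring buffer for a new client.
    uint32_t add_client(size_t capacity) {
        rings_.push_back(Ring{std::vector<LogRecord>(capacity), 0, 0});
        return static_cast<uint32_t>(rings_.size() - 1);
    }

    // Push is a plain store into the client's own block: no locks, because
    // no other client ever writes here.
    void push(uint32_t client, const LogRecord& rec) {
        Ring& r = rings_[client];
        std::memcpy(&r.slots[r.head % r.slots.size()], &rec, sizeof rec);
        ++r.head;
        if (r.count < r.slots.size()) ++r.count;  // oldest entries overwritten
    }

    // On shutdown, merge every client's records in timestamp order.
    std::vector<LogRecord> merged() const {
        std::vector<LogRecord> all;
        for (const Ring& r : rings_) {
            size_t start = r.head - r.count;
            for (size_t i = 0; i < r.count; ++i)
                all.push_back(r.slots[(start + i) % r.slots.size()]);
        }
        std::sort(all.begin(), all.end(),
                  [](const LogRecord& a, const LogRecord& b) {
                      return a.timestamp < b.timestamp;
                  });
        return all;
    }

private:
    struct Ring {
        std::vector<LogRecord> slots;
        size_t head;   // next write position (monotonic)
        size_t count;  // number of valid records (<= capacity)
    };
    // Stand-in for writing the aggregated textual log to a disk file.
    void flush() {
        for (const LogRecord& r : merged())
            std::printf("%llu client=%u token=%u\n",
                        (unsigned long long)r.timestamp, r.client_id, r.token);
    }
    std::vector<Ring> rings_;
    int refcount_ = 0;
    static LogServer* instance_;
};
LogServer* LogServer::instance_ = nullptr;
```

In a real deployment the timestamp would come from a cheap monotonic source read in the client itself, so that stamping a message costs far less than the syscall a printf() would incur.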
I have a more long-winded article on my blog along with a link to the source code. FWIW, it is GPL 3.0. You will need CMake to build the demo app, which shows general message construction and how it all works in a heavier multithreaded environment. Caveat: the exact message creation/tokenization is still very alpha; I think I will be able to make it more "real world" once I use it a few times in something other than a test harness. In any event, this code and the article should explain the direction I am taking, and I would be interested in the thoughts of other folks used to dealing with these kinds of things...