Thursday as in tomorrow? OK.
Well, I will do what I can tonight to give some guidance, but honestly, for a BFO into what is *actually* happening, assign each producer/consumer an id number and have them print it in your output. What you are going to find is something like this:
1. You need to read 20 files and ultimately uppercase them (the "work") and then print them to the screen.
2. You want the thing with the most work to have the most threads.
3. The way your producers are set up, all 20 of them will fire up, fight over a mutex, pull off a filename, read that file, stick it in the data store, set the semaphore for the readers, and then *loop* to see if there is more work.
4. The readers fight over the same mutex, 1 (one) of them wins, pulls off a value and prints it to the screen.
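To see this for yourself, here is a minimal sketch of the id-tagging idea (names like `run_producers` and `g_jobsDoneBy` are mine, not from your code): each producer gets an id and tallies how many jobs it actually wins. With work this cheap, expect one or two ids to dominate the tally, which is the whole point.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_PRODUCERS 20
#define NUM_JOBS      20

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_jobsLeft = NUM_JOBS;
static int g_jobsDoneBy[NUM_PRODUCERS];  /* per-thread tally */

static void *producer(void *arg)
{
    int id = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&g_lock);
        if (g_jobsLeft == 0) {               /* no more work */
            pthread_mutex_unlock(&g_lock);
            return NULL;
        }
        g_jobsLeft--;
        g_jobsDoneBy[id]++;                  /* who actually won the mutex? */
        pthread_mutex_unlock(&g_lock);
        /* the real "work" (reading a file) would go here, outside the lock */
    }
}

/* Spins up all producers, waits for them, prints the tally.
   Returns the total number of jobs done (should equal NUM_JOBS). */
int run_producers(void)
{
    pthread_t tids[NUM_PRODUCERS];
    int ids[NUM_PRODUCERS];
    int total = 0;

    for (int i = 0; i < NUM_PRODUCERS; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, producer, &ids[i]);
    }
    for (int i = 0; i < NUM_PRODUCERS; i++)
        pthread_join(tids[i], NULL);

    for (int i = 0; i < NUM_PRODUCERS; i++) {
        printf("producer %d did %d jobs\n", i, g_jobsDoneBy[i]);
        total += g_jobsDoneBy[i];
    }
    return total;
}
```

Run it a few times; the distribution will not be anywhere near 1 job per thread.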
Now, given the work presented and the way you have distributed it, the producers might as well print to the screen themselves and be done with it if getting the job done is the point; the readers are superfluous.
*however*
What actually happens on a dual-core box is:
The first 1 or 2 producers will do 90% of the work. This is what will be revealed if you assign each one a number as I had done some time ago. That means that 90-95% of your producers will spin on the while(1) loop and do nothing but eat clock cycles. If they all try to lock the same mutex, most won't even finish. This will demonstrate the lunacy of 20 producers: they won't get the job done 20x faster and most won't do anything at all.
The readers, on the other hand, really don't have much to do: just fight over a mutex and print a string to the screen.
In a real producer/consumer model, the producers (usually just a handful, if even more than 1 or 2) simply describe the job, which could look like this:
Code:
#include <stdbool.h>

#define MAX_FILENAME_SIZE 256  /* pick a sensible limit */

struct TJob
{
    // data
    char achBuffer[MAX_FILENAME_SIZE];
    bool bProcessed; // has this job been done yet?
};

// you know you only have 20 jobs so...
struct TJob jobQueue[20];
Then you keep a job count (how many jobs are in your jobQueue), the currentAddPosition (init to -1), and the currentJob being done (also init to -1).
Spin your threads as follows:
1-2 (only one is necessary) producers that feed filenames into the queue
4-5 readers.
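The spin-up itself is trivial; a sketch (producer_main and reader_main are hypothetical stubs here, standing in for your real thread functions):

```c
#include <pthread.h>

#define NUM_READERS 4

/* Stub entry points -- in the real program these would feed the queue
   and drain it; the names are illustrative, not from your code. */
static void *producer_main(void *arg) { (void)arg; return NULL; }
static void *reader_main(void *arg)   { (void)arg; return NULL; }

/* Returns 0 on success, -1 if any thread failed to start. */
int spin_threads(void)
{
    pthread_t producer, readers[NUM_READERS];

    if (pthread_create(&producer, NULL, producer_main, NULL) != 0)
        return -1;
    for (int i = 0; i < NUM_READERS; i++)
        if (pthread_create(&readers[i], NULL, reader_main, NULL) != 0)
            return -1;

    pthread_join(producer, NULL);
    for (int i = 0; i < NUM_READERS; i++)
        pthread_join(readers[i], NULL);
    return 0;
}
```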
Make a mutex-protected function to add to the queue, which would involve:
    increment the totalJobs variable
    if (currentAddPosition == -1)
        it's the first job, so set currentAddPosition to 0
        set currentJob to 0
    else
        increment currentAddPosition
    copy the job to jobQueue[currentAddPosition]
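In C with pthreads, that add function might look something like this (the bounds check, the return value, and the buffer size are my additions, not part of the assignment):

```c
#include <pthread.h>
#include <string.h>
#include <stdbool.h>

#define MAX_FILENAME_SIZE 256
#define MAX_JOBS 20

struct TJob {
    char achBuffer[MAX_FILENAME_SIZE];
    bool bProcessed;
};

static struct TJob jobQueue[MAX_JOBS];
static int totalJobs = 0;            /* jobs waiting in the queue */
static int currentAddPosition = -1;  /* last slot written */
static int currentJob = -1;          /* next slot to hand out */
static pthread_mutex_t queueLock = PTHREAD_MUTEX_INITIALIZER;

/* Mutex-protected add. Returns the slot used, or -1 if the queue is full. */
int addJob(const char *filename)
{
    int slot;
    pthread_mutex_lock(&queueLock);
    if (currentAddPosition + 1 >= MAX_JOBS) {
        pthread_mutex_unlock(&queueLock);
        return -1;
    }
    if (currentAddPosition == -1) {  /* first job */
        currentAddPosition = 0;
        currentJob = 0;
    } else {
        currentAddPosition++;
    }
    strncpy(jobQueue[currentAddPosition].achBuffer, filename,
            MAX_FILENAME_SIZE - 1);
    jobQueue[currentAddPosition].achBuffer[MAX_FILENAME_SIZE - 1] = '\0';
    jobQueue[currentAddPosition].bProcessed = false;
    totalJobs++;                     /* count it exactly once */
    slot = currentAddPosition;
    pthread_mutex_unlock(&queueLock);
    return slot;
}
```

Note the lock is held only long enough to touch the shared counters and copy the filename in.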
Then you need a reader function (something to dole out the "jobs" to readers). That function would:
    read jobQueue[currentJob]
    decrement totalJobs
    increment currentJob
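A matching sketch of the reader-side function (getJob is my name for it; the filename is copied out under the lock so the reader can work on it after releasing the mutex):

```c
#include <pthread.h>
#include <string.h>
#include <stdbool.h>

#define MAX_FILENAME_SIZE 256
#define MAX_JOBS 20

struct TJob {
    char achBuffer[MAX_FILENAME_SIZE];
    bool bProcessed;
};

static struct TJob jobQueue[MAX_JOBS];
static int totalJobs = 0;
static int currentJob = 0;
static pthread_mutex_t queueLock = PTHREAD_MUTEX_INITIALIZER;

/* Hand the next job to a reader. Copies the filename into outFilename
   (must be at least MAX_FILENAME_SIZE bytes); returns false when the
   queue is empty. */
bool getJob(char *outFilename)
{
    bool got = false;
    pthread_mutex_lock(&queueLock);
    if (totalJobs > 0) {
        strcpy(outFilename, jobQueue[currentJob].achBuffer);
        jobQueue[currentJob].bProcessed = true;
        totalJobs--;
        currentJob++;
        got = true;
    }
    pthread_mutex_unlock(&queueLock);
    return got;
}
```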
The producers all call the top function, the readers call the bottom one, both protected by mutexes but ONLY for as long as it takes to push data into or pull data from jobQueue. The rest of the time, let the threads work in solitude. The producers could read the files and push the contents into the jobQueue; the readers (as long as each gets a unique index into the data) can access the data store simultaneously, getting the data, upper-casing it, and printing it to the screen. Be prepared for a bottleneck at the IO layer (the screen).
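The uppercase "work" itself needs no lock at all; something like this, called after the reader has pulled its job out of the queue (upperCaseInPlace is an illustrative name):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* The reader's "work": uppercase in place, done OUTSIDE the queue lock. */
void upperCaseInPlace(char *s)
{
    for (; *s; s++)
        *s = (char)toupper((unsigned char)*s);
}

/*
 * Typical reader loop, assuming a mutex-protected getJob like the one
 * described above:
 *
 *     char buf[MAX_FILENAME_SIZE];
 *     while (getJob(buf)) {        // lock held only inside getJob
 *         upperCaseInPlace(buf);   // work done in solitude
 *         puts(buf);               // here comes the IO bottleneck
 *     }
 */
```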