View Full Version : Aargh... evil static

01-12-2008, 11:05 AM
I'm trying to convert a single-threaded app to use multithreading (first time for me) and I immediately got punished for my sloppiness. :(

#include <iostream>

struct A {
    int n;
    A(int n): n(n) {}
    void foo() {
        static int num = n; // initialized only once, by whichever object calls foo() first
        std::cout << num << '\n';
    }
};

int main() {
    A a(10), b(20);
    a.foo(); // prints 10
    b.foo(); // also prints 10: num is shared by every A object
}

Such statics are shared between objects which I thought were completely independent, but now that I have multiple threads it turns out they provide a back door for objects to mess up each other's internal state. (With a single thread everything was OK, since the static only had to remain stable for the duration of the function call.) Glad I was at least using asserts to catch problems with the internal state.

Guess I'll have to make them regular members of the class (or local variables).

01-12-2008, 11:22 AM
Haha, I thought you fried your latest graphics card or something :p

01-12-2008, 11:46 AM
Actually they were motivated by an attempt at premature optimization.

I needed a temporary vector in some functions, but I thought I'd be clever and not let them release the memory between function calls:

static std::vector<int> vec; // lives for the whole program, shared by all callers
//fill up and use

Guess this is an antipattern after all.

For now I just removed the static and ran a test to see if this trick makes any difference in performance.
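If reusing the buffer ever does turn out to matter, one thread-safe variant of the same trick (on a C++11 compiler) is a thread_local vector, so each thread gets its own instance and no locking is needed. A sketch, with a made-up sum_squares() job standing in for the real work:

```cpp
#include <vector>

// Hypothetical job function: reuses a per-thread scratch buffer across
// calls (keeping its capacity) without sharing it between threads.
int sum_squares(int count) {
    thread_local std::vector<int> vec; // one instance per thread (C++11)
    vec.clear();                       // drop old contents, keep the allocation
    for (int i = 0; i < count; ++i)
        vec.push_back(i * i);
    int total = 0;
    for (int v : vec)
        total += v;
    return total;
}
```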

01-12-2008, 11:54 AM
Yeah, as you know, if two or more threads access the same object (or if you create new objects that contain static members that get used), you have to do locking to avoid a data race.
Multi-threading is a pain, I'll give you that.

01-12-2008, 12:18 PM
Nah, the performance difference after simply removing the static is only about 0.1 ms per 8 ms, so there's no reason to start messing with locks (I do use locks to protect accesses to the vector that contains the results of the jobs). I'll have to see how the new multithreaded version runs on a slower machine before I start making radical changes.

Actually I just discovered that I had failed to apply an optimization (changing the value of a single constant) to part of the program; fixing that just gained me about 1 ms per 8 ms job. :)

01-12-2008, 12:19 PM
I did an experiment with locks myself.
With time-critical code, the code ran about 5 s slower with locks (I think it was about 4 s vs 9 s or so). And that was with only one thread doing any locking, so locking is not always optimal!

01-12-2008, 12:38 PM
Since this is my first time multithreading, I decided to keep the number of threads low.

The main thread obtains a resource (pops it from a vector of queues) and the background thread periodically checks the sizes of the queues and pushes more if needed. If the main thread still runs out of the resource (when the user keeps pressing the "New" button like crazy), it simply blocks and creates the resource by itself.
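The scheme above could be sketched roughly like this (all names and types are hypothetical, with a single queue and std::mutex standing in for the real resource type and wxCriticalSection):

```cpp
#include <cstddef>
#include <deque>
#include <mutex>

struct ResourcePool {
    std::deque<int> pool;  // stand-in for the real resource queue
    std::mutex mtx;        // guards pool

    // Called from the main thread: pop a ready resource, or fall back
    // to creating one on the spot if the pool has run dry.
    int get() {
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (!pool.empty()) {
                int r = pool.front();
                pool.pop_front();
                return r;
            }
        }
        return create(); // pool empty: block and make one ourselves
    }

    // Called periodically from the background thread: top the pool up.
    void refill(std::size_t target) {
        for (;;) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (pool.size() >= target)
                    return;
            }
            int r = create(); // do the expensive work outside the lock
            std::lock_guard<std::mutex> lock(mtx);
            pool.push_back(r);
        }
    }

    static int create() { return 42; } // placeholder for the expensive work
};
```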

Initially I tried to launch an "emergency thread" in such cases and let the main thread check the queues, but that became messy (I couldn't even figure out a good strategy for seeding the random number generators) and the whole program would sometimes lock up (perhaps because of the statics again).

I hope my understanding of locks is correct: the main thread creates a lock (a wxCriticalSection in my case, from wxWidgets) and passes a reference to any threads it creates. Then every access to the shared resource must be surrounded by calls to the section's Enter() and Leave() (or done while a wxCriticalSectionLocker is alive), and in the meantime any other thread that reaches the critical section has to stop and wait until the thread inside has left it.
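That matches how critical sections normally work. One detail worth noting: the RAII form (wxCriticalSectionLocker) releases the lock on every exit path, even an exception or early return, which the manual Enter()/Leave() pair does not. A standard-C++ sketch of the two styles, with std::mutex in the role of the critical section and a made-up shared vector:

```cpp
#include <mutex>
#include <vector>

std::vector<int> shared_results; // hypothetical shared resource
std::mutex cs;                   // plays the role of the wxCriticalSection

// Manual style, like cs.Enter()/cs.Leave(): the lock stays held
// forever if an exception is thrown between the two calls.
void add_manual(int v) {
    cs.lock();
    shared_results.push_back(v);
    cs.unlock();
}

// RAII style, like wxCriticalSectionLocker: the guard's destructor
// unlocks no matter how the function exits.
void add_raii(int v) {
    std::lock_guard<std::mutex> lock(cs);
    shared_results.push_back(v);
}
```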