I would guess the instruction inside the loop is being optimized out since it doesn't result in anything.
"You are stupid! You are stupid! Oh, and don't forget, you are STUPID!" - Dexter
Whoa, whoa. Actually, debug mode initializes the values in all cases where they're not initialized explicitly. Invalid test.
Not exactly... The debug build doesn't remove undefined behavior; it rearranges it into a form that is still undefined but much easier for the programmer to spot. It actually makes finding errors so much easier.
It isn't, I verified with assembly.
Not so. They're not always initialized. And if they were, it would mean it's being written twice in the second loop.
But I agree that it would be better with Release (I just had trouble convincing the compiler to stop optimizing the entire code away).
Here's another version that can be compiled in Release:
The amazing results I get from this are:

Code:
#include <windows.h>
#include <cstring>   // for strcpy
#include <iostream>
using namespace std;

int main()
{
    DWORD dwTick1 = GetTickCount();
    for (int i = 0; i < 100; i++)
    {
        char x[100000];
        strcpy(x, "This is a test\n");
        cout << x;
    }
    DWORD dwTick2 = GetTickCount();

    DWORD dwTick3 = GetTickCount();
    for (int i = 0; i < 100; i++)
    {
        char x[100000] = {};
        strcpy(x, "This is a test\n");
        cout << x;
    }

    cout << "Took " << dwTick2 - dwTick1 << " ms.\n";
    cout << "Took " << GetTickCount() - dwTick3 << " ms.\n";
}
Took 203 ms.
Took 188 ms.
Took 234 ms.
Took 235 ms.
Took 281 ms.
Took 281 ms.
Took 172 ms.
Took 375 ms.
Took 187 ms.
Took 234 ms.
Took 375 ms.
Took 250 ms.
These results can be unpredictable and not 100% accurate due to the nasty work of the scheduler, but they show that both versions take roughly the same amount of time.
It's not quite as expensive as you would have everyone believe.
Ah yes... but it is. I base this on experience with clearing out stack space in an embedded system. If you would like, I could certainly continue to dispute your "infallible" test program. Or you can just accept that this is one of those stalemates we always end up in after dozens of posts back and forth, with no change in opinion in sight.
It indeed seems that in debug mode, looking after uninitialized things is more expensive. In optimized release mode the uninitialized version is slightly faster (and it doesn't seem that MinGW optimizes anything out).
Took 4485 ms.
Took 4562 ms.
However, the stack usually holds more than just one variable. Try replacing x with x[10] and 0 with {0}.
Took 4485 ms.
Took 33921 ms.
I might be wrong.
Quoted more than 1000 times (I hope). Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Heh... I took it as being directed at me. Not all processors use out-of-order execution, by the way, which is one reason your zeroing may seem completely benign.
Of course, not all systems will perform equally on this test. It is by no means a guarantee that zeroing isn't expensive, nor real proof, but it is a small test showing that zeroing may not be as expensive as you claim, at least not on all platforms.
It varies from system to system, but on PCs I can say pretty confidently that it's viable to do so without big overhead. It's just a matter of the OS builders choosing not to do so.