Is there a program to test if your local stack variables are zeroed out after exiting a function? If so, how can I go about doing this?
Your local stack variables don't exist after the function. (Assuming you mean variables local to the function.)
If you want that piece of memory to read 0, you would have to set it as such before the function ends (and I would guess that your compiler may well optimize it away).
If you mean something else, you'll have to be more specific.
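To illustrate the point, a minimal sketch (made-up names): once the function returns, the local is gone, name and all.
Code:
void f()
{
    int x = 5;   // x exists only in f's stack frame, only while f runs
}                // x's lifetime (and its stack slot) ends here

int main()
{
    f();
    // x does not exist here; this line would not even compile:
    // std::cout << x;
}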
I think you answered my question.
I need to write a program to check whether C++ zeroes out local stack variables after a function exits. I could do this simply by declaring a variable 'x' inside a function and another variable 'x' in main, giving the function's 'x' a different value than main's. If I then have the program output 'x' at the end of main, it should output main's variable 'x', correct?
It would output main's variable x, yes. On the other hand, that doesn't have anything to do with "zero[ing] out the local stack variables after a function". (This is just checking that myfunc's x and main's x are not the same, not that myfunc's x is zero after the function ends.)
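A minimal sketch of the test being described, using the hypothetical myfunc from above:
Code:
#include <iostream>

void myfunc()
{
    int x = 99;       // myfunc's x, living in myfunc's stack frame
}

int main()
{
    int x = 5;        // main's x, a completely separate object
    myfunc();
    std::cout << x;   // prints 5: the two x's never shared storage
}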
Presumably you would need to return a pointer to a local variable from the function (a dangling pointer: Very Bad in the normal scheme of things, but since you're trying to get at implementation details, I guess we'll let it go this time) and dereference that pointer after the function ends, if you wanted to see what actually happened to the memory.
A better question would be: why must you do this?
This sounds like a job for Segment Fault Man! Or if enough time elapses you may get to deal with his loveable sidekick, The Access Violator. If you really deemed it necessary, you could zero out the function's entire stack frame yourself. But man is that going to be a nasty block of code. I will take no part in helping you write useless code.
Oh god, I must do this for my MIPS assembly class. We are trying to prove that local variables are actually zeroed out after exiting a function (in C++), and I haven't done any C++ since last semester, and even then we barely covered pointers. This will be a lot of fun.
Yeah I figured you were an assembly language programmer. Your questions seem to stem from someone who is more used to assembler languages. Good ol' MIPS :)
But they aren't?
This is undefined behaviour, since you shouldn't return pointers to local variables, but when I print it I see 42 (not 0).
Code:
#include <iostream>

int* foo()
{
    int n = 42;
    return &n;   // undefined behaviour: n's lifetime ends when foo returns
}

int main()
{
    int* p = foo();
    std::cout << *p;   // typically prints 42, not 0
}
Also, if they were zeroed out after leaving a function, shouldn't they also be zero when entering a function? (Alas, uninitialized variables don't contain 0.)
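For what it's worth, a small sketch of that last point. Reading an uninitialized local is undefined behaviour, but compiled without optimizations it will often show leftover stack contents rather than 0:
Code:
#include <iostream>

void leave_a_value()
{
    int a = 1234;   // writes 1234 into this frame's stack slot
}

void read_uninitialized()
{
    int b;                    // likely the same slot, never initialized (UB to read)
    std::cout << b << '\n';   // often prints 1234, not 0
}

int main()
{
    leave_a_value();
    read_uninitialized();
}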
Or, if you know assembly, couldn't you check the compiler's output to see whether there are any instructions that zero anything out when leaving a function? (I don't know it myself.)
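That check is easy enough with GCC: compile with -S and read the generated assembly. A sketch (the exact epilogue varies by compiler and target):
Code:
// foo.cpp -- compile with: g++ -S -O0 foo.cpp   (writes foo.s)
int foo()
{
    int n = 42;   // n occupies a slot in foo's stack frame
    return n;
}
// A typical x86 epilogue only restores the frame and stack pointers:
//     leave   ; equivalent to: mov esp, ebp / pop ebp
//     ret
// Nothing in it writes zeros over n's slot.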
I believe the purpose of the assignment is to prove that fact, anon. Now that I'm a little more familiar with why he's asking, I'm comfortable knowing he isn't just keeping dangling pointers around for later use. It's just a computer science proof.
Zeroing out memory is expensive, Amyaayaa. Could you even begin to imagine how much overhead it would add to zero every stack frame before entering, or upon exiting, each and every function called?
Can this be due to compiling with debug info? I know that this changes the initialisation of variables.
Quote:
This is undefined behaviour, since you shouldn't return pointers to local variables, but when I print it I see 42 (not 0).
The stack isn't necessarily destroyed or overwritten immediately. It's just considered to have no promises attached to it. In essence, you are doing something that yields no consistent behavior.
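A sketch of what "no promises" means in practice; this is still undefined behaviour, so treat it as an experiment, not a technique. Compiled without optimizations, the old slot tends to survive until some other call reuses it:
Code:
#include <iostream>

int* foo()
{
    int n = 42;
    return &n;   // dangling the moment foo returns (UB)
}

void clobber()
{
    int m = 7;   // likely lands in the same stack slot n used
}

int main()
{
    int* p = foo();
    std::cout << *p << '\n';   // may still print 42
    clobber();
    std::cout << *p << '\n';   // now often 7: the slot was reused, not zeroed
}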
It's not that expensive. One instruction at most and it might just as well end up in the cache if you're lucky (though I don't know if that can happen for sure).
Debuggers can initialize variables with known values. However, by default variables contain "junk", or in other words, undefined contents.
Point being, variables are not zeroed before creation nor after their destruction.
All that happens, pretty much, is that the stack pointer (esp on x86) is moved back and the frame pointer (ebp) is restored.
But regardless of all that - this changes from system to system and has no connection to C++. A variable's value before use is undefined and so is its contents after destruction.
I assure you it is expensive enough that your operating system does not do it.
Really?
Try this code (in debug mode, of course):
I fail to see a big speed hit, especially since the results quite often show the second (!) loop being faster than the first:
Code:
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD dwTick1 = GetTickCount();
    for (int i = 0; i < 1000000000; i++)
        int x;          // uninitialized local
    cout << "Took " << GetTickCount() - dwTick1 << " ms.\n";

    dwTick1 = GetTickCount();
    for (int i = 0; i < 1000000000; i++)
        int x = 0;      // zero-initialized local
    cout << "Took " << GetTickCount() - dwTick1 << " ms.\n";
}
Took 4250 ms.
Took 4079 ms.
Took 3954 ms.
Took 3703 ms.
Took 3937 ms.
Took 3844 ms.
Can you tell how it's expensive?
I would guess the instruction inside the loop is being optimized out since it doesn't result in anything.
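One way to check that guess is to force the store to happen: making the variable volatile means the compiler will actually emit the write on each iteration. A sketch of the tweak (otherwise the same test):
Code:
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD dwTick = GetTickCount();
    for (int i = 0; i < 1000000000; i++)
    {
        volatile int x = 0;   // volatile: the store won't be optimized out
    }
    cout << "Took " << GetTickCount() - dwTick << " ms.\n";
}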
Whoa, whoa. Actually, debug mode initializes the values in all cases where they're not explicitly initialized. Invalid test.
Not exactly. The debug runtime replaces anything that would result in undefined behaviour with values that are still formally undefined, but much easier to spot as a programmer (MSVC debug builds, for instance, fill uninitialized stack memory with the recognisable pattern 0xCC). It actually makes spotting errors so much easier.
It isn't, I verified with assembly.
Not so. They're not always initialized. And if they were, it would mean it's being written twice in the second loop.
But I agree that it would be better with Release (I just had trouble convincing the compiler to stop optimizing the entire code away).
Here's another version that can be compiled in Release:
The amazing results I get from this are:
Code:
#include <windows.h>
#include <iostream>
#include <cstring>   // for strcpy
using namespace std;

int main()
{
    DWORD dwTick1 = GetTickCount();
    for (int i = 0; i < 100; i++)
    {
        char x[100000];              // uninitialized buffer
        strcpy(x, "This is a test\n");
        cout << x;
    }
    DWORD dwTick2 = GetTickCount();

    DWORD dwTick3 = GetTickCount();
    for (int i = 0; i < 100; i++)
    {
        char x[100000] = {};         // zero-initializes all 100000 bytes
        strcpy(x, "This is a test\n");
        cout << x;
    }
    cout << "Took " << dwTick2 - dwTick1 << " ms.\n";
    cout << "Took " << GetTickCount() - dwTick3 << " ms.\n";
}
Took 203 ms.
Took 188 ms.
Took 234 ms.
Took 235 ms.
Took 281 ms.
Took 281 ms.
Took 172 ms.
Took 375 ms.
Took 187 ms.
Took 234 ms.
Took 375 ms.
Took 250 ms.
These results can be unpredictable and not 100% accurate due to the nasty work of the scheduler, but they show that the two loops take roughly the same amount of time.
It's not quite as expensive as you would have everyone believe.
Ah yes... but it is. I base this on experience with clearing out stack space in an embedded system. If you would like I could certainly continue to dispute your "infallible" test program more. Or you can just accept that this is one of those stalemates we always end up at after dozens of posts back and forth with no change in opinion in sight.
It does seem that in debug mode the extra handling of uninitialized variables makes them more expensive. In an optimized release build the uninitialized version is slightly faster (and MinGW doesn't seem to optimize anything out):
Took 4485 ms.
Took 4562 ms.
However, the stack frame usually holds more than one variable. Try replacing x with x[10] and 0 with {0}:
Took 4485 ms.
Took 33921 ms.
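For reference, the modified test would look roughly like this (the earlier debug-mode program with the array substitution applied):
Code:
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD dwTick1 = GetTickCount();
    for (int i = 0; i < 1000000000; i++)
    {
        int x[10];          // ten uninitialized ints: nothing needs to be emitted
    }
    cout << "Took " << GetTickCount() - dwTick1 << " ms.\n";

    dwTick1 = GetTickCount();
    for (int i = 0; i < 1000000000; i++)
    {
        int x[10] = {0};    // ten ints zeroed on every iteration
    }
    cout << "Took " << GetTickCount() - dwTick1 << " ms.\n";
}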
Heh.. I took it as being directed at me. Not all processors use out-of-order execution, by the way, which is one reason your zeroing may seem completely benign.
Of course, systems may differ in how they perform on this test. It's by no means a guarantee that zeroing isn't expensive, and it's not real proof, but it is a small test suggesting that zeroing may not be as expensive as you claim, at least not on all platforms.
It varies from system to system, but on PCs I can fairly confidently say it's viable to do without big overhead. It's just that the OS builders choose not to do so.