Can I do something like this?

Code:
#define check( obj ) \
...elided... \
#define check
Out of curiosity, 6tr6tr, have you read Stroustrup's FAQ answer to "So, what's wrong with using macros?"
Originally Posted by Bjarne Stroustrup (2000-10-14)
You can select between macro definitions at compile time. I do this for the debug and release versions of a custom assert and a debug-print macro.
Code:
void __assert__(const char * file, unsigned line, bool dump);

#ifdef DEBUG
#define DEBUG_PRT(args) std::printf args
#define ASSERT(expr) if (!(expr)) __assert__(__FILE__, __LINE__, true);
#else
#define DEBUG_PRT(args)
#define ASSERT(expr) if (!(expr)) __assert__(__FILE__, __LINE__, false);
#endif
Among the reasons mentioned here for keeping an assert in the code is this:
FWIW
Another reason for keeping asserts in the ship version of an embedded product is that turning the asserts off will change the timing characteristics of the program. On a desktop application, this rarely leads to a different end result. In real-time applications, removing the asserts may lead to misbehavior that did not arise before, and the assertions will not be in place to detect the situation. It's also worth bearing in mind that on the desktop, more speed is always better, while on embedded products, extra speed is often not an advantage once you meet your hard real-time targets.
I'm with Dave on this one, at least sometimes. The timing issue has actually arisen for me a time or two, but it isn't always my only justification. I use asserts heavily at the fundamental levels of a program, i.e. memory management. I would personally rather a program die during various critical errors via assert than more or less write my own equivalent of assert just to accomplish the task of aborting safely.
On the flip side, at non-fundamental levels of a program (such as rendering to the screen), I think it's best practice for asserts not to be compiled into a release version.
I guess now we should really consider when assert() should be used. When debugging real-time code on a PC, I do use them during development just to protect me against idiotic errors that take forever to debug, e.g.:

Code:
void drawObject(object *x)
{
    assert(x);
    assert(x->image);
    // etc.
    ...
}

To be honest, it's a widely underused debugging tool that truly does protect against the tiny errors that keep us up at night.
So at this point I agree with you entirely, brewbuck. Now for the part where I tip my hat to Dave's logic: assert() is handy for embedded devices, as he pointed out, but it's handy for security protocols too. One can make assertions about what someone is doing and simply kill the program instead of allowing a serious attack to be performed. Or, in less doomsday-ish scenarios, you'd be amazed how nice it is to get feedback from an end-user about why a program is giving them trouble, instead of just a couple of emails saying "The program seems to crash very rarely, and I think it has something to do with when I open a help file."
Just some thoughts. Feel free to argue the merits of my commentary.
Requirements of high-criticality code (e.g. developed to SIL 4, safety-critical) do not work well with assert(). For those types of development it is generally necessary to define the requirements (preconditions, postconditions, etc.) of code, and then ensure through analysis and design that the code IS NOT invoked in any way that does not meet those requirements. Using a run-time mechanism to detect violated requirements is insufficient to satisfy such demands.
Consider software that monitors human vital signs and controls a respirator: the set of conditions when that software can terminate safely are very small (eg when medical staff consider it appropriate to take the patient off the respirator).
Sure, and asserts that fire regularly are still very bad even if the code is not safety-critical. The point of having an assert is to make the error condition more recognizable. I presume that safety-critical devices also have to ensure that memory violations do not halt the system, for example, and an assert(p != NULL) would not really do here: you need full checking that the pointer is valid before attempting to use it [or make sure pointers are never invalid, which of course is the better approach].
--
Mats
In high-criticality code, the more usual approach is to ensure pointers are never invalid in the first place. In practice, that means a few constraints (e.g. dynamic memory allocation is rarely used in a high-criticality application except in a clearly defined startup or initialisation phase, reinitialisation virtually never occurs, etc.), as these make it easier to analyse the code and gather evidence that unwanted behaviours do not occur (or are acceptably unlikely to occur).
Clearly that requires a lot more analysis (both of individual functions and of how the system as a whole uses them). But a side effect is that assertions are generally unneeded: the system design prevents low-level assertions from failing anyway.