Robust error handling

This is a discussion on Robust error handling within the C Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Mar 2009
    Posts
    399

    Robust error handling

    I've been researching the best way to implement error handling in C programs, but I just don't have enough experience yet to evaluate the different alternatives.

    I have seen very few programs that try to mimic C++ exceptions using setjmp/longjmp, so the approach doesn't seem to be very popular. And I'm not sure how well a scheme like that plays with multithreaded code. One of the first Google results was this article, but I haven't read the whole thing yet: Exception Handling in C Without C++
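
    The kind of thing I mean looks roughly like this (the names are made up for illustration, and note that a single global jmp_buf like this is precisely what would NOT be thread safe):

```c
#include <setjmp.h>

/* A minimal sketch of the setjmp/longjmp "exception" idiom.
 * A single global jmp_buf is NOT thread safe; real code would
 * need a per-thread buffer (or a stack of them). */
static jmp_buf error_jmp;

static int parse_digit(int c)
{
    if (c < '0' || c > '9')
        longjmp(error_jmp, 1);   /* "throw" back to the setjmp point */
    return c - '0';
}

/* Returns the digit value, or -1 if parse_digit "threw". */
int try_parse_digit(int c)
{
    if (setjmp(error_jmp) != 0)  /* "catch": re-entered via longjmp */
        return -1;
    return parse_digit(c);
}
```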

    Most libraries, including the C standard library, seem to use a combination of in-band error indicators and a global error indicator (errno). I'm not sure if the C standard requires errno to be thread safe, but I've seen a couple of sources that hint that it's required by POSIX. Either way, if you want to extend the functionality of errno with your own error codes, you'll have to worry about making it thread safe, and thus a portability nightmare.

    This site is advocating that you shouldn't use in-band error indicators at all, and instead always have functions return error indicators only. That would make the error handling more thread safe, but on the other hand it would lead to more contrived code. For example, if you were to rewrite a simple function like strcmp this way, you would always need an additional out-parameter to hold the result.
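
    For example, a strcmp rewritten in that style might look like this (my_strcmp and the error codes are made-up names):

```c
#include <stddef.h>

/* Sketch of the "status return only" style: the comparison result
 * goes through an out-parameter, and the return value is reserved
 * exclusively for an error code (0 = success). */
enum { ERR_OK = 0, ERR_NULLPTR = 1 };

int my_strcmp(const char *s1, const char *s2, int *result)
{
    if (s1 == NULL || s2 == NULL || result == NULL)
        return ERR_NULLPTR;
    while (*s1 && *s1 == *s2) {
        s1++;
        s2++;
    }
    *result = (unsigned char)*s1 - (unsigned char)*s2;
    return ERR_OK;
}
```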

    Another question is if functions should always do as much error checking as they can, or leave some of it to the caller. For example, should the caller or function check if a parameter is a null pointer or not (in those cases when it's not allowed to be NULL)?

  2. #2
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Memloop View Post
    This site is advocating that you shouldn't use in-band error indicators at all, and instead always have functions return error indicators only. That would make the error handling more thread safe, but on the other hand it would lead to more contrived code. For example, if you were to rewrite a simple function like strcmp this way, you would need to always create an additional value to hold the result.
    Well, not really, since the return value of strcmp is always either 0 or positive, so you could use negative values for errors, which is common.

    Here's my $0.02: I agree with using the return value as much as possible. When that is not possible, use another method, such as errno; other libraries do this with their own error constants. With OpenGL, the error is latched internally -- you must actively poll for it:
    Code:
    err = glGetError();
    This seems awkward, but in fact it saves having to incorporate error checking automatically for everything. In reality, once your code is debugged and working, the need for error checking is reduced more or less to input validation. The same goes for return values: you do not REALLY need to check every single return value of every single call which COULD IN THEORY fail, if logic says failure is impossible anyway. I.e., "error checking" is mostly a debugging tool and should be limited in scope appropriately.
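
    A plain-C sketch of that same latched-error pattern, without any OpenGL (all names hypothetical):

```c
/* Operations record an error code in module-private state; the
 * caller polls for it only when it cares, like with glGetError(). */
enum { MYLIB_NO_ERROR = 0, MYLIB_BAD_VALUE = 1 };

static int mylib_last_error = MYLIB_NO_ERROR;

void mylib_set_speed(int speed)
{
    if (speed < 0) {
        mylib_last_error = MYLIB_BAD_VALUE;  /* record, don't report */
        return;
    }
    /* ... do the actual work ... */
}

/* Like glGetError(): returns the latched code and clears it. */
int mylib_get_error(void)
{
    int err = mylib_last_error;
    mylib_last_error = MYLIB_NO_ERROR;
    return err;
}
```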

    IMO that is C, you know, shoot yourself in the foot, learn how to handle a weapon better.

    I don't really see a need for a standardized system and I think it would be more than a little anal to try implementing one. In the end, you will just find yourself trying to apply your new universal method and recognizing that it really is better (tidier, more optimal, more streamlined) to work in a low level, context specific way rather than a top down, squeeze everything into my superclass kind of style.

    So error checking should vary with circumstance and not be made a matter of policy.
    Last edited by MK27; 12-19-2009 at 12:50 PM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  3. #3
    Registered User
    Join Date
    Mar 2009
    Posts
    399
    Quote Originally Posted by MK27 View Post
    Well, not really, since the return value of strcmp is always either 0 or positive, so you could use negative values for errors, which is common.
    strcmp can return negative values. From the man page:
    The strcmp() and strncmp() return an integer greater than, equal to, or less than 0, according as the string s1 is greater than, equal to, or less than the string s2.

  4. #4
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Memloop View Post
    strcmp can return negative values. From the man page:
    Oops! Okay. Well, this is a good place for errno then. Except, AFAICT, strcmp() does not provide any error checking, because the compiler will in fact catch a badly typed argument at compile time, and nothing else would constitute an error here. An example of how the need for this can be illusory.

    Vis-à-vis thread safety, I would assume that is what the GL model (polling a function to fetch errors) is intended to ensure.

    Again, these things only seem like hassles if you are determined to check every single function call you make, and leave this (absurdly paranoid) error checking in place permanently. If you just need to track down an issue, or leave some key checks in place, it is very simple and works well.

    Hopefully someone will be along to explain why I am out to lunch again
    Last edited by MK27; 12-19-2009 at 01:18 PM.

  5. #5
    Registered User
    Join Date
    Dec 2008
    Location
    Black River
    Posts
    128
    Quote Originally Posted by Memloop View Post
    Most libraries, including the C standard library, seem to use a combination of in-band error indicators and a global error indicator (errno). I'm not sure if it's required by the C standard that errno should be thread safe, but I've seen a couple of sources that hint that it's required by POSIX. Either way, if you want to extend the functionality of errno with your own error codes, you'll have to worry about making it thread safe and thus a portable nightmare.
    You can expect any reasonably modern implementation to have a thread-safe errno. It would be nearly worthless otherwise. In any case, the biggest problem in using errno is that the standard defines only a few error codes (EDOM, ERANGE and, since C99, EILSEQ), so if you wish to extend its functionality with new codes, you'll have to be careful that they don't clash with the ones already defined by the system.

    There is a simple way to define your own errno equivalent. It works for MSVC and gcc only, but those 2 cover like 90% of the used compilers. With this method, you can have a per-thread variable and easily define your error codes:

    Code:
    /* gcc: */
    extern __thread int my_errno;
    /* MSVC: */
    extern __declspec(thread) int my_errno;
    For any other compiler, if it's on an x86 and has inline assembler, you can emulate the above using a segment register.
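
    A tiny sketch of how such a my_errno could be used (gcc syntax; checked_double and the error codes are invented for the example):

```c
/* Thread-local errno equivalent, using the gcc __thread extension;
 * with MSVC the declaration would use __declspec(thread) instead.
 * Each thread gets its own copy, so no locking is needed. */
__thread int my_errno = 0;

enum { MYERR_NONE = 0, MYERR_RANGE = 1 };

/* Doubles x, recording an error code instead of returning one. */
int checked_double(int x)
{
    if (x > 1000) {          /* arbitrary limit for the sketch */
        my_errno = MYERR_RANGE;
        return 0;
    }
    return 2 * x;
}
```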

    This site is advocating that you shouldn't use in-band error indicators at all, and instead always have functions return error indicators only. That would make the error handling more thread safe, but on the other hand it would lead to more contrived code. For example, if you were to rewrite a simple function like strcmp this way, you would need to always create an additional value to hold the result.
    Yes, I prefer to handle errors that way myself. I don't think it's hard to come up with values that can be used as error codes. In your strcmp example, it's logical to assume that the return value is computed as the difference between the characters that compose the strings, and since a char is a single byte, there's no way that the expression "char1 - char2" can reach a value like SHRT_MIN or INT_MIN, so those could be used as error codes.
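
    A sketch of that idea, using INT_MIN as the out-of-band code (strcmp_or_err is a made-up name):

```c
#include <limits.h>
#include <stddef.h>

/* Since strcmp's result is derived from the difference of two
 * unsigned char values (range -255..255), it can never reach
 * INT_MIN, so INT_MIN is free to serve as an error code. */
int strcmp_or_err(const char *s1, const char *s2)
{
    if (s1 == NULL || s2 == NULL)
        return INT_MIN;                 /* error: bad argument */
    while (*s1 && *s1 == *s2) {
        s1++;
        s2++;
    }
    return (unsigned char)*s1 - (unsigned char)*s2;
}
```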

    Another question is if functions should always do as much error checking as they can, or leave some of it to the caller. For example, should the caller or function check if a parameter is a null pointer or not (in those cases when it's not allowed to be NULL)?
    I prefer to have the function verify the integrity of its arguments and return with an error code if something's wrong. It's made my life easier and has helped me with debugging. I can see why others would disagree, though.

  6. #6
    Registered User
    Join Date
    Mar 2009
    Posts
    399
    Quote Originally Posted by Ronix View Post
    Yes, I prefer to handle errors that way myself. I don't think it's hard to come up with values that can be used as error codes. In your strcmp example, it's logical to assume that the return value is computed as the difference between the characters that compose the strings, and given that chars are almost always one byte in size, there's no way that the expression "char1 - char2" can lead to a value of SHRT_MIN or INT_MIN, for example, so those could be used as error codes.
    It works out okay in a situation like this, but it doesn't work quite so well for functions like strtol where both LONG_MIN and LONG_MAX can be valid return values.
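
    The way the standard library itself disambiguates is through errno: clear it before the call, then check for ERANGE afterwards. Roughly (parse_long is a made-up wrapper):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* strtol returns LONG_MIN/LONG_MAX both for genuine conversions and
 * on overflow; errno == ERANGE is what tells the two cases apart.
 * Checking end != s additionally detects "no digits at all". */
long parse_long(const char *s, int *ok)
{
    char *end;
    long val;

    errno = 0;                  /* must clear: strtol only sets it */
    val = strtol(s, &end, 10);
    *ok = (errno != ERANGE) && (end != s);
    return val;
}
```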

    Quote Originally Posted by Ronix View Post
    I prefer to have the function verify the integrity of its arguments and return with an error code if something's wrong. It's made my life easier and has helped me with debugging. I can see why others would disagree, though.
    It can be a matter of efficiency. If you're writing a library that will be used in a computation-heavy context, it might make sense to leave the error handling to the caller. Especially if you have some function that is going to be called in a loop where the arguments don't change, so checking them on every iteration would add a lot of overhead. I think a valid compromise might be to use assert to validate arguments. At least that way you can optimize it away in release builds.
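
    For example, with assert the check exists in debug builds but compiles to nothing once NDEBUG is defined for the release build (count_char is just an illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Counts occurrences of c in s. The argument check costs nothing in
 * a release build compiled with -DNDEBUG, since assert expands to
 * ((void)0) there; the caller contract is still documented in code. */
size_t count_char(const char *s, char c)
{
    size_t n = 0;

    assert(s != NULL);       /* caller contract: s must not be NULL */
    for (; *s; s++)
        if (*s == c)
            n++;
    return n;
}
```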

  7. #7
    Registered User
    Join Date
    Dec 2008
    Location
    Black River
    Posts
    128
    Quote Originally Posted by Memloop View Post
    It works out okay in a situation like this, but it doesn't work quite so well for functions like strtol where both LONG_MIN and LONG_MAX can be valid return values.
    Yes, it's hard to come up with a proper scheme for some functions. I don't think there is a single approach you can apply universally. Usually, you'll need to rely on both errno (or a similar variable) and return values.
    In the case of a strtol-like function, you can make the char ** argument mandatory and set the pointer it refers to to NULL to signal an error, with the return value indicating exactly what happened: LONG_MIN and LONG_MAX could mean there was an overflow, and a value of 0 would mean that no conversion could be performed.
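
    A rough sketch of that scheme (my_strtol10 is a made-up name, positive base-10 numbers only):

```c
#include <limits.h>
#include <stddef.h>

/* strtol-like function where the char ** argument is mandatory:
 * *end == NULL signals an error, and the return value says which
 * kind (LONG_MAX = overflow, 0 = no conversion performed). */
long my_strtol10(const char *s, char **end)
{
    long val = 0;
    const char *p = s;

    while (*p >= '0' && *p <= '9') {
        if (val > (LONG_MAX - (*p - '0')) / 10) {
            *end = NULL;             /* error... */
            return LONG_MAX;         /* ...specifically, overflow */
        }
        val = val * 10 + (*p - '0');
        p++;
    }
    if (p == s) {
        *end = NULL;                 /* error... */
        return 0;                    /* ...no digits were consumed */
    }
    *end = (char *)p;                /* success: points past digits */
    return val;
}
```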

    It can be a matter of efficiency. If you're writing a library that will be used in a computation heavy context, it might make sense to leave the error handling to the caller. Especially if you have some function that is going to be in a loop where the arguments don't change, so it would add a lot of overhead to check them in every loop. I think a valid compromise might be to use assert to validate arguments. At least that way you can optimize away it in release builds.
    True. When error checking is expensive or undesirable, it may be better to make the caller responsible, and assert looks like an okay option. Still, I'd imagine that well-predicted if statements (those where the comparisons are predictable or heavily weighted) shouldn't slow down programs that much either way.


