Thread: INT_MIN and DBL_MIN

  1. #1
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229

    INT_MIN and DBL_MIN

    Code:
    #include <iostream>
    #include <climits>
    #include <cfloat>
    
    int main() {
            std::cout << INT_MIN << std::endl; //-2147483648
            std::cout << DBL_MIN << std::endl; //2.22507e-308
    }
    That doesn't make sense to me (DBL_MIN is positive while INT_MIN is negative). Anyone care to explain the design decision here?

    Thanks

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Yes, that's the way those are defined. INT_MIN is the "lowest (most negative)" value, whilst DBL_MIN is the "smallest positive" value - if you want the lowest (most negative) double, you need -DBL_MAX.
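
    For illustration, a quick sketch - the printed values assume a typical IEEE 754 double, so exact digits may differ by platform:

    Code:
    #include <iostream>
    #include <cfloat>
    
    int main() {
            std::cout << DBL_MIN << std::endl;   // smallest positive normalised value, e.g. 2.22507e-308
            std::cout << DBL_MAX << std::endl;   // largest finite value, e.g. 1.79769e+308
            std::cout << -DBL_MAX << std::endl;  // lowest (most negative) value, e.g. -1.79769e+308
    }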

    Don't ask me why this is...

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    Ah, I see, thanks for clarifying. Strange stuff I must say...

  4. #4
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    It's because an INT_MIN defined the way DBL_MIN is (smallest positive value) doesn't make sense - it would always be 1 - and a DBL_MIN defined the way INT_MIN is (most negative value) would be redundant, since all known floating point representations are symmetric and -DBL_MAX already gives you that. The value DBL_MIN actually provides (the smallest positive normalised value) is the one that genuinely needs its own constant.

    The members of std::numeric_limits are somewhat more consistent.
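
    For instance, a rough sketch of the numeric_limits way of spelling the same values (note that numeric_limits<double>::min() still follows DBL_MIN here; the most negative double is still spelled -max()):

    Code:
    #include <iostream>
    #include <limits>
    
    int main() {
            // For int, min() really is the most negative value.
            std::cout << std::numeric_limits<int>::min() << std::endl;      // -2147483648 with 32-bit int
            // For double, min() is the smallest positive normalised value...
            std::cout << std::numeric_limits<double>::min() << std::endl;   // 2.22507e-308
            // ...and the most negative value is -max().
            std::cout << -std::numeric_limits<double>::max() << std::endl;  // -1.79769e+308
    }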
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  5. #5
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    The reason is based on different representations of integral and floating point types.

    For signed integral types the minimum value is not necessarily equal to -1 times the maximum value.
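
    For example, on the (very common, but not required) two's complement representation with a 32-bit int:

    Code:
    #include <iostream>
    #include <climits>
    
    int main() {
            std::cout << INT_MAX << std::endl;   //  2147483647
            std::cout << INT_MIN << std::endl;   // -2147483648, i.e. -INT_MAX - 1 rather than -INT_MAX
    }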

    Floating point types are notionally represented as sign * mantissa * base**exponent (where x**y denotes x raised to the power of y; the base is 2 on most implementations). Each of the fields (sign, mantissa, exponent) is represented separately with a discrete set of values. One consequence of this is that the representable values are symmetric about zero (i.e. the minimum value is -1 times the maximum value). Another consequence is that floating point variables do not represent all real values: they can only represent a discrete set of values. The smallest normalised non-zero value that can be represented (e.g. DBL_MIN) is therefore important numerically. I'll leave it as an exercise to find out why.
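
    As a nudge towards that exercise, a small illustration, assuming IEEE 754 doubles (below DBL_MIN you get denormalised values, which lose precision before finally underflowing to zero):

    Code:
    #include <iostream>
    #include <cfloat>
    
    int main() {
            std::cout << DBL_MIN << std::endl;         // 2.22507e-308, smallest normalised positive double
            std::cout << DBL_MIN / 2 << std::endl;     // 1.11254e-308, still non-zero but denormalised
            std::cout << DBL_MIN / 1e20 << std::endl;  // 0 - underflows completely
    }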

  6. #6
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    I see. I understand that an INT_MIN defined like DBL_MIN would just be 1, and a DBL_MIN defined like INT_MIN would be redundant with -DBL_MAX. However, they could have at least used different names to eliminate the confusion? (INT_MIN and DBL_SMALLEST, for example)

    Also, is it defined in the standard that floating point must be symmetrical? Is it possible for a very primitive implementation to implement "float"s as fixed point numbers? (and therefore overflow like integral types)

    As for DBL_MIN, I guess it could be used as the threshold for comparisons to zero?

  7. #7
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    The requirements of the C standard imply that the floating point representation must be an exponent-mantissa model. I'm not quite sure that it must be symmetric, but I feel that this implication is there as well. See in particular 5.2.4.2.2 of C99.

    Not sure about the C++ standard, but it generally inherits this low-level stuff from C. Also, the constants from <climits> and <cfloat> are defined by reference to the C standard.
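
    For reference, 5.2.4.2.2 describes a value as sign * (p-digit base-b mantissa) * b**exponent, and the parameters of that model are exposed through <cfloat>, so you can inspect what a given implementation uses (the commented values assume IEEE 754 doubles):

    Code:
    #include <iostream>
    #include <cfloat>
    
    int main() {
            std::cout << FLT_RADIX << std::endl;     // b, the base - 2 on virtually all current hardware
            std::cout << DBL_MANT_DIG << std::endl;  // p, number of base-b mantissa digits (53)
            std::cout << DBL_MIN_EXP << std::endl;   // minimum exponent (-1021)
            std::cout << DBL_MAX_EXP << std::endl;   // maximum exponent (1024)
    }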
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  8. #8
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by CornedBee View Post
    The requirements of the C standard imply that the floating point representation must be an exponent-mantissa model. I'm not quite sure that it must be symmetric, but I feel that this implication is there as well. See in particular 5.2.4.2.2 of C99.
    The standard does not specifically require that the floating point representation be symmetric. However, given the nature of the representation, it is a predictable consequence. I've yet to encounter an example where it is not the case, but would be happy (actually intrigued, should there be some specific reason for it) if someone were to identify a counter-example.
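
    One quick sanity check on a given implementation: if std::numeric_limits<double>::is_iec559 reports true, the doubles are IEEE 754, and IEEE 754 is symmetric by construction. A minimal sketch:

    Code:
    #include <iostream>
    #include <limits>
    
    int main() {
            // true on practically every current implementation (IEEE 754 doubles),
            // in which case the lowest value is exactly -DBL_MAX.
            std::cout << std::boolalpha
                      << std::numeric_limits<double>::is_iec559 << std::endl;
    }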
