Thread: How to silence integer overflow warning

  1. #14 by awsdert (Registered User; joined Jan 2015; 1,735 posts)
    Quote Originally Posted by laserlight View Post
    If a standard integer type is supported, then its limits would be defined in <limits.h>. The width wouldn't be defined there, but that's what sizeof and possibly CHAR_BIT is for.
    That works at run time but not at compile time. Say you want to wrap stddef.h, stdint.h, and inttypes.h, and then make sure everything is still defined in the scenario where none of them are available (such as in the kernel, and I do plan on doing just that): you would need those defines to work in #if/#elif statements, which is why I'm trying to avoid things like sizeof. The tests I've been doing are just to make sure things work correctly in an environment where those limits happen to be the same and are thus easy to compare, so incorrect results are easy to spot.

    Quote Originally Posted by laserlight View Post
    By "preprocessor integer size/width" you mean "processor integer size/width", i.e., word size? If so, that's true, but I doubt your method of determining that is guaranteed to work: from what I see, with your _SIZEOF and SIZEOF macros applied to SIZEOF(~0u), you're just finding out the size of an unsigned int through a roundabout way, and then assuming that that's the word size.
    If I meant processor integer size/width, I would have said that. That would have to be detected through other macros anyway, since the compiler could itself be running in a different mode from what the CPU operates at optimally; if I remember rightly, ARM processors can do just that and run in x64/x86 modes. No, in this case I'm purely looking for the limits of preprocessor math. I've since renamed the integral defines to CCINT and added CCLONG and CCLLONG respectively; those values then become the defaults if INT_MAX and related macros are not defined (once again, kernel land, for example).

    Quote Originally Posted by laserlight View Post
    It is true that the C standard does say that int (and hence unsigned int) "has the natural size suggested by the architecture of the execution environment", but I don't think that's necessarily true in practice on modern 64-bit systems, where int might only be the natural step between short and long or long long.
    Already noticed that and started a workaround that looks like this for all sizes:
    Code:
    #ifndef UMAX_FOR_8BYTE
    #if UCCINT_MAX > UMAX_FOR_4BYTE
    #define UMAX_FOR_8BYTE UMAX_FOR_SIZE(UMAX_FOR_4BYTE)
    #else
    #define UMAX_FOR_8BYTE UCCLLONG_MAX
    #define UMAX_FOR_8BYTE_ASSUMED
    #endif
    #endif /* UMAX_FOR_8BYTE */
    Quote Originally Posted by laserlight View Post
    Furthermore, your code in post #2 ensures that MAX_FOR_8BYTE will not exceed ~0u, so for a 4-byte unsigned int, MAX_FOR_8BYTE will be UINT_MAX, even if unsigned long long is supported such that MAX_FOR_8BYTE should have been ULLONG_MAX. If I'm not wrong, what you're doing there is forcing a conversion from int to unsigned int, hence avoiding the signed integer overflow warnings, but in turn ensuring that integer types with arithmetic conversion ranks higher than unsigned int will never be involved.
    I also made a workaround for that:
    Code:
    /* (2^n - 1) -> (2^(2n) - 1): PRV * (PRV + 2) == (PRV + 1)^2 - 1 */
    #define UMAX_FOR_SIZE(PRV) \
    	(((PRV) >= UCCINT_MAX) ? (PRV) : ((PRV) * ((PRV) + 2)))
    
    /* (2^(n-1) - 1) -> (2^(2n-1) - 1): PRV * (2*PRV + 4) + 1 == 2*(PRV + 1)^2 - 1 */
    #define MAX_FOR_SIZE(PRV) \
    	(((PRV) >= CCINT_MAX) ? (PRV) : (((PRV) * ((PRV) + (PRV) + 4)) + 1))
    I renamed the original to the UMAX version, then used the programming mode of the system calculator to figure out how to get 7FFF from 7F. I have yet to check whether it still works with 3F etc., but I imagine it should, given it's only a difference in bit count, not bit format. The only case I need to watch out for probably has defines I can use already: the trit environment, where unlike bits, which have only 0s and 1s, a trit can also be -1 (or 2). I haven't actually tried programming for that environment yet, so I don't know the specifics.

    Quote Originally Posted by laserlight View Post
    It is true that <stdint.h> doesn't necessarily provide certain definitions... but if it doesn't, then you wouldn't be able to get them yourself either, because it would mean that, say, integer types with the given widths do not exist on that implementation. I was thinking that maybe you're trying to support old implementations that do not conform to C99 or later (at least IIRC, <stdint.h> was introduced in C99), but it has been two decades.


    I think there are still a few oddities among modern systems that have CHAR_BIT > 8, so you would need CHAR_BIT * sizeof(int) instead if you wish for it to work "equally well on all systems".
    Yeah, for those I plan to just do a processor check, define something with the word size/width (and possibly other sizes, depending on what this article says), and slap that in as the default instead. That will be done when I begin the porting process to those architectures and start testing in VirtualBox or something similar.
    Last edited by awsdert; 08-05-2020 at 01:27 AM.
