Thread: bitmasks

  1. #1
    Registered User
    Join Date
    Jun 2004
    Posts
    201

    bitmasks

    I have a series of defines that represent all-bits-set values for different types.

    Code:
    #define ALLBITS_8  ( 0xff )
    #define ALLBITS_16 ( 0xffff )
    #define ALLBITS_32 ( 0xffffffffL )

    Two questions:

    1. In the code these defines are also used as signed values; is that portable, and if not, what problems could it cause?

    2. Wouldn't it be better to define these macros like this:
    Code:
    #define ALLBITS_8  ((unsigned char)~0)
    #define ALLBITS_16 ((unsigned short)~0)
    #define ALLBITS_32 ((unsigned long)~0)
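
    To make it concrete, here is roughly what I expect both versions to come out to (a sketch, assuming 16-bit short, 32-bit int, and two's complement):

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* 0xffff has type int here (hex constants are never negative;   */
        /* they take the first type that can represent the value).       */
        printf("%x\n", 0xffff);              /* ffff */

        /* ~0 is -1 on two's complement; converting -1 to unsigned short */
        /* gives USHRT_MAX, i.e. all 16 bits set.                        */
        printf("%x\n", (unsigned short)~0);  /* ffff */
        return 0;
    }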

  2. #2
    Thantos
    & the hat of GPL slaying
    Join Date
    Sep 2001
    Posts
    5,681
    What would happen in your version if short were wider than 16 bits?
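
    A sketch of the failure, faking a wide short with unsigned long (hypothetical platform, illustrative values):

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* If short were wider than 16 bits, (unsigned short)~0 would   */
        /* set every bit of that wider type.  Simulated here with long: */
        unsigned long allbits_16 = ~0UL;   /* all bits of long, not 16  */
        unsigned long value = 0x12345UL;

        printf("%lx\n", 0xffffUL & value);    /* 2345  - the mask you wanted */
        printf("%lx\n", allbits_16 & value);  /* 12345 - the mask you got    */
        return 0;
    }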

  3. #3
    Registered User
    Join Date
    Jun 2004
    Posts
    201
    Quote Originally Posted by Thantos
    What would happen in your version if short were wider than 16 bits?
    The macro would have to be changed to whatever type is 16 bits wide on that platform.

    I was thinking more about the fact that 0xffff is used as a SIGNED value. Could that cause problems on some platforms when it is converted to an unsigned type?
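
    For example, the kind of use I mean (a sketch, assuming 32-bit int):

    Code:
    #include <stdio.h>

    int main(void)
    {
        int mask = 0xffff;        /* the define used in a signed context */
        unsigned short u = mask;  /* then converted to an unsigned type  */

        printf("%d %u\n", mask, u);   /* 65535 65535 here - but is that guaranteed? */
        return 0;
    }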

  4. #4
    Registered User
    Join Date
    Sep 2004
    Posts
    719
    That certainly depends on your implementation. A 16-bit integer is a 16-bit integer, signed or unsigned; your masking results will still be the same.

    Pretend we're working with 4-bit integers. That's not possible without bitfields, but just pretend...
    if we have
    Code:
    __int4 n = 1000b;            /* signed, bit pattern 1000   */
    unsigned __int4 m = 1000b;   /* unsigned, same bit pattern */

    __int4 x = 1000b;
    unsigned __int4 y = 1000b;

    then
    (n & m & x & y) != 0 is always true

    Now, if we have
    Code:
    __int4 x = 1000b;   /* only the top (sign) bit set */
    __int4 y = random;  /* some unknown value          */
    and
    (x & y) != 0
    we know that y is negative.

    But in this case
    Code:
    __int4 x = 1000b;           /* only the top bit set */
    unsigned __int4 y = random; /* some unknown value   */
    and
    (x & y) != 0
    we know that y is greater than 2^3 - 1.
    Otherwise, if it is zero, y <= 2^3 - 1.
    It's not so much the values you get; it's what the values you get mean.
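
    (If you want to try the 4-bit case for real, bitfields will do it. A sketch, assuming two's complement:)

    Code:
    #include <stdio.h>

    struct nibbles {
        signed int   s : 4;   /* 4-bit signed field, range -8..7   */
        unsigned int u : 4;   /* 4-bit unsigned field, range 0..15 */
    };

    int main(void)
    {
        struct nibbles b;
        b.s = -8;   /* bit pattern 1000 */
        b.u = 8;    /* bit pattern 1000 */

        /* Same bits, different meaning: */
        printf("s = %d, u = %u\n", b.s, b.u);   /* s = -8, u = 8 */

        /* The top-bit test works on either one: */
        printf("%d %d\n", (b.s & 0x8) != 0, (b.u & 0x8) != 0);   /* 1 1 */
        return 0;
    }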

    The portability issues arise when you assume the size of integers. That's why it's best to use fixed-width types (C99's intN_t from <stdint.h>, or a compiler-specific __intN) when declaring "size critical" integers, to ensure they are the size you need them to be on every platform.
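
    (If your compiler has C99's <stdint.h>, the standard spelling of the same idea:)

    Code:
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t mask = UINT16_MAX;     /* exactly 16 bits, all set      */
        printf("%" PRIX16 "\n", mask);  /* FFFF wherever uint16_t exists */
        return 0;
    }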
    Last edited by misplaced; 04-18-2005 at 05:26 AM.
    i seem to have GCC 3.3.4
    But how do i start it?
    I dont have a menu for it or anything.

  5. #5
    Thantos
    & the hat of GPL slaying
    Join Date
    Sep 2001
    Posts
    5,681
    Quote Originally Posted by Laserve
    The macro would have to be changed to whatever type is 16 bits wide on that platform.

    I was thinking more about the fact that 0xffff is used as a SIGNED value. Could that cause problems on some platforms when it is converted to an unsigned type?
    And that's where your pretty little macros fail. In theory I should be able to take any code from platform to platform and compile and run it without modification.
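
    If you have a C99 compiler, <stdint.h> gets you that without per-platform edits; something like:

    Code:
    #include <stdint.h>

    /* Same meaning on every platform that provides the exact-width types: */
    #define ALLBITS_8  UINT8_MAX
    #define ALLBITS_16 UINT16_MAX
    #define ALLBITS_32 UINT32_MAX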

  6. #6
    Salem
    and the hat of int overfl
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,662
    > #define ALLBITS_8 ((unsigned char)~0)
    That would depend on whether the user was after just 8 bits, or all the bits in the underlying data type.

    Whilst most common desktop machines have an 8-bit unsigned char, some embedded DSP processors, for example, have a 32-bit unsigned char.

    0xFF and ~0 would then obviously have different answers.
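
    A quick way to see which kind of machine you're on (a sketch):

    Code:
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        /* Both print ff on an 8-bit-char desktop; on a 32-bit-char */
        /* DSP the second line would print ffffffff instead.        */
        printf("%x\n", (unsigned)(unsigned char)0xFF);
        printf("%x\n", (unsigned)(unsigned char)~0);
        return 0;
    }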
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.
