Thread: int casting for unsigned chars

  1. #1
    Registered User
    Join Date
    May 2007

    int casting for unsigned chars

    Would it be a "wise" choice to use char variables to store very small integers (such as 0 to 99) for the sake of saving memory (1 byte as opposed to 2 for a short int), and then casting that value to an integer when I need to display it? Or is there more overhead involved in casting?

  2. #2
    Registered User
    Join Date
    Jan 2008
    Maybe it's a tradeoff? If you need space, use char; if you need speed, use int. I just don't worry about it and use int because C++ defaults to it. Same with double instead of float.

  3. #3
    Registered User
    Join Date
    Oct 2001
    An int is usually chosen to be the natural size of an integer on that machine, so if speed is important, using an int to store very small integers would likely be the best choice. Only if the target machine has limited memory, or one is creating a huge array, would it be sensible to use a char. And even in the case of an array, one would want to do timing tests to see whether using a char array actually made the code run faster.

  4. #4
    Salem
    and the hat of int overfl
    Join Date
    Aug 2001
    The edge of the known universe
    > Would it be a "wise" choice to use char variables to store very small integers
    > (such as 0 to 99) for the sake of saving memory
    Two cases where it might
    - you have millions of them to store, possibly hundreds of millions.
    - you're working on a PIC (or similar) with very limited amounts of memory, and everything counts.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  5. #5
    Kernel hacker
    Join Date
    Jul 2007
    Farncombe, Surrey, England
    The "cost" of using bytes vs. ints depends on the architecture. On some machines, the integer type is the ONLY type that can be directly manipulated by the processor - so any other type has to be implicitly converted to an int, and then back to the original type.

    On the other hand, most architectures support multiple data widths, and a cast to integer for display purposes is cheap [if it's necessary at all - a signed char is automatically promoted to int when passed to a variadic function like printf - but NOT when using cout]. Converting char to integer on, for example, x86 is a single instruction, and it's not a "difficult" instruction.

    I agree with Salem, you don't want to do this if you have arrays of a few thousand integers in a PC or similar - but if you have many megabytes in a PC, or if you use an embedded system, you can certainly gain some by this method.

    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
