Thread: type char vs type int

  1. #1
    Registered User
    Join Date
    Jul 2004
    Posts
    222

    type char vs type int

    Given the snippet shown here:

    Code:
    void delay40(char t0) {
       while (t0 != 0) {
          delay_us(40);
          --t0;
       }
    }
    Why is type char used rather than type int for parameter t0?

  2. #2
    C++まいる!Cをこわせ! (roughly: "Here comes C++! Break C!")
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Perhaps because the numbers needn't be larger than 255?
    Dunno what delay_us is, so I can't say more.
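    Elysia's range point can be sketched in a few lines. This is not from the thread — it's a minimal standalone example (the helper name show is made up) of how a count larger than 255 silently wraps when passed through an unsigned char parameter, per C's unsigned conversion rules:

    ```c
    #include <stdio.h>

    /* Hypothetical helper (not from the thread): takes the same kind of
       parameter as delay40, but just reports how many iterations it would run. */
    static void show(unsigned char t0)
    {
        printf("loop would run %d times\n", t0);  /* t0 promotes to int */
    }

    int main(void)
    {
        show(200);  /* fits in unsigned char: 200 iterations */
        show(300);  /* 300 mod 256 = 44: the count silently wraps */
        return 0;
    }
    ```

    If char is signed on the target, converting an out-of-range value is implementation-defined instead, which is one reason to distinguish a 127 ceiling from a 255 one.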

  3. #3
    and the hat of int overfl
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    > Why is type char used rather than type int for parameter t0?
    On a lot of machines, it wouldn't make a bean of difference, but perhaps you're using something where it does matter.

    Or as Elysia says, maybe it's a data range thing.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  4. #4
    and the hat of sweating
    Join Date
    Aug 2007
    Location
    Toronto, ON
    Posts
    3,545
    You could ask the same thing about why unsigned short is usually used for TCP/IP port numbers. The reason is that the only valid port numbers are 0-65535 which perfectly fits into an unsigned short.
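    To make that concrete, here's a small sketch (not from the thread) showing that a 16-bit unsigned type covers exactly the valid port range 0-65535:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t port = 65535;  /* largest valid TCP/UDP port number */
        printf("max port: %u\n", (unsigned)port);
        printf("uint16_t is %zu byte(s)\n", sizeof(uint16_t));
        return 0;
    }
    ```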

  5. #5
    Registered User
    Join Date
    Jul 2004
    Posts
    222
    delay_us is a built-in function that delays for the number of microseconds passed in its parameter.

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by stanlvw View Post
    delay_us is a built-in function that delays for the number of microseconds passed in its parameter.
    So with the current setting, you can delay a maximum of 127 (or 255, depending on whether char is signed or unsigned) times 40 microseconds, which comes to about 5 (or 10) milliseconds.

    As to why it's char: you'd really have to ask whoever wrote the code. As Salem suggests, it can make a difference on one type of machine and none at all on another [e.g. on a PC-based system it makes no difference].

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
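    Mats's arithmetic can be checked mechanically. A small sketch (not from the thread) using the standard <limits.h> bounds:

    ```c
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Longest possible delay = largest t0 value * 40 microseconds */
        printf("signed char:   %d * 40 us = %d us (~%d ms)\n",
               SCHAR_MAX, SCHAR_MAX * 40, SCHAR_MAX * 40 / 1000);
        printf("unsigned char: %d * 40 us = %d us (~%d ms)\n",
               UCHAR_MAX, UCHAR_MAX * 40, UCHAR_MAX * 40 / 1000);
        return 0;
    }
    ```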
