Given the snippet shown here:

Code:
void delay40(char t0)
{
    while (t0 != 0)
    {
        delay_us(40);
        --t0;
    }
}

Why is type char used rather than type int for parameter t0?
> Why is type char used rather than type int for parameter t0?
On a lot of machines, it wouldn't make a bean of difference, but perhaps you're using something where it does matter.
Or as Elysia says, maybe it's a data range thing.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
You could ask the same thing about why unsigned short is usually used for TCP/IP port numbers: the only valid port numbers are 0-65535, which fits exactly into a 16-bit unsigned short.
delay_us is a built-in function that delays for the number of microseconds passed as its parameter.
So with the current code, you can delay a maximum of 127 (or 255, depending on whether char is signed or unsigned) times 40 microseconds, which is about 5 (or 10) milliseconds.
As to "why it's char", you'd really have to ask the person who came up with that code. As Salem suggests, it would also make some difference on particular types of machine, and none at all on others [e.g. on a PC-based system it makes no difference whatsoever].
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.