# Thread: Converting Bytes to Numbers

1. I am reading 4 individual bytes via a serial port. The 4 bytes combined represent a number in decimal. How do I calculate the number in code? I can do it on paper...

For example, I read the following 4 bytes.

0x00 (0)
0x00 (0)
0x0E (14)
0x1C (28)

In reality this represents the number 0x00000E1C (or 3612).

I know that. How do I get the computer to calculate that for me, other than a long drawn-out separation of the hex digits using strings, etc.?

2. Some bitwise operations would make sense, though you could also use multiplication and addition:
Code:
```c
#include <stdio.h>

int main(void)
{
    unsigned char bytes[] = {0x00, 0x00, 0x0E, 0x1C};
    /* Shift each byte into position and OR them together.
       The casts keep the shifts well-defined even when a
       byte's high bit is set (plain int would be used otherwise). */
    unsigned int number = ((unsigned int)bytes[0] << 24)
                        | ((unsigned int)bytes[1] << 16)
                        | ((unsigned int)bytes[2] << 8)
                        | bytes[3];
    printf("%u\n", number);
    return 0;
}
```

3. Wow, that looks simple... I had considered using the bitwise operators but always thought that if you did << too many times you'd get 0, because you push the value bits outside the space...

1111 << 1 => 1110
1111 << 2 => 1100 etc...

I guess I was mistaken...

Thanks laserlight.

4. I had considered using the bitwise operators but always thought that if you did << too many times you'd get 0, because you push the value bits outside the space...
Speaking of that, I made a reasonable assumption by writing 24, 16 and 8. It may be clearer and somewhat more portable (but I doubt that that is a concern here) to change those magic numbers to (3 * CHAR_BIT), (2 * CHAR_BIT) and CHAR_BIT respectively.

5. Originally Posted by laserlight
Speaking of that, I made a reasonable assumption by writing 24, 16 and 8. It may be clearer and somewhat more portable (but I doubt that that is a concern here) to change those magic numbers to (3 * CHAR_BIT), (2 * CHAR_BIT) and CHAR_BIT respectively.
I kind of disagree... The serial port's concept of a "byte" doesn't necessarily relate to the CPU's concept. Presumably you should already know the bitness of a byte coming from the serial port, and use that. So you could still use a macro, but CHAR_BIT is probably not the right one. Maybe a locally defined BITS_PER_BYTE or something like that.

6. Yeah, I gathered that... in this case it doesn't matter, but since I'll be using this code elsewhere, I've added that...

And it works great... Thanks!

7. Originally Posted by brewbuck
Maybe a locally defined BITS_PER_BYTE or something like that.
Yeah, that would be good too, the point being to replace the magic numbers with something readable.

8. Originally Posted by laserlight
Yeah, that would be good too, the point being to replace the magic numbers with something readable.
Of course, anybody accustomed to this kind of code would immediately understand what those values mean...

9. Yeah, but it's those who aren't accustomed to this kind of code that you have to allow for.