# Converting binary string to decimal

• 03-29-2009
Sharke
Converting binary string to decimal
The following was given as an example of how to convert a binary string to an int.

Code:

```
int bstr_to_dec(const char *str)
{
    int val = 0;
    while (*str != '\0')
        val = 2 * val + (*str++ - '0');
    return val;
}
```
I have no trouble going through it step by step and seeing that it does indeed work, but I'm having trouble understanding why it works and what the principle behind it is. Can anyone explain?
• 03-29-2009
vart
1. converting the char '0' or '1' to an int is done like (ch - '0'), which gives you the int 0 or 1

2. as you know, a decimal number like abcd can be written as d + 10*(c + 10*(b + 10*a))

the same goes for binary numbers, just with 2 used instead of 10
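To see vart's factored form at work, here is a small sketch (not from the thread) tracing the original loop on the string "1011". The comment shows how each iteration doubles the value accumulated so far and adds the next digit, which is exactly the nested form evaluated from the outside in:

```c
/* Same loop as in the original post. For "1011" the running value goes
   0 -> 1 -> 2 -> 5 -> 11: each step doubles what we have so far and
   adds the next digit, i.e. 2*(2*(2*1 + 0) + 1) + 1 == 11. */
int bstr_to_dec(const char *str)
{
    int val = 0;
    while (*str != '\0')
        val = 2 * val + (*str++ - '0');
    return val;
}
```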
• 03-30-2009
iMalc
Perhaps it would save time if you could say which bits are most confusing for you?
E.g. Does the multiply by 2 make sense? Would it help to know that it is the same as (val << 1), a shift left by 1 bit?
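To make iMalc's point concrete, here is a variant (the name `bstr_to_dec_shift` is my own, not from the thread) that replaces the multiplication by 2 with a left shift; for non-negative values the two are equivalent, and OR-ing in the low bit does the same job as adding the digit:

```c
/* Shift-based version: (val << 1) doubles val exactly as 2 * val does,
   and since the shifted-in low bit is 0, OR-ing in the digit (0 or 1)
   is the same as adding it. */
int bstr_to_dec_shift(const char *str)
{
    int val = 0;
    while (*str != '\0')
        val = (val << 1) | (*str++ - '0');
    return val;
}
```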
• 03-30-2009
Sharke
Quote:

Originally Posted by iMalc
Perhaps it would save time if you could say which bits are most confusing for you?
E.g. Does the multiply by 2 make sense? Would it help to know that it is the same as (val << 1), a shift left by 1 bit?

I guess I'm just having trouble thinking about the idea of a number represented in the form that vart gives, i.e. d + 10*(c + 10*(b + 10*a)) for base 10. The only way I've really thought about it is (a * 10 pow 3) + (b * 10 pow 2) + (c * 10) + d, and when I've written a little binary to decimal function myself in the past, I used that method, starting from the last digit and working my way back in this fashion. I know you're going to tell me that the first expression is exactly the same as my version, but being quite mathematically rusty, it's not immediately apparent to me why.
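The two forms can be checked against each other directly. This sketch (the helper names are mine, chosen for illustration) computes the four-digit number abcd both ways; multiplying out the nested 10s in the factored form moves each digit into its place value, giving 1000a + 100b + 10c + d:

```c
/* Factored (nested) form: d + 10*(c + 10*(b + 10*a)).
   Distributing the 10s: 10*(b + 10*a) = 10b + 100a, then
   10*(c + 10b + 100a) = 10c + 100b + 1000a, plus d. */
int factored_abcd(int a, int b, int c, int d)
{
    return d + 10 * (c + 10 * (b + 10 * a));
}

/* Positional form: each digit times its power of 10. */
int positional_abcd(int a, int b, int c, int d)
{
    return a * 1000 + b * 100 + c * 10 + d;
}
```

For example, with the digits 7, 3, 9, 2, both functions yield 7392.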