Hi there,

I am very new to C programming and most probably have thousands of questions to ask.
For now, I have a question on how to convert binary to decimal: the binary number comes from standard input, the decimal result goes to standard output, and an error message is printed where appropriate.

1) Do I have to define the length of the binary string (e.g. #define BSIZE 8)? If I do that, does it mean the standard input has to be exactly that many digits (e.g. 10010001), so I can't input something shorter (e.g. 101)?
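For example, is the idea to read the digits into a buffer of up to BSIZE characters, something like the sketch below? (I am only guessing at fgets here, and the names are just mine.)

#include <stdio.h>
#include <string.h>

#define BSIZE 8   /* maximum number of binary digits */

int main(void)
{
    char bits[BSIZE + 2];                 /* digits + '\n' + '\0' */

    if (fgets(bits, sizeof bits, stdin) != NULL) {
        bits[strcspn(bits, "\n")] = '\0'; /* drop the trailing newline */
        printf("read %zu digit(s): %s\n", strlen(bits), bits);
    }
    return 0;
}

If that is right, then a shorter input like 101 would still be accepted, and BSIZE would only be an upper limit.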

2) How do I know when to use a char and when to use an int? As far as I understand, a char holds a single character value (e.g. 'A') and is printed with %c, while an int holds an integer value and is printed with %d.
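From what I have read so far, I think the difference looks something like this (is this right?):

#include <stdio.h>

int main(void)
{
    char letter = 'A';   /* a single character: print with %c */
    int count = 5;       /* an integer value:   print with %d */

    printf("letter = %c, count = %d\n", letter, count);
    return 0;
}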

3) I know how to convert from binary to decimal in mathematical terms. However, I can't seem to work out (yet) how to turn that logic into code.
- in maths the binary digits have the weights 128, 64, 32, 16, 8, 4, 2, 1
- if I have to convert the binary number 101
- it would be 101 = 1*4 + 0*2 + 1*1 = 4 + 1 = 5

If I have to code it, would it look something like this (see my rough attempt in C after the list)?
- x = 0 (the binary number from standard input)
- a = 128 (the weight of the current binary digit)
- b = a / 2 (the next weight)
- x = b
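In actual C, I imagine it might be something like the sketch below. This is just my guess: I am assuming the input arrives as a string of '0' and '1' characters, and instead of starting from a fixed weight of 128 I double the running value for each digit, since an input like 101 has fewer than 8 digits.

#include <stdio.h>
#include <string.h>

#define BSIZE 8   /* maximum number of binary digits */

int main(void)
{
    char buf[BSIZE + 2];                  /* digits + '\n' + '\0' */
    int value = 0;

    if (fgets(buf, sizeof buf, stdin) == NULL)
        return 1;
    buf[strcspn(buf, "\n")] = '\0';       /* strip the trailing newline */

    for (size_t i = 0; buf[i] != '\0'; i++) {
        if (buf[i] != '0' && buf[i] != '1') {
            fprintf(stderr, "error: '%c' is not a binary digit\n", buf[i]);
            return 1;
        }
        value = value * 2 + (buf[i] - '0');   /* shift left, add the new digit */
    }

    printf("%d\n", value);
    return 0;
}

With input 101 this would go 0 -> 1 -> 2 -> 5, which matches the maths above, but I don't know if this is how it is normally done.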

But I am not sure any of that is right; it is all still a muddle to me.

I would really appreciate it if anyone could help clear away this mud of mine. Being a novice is never a good feeling.