# Thread: Converting 1100110.11 to base10

1. Since we're off in the weeds, here's how you convert 1100110.11 to base 10.

Shift left by 2, which is equivalent to multiplying by 4:

1100110.11 -> 110011011

Convert the (now point-less) value to decimal:

110011011b -> 411

Divide by the factor of 4 that was introduced above:

411/4 -> 102.75

Problem solved. You'd think people had never manipulated fixed-point numbers before.
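The three steps above can be sketched in a few lines of Python (a sketch only; the variable names are mine, and any base-2 integer parser works the same way):

```python
# Convert 1100110.11 (binary) to decimal by the shift-and-divide
# trick: drop the point (a left shift by 2, i.e. multiply by 4),
# convert the resulting integer, then divide the 4 back out.
bits = "1100110.11"
whole = bits.replace(".", "")                 # "110011011"
frac_bits = len(bits) - bits.index(".") - 1   # 2 fractional digits
value = int(whole, 2) / (2 ** frac_bits)      # 411 / 4
print(value)  # 102.75
```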

2. Originally Posted by tabstop
What am I saying? The mantissa is fixed point. Still has a (binary/radix) point, though.
No. It has an exponent. And the mantissa is an integral.

The conversion to decimal does not require you to represent a radix point in binary notation. You may for illustrative purposes (for readers of Wikipedia), but that doesn't make it a standard practice... nor, I will go as far as to say, a desirable one.

What would you do that for? You went to all that trouble of converting your IEEE-754 number into a radix-based binary notation and you didn't even start converting to decimal. That's two steps when you could have taken just one: convert mantissa and exponent to decimal, attach sign, and you have your decimal representation in scientific notation. How this could possibly be confusing is beyond me (and I'm not even a math person).

I suppose at the end of the day, whichever method you choose you'll end up getting there. On that I agree. But I personally fail to see the advantages of a floating point symbol in a binary representation. I think it is counter-intuitive, and definitely not a generalization of the vastly different method by which these values are actually stored.

3. For anyone who wants to represent real numbers in binary, the radix point is as advantageous as it is to anyone wanting to represent real numbers in decimal.

I fail to see why the fact that it's impossible for computers to represent some form of information in a certain way implies that representation in that certain way is useless, invalid, or incorrect. Computers use binary, binary doesn't use computers. Furthermore, binary defines binary. Computers may use only a subset of binary (the integers), but that doesn't redefine binary as only capable of representing integers.

4. A bit is supposed to be the smallest piece of information available. This is not just computers.

5. Originally Posted by User Name:
For anyone who wants to represent real numbers in binary, the radix point is as advantageous as it is to anyone wanting to represent real numbers in decimal.
And here I was making an effort to demonstrate why it isn't useful as a teaching mechanism...

>> I fail to see why the fact that it's impossible for computers to represent some form of information in a certain way implies that representation in that certain way is useless, invalid, or incorrect.

For that I would need to provide you with an analogy. And then we would start debating the analogy instead of the actual issue. I'll pass...

I think my words on the matter speak for themselves. I'd rather have you use them for your argument (contradict them by all means), instead of denying them without making an effort to disprove them. In other words, I get that you think this is a valid and useful representation. But you have failed to explain why.

>> that doesn't redefine binary as only capable of representing integers.

Isn't that something everyone agrees on?

6. Originally Posted by whiteflags
A bit is supposed to be the smallest piece of information available. This is not just computers.
I'll have to concede that argument. Still, this being a programming forum...

7. Originally Posted by whiteflags
A bit is supposed to be the smallest piece of information available. This is not just computers.
A liter is a fundamental unit of volume, but that doesn't mean you can't have 0.1 liters of something. Same goes for bits.

What might not be immediately obvious is that the bit as a unit of information is actually an arbitrary unit, just like the liter. We measure information in bits because we, humans, have designed computers to work on the basis of on-off (binary) states.

Ultimately, a bit of information is the amount of information needed to describe which of two possible values of a random variable actually occurs, if the variable is uniformly distributed. A good example is a coin toss. Assuming both sides are equally likely, it takes 1 bit to represent the outcome of a single toss.

But if the two sides are NOT equally likely, it turns out you can take advantage of this to represent the outcome using less than one bit of information. I'll omit the exact formula, but as an example, if you had a coin that came up heads 25% of the time and tails 75% of the time, you could actually encode the state of the coin using only about 0.811 bits.

Of course, you can't actually WRITE DOWN 0.811 bits; you need to use a whole number of digits. But that is purely a property of the positional writing system we use to represent numbers (regardless of what base is used); it doesn't actually mean that a fractional number of bits of information can't exist.

If you want to get a little crazier, you can measure information in units called 'nats', which is sort of like using base e for everything -- in this case, the base of the number system isn't even a rational number, much less an integer.
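The biased-coin figure is just the Shannon entropy, H = -Σ p·log₂(p). A quick sketch to check it (the helper name is mine):

```python
import math

# Shannon entropy in bits: H = -sum(p * log2(p)) over the outcomes.
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_bits = entropy_bits([0.25, 0.75])   # the biased coin: ~0.811 bits
h_nats = h_bits * math.log(2)         # the same quantity in nats (base e)
fair = entropy_bits([0.5, 0.5])       # a fair coin is exactly 1 bit
print(round(h_bits, 3), round(h_nats, 3), fair)  # 0.811 0.562 1.0
```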

8. No. It has an exponent. And the mantissa is an integral.
I had to read that a couple of times before I realized you meant integer. The mantissa of IEEE-754 numbers is not an integer. It is the fractional part of a number with an implied "1." prefix. In other words, it is a fixed point real number.

That's two steps when you could have just taken one: convert mantissa and exponent to decimal, attach sign, and you have your decimal representation in scientific notation.
I know what you are saying, but you are grossly oversimplifying what you are claiming. If nothing else, you'd have to remember the bias, and scientific notation calls for a decimal base.

Come to think of it, that doesn't account for subnormal numbers either, but I will not go there if you don't. I hate dealing with them.
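For the curious, here is a minimal sketch of what "remember the bias" amounts to when decoding a binary32 by hand -- normal numbers only, leaving out the subnormals I said I won't touch (the helper name is mine):

```python
import struct

# Decode a normal IEEE-754 binary32 by hand: 1 sign bit, 8 exponent
# bits stored with a bias of 127, and 23 fraction bits with an implied
# leading "1." in front. Subnormals, infinities, and NaNs are ignored.
def decode_float32(x):
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    sign = raw >> 31
    exponent = ((raw >> 23) & 0xFF) - 127          # remove the bias
    mantissa = 1.0 + (raw & 0x7FFFFF) / (1 << 23)  # implied "1." prefix
    return (-1) ** sign * mantissa * 2.0 ** exponent

print(decode_float32(102.75))  # 102.75
```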

And here I was doing an effort to demonstrate why it isn't useful as a teaching mechanism...
I find it useful as a teaching tool. Using that representation along with an explanation of dyadic rationals makes it easy to explain why mechanisms like IEEE-754 can't accurately represent whole categories of simple decimal fractions.

I'll omit the exact formula, but as an example, if you had a coin that came up heads 25% of the time and tails 75% of the time, you could actually encode the state of the coin using only about 0.811 bits.
For a lot of examples, grab a copy of "Data Compression: The Complete Reference".

Information theory is fun.

Soma

9. He said "A bit is supposed to be the smallest piece of information available." not "A bit is supposed to be the smallest number available." There is a significant difference.

EDIT: My point is => In computers we have pieces of information while in our heads we have numbers. Mixing those two produces meaningless arguments like this thread.

10. My point is => In computers we have pieces of information while in our heads we have numbers. Mixing those two produces meaningless arguments like this thread.
In other words, you've repeated what brewbuck just said. At least we have a consensus now.

11. Originally Posted by whiteflags
In other words, you've repeated what brewbuck just said. At least we have a consensus now.
No, because brewbuck is wrong. You can't compare a bit to a liter, because a liter is not the "smallest" of anything. There's something smaller than a liter, so you can't compare a liter to a bit and say they're similar. The best you're getting for a comparison to a liter is a byte.

"A liter is like a byte, because there's something smaller in each unit of measurement!"

That's not even close really, because it would be more like comparing a liter to a megabyte. Or a gigabyte. There is not a "smallest unit of liquid measure in existence" that you can substitute for "liter" in the comparison.

Say we make one up. The smallest unit of liquid measure is a "drop". A drop then would be like a bit. You cannot go sub-atomic on me and say there's a quark that's smaller than a drop of water, because for our comparison to be valid, we have to have something that is indivisible. THAT unit is the same thing as a bit. THAT unit, the one I'm calling a drop, would make his comparison valid.

You aren't comparing the "smallest possible expression of measurement" to a bit (which IS the smallest expression of measurement in its respective field), so your comparison is invalid. Bits can't have decimal points, because if you tried to divide bits smaller, then that new result would automatically be considered a bit. Why? Because a bit is the single smallest expressible form.

I honestly have no idea what you guys are trying to argue now, but you're wrong when you compare a liter to a bit.

Quzah.

12. Originally Posted by quzah
I honestly have no idea what you guys are trying to argue now, but you're wrong when you compare a liter to a bit.
The point brewbuck was making had merit, because he conceived a reason why one would like to add a decimal point to a binary notation. In his example, for the convenience of representing a fractional number of bits. He indeed demonstrated one such need when he discussed percentages and odds.

Of course, I disagree with that simply because we are not discussing the applicability of binary representation in non-computational areas. We are discussing binary representation as it concerns computers. And lo and behold, we already have a binary representation for floating and fixed point values.

Why we need to resort to another one when teaching the binary system -- and what advantage it can possibly have -- is what no one has explained yet.

13. Bits can't have decimal points, because if you tried to divide bits smaller, then that new result would automatically be considered a bit. Why? Because a bit is the single smallest expressible form.
O_o

Okay. I have a specific quantity of entropy here in my pocket. It has an expression of `N' over `M' bits for every `O' bits. I have only enough of this quantity to affect a single byte of information. I have only 3 bits of entropy. Do you seriously want me to start expressing this fractional quantity as it relates to the information in bits without a fractional portion? If so, what do we start calling real bits? You know, the ones that can fully represent a single '0' or '1' state? What do we call those now that we call this smaller measure bit?

My vote is for spoont. I like spoont. We'll have the bit, the byte (still 8 bits), so forth and so on, and the spoont which can represent an unknown but specific quantity of bits and also a single '0' or '1' state.

Awesome. I can't wait for teraspoont drives.

Soma

14. Originally Posted by phantomotap
O_o

Okay. I have a specific quantity of entropy here in my pocket. It has an expression of `N' over `M' bits for every `O' bits.
The problem is that the definition of a bit states that it is indivisible. Stop thinking of it as a number; it's not. But as for number representation... take an unsigned int, now tell me how you get below zero.

Here's a real world example: absolute zero. Go below that. Oh, wait, you can't. Because by definition, it's the absolutely lowest possible representation of temperature.

Why are you still confused?

Quzah.

15. Originally Posted by Mario F.
And lo and behold, we are already have a binary representation for floating and fixed point values.
Yes, and the original question was just a fixed-point question. The value 1100110.11 is just a Q2 fixed point value. We write down the dot to remember what the Q is. Nobody ever suggested that somewhere inside the computer there is a "dot" flying around.
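To make that concrete, here's a sketch of a Q2 value in Python (names are mine): the machine stores only an integer, and the dot is purely a convention about where we imagine it sitting.

```python
# Q2 fixed point: the hardware stores only the integer 0b110011011
# (411); "the low 2 bits are fractional" is a convention we keep in
# our heads, not a dot flying around in memory.
Q = 2
stored = 0b110011011              # 411 -- no dot anywhere
value = stored / (1 << Q)         # interpret: 411 / 4
print(value)  # 102.75

# Addition needs no special handling -- it is plain integer addition:
a, b = 0b101, 0b011               # 1.25 and 0.75 in Q2
print((a + b) / (1 << Q))  # 2.0
```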