How do I convert 1100110.11 to base 10? Help me!
Learn how to do it yourself.
Decimal and Binary Conversion Tool
You can use this tool to help you, too.
Sure there are. The digits to the right of the point represent 1/2 + 1/4 = 0.75.
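To make the place-value idea concrete, here's a quick Python sketch: digits left of the point are weighted by positive powers of 2, digits right of it by 1/2, 1/4, 1/8, and so on. The helper name `binary_to_decimal` is just my own choice for illustration.

```python
def binary_to_decimal(s: str) -> float:
    """Convert a binary string like '1100110.11' to its decimal value."""
    whole, _, frac = s.partition(".")
    # Whole part: ordinary base-2 integer conversion.
    value = int(whole, 2) if whole else 0
    # Fractional part: digit i after the point is worth 2**-i.
    for i, digit in enumerate(frac, start=1):
        value += int(digit) * 2 ** -i
    return value

print(binary_to_decimal("1100110.11"))  # 102.75
```

So the OP's number works out to 64 + 32 + 4 + 2 = 102 for the whole part, plus 0.5 + 0.25 = 0.75 for the fraction: 102.75.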
Really... when's the last time you saw a CPU register with a fraction of a bit in it?
Think carefully now... We're talking about binary *representations* of fractions, not actual fractions. One of the big reasons why the original XT and 386 machines didn't have floating point math chips was the difficulty of representing .375 as a binary number. In fact, it was all done in software back then; entire accounting packages were written to work in pennies for this exact reason.
No, but there are binary points, which serve the same purpose of separating the whole number portion from the fractional portion of a real number. The binary point is usually represented with a . just like the decimal point in base 10. And yes, that number could be in base 10. Presumably it's not though, since the OP asked that we help convert it to base 10.
Fractions of a bit don't exist, but bits representing fractional parts of numbers do. Hardware components that handle floating point numbers have been around since the XT and 386 days, in the form of a math coprocessor, like the 8087, 80387, et al. I would suspect it had more to do with fitting it all on a single chip die easily and cheaply enough rather than because "it's too difficult to implement". Nowadays, it's all on the CPU.
Actually it does.... Inside the computer it's all just On and Off... 1 and 0... There really is (for now) no other way to do this. The computer itself is a binary machine...
Yes, we can use software to produce a base 58 numbering system if we like... but inside the machine it's all 1s and 0s... The CPU register (memory, storage, etc.) does not know about the *translations* we apply to its contents... it only executes the code needed to enact them.
When you see the number 14 on your screen, the computer will see it internally as a binary number... The characters on the screen are merely a software enacted translation of a binary number.
And now, let's ask... how is that stored? ... Well shucks, it's a binary whole number too.
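You can see that character-vs-value distinction directly in Python; this is just a quick illustration of the point above, not anything from the original posts:

```python
n = 14
print(bin(n))                   # the value the machine actually computes with
print([ord(c) for c in "14"])   # the ASCII codes stored to *display* "14" on screen
```

`bin(14)` gives `0b1110`, while the string "14" on your screen is stored as the character codes 49 and 52, which have nothing to do with the value fourteen until software translates them.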
In fact the early math processors used to have 7 bytes for an integer value and another byte saying where to put the decimal place when displaying it... Both of these numbers are whole numbers...
Also think about the problems of rounding and imprecision on floating point units... 8.0 ends up being 7.999999999 ... because someplace there's a 1 bit ambiguity in the calculations... Why is that ambiguity there? Well, because there are no fractions of a bit.
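The classic demonstration of that kind of rounding drift (my example, not the poster's) is adding 0.1 to itself ten times; 0.1 has no finite binary representation, so the error accumulates:

```python
total = 0.0
for _ in range(10):
    total += 0.1   # 0.1 cannot be represented exactly in binary floating point

print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```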
I don't think anybody is disagreeing here, much. It is true that when you represent something inside a computer, you will have to choose some representation of that number (whether it's two's complement (for ints), or IEEE floats (for floats), etc.). But that representation inside the computer doesn't have anything to do with the fact that the number "11011.011_(2)" exists. Of course "11011.011_(2)" exists, and has the value of 27 and three-eighths in decimal (or 27.375), or 033.3 in octal, or 0x1B.6 in hex, and "1000.10101..._(3)" in ternary. The numbers exist.
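For what it's worth, you can check that all of those representations name the same number; a quick sketch using Python's `int` with an explicit base:

```python
# 11011.011 in binary: whole part plus fractional part over 2**3
val = int("11011", 2) + int("011", 2) / 2**3
print(val)  # 27.375

# Same value recovered from the hex form 0x1B.6 and the octal form 33.3
print(int("1B", 16) + int("6", 16) / 16)  # 27.375
print(int("33", 8) + int("3", 8) / 8)     # 27.375
```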
Why are you so stuck on this being tied to a computer? The OP just asked how to convert a binary number to decimal, nothing about the necessity for it to be stored in a computer (modern day or otherwise). Non-base10 number representations can exist in places other than a computer.
If you understand what you're doing, you're not learning anything.
It's a sequence of bits that aren't necessarily "whole numbers". In IEEE 754, you have a sign bit (not really a whole number, more of a "flag"), an exponent (admittedly, a signed integer) and a significand, which is only the fractional part of a scientific-notation-style number. It can be interpreted as a whole number if you want, but it's no more a whole number than .5, which has a 5 in it, which you could call a "whole number" if you ignore the decimal point. There's no bit in there to store the point, but it's as good as in there.
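You can pull those three fields apart yourself. A sketch using Python's `struct` module on the 32-bit single format (field widths 1/8/23 per IEEE 754): 27.375 is 1.1011011 × 2^4 in binary, so the stored exponent is the biased value 127 + 4 = 131.

```python
import struct

# Reinterpret the 32-bit float 27.375 as a raw unsigned integer.
bits = struct.unpack(">I", struct.pack(">f", 27.375))[0]

sign     = bits >> 31          # 1 bit
exponent = (bits >> 23) & 0xFF # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF     # 23 fraction bits (leading 1 is implicit)

print(sign, exponent, mantissa)
```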
Actually, that ambiguity is there because you only have a limited number of bits to represent a number, not because "there's no fraction of a bit". You have the same problems with decimal numbers, or any base, if you restrict how many digits you can use. 1/3 doesn't have a finite representation in base 2 or base 10, but it does in base 3.
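You can watch that happen by generating the digits of 1/3 after the point in each base; `frac_digits` below is a hypothetical helper of my own, using the usual multiply-and-take-the-integer-part method:

```python
def frac_digits(num: int, den: int, base: int, n: int) -> list[int]:
    """First n digits of num/den after the point, in the given base."""
    digits = []
    for _ in range(n):
        num *= base
        digits.append(num // den)  # next digit
        num %= den                 # carry the remainder forward
    return digits

print(frac_digits(1, 3, 10, 6))  # [3, 3, 3, 3, 3, 3] -- 0.333... repeats forever
print(frac_digits(1, 3, 2, 6))   # [0, 1, 0, 1, 0, 1] -- 0.010101... also repeats
print(frac_digits(1, 3, 3, 6))   # [1, 0, 0, 0, 0, 0] -- terminates: 0.1 in base 3
```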