According to you and quzah, since no interpretation was specifically declared and C apparently wouldn't handle those values natively, they are meaningless, shouldn't be discussed on a C programming forum, and any guess at a given interpretation should be met with contempt.

Quote:
My gift for all you notation liberals in here.
Which one of those numbers was binary again?
Look, let's get this over with! Math says that every number has a base, which should be written as a subscript right after that number. Because base 10 is the most used, the subscript (10) is omitted. Therefore, if we encounter a number written with digits 0-9 and no base subscript, its base is 10! And every base can have fractions!
Now, for C/C++. 10 is decimal, 010 is octal, 0x10 is hexadecimal! Where's the problem?
So, quzah, none of them is binary!
Why would you want to get into the situation where you have to manually analyze each floating point number to decide if it's going to fit right in your data type, or if it isn't going to be accurate enough?
Because you want to go to Mars.
Curiously enough, though, this argument only reinforces the need to observe binary representations in the context of how they are actually stored. Precision has a direct relationship with data types, and the way these numbers are stored has a direct impact on what data types we must choose, something the OP's notation completely hides from the programmer. There's really no benefit in representing binary in any way other than how it is stored. For any concrete use, that representation would then have to be converted again: from binary to binary. It doesn't seem useful or practical.
I'm still a bit shocked that there's an argument for things like 11001.101 on a programming forum as established as this one (especially when we aren't even told what notation is actually being used). But I shouldn't be surprised anymore. In a society where everything wants to be relativized, there's always someone wanting to take their chance at the few areas still dominated by formal rules, often oblivious to the fact that certain values or principles are lost. In short, anything goes.
Of course you're not going to be any more accurate than 46 meters using an int, either, unless you up the storage size of the int (since presumably you will just be using units of some negative power of 2 radians there anyway). You will just think you are, unless you pay attention to the results you get. (I was having fun the other day breaking calculators on the iPad that were perfectly willing to show you 50-digit numbers as though they were exact integers, even though many of the digits were wrong because their internal data structure wasn't up to the task.) Using int is exactly the same as using fixed-point, really; you're just hiding the fixed point in the unit definition.
As Sang-drax said, maybe the teacher was trying to teach representation to show that some numbers are inherently impossible to represent exactly. That would be a good lead-in to why, if you add 0.1 ten times, the result won't exactly equal 1 in practice.
This thread has been fun!
Of course one uses floating point for critical applications. One just has to understand the limitations, if any manifest themselves. In short: error analysis.
I've heard your argument used in banks... that "fixed-point" notation should be used for all money calculations. I wonder how they ever manage to do interest calculations, then.
I guess we'll all just scrap division too, because it can produce non-integer results.