Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
How many decimal places of precision do you want? Multiply everything by 10 for each decimal place of precision. Why exactly do you think none of those would function correctly if you made them work off of integers?
You don't actually need floating point numbers, as long as everything you store is multiplied by 10 for each digit of precision you want to mimic. The decimal point really doesn't do anything for you. You can divide or multiply everything by 10 as many times as you want and it still doesn't affect the outcome, provided you apply the same multiply / divide to both sides of the equation.
Quzah.
Hope is the first step on the road to disappointment.
I refuse to believe that you've never heard of checking for overflow.
Pick one: end up with faulty, unreliable floating point numbers, or check for overflow.
It's not like floating point numbers are somehow magic in a way integers can't handle. They lie so that they can fit more range than you would normally be able to fit in the same number of bits. There's a cost for that. It's called inaccuracy. If you need accuracy, you don't use floating point numbers. I'm not making this up. Even floats have to worry about overflow. They're not infinite!
Last edited by quzah; 02-08-2011 at 06:47 PM.
[EDIT: Whatever. Removed for being a potshot]
Although if you think needing to represent numbers 10 orders of magnitude apart (which is what you get with 32 bits) in the same data type is arcane, abstruse, or just plain shouldn't be allowed, then I guess you can get away with it.
Last edited by tabstop; 02-08-2011 at 07:17 PM.
I didn't say it shouldn't be allowed. I said you have to expect errors, and that my brain doesn't care to work in a way where I could get myself into a situation where I needed to manually figure out, in my head, whether a number was going to be valid in a floating point system or not. I just can't see it being useful for me to know how the inner workings of a float actually work.
If you are in some situation where you have to stop and manually calculate whether a number would be accurate in your data type (which is absolutely absurd; I'll say why here in a second), then you've got yourself into a situation you shouldn't be in.
If you're going to the moon, and you know the data type you have has a chance that it will auto-round some value that you can't afford to have rounded, then don't use that data type.
Now, to the absurdity I alluded to. There's no possible way that for a program, a Mars simulator or whatever it is you're talking about here (you being the generic you, don't get huffy), YOU could manually figure out by hand whether every single possible calculation it could encounter along the way would be rounded, or truncated, or whatever happens inside a float. That's absurd and impossible. You would have to plug in an infinite number of calculations of trajectory, yaw, spin, and whatever other possible thing might come across your flight before you could realistically "analyze the number's binary value", or whatever the hell was said, to see if it would work right in your float.
That's a preposterous statement.