Hey! Thanks for reading ;o)
I was writing my own header for OpenGL rotation, but had a lot of hiccups, so I moved all the code to a console program and just cout'd my calculations to see where the problems were occurring. I finally started to notice some of the weird things that floats do, and I don't like it. Basically, I take an object's current angle from a standard zero that I decided on (on the unit circle it is located at (1, 0, 0)).
From that angle I add the change in angle, then use the following formulas (degrees are converted to radians before the cos and sin calls; I found the math wouldn't work if I passed degrees):

float DeltaZ = CurrentAngle + NewAngle;                  // resulting angle, in radians
float Radius = DistanceFormula(VectorPos, VectorOrigin); // distance from the origin
VectorPos.X = cos(DeltaZ) * Radius;
VectorPos.Y = sin(DeltaZ) * Radius;

When cos and sin are called at angles that should produce exactly 0 (e.g., at 0 degrees the Y coordinate should be 0), the float instead comes out as a tiny nonzero value that prints in scientific notation, and for some reason comparisons against that value don't work consistently.
Using these functions I can never get X or Y to equal exactly 0; instead I end up with something like 3.42456328e-006. How can I stop the float from being displayed in scientific notation?
Second, if I tell it to print a table such as
Degrees   X Coordinate   Y Coordinate   Z Coordinate
0         1              0              0
and then set the degree increment to .001, for a little while the degrees go up as normal, but the longer the loop runs the more likely I am to end up with .001999 instead of .002, and once it gets to that point it keeps counting like that.
Knowing how important significant digits are in mathematical and scientific calculations, I'm wondering if there is a way to force these floats to hold ONLY a number of digits equal to the step size I'm using. So if I'm increasing by .001, I never want to hold more than 3 digits after the decimal point. Any suggestions would be greatly appreciated.
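Maybe something like the helper below is what I'm after (the roundTo name and the scale-round-unscale approach are just my own sketch)? Though as I understand it the result still won't be stored exactly, since most decimal fractions have no finite binary form:

```cpp
#include <cmath>

// Snap x to 'decimals' decimal places. The result is the float
// closest to the rounded decimal value; it is still stored in
// binary, so it won't be exactly .002 internally either.
float roundTo(float x, int decimals) {
    float scale = static_cast<float>(std::pow(10.0, decimals));
    return std::round(x * scale) / scale;
}
```

For example, roundTo(0.001999f, 3) snaps the drifted value back to the float nearest .002.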
Thanks for your time ;o).