# trig

This is a discussion on trig within the A Brief History of Cprogramming.com forums, part of the Community Boards category; Okay....I have a couple questions about trigonometry functions... Now, we probably all know the acronym: SOHCAHTOA which is short for: ...

1. trig

Okay....I have a couple questions about trigonometry functions...

Now, we probably all know the acronym:

SOHCAHTOA

which is short for:

sin = opp / hyp, cos = adj / hyp, tan = opp / adj

However, I have been wondering what exactly goes on inside of the sin(), etc. functions in math.h.

In those functions, you only give the angle measurement; you don't give any opposite or hypotenuse side lengths... so exactly WHAT IS the math that it uses to figure out the sin, cos, tan, etc.?

It's the same way with calculators... you give them angle measurements... so WHAT MATH do they use to figure out the answers?

I was talking about this with one of my friends, and one way is to automatically assume that one angle is 90 degrees; then there is the user-given angle, and the third angle is 180 minus those two. With all three angles given, you can figure out the side lengths, and then figure out the answers to the trig functions.

However, it would be pretty hard to do this, because you could give ALL SORTS of different angles as input. Therefore, if they DID do this, they could not simply use a matrix to store all the side lengths.

They could use some type of algorithm... but how would they do that? What would be the math involved there?

So it all comes down to: what the heck goes on inside the trigonometry functions?! THEY DON'T TELL US IN SCHOOL! And it makes me freakin' angry, because THAT'S what I want to know...

On a side note, I was also talking to one of my friends about how several teachers talk about using the guess-and-check system in certain instances. For example, factoring.

Why would anybody in the world want to use factoring when they could just use the quadratic formula? Factoring requires the guess-and-check system in SEVERAL cases, and therefore, as a programmer, I consider that hard coding. With the quadratic formula, however, you just plug variables into a function and it returns a value. Much easier and more efficient. Not hard coded. So why the heck is guess and check (and factoring, for that matter) even PART of the curriculum in math courses?

2. Since the trigonometric functions have infinite precision and are not linear, I cannot see how they could be reproduced without some sort of vast lookup table. [I too don't recall the nature of their derivation.] However, there are various trigonometric identities that allow us to take 'shortcuts' to those values. For example, the Bresenham circle-drawing algorithm takes advantage of the fact that the sine and cosine functions complement each other, as well as the fact that the circle can fold upon itself. I'll be looking this up meanwhile.

hth

3. As for factoring: it is useful for polynomials of many degrees as well. But in the case of a quadratic expression, yes, it would perhaps make more sense to use the formula [though it is costly in time, however].

4. Computers probably use an infinite series to calculate sine and cosine. The formula can be derived pretty easily using a Taylor expansion:

Expand sin(x) around x=0:
0th derivative: sin(x) -> sin(0) = 0;
1st derivative: cos(x) -> cos(0) = 1;
2nd derivative: -sin(x) -> -sin(0) = 0;
3rd derivative: -cos(x) -> -cos(0) = -1;
4th derivative: sin(x) -> sin(0) = 0;
5th derivative: cos(x) -> cos(0) = 1;
...

The series for sin(x) is then:
0 + x - x^3/(3!) + x^5/(5!) - x^7/(7!) + ...

Expand cos(x) around x=0:
0th derivative: cos(x) -> cos(0) = 1;
1st derivative: -sin(x) -> -sin(0) = 0;
2nd derivative: -cos(x) -> -cos(0) = -1;
3rd derivative: sin(x) -> sin(0) = 0;
4th derivative: cos(x) -> cos(0) = 1;
5th derivative: -sin(x) -> -sin(0) = 0;
...

The series for cos(x) is then:
1 - x^2/(2!) + x^4/(4!) - ...

Taylor series are generally covered in second semester calculus, so it's probably not a big surprise that you haven't learned this in high school geometry.

5. >Taylor expansion:

Heeey, that rings a bell... I'm in Calc II right now, yay, gonna do that! Right on! DP, can you wait 18 weeks? Then I'll be able to help you out!

6. finally...i know what goes on in the trig functions.....been wondering that for a couple years...

7. A computer cannot calculate sine and cosine exactly. It uses series expansions, like Taylor series, to approximate the values. Note that the computer cannot evaluate an infinite series; instead it takes the first N terms of the series.

In some applications a sine or cosine table is used. To determine values not available in the table, interpolation and extrapolation are used. Lagrange gave some nice formulas for high-order interpolation which can be used.

8. However, I've read that on a Pentium MMX, a sine operation on a float takes 18 clock ticks. How can this be when you're computing some series like Taylor's? You'd need at least 6 iterations to get good precision; that would give about 3 clock ticks per iteration, and the division alone takes 3 ticks!

Oskilian

9. Well, it shouldn't need to perform division. The 1/(n!) factors can be calculated in advance and coded in directly as 0.5, 0.3333, etc. I don't know how many clock ticks a multiplication operation takes, but since the series only needs to go as far as maybe 10 terms at the very most, it seems reasonable.

10. i still say that .3_ and 1/3 are not the same thing...

.3_ < 1/3

11. .3 repeating is not 1/3?
is .0 repeating not 0/3?

12. Well, it depends on what you mean by 0.3_. As long as there are a finite number of digits, then indeed it is less than 1/3. If there are an infinite number of digits, then it is equal to one third. Or, to be more rigorous, we can say that the limit of the following sum is equal to 1/3 as N approaches infinity:

Sum[n=1..N] 3/10^n

Of course, all this talk about infinities is just a bunch of hand-waving unless you've taken calculus.

13. If I take your order of operations as standard, wouldn't the limit evaluate to zero, since the denominator increases exponentially under a constant numerator? And if the fraction were being increased exponentially, since the denominator is greater than the numerator, wouldn't it be zero as well? If that is the point, then I agree, which I would have anyway. Also, that is a good counterpoint re-explanation concerning the state of the precision.

14. > i still say that .3_ and 1/3 are not the same thing...

Don't start that again!

15. Originally posted by doubleanti
if i get your order of operations as standard, wouldn't the limit evaluate to zero as the denominator increases exponentially under a constant?
Well, indeed the term inside the sum approaches zero as n approaches infinity. But this is the limit of the sum of all terms from n = 1 (lowercase) up to N (uppercase) as N approaches infinity; that is, as the number of digits in the repeating decimal becomes infinite.
