Sin, Cos, Tan: A math question

For example:
sin x = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ...

while
cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...
Of course, those only work for x between -1 and 1. And the closer to -1 or 1 they get, the slower they get.

Wouldn't it be better to linearly interpolate between points than to approximate by adding a term from near zero, given the gradient fluctuations?
I don't understand what you're saying here.
 
Of course, those only work for x between -1 and 1. And the closer to -1 or 1 they get, the slower they get.
No, they work for all real numbers x. Different values of x will give different rates of convergence, but all of them will converge.
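
To make the convergence concrete, here's a minimal C sketch (purely illustrative; the name taylor_sin and the tolerance are my own, and this is not how calculators or libm actually do it) that sums the sine series term by term. Note that no factorials are computed directly, since each term is just the previous one times -x^2/((2n)(2n+1)):

```c
#include <stdio.h>
#include <math.h>

/* Sum the Taylor series for sin(x) until the terms are negligible.
   Each term is the previous one times -x*x / ((2n)(2n+1)), which
   is why the factorials eventually beat any power of x. */
double taylor_sin(double x)
{
    double term = x, sum = x;
    for (int n = 1; fabs(term) > 1e-15; n++) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}

int main(void)
{
    for (double x = 0.5; x <= 8.0; x *= 2.0)
        printf("x = %3.1f  series = %.12f  libm = %.12f\n",
               x, taylor_sin(x), sin(x));
    return 0;
}
```

For large x the early terms get big before they shrink, so rounding error creeps in; real implementations reduce x to a small interval first.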
 
I won't pretend to understand this, but it seems you're saying that sin, cos and tan are approximations?

Back in the old days... and I am so old that my first calculator was called a "slide rule" (which was replaced by a 5-function calculator... it had plus, minus, multiply, divide and square root --- for anything else I used a book of math tables until I got an HP-25 my sophomore year of college)... Most computers did not do complex mathematics, they were essentially glorified adding machines. Adding machines that you could program (with cards using a keypunch... which was much more modern than "computers" which were actually people operating mechanical calculators).

Truthfully, computers still are glorified adding machines... it is just that there is so much memory and they are so fast that the processes are mostly invisible.

Anyway... there have been methods around for a long time to approximate the results of several types of math problems. Problems like integration, trig functions, cube roots, inverting matrices... etc. This brought about a type of mathematics called "numerical analysis". I have a book with algorithms for some of the more common things, like Newton's Method. While in college studying structural engineering I became familiar with the Rayleigh-Ritz method for calculating Eigenvectors and Eigenvalues (and because of the professor's accent I spent a year thinking it was the "rally-roots" method!). If you look at the references in the link, it's a method that dates back to way before computers even existed.
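
To give a flavor of what's in those books, here's a minimal sketch of Newton's Method applied to cube roots, solving r^3 - a = 0 with the textbook update r <- r - f(r)/f'(r) (the function name, starting guess, and tolerance are my own choices, and it assumes a > 0):

```c
#include <stdio.h>
#include <math.h>

/* Newton's method for the cube root of a (a > 0):
   solve r^3 - a = 0, update r <- r - (r^3 - a) / (3 r^2). */
double newton_cbrt(double a)
{
    double r = a > 1.0 ? a : 1.0;   /* crude starting guess */
    for (int i = 0; i < 60; i++) {
        double next = r - (r * r * r - a) / (3.0 * r * r);
        if (fabs(next - r) < 1e-15 * fabs(next))
            return next;
        r = next;
    }
    return r;
}

int main(void)
{
    printf("cbrt(27) ~ %.12f\n", newton_cbrt(27.0));
    printf("cbrt(2)  ~ %.12f\n", newton_cbrt(2.0));
    return 0;
}
```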

By the way a search on Mathworld for "Numerical Methods" brings up a blank page. But I did find this: http://www.math.niu.edu/~rusin/known-math/index/65-XX.html
 
By the way a search on Mathworld for "Numerical Methods" brings up a blank page.

Maybe you should search for "Numberical methods"... The course calendar at my alma mater had the numerical analysis class listed as Numberical Analysis for years (until the class was taught by someone with a better mastery of English).
 
Maybe you should search for "Numberical methods"... The course calendar at my alma mater had the numerical analysis class listed as Numberical Analysis for years (until the class was taught by someone with a better mastery of English).

I have never ever seen Numberical! (though the course catalog may have been typed by some clerical person who thought "numerical" was a typo... thinking it was a course on analyzing NUMBERS!).

I tried "numerical methods" first, which brought up a bunch of Mathworld pages on each specific method. It was from the "Newton's Method" page that I saw "Numerical analysis" and decided to give that a go.

The class I took was called "Numerical Methods"... taught by a Professor Emeritus. In other words a cranky old coot. That was back in the days when we had to go to the computer center, wait our turn to use the keypunch machine, set our FORTRAN programs written on cards into a card reader... and then wait for the printout. Hoping we did not get the less than helpful "Mode 4 error"!
 
Yeah, I had to program in FORTRAN too, but punch cards had long since gone the way of the dodo. Actually, like most people in the class, I just handed in dead FORTRAN code with results from a program written in another language (usually C).
 
Yeah, I had to program in FORTRAN too, but punch cards had long since gone the way of the dodo. ...

I did say I was old... but I'm not that old!

By the way, the keypunches were removed the summer I graduated. When I went back the next winter for some evening graduate classes they had been replaced with terminals and modems to connect to VAX/VMS computers. The same as what I used at work (along with another mainframe), so I got a great deal of delight showing my professor which commands to use!
 
Most computers did not do complex mathematics, they were essentially glorified adding machines.
You might be a little older than me, but when I took programming in college, it was all Fortran on punch cards. And Fortran was designed to be a scientific language - you could declare a variable to be type COMPLEX; I was later shocked to find out that Pascal and C didn't natively support this type.
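
(C did eventually catch up: the C99 standard added a native complex type in <complex.h>. A tiny sketch:)

```c
#include <stdio.h>
#include <complex.h>   /* native complex arrived with C99 */

int main(void)
{
    double complex z = 3.0 + 4.0 * I;
    double complex w = z * z;   /* (3+4i)^2 = -7+24i */
    printf("|z| = %g, z^2 = %g%+gi\n", cabs(z), creal(w), cimag(w));
    return 0;
}
```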
 
No, they work for all real numbers x. Different values of x will give different rates of convergence, but all of them will converge.
D'oh! For some reason I forgot about the factorials.

Anyway... there have been methods around for a long time to approximate the results of several types of math problems. Problems like integration, trig functions, cube roots, inverting matrices... etc.
And apparently, one method of finding cube roots is to set up a matrix with that as one of the eigenvalues, and numerically approximate the eigenvalues.
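
That really does work: the companion matrix of x^3 - a = 0 has the three cube roots of a as its eigenvalues. A hedged C sketch using power iteration (the shift s is needed because all three roots have the same modulus, so the real one has to be made dominant; the shift value, iteration count, and function name are arbitrary choices of mine):

```c
#include <stdio.h>
#include <math.h>

/* Cube root via eigenvalues: the companion matrix of x^3 - a
   has the three cube roots of a as eigenvalues.  Power-iterate
   on the shifted matrix C + s*I (s > 0 makes the real root the
   strictly dominant eigenvalue), then subtract the shift. */
double eig_cbrt(double a)
{
    const double s = 2.0;               /* shift */
    double C[3][3] = { { s, 0, a },     /* companion matrix + s*I */
                       { 1, s, 0 },
                       { 0, 1, s } };
    double v[3] = { 1, 1, 1 };
    double lambda = 0.0;
    for (int it = 0; it < 500; it++) {
        double w[3] = { 0, 0, 0 }, norm = 0.0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                w[i] += C[i][j] * v[j];
        for (int i = 0; i < 3; i++) norm += w[i] * w[i];
        norm = sqrt(norm);
        for (int i = 0; i < 3; i++) v[i] = w[i] / norm;
        lambda = norm;   /* converges to s + cbrt(a) */
    }
    return lambda - s;
}

int main(void)
{
    printf("cbrt(2) ~ %.9f (libm: %.9f)\n", eig_cbrt(2.0), cbrt(2.0));
    return 0;
}
```

Nobody would compute a single cube root this way, of course, but the same machinery scales up to the big eigenvalue problems mentioned above.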

The class I took was called "Numerical Methods"... taught by a Professor Emeritus.
That Emeritus guy sure is busy, isn't he? It seems like every university has classes taught by Professor Emeritus.
 
...That Emeritus guy sure is busy, isn't he? It seems like every university has classes taught by Professor Emeritus.

Is he the same cranky old coot?

(note to OP, "emeritus" means that the guy is semi-retired, and comes in to teach one or two classes... what is funny is that for the longest time I thought "eigen" was someone's name; I was told late in my college career that it was German for "proper", or something like that!).
 
You might be a little older than me, but when I took programming in college, it was all Fortran on punch cards. And Fortran was designed to be a scientific language - you could declare a variable to be type COMPLEX; I was later shocked to find out that Pascal and C didn't natively support this type.

:eye-popping: :eek: :jaw-dropping:

Kind of makes sense: FORTRAN is short for "Formula Translation". My experience is that Pascal and C were computer/electrical-engineer-driven languages. Taking into account my EE hubby's feelings about higher math (he got a "D" in Differential Equations)... I tease that computer/electrical engineers only count with zeros and ones.

I have not been employed for a long time, but even during my last years of work I was doing much less programming in FORTRAN, and using tools where the hard work had already been done. One tool was DMAP, "Direct Matrix Abstraction Programming" (part of NASTRAN, a finite element program, which would solve for eigenvectors), plus a nifty program called Language for Structural Dynamics (yes, it is LSD)... which allowed me to put in the equations almost like what I had on paper. I vaguely remember using a program that used something called a "QR" transformation that got results from a 2nd order differential equation with somewhere between four and ten variables (degrees of freedom).

I do have an old copy of MathCAD... which is cool because you put the equations in like you would on paper, and it solves for the variables.
 
There is a very small demon inside, with an even smaller book of log tables*.

*(Made from the square roots of the magical function tree).
 
Of course, those only work for x between -1 and 1. And the closer to -1 or 1 they get, the slower they get.
No, these work over all the real (or complex) numbers. The factorial function grows faster than any exponential function, guaranteeing convergence.

I don't understand what you're saying here.
Sorry.
I meant to say that, as the gradient of a sin or cos function fluctuates, it makes more sense to interpolate between two known points than to approximate sin(t+D) as sin(t)+sin(D).

Edit: Bah, I was beaten to it by Cabbage.
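
A minimal C sketch of that table-plus-interpolation idea (the table size, the range [0, pi/2], and the name lerp_sin are arbitrary choices of mine; a real routine would first reduce arbitrary angles into that range using sin's symmetries):

```c
#include <stdio.h>
#include <math.h>

#define N 256                          /* table resolution (arbitrary) */
static const double HALF_PI = 1.57079632679489661923;
static double table[N + 1];            /* sin sampled over [0, pi/2] */

/* Precompute sin at N+1 evenly spaced points on [0, pi/2]. */
static void build_table(void)
{
    for (int i = 0; i <= N; i++)
        table[i] = sin(i * HALF_PI / N);
}

/* Linearly interpolate between the two nearest table entries.
   Handles x in [0, pi/2] only. */
static double lerp_sin(double x)
{
    double t = x / HALF_PI * N;        /* position in table units */
    int i = (int)t;
    if (i >= N) return table[N];
    return table[i] + (t - i) * (table[i + 1] - table[i]);
}

int main(void)
{
    build_table();
    for (double x = 0.1; x < 1.5; x += 0.35)
        printf("x = %.2f  lerp = %.9f  libm = %.9f\n",
               x, lerp_sin(x), sin(x));
    return 0;
}
```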
 
The thought occurs to me that a calculator only needs to calculate the sine of any angle, since the cosine would be equal to sin((pi/2) - x) and the tangent is just sin/cos, all in radians of course. (tan(pi/2) is undefined.)
I've no idea how they program it to calculate the sine. I do remember tests in the early days of calculators to compare the accuracy of trig functions. These were not all equal, I seem to remember.
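
That reduction is one line each in code; a toy sketch (the names my_cos and my_tan are mine, and it leans on libm's sin as the one primitive):

```c
#include <stdio.h>
#include <math.h>

static const double HALF_PI = 1.57079632679489661923;

/* Once you have a sine, cosine and tangent come for free. */
static double my_cos(double x) { return sin(HALF_PI - x); }
static double my_tan(double x) { return sin(x) / my_cos(x); } /* undefined at pi/2 */

int main(void)
{
    double x = 0.7;   /* arbitrary test angle, in radians */
    printf("cos: %.12f vs %.12f\n", my_cos(x), cos(x));
    printf("tan: %.12f vs %.12f\n", my_tan(x), tan(x));
    return 0;
}
```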
 
I meant to say that, as the gradient of a sin or cos function fluctuates, it makes more sense to interpolate between two known points than to approximate sin(t+D) as sin(t)+sin(D).
The formula I gave was sin(t+D) = sin(t)cos(D) + cos(t)sin(D), which isn't an approximation.
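
(A quick numerical check that the identity is exact up to rounding, for a couple of arbitrary angles:)

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double t = 1.1, D = 0.4;   /* arbitrary angles in radians */
    printf("sin(t+D)                  = %.15f\n", sin(t + D));
    printf("sin(t)cos(D)+cos(t)sin(D) = %.15f\n",
           sin(t) * cos(D) + cos(t) * sin(D));
    return 0;
}
```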
 
I tease that computer/electrical engineers only count with zeros and ones.

Then maybe he'll like this joke (though it has to be written to work):

There are 10 kinds of people in the world: those who can count in binary, and those who can't.
 
Then maybe he'll like this joke (though it has to be written to work):

There are 10 kinds of people in the world: those who can count in binary, and those who can't.

Here's another classic joke appropriate for this thread:

There's 3 kinds of mathematicians: those who can count and those who can't count.

/actually, I am merely a math major, not a mathematician...
 
