8:52 PM
The Taylor series approximation of e^x converges rapidly for small positive values of x, but gives badly inaccurate results for negative values of x. For negative x the terms of the series alternate in sign, so computing the series requires subtracting large numbers from each other to get a small number, and on a computer this will introduce rounding errors.
The solution is to rewrite e^x as 1/e^(−x) when x < 0. Since −x is positive, every term of the series is positive, and the sum can be computed without rounding errors reducing the accuracy of the final result. This solution is straightforward, but it was not obviously necessary from the original formula. The lesson, again, is to watch out for rounding errors.
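Here's a minimal sketch of the fix in Python; the function names, the 100-term cutoff, and the printed comparison are my own choices, not anything from the class:

```python
import math

def exp_series(x, terms=100):
    # Partial sum of the Taylor series: x^n/n! for n = 0 .. terms-1.
    # Keep terms modest so x**n and n! both stay within float range.
    return sum(x**n / math.factorial(n) for n in range(terms))

def exp_safe(x):
    # For x < 0, compute 1/e^(-x) instead: the series for e^(-x)
    # has only positive terms, so nothing cancels.
    if x < 0:
        return 1.0 / exp_series(-x)
    return exp_series(x)

print(exp_safe(-20.0))  # ~2.0611536224e-09, matching the calculator
print(math.exp(-20.0))  # library reference value
```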
The Taylor series approximation of e^x is accurate for larger positive values of x, but the number of terms that must be computed in order for the approximation to converge increases as x increases. This is a problem, because each term of the Taylor series has x^n in the numerator. If the number of required terms gets too large, x^n will become too large for the computer to represent and the computation will fail. My calculator gives e^79 as 2.038281066651×10^34. The Taylor series approximation returns infinity, because it requires more than 163 terms and 79^163 is greater than the computer can represent.
Once again, the solution is to rewrite the original function. x can be broken up into an integer part and a fractional part, and then e^x = e^(integer part) × e^(fractional part). Since the fractional part is less than 1, e^(fractional part) can be calculated quickly as a Taylor series expansion. If e is stored as a constant, then e^(integer part) can be directly multiplied out, and so e^x can be computed even when a strict Taylor series approximation of e^x fails.
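A sketch of the split, with the same caveats (the stored constant and the 30-term cutoff are mine):

```python
import math

E = 2.718281828459045  # e stored as a constant

def exp_series(x, terms=30):
    # Taylor series for e^x; 30 terms is plenty when 0 <= x < 1.
    return sum(x**n / math.factorial(n) for n in range(terms))

def exp_split(x):
    # e^x = e^(integer part) * e^(fractional part).
    k = math.floor(x)  # integer part
    r = x - k          # fractional part, 0 <= r < 1
    return E**k * exp_series(r)

print(exp_split(79.0))  # ~2.038281066651e+34, matching the calculator
print(math.exp(79.0))   # library reference value
```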
Taylor series can also be used to approximate sin(x) and cos(x). As with e^x, the series need fewer terms to converge near 0, so the identities sin(x) = sin(x + 2πn) and cos(x) = cos(x + 2πn), which hold for every positive and negative integer n, should be used so the Taylor series is only computed for −π < x < π.
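Here's a sketch for sin(x); cos(x) works the same way. The 20-term cutoff and the fmod-based shift are my own choices:

```python
import math

def sin_series(x, terms=20):
    # Taylor series for sin(x): (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

def sin_reduced(x):
    # Shift x by a multiple of 2*pi so the series only ever
    # sees an argument between -pi and pi.
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    return sin_series(x)

print(sin_reduced(100.0))  # ~-0.5063656411
print(math.sin(100.0))     # library reference value
```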
Finally, Taylor series can be used to approximate ln(x). The catch here is that e^x, sin(x), and cos(x) are all computed based on the difference between x and 0, while ln(x) must be computed based on the difference between x and 1, since ln(x) has no Taylor series around 0. This makes the math a little messier but isn't otherwise a big deal. However, the Taylor series for ln(x) does not converge for x > 2. Since ln(x) = −ln(1/x), to compute ln(x) for x ≥ 2, compute −ln(1/x) instead.
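A sketch, again with invented details (the 200-term cutoff is crude, and convergence gets slow as x approaches 2 from below):

```python
import math

def ln_series(x, terms=200):
    # Taylor series for ln(x) about 1:
    # (x-1) - (x-1)^2/2 + (x-1)^3/3 - ...  valid for 0 < x <= 2
    u = x - 1.0
    return sum((-1)**(n + 1) * u**n / n for n in range(1, terms))

def ln_safe(x):
    # For x >= 2, use ln(x) = -ln(1/x) so the series converges.
    if x >= 2:
        return -ln_series(1.0 / x)
    return ln_series(x)

print(ln_safe(10.0))   # ~2.302585093
print(math.log(10.0))  # library reference value
```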
6:47 PM
I have been considering resurrecting this blog to talk about my current interest, which is math. This fall I'm taking a course in Numerical Analysis, and I'm finding that in order to make sense of the class, I have to rethink through the lectures after each class. As long as I'm doing this thinking, I figured I should post it. Hopefully posting about math in a generic blogger blog won't turn into too much of a headache.
I'm going to go ahead and start with Lecture 1, which was last week's class. I likely don't know what I'm talking about. If anyone sees this and wants to leave a comment, please do.
The big picture on Numerical Analysis is that math gives us all kinds of tools for understanding the world. However, it's not always easy to go from a mathematical formula to a number, and often, we care about the number rather than the formula. Numerical Analysis is the study of techniques for approximating the values of formulas and turning them into numbers.
Certain mathematical operations are easy. Addition, subtraction, multiplication, and division are all easy, and you can do a lot with them, such as evaluating polynomials. If f(x) is a polynomial, you can find the value of f(x) for any x with just a bunch of multiplications and additions. But there are lots of functions that are not polynomials, and these are hard to evaluate. For example, f(x) = e^x, f(x) = sin(x), and f(x) = ∫(−∞→x) e^(−t²/2)/(2π)^(1/2) dt are common functions with many applications which cannot be directly evaluated at more than a few points.
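For instance, polynomial evaluation can be written so each coefficient costs just one multiply and one add. This sketch uses Horner's rule, which is a standard trick rather than anything from the lecture:

```python
def poly_eval(coeffs, x):
    # Evaluate c[0] + c[1]*x + ... + c[k]*x^k by Horner's rule:
    # work from the highest coefficient down, folding in x each step.
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# f(x) = 1 + 2x + 3x^2 at x = 2:  1 + 4 + 12 = 17
print(poly_eval([1.0, 2.0, 3.0], 2.0))
```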
Numerical analysis shows how to approximate the values of these functions. For example, we can look at how to approximate f(x) = e^x for values of x other than 0. Math tells us that lots of functions, including the exponential function, can be written as a Taylor series. The general form of a Taylor series is f(x) = f(a) + f′(a)(x−a)/1! + f″(a)(x−a)^2/2! + f^(3)(a)(x−a)^3/3! + … . Written compactly, this is ∑(n=0→∞) f^(n)(a)(x−a)^n/n! .
The good news is that for f(x) = e^x, f′(x) = e^x = f(x), and therefore f^(n)(x) = e^x for all integers n ≥ 0. In combination with the fact that a^0 = 1 for all a ≠ 0, this gives the Taylor series e^x = ∑(n=0→∞) x^n/n! = 1 + x + x^2/2 + … . This means that a Taylor series turns functions into polynomials, and since polynomials can be evaluated with addition, subtraction, multiplication, and division, we can evaluate functions other than polynomials, as long as they can be expressed as a Taylor series.
The bad news is that this is an infinite polynomial. We have turned e^x from a function that can't be evaluated at all into one that can be evaluated, but requires an infinite number of calculations. Fortunately, math tells us that if you take a finite number of terms of the Taylor series, that sum will approximate the true value of the function, and the approximation gets as close as you want as long as you take enough terms. Math also tells us how to figure out how many terms are "enough", so you can write a little computer program that does the calculations and spits out the value of e^x for any value of x. This value is an approximation, but it can be a good enough approximation for whatever purpose you're using it for.
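Here's roughly what such a little program might look like in Python, with each term computed directly as x^n/n!, a literal transcription of the series (the names and cutoff are mine):

```python
import math

def partial_sums(x, n_max):
    # Print the running sum of the Taylor series for e^x,
    # with each term computed directly as x^n / n!.
    total = 0.0
    for n in range(n_max + 1):
        total += x**n / math.factorial(n)
        print(n, total)

partial_sums(1.0, 14)  # reproduces the table below
```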
Let's do a quick demonstration of this. The table below shows a sequence of approximations of f(1) = e^1 = e. For each value of n, the table shows the sum of terms 0 through n of the Taylor series expansion of e^x, evaluated at x = 1. For the record, my calculator says that e = 2.71828182846.
n | sum
0 | 1.00000000000
1 | 2.00000000000
2 | 2.50000000000
3 | 2.66666666667
4 | 2.70833333333
5 | 2.71666666667
6 | 2.71805555556
7 | 2.71825396825
8 | 2.71827876984
9 | 2.71828152557
10 | 2.71828180115
11 | 2.71828182620
12 | 2.71828182829
13 | 2.71828182845
14 | 2.71828182846
The Taylor series reaches the same level of accuracy as my calculator in 14 steps, which is relatively quick. According to math, this calculation works for any value of x, although more steps may be needed to reach the desired accuracy, depending on the value of x. In the real world, however, if x gets too large, then e^x can be bigger than the computer can represent. For example, if x = 10^20, the resulting sequence of finite sums is:
n | sum
0 | 1 |
1 | 1e+20 |
2 | 5e+39 |
3 | 1.6666666667e+59 |
4 | 4.1666666667e+78 |
5 | 8.3333333333e+97 |
6 | 1.3888888889e+117 |
7 | 1.9841269841e+136 |
8 | 2.4801587302e+155 |
9 | 2.7557319224e+174 |
10 | 2.7557319224e+193 |
11 | 2.5052108385e+212 |
12 | 2.0876756988e+231 |
13 | 1.6059043837e+250 |
14 | 1.1470745598e+269 |
15 | 7.6471637318e+287 |
16 | Inf |
Actually, much smaller values of x will return a value of ∞ on most computers: e^710 is already larger than the computer can represent, although with x = 710 the series takes many more steps to get there.
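Assuming standard 64-bit floats, whose ceiling is about 1.8×10^308, the cutoff is easy to check (Python's math.exp raises an error on overflow rather than returning ∞):

```python
import math

print(math.exp(709))  # ~8.2e+307, still representable
try:
    print(math.exp(710))
except OverflowError:
    print("e^710 overflows a 64-bit float")
```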
There's a much bigger problem with using computers to calculate e^x via a Taylor series expansion. Calculating e^(−20) this way requires 87 steps before the result settles on 4.1736374994×10^(−9). This looks reasonable, but my calculator says the answer is 2.06115362244×10^(−9). We can see the problem by looking at the individual partial sums of the expansion.
n | sum
0 | 1 |
1 | -19 |
2 | 181 |
3 | -1152.3333333 |
4 | 5514.3333333 |
5 | -21152.333333 |
6 | 67736.555556 |
7 | -186231.69841 |
8 | 448688.93651 |
9 | -962245.80776 |
10 | 1859623.6808 |
11 | -3271048.1166 |
12 | 5280071.5457 |
13 | -7875497.1655 |
14 | 10918172.422 |
15 | -14140053.695 |
16 | 17182728.951 |
17 | -19667603.573 |
18 | 21277210.343 |
19 | -21822593.779 |
20 | 21277210.343 |
21 | -19770222.154 |
22 | 17545625.57 |
23 | -14902937.669 |
24 | 12137531.697 |
25 | -9494843.7955 |
etc. |
The problem is that the final result is a little tiny number, but early steps in the sum involve adding and subtracting really big numbers from each other. According to math, the Taylor series gives the correct result. On a computer, however, numbers are rounded off to a fixed number of significant digits. When a bunch of big numbers are added and subtracted from each other to get a small result, the rounding errors can swamp the result and make the final answer totally wrong.
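The whole failure fits in a few lines of Python. The exact garbage you get depends on rounding details, but the naive sum lands far from the true value, just like the table above:

```python
import math

def exp_series(x, terms=100):
    # Naive partial sum of the Taylor series for e^x.
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(-20.0))  # a small wrong number (~4e-09 in the run above)
print(math.exp(-20.0))    # ~2.0611536224e-09, the true value
```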
This is kind of terrifying. You can write a program that math says will give the right answer, to the accuracy that you need it, but the computer program could introduce rounding errors that make the final answer totally wrong. By looking at the intermediate steps in this case, we could guess that there might be a problem with the final answer, but if you are handed a computer program that claims to evaluate e^x, or any other function, it's possible that the program will be mathematically correct but give incorrect answers for some inputs. These errors may only be detectable after the fact. The appropriate response is probably justifiable paranoia.