Last time, I said I couldn't figure out how to justify the discussion of g(x) for fixed-point iteration. I just realized this was actually discussed in class, but I was too lost at the time to follow it.

Back up for a moment to consider an example of bisection: computing the square root of A, where A is some non-negative number. The square root of A is a root of the function f(x)=x^{2}−A; we are looking for the positive number r which satisfies f(r)=0. Let's be lazy and note that r must lie between 1 and A (even if A<1), so we can take 1 and A as the endpoints of the initial bracket and iterate from there. With each iteration, one endpoint of the bracket moves, and x_{i}, the midpoint of the bracket and our best guess for r, converges toward r. Since each iteration halves the bracket, it takes about three iterations (2^{3}≈10) to gain one decimal place of accuracy, so if we need a lot of decimal places it might take a while.
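As a concrete sketch (my own code, not from the class; the function name and iteration count are my choices), basic bisection for the square root might look like this:

```python
# Bisection for sqrt(A) via f(x) = x**2 - A, bracketing between 1 and A.
def bisect_sqrt(A, iterations=60):
    f = lambda x: x * x - A
    a, b = min(1.0, A), max(1.0, A)   # r lies between 1 and A, in either order
    for _ in range(iterations):
        x = (a + b) / 2               # midpoint is the current best guess for r
        if f(a) * f(x) <= 0:          # sign change in [a, x]: move the right endpoint
            b = x
        else:                         # otherwise the root is in [x, b]
            a = x
    return (a + b) / 2

print(bisect_sqrt(2.0))   # approaches 1.41421356...
```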

There are a couple of things to notice if we want a method that converges faster than bisection. The first is that we only move one endpoint at a time. The second is that we know f(x) in this case, and we aren't using that information. This suggests moving both endpoints with each iteration by using what we know about f(x). For f(x)=x^{2}−A, if 0<x<r then A/x>r, and vice versa, so x and A/x always bracket r. Starting from a bracket with endpoints a and b, we can therefore get a new bracket with endpoints x=(a+b)/2 and A/x.

When we try this approach to find r, we find that the number of accurate digits in our approximation x_{i} roughly doubles with each iteration. This is far faster than basic bisection, which needs about three iterations just to gain one additional digit of accuracy. In addition, since our two endpoints are x and A/x, we only have to keep track of one number per iteration, as opposed to the two numbers required for bisection. This leads to the iteration formula x_{i+1}=(x_{i}+A/x_{i})/2=g(x_{i}).
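A quick numerical check (my own sketch; the starting guess x_{0}=1 is my choice) shows the digit-doubling for A=2:

```python
# Iterate x_{i+1} = (x_i + A/x_i)/2 and watch the error shrink quadratically.
import math

A = 2.0
x = 1.0                              # starting guess
for i in range(6):
    x = (x + A / x) / 2              # midpoint of the bracket [x, A/x]
    print(i, abs(x - math.sqrt(A)))  # accurate digits roughly double each step
```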

Applying this idea of bisection with both endpoints moving to cube roots (f(x)=x^{3}−A) leads to a guess of an iterating function g(x)=(x+A/x^{2})/2. However, this iterating function performs not much better than bisection (here g′(r)=−1/2, so the error only roughly halves with each step). That prompts a general discussion of fixed-point iteration and the demonstration that if 0<|g′(r)|<1, fixed-point iteration converges linearly, while if g′(r)=0, it converges quadratically.
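A small demonstration (my own code and numbers, not from the class) of that linear convergence: for g(x)=(x+A/x^{2})/2, differentiating gives g′(r)=−1/2 at r=A^{1/3}, so each error should be about half the previous one.

```python
# Naive cube-root iterate g(x) = (x + A/x**2)/2 for A = 8, whose root is r = 2.
A = 8.0
r = 2.0                    # exact cube root of 8
x = 1.9                    # start near the root so the linear-rate estimate applies
for i in range(10):
    x = (x + A / x**2) / 2
    print(i, abs(x - r))   # each error is roughly half the previous one
```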

Having established that fixed-point iteration gives quadratic convergence when g′(r)=0, we can ask how to determine such a g(x) from a given f(x), which leads to Newton's method: g(x)=x−f(x)/f′(x). However, I think Newton's method makes more sense geometrically, so I would introduce it that way first, then show that it meets the criterion for quadratic fixed-point convergence, g′(r)=0, as long as f′(r) is not also zero and f″(r) is finite.
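Here's a minimal sketch (my own code, not from the class) of Newton's method for cube roots: with f(x)=x^{3}−A, the fixed-point form g(x)=x−f(x)/f′(x) works out to (2x+A/x^{2})/3, and since g′(r)=0 the convergence is quadratic.

```python
# Newton's method as fixed-point iteration: g(x) = x - f(x)/f'(x).
# For f(x) = x**3 - A this gives g(x) = (2*x + A/x**2)/3, and g'(r) = 0,
# so the number of accurate digits roughly doubles each iteration.
def newton_cbrt(A, x0=1.0, iterations=8):
    x = x0
    for _ in range(iterations):
        x = (2 * x + A / x**2) / 3
    return x

print(newton_cbrt(8.0))   # converges to 2.0
```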