Fixed Point Iteration

Subjects: Numerical Analysis
Links: Solutions of Equations of One Variable

The number $p$ is a fixed point for a given function $g$ if $g(p) = p$.

Given a root-finding problem $f(p) = 0$, we can define a function $g$ with a fixed point at $p$ as

$$g(x) = x + \lambda f(x)$$

for $\lambda \in \mathbb{R}$. Conversely, if a function $g$ has a fixed point at $p$, then the function defined by

$$f(x) = x - g(x)$$

has a zero at $p$.
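As a quick numerical check of this correspondence (an assumed example: $f(x) = x^2 - 2$ with root $\sqrt{2}$, and an illustrative choice of $\lambda$, which is not unique):

```python
import math

# Hypothetical example: f(x) = x^2 - 2 has root p = sqrt(2).
# Define g(x) = x + lambda*f(x); g then has a fixed point exactly at p.
f = lambda x: x**2 - 2
lam = -0.25                   # illustrative lambda; the choice is not unique
g = lambda x: x + lam * f(x)

p = math.sqrt(2)              # root of f
print(abs(g(p) - p))          # essentially 0: p is a fixed point of g
```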

Fixed Point of Functional Iteration

To approximate the fixed point of a function $g$, we choose an initial approximation $p_0$ and generate the sequence $\{p_n\}$ by letting $p_n = g(p_{n-1})$ for each $n \geq 1$. If the sequence converges to $p$ and $g$ is continuous, then $p = \lim_{n \to \infty} p_n = \lim_{n \to \infty} g(p_{n-1}) = g(p)$, so $p$ is a fixed point of $g$.

def fixed_point_iteration(g, x0, tolerance: float = 1e-6, max_iterations: int = 1_000):
	x_current = x0
	
	for _ in range(max_iterations):
		x_next = g(x_current)
		
		# stop once successive iterates are within the tolerance
		if abs(x_next - x_current) < tolerance:
			return x_next
		x_current = x_next
	
	print('Did not converge within the maximum iterations')
	return x_current

# example: g(x) = (x + 2/x)/2 has fixed point sqrt(2)
g = lambda x: 0.5 * (x + 2 / x)
print(fixed_point_iteration(g, 1))  # ≈ 1.4142136

Fixed Point Theorem

Let $g \in C[a, b]$ be such that $g([a, b]) \subseteq [a, b]$. Suppose that $g$ is Lipschitz continuous with Lipschitz constant $k < 1$. Then for any $p_0 \in [a, b]$, the sequence defined by

$$p_{n+1} = g(p_n), \quad n \in \mathbb{N}$$

converges to the unique fixed point $p \in [a, b]$.
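As a numerical sanity check (an assumed example, not from the source): $g(x) = \cos x$ satisfies the hypotheses on $[0, 1]$, since it maps $[0, 1]$ into $[\cos 1, 1] \subseteq [0, 1]$ and $|g'(x)| = |\sin x| \leq \sin 1 < 1$, so the iteration converges from any starting point in the interval:

```python
import math

# Sketch: g(x) = cos(x) is a contraction on [0, 1] with Lipschitz
# constant k = sin(1) ≈ 0.8415 < 1, so the theorem applies.
g = math.cos
p = 0.0                       # any starting point in [0, 1] works
for _ in range(100):
	p = g(p)
print(p)                      # ≈ 0.739085, the unique fixed point: cos(p) = p
```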

Cor: Let $g \in C[a, b]$ be such that $g([a, b]) \subseteq [a, b]$, and suppose that $g$ is Lipschitz continuous with Lipschitz constant $k < 1$. Then the bounds for the error involved in using $p_n$ to approximate $p$ are given by

$$|p_n - p| \leq k^n \max\{p_0 - a,\ b - p_0\}$$

and

$$|p_n - p| \leq \frac{k^n}{1 - k}\,|p_1 - p_0| \quad \text{for all } n \geq 1.$$

Meaning that $p_n = p + O(k^n)$. With this, given a tolerance $\varepsilon > 0$, we can find the least $n$ necessary to guarantee a precision of $\varepsilon$: requiring $\frac{k^n}{1-k}|p_1 - p_0| < \varepsilon$ and solving for $n$ (the inequality flips since $k < 1$) gives

$$n > \log_k\!\left(\frac{\varepsilon\,(1 - k)}{|p_1 - p_0|}\right).$$
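A sketch of this bound in use, again with the assumed example $g(x) = \cos x$ on $[0, 1]$ and $k = \sin 1$:

```python
import math

# Sketch: predict how many iterations of g(x) = cos(x) are needed for a
# tolerance eps using the corollary's bound, then run that many steps.
g = math.cos
p0, eps = 0.5, 1e-6
k = math.sin(1.0)             # Lipschitz constant of cos on [0, 1]
p1 = g(p0)

# least n with k^n / (1 - k) * |p1 - p0| < eps, i.e. n > log_k(eps(1-k)/|p1-p0|)
n = math.ceil(math.log(eps * (1 - k) / abs(p1 - p0), k))

p = p0
for _ in range(n):
	p = g(p)
print(n, abs(p - math.cos(p)))  # the residual is below the tolerance
```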

We also have that if $g \in C^1[a, b]$ with $|g'(p)| > 1$, then there is a $\delta > 0$ such that if $0 < |p_0 - p| < \delta$, then $|p_0 - p| < |p_1 - p|$. Thus, no matter how close the initial approximation $p_0$ is to $p$, the next iterate $p_1$ is farther away, so the fixed-point iteration does not converge if $p_0 \neq p$. This is where the definition of unstable fixed points comes from.
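A minimal illustration (an assumed example): $g(x) = x^2$ has a fixed point at $p = 1$ with $g'(1) = 2 > 1$, so iterates starting near $1$ drift away from it:

```python
# Unstable fixed point: g(x) = x^2 fixes p = 1, but |g'(1)| = 2 > 1,
# so each iteration roughly doubles a small distance from 1.
g = lambda x: x**2
p = 1.01                      # close to the fixed point p = 1
for _ in range(5):
	p = g(p)
print(p)                      # already well away from 1
```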

Similarly, let $g$ be continuously differentiable on some interval $(c, d)$ containing the fixed point $p$ of $g$. If $|g'(p)| < 1$, then there is a $\delta > 0$ such that, whenever $|p_0 - p| \leq \delta$, the fixed-point iteration converges.
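This criterion is easy to check for the earlier example $g(x) = \tfrac{1}{2}(x + 2/x)$ with fixed point $\sqrt{2}$: since $g'(x) = \tfrac{1}{2}(1 - 2/x^2)$, we get $g'(\sqrt{2}) = 0$, comfortably below $1$ in absolute value, which is why that iteration converges so quickly:

```python
import math

# Derivative test for g(x) = 0.5*(x + 2/x): g'(x) = 0.5*(1 - 2/x^2).
g_prime = lambda x: 0.5 * (1 - 2 / x**2)
p = math.sqrt(2)              # the fixed point of g
print(abs(g_prime(p)))        # 0 up to rounding, so |g'(p)| < 1: convergent
```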