Bisection Method

Subjects: Numerical Analysis
Links: Solutions of Equations of One Variable

The first technique, based on the Intermediate Value Theorem (IVT), is called the Bisection, or Binary Search, method.

Suppose f is a continuous function defined on the interval [a,b], with f(a) and f(b) of opposite sign. The Intermediate Value Theorem implies that a number p exists in (a,b) with f(p)=0. Although the procedure will work when there is more than one root in the interval (a,b), we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving (or bisecting) of subintervals of [a,b] and, at each step, locating the half containing p.

To begin, set $a_1 = a$ and $b_1 = b$, and let $p_1$ be the midpoint of [a,b], that is,$$ p_1 = a_1 + \frac{b_1 - a_1}{2} = \frac{a_1 + b_1}{2} $$

We then reapply this process to the half of the interval that contains the root, and repeat until the interval is as small as desired.

Bisection Code

def bisection(f, a: float, b: float, tolerance: float = 1e-6, max_iterations: int = 1_000):
	def sign(x):
		return (x > 0) - (x < 0)
	# The endpoints must bracket a root: f(a) and f(b) of opposite sign
	if sign(f(a)) * sign(f(b)) > 0:
		raise ValueError(f'The endpoints {a} and {b} must have different signs when evaluated by the function')

	midpoint = (a + b) / 2
	for _ in range(max_iterations):
		midpoint = (a + b) / 2
		fp = f(midpoint)
		# Stop when the residual or the half-interval is within tolerance
		if abs(fp) <= tolerance or (b - a) / 2 < tolerance:
			return midpoint
		# Keep the half whose endpoints have opposite signs
		if sign(fp) * sign(f(a)) < 0:
			b = midpoint
		else:
			a = midpoint
	return midpoint

f = lambda x: x*x - 2
a = 1
b = 2
print(bisection(f, a, b, tolerance = 1e-7))

We have multiple criteria for stopping the algorithm: given a tolerance $\varepsilon > 0$, generate $p_1, \ldots, p_N$ until one of the following conditions is met

$$ |p_N - p_{N-1}| < \varepsilon, \qquad \frac{|p_N - p_{N-1}|}{|p_N|} < \varepsilon \quad (p_N \neq 0), \qquad |f(p_N)| < \varepsilon $$

Each of these stopping criteria can be problematic, since f itself can be problematic. There exist sequences $(p_n)_{n \in \mathbb{N}}$ with $p_n - p_{n-1} \to 0$ that nevertheless diverge, so the first criterion can be satisfied far from any root. Likewise, $|f(p_n)|$ can be very small while $p_n$ is nowhere near an actual zero. The second inequality is the best stopping criterion to apply because it comes closest to testing relative error.
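As a sketch, the relative-error criterion can replace the residual test in the code above; the function name `bisection_relative` is my own, and the guard against $p_N = 0$ is made explicit:

```python
def bisection_relative(f, a: float, b: float, epsilon: float = 1e-9, max_iterations: int = 1_000):
	"""Bisection that stops on the relative error |p_N - p_{N-1}| / |p_N| < epsilon."""
	if f(a) * f(b) > 0:
		raise ValueError('f(a) and f(b) must have opposite signs')
	previous = a
	for _ in range(max_iterations):
		midpoint = (a + b) / 2
		# Relative-error stopping criterion (guard against midpoint == 0)
		if midpoint != 0 and abs(midpoint - previous) / abs(midpoint) < epsilon:
			return midpoint
		# Keep the half whose endpoints have opposite signs
		if f(a) * f(midpoint) < 0:
			b = midpoint
		else:
			a = midpoint
		previous = midpoint
	return (a + b) / 2

print(bisection_relative(lambda x: x*x - 2, 1, 2))  # ≈ 1.41421356...
```

Because the interval width halves at each step, $|p_N - p_{N-1}|$ is forced to zero, so this variant always terminates well before the iteration cap.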

Th: Suppose that $f \in C[a,b]$ and $f(a)f(b) < 0$. The Bisection method generates a sequence $(p_n)_{n \in \mathbb{N}}$ approximating a zero $p$ of $f$ with $$ |p_n - p| \le \frac{b-a}{2^n} $$
Meaning the sequence satisfies $p_n = p + O(2^{-n})$. This is quite a loose bound for most cases, but it yields a formula for the least $n$ that guarantees any given tolerance $\varepsilon$: it suffices that$$ \log_2\left(\frac{b-a}{\varepsilon}\right) < n $$so the least such $n$ is the ceiling of $\log_2\left(\frac{b-a}{\varepsilon}\right)$, or that ceiling plus one when the logarithm is exactly an integer. This gives an a priori idea of how many steps are necessary for the desired precision. Note that this guarantees $|p_n - p| < \varepsilon$; we may still want $|f(p_n)| < \varepsilon$, and that is much harder to control without information about the function.
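As a sketch, the bound above translates into a small helper that predicts how many bisection steps guarantee a given tolerance; the function name `min_iterations` is my own:

```python
import math

def min_iterations(a: float, b: float, epsilon: float) -> int:
	"""Least n with (b - a) / 2**n < epsilon, from the bound |p_n - p| <= (b-a)/2^n."""
	n = math.ceil(math.log2((b - a) / epsilon))
	# When log2((b-a)/epsilon) is an exact integer, the strict inequality needs one more step
	if (b - a) / 2**n >= epsilon:
		n += 1
	return n

# For [1, 2] and tolerance 1e-7: log2(1e7) ≈ 23.25, so 24 steps suffice
print(min_iterations(1, 2, 1e-7))  # → 24
```

The explicit check after the ceiling handles the exact-integer case, e.g. on [0, 1] with $\varepsilon = 0.25$ we have $\log_2(4) = 2$, but $1/2^2 = 0.25$ is not strictly smaller than $\varepsilon$, so 3 steps are required.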