Brent's method

In numerical analysis, Brent's method is a complicated but popular root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less reliable methods. The idea is to use the secant method or inverse quadratic interpolation if possible, because they converge faster, but to fall back to the more robust bisection method if necessary. Brent's method is due to Richard Brent (1973) and builds on an earlier algorithm of Theodorus Dekker.

The method

Suppose that we want to solve the equation f(x) = 0. As with the bisection method, we need to initialize Brent's method with two points, say a and b, such that f(a) and f(b) have opposite signs. If f is continuous on [a, b], the intermediate value theorem guarantees the existence of a solution between a and b.

Given these inputs, Brent's method goes through a number of iterations until it has found an approximation to the root within the user-specified tolerance. At every iteration, f(a) and f(b) will have opposite signs, ensuring that a and b bracket a root. Usually, b will be closer to the unknown root. Denote the value of b in the previous iteration by c (in the first iteration, we set c = a).
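
For illustration, this setup might look as follows in Python (a sketch only; the function name check_bracket and the example function x**2 - 2 are chosen here for illustration and are not part of Brent's algorithm):

    def check_bracket(f, a, b):
        """Verify the precondition: f(a) and f(b) must have opposite signs,
        so that the interval [a, b] brackets a root."""
        fa, fb = f(a), f(b)
        if fa * fb >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        return fa, fb

    # Example: x**2 - 2 changes sign on [1, 2], so the bracket is valid;
    # in the first iteration, c is set to a, as described above.
    fa, fb = check_bracket(lambda x: x**2 - 2, 1.0, 2.0)
    c = 1.0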

We now try to find a better approximation to the root than b. If f(a), f(b) and f(c) are all different, we use inverse quadratic interpolation:

    d = a f(b) f(c) / ((f(a) - f(b)) (f(a) - f(c)))
        + b f(a) f(c) / ((f(b) - f(a)) (f(b) - f(c)))
        + c f(a) f(b) / ((f(c) - f(a)) (f(c) - f(b)))
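
As an illustration, this interpolation step can be written in Python as follows (a sketch; the helper name inverse_quadratic_step is chosen here for illustration):

    def inverse_quadratic_step(a, b, c, fa, fb, fc):
        """Inverse quadratic interpolation through (a, fa), (b, fb), (c, fc);
        requires fa, fb and fc to be pairwise different."""
        return (a * fb * fc / ((fa - fb) * (fa - fc))
                + b * fa * fc / ((fb - fa) * (fb - fc))
                + c * fa * fb / ((fc - fa) * (fc - fb)))
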
If they are not all different, we use linear interpolation as with the secant method, through the points (a, f(a)) and (b, f(b)) (which always have different function values, because f(a) and f(b) have opposite signs):

    d = b - f(b) (b - a) / (f(b) - f(a))
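
In Python, this step might read (a sketch; the helper name secant_step is chosen for illustration):

    def secant_step(a, b, fa, fb):
        """Linear (secant) interpolation through (a, fa) and (b, fb); safe here
        because f(a) and f(b) have opposite signs, so fa != fb."""
        return b - fb * (b - a) / (fb - fa)
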
If the value of d, obtained by either inverse quadratic interpolation or linear interpolation, lies between b and the midpoint

    m = (a + b) / 2,

then we set e = d; otherwise we set e = m. In the latter case, we do a step according to the bisection method.
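
A Python sketch of this acceptance test (the helper name accept_or_bisect is chosen for illustration):

    def accept_or_bisect(d, a, b):
        """Keep the interpolated point d only if it lies between b and the
        midpoint m = (a + b) / 2; otherwise fall back to a bisection step."""
        m = (a + b) / 2
        if min(b, m) < d < max(b, m):
            return d  # e = d: the interpolation step is accepted
        return m      # e = m: bisection step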

For the next iteration, we replace a by b if f(b) and f(e) have opposite signs. Otherwise, a is unchanged. Furthermore, we replace c by b and b by e. If the new value of b is still not sufficiently close to the root, we find a better approximation by inverse quadratic interpolation, the secant method or bisection, and so on.

Algorithm

  • Inputs: a and b, such that f(a) f(b) < 0
  • c := a
  • repeat until f(b) = 0 or |b - a| is smaller than the user-specified tolerance (convergence)
    • if f(a) ≠ f(b) and f(a) ≠ f(c) and f(b) ≠ f(c) then
      • d := a f(b) f(c) / ((f(a) - f(b)) (f(a) - f(c)))
             + b f(a) f(c) / ((f(b) - f(a)) (f(b) - f(c)))
             + c f(a) f(b) / ((f(c) - f(a)) (f(c) - f(b)))  (inverse quadratic interpolation)
    • else
      • d := b - f(b) (b - a) / (f(b) - f(a))  (secant method)
    • end if
    • m := (a + b) / 2
    • if d does not lie between b and m then d := m end if  (bisection step)
    • if f(b) f(d) < 0 then a := b end if
    • c := b
    • b := d
  • end repeat
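
Putting these steps together, the following Python function is a sketch of the simplified algorithm above (the name brent_simple and the parameters tol and max_iter are chosen for illustration; Brent's published routine contains further refinements not described in this section):

    def brent_simple(f, a, b, tol=1e-12, max_iter=100):
        """Find a root of f in [a, b] using the iteration described above.
        Requires f(a) and f(b) to have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        c, fc = a, fa  # c holds the previous value of b
        for _ in range(max_iter):
            if fa != fb and fa != fc and fb != fc:
                # inverse quadratic interpolation
                d = (a * fb * fc / ((fa - fb) * (fa - fc))
                     + b * fa * fc / ((fb - fa) * (fb - fc))
                     + c * fa * fb / ((fc - fa) * (fc - fb)))
            else:
                # linear interpolation (secant method) through a and b
                d = b - fb * (b - a) / (fb - fa)
            m = (a + b) / 2
            if not (min(b, m) < d < max(b, m)):
                d = m  # fall back to a bisection step
            fd = f(d)
            if fb * fd < 0:  # f(b) and f(d) have opposite signs: replace a by b
                a, fa = b, fb
            c, fc = b, fb    # c becomes the previous value of b
            b, fb = d, fd
            if fb == 0 or abs(b - a) < tol:
                return b
        return b

For example, brent_simple(lambda x: x**2 - 2, 1.0, 2.0) returns an approximation of the square root of 2, about 1.4142.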

Properties

If the function f is well-behaved, then Brent's method will usually proceed by either inverse quadratic or linear interpolation, in which case it converges superlinearly. On the other hand, Brent's method always falls back on the bisection method if necessary, so it will never do worse than the bisection method; in particular, it is guaranteed to converge as long as f is continuous on [a, b].

Implementations

Brent (1973) published an Algol 60 implementation. The MATLAB function fzero also implements Brent's method. Other implementations of the algorithm (in C++, C, and Fortran flavours) can be found in the Numerical Recipes books.
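
For example, in Python (not mentioned in the article, and given here only as an illustration of calling such a library routine), SciPy's scipy.optimize.brentq also implements Brent's method:

    from scipy.optimize import brentq

    # Find the root of x**2 - 2 in the bracket [1, 2]; as above, the bracket
    # must satisfy f(1) * f(2) < 0.
    root = brentq(lambda x: x**2 - 2, 1.0, 2.0)
    print(root)  # approximately 1.4142135623730951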

References

  • Atkinson (1989). An Introduction to Numerical Analysis (2nd ed.), Section 2.8. John Wiley and Sons. ISBN 0-471-50023-2.
  • Brent (1973). Algorithms for Minimization without Derivatives, Chapter 4. Prentice-Hall, Englewood Cliffs, NJ.