The Journal of Online Mathematics and Its Applications, Volume 6 (2006)
The Chebyshev Equioscillation Theorem, Robert Mayans

9. Finding the Polynomial of Best Approximation.


The proof of the Chebyshev theorem does not produce a construction of the polynomial of best approximation. Indeed, a general algorithm to produce the best approximation for any continuous function is not feasible, because we would require an infinite amount of input data to specify an arbitrary continuous function. Still, there are methods to find the polynomial of best approximation for many cases, and these methods make critical use of the Chebyshev theorem.

For some very simple cases, ad hoc methods suffice to find the polynomial of best approximation. For example, we seek the best quadratic approximation to the function y = f(x) = sin²(x) on the interval [−π/2, π/2].


By examining the graph of y = sin²(x), it looks plausible that a good quadratic approximation p(x) would be a parabola symmetric about the y-axis, one that overestimates sin²(x) at x = 0 and at x = π/2, and underestimates sin²(x) the most at a point z near 1.

The Chebyshev theorem tells us that if we can find a quadratic that makes these three errors equal and maximal, then we have found the best approximation, because we have a five-point alternating set (−π/2, −z, 0, z, π/2). The solution is not hard: writing p(x) = b + cx² and E for the maximum error, we must solve:

    sin²(0) − b = −E                  (error at x = 0)
    sin²(π/2) − b − c(π/2)² = −E      (error at x = π/2)
    sin²(z) − b − cz² = E             (error at x = z)
    sin(2z) − 2cz = 0                 (the error has a critical point at z)

So b = E, c = (2/π)², and z is the root of sin(2x) − 2cx = 0 near 1. By Newton's method, we find that z = 1.056612311, and basic calculus shows that the five points 0, ±z, ±π/2 are the only maxima and minima of sin²(x) − p(x) on the interval [−π/2, π/2]. Thus p(x) = 0.405284735x² + 0.152818379, and the maximum error is E = 0.152818379. Note that since the alternating set has length 5, this is also the best polynomial approximation of degree ≤ 3. Other simple examples may be found in Atkinson (1989).
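
The numerical values above are easy to reproduce. For example, a few steps of Newton's method applied to g(z) = sin(2z) − 2cz recover z and E; the short Python sketch below does just that (a stand-alone illustration, not the applet's code).

    import math

    # Best quadratic approximation of sin^2(x) on [-pi/2, pi/2]: p(x) = b + c*x^2.
    c = (2 / math.pi) ** 2                 # c = (2/pi)^2 = 0.405284735...

    def g(z):
        # g(z) = sin(2z) - 2cz; its root near 1 locates the interior extremum z
        return math.sin(2 * z) - 2 * c * z

    def g_prime(z):
        return 2 * math.cos(2 * z) - 2 * c

    z = 1.0                                # starting guess near 1
    for _ in range(20):                    # Newton's method
        z = z - g(z) / g_prime(z)

    # From sin^2(z) - b - c*z^2 = E and b = E:  E = (sin^2(z) - c*z^2) / 2
    E = (math.sin(z) ** 2 - c * z ** 2) / 2
    print(z, E)                            # approximately 1.056612311 and 0.152818379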

The Remes algorithm

A more general method for finding the polynomial of best approximation is the Remes algorithm. This is a family of algorithms that searches for the best polynomial by iteratively improving an alternating set. Here is an outline, as given in Atkinson (1978); see also the article on the Remes Algorithm at MathWorld:

Step 1

Given a sequence of n + 2 nodes x_0, ..., x_{n+1}, solve the linear system:

    a_0 + a_1 x_j + ··· + a_n x_j^n + (−1)^j E = f(x_j),    j = 0, 1, ..., n + 1

This is a system of n + 2 equations, linear in a_0, a_1, ..., a_n, and E. The solution defines a new polynomial p(x) = a_0 + a_1 x + ··· + a_n x^n that approximates f.
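
In Python, for example, Step 1 might be carried out as follows (a minimal sketch of the formulation above, not the applet's implementation); each row of the matrix holds the powers of one node together with the alternating sign attached to E.

    import numpy as np

    def remes_step1(f, nodes):
        # Solve the Step 1 linear system for a_0, ..., a_n and E,
        # given n + 2 nodes (assumed distinct and listed in increasing order).
        m = len(nodes)                     # m = n + 2
        n = m - 2                          # degree of the approximating polynomial
        A = np.zeros((m, m))
        for j, x in enumerate(nodes):
            A[j, :n + 1] = [x ** k for k in range(n + 1)]   # 1, x, x^2, ..., x^n
            A[j, n + 1] = (-1) ** j                         # alternating sign on E
        b = np.array([f(x) for x in nodes])
        sol = np.linalg.solve(A, b)
        return sol[:n + 1], sol[n + 1]     # coefficients a_0, ..., a_n, and E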

Step 2

Find a new non-uniform alternating set z_0, z_1, ..., z_{n+1} of length n + 2, where each z_i is a local maximum or minimum of the error f − p, and where f(z_i) − p(z_i) has the same sign as f(x_i) − p(x_i). Also, exchange nodes if needed so that the point of maximum error is included among the z_i. Finally, rename the z's to x's and return to Step 1.
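
One simple way to carry out Step 2 is to sample the error f − p on a fine grid and, in each run of grid points between consecutive sign changes, keep the point of largest absolute error. The sketch below takes this approach (a simplification, not necessarily the applet's exchange rule); a careful implementation would also check that exactly n + 2 points are produced and that the point of maximum error is among them.

    import numpy as np

    def remes_step2(f, coeffs, grid):
        # Pick new nodes: in each run of grid points between sign changes of the
        # error f - p, keep the point where |f - p| is largest (simplified Step 2).
        p = lambda x: np.polyval(coeffs[::-1], x)   # coeffs are a_0, ..., a_n
        e = np.array([f(x) - p(x) for x in grid])
        new_nodes, start = [], 0
        for i in range(1, len(grid)):
            if np.sign(e[i]) != np.sign(e[i - 1]):  # sign change: close the current run
                k = start + np.argmax(np.abs(e[start:i]))
                new_nodes.append(grid[k])
                start = i
        k = start + np.argmax(np.abs(e[start:]))    # the final run
        new_nodes.append(grid[k])
        return new_nodes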

In practice, the Remes algorithm converges quickly if the initial approximation is sufficiently close to the polynomial of best approximation. For a full discussion, see Meinardus (1967).
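
As a quick check, the two sketches above can be combined and run on the earlier example. Since the best quadratic approximation to sin²(x) is also the best approximation of degree ≤ 3 (the alternating set has five points), it is convenient to take n = 3, so that a five-point alternating set appears directly; the computed error approaches the value 0.152818 found earlier. The driver below assumes the remes_step1 and remes_step2 sketches defined above.

    import numpy as np

    f = lambda x: np.sin(x) ** 2
    grid = np.linspace(-np.pi / 2, np.pi / 2, 2001)

    # Initial nodes: the five Chebyshev points of [-pi/2, pi/2] (n + 2 = 5 for n = 3).
    nodes = [(np.pi / 2) * np.cos(j * np.pi / 4) for j in range(4, -1, -1)]

    for _ in range(6):                     # a few Remes iterations
        coeffs, E = remes_step1(f, nodes)
        nodes = remes_step2(f, coeffs, grid)

    print(np.round(coeffs, 6), abs(E))
    # The coefficients approach 0.152818 + 0.405285 x^2 (the odd coefficients vanish),
    # and |E| approaches 0.152818, in agreement with the ad hoc solution above.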

The following applet implements a version of the Remes algorithm to find the polynomial of best approximation. As in a previous applet, you may either select a predefined function or sketch a function by dragging the mouse from left to right over the plotting area. Select the degree of the polynomial. Pressing Start generates an initial approximation. To get to the next iteration, press the Next button twice. The first time, the applet shows the points selected for the next non-uniform alternating set. The second time, the new polynomial and error function are calculated and displayed. The iteration halts if the approximation is exact, if the program cannot find an alternating set, if the linear system is singular or nearly singular, or if the norm of the error grows to more than five times the initial error.
