5.3. Choosing the collocation points: the Chebyshev method

Co-authored with Stephen Roberts of the Australian National University.


In some situations, one can choose the points \(x_i\) to use in polynomial collocation (these points are also called the nodes), and a natural objective is to minimise the worst-case error over some interval \([a,b]\) on which the approximation is to be used. As discussed previously, the best one can do in most cases is to minimise the maximum absolute value of the polynomial \(w_{n+1}(x) := \prod_{i=0}^n (x-x_i)\) arising in the error formula.

The intuitive idea of using equally spaced points is not optimal: \(w_{n+1}\) reaches considerably larger values between the outermost pairs of nodes than elsewhere, as seen with the example of the Witch of Agnesi in the section Error Formulas for Polynomial Collocation. Better intuition suggests that moving the nodes a bit closer together in these regions of large error will reduce the maximum error there without increasing it too much elsewhere, lowering the overall maximum error. Further, it would seem that this strategy is possible so long as the maximum amplitude in some of the intervals between the nodes is larger than in others: the endpoints \(a\) and \(b\) need not be nodes, so there are \(n+2\) such intervals.
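
To make this concrete, here is a minimal sketch (assuming NumPy; the helper name `w` is mine) that measures the largest value of \(|w_{n+1}(x)|\) in each sub-interval for equally spaced nodes on \([-1,1]\).

```python
import numpy as np

def w(x, nodes):
    """Evaluate w_{n+1}(x) = prod_i (x - x_i) at the points x."""
    return np.prod([x - xi for xi in nodes], axis=0)

n = 8
nodes = np.linspace(-1.0, 1.0, n + 1)   # equally spaced nodes
x = np.linspace(-1.0, 1.0, 5001)        # fine grid for estimating maxima

# Sub-intervals are bounded by consecutive nodes (plus the endpoints, when
# they are not nodes themselves; here they are, so there are n intervals).
edges = np.unique(np.concatenate(([-1.0, 1.0], nodes)))
for left, right in zip(edges[:-1], edges[1:]):
    inside = (x >= left) & (x <= right)
    print(f"[{left:+.2f}, {right:+.2f}]: max |w| = {np.abs(w(x[inside], nodes)).max():.2e}")
```

Running this shows the maxima in the outermost intervals exceeding those near the centre by more than an order of magnitude.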

This suggests the conjecture that the smallest possible maximum amplitude of \(w_{n+1}(x)\) on an interval \([a,b]\) will be obtained for a set of nodes such that \(|w_{n+1}(x)|\) takes its maximum value \(n+2\) times, once in each of the intervals separated by the nodes. Indeed this is true, and the nodes achieving this result are the so-called Chebyshev points or Chebyshev nodes, given by the simple formula

\[ x_i = \frac{a+b}{2} + \frac{b-a}{2}\cos \left( \frac{2i+1}{2n+2}\pi\right), \qquad 0 \leq i \leq n \tag{5.4} \]
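
A small sketch of formula (5.4) follows (assuming NumPy; the names `chebyshev_nodes` and `max_abs_w` are mine), comparing the resulting maximum amplitude of \(w_{n+1}\) with that for equally spaced nodes; the value \(1/2^n\) printed for comparison is derived later in this section.

```python
import numpy as np

def chebyshev_nodes(a, b, n):
    """The n+1 Chebyshev nodes of formula (5.4) on [a, b]."""
    i = np.arange(n + 1)
    return (a + b) / 2 + (b - a) / 2 * np.cos((2 * i + 1) / (2 * n + 2) * np.pi)

def max_abs_w(nodes, a, b):
    """Estimate the maximum of |w_{n+1}(x)| over [a, b] on a fine grid."""
    x = np.linspace(a, b, 5001)
    return np.abs(np.prod([x - xi for xi in nodes], axis=0)).max()

a, b, n = -1.0, 1.0, 8
print("equally spaced:", max_abs_w(np.linspace(a, b, n + 1), a, b))
print("Chebyshev:     ", max_abs_w(chebyshev_nodes(a, b, n), a, b))
print("1/2^n:         ", 1 / 2**n)   # the optimal value, derived below
```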

To understand this result, consider the case where the interval of interest is \([-1,1]\), so that these special nodes are \(\cos \left( \frac{2i+1}{2n+2} \pi \right)\). The general case then follows by using the change of variables \(x=(a+b)/2+t(b-a)/2\), which maps \(t \in [-1,1]\) to \(x \in [a,b]\). The reason that this works is that these are the roots of the function

\[ T_{n+1}(x):=\cos ((n+1)\cos ^{-1}x) \]

which turns out to be a polynomial of degree \(n+1\) that takes its maximum absolute value of 1 at the \(n+2\) points \(\cos \left( \frac{i}{n+1}\pi \right), \; 0\leq i\leq n+1\).
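
These claims are easy to confirm numerically; a quick check (assuming NumPy; the function name `T` is mine) evaluates \(T_{n+1}\) at the Chebyshev nodes, where it should vanish, and at the points \(\cos \left( \frac{i}{n+1}\pi \right)\), where it should alternate between \(+1\) and \(-1\).

```python
import numpy as np

def T(k, x):
    """T_k(x) = cos(k arccos x), valid for -1 <= x <= 1."""
    return np.cos(k * np.arccos(x))

n = 5
i = np.arange(n + 1)
nodes = np.cos((2 * i + 1) / (2 * n + 2) * np.pi)    # the Chebyshev nodes
peaks = np.cos(np.arange(n + 2) / (n + 1) * np.pi)   # the n+2 extreme points

print(np.round(T(n + 1, nodes), 15))   # all (near) zero: the nodes are roots
print(np.round(T(n + 1, peaks)))       # alternating +1 and -1
```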

There are a number of claims here: most are simple consequences of the definition and what is known about the roots and extreme values of cosine. The one surprising fact is that \(T_n(x)\) is a polynomial of degree \(n\), known as a Chebyshev polynomial. The notation comes from an alternative transliteration, Tchebyshev, of this Russian name.

This can be checked by induction. The first few cases are easy to check: \(T_0(x)=1\), \(T_1(x)=x\), and, writing \(x=\cos\theta\), \(T_2(x)=\cos 2\theta =2\cos ^2\theta -1=2x^2-1\). In general, let \(\theta =\cos ^{-1}x\) so that \(\cos \theta =x\). Then trigonometric identities give

\[\begin{eqnarray*} T_{n+1}(x) &=&\cos (n+1)\theta \\ &=&\cos n\theta \cos \theta -\sin n\theta \sin \theta \\ &=&T_n(x)x-\sin n\theta \sin \theta \end{eqnarray*}\]

and similarly

\[\begin{eqnarray*} T_{n-1}(x) &=&\cos (n-1)\theta \\ &=&\cos n\theta \cos \theta +\sin n\theta \sin \theta \\ &=&T_n(x)x+\sin n\theta \sin \theta \end{eqnarray*}\]

Thus \(T_{n+1}(x)+T_{n-1}(x)=2xT_n(x)\) or

\[T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)\]

Since \(T_0\) and \(T_1\) are known to be polynomials, the same follows for each successive \(n\) from this formula. The induction also shows that

\[\begin{equation*} T_n(x)=2^{n-1}x^n+\quad \text{terms involving lower powers of }x \end{equation*}\]

so in particular the degree is \(n\).
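
The recurrence gives a convenient way to generate the coefficients of \(T_n\) and check the leading coefficient \(2^{n-1}\); here is a minimal sketch (assuming NumPy; the function name `chebyshev_coeffs` is mine).

```python
import numpy as np

def chebyshev_coeffs(n):
    """Coefficients of T_n (lowest degree first) via T_{k+1} = 2x T_k - T_{k-1}."""
    if n == 0:
        return np.array([1.0])
    T_prev, T_curr = np.array([1.0]), np.array([0.0, 1.0])  # T_0 = 1, T_1 = x
    for _ in range(n - 1):
        # 2x * T_curr: shift the coefficients up one degree and double them
        T_next = np.concatenate(([0.0], 2.0 * T_curr))
        T_next[: len(T_prev)] -= T_prev
        T_prev, T_curr = T_curr, T_next
    return T_curr

for n in range(1, 6):
    c = chebyshev_coeffs(n)
    print(n, c, "leading coefficient:", c[-1], "= 2^(n-1) =", 2 ** (n - 1))
```

For example, it reproduces \(T_3(x) = 4x^3 - 3x\), with leading coefficient \(4 = 2^2\).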

With this information, the error formula can be written in a special form. Firstly, \(w_{n+1}\) is then a polynomial of degree \(n+1\) with the same roots as \(T_{n+1}\), so it is a multiple of the latter function. Secondly, the leading coefficient of \(w_{n+1}\) is 1, compared to \(2^n\) for \(T_{n+1}\), so \(w_{n+1} = T_{n+1}/2^n\). Finally, the maximum absolute value of \(w_{n+1}\) on \([-1,1]\) is then \(1/2^n\), and we have the result that

Theorem 5.4

When a polynomial approximation \(p(x)\) to a function \(f(x)\) on the interval \([-1,1]\) is constructed by collocation at the roots of \(T_{n+1}\), the error is bounded by

\[|f(x)-p(x)|\leq \frac 1{2^n(n+1)!}\max_{-1\leq t\leq 1}|f^{(n+1)}(t)|\]

When the interval is \([a,b]\) and the collocation points are the appropriately rescaled Chebyshev points as given in (5.4), the error bound becomes

\[|f(x)-p(x)|\leq \frac{(b-a)^{n+1}}{2^{2n+1}(n+1)!}\max_{a\leq x\leq b}|f^{(n+1)}(x)|\]
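
To see the theorem's practical effect, the sketch below (assuming NumPy; the helper `lagrange_eval` is mine, and the classic Runge function \(1/(1+25x^2)\) stands in for the Witch of Agnesi example mentioned earlier) compares the maximum interpolation error for equally spaced and Chebyshev nodes.

```python
import numpy as np

def lagrange_eval(x, nodes, values):
    """Evaluate the polynomial interpolating (nodes, values) at x, in Lagrange form."""
    p = np.zeros_like(x)
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        others = np.delete(nodes, j)
        p += yj * np.prod([(x - xk) / (xj - xk) for xk in others], axis=0)
    return p

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)   # the classic Runge example

n = 10
x = np.linspace(-1.0, 1.0, 2001)
i = np.arange(n + 1)
cheb = np.cos((2 * i + 1) / (2 * n + 2) * np.pi)
equi = np.linspace(-1.0, 1.0, n + 1)

for name, nodes in [("equally spaced", equi), ("Chebyshev", cheb)]:
    err = np.abs(f(x) - lagrange_eval(x, nodes, f(nodes))).max()
    print(f"{name}: max error = {err:.3f}")
```

With \(n = 10\), the equally spaced nodes give a maximum error near 2 (the Runge phenomenon), while the Chebyshev nodes bring it down to roughly 0.1.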

This method works well in many cases. Further, it is known that any continuous function on any interval \([a,b]\) can be approximated arbitrarily well by polynomials, in the sense that the maximum error over the whole interval can be made as small as one likes (this is the Weierstrass Approximation Theorem). However, collocation at these Chebyshev nodes will not work for all continuous functions: indeed, no choice of points will work for all cases, as is made precise in Theorem 6 on page 288 of [Kincaid and Cheney, 1990]. One way to understand the problem is that the error bound relies on derivatives of ever higher order, so it does not even apply to some continuous functions.

This suggests a new strategy: break the interval \([a,b]\) into smaller intervals, approximate on each of these by a polynomial of some small degree, and join these polynomial pieces together. Hopefully, the errors will then depend on only a few derivatives, and so will be more controllable, while using enough nodes and small enough intervals will allow the errors to be made as small as desired. This fruitful idea is dealt with next.