13. Polynomial Collocation (Interpolation/Extrapolation) and Approximation¶
Last Revised on Wednesday February 17.
References:
Section 3.1 Data and Interpolating Functions of Sauer
Section 3.1 Interpolation and the Lagrange Polynomial of Burden & Faires
Section 4.1 of Cheney & Kincaid
13.1. Introduction¶
Numerical methods for dealing with functions require describing them, at least approximately, using a finite list of numbers, and the most basic approach is to approximate by a polynomial. (Other important choices are rational functions and “trigonometric polynomials”: sums of multiples of sines and cosines.) Such polynomials can then be used to approximate derivatives and integrals.
The simplest idea for approximating a function $f$ is collocation: choose $n+1$ node points $x_0, x_1, \dots, x_n$ and require the approximating polynomial $P$ to agree with $f$ at each of them, $P(x_i) = f(x_i)$. Collocation alone does not determine $P$: in fact there are infinitely many such polynomials, since given one, we can add to it any polynomial with zeros at all of the $n+1$ nodes $x_i$. Restricting the degree, however, gives a unique choice:
Theorem
Given $n+1$ distinct node values $x_i$, $0 \le i \le n$, and corresponding $y$-values $y_i$, there is a unique polynomial $P$ of degree at most $n$ with $P(x_i) = y_i$ for $0 \le i \le n$.
(Note: although the degree is typically exactly $n$, it can be less; for example, if the $n+1$ points happen to lie on a straight line, $P$ has degree at most one.)
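For instance, three collinear points determine a unique polynomial of degree at most two, but that polynomial is actually a straight line. A small sketch with made-up points (using numpy.polyfit, discussed below) shows the leading coefficient coming out as numerically zero:

```python
import numpy as np

# Three made-up points on the line y = 3x + 1: the unique interpolating
# polynomial of degree at most 2 is really just that line.
x_nodes = np.array([0., 1., 2.])
y_nodes = 3. * x_nodes + 1.
c = np.polyfit(x_nodes, y_nodes, 2)  # coefficients, highest degree first
print(c)  # the leading (quadratic) coefficient is zero up to rounding
```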
Historically there are several methods for finding this collocating polynomial; we start with perhaps the most straightforward one, the method of undetermined coefficients.
In general, this method starts by assuming that the function wanted is a sum of unknown multiples of a collection of known functions.
Here, the known functions are the monomials $x^n, x^{n-1}, \dots, x, 1$, so we seek $P(x) = c_0 x^n + c_1 x^{n-1} + \cdots + c_{n-1} x + c_n$, with the highest degree first to match the convention used by numpy.polyfit.
(Note: any of the coefficients $c_j$ can be zero, including $c_0$, in which case the actual degree is less than $n$.)
The unknown factors $c_0, c_1, \dots, c_n$ are the undetermined coefficients.
Next one states the problem as a system of equations for these undetermined coefficients, and solves them.
Here, we have the $n+1$ collocation equations $P(x_i) = y_i$; that is, $c_0 x_i^n + c_1 x_i^{n-1} + \cdots + c_{n-1} x_i + c_n = y_i$ for $0 \le i \le n$.
This is a system of $n+1$ simultaneous linear equations in the $n+1$ unknown coefficients, so existence and uniqueness come down to whether the corresponding matrix is non-singular, which is equivalent to the homogeneous case (all $y_i = 0$) having only the trivial solution $c_0 = c_1 = \cdots = c_n = 0$.
Back in terms of polynomials, this is the claim that the only polynomial of degree at most $n$ with zeros at all $n+1$ of the distinct nodes $x_i$ is the zero polynomial, which is true: a non-zero polynomial of degree at most $n$ can have at most $n$ roots.
The proof of this theorem is completely constructive; it gives the only numerical method we need, and it is the one implemented in Numpy through the pair of functions numpy.polyfit and numpy.polyval that we will use in examples.
(Aside: here as in many places, Numpy mimics the names and functionality of corresponding Matlab tools.)
Briefly, the algorithm is this (indexing from 0!):
Create the $(n+1) \times (n+1)$ matrix $V$ with elements $v_{i,j} = x_i^{\,n-j}$, $0 \le i, j \le n$, and the vector $y$ with elements $y_i$.
Solve $Vc = y$ for the vector of coefficients $c_j$ as above.
I use the name $V$ because this is known as a Vandermonde matrix.
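This construction can be checked directly: numpy.vander builds exactly this matrix (highest power first), and numpy.linalg.solve then gives the coefficients. A sketch with made-up nodes and values:

```python
import numpy as np

x_nodes = np.array([1., 2., 4., 5.])   # made-up nodes x_i
y_nodes = np.array([3., -1., 2., 0.])  # made-up values y_i
n = len(x_nodes) - 1

V = np.vander(x_nodes)           # V[i, j] = x_i**(n - j), highest power first
c = np.linalg.solve(V, y_nodes)  # solve Vc = y for the coefficients
print(c)
print(np.polyval(c, x_nodes))    # reproduces y_nodes, up to rounding
```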
13.2. Python example with polyfit and polyval from Numpy¶
As usual, I concoct a first example with a known correct answer, by using a polynomial as $f$, so that the collocation should reproduce it exactly (apart from rounding error).
n = 4
x_nodes = array([1., 2., 7., 5., 4.]) # They do not need to be in order
print(f"The x nodes 'x_i' are {x_nodes}")
y_nodes = empty_like(x_nodes)
for i in range(len(x_nodes)):
y_nodes[i] = f(x_nodes[i])
print(f"The y values at the nodes are {y_nodes}")
c = polyfit(x_nodes, y_nodes, n) # Note: n must be specified; we will see why soon
print(f"The coefficients of P are {c}")
P_values = polyval(c, x_nodes)
print(f"The values of P(x_i) are {P_values}")
13.2.1. Python presentation aside: pretty-print the result¶
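The cell for this aside is not reproduced above; one plausible sketch packages the term-by-term printing loop used in the examples into a reusable function (the helper name as_polynomial is chosen here for illustration):

```python
def as_polynomial(c):
    """Format coefficients c (highest degree first, as returned by polyfit)
    as a human-readable polynomial string."""
    n = len(c) - 1
    terms = [f"{c[j]:.4}x^{n-j}" for j in range(n - 1)]  # powers n down to 2
    terms.append(f"{c[-2]:.4}x")  # the degree-one term
    terms.append(f"{c[-1]:.4}")   # the constant term
    return " + ".join(terms)

print("P(x) =", as_polynomial([1., -2., 3., 4.]))
```

Joining with " + " keeps the formatter simple, at the cost of displaying negative coefficients as, e.g., "+ -2.0".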
13.3. Example 2: not a polynomial of degree at most $n$¶
# Reduce the degree of $P$ to at most 3:
n = 3
# Truncate the data to 4 values:
x_nodes = array([0., 2., 3., 4.])
y_nodes = empty_like(x_nodes)
for i in range(len(x_nodes)):
y_nodes[i] = f(x_nodes[i])
print(f"n is now {n}, the nodes are now {x_nodes}, with f(x_i) values {y_nodes}")
c = polyfit(x_nodes, y_nodes, n)
print(f"The coefficients of P are now {c}")
print("P(x) = ", end="")
for j in range(n-1): # count up to j=n-2 only, omitting the x and constant terms
print(f"{c[j]:.4}x^{n-j} + ", end="")
print(f"{c[-2]:.4}x + {c[-1]:.4}")
P_values = polyval(c, x_nodes)
print(f"The values of P(x_i) are now {P_values}")
There are several ways to assess the accuracy of this fit; we start graphically, and later consider the maximum and root-mean-square (RMS) errors.
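As a preview of those error measures, here is a sketch of computing them on a fine grid of sample points; the function f below is a stand-in quartic, since the notes' own definition of f is not shown in this chunk:

```python
import numpy as np

def f(x):
    return x**4 - 2.*x  # a stand-in quartic; not the f from the notes

x_nodes = np.array([0., 2., 3., 4.])
c = np.polyfit(x_nodes, f(x_nodes), 3)   # degree 3 cannot fit a quartic exactly

x_fine = np.linspace(0., 4., 401)        # fine grid on which to measure error
errors = f(x_fine) - np.polyval(c, x_fine)
max_error = np.max(np.abs(errors))       # worst-case (maximum norm) error
rms_error = np.sqrt(np.mean(errors**2))  # root-mean-square error
print(f"maximum error {max_error:.4}, RMS error {rms_error:.4}")
```

Note that the fit is still exact at the four nodes themselves; the error appears only between them.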

13.4. Example 3: not a polynomial at all¶
n = 3
x_g_nodes = linspace(a_g, b_g, n+1)
y_g_nodes = empty_like(x_g_nodes)
for i in range(len(x_g_nodes)):
y_g_nodes[i] = g(x_g_nodes[i])
print(f"{n=}")
print(f"node x values {x_g_nodes}")
print(f"node y values {y_g_nodes}")
c_g = polyfit(x_g_nodes, y_g_nodes, n)
print("P(x) = ", end="")
for j in range(n-1): # count up to j=n-2 only, omitting the x and constant terms
print(f"{c_g[j]:.4}x^{n-j} + ", end="")
print(f"{c_g[-2]:.4}x + {c_g[-1]:.4}")
P_values = polyval(c_g, x_g_nodes)
print(f"The values of P(x_i) are {P_values}")
There are several ways to assess the accuracy of this fit. We start graphically, and later consider the maximum and root-mean-square (RMS) errors.

13.5. Why the maximum degree must be specified in polyfit¶
The function numpy.polyfit can also give a polynomial of lower degree, in which case it is in general no longer an exact fit at the nodes; we consider this next, returning to the first example of the quartic $f$.
m = 1 # m is the maximum degree of the polynomial: m < n now
c_m = polyfit(x_nodes, y_nodes, m)
print(f"The coefficients of P are {c_m}")
P_m_x = polyval(c_m, x)
# Try to print P(x) again:
print(f"P_{m}(x) = ", end="")
for j in range(m-1):
print(f"{c_m[j]:.4}x^{m-j} + ", end="")
print(f"{c_m[-2]:.4}x + {c_m[-1]:.4}")
print(f"The values of P_{m}(x_i) are {polyval(c_m, x_nodes)}")
figure(figsize=[12,8])
title("With $f(x)$")
plot(x, f(x), label="$y=f(x)$")
plot(x_nodes, y_nodes, "*", label="nodes")
plot(x, P_m_x, label=f"y=$P_{m}(x)$")
legend()
grid(True)
13.6. What polynomial of degree $m$ is given by polyfit(x, y, m)? Least squares fitting.¶
Another form of polynomial approximation is to find the polynomial of degree at most $m$ that fits the data as well as possible overall, without requiring it to pass through every point.
Given a function $f$ and node values $y_i = f(x_i)$, $0 \le i \le n$, two natural measures of the quality of the fit are:
Minimize the worst case error, by minimizing the largest of the absolute errors $|P(x_i) - y_i|$; that is, minimizing the maximum norm $\max_i |P(x_i) - y_i|$, often called Chebyshev approximation.
Minimize the average error in the root-mean-square sense; that is, minimizing the Euclidean norm $\sqrt{\sum_i (P(x_i) - y_i)^2}$.
This time, the Euclidean norm or RMS error method is by far the easiest, giving a simple linear algebra solution.
For now, a quick sketch of the method; the justification will come soon.
If above we seek a polynomial of degree at most $m < n$, we get the same collocation equations but with only the $m+1$ coefficients $c_0, \dots, c_m$, so the matrix of the system now has $n+1$ rows but only $m+1$ columns: the system is overdetermined, and in general has no exact solution.
At this point the “+1” bits are annoying, so we will change notation and consider a more general setting:
Given an $N \times m$ matrix $A$ and a vector $b$ of length $N$, with $N > m$, find the vector $x$ of length $m$ that minimizes the Euclidean norm of the residual, $\|Ax - b\|_2$, for the system $Ax = b$,
which is now an overdetermined system: more equations than unknowns.
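In Numpy this minimization is handled by numpy.linalg.lstsq, and least squares is also what polyfit computes when the requested degree is less than the number of points minus one. A sketch with made-up data, fitting a line ($m = 1$) to five points:

```python
import numpy as np

x_nodes = np.array([1., 2., 4., 5., 7.])  # made-up data: five points,
y_nodes = np.array([2., 3., 5., 7., 8.])  # roughly but not exactly on a line
m = 1

A = np.vander(x_nodes, m + 1)                    # 5 x 2 matrix with columns x_i, 1
c, *_ = np.linalg.lstsq(A, y_nodes, rcond=None)  # minimize ||Ac - y||_2
print(c)
print(np.allclose(c, np.polyfit(x_nodes, y_nodes, m)))  # matches polyfit
```

A characteristic of the least squares solution is that the residual vector is orthogonal to the columns of $A$ (the normal equations), which is what the direct linear-algebra justification to come will show.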
This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International