Section 6.2 Properties of Power Series
Combining Power Series.
Power series can be combined to form new ones more or less like polynomials, with the one new detail that one needs to keep track of the radius of convergence. The results will be stated for the basic case with center \(a=0\text{,}\) but they all carry over to other centers in an intuitive way.
Theorem 6.2.1.
Consider any two power series \(\sum_{n=0}^\infty c_n x^n\) and \(\sum_{n=0}^\infty d_n x^n\text{,}\) which respectively converge to \(f(x) = \sum_{n=0}^\infty c_n x^n\) with radius of convergence \(R_1\) and \(g(x) = \sum_{n=0}^\infty d_n x^n\) with radius of convergence \(R_2\text{.}\) Then
Their sum \(\sum_{n=0}^\infty (c_n + d_n) x^n\) converges to \(f(x) + g(x)\text{,}\) with radius of convergence at least the minimum of \(R_1\) and \(R_2\text{.}\)
Similarly, their difference \(\sum_{n=0}^\infty (c_n - d_n) x^n\) converges to \(f(x) - g(x)\text{,}\) again with radius of convergence at least the minimum of \(R_1\) and \(R_2\text{.}\)
Any scalar multiple \(\sum_{n=0}^\infty b c_n x^n\) converges to \(b f(x)\text{.}\)
Multiplying by \(x^m\) for any natural number \(m\) gives \(\sum_{n=0}^\infty c_n x^{n+m}\text{,}\) which converges to \(x^m f(x)\text{.}\)
Composing with any monomial \(b x^m\) for any natural number \(m\) gives \(\sum_{n=0}^\infty c_n (b x^m)^n = \sum_{n=0}^\infty (b^n c_n) x^{mn}\text{,}\) a power series that converges to the composition \(f(b x^m)\text{.}\)
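For instance, composing the geometric series \(\frac{1}{1-x} = \sum_{n=0}^\infty x^n\) with the monomial \(-x^2\) (the last item above, with \(b = -1\) and \(m = 2\)) produces a series for \(\frac{1}{1+x^2}\text{:}\)

```latex
\begin{equation*}
\frac{1}{1+x^2} = \sum_{n=0}^\infty (-x^2)^n = \sum_{n=0}^\infty (-1)^n x^{2n}
= 1 - x^2 + x^4 - x^6 + \cdots, \quad |x| < 1.
\end{equation*}
```

The new series converges exactly when \(|-x^2| < 1\text{,}\) so the radius of convergence is still \(1\text{.}\)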
The results above can be used to multiply a power series by a polynomial, and one can in fact go further and multiply two power series, but as with derivatives and integrals, handling products of functions is a bit more involved than sums, differences and constant multiples!
\begin{equation*}
\begin{split}
f(x) g(x) &= \left( \sum_{n=0}^\infty c_n x^n \right) \left( \sum_{n=0}^\infty d_n x^n \right)\\
&= (c_0 + c_1 x + c_2 x^2 + \cdots)(d_0 + d_1 x + d_2 x^2 + \cdots)\\
&= c_0d_0 + (c_0d_1 + c_1d_0)x + (c_0 d_2 + c_1d_1 + c_2d_0)x^2 + \cdots
\end{split}
\end{equation*}
The pattern is that the coefficient of \(x^n\) is the sum of all coefficients \(c_i d_j\) with \(i+j = n\text{;}\) that is \(f(x) g(x) = \sum_{n=0}^\infty p_n x^n\) with \(p_n = \sum_{i=0}^n c_i d_{n-i}\text{.}\)
The radius of convergence is the minimum of those for the two series being multiplied together.
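As a check of this formula, squaring the geometric series (so \(c_n = d_n = 1\) for all \(n\)) gives \(p_n = \sum_{i=0}^n 1 \cdot 1 = n+1\text{,}\) which matches the known series for \(\frac{1}{(1-x)^2}\text{:}\)

```latex
\begin{equation*}
\frac{1}{(1-x)^2} = \left( \sum_{n=0}^\infty x^n \right)^2
= \sum_{n=0}^\infty (n+1) x^n
= 1 + 2x + 3x^2 + 4x^3 + \cdots, \quad |x| < 1.
\end{equation*}
```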
Calculus with Power Series.
The derivatives and integrals of power series work just as for polynomials, term by term, and the radius of convergence is unchanged:
Theorem 6.2.2.
For a power series giving \(f(x) = \sum_{n=0}^\infty c_n x^n\) with radius of convergence \(R\text{,}\)
\(f'(x) = c_1 + 2c_2 x + 3c_3 x^2 + \cdots
= \sum_{n=1}^\infty n c_n x^{n-1}
= \sum_{n=0}^\infty (n+1)c_{n+1} x^n\) with this series having the same radius of convergence \(R\text{.}\)
\(\displaystyle \int f(x)\ dx = C + c_0 x + \frac{c_1}{2} x^2 + \frac{c_2}{3} x^3 + \cdots
= C + \sum_{n=1}^\infty \frac{c_{n-1}}{n} x^n\) with this series again having the same radius of convergence \(R\text{.}\)
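For example, integrating the geometric series \(\frac{1}{1-x} = \sum_{n=0}^\infty x^n\) term by term (with \(C = 0\) chosen by evaluating both sides at \(x=0\)) gives a series for \(-\ln(1-x)\text{,}\) still with radius of convergence \(R = 1\text{:}\)

```latex
\begin{equation*}
-\ln(1-x) = \int \frac{dx}{1-x} = \sum_{n=1}^\infty \frac{x^n}{n}
= x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots, \quad |x| < 1.
\end{equation*}
```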
The second of these in particular will be extremely useful: it allows us to evaluate integrals like
\(\displaystyle \int e^{-x^2} dx\) and
\(\displaystyle \int \frac{\sin x}{x} dx\) that cannot be handled by any of the methods seen in
Chapter 3 or a previous calculus course.
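For example, substituting \(-x^2\) into the series \(e^u = \sum_{n=0}^\infty \frac{u^n}{n!}\) and then integrating term by term gives

```latex
\begin{equation*}
\int e^{-x^2}\, dx
= \int \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{n!}\, dx
= C + \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)\, n!}
= C + x - \frac{x^3}{3} + \frac{x^5}{10} - \cdots,
\end{equation*}
```

valid for all \(x\text{,}\) since the series for \(e^u\) has infinite radius of convergence.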
Uniqueness of the Coefficients.
One final, intuitive observation: if there is a power series for a function, then it is unique. That is, there can only be one possible set of values for the coefficients
\(c_0, c_1, c_2, \dots\text{.}\) One way to verify this leads into the topic of the next section,
Section 6.3,
Taylor and Maclaurin Series.
Theorem 6.2.3.
If two power series \(\sum_{n=0}^\infty c_n (x-a)^n\) and \(\sum_{n=0}^\infty d_n (x-a)^n\) both converge to the same function \(f(x)\) for all \(x\) with \(|x-a| < R\) for some \(R > 0\text{,}\) then their coefficients are the same: \(c_n = d_n\) for all \(n \geq 0\text{,}\) so they are actually the same series.
Proof.
Subtracting the two series shows that
\begin{equation}
\sum_{n=0}^\infty e_n (x-a)^n = e_0 + e_1 (x-a) + e_2 (x-a)^2 + \cdots = 0\tag{6.2.1}
\end{equation}
where \(e_n = c_n - d_n\text{;}\) thus it suffices to show that all the \(e_n = 0\) in this case.
Evaluating Equation
(6.2.1) for
\(x=a\) gives
\(e_0 + e_1 \cdot 0 + e_2 \cdot 0^2 + \cdots = e_0 = 0\text{.}\)
Then the derivative of Eq.
(6.2.1) gives
\begin{equation}
e_1 + 2e_2 (x-a) + 3e_3(x-a)^2 \cdots = 0\tag{6.2.2}
\end{equation}
and evaluating this for \(x=a\) gives \(e_1 + 2e_2 \cdot 0 + 3e_3 \cdot 0^2 + \cdots = e_1 = 0\text{.}\)
Differentiating again and evaluating at \(x=a\) similarly gives \(2 e_2 + 3 \cdot 2\, e_3 \cdot 0 + 4 \cdot 3\, e_4 \cdot 0^2 + \cdots = 2 e_2 = 0\text{,}\) so \(e_2 = 0\text{,}\) and so on: all coefficients \(e_n = c_n - d_n\) are zero as claimed, so \(c_n = d_n\) and the two series for \(f(x)\) are actually the same.
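The pattern in this proof can be stated in general: differentiating Equation (6.2.1) \(n\) times kills every term of degree below \(n\text{,}\) and evaluating at \(x=a\) then kills every term of degree above \(n\text{,}\) leaving

```latex
\begin{equation*}
\left. \frac{d^n}{dx^n} \sum_{k=0}^\infty e_k (x-a)^k \right|_{x=a}
= n!\, e_n = 0,
\quad \text{so} \quad e_n = 0 \text{ for all } n \geq 0.
\end{equation*}
```

This same computation, applied to a convergent series rather than to zero, is what produces the coefficient formula in the next section.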
Study Guide.
Theorems 2, 4, 5
Examples 4, 6, 9, 10
Checkpoints 4, 6, 8, 9
and one or several exercises from each of the following groups: 63 and 64, 69–71, 87 and 88, 89 and 90.
Note that we de-emphasize interval of convergence, so when that is asked for, it is sufficient to determine the center and the radius of convergence.
Also, we will not do much with products of power series: integrals of power series are by far the most important new idea in this section.
openstax.org/books/calculus-volume-2/pages/6-2-properties-of-power-series