Differential and Integral Calculus
5. Taylor polynomial
Taylor polynomial
Definition: Taylor polynomial
Let $f$ be $n$ times differentiable at the point $x_0$. Then the Taylor polynomial \begin{align} P_n(x)&=P_n(x;x_0)\\ &=f(x_0)+f'(x_0)(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\dots +\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n\\ &=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k \end{align} is the best polynomial approximation of degree $n$ (with respect to the derivative) for the function $f$ close to the point $x_0$.
Note. The special case $x_0=0$ is often called the Maclaurin polynomial.
If $f$ is $n$ times differentiable at $x_0$, then the Taylor polynomial $P_n$ has the same derivatives at $x_0$ as the function $f$, up to the order $n$ (of the derivative).
The reason (case $x_0=0$): Let $P_n(x)=c_0+c_1x+c_2x^2+\dots +c_nx^n$, so that \begin{align} P_n'(x)&=c_1+2c_2x+3c_3x^2+\dots +nc_nx^{n-1}, \\ P_n''(x)&=2c_2+3\cdot 2 c_3x+\dots +n(n-1)c_nx^{n-2}, \\ P_n'''(x)&=3\cdot 2 c_3+\dots +n(n-1)(n-2)c_nx^{n-3}, \\ &\ \ \vdots \\ P_n^{(k)}(x)&=k!\,c_k + x\text{ terms}, \\ &\ \ \vdots \\ P_n^{(n)}(x)&=n!\,c_n, \\ P_n^{(n+1)}(x)&=0. \end{align}
In this way we obtain the coefficients one by one from the conditions $P_n^{(k)}(0)=f^{(k)}(0)$: \begin{align} c_0= P_n(0)=f(0) &\Rightarrow c_0=f(0), \\ c_1=P_n'(0)=f'(0) &\Rightarrow c_1=f'(0), \\ 2c_2=P_n''(0)=f''(0) &\Rightarrow c_2=\frac{1}{2}f''(0), \\ \vdots & \\ k!c_k=P_n^{(k)}(0)=f^{(k)}(0) &\Rightarrow c_k=\frac{1}{k!}f^{(k)}(0), \\ \vdots &\\ n!c_n=P_n^{(n)}(0)=f^{(n)}(0) &\Rightarrow c_n=\frac{1}{n!}f^{(n)}(0). \end{align} Starting from index $n+1$ we cannot pose any new conditions, since $P_n^{(n+1)}(x)=0$.
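The identity $P_n^{(k)}(0)=k!\,c_k$ can be checked numerically by differentiating a coefficient list term by term; a minimal Python sketch (the sample polynomial and the list representation are our own illustrative choices):

```python
from math import factorial

def poly_derivative(coeffs):
    """Differentiate a polynomial given by its coefficient list [c0, c1, ..., cn]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

# Sample polynomial P(x) = 5 + 4x + 3x^2 + 2x^3 + x^4.
coeffs = [5, 4, 3, 2, 1]

# The k-th derivative at 0 is the constant term of the k times
# differentiated list, and it equals k! * c_k.
p = coeffs
for k, c_k in enumerate(coeffs):
    assert p[0] == factorial(k) * c_k
    p = poly_derivative(p)
```

After the loop `p` is the $(n+1)$-st derivative, which is identically zero, matching the last line of the computation above.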
Taylor's Formula
If the derivative $f^{(n+1)}$ exists and is continuous on some interval $I$ containing the point $x_0$, then for $x\in I$ we have $f(x)=P_n(x)+E_n(x)$, and the error term satisfies \begin{align} E_n(x)=\frac{f^{(n+1)}(c)}{(n+1)!}(x-x_0)^{n+1} \end{align} at some point $c$ between $x_0$ and $x$. If there is a constant $M$ (independent of $n$) such that $|f^{(n+1)}(x)|\le M$ for all $x\in I$, then $E_n(x)\to 0$ as $n\to\infty$.
Proof omitted here (by mathematical induction, or by using integrals).
Examples of Maclaurin polynomial approximations: \begin{align} \frac{1}{1-x} &\approx 1+x+x^2+\dots +x^n =\sum_{k=0}^{n}x^k\\ e^x&\approx 1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\dots + \frac{1}{n!}x^n =\sum_{k=0}^{n}\frac{x^k}{k!}\\ \ln (1+x)&\approx x-\frac{1}{2}x^2+\frac{1}{3}x^3-\dots + \frac{(-1)^{n-1}}{n}x^n =\sum_{k=1}^{n}\frac{(-1)^{k-1}}{k}x^k\\ \sin x &\approx x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\dots +\frac{(-1)^n}{(2n+1)!}x^{2n+1} =\sum_{k=0}^{n}\frac{(-1)^k}{(2k+1)!}x^{2k+1}\\ \cos x &\approx 1-\frac{1}{2!}x^2+\frac{1}{4!}x^4-\dots +\frac{(-1)^n}{(2n)!}x^{2n} =\sum_{k=0}^{n}\frac{(-1)^k}{(2k)!}x^{2k} \end{align}
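These partial sums are easy to test numerically; a Python sketch (the function names and the sample point $x=0.5$ are our own choices):

```python
import math

def maclaurin_exp(x, n):
    """Partial sum of e^x: sum_{k=0}^{n} x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def maclaurin_sin(x, n):
    """Partial sum of sin x: sum_{k=0}^{n} (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n + 1))

# Near x = 0 only a few terms are needed for good accuracy.
x = 0.5
print(abs(maclaurin_exp(x, 10) - math.exp(x)))
print(abs(maclaurin_sin(x, 5) - math.sin(x)))
```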
Example
Which polynomial approximates the function $\sin x$ in the interval $[-1,1]$ so that the absolute value of the error is less than $10^{-6}$?
We use Taylor's Formula for $f(x)=\sin x$ at $x_0=0$. Then $|f^{(n+1)}(c)|\le 1$ independently of $n$ and the point $c$. Also, in the interval in question, we have $|x|\le 1$. The requirement will be satisfied (at least) if \begin{align} |E_n(x)|\le \frac{1}{(n+1)!}<10^{-6}. \end{align} This inequality must be solved by trying different values of $n$; it is true for $n\ge 9$.
The required approximation is achieved with $P_9(x)$, which for sine is the same as $P_{10}(x)$.
Check from graphs: $P_7(x)$ is not enough, so the theoretical bound is sharp!
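The error bound can also be checked numerically on a grid. This sketch assumes the interval $[-1,1]$ and tolerance $10^{-6}$ as in the example above; the grid density is an arbitrary choice:

```python
import math

def sin_taylor(x, n):
    """Maclaurin polynomial of sin x including terms up to x^(2n+1)."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n + 1))

def max_error(n):
    """Maximum |sin x - P(x)| over a grid on [-1, 1]."""
    xs = [i / 1000 for i in range(-1000, 1001)]
    return max(abs(math.sin(x) - sin_taylor(x, n)) for x in xs)

print(max_error(4))  # P_9: stays below the tolerance 1e-6
print(max_error(3))  # P_7: exceeds the tolerance near the end points
```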
Taylor polynomial and extreme values
If $f'(x_0)=0$, then also some higher derivatives may be zero: \begin{align} f'(x_0)=f''(x_0)=\dots =f^{(k-1)}(x_0)=0,\ \ f^{(k)}(x_0)\neq 0. \end{align} Then the behaviour of $f$ near $x_0$ is determined by the leading term (after the constant term $f(x_0)$) of the Taylor polynomial, that is, by \begin{align} \frac{f^{(k)}(x_0)}{k!}(x-x_0)^k. \end{align}
This leads to the following result:
Extreme values
Suppose that $f'(x_0)=\dots =f^{(k-1)}(x_0)=0$ and $f^{(k)}(x_0)\neq 0$. If $k$ is even, then $f$ has a local extreme value at $x_0$: a local minimum if $f^{(k)}(x_0)>0$, and a local maximum if $f^{(k)}(x_0)<0$. If $k$ is odd, then $f$ has no extreme value at $x_0$.
Newton's method
The first Taylor polynomial $P_1(x)=f(x_0)+f'(x_0)(x-x_0)$ is the same as the linearization of $f$ at the point $x_0$. This can be used in some simple approximations and numerical methods.
Newton's method
The equation $f(x)=0$ can be solved approximately by choosing a starting point $x_0$ (e.g. by looking at the graph) and defining \begin{align} x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)} \end{align} for $n=0,1,2,\dots$ This leads to a sequence $(x_n)$, whose terms usually give better and better approximations for a zero of $f$.
The recursion formula is based on the geometric idea of finding an approximate zero of $f$ by using its linearization (i.e. the tangent line): $x_{n+1}$ is the point where the tangent line drawn at $(x_n,f(x_n))$ intersects the $x$-axis.
Example
Find an approximate value of $\sqrt{2}$ by using Newton's method.
We use Newton's method for the function $f(x)=x^2-2$ and initial value $x_0=2$. The recursion formula becomes \begin{align} x_{n+1}=x_n-\frac{x_n^2-2}{2x_n}=\frac{x_n}{2}+\frac{1}{x_n}, \end{align} from which we obtain $x_1=1.5$, $x_2\approx 1.4167$, $x_3\approx 1.414216$, and so on.
By experimenting with these values, we find that the number of correct decimal places roughly doubles at each step, and $x_8$ already gives 100 correct decimal places, if the intermediate steps are calculated with enough precision.
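The recursion can be carried out with high-precision arithmetic, for example with Python's decimal module; a sketch (the precision 120 and the count of 8 iterations are our own choices):

```python
from decimal import Decimal, getcontext

getcontext().prec = 120  # work with 120 significant digits

x = Decimal(2)           # starting value x_0 = 2
for _ in range(8):
    x = x / 2 + 1 / x    # x_{n+1} = x_n / 2 + 1 / x_n

# Compare with a reference value of sqrt(2).
error = abs(x - Decimal(2).sqrt())
print(error)
```

With these settings the error after eight steps is far below $10^{-100}$, consistent with the digit-doubling behaviour.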
Taylor series
If the error term $E_n(x)$ in Taylor's Formula goes to zero as $n$ increases, then the limit of the Taylor polynomial $P_n(x)$ is the Taylor series of $f$ (= Maclaurin series in the case $x_0=0$).
The Taylor series of $f$ at $x_0$ is of the form \begin{align} \sum_{k=0}^{\infty}\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k. \end{align} This is an example of a power series.
The Taylor series can be formed as soon as $f$ has derivatives of all orders at $x_0$ and they are substituted into this formula. There are two problems related to this. First: Does the Taylor series converge for all values of $x$?
Answer: Not always; for example, the function $f(x)=\frac{1}{1+x^2}$ has a Maclaurin series (= geometric series) \begin{align} \frac{1}{1+x^2}=\sum_{k=0}^{\infty}(-1)^k x^{2k} \end{align} converging only for $|x|<1$, although the function is differentiable for all $x\in\mathbb{R}$.
Second: If the series converges for some $x$, then does its sum equal $f(x)$? Answer: Not always; for example, the function \begin{align} f(x)=\begin{cases} e^{-1/x^2}, & x\neq 0,\\ 0, & x=0, \end{cases} \end{align} satisfies $f^{(k)}(0)=0$ for all $k$ (elementary but difficult calculation). Thus its Maclaurin series is identically zero and converges to $f(x)$ only at $x=0$.
Conclusion: Taylor series should be studied carefully using the error terms. In practice, the series are formed by using some well known basic series.
Examples
\begin{align} \frac{1}{1-x} &= \sum_{k=0}^{\infty} x^k,\ \ |x|< 1 \\ e^x &= \sum_{k=0}^{\infty} \frac{1}{k!}x^k, \ \ x\in \mathbb{R} \\ \sin x &= \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} x^{2k+1}, \ \ x\in \mathbb{R} \\ \cos x &= \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k)!} x^{2k},\ \ x\in \mathbb{R} \\ (1+x)^r &= 1+\sum_{k=1}^{\infty} \frac{r(r-1)(r-2)\dots (r-k+1)}{k!}x^k, |x|<1 \end{align} The last one is called the Binomial Series and is valid for all $r\in\mathbb{R}$. If $r=n\in\mathbb{N}$, then all the coefficients are zero starting from index $k=n+1$, and in the beginning \begin{align} \frac{n(n-1)\dots (n-k+1)}{k!}=\binom{n}{k},\ \ 0\le k\le n. \end{align}
Compare this to the Binomial Theorem: \begin{align} (1+x)^n=\sum_{k=0}^{n}\binom{n}{k}x^k \end{align} for $n\in\mathbb{N}$.
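The binomial series can be evaluated by accumulating the coefficient $r(r-1)\dots (r-k+1)/k!$ incrementally; a sketch (the test values $r=1/2$, $x=0.2$ and $r=3$, $x=0.7$ are arbitrary choices):

```python
import math

def binomial_series(r, x, n):
    """Partial sum of (1+x)^r = 1 + sum_{k>=1} r(r-1)...(r-k+1)/k! * x^k."""
    total, coeff = 1.0, 1.0
    for k in range(1, n + 1):
        coeff *= (r - k + 1) / k  # updates the coefficient r(r-1)...(r-k+1)/k!
        total += coeff * x**k
    return total

# For r = 1/2 and |x| < 1 the series approaches sqrt(1 + x).
print(abs(binomial_series(0.5, 0.2, 30) - math.sqrt(1.2)))

# For a natural number r = 3 the coefficients vanish from k = 4 onward,
# so the partial sum reproduces the Binomial Theorem.
print(binomial_series(3, 0.7, 10))
```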
Power series
Definition: Power series
A power series is of the form \begin{align} \sum_{k=0}^{\infty}c_k(x-x_0)^k=c_0+c_1(x-x_0)+c_2(x-x_0)^2+\dots \end{align} The point $x_0$ is the centre and the numbers $c_k$ are the coefficients of the series.
There are only three essentially different cases:
Abel's Theorem.
- The power series converges only for $x=x_0$ (and then it consists of the constant $c_0$ only)
- The power series converges for all $x\in\mathbb{R}$
- The power series converges in an interval $]x_0-R,x_0+R[$ (and possibly at one or both of its end points), and diverges for all other values of $x$.
The number $R$ is the radius of convergence of the series. In the first two cases we say that $R=0$ or $R=\infty$, respectively.
Example
For which values of the variable $x$ does the power series \begin{align} \sum_{k=1}^{\infty}kx^k \end{align} converge?
We use the ratio test with $a_k=kx^k$. Then \begin{align} \left|\frac{a_{k+1}}{a_k}\right|=\frac{k+1}{k}|x|\to |x| \end{align} as $k\to\infty$. By the ratio test, the series converges for $|x|<1$, and diverges for $|x|>1$. In the border-line cases $x=\pm 1$ the general term of the series does not tend to zero, so the series diverges.
Result: The series converges for $-1<x<1$, and diverges otherwise.
Definition: Sum function
In the interval $]x_0-R,x_0+R[$, where the series converges, we can define a function $f$ by setting \begin{equation} \label{summafunktio} f(x) = \sum_{k=0}^{\infty} c_k(x-x_0)^k, \tag{1} \end{equation} which is called the sum function of the power series.
The sum function $f$ is continuous and differentiable in the interval $]x_0-R,x_0+R[$. Moreover, the derivative can be calculated by differentiating the series term by term: \begin{align} f'(x)=\sum_{k=1}^{\infty}kc_k(x-x_0)^{k-1}. \end{align} Note. The constant term $c_0$ disappears and the differentiated series starts with index $k=1$. The differentiated series converges in the same interval $]x_0-R,x_0+R[$; this may sound a bit surprising because of the extra coefficient $k$.
Example
Find the sum function of the power series \begin{align} \sum_{k=1}^{\infty}kx^k=x+2x^2+3x^3+4x^4+\dots \end{align}
This series is obtained by differentiating termwise the geometric series (with $|x|<1$) and multiplying by $x$. Termwise differentiation gives \begin{align} 1+2x+3x^2+4x^3+\dots &= D(1+x+x^2+x^3+x^4+\dots ) \\ &= \frac{d}{dx}\left( \frac{1}{1-x}\right) = \frac{1}{(1-x)^2}. \end{align} Multiplying by $x$ we obtain \begin{align} \sum_{k=1}^{\infty}kx^k=x+2x^2+3x^3+\dots =\frac{x}{(1-x)^2}, \end{align} which is valid for $|x|<1$.
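The closed form $x/(1-x)^2$ can be checked against partial sums; a sketch (the point $x=0.5$ and 60 terms are arbitrary choices):

```python
def partial_sum(x, n):
    """Partial sum sum_{k=1}^{n} k * x^k."""
    return sum(k * x**k for k in range(1, n + 1))

x = 0.5
exact = x / (1 - x)**2  # the claimed sum function; here 0.5 / 0.25 = 2
print(partial_sum(x, 60), exact)
```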
In the case $[a,b]\subset\, ]x_0-R,x_0+R[$ we can also integrate the sum function termwise: \begin{align} \int_a^b f(x)\,dx=\sum_{k=0}^{\infty}c_k\int_a^b (x-x_0)^k\,dx. \end{align} Often the definite integral can be extended up to the end points of the interval of convergence, but this is not always the case.
Example
Calculate the sum of the alternating harmonic series \begin{align} \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dots \end{align}
Let us first substitute $-x$ in place of $x$ in the geometric series. This yields \begin{align} \frac{1}{1+x}=\sum_{k=0}^{\infty}(-1)^k x^k,\ \ |x|<1. \end{align} By integrating both sides from $0$ to $1$ we obtain \begin{align} \ln 2=\int_0^1\frac{dx}{1+x}=\sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dots \end{align} Note. Extending the upper limit of integration all the way up to $x=1$ should be justified more rigorously here. We shall return to integration later in the course.
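The convergence of the partial sums to $\ln 2$ is slow but visible numerically; a sketch (the number of terms, 10000, is an arbitrary choice):

```python
import math

def alt_harmonic(n):
    """Partial sum sum_{k=1}^{n} (-1)^(k-1) / k."""
    return sum((-1)**(k - 1) / k for k in range(1, n + 1))

# By the alternating series estimate, the error is at most 1/(n+1).
print(abs(alt_harmonic(10000) - math.log(2)))
```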