Applications of Taylor Series

We started studying Taylor Series because we said that polynomial functions are easy and that if we could find a way of representing complicated functions as series ("infinite polynomials") then maybe some properties of functions would be easy to study too. In this section, we'll show you a few ways in which Taylor series can make life easier.


Evaluating definite integrals

Remember that we've said that some functions have no antiderivative which can be expressed in terms of familiar functions. This makes evaluating definite integrals of these functions difficult because the Fundamental Theorem of Calculus cannot be used. However, if we have a series representation of a function, we can oftentimes use that to evaluate a definite integral.

Here is an example. Suppose we want to evaluate the definite integral

\[ 
\int_0^1 \sin(x^2)~dx  \]

The integrand has no antiderivative expressible in terms of familiar functions. However, we know how to find its Taylor series: we know that

\[ 
\sin t = t - \frac{t^3}{3!} + \frac{t^5}{5!} - \frac{t^7}{7!} + \ldots 
 \]

Now if we substitute $  t = x^2  $ , we have

\[ 
\sin(x^2) = x^2 - \frac{x^6}{3!} + \frac{x^{10}}{5!} - 
\frac{x^{14}}{7!} + \ldots 
 \]

Although we cannot antidifferentiate the function in terms of familiar functions, we can antidifferentiate its Taylor series term by term:


\begin{eqnarray*} 
\int_0^1 \sin(x^2)~dx & = & \int_0^1 (x^2 - \frac{x^6}{3!} + 
\frac{x^{10}}{5!} - \frac{x^{14}}{7!} + \ldots)~dx \\ \\ 
& = & (\frac{x^3}{3} - \frac{x^7}{7\cdot 3!} + \frac{x^{11}}{11\cdot 
5!} - \frac{x^{15}}{15\cdot 7!} +\ldots)|_0^1 \\ \\ 
& = & \frac 13 - \frac{1}{7\cdot 3!} + \frac{1}{11\cdot 5!} - 
\frac{1}{15\cdot 7!} + \ldots 
\end{eqnarray*}

Notice that this is an alternating series whose terms decrease to zero, so we know that it converges. Adding up the first four terms gives approximately 0.310268, and by the alternating series estimate the error is smaller than the first omitted term, $  \frac{1}{19\cdot 9!} \approx 1.5\times 10^{-7}  $ .
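
Here is a minimal sketch, in Python, of this computation; the general term of the series is $  (-1)^k/\big((4k+3)(2k+1)!\big)  $ , and the function name is ours.

\begin{verbatim}
from math import factorial

def partial_sum(n_terms):
    # k-th term of the series: (-1)^k / ((4k+3) * (2k+1)!)
    return sum((-1)**k / ((4*k + 3) * factorial(2*k + 1))
               for k in range(n_terms))

for n in range(1, 6):
    print(n, partial_sum(n))
# With four terms the error is already below 1/(19*9!), about 1.5e-7.
\end{verbatim}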


Understanding asymptotic behaviour

Sometimes, a Taylor series can tell us useful information about how a function behaves in an important part of its domain. Here is an example to demonstrate this.

A famous fact from electricity and magnetism says that a charge q generates an electric field whose strength is inversely proportional to the square of the distance from the charge. That is, at a distance r away from the charge, the electric field is

\[ 
E = \frac{kq}{r^2} 
 \]

where k is some constant of proportionality.

Oftentimes an electric charge is accompanied by an equal and opposite charge nearby. Such an object is called an electric dipole. To describe this, we will put a charge q at the point $  x = d  $ and a charge -q at $  x 
= -d  $ .

Along the x axis, the strength of the electric field is the sum of the fields produced by each of the two charges. In particular,

\[ 
E = \frac{kq}{(x-d)^2} - \frac{kq}{(x+d)^2} 
 \]

If we are interested in the electric field far away from the dipole, we can consider what happens for values of x much larger than d. We will use a Taylor series to study the behaviour in this region.

\[ 
E = \frac{kq}{(x-d)^2} - \frac{kq}{(x+d)^2} = \frac{kq}{x^2(1-\frac 
dx)^2} - \frac{kq}{x^2(1+\frac dx)^2} 
 \]

Remember that the geometric series has the form

\[ 
\frac 1{1-u} = 1 + u + u^2 + u^3 + u^4 + \ldots 
 \]

If we differentiate this series, we obtain

\[ 
\frac{1}{(1-u)^2} = 1 + 2u + 3u^2 + 4u^3 + \ldots 
 \]

Into this expression, we can substitute $  u = \frac dx  $ (which is valid since we are assuming x is much larger than d, so $  |\frac dx| < 1  $ ) to obtain

\[ 
\frac{1}{(1-\frac dx)^2} = 1+\frac{2d}{x} + \frac{3d^2}{x^2} + 
\frac{4d^3}{x^3} + \ldots 
 \]

In the same way, if we substitute $  u = -\frac dx  $ , we have

\[ 
\frac{1}{(1+\frac dx)^2} = 1-\frac{2d}{x} + \frac{3d^2}{x^2} - 
\frac{4d^3}{x^3} + \ldots 
 \]

Now putting this together gives


\begin{eqnarray*} 
E & = & \frac{kq}{x^2(1-\frac dx)^2} - \frac{kq}{x^2(1+\frac dx)^2} \\ 
\\ 
& = & \frac{kq}{x^2}\big[ 
(1+\frac{2d}{x} + \frac{3d^2}{x^2} + \frac{4d^3}{x^3} + \ldots) 
-(1-\frac{2d}{x} + \frac{3d^2}{x^2} - \frac{4d^3}{x^3} + \ldots) \big] \\ \\ 
& = & \frac{kq}{x^2} \big[ \frac{4d}{x} + \frac{8d^3}{x^3} + \ldots 
\big] \\ \\ 
& \approx & \frac{4kqd}{x^3} 
\end{eqnarray*}

In other words, far away from the dipole where x is very large, we see that the electric field strength is proportional to the inverse cube of the distance. The two charges partially cancel one another out to produce a weaker electric field at a distance.
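
We can check this numerically. The short sketch below compares the exact field with the approximation $  \frac{4kqd}{x^3}  $ for a few values of x; the values of k, q, and d are arbitrary choices made only for the illustration.

\begin{verbatim}
k, q, d = 1.0, 1.0, 0.01

def exact_field(x):
    return k*q/(x - d)**2 - k*q/(x + d)**2

def dipole_approximation(x):
    return 4*k*q*d/x**3

for x in (0.1, 1.0, 10.0, 100.0):
    print(x, exact_field(x), dipole_approximation(x))
# The two values agree more and more closely as x grows compared to d.
\end{verbatim}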


Understanding the growth of functions

This example is similar in spirit to the previous one. Several times in this course, we have used the fact that exponentials grow much more rapidly than polynomials. We recorded this by saying that

\[ 
\lim_{x\to\infty} \frac{e^x}{x^n} = \infty 
 \]

for any exponent $  n  $ . Let's think about this for a minute because it is an important property of exponentials. The ratio $  \frac{e^x}{x^n}  $ measures how large the exponential is compared to the polynomial. If this ratio were very small, we would conclude that the polynomial is much larger than the exponential; if the ratio is large, we would conclude that the exponential is much larger than the polynomial. The fact that this ratio becomes arbitrarily large means that the exponential eventually exceeds the polynomial by a factor as large as we like. This is what we mean when we say "an exponential grows faster than a polynomial."

To see why this relationship holds, we can write down the Taylor series for $ e^x  $ .


\begin{eqnarray*} 
\frac{e^x}{x^n} & = & \frac{1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + 
\frac{x^n}{n!} + \frac{x^{n+1}}{(n+1)!} + \ldots}{x^n} \\ 
& = & \frac{1}{x^n} + \frac{1}{x^{n-1}} + \ldots + \frac{1}{n!} + 
\frac{x}{(n+1)!} +\ldots \\ 
& > & \frac{x}{(n+1)!} 
\end{eqnarray*}

Notice that this last term becomes arbitrarily large as $  x 
\to \infty  $ . That implies that the ratio we are interested in does as well:

\[ 
\lim_{x\to\infty}\frac{e^x}{x^n} = \infty 
 \]

Basically, the exponential $  e^x  $ grows faster than any polynomial because it behaves like an infinite polynomial whose coefficients are all positive.
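
To see this numerically, here is a short sketch comparing $  e^x  $ with $  x^n  $ for one fixed exponent; the choice $  n = 10  $ is arbitrary.

\begin{verbatim}
from math import exp

n = 10
for x in (10, 50, 100, 200):
    print(x, exp(x) / x**n)
# The ratio e^x / x^n eventually becomes enormous,
# just as the lower bound x/(n+1)! predicts.
\end{verbatim}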


Solving differential equations

Some differential equations cannot be solved in terms of familiar functions (just as some functions do not have antiderivatives which can be expressed in terms of familiar functions). However, Taylor series can come to the rescue again. Here we will present two examples to give you the idea.

Example 1: We will solve the initial value problem


\begin{eqnarray*} 
\frac{dy}{dx} & = & y \\ 
y(0) & = & 1 
\end{eqnarray*}

Of course, we know that the solution is $  y(x) = e^x  $ , but we will see how to discover this in a different way. First, we will write out the solution in terms of its Taylor series:

\[ 
y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\ldots 
 \]

Since this function satisfies the condition $  y(0) = 1  $ , we must have $  y(0) = a_0 = 1  $ .

We also have

\[ 
\frac{dy}{dx} = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + 
\ldots 
 \]

Since the differential equation says that $  \frac{dy}{dx} = y 
 $ , we can equate these two Taylor series:

\[ 
a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\ldots = 
a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \ldots 
 \]

If we now equate the coefficients, we obtain:

\[ 
\begin{array}{ll} 
a_0 = a_1, \hspace{.1in} & a_1 = a_0 = 1 \\ \\ 
a_1 = 2a_2, & a_2 = \frac{a_1}{2} = \frac 12 \\ \\ 
a_2 = 3a_3, & a_3 = \frac{a_2}{3} = \frac 1{2\cdot 3} \\ \\ 
a_3 = 4a_4, & a_4 = \frac{a_3}{4} = \frac 1{2\cdot 3\cdot 4} \\ \\ 
a_{n-1} = na_n, & a_n = \frac{a_{n-1}}{n} = \frac{1}{1\cdot 2\cdot 3 
\cdots n} = \frac{1}{n!} 
\end{array} 
 \]

This means that $  y = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} 
+ \ldots + \frac{x^n}{n!} + \ldots = e^x  $ as we expect.

Of course, this is an initial value problem that we already know how to solve. The real value of this method is in studying initial value problems that we do not know how to solve.
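
As a quick check of the recurrence $  a_n = \frac{a_{n-1}}{n}  $ , here is a short sketch that builds a partial sum of the series from it and compares the result with $  e^x  $ ; the function name and the number of terms are our choices.

\begin{verbatim}
from math import exp

def series_solution(x, n_terms=20):
    a = 1.0                  # a_0 = 1 from the initial condition y(0) = 1
    total, power = a, 1.0
    for n in range(1, n_terms):
        a /= n               # the recurrence a_n = a_{n-1} / n
        power *= x           # x^n
        total += a * power
    return total

print(series_solution(1.0), exp(1.0))   # both are about 2.71828...
\end{verbatim}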

Example 2: Here we will study Airy's equation with initial conditions:


\begin{eqnarray*} 
y^{\prime\prime} & = & xy \\ 
y(0) & = & 1 \\ 
y^\prime(0) & = & 0 
\end{eqnarray*}

This equation is important in optics. In fact, it explains why a rainbow appears the way it does! As before, we will write the solution as a series:

\[ 
y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 + \ldots 
 \]

The initial conditions give $  y(0) = a_0 = 1  $ and $  y^\prime(0) = a_1 = 0  $ .

Now we can write down the derivatives:

\[ 
\begin{array}{c} 
y^\prime = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + 5a_5x^4 +\ldots \\ 
y^{\prime\prime} = 2a_2 + 2\cdot 3a_3x + 3\cdot 4a_4x^2 + 4\cdot 5a_5x^3 +\ldots 
\end{array} 
 \]

The equation then gives


\begin{eqnarray*} 
y^{\prime\prime} & = & xy \\ 
2a_2 + 2\cdot 3a_3x + 3\cdot 4a_4x^2 + 4\cdot 5a_5x^3 +\ldots & = & x(a_0 + 
a_1x + a_2x^2 + a_3x^3 + \ldots) \\ 
2a_2 + 2\cdot 3a_3x + 3\cdot 4a_4x^2 + 4\cdot 5a_5x^3 +\ldots & = & a_0x + 
a_1x^2 + a_2x^3 + a_3x^4 + \ldots 
\end{eqnarray*}

Again, we can equate the coefficients of each power of x to obtain

\[ 
\begin{array}{ll} 
2a_2 = 0 & a_2 = 0 \\ 
2\cdot 3 a_3 = a_0 \hspace{.2in} & a_3 = \frac{1}{2\cdot 3} \\ 
3\cdot 4 a_4 = a_1 & a_4 = 0 \\ 
4\cdot 5 a_5 = a_2 & a_5 = 0 \\ 
5\cdot 6 a_6 = a_3 & a_6 = \frac 1{2\cdot 3\cdot 5\cdot 6} 
\end{array} 
 \]

This gives us the first few terms of the solution:

\[  y = 1 + \frac{x^3}{2\cdot 3} + \frac{x^6}{2\cdot 3\cdot 5\cdot 
6} + \ldots 
 \]

If we continue in this way, we can write down many terms of the series (perhaps you see the pattern already?) and then draw a graph of the solution.

Notice that the solution oscillates to the left of the origin and grows like an exponential to the right of the origin. Can you explain this by looking at the differential equation?
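
The coefficient equations above say that $  n(n-1)a_n = a_{n-3}  $ for every $  n \ge 3  $ . Here is a short sketch that builds many coefficients from this recurrence and evaluates a partial sum of the series; evaluating it at many points produces the graph described above. The function name and the number of terms are our choices.

\begin{verbatim}
def airy_partial_sum(x, n_terms=200):
    # a_0 = 1 and a_1 = 0 come from the initial conditions;
    # 2a_2 = 0 comes from the equation itself.
    a = [1.0, 0.0, 0.0]
    for n in range(3, n_terms):
        a.append(a[n - 3] / (n * (n - 1)))   # n(n-1) a_n = a_{n-3}
    return sum(c * x**n for n, c in enumerate(a))

for x in (-10, -5, -2, 0, 1, 2):
    print(x, airy_partial_sum(x))
# For negative x the values oscillate; for positive x they grow quickly.
\end{verbatim}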