Welcome to Calculus. I'm Professor Ghrist. We're about to begin
lecture 3 on Taylor series. >> In this lesson, we'll harness our
understanding of e to the x as a series and extend this perspective to
a wider world of functions. This will be done through
the key tool of Chapter 1, that of the Taylor series of a function. >> In our last lesson,
we began with the definition of e to the x as something like
a long polynomial, and from that, with a
little bit of help from Euler's formula, we observed similar formulae, or expressions, for the basic trigonometric
functions, sine and cosine. Expressions of this form we
are going to call series, and we will be working with them
throughout this course. The question arises, are there other,
similar expressions for different functions,
besides these basic three? The answer is an emphatic yes. We're going to work under the assumption
that every reasonable function can be expressed in the form
of a series as a constant plus some other constant times x plus
a third constant times x squared, etc., for some collection of constants. Now, of course, strictly speaking,
this is not true. We need to be careful about what
we mean by "every" and "reasonable". For the moment, let's pretend that
this is true and see where it gets us. We are first led to the question
how do we figure out, or compute, these coefficients? The following definition is critical. The Taylor series of a function f at
an input 0 is the following series: f at 0, plus the derivative at 0 times x, plus one over 2! times the second
derivative at 0, times x squared, etc. That is,
the kth coefficient is equal to the kth derivative of f evaluated at
the input 0 and then divided by k!. This is a most important definition, and for all intents and purposes at this
point in the course, remarkably, this series returns the value of f at x. Let's see how this plays out in
an example that we already know, starting with our definition
of the Taylor series. Let's apply it to the function e to the x. In order to compute this, we're going to need to know
the derivatives of e to the x. And we're going to have to
evaluate them at x equals zero. But since we know that the derivative
of e to the x is e to the x, all of these derivatives evaluate to 1. Therefore, when we substitute these values
into our formula for the Taylor series, we obtain the familiar series for
e to the x. This definition, at least,
works in this one context. Let's continue.
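The claim that the series returns the value of e to the x can be checked numerically. Here is a minimal sketch; the truncation at 20 terms and the tolerance are arbitrary choices:

```python
import math

def exp_series(x, n_terms=20):
    """Partial sum of the Taylor series for e^x at 0:
    the sum over k of x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# The partial sums rapidly approach the true value of e^x.
for x in [0.0, 1.0, 2.5, -1.0]:
    assert abs(exp_series(x) - math.exp(x)) < 1e-9
```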
Let's look at another function for which we know a series expression: sin x. Recall how the derivative of sin goes. That gives you cos. The derivative of cos is -sin. The derivative of -sin is -cos, and then
the derivative of -cos is sin once again. Evaluating all of these at an input of 0
gives us alternating forms of zero and non-zero terms, with the non-zero
terms having alternating signs. Therefore, we can substitute
in these derivatives, obtaining 0, 1, 0, -1, and
repeating in blocks of four. When we write out the resulting
Taylor series, we see, once again, the familiar form,
x - x cubed over 3! + x to the 5th over 5!, etc. This is the expression for sin x. It seems clear that this ought to
work in other contexts as well, but let's just check it. For example, if we work with cos x, then, well, the derivatives of cos
follow the same pattern as before, and evaluating those at zero gives
us the same numbers as before. Why, then, do we not get the same series? Well, because when we
evaluate these derivatives, the sequence of numbers is shifted. f at 0, that is cos 0, is 1. The derivative at 0 is 0. -1, 0, 1. And the pattern continues. When we simplify this expression,
we see that all of the odd degree terms have 0 coefficients,
leaving us with only the even degree terms in this series, with the now
familiar alternating signs. This gives us our familiar expression for
cos x.
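As a sanity check, both the sine and cosine series can be generated directly from the repeating cycles of derivative values at 0 described above. A small sketch; the truncation at 20 terms is an arbitrary choice:

```python
import math

def series_from_cycle(cycle, x, n_terms=20):
    """Taylor series at 0 for a function whose derivative values at 0
    repeat through `cycle`; the kth coefficient is cycle[k % 4] / k!."""
    return sum(cycle[k % 4] / math.factorial(k) * x**k
               for k in range(n_terms))

# Derivatives of sin at 0 cycle through 0, 1, 0, -1;
# derivatives of cos at 0 cycle through 1, 0, -1, 0.
for x in [0.0, 0.5, 1.5, -2.0]:
    assert abs(series_from_cycle([0, 1, 0, -1], x) - math.sin(x)) < 1e-9
    assert abs(series_from_cycle([1, 0, -1, 0], x) - math.cos(x)) < 1e-9
```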
And so, it seems clear that we ought to be able to apply this method to other functions as well. Let us do so with another function
that is at least reasonably simple. What would a very simple function be? Well, let's take a polynomial, in this case, x cubed + 2x squared - x + 5. We know that we can
differentiate this with ease. If we evaluate this function at 0, we obtain the first term in the series,
namely 5. If we take the first derivative
of this function, what do we get? Well, 3x squared + 4x - 1. Evaluating this at 0 gives us what? Well, that gives us -1. Therefore, the next term in that
series expansion is -1 times x. Continuing, if we take the second
derivative of this function, we will obtain 6x + 4. Evaluating this at 0 gives us, simply, 4. Therefore the next term
in the Taylor series is 1 over 2! times 4 times x squared. The third derivative of this
function is very simple. It is exactly 6,
independent of where we evaluate it. Therefore, the next term in
the Taylor series is 1 over 3! times 6 times x cubed. What happens when we take higher and
higher derivatives? Well, the derivative of a constant is 0. Thus, all of the higher
derivatives vanish, and all further terms in the Taylor
series evaluate to 0. So we can drop them out,
without consequence. Rewriting, using a bit of simplification, gives us the Taylor series for this function as 5 - x +
2x squared + x cubed. Let us take a look at our work. Do we believe what we have done? Well, of course this is exactly the same
function that we started off with, we've merely written the terms
in ascending order of degree. This seems like a trivial example,
but it is actually very crucial. You must remember that polynomials
have themselves as their Taylor series. Polynomials have polynomial Taylor series.
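The polynomial computation above can be mechanized. For a polynomial stored as a list of coefficients, only the degree-k term survives k-fold differentiation at 0, contributing k! times the kth coefficient; dividing by k! hands back exactly the coefficients we started with. A sketch, with coefficients stored in ascending order of degree as a convention:

```python
import math

def kth_derivative_at_zero(coeffs, k):
    """kth derivative at x = 0 of the polynomial
    coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ...
    Only the degree-k term survives: it contributes k! * coeffs[k]."""
    if k >= len(coeffs):
        return 0
    return math.factorial(k) * coeffs[k]

def taylor_coefficients(coeffs, n):
    """First n Taylor coefficients, f^(k)(0) / k!."""
    return [kth_derivative_at_zero(coeffs, k) / math.factorial(k)
            for k in range(n)]

# The example x^3 + 2x^2 - x + 5, written in ascending order: 5 - x + 2x^2 + x^3.
p = [5, -1, 2, 1]
# The Taylor coefficients are the polynomial's own coefficients,
# and every higher derivative vanishes.
assert taylor_coefficients(p, 6) == [5, -1, 2, 1, 0, 0]
```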
This is going to connect to some very deep properties concerning polynomial approximation. Our strategy, therefore, for working with functions,
is to think of Taylor expansion, not as a function itself, but
as something like an operator, as something that takes as
its input a function, and returns as its output something
that is in the form of a long polynomial, or better, a series. Why do we want to do this? Well, series,
thought of as long polynomials, are very simple to work with,
whereas some functions can be obtuse, very difficult, maybe even unknown,
in a specific form. Taylor expansion helps us to convert such
objects into an easier to work with form. Indeed, some functions really can't be
defined well except as a Taylor series. Here's an example that I'll
bet you've never seen before, though it's a famous function. This is the Bessel function,
J0, that is most easily defined in terms of its Taylor series as
the sum, k going from 0 to infinity, of -1 to the k, times x to the 2k, over 2 to the 2k times (k!) squared. That's a bit of a mouthful. We could write that out, and we would
get something that doesn't look too bad. There are a lot of complexities
in the coefficients there. How might we understand this function? Well, let's see. The general form of it is
reminiscent of the expression for cosine that we have derived, in that, it
has alternating signs and only even terms. But notice that the denominator of
the coefficient is growing very rapidly, much more rapidly than k! or even (2k)!. We might therefore anticipate that
the graph of this function looks something like a cosine wave, but with a decreasing
amplitude as a function of x.
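To get a feel for this, here is a sketch that sums the series directly; the truncation at 30 terms is an arbitrary choice, and the reference values J0(0) = 1, J0(2) ≈ 0.2239, and the first zero near x ≈ 2.405 are standard facts about this function:

```python
import math

def bessel_j0(x, n_terms=30):
    """Partial sum of the Taylor series for the Bessel function J0:
    the sum over k of (-1)**k * x**(2k) / (2**(2k) * (k!)**2)."""
    return sum((-1)**k * x**(2 * k) / (2**(2 * k) * math.factorial(k)**2)
               for k in range(n_terms))

assert abs(bessel_j0(0.0) - 1.0) < 1e-12        # J0(0) = 1
assert abs(bessel_j0(2.0) - 0.2238908) < 1e-6   # J0(2), a known value
# Like cosine, J0 oscillates: it changes sign near x = 2.405, its first zero.
assert bessel_j0(2.0) > 0 > bessel_j0(3.0)
```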
Have you ever seen such a function before? Maybe you have. If you've ever taken a chain or rope and rotated it about a vertical axis, and it winds up in equilibrium, the shape that you will get is
related to this Bessel function. If you drop a pebble in some
water in a round tank or an open pond,
the rippling effect of the waves is going to be very closely related
to such a Bessel function. These are not too unusual,
even in everyday occurrences. In fact, for a chain or
a rope that is rotated in equilibrium, we can describe the displacement away
from the vertical axis r as follows. This r is proportional to
the Bessel function J0, evaluated at 2 omega over the square
root of g, times the square root of x. Here, omega is the angular frequency,
or how fast you're spinning that rope. g is a gravitation constant. And x, most importantly, is the distance, not from the top of the rope,
but from the bottom of the rope. You don't need to remember this formula,
you don't need to know how it's derived. What we are going to look at is what
happens when we substitute these values into our Taylor series for
the Bessel function. One of the things that we can
conclude from this Taylor series is that if x is reasonably small,
if you're near the bottom of the rope, small enough so that we can ignore
some of the quadratic and higher-order terms, then what's left over looks
like a linear expression in x, or is proportional to 1 -
omega squared over g times x. That tells you something about
the slope at the end of the rope, namely, that this free end is swinging
with a slope that is proportional not to omega, the angular frequency,
but to omega squared. The faster you spin it,
the more the slope changes. And we can say exactly
what rate that change is. It's quadratic.
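This small-x claim can be checked numerically: substituting u = 2 omega times the square root of x over g into the first two terms of the J0 series gives exactly 1 - omega squared times x over g. A sketch; the values of omega, g, and x below are arbitrary test choices, not part of the lecture:

```python
import math

def bessel_j0(u, n_terms=30):
    """Partial sum of the Taylor series for the Bessel function J0."""
    return sum((-1)**k * u**(2 * k) / (2**(2 * k) * math.factorial(k)**2)
               for k in range(n_terms))

# Hypothetical test values: spin rate omega (rad/s) and gravity g (m/s^2).
omega, g = 3.0, 9.8

for x in [0.001, 0.01, 0.05]:          # small distances from the free end
    u = 2 * omega * math.sqrt(x / g)   # argument of J0 in the rope formula
    linear = 1 - omega**2 * x / g      # two-term, linear-in-x approximation
    # The next term of the series is of order x**2, so the error shrinks
    # quadratically as x approaches the free end of the rope.
    assert abs(bessel_j0(u) - linear) < 10 * x**2
```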
You can try this at home with a piece of heavy rope or chain. >> This lesson has given us a new
definition, that of Taylor series, as well as a new perspective,
the idea that expanding out a function into a long polynomial or
series is advantageous. Next time, we'll consider the question of how one can
effectively compute these Taylor series.