Taylor series | Chapter 11, Essence of calculus

3Blue1Brown
7 May 2017 · 22:19

Summary

TL;DR: This script delves into the significance of Taylor series in approximating functions across various fields like math, physics, and engineering. It illustrates the concept through a physics problem involving pendulum potential energy, highlighting how Taylor series simplify complex functions into more manageable polynomials. The explanation progresses to constructing quadratic and higher-order polynomials to closely approximate functions like cosine and e to the x. The script also touches on the geometric interpretation of Taylor series through the fundamental theorem of calculus and the concept of convergence, emphasizing the series' utility in translating derivative information at a point into function approximations nearby.

Takeaways

  • 🔍 Taylor series are crucial in approximating functions, and they're prevalent in math, physics, and engineering.
  • 🎓 The concept of Taylor series clicked in a physics context, specifically with pendulums and potential energy calculations.
  • 📉 The cosine function's complexity was simplified by approximating it as 1 - (θ^2 / 2) for small angles, illustrating the power of Taylor series.
  • 🔱 The process of constructing a Taylor series involves matching the function's value, first derivative, second derivative, and so on at a chosen point.
  • 📝 The coefficients of the Taylor polynomial are determined by evaluating the function's derivatives at the point and dividing by the appropriate factorial.
  • 🔄 Polynomial approximations using Taylor series are particularly useful because polynomials are easier to work with computationally.
  • 🌐 The approximation's accuracy improves with more terms, but at the cost of increased polynomial complexity.
  • 📊 A geometric interpretation of Taylor series involves approximating the area under a curve, which naturally introduces second-order terms.
  • ∞ The concept of a Taylor series extends to infinite sums, which converge to a value if the partial sums approach a limit.
  • 📚 The radius of convergence is a critical concept for Taylor series, defining the range of inputs for which the series provides a valid approximation.
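
The recipe in these bullets — evaluate each derivative at the point, then divide by the matching factorial — can be sketched directly in Python. This is an illustrative sketch (the function names are mine, not from the video), using the fact that the derivatives of cosine at 0 cycle through 1, 0, -1, 0:

```python
import math

def taylor_coefficients_cos(n_terms):
    """Coefficients of the Taylor polynomial for cos(x) at x = 0.

    The derivatives of cos at 0 cycle through 1, 0, -1, 0, ...;
    each coefficient is that derivative divided by n factorial."""
    cycle = [1, 0, -1, 0]
    return [cycle[n % 4] / math.factorial(n) for n in range(n_terms)]

def evaluate_polynomial(coeffs, x):
    """Evaluate sum of c_n * x**n."""
    return sum(c * x**n for n, c in enumerate(coeffs))

coeffs = taylor_coefficients_cos(6)       # 1, 0, -1/2, 0, 1/24, 0
print(evaluate_polynomial(coeffs, 0.1))   # close to math.cos(0.1)
```

Because each coefficient is controlled by exactly one derivative, truncating the list at any point still leaves a valid Taylor polynomial.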

Q & A

  • What is the primary use of Taylor series in various fields?

    -Taylor series are primarily used for approximating functions, making them a powerful tool in mathematics, physics, and engineering.

  • How did the concept of Taylor series first become clear to the speaker?

    -The speaker first understood the importance of Taylor series when studying the potential energy of a pendulum in a physics class.

  • What is the significance of the cosine function in the context of the pendulum problem?

    -The cosine function was significant because it represented the height of the pendulum's weight above its lowest point, making the problem complex until approximated.

  • Why is approximating the cosine function with a quadratic function useful?

    -Approximating the cosine function with a quadratic function simplifies the calculations and makes the relationship between pendulums and other oscillating phenomena clearer.

  • How does the Taylor series help in creating polynomial approximations of non-polynomial functions?

    -A Taylor series helps by taking the derivatives of a function at a specific point and using them to construct a polynomial that closely approximates the function near that point.

  • What role do polynomials play in the Taylor series approximation?

    -Polynomials play a crucial role as they are easier to compute, differentiate, and integrate compared to other functions, making them ideal for approximation.

  • Why is it important for the Taylor series approximation to match the function's value and derivatives at a certain point?

    -Matching the value and derivatives ensures that the approximation behaves similarly to the original function, especially near the point of approximation.

  • How does the process of creating a Taylor series approximation involve factorial terms?

    -Factorial terms naturally arise when taking successive derivatives of a function, and they are used to adjust the coefficients of the polynomial to match the derivatives of the function.

  • What is the significance of the radius of convergence in the context of Taylor series?

    -The radius of convergence indicates the range of input values around the point of approximation for which the Taylor series converges to the actual function.

  • Why is the Taylor series for e^x considered 'magical'?

    -The Taylor series for e^x converges to e^x for any input value, which is unique compared to other functions where the series may only converge within a certain range.

  • What is the fundamental intuition behind Taylor series that the speaker suggests keeping in mind?

    -The fundamental intuition is that Taylor series translate derivative information at a single point into approximation information around that point.

Outlines

00:00

📚 Introduction to Taylor Series

The speaker begins by reflecting on their initial underappreciation of Taylor series, highlighting their importance in various fields such as mathematics, physics, and engineering. They recount a pivotal moment in a physics class where the potential energy of a pendulum was discussed, and the cosine function's complexity was simplified using a Taylor series approximation. This led to a clearer understanding of pendulum dynamics. The speaker emphasizes the utility of Taylor series in approximating non-polynomial functions with polynomials for ease of computation, setting the stage for a deeper exploration of Taylor series.

05:05

🔍 Deriving the Quadratic Approximation of Cosine

The speaker delves into the process of creating a quadratic approximation for the cosine function near x equals 0. They explain the need to match the function's value, first derivative, and second derivative at the point of approximation. By setting the first coefficient to 1, ensuring the derivative matches by setting the second coefficient to 0, and using the second derivative to determine the third coefficient as -1/2, the speaker arrives at the approximation 1 - (1/2)x^2. They demonstrate the effectiveness of this approximation by estimating the cosine of 0.1, showing its accuracy. The speaker also discusses the concept of adding more terms to the polynomial to improve the approximation, illustrating with the addition of a fourth-order term.
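
The estimate described in this section is easy to reproduce. A minimal check (standard library only), comparing the quadratic approximation against the true cosine value:

```python
import math

def quad_approx(x):
    # Best quadratic approximation of cos near x = 0: 1 - (1/2) x^2
    return 1 - x**2 / 2

print(quad_approx(0.1))   # approximately 0.995
print(math.cos(0.1))      # approximately 0.9950042
```

The two values agree to about five decimal places, which is why the video calls it a really good approximation.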

10:06

🔱 Understanding Taylor Polynomials and Their Generalization

The speaker expands on the concept of Taylor polynomials, explaining how they are derived from the higher-order derivatives of a function at a single point. They discuss the natural emergence of factorial terms in the process and how each added term in the polynomial corresponds to matching a higher derivative of the function. The speaker emphasizes that adding new terms does not disrupt the values of the previous terms, which is crucial for maintaining the polynomial's accuracy. They also touch on the idea of approximating functions near points other than zero by using powers of (x - a), where 'a' is the point of approximation.
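
The same recipe near a point a other than zero, written in powers of (x - a), can be sketched as follows. The helper name is mine, and the derivative values are supplied as a plain list; here they come from expanding cosine around a = pi, where the derivatives cycle through -1, 0, 1, 0:

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate the Taylor polynomial
    sum of f^(n)(a) / n! * (x - a)**n,
    given the derivative values f(a), f'(a), f''(a), ... at a."""
    return sum(d / math.factorial(n) * (x - a)**n
               for n, d in enumerate(derivs_at_a))

a = math.pi
derivs = [-1, 0, 1, 0, -1, 0]   # cos and its derivatives evaluated at pi
print(taylor_poly(derivs, a, 3.0))   # close to math.cos(3.0)
```

Plugging in x = a kills every term with a factor of (x - a), which is exactly the "nice cancellation" the video describes.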

15:10

📐 Geometric Interpretation of Taylor Series

The speaker offers a geometric perspective on Taylor series by considering the area under a curve as a function of a variable right endpoint. They use this to illustrate the second-order term in Taylor series, showing how the area function's second derivative at a point 'a' can be used to approximate the area function at a nearby point 'x'. This leads to a discussion of the fundamental theorem of calculus and how it relates to the approximation process. The speaker concludes by introducing the concept of Taylor series as an infinite sum, which can converge to the actual function's value, providing a powerful tool for function approximation.
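
The geometric picture can be checked numerically. Taking f(t) = t^2 as the graph (my choice for illustration, not the video's), the area function is A(x) = x^3 / 3, and the rectangle-plus-triangle estimate A(a) + f(a)(x - a) + (1/2) f'(a)(x - a)^2 should be close for x near a:

```python
def f(t):          # the graph whose area we accumulate (illustrative choice)
    return t**2

def f_prime(t):    # its slope, i.e. the second derivative of the area function
    return 2 * t

def area(x):       # exact area under t^2 from 0 to x
    return x**3 / 3

def area_estimate(a, x):
    # area up to a, plus the rectangle f(a)*(x-a),
    # plus the little triangle (1/2)*f'(a)*(x-a)^2
    return area(a) + f(a) * (x - a) + 0.5 * f_prime(a) * (x - a)**2

a, x = 1.0, 1.1
print(area_estimate(a, x))   # close to area(x)
print(area(x))
```

The leftover error shrinks like (x - a)^3, which is exactly what a second-order Taylor approximation predicts.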

20:12

🌐 Convergence and Applications of Taylor Series

The speaker addresses the concept of convergence in Taylor series, explaining that while some series converge for all inputs, others may only converge within a certain range, known as the radius of convergence. They provide examples, including the Taylor series for the exponential function and the natural logarithm, to illustrate the point. The speaker concludes by emphasizing the fundamental intuition behind Taylor series: translating derivative information at a single point into approximation information around that point. They express gratitude to the audience and hint at upcoming content on probability.
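
Partial sums make the convergence claims concrete. A sketch (standard library only, helper names mine): the series for e^x converges even at a large input like x = 10, while the series for ln(1 + x), which is only valid for inputs within its radius of convergence of 1, blows up at x = 2:

```python
import math

def exp_partial_sum(x, n_terms):
    # Sum of x^n / n! for n = 0 .. n_terms - 1
    return sum(x**n / math.factorial(n) for n in range(n_terms))

def log1p_partial_sum(x, n_terms):
    # Taylor series of ln(1 + x) at 0: x - x^2/2 + x^3/3 - ...
    return sum((-1)**(n + 1) * x**n / n for n in range(1, n_terms + 1))

print(exp_partial_sum(10, 60), math.exp(10))      # converges for any x
print(log1p_partial_sum(0.5, 60), math.log(1.5))  # inside the radius: converges
print(log1p_partial_sum(2.0, 60))                 # outside the radius: diverges
```

Adding more terms pushes the first two partial sums toward their limits, while the third grows without bound no matter how many terms you take.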

Keywords

💡Taylor series

Taylor series is a mathematical representation of a function as an infinite sum of terms calculated from the values of its derivatives at a single point. In the video, Taylor series are highlighted as a powerful tool for approximating functions, especially in fields like mathematics, physics, and engineering. The script uses the example of approximating the cosine function to illustrate how Taylor series can simplify complex problems and make them more manageable.

💡Approximation

Approximation in the context of the video refers to the process of simplifying a mathematical function or problem by using a simpler, closely related function. The script explains how Taylor series are used to approximate non-polynomial functions with polynomials, which are easier to work with. This is exemplified by approximating the cosine function with a quadratic polynomial to make calculations in physics problems more straightforward.

💡Polynomial

A polynomial is an expression consisting of variables and coefficients, involving only the operations of addition, subtraction, multiplication, and non-negative integer exponents. The video emphasizes that polynomials are more 'friendly' in calculations due to their simplicity compared to other functions. Polynomials are used in Taylor series to approximate more complex functions near a chosen point.

💡Derivatives

Derivatives in calculus represent the rate of change of a function with respect to its variable. The script discusses how the derivatives of a function at a specific point are used to construct the Taylor series for that function. Derivatives are crucial in determining how well the Taylor series approximation matches the original function's behavior.

💡Cosine function

The cosine function is a periodic function that describes the horizontal displacement of a point on the unit circle and is commonly used in trigonometry and physics. In the video, the cosine function is used to demonstrate the process of creating a Taylor series approximation. The script shows how approximating the cosine function with a Taylor polynomial can simplify calculations in physics.

💡Potential energy

Potential energy is the stored energy of an object due to its position relative to other objects. In the script, potential energy is mentioned in the context of a physics problem involving a pendulum, where the cosine function is used to describe the height of the pendulum's weight above its lowest point. This example illustrates the practical application of Taylor series in physics.

💡Oscillating phenomena

Oscillating phenomena refer to the periodic or oscillatory motion of a system, such as a pendulum. The video script uses the pendulum as an example to explain how Taylor series can help in understanding the relationship between different oscillating systems by simplifying the mathematical expressions involved.

💡Quadratic approximation

A quadratic approximation is a second-degree polynomial used to approximate a function near a specific point. The video explains how to construct a quadratic approximation for the cosine function near x equals 0, which involves matching the function's value, first derivative, and second derivative at that point.

💡Factorial

A factorial of a non-negative integer n is the product of all positive integers less than or equal to n, denoted as n!. In the context of the video, factorials appear naturally when calculating the coefficients for higher-order terms in a Taylor series. The script explains that factorials are used to adjust the coefficients to ensure that the derivatives of the polynomial approximation match those of the original function.
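
The cascade described here is easy to verify: applying the power rule n times to x^n multiplies by n, then n - 1, and so on down to 1, leaving exactly n!. A small sketch (helper name mine):

```python
import math

def nth_derivative_of_power(n):
    # Apply the power rule n times to x**n: each step multiplies by the
    # current exponent and lowers it by one, leaving n * (n-1) * ... * 1.
    result = 1
    exponent = n
    for _ in range(n):
        result *= exponent
        exponent -= 1
    return result

print(nth_derivative_of_power(4), math.factorial(4))   # both are 24
```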

💡Radius of convergence

The radius of convergence is a measure of the interval around the point of expansion within which a Taylor series converges to the actual function. The video script discusses how the Taylor series for some functions, like the natural log, only converge within a certain range. Outside this range, the series may diverge, indicating the limitations of the approximation.

💡Fundamental theorem of calculus

The fundamental theorem of calculus links the concept of differentiation and integration. In the video, the theorem is used to explain the geometric interpretation of the second-order term in Taylor polynomials, relating it to the area under a curve. This connection illustrates how calculus concepts are interconnected and provides a deeper understanding of Taylor series.

Highlights

Taylor series are essential in approximating functions across various fields.

The importance of Taylor series was realized through a physics problem involving pendulum potential energy.

Approximating cosine function with a quadratic polynomial simplifies complex problems.

The process of approximating functions using Taylor series involves matching function values and derivatives.

The cosine function is approximated by a polynomial near x equals 0 to simplify calculations.

The derivative of cosine at x equals 0 is used to match the slope of the approximation.

The second derivative of cosine is used to ensure the curvature of the approximation matches that of the original function.

The best quadratic approximation of cosine is 1 - (1/2)x^2, which is also the best cubic approximation.

Adding a fourth-order term to the approximation improves the accuracy of cosine approximation.

Taylor series leverages higher-order derivative information to create accurate function approximations.

Factorial terms naturally arise in Taylor series due to the repeated application of the power rule.

The Taylor series for e^x is derived by using the fact that the derivative of e^x is itself.

The geometric interpretation of Taylor series involves approximating the area under a curve.

The fundamental theorem of calculus is used to understand the second-order term in Taylor series.

The concept of infinite series and convergence is crucial in understanding Taylor series fully.

The radius of convergence determines the range of inputs for which a Taylor series approximation is valid.

Taylor series can be used to approximate a wide range of functions, each with its own unique characteristics.

The series on probability will be the next topic explored, following the successful coverage of Taylor series.

Transcripts

play00:14

When I first learned about Taylor series, I definitely

play00:17

didn't appreciate just how important they are.

play00:20

But time and time again they come up in math, physics,

play00:22

and many fields of engineering because they're one of the most

play00:25

powerful tools that math has to offer for approximating functions.

play00:30

I think one of the first times this clicked for me as a

play00:32

student was not in a calculus class but a physics class.

play00:35

We were studying a certain problem that had to do with the potential energy of a

play00:40

pendulum, and for that you need an expression for how high the weight of the

play00:44

pendulum is above its lowest point, and when you work that out it comes out to be

play00:48

proportional to 1 minus the cosine of the angle between the pendulum and the vertical.

play00:53

The specifics of the problem we were trying to solve are beyond the point here,

play00:57

but what I'll say is that this cosine function made the problem awkward and unwieldy,

play01:02

and made it less clear how pendulums relate to other oscillating phenomena.

play01:07

But if you approximate cosine of theta as 1 minus theta squared over 2,

play01:12

everything just fell into place much more easily.

play01:16

If you've never seen anything like this before,

play01:19

an approximation like that might seem completely out of left field.

play01:23

If you graph cosine of theta along with this function, 1 minus theta squared over 2,

play01:28

they do seem rather close to each other, at least for small angles near 0,

play01:33

but how would you even think to make this approximation,

play01:36

and how would you find that particular quadratic?

play01:41

The study of Taylor series is largely about taking non-polynomial

play01:44

functions and finding polynomials that approximate them near some input.

play01:48

The motive here is that polynomials tend to be much easier to deal

play01:52

with than other functions, they're easier to compute,

play01:55

easier to take derivatives, easier to integrate, just all around more friendly.

play02:00

So let's take a look at that function, cosine of x,

play02:03

and really take a moment to think about how you might construct a quadratic

play02:08

approximation near x equals 0.

play02:10

That is, among all of the possible polynomials that look like c0 plus c1

play02:16

times x plus c2 times x squared, for some choice of these constants, c0,

play02:21

c1, and c2, find the one that most resembles cosine of x near x equals 0,

play02:27

whose graph kind of spoons with the graph of cosine x at that point.

play02:33

Well, first of all, at the input 0, the value of cosine of x is 1,

play02:38

so if our approximation is going to be any good at all,

play02:41

it should also equal 1 at the input x equals 0.

play02:45

Plugging in 0 just results in whatever c0 is, so we can set that equal to 1.

play02:53

This leaves us free to choose constants c1 and c2 to make this

play02:56

approximation as good as we can, but nothing we do with them is

play03:00

going to change the fact that the polynomial equals 1 at x equals 0.

play03:04

It would also be good if our approximation had the same

play03:08

tangent slope as cosine x at this point of interest.

play03:11

Otherwise the approximation drifts away from the

play03:14

cosine graph much faster than it needs to.

play03:18

The derivative of cosine is negative sine, and at x equals 0,

play03:22

that equals 0, meaning the tangent line is perfectly flat.

play03:26

On the other hand, when you work out the derivative of our quadratic,

play03:31

you get c1 plus 2 times c2 times x.

play03:35

At x equals 0, this just equals whatever we choose for c1.

play03:40

So this constant c1 has complete control over the

play03:43

derivative of our approximation around x equals 0.

play03:47

Setting it equal to 0 ensures that our approximation

play03:49

also has a flat tangent line at this point.

play03:53

This leaves us free to change c2, but the value and the slope of our

play03:57

polynomial at x equals 0 are locked in place to match that of cosine.

play04:04

The final thing to take advantage of is the fact that the cosine graph

play04:08

curves downward above x equals 0, it has a negative second derivative.

play04:13

Or in other words, even though the rate of change is 0 at that point,

play04:17

the rate of change itself is decreasing around that point.

play04:21

Specifically, since its derivative is negative sine of x,

play04:25

its second derivative is negative cosine of x, and at x equals 0, that equals negative 1.

play04:33

Now in the same way that we wanted the derivative of our approximation to

play04:37

match that of the cosine so that their values wouldn't drift apart needlessly quickly,

play04:41

making sure that their second derivatives match will ensure that they

play04:45

curve at the same rate, that the slope of our polynomial doesn't drift

play04:49

away from the slope of cosine x any more quickly than it needs to.

play04:54

Pulling up the same derivative we had before, and then taking its derivative,

play04:59

we see that the second derivative of this polynomial is exactly 2 times c2.

play05:04

So to make sure that this second derivative also equals negative 1 at x equals 0,

play05:10

2 times c2 has to be negative 1, meaning c2 itself should be negative 1 half.

play05:16

This gives us the approximation 1 plus 0x minus 1 half x squared.

play05:23

To get a feel for how good it is, if you estimate cosine of 0.1 using this polynomial,

play05:29

you'd estimate it to be 0.995, and this is the true value of cosine of 0.1.

play05:36

It's a really good approximation!

play05:40

Take a moment to reflect on what just happened.

play05:42

You had 3 degrees of freedom with this quadratic approximation,

play05:46

the constants c0, c1, and c2.

play05:49

c0 was responsible for making sure that the output of the approximation matches that of

play05:55

cosine x at x equals 0, c1 was in charge of making sure that the derivatives match at

play06:01

that point, and c2 was responsible for making sure that the second derivatives match up.

play06:08

This ensures that the way your approximation changes as you move away from x equals 0,

play06:14

and the way that the rate of change itself changes,

play06:17

is as similar as possible to the behaviour of cosine x,

play06:20

given the amount of control you have.

play06:24

You could give yourself more control by allowing more terms

play06:27

in your polynomial and matching higher order derivatives.

play06:30

For example, let's say you added on the term c3 times x cubed for some constant c3.

play06:36

In that case, if you take the third derivative of a cubic polynomial,

play06:41

anything quadratic or smaller goes to 0.

play06:45

As for that last term, after 3 iterations of the power rule,

play06:50

it looks like 1 times 2 times 3 times c3.

play06:56

On the other hand, the third derivative of cosine x comes out to sine x,

play07:01

which equals 0 at x equals 0.

play07:03

So to make sure that the third derivatives match, the constant c3 should be 0.

play07:09

Or in other words, not only is 1 minus 1 half x squared the best possible quadratic

play07:14

approximation of cosine, it's also the best possible cubic approximation.

play07:21

You can make an improvement by adding on a fourth order term, c4 times x to the fourth.

play07:27

The fourth derivative of cosine is itself, which equals 1 at x equals 0.

play07:34

And what's the fourth derivative of our polynomial with this new term?

play07:38

Well, when you keep applying the power rule over and over,

play07:42

with those exponents all hopping down in front,

play07:45

you end up with 1 times 2 times 3 times 4 times c4, which is 24 times c4.

play07:51

So if we want this to match the fourth derivative of cosine x,

play07:56

which is 1, c4 has to be 1 over 24.

play07:59

And indeed, the polynomial 1 minus 1 half x squared plus 1 over 24 times x to the fourth,

play08:05

which looks like this, is a very close approximation for cosine x around x equals 0.
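
As a quick check of this claim from the transcript, the fourth-order polynomial tracks cosine closely for small inputs (a sketch, standard library only):

```python
import math

def cos_approx4(x):
    # 1 - x^2/2 + x^4/24: the fourth-order Taylor polynomial for cos at 0
    return 1 - x**2 / 2 + x**4 / 24

for x in (0.1, 0.5, 1.0):
    print(x, cos_approx4(x), math.cos(x))
```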

play08:13

In any physics problem involving the cosine of a small angle, for example,

play08:18

predictions would be almost unnoticeably different if you substituted this polynomial

play08:23

for cosine of x.

play08:26

Take a step back and notice a few things happening with this process.

play08:30

First of all, factorial terms come up very naturally in this process.

play08:35

When you take n successive derivatives of the function x to the n,

play08:39

letting the power rule keep cascading on down,

play08:43

what you'll be left with is 1 times 2 times 3 on and on up to whatever n is.

play08:49

So you don't simply set the coefficients of the polynomial equal to whatever derivative

play08:53

you want, you have to divide by the appropriate factorial to cancel out this effect.

play08:59

For example, that x to the fourth coefficient was the fourth derivative of cosine,

play09:05

1, but divided by 4 factorial, 24.

play09:09

The second thing to notice is that adding on new terms,

play09:12

like this c4 times x to the fourth, doesn't mess up what the old terms should be,

play09:17

and that's really important.

play09:20

For example, the second derivative of this polynomial at x equals 0 is still equal

play09:25

to 2 times the second coefficient, even after you introduce higher order terms.

play09:30

And it's because we're plugging in x equals 0,

play09:33

so the second derivative of any higher order term, which all include an x,

play09:38

will just wash away.

play09:40

And the same goes for any other derivative, which is why each derivative of a

play09:45

polynomial at x equals 0 is controlled by one and only one of the coefficients.

play09:52

If instead you were approximating near an input other than 0, like x equals pi,

play09:57

in order to get the same effect you would have to write your polynomial in

play10:01

terms of powers of x minus pi, or whatever input you're looking at.

play10:06

This makes it look noticeably more complicated,

play10:09

but all we're doing is making sure that the point pi looks and behaves like 0,

play10:13

so that plugging in x equals pi will result in a lot of nice cancellation that

play10:18

leaves only one constant.

play10:22

And finally, on a more philosophical level, notice how what we're doing here is basically

play10:27

taking information about higher order derivatives of a function at a single point,

play10:32

and translating that into information about the value of the function near that point.

play10:40

You can take as many derivatives of cosine as you want.

play10:44

It follows this nice cyclic pattern, cosine of x,

play10:47

negative sine of x, negative cosine, sine, and then repeat.

play10:52

And the value of each one of these is easy to compute at x equals 0.

play10:56

It gives this cyclic pattern 1, 0, negative 1, 0, and then repeat.

play11:02

And knowing the values of all those higher order derivatives is a lot of information

play11:07

about cosine of x, even though it only involves plugging in a single number, x equals 0.

play11:14

So what we're doing is leveraging that information to get an approximation around this

play11:19

input, and you do it by creating a polynomial whose higher order derivatives are designed

play11:25

to match up with those of cosine, following this same 1, 0, negative 1, 0, cyclic pattern.

play11:31

And to do that, you just make each coefficient of the polynomial follow that

play11:35

same pattern, but you have to divide each one by the appropriate factorial.

play11:40

Like I mentioned before, this is what cancels out

play11:42

the cascading effect of many power rule applications.

play11:47

The polynomials you get by stopping this process at

play11:50

any point are called Taylor polynomials for cosine of x.

play11:53

More generally, and hence more abstractly, if we were dealing with some other function

play11:58

other than cosine, you would compute its derivative, its second derivative, and so on,

play12:03

getting as many terms as you'd like, and you would evaluate each one of them at x equals

play12:08

0.

play12:09

Then for the polynomial approximation, the coefficient of each x to the n term should be

play12:15

the value of the nth derivative of the function evaluated at 0,

play12:20

but divided by n factorial.

play12:23

This whole rather abstract formula is something you'll likely

play12:27

see in any text or course that touches on Taylor polynomials.

play12:31

And when you see it, think to yourself that the constant term ensures that

play12:36

the value of the polynomial matches with the value of f,

play12:39

the next term ensures that the slope of the polynomial matches the slope

play12:43

of the function at x equals 0, the next term ensures that the rate at which

play12:48

the slope changes is the same at that point, and so on,

play12:51

depending on how many terms you want.

play12:54

And the more terms you choose, the closer the approximation,

play12:57

but the tradeoff is that the polynomial you'd get would be more complicated.

play13:02

And to make things even more general, if you wanted to approximate near some input

play13:07

other than 0, which we'll call a, you would write this polynomial in terms of powers

play13:12

of x minus a, and you would evaluate all the derivatives of f at that input, a.

play13:18

This is what Taylor polynomials look like in their fullest generality.

play13:24

Changing the value of a changes where this approximation is hugging the original

play13:28

function, where its higher order derivatives will be equal to those of the original

play13:33

function.

play13:35

One of the simplest meaningful examples of this is

play13:38

the function e to the x around the input x equals 0.

play13:42

Computing the derivatives is super nice, as nice as it gets,

play13:46

because the derivative of e to the x is itself,

play13:49

so the second derivative is also e to the x, as is its third, and so on.

play13:54

So at the point x equals 0, all of these are equal to 1.

play13:59

And what that means is our polynomial approximation should look like

play14:05

1 plus 1 times x plus 1 over 2 times x squared plus 1 over 3 factorial times x cubed,

play14:13

and so on, depending on how many terms you want.

play14:19

These are the Taylor polynomials for e to the x.

play14:26

Ok, so with that as a foundation, in the spirit of showing you just how connected all

play14:31

the topics of calculus are, let me turn to something kind of fun,

play14:34

a completely different way to understand this second order term of the Taylor

play14:38

polynomials, but geometrically.

play14:41

It's related to the fundamental theorem of calculus,

play14:43

which I talked about in chapters 1 and 8 if you need a quick refresher.

play14:47

Like we did in those videos, consider a function that gives the area

play14:52

under some graph between a fixed left point and a variable right point.

play14:56

What we're going to do here is think about how to approximate this area function,

play15:00

not the function for the graph itself, like we've been doing before.

play15:04

Focusing on that area is what's going to make the second order term pop out.

play15:10

Remember, the fundamental theorem of calculus is that this graph itself represents the

play15:16

derivative of the area function, and it's because a slight nudge dx to the right bound

play15:22

of the area gives a new bit of area approximately equal to the height of the graph times

play15:28

dx.

play15:30

And that approximation is increasingly accurate for smaller and smaller choices of dx.

play15:35

But if you wanted to be more accurate about this change in area,

play15:39

given some change in x that isn't meant to approach 0,

play15:42

you would have to take into account this portion right here,

play15:46

which is approximately a triangle.

play15:49

Let's name the starting input a, and the nudged input above it x, so that change is x-a.

play15:58

The base of that little triangle is that change, x-a,

play16:02

and its height is the slope of the graph times x-a.

play16:08

Since this graph is the derivative of the area function,

play16:11

its slope is the second derivative of the area function, evaluated at the input a.

play16:18

So the area of this triangle, 1 half base times height,

play16:22

is 1 half times the second derivative of this area function, evaluated at a,

play16:28

multiplied by (x-a) squared.

play16:30

And this is exactly what you would see with a Taylor polynomial. If you knew the various derivative information about this area function at the point a, how would you approximate the area at the point x? Well, you have to include all that area up to a, f(a), plus the area of this rectangle here, which is the first derivative times (x - a), plus the area of that little triangle, which is one half times the second derivative times (x - a)². I really like this, because even though it looks a bit messy all written out, each one of the terms has a very clear meaning that you can just point to on the diagram.
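A quick numerical sketch of this picture, with an assumed graph f(t) = sin(t) so that the area function A(x) = 1 - cos(x) is known exactly (the function choice and the particular points a and x are just for illustration):

```python
import math

# Hypothetical example: the graph is f(t) = sin(t), so the area function
# from 0 to x is A(x) = 1 - cos(x), which we can compute exactly.
def area(x):
    return 1 - math.cos(x)

a, x = 1.0, 1.3  # starting input a, and the nudged input x

# The three terms read straight off the diagram:
#   area up to a     -> A(a)
#   rectangle        -> A'(a) * (x - a),           with A'(a) = sin(a)
#   little triangle  -> 1/2 * A''(a) * (x - a)**2, with A''(a) = cos(a)
approx = area(a) + math.sin(a) * (x - a) + 0.5 * math.cos(a) * (x - a) ** 2

error = abs(area(x) - approx)
print(error)  # small: the leftover error shrinks like (x - a)**3
```

Dropping the triangle term leaves only the rectangle, and the error grows to order (x - a)²; the second-order term is doing real work.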

If you wanted, we could call it an end here, and you would have a phenomenally useful approximation tool in these Taylor polynomials. But if you're thinking like a mathematician, one question you might ask is whether or not it makes sense to never stop, and just add infinitely many terms.

In math, an infinite sum is called a series, so even though one of these approximations with finitely many terms is called a Taylor polynomial, adding all infinitely many terms gives what's called a Taylor series. You have to be really careful with the idea of an infinite series, because it doesn't actually make sense to add infinitely many things; you can only hit the plus button on the calculator so many times. But if you have a series where adding more and more of the terms, which makes sense at each step, gets you increasingly close to some specific value, what you say is that the series converges to that value. Or, if you're comfortable extending the definition of equality to include this kind of series convergence, you'd say that the series as a whole, this infinite sum, equals the value it's converging to.

For example, look at the Taylor polynomial for e to the x, and plug in some input, like x equals 1. As you add more and more polynomial terms, the total sum gets closer and closer to the value e, so you say that this infinite series converges to the number e, or, what's saying the same thing, that it equals the number e.

In fact, it turns out that if you plug in any other value of x, like x equals 2, and look at the value of the higher and higher order Taylor polynomials at this value, they will converge towards e to the x, which is e squared. This is true for any input, no matter how far away from 0 it is, even though these Taylor polynomials are constructed only from derivative information gathered at the input 0. In a case like this, we say that e to the x equals its own Taylor series at all inputs x, which is kind of a magical thing to have happen.
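You can watch that convergence happen in a few lines of code; the partial sums below use the familiar terms xⁿ/n!, and the particular cutoffs are arbitrary choices:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of the Taylor series for e**x centered at 0."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

# At x = 1 the partial sums close in on e, and at x = 2 on e squared,
# even though every coefficient comes from derivatives taken at 0.
for n in (2, 5, 10, 15):
    print(n, exp_taylor(1, n), exp_taylor(2, n))

print(abs(exp_taylor(1, 15) - math.e))       # already tiny
print(abs(exp_taylor(2, 20) - math.e ** 2))  # also tiny
```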

Even though this is also true for a couple of other important functions, like sine and cosine, sometimes these series only converge within a certain range around the input whose derivative information you're using.

If you work out the Taylor series for the natural log of x around the input x equals 1, which is built by evaluating the higher order derivatives of the natural log of x at x equals 1, this is what it would look like. When you plug in an input between 0 and 2, adding more and more terms of this series will indeed get you closer and closer to the natural log of that input. But outside of that range, even by just a little bit, the series fails to approach anything. As you add on more and more terms, the sum bounces back and forth wildly. It does not, as you might expect, approach the natural log of that value, even though the natural log of x is perfectly well defined for inputs above 2. In some sense, the derivative information of ln(x) at x equals 1 doesn't propagate out that far.

In a case like this, where adding more terms of the series doesn't approach anything, you say that the series diverges. And that maximum distance between the input you're approximating near and points where the outputs of these polynomials actually converge is called the radius of convergence for the Taylor series.
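Here's a small sketch of that behavior, using the standard series ln(x) = Σ (-1)ⁿ⁺¹ (x - 1)ⁿ / n centered at 1; the test inputs 1.5 and 2.5 are arbitrary picks on either side of the radius of convergence:

```python
import math

def log_taylor(x, n_terms):
    """Partial sum of the Taylor series for ln(x) centered at x = 1."""
    return sum((-1) ** (n + 1) * (x - 1) ** n / n
               for n in range(1, n_terms + 1))

# Inside the radius of convergence (0 < x <= 2), the partial sums settle
# down on the true logarithm...
print(abs(log_taylor(1.5, 50) - math.log(1.5)))  # tiny

# ...but just past it, consecutive partial sums swing apart by huge
# amounts instead of approaching ln(2.5).
print(log_taylor(2.5, 49), log_taylor(2.5, 50))
```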

There remains more to learn about Taylor series. There are many use cases, tactics for placing bounds on the error of these approximations, tests for understanding when series do and don't converge; and for that matter, there remains more to learn about calculus as a whole, and the countless topics not touched by this series. The goal with these videos is to give you the fundamental intuitions that make you feel confident and efficient in learning more on your own, and potentially even rediscovering more of the topic for yourself. In the case of Taylor series, the fundamental intuition to keep in mind as you explore more of what there is, is that they translate derivative information at a single point to approximation information around that point.

Thank you once again to everybody who supported this series. The next series like it will be on probability, and if you want early access as those videos are made, you know where to go.

Thank you.
