A Taylor series is a powerful mathematical concept that represents a function as an infinite sum of terms. These terms are calculated from the values of the function's derivatives at a specific point. The general form of a Taylor series centered at point a is shown here. Each term involves a derivative of the function evaluated at point a, divided by a factorial, and multiplied by the distance from x to a raised to a power. On the right, we can see how Taylor polynomials of increasing order approximate the exponential function near zero. As we add more terms, the approximation becomes more accurate over a wider range.
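To make the general form concrete, here is a minimal Python sketch (the function name and default term count are illustrative choices, not from the narration) that evaluates the Taylor polynomial of the exponential function centered at a point a. It relies on the fact that every derivative of e to the x is again e to the x, so the k-th coefficient is simply e to the a over k factorial:

```python
import math

def taylor_exp(x, a=0.0, n_terms=8):
    """Approximate e^x by its degree-(n_terms - 1) Taylor polynomial
    centered at a. Every derivative of e^x equals e^x, so the k-th
    derivative evaluated at a is e^a for all k.
    """
    fa = math.exp(a)  # f^(k)(a) = e^a for every k
    return sum(fa * (x - a) ** k / math.factorial(k) for k in range(n_terms))

# Near the center, even a modest number of terms is very accurate:
print(taylor_exp(0.5))            # close to math.exp(0.5) ~ 1.6487
print(taylor_exp(0.5, a=1.0))     # same target, expanded around a = 1
```

Increasing `n_terms` widens the range of x over which the polynomial tracks the true function, matching what the graph shows.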
Let's look at some common Taylor series expansions centered at zero, also known as Maclaurin series. For the exponential function e to the x, we get a series with all positive terms, where each term is x to the n divided by n factorial. For sine of x, we get an alternating series with only odd powers of x. For cosine of x, we get an alternating series with only even powers of x. And for the natural logarithm of 1 plus x, we get an alternating series with x to the n divided by n, starting from n equals 1. On the right, we can see how Taylor polynomials of increasing order approximate the sine function. Near the center even the low-order polynomials are accurate; adding more terms matters most as we move away from the center point, where the short polynomials drift away from the true function.
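The three trigonometric and logarithmic expansions above can be written down directly as truncated partial sums. A short sketch, with illustrative function names and truncation points of my own choosing:

```python
import math

def maclaurin_sin(x, n_terms=6):
    # sin x = sum over n of (-1)^n * x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

def maclaurin_cos(x, n_terms=6):
    # cos x = sum over n of (-1)^n * x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(n_terms))

def maclaurin_log1p(x, n_terms=20):
    # ln(1 + x) = sum from n = 1 of (-1)^(n+1) * x^n / n, valid for -1 < x <= 1
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, n_terms + 1))

print(maclaurin_sin(1.0), math.sin(1.0))
print(maclaurin_cos(1.0), math.cos(1.0))
print(maclaurin_log1p(0.5), math.log1p(0.5))
```

Note that the logarithm series needs far more terms for comparable accuracy: its coefficients shrink only like 1/n, while the factorials in sine and cosine shrink much faster.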
A key question with Taylor series is: when does the series actually converge to the original function? Each Taylor series has a radius of convergence, which can be found using the ratio test shown here. Within this radius, the series converges, and for the familiar functions we've been looking at it converges to the function itself; beyond it, the series diverges, and on the boundary convergence has to be checked case by case. When using only a finite number of terms, we introduce an error. This error can be bounded using the Lagrange remainder term formula, where c is some point between a and x. The error generally grows as we move away from the center point, as shown in the yellow region on our graph. Taylor series have numerous practical applications, including numerical approximations in calculators and computers, solving differential equations that can't be solved exactly, and modeling complex systems in physics and engineering.
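Both ideas are easy to check numerically. For the logarithm series the coefficients are a_n = (−1)^(n+1)/n, so the ratio test gives |a_n / a_(n+1)| = (n+1)/n, which tends to 1: the radius of convergence is 1. For sine, every derivative is a sine or cosine and so is bounded by 1 in absolute value, which turns the remainder formula into the simple bound |R_n(x)| ≤ |x|^(n+1)/(n+1)!. A small Python check (the helper name is my own):

```python
import math

# Ratio test for ln(1 + x): |a_n / a_(n+1)| = (n + 1) / n tends to 1,
# so the radius of convergence is R = 1.
radius_estimates = [(n + 1) / n for n in (10, 100, 1000)]
print(radius_estimates)  # values approaching 1

def sin_remainder_bound(x, degree):
    """Lagrange error bound for the degree-n Taylor polynomial of sine.
    Every derivative of sine is bounded by 1 in absolute value, so
    |R_n(x)| <= |x|^(n+1) / (n+1)!.
    """
    return abs(x) ** (degree + 1) / math.factorial(degree + 1)

# Compare the bound with the actual error of the cubic x - x^3/6 at x = 0.5:
x = 0.5
cubic = x - x ** 3 / 6
print(abs(math.sin(x) - cubic), sin_remainder_bound(x, 3))
```

The actual error stays below the bound, and both shrink rapidly as the degree grows or as x moves toward the center.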
Let's see a practical example of using Taylor series to compute the value of sine of 0.3 radians. We'll use the Taylor series for sine centered at zero. For x equals 0.3, we calculate each term: The first term is simply x, which is 0.3. The second term is negative x cubed divided by 3 factorial, which equals negative 0.0045. The third term is x to the fifth divided by 5 factorial, approximately 0.0000203. The fourth term is even smaller, about negative 0.00000004. Adding just the first two terms gives 0.2955, and the actual value of sine of 0.3 is about 0.2955202, so even that crude approximation is off by only about 0.00002. Including the third term gives 0.2955203, within about 0.00000004 of the true value! On the right, we can see how each successive Taylor polynomial gets closer to the actual sine function at x equals 0.3. The fifth-order approximation is practically indistinguishable from the true value at this point.
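The arithmetic in this worked example takes only a few lines of Python to verify:

```python
import math

x = 0.3
terms = [
    x,                            # first term:  0.3
    -x ** 3 / math.factorial(3),  # second term: -0.0045
    x ** 5 / math.factorial(5),   # third term:  ~0.0000203
    -x ** 7 / math.factorial(7),  # fourth term: ~ -0.00000004
]
approx = sum(terms)
print(approx)                     # ~0.2955202
print(abs(approx - math.sin(x)))  # tiny residual error
```

With all four terms the residual error is on the order of the next omitted term, x to the ninth over 9 factorial, which is smaller than one part in ten billion here.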
To summarize what we've learned about Taylor series: First, a Taylor series represents a function as an infinite sum of terms calculated from the function's derivatives at a single point. The general form of a Taylor series centered at point a involves the derivatives of the function at that point, divided by factorials, and multiplied by powers of x minus a. We've seen common examples like the series for e to the x, sine of x, cosine of x, and natural logarithm of 1 plus x. An important concept is that Taylor series converge to the original function only within a specific radius of convergence. Outside this radius, the series may diverge or converge to a different value. Finally, Taylor series have numerous practical applications, including numerical calculations in computers, solving differential equations that can't be solved exactly, and modeling complex physical systems. This powerful mathematical tool connects calculus, infinite series, and approximation theory in an elegant way.