Differential and Integral Calculus
1. Sequences
Basics of sequences
This section contains the most important definitions about sequences. These definitions introduce the general notion of a sequence, which is then restricted to real number sequences.
Note. Characteristics of the set give certain characteristics to the sequence. Because is ordered, the terms of the sequence are ordered.
Definition: Terms and Indices
A sequence can be denoted as
instead of The numbers are called the terms of the sequence.
Because of the mapping we can assign a unique number to each term. We write this number as a subscript and define it as the index; it follows that we can identify any term of the sequence by its index.
A few easy examples
Example 1: The sequence of natural numbers
The sequence defined by is called the sequence of natural numbers. Its first few terms are: This special sequence has the property that every term is the same as its index.
Example 2: The sequence of triangular numbers
Triangular numbers get their name due to the following geometric visualization: Stacking coins to form a triangular shape gives the following diagram:
To the first coin in the first layer we add two coins in a second layer to form the second picture . In turn, adding three coins to forms . From a mathematical point of view, this sequence is the result of summing natural numbers. To calculate the 10th triangular number we need to add the first 10 natural numbers: In general form the sequence is defined as:
This motivates the following definition:
Example 3: Sequence of square numbers
The sequence of square numbers is defined by: . The terms of this sequence can also be illustrated by the addition of coins.
Interestingly, the sum of two consecutive triangular numbers is a square number. So, for example, we have: and . In general this gives the relationship:
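As a short check of this relationship, assuming the standard closed form $T_n = \tfrac{n(n+1)}{2}$ for the $n$th triangular number (the symbol $T_n$ is our notation, not necessarily the book's):
\begin{align} T_{n-1}+T_n = \frac{(n-1)n}{2}+\frac{n(n+1)}{2} = \frac{n\bigl((n-1)+(n+1)\bigr)}{2} = n^2. \end{align}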
Example 4: Sequence of cube numbers
Analogously to the sequence of square numbers, we give the definition of cube numbers as . The first terms of the sequence are: .
Example 5.
Example 6.
Given the sequence with , i.e. Let be its 1st difference sequence. Then it follows that A term of has the general form
Some important sequences
There are a number of sequences that can be regarded as the basis of many ideas in mathematics, but that can also be used in other areas (e.g. physics, biology, or financial calculations) to model real situations. We will consider three of these sequences: the arithmetic sequence, the geometric sequence, and the Fibonacci sequence, i.e. the sequence of Fibonacci numbers.
The arithmetic sequence
There are many definitions of the arithmetic sequence:
Definition A: Arithmetic sequence
A sequence is called an arithmetic sequence when the difference between two consecutive terms is constant, thus:
Note: The explicit rule of formation follows directly from definition A: For the th term of an arithmetic sequence we also have the recursive formation rule:
Definition B: Arithmetic sequence
A non-constant sequence is called an arithmetic sequence (1st order) when its 1st difference sequence is a sequence of constant value.
This rule of formation gives the arithmetic sequence its name: The middle term of any three consecutive terms is the arithmetic mean of the other two, for example:
Example 1.
The sequence of natural numbers is an arithmetic sequence, because the difference, , between two consecutive terms is always given as .
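For reference, a minimal sketch of the two formation rules, written with the common (assumed) notation $a_1$ for the first term and $d$ for the constant difference:
\begin{align} a_{n+1} &= a_n + d &&\text{(recursive rule)},\\ a_n &= a_1 + (n-1)d &&\text{(explicit rule)}. \end{align}
For the natural numbers of Example 1 this holds with $a_1 = 1$ and $d = 1$.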
The geometric sequence
The geometric sequence has multiple definitions:
Definition: Geometric sequence
A sequence is called a geometric sequence when the ratio of any two consecutive terms is always constant , thus
Note. The recursive relationship of the terms of the geometric sequence and the explicit formula for the calculation of the nth term of a geometric sequence follow directly from the definition.
Again the name and the rule of formation of this sequence are connected: Here, the middle term of three consecutive terms is the geometric mean of the other two, e.g.:
Example 2.
Let and be fixed positive numbers. The sequence with , i.e. is a geometric sequence. If the sequence is monotonically increasing. If it is strictly decreasing. The corresponding range is finite in the case (namely, a singleton), otherwise it is infinite.
The Fibonacci sequence
The Fibonacci sequence is famous because it plays a role in many biological processes, for instance in plant growth, and is frequently found in nature. The recursive definition is:
Definition: Fibonacci sequence
Let and let for . The sequence is then called the Fibonacci sequence. The terms of the sequence are called the Fibonacci numbers.
The sequence is named after the Italian mathematician Leonardo of Pisa (ca. 1200 AD), also known as Fibonacci (son of Bonacci). He considered the size of a rabbit population and discovered the number sequence:
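A minimal sketch in Python that generates the first terms from the recursion, assuming the usual starting values $F_1 = F_2 = 1$ (the starting values are not shown above and are our assumption):

# Generate the first n Fibonacci numbers, assuming F_1 = F_2 = 1.
def fibonacci(n):
    terms = [1, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])  # F_k = F_{k-1} + F_{k-2}
    return terms[:n]

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

Note that 34 and 55, the spiral counts in the sunflower example below, are consecutive Fibonacci numbers.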
Example 3.
The structure of sunflower heads can be described by a system of two families of spirals, which radiate out from the centre in opposite directions; there are 55 spirals which run clockwise and 34 which run counter-clockwise.
Pineapples behave very similarly. There we have 21 spirals running in one direction and 34 running in the other. Cauliflower, cacti, and fir cones are also constructed in this manner.
Convergence, divergence and limits
The following chapter deals with the convergence of sequences. We will first introduce the idea of zero sequences. After that we will define the concept of general convergence.
Preliminary remark: Absolute value in
The absolute value function is fundamental in the study of convergence of real number sequences. Therefore we should summarise again some of the main characteristics of the absolute value function:
Theorem: Calculation Rule for the Absolute Value
Parts 1–3. The results follow directly from the definition and by dividing it up into separate cases according to the different signs of and .
Part 4. Here we divide the triangle inequality into different cases.
Case 1.
First let . Then it follows that and the desired inequality is shown.
Case 2. Case 3. Finally we consider the case and . Here we have two subcases:
For we have and thus from the definition of absolute value. Because then and therefore also . Overall we have:
For then . We have . Because , we have and thus . Overall we have:
The case and is proved analogously to Case 3, with and exchanged.
Zero sequences
Definition: Zero sequence
A sequence is called a zero sequence if for every there exists an index such that for every . In this case we also say that the sequence converges to zero.
Informally: We have a zero sequence, if the terms of the sequence with high enough indices are arbitrarily close to zero.
Example 1.
The sequence defined by , i.e. , is called the harmonic sequence. Clearly, it is positive for all , but as increases, the absolute value of each term decreases, getting closer and closer to zero.
Take for example , then choosing the index , it follows that , for all .
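The same check can be sketched numerically in Python, with a hypothetical tolerance of 0.01 (the value used in the text is elided):

import math

eps = 0.01                    # hypothetical tolerance epsilon
N = math.ceil(1 / eps) + 1    # for the harmonic sequence 1/n, any index beyond 1/eps works
print(all(1 / n < eps for n in range(N, N + 1000)))  # True: the terms stay below eps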
Example 2.
Consider the sequence . Let . We then obtain the index in such a way that for all terms with .
Note. To check whether a sequence is a zero sequence, you must choose an (arbitrary) with . Then search for an index after which all terms are smaller than this .
Example 3.
We consider the sequence , defined by
Because of the factors two consecutive terms have different signs; we call a sequence whose signs change in this way an alternating sequence.
We want to show that this sequence is a zero sequence. According to the definition we have to show that for every there exists an index such that we have the inequality for every term with .
Theorem: Characteristics of Zero sequences
Parts 1 and 2. If is a zero sequence, then according to the definition there is an index , such that for every and an arbitrary . But then we have ; this proves parts 1 and 2 are correct.
Part 3. If , then the result is trivial. Let and choose such that for all . Rearranging we get:
Part 4.
Because is a zero sequence, by the definition we have for all . Analogously, for the zero sequence there is a with for all .
Then for all it follows (using the triangle inequality) that:
Convergence, divergence
The concept of zero sequences can be expanded to give us the convergence of general sequences:
Example 4.
We consider the sequence where By plugging in large values of , we can see that for and therefore we can postulate that the limit is .
For a rigorous proof, we show that for every there exists an index such that for every term with the following relationship holds:
Firstly we estimate the inequality:
Now, let be an arbitrary constant. We then choose the index , such that Finally from the above inequality we have: Thus we have proven the claim and so by definition is the limit of the sequence.
If a sequence is convergent, then there is exactly one number which is the limit. This characteristic is called the uniqueness of convergence.
Assume ; choose with Then in particular
Because converges to , there is, according to the definition of convergence, an index with for . Furthermore, because converges to , there is also a with for . For we have:
Consequently we have obtained , which is a contradiction as . Therefore the assumption must be wrong, so .
Definition: Divergent, Limit
If, for a sequence , there exists an to which the sequence converges, then the sequence is called convergent and is called the limit of the sequence; otherwise it is called divergent.
Notation. is convergent to is also written: Such notation is allowed, as the limit of a sequence is always unique by the above Theorem (provided it exists).
Rules for convergent sequences
Theorem: Rules
Let and be sequences with and . Then for it follows that:
Informally: Sums, differences and products of convergent sequences are convergent.
Part 1. Let . We must show that for all it follows that: We estimate the left-hand side using:
Because and converge, for each given it holds true that:
Hence for all numbers . Therefore the sequence is a zero sequence and the desired inequality is shown.
Part 2. Let . We have to show that for all . An estimate of the left-hand side follows: We choose a number such that for all and . Such a value of exists by the theorem that convergent sequences are bounded. We can then use the estimate: For all we have and , and, putting everything together, the desired inequality is shown.
2. Series
Convergence
If the sequence of partial sums has a limit , then the series of the sequence converges and its sum is . This is denoted by
Divergence of a series
A series that does not converge is divergent. This can happen in three different ways:
- the partial sums tend to infinity
- the partial sums tend to minus infinity
- the sequence of partial sums oscillates so that there is no limit.
In the case of a divergent series the symbol does not really mean anything (it isn't a number). We can then interpret it as the sequence of partial sums, which is always well-defined.
Basic results
Geometric series
A geometric series converges if (or ), and then its sum is . If , then the series diverges.
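For reference, a sketch of the statement under the common (assumed) notation with first term $a$ and ratio $q$:
\begin{align} \sum_{k=0}^{\infty} a q^k = \frac{a}{1-q}, \qquad |q| < 1. \end{align}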
Rules of summation
Properties of convergent series:
Note: Compared to limits, there is no similar product-rule for series, because even for sums of two elements we have The correct generalization is the Cauchy product of two series, where also the cross terms are taken into account.
Note: The property cannot be used to justify the convergence of a series; cf. the following examples. This is one of the most common elementary mistakes people make when studying series!
Example
Explore the convergence of the series
Solution. The limit of the general term of the series is . As this is different from zero, the series diverges.
This is a classical result, first proven in the 14th century by Nicole Oresme; since then a number of proofs using different approaches have been published. Here we present two different approaches for comparison.
i) An elementary proof by contradiction. Suppose, for the sake of contradiction, that the harmonic series converges i.e. there exists
such that . In this case
Now, by direct comparison we get
hence, by the Properties of summation, it follows that
But this implies that , a contradiction. Therefore, the initial assumption that the harmonic series converges must be false and thus the series diverges.
ii) Proof using integral: Below a histogram with heights lies the graph of
the function , so comparing areas we have
as .
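As a purely numerical illustration (not part of the proof), the partial sums can be compared with the logarithm in Python:

import math

def harmonic_partial_sum(n):
    # s_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic_partial_sum(n), math.log(n))
# The partial sums stay above ln(n) and grow without bound, in line with the area comparison.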
Positive series
Summing a series in closed form is often difficult or even impossible; sometimes only a numerical approximation can be calculated. The first goal then is to find out whether a series is convergent or divergent.
A series is positive, if for all .
Convergence of positive series is quite straightforward:
Theorem 2.
A positive series converges if and only if the sequence of partial sums is bounded from above.
Why? Because the partial sums form an increasing sequence.
Example
Show that the partial sums of a superharmonic series satisfy for all , so the series converges.
Solution. This is based on the formula for , as it implies that for all .
This can also be proven with integrals.
Leonhard Euler found out in 1735 that the sum is actually π²/6. His proof was based on comparing the series with the product expansion of the sine function.
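A quick numerical check of Euler's result (a sketch; convergence is slow but visible):

import math

partial = sum(1.0 / k**2 for k in range(1, 100001))
print(partial, math.pi**2 / 6)  # approx 1.64492 vs 1.64493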
Absolute convergence
Theorem 3.
An absolutely convergent series converges (in the usual sense) and
This is a special case of the Comparison principle, see later.
Suppose that converges. We study separately the positive and negative
parts of :
Let
Since , the positive series and converge by Theorem 2.
Also, , so converges as a difference of two convergent series.
Example
Study the convergence of the alternating (= the signs alternate) series
Solution. Since and the superharmonic series converges, the original series is absolutely convergent. Therefore it also converges in the usual sense.
Alternating harmonic series
The usual convergence and absolute convergence are, however, different concepts:
Example
The alternating harmonic series converges, but not absolutely.
(Idea) Draw a graph of the partial sums to get the idea that even and odd index partial sums and are monotone and converge to the same limit.
The sum of this series is , which can be derived by integrating the formula of a geometric series.
(Figure: the partial sums of the series; points are joined by line segments for visualization purposes.)
Convergence tests
Comparison test
The preceding results generalize to the following:
Proof for Majorant. Since and
then is convergent as a difference of two convergent positive series.
Here we use the elementary convergence property (Theorem 2.) for positive series;
this is not circular reasoning!
Proof for Minorant. It follows from the assumptions that the partial sums of
tend to infinity, and the series is divergent.
Example
Solution. Since for all , the first series is convergent by the majorant principle.
On the other hand, for all , so the second series has a divergent harmonic series as a minorant. The latter series is thus divergent.
Ratio test
In practice, one of the best ways to study convergence/divergence of a series is the so-called ratio test, where the terms of the sequence are compared to a suitable geometric series:
Limit form of ratio test
(Idea) For a geometric series the ratio of two consecutive terms is exactly . According to the ratio test, the convergence of some other series can also be investigated in a similar way, when the exact ratio is replaced by the above limit.
In the formal definition of a limit . Thus starting from some index we have and the claim follows from Theorem 4.
In the case the general term of the series does not go to zero, so the series diverges.
The last case does not give any information.
This case occurs for the harmonic series (, divergent!) and superharmonic
(, convergent!) series. In these cases the convergence or divergence
must be settled in some other way, as we did before.
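As a small illustration of the ratio test in practice, consider the hypothetical series with general term $n/2^n$ (our example, not one from the text); the ratios tend to $1/2 < 1$, so the series converges:

def a(n):
    return n / 2.0**n   # general term of the hypothetical series

for n in (1, 5, 10, 50):
    print(n, a(n + 1) / a(n))   # (n+1)/(2n) -> 1/2 as n grows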
3. Continuity
In this section we define the limit of a function at a point . It is assumed that the reader is already familiar with the limit of a sequence, the real line and the general concept of a function of one real variable.
Limit of a function
For a subset of real numbers, denoted by , assume that is such a point that there is a sequence of points such that as . Here the set is often the set of all real numbers, but sometimes an interval (open or closed).
Example 1.
Note that it is not necessary for to be in . For example, the sequence as in , and for all but is not in .
Limit of a function
Example 3.
The function defined by does not have a limit at the point . To formally prove this, take sequences , defined by and for . Then both sequences are in , but and for any .
Example 5.
One-sided limits
An important property of limits is that they are always unique. That is, if and , then . Although a function may have only one limit at a given point, it is sometimes useful to study the behavior of the function when approaches the point from the left or the right side. These limits are called the left and the right limit of the function at , respectively.
Definition 2: One-sided limits
Suppose is a set in and is a function defined on the set . Then we say that has a left limit at , and write if, as for every sequence in the set , such that as .
Similarly, we say that has a right limit at , and write if, as for every sequence in the set , such that as .
Example 6.
The sign function is defined on . Its left and right limits at are However, the function does not have a limit at .
Limit rules
The following limit rules are immediately obtained from the definition and basic algebra of real numbers.
Limits and continuity
In this section, we define the continuity of a function. The intuitive idea behind continuity is that the graph of a continuous function is a connected curve. However, this is not sufficient as a mathematical definition for several reasons. For example, by using this definition, one cannot easily decide whether is a continuous function or not.
Example 1.
Let . Functions defined by , , are continuous at every point .
Why? If , then and . For , we have and hence, . Similarly, and .
Example 2.
Let . We define a function by Then Therefore is not continuous at the point .
Some basic properties of continuous functions of one real variable are given next. From the limit rules (Theorem 2) we obtain:
Theorem 3.
The sum, the product and the difference of continuous functions are continuous. Then, in particular, polynomials are continuous functions. If and are polynomials and , then is continuous at a point .
A composition of continuous functions is continuous if it is defined:
Theorem 4.
Let and . Suppose that is continuous at a point and is continuous at . Then is continuous at a point .
Note. If is continuous, then is continuous.
Why?
Note. If and are continuous, then and are continuous. (Here .)
Why?
Delta-epsilon definition
The so-called -definition for continuity is given next. The basic idea behind this test is that, for a function continuous at , the values of should get closer to as gets closer to .
This is the standard definition of continuity in mathematics, because it also works for more general classes of functions than the ones on this course, but it is not used in high-school mathematics. This important definition will be studied in depth in Analysis 1 / Mathematics 1.
Example 3.
Example 4.
Let . We define a function by In Example 2 we saw that this function is not continuous at the point . To prove this using the -test, we need to find some and some such that for all , , but .
Proof. Let and . By choosing , we have
and
Therefore by Theorem 5 is not continuous at the point .
Properties of continuous functions
This section contains some fundamental properties of continuous functions. We start with the Intermediate Value Theorem for continuous functions, also known as Bolzano's Theorem. This theorem states that a function that is continuous on a given (closed) real interval attains all values between its values at the endpoints of the interval. Intuitively, this follows from the fact that the graph of a function defined on a real interval is a continuous curve.
Example 1.
Consider the function , where . Show that there is at least one such that .
Solution. As a polynomial function, is continuous. Because and , by the Intermediate Value Theorem there is at least one such that .
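The Intermediate Value Theorem also underlies a simple numerical root-finding scheme (bisection). A sketch for a hypothetical polynomial $f(x) = x^3 - x - 1$ on $[1, 2]$, where $f(1) < 0 < f(2)$ (our example, not the one in the text):

def f(x):
    return x**3 - x - 1   # hypothetical polynomial: f(1) = -1 < 0 < 5 = f(2)

a, b = 1.0, 2.0
for _ in range(40):           # repeatedly halve the subinterval containing the sign change
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
print(0.5 * (a + b))          # approx 1.3247; a zero exists by the Intermediate Value Theorem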
Next we prove that a continuous function defined on a closed real interval is necessarily bounded. For this result, it is important that the interval is closed. A counterexample for an open interval is given after the next theorem.
Example 4.
Example 5.
Let , where The domain of the function is . To determine the range of the function, we first notice that the function is decreasing. We will now show this.
Because , and Thus, if then , which means that the function is decreasing.
We know that a decreasing function has its minimum value at the right endpoint of the interval. Thus, the minimum value of is . Respectively, a decreasing function has its maximum value at the left endpoint of the interval and so the maximum value of is
As a polynomial function, is continuous and it therefore attains all the values between its minimum and maximum values. Hence, the range of is .
Example 6.
Suppose that is a polynomial. Then is continuous on and, by Theorem 7, is bounded on every closed interval , . Furthermore, by Theorem 3, must have minimum and maximum values on .
Note. Theorem 8 is connected to the Intermediate Value Theorem in the following way:
4. Derivative
Derivative
The definition of the derivative of a function is given next. We start with an example illustrating the idea behind the formal definition.
Example 0.
The graph below shows how far a cyclist gets from his starting point.
a) Look at the red line. We can see that in three hours, the cyclist moved km. The average speed of the whole trip is km/h.
b) Now look at the green line. We can see that during the third hour the cyclist moved km further. That makes the average speed of that time interval km/h.
Notice that the slope of the red line is and that the slope of the blue line is . These are the same values as the corresponding average speeds.
c) Look at the blue line. It is the tangent of the curve at the point . Using the same principle as with average speeds, we conclude that two hours after departure, the speed of the cyclist was km/h km/h.
Now we will proceed to the general definition:
Definition: Derivative
Let . The derivative of function at the point is If exists, then is said to be differentiable at the point .
Note: Since , then , and thus the definition can also be written in the form
Interpretation. Consider the curve . Now if we draw a line through the points and , we see that the slope of this line is When , the line intersects with the curve only in the point . This line is the tangent of the curve at the point and its slope is which is the derivative of the function at . Hence, the tangent is given by the equation
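Written out with $x_0$ denoting the point of tangency (standard notation, assumed here), the tangent line reads
\begin{align} y = f(x_0) + f'(x_0)(x - x_0). \end{align}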
Example 2.
Let be the function . We find the derivative of .
Immediately from the definition we get:
Here is the slope of the tangent line. Note that the derivative at does not depend on because is the equation of a line.
Note. When , we get and . The derivative of a constant function is zero.
Example 3.
Let be the function . Does have a derivative at ?
The graph has no tangent at the point : Thus does not exist.
Conclusion. The function is not differentiable at the point .
Remark. Let . If exists for every then we get a function . We write:
(1) $f'(x) = \dfrac{\mathrm{d}f}{\mathrm{d}x}(x)$,
(2) $f''(x) = (f')'(x) = \dfrac{\mathrm{d}^2 f}{\mathrm{d}x^2}(x)$,
(3) $f'''(x) = (f'')'(x) = \dfrac{\mathrm{d}^3 f}{\mathrm{d}x^3}(x)$,
(4) $f^{(4)}(x) = (f''')'(x) = \dfrac{\mathrm{d}^4 f}{\mathrm{d}x^4}(x)$,
...
Here is called the second derivative of at , is the third derivative, and so on.
We introduce the notation \begin{eqnarray} C^n\bigl( ]a,b[\bigr) =\{ f\colon \, ]a,b[\, \to \mathbb{R} & \mid & f \text{ is } n \text{ times differentiable on the interval } ]a,b[ \nonumber \\ & & \text{ and } f^{(n)} \text{ is continuous}\}. \nonumber \end{eqnarray} These functions are said to be n times continuously differentiable.
Example 4.
The distance moved by a cyclist (or a car) is given by . Then the speed at the moment is and the acceleration is .
Linearization and differential
Properties of derivative
Next we give some useful properties of the derivative. These properties allow us to find derivatives for some familiar classes of functions such as polynomials and rational functions.
Continuity and derivative
If is differentiable at the point , then is continuous at the point : Why? Because if is differentiable, then we get as .
Note. If a function is continuous at the point , it doesn't have to be differentiable at that point. For example, the function is continuous, but not differentiable at the point .
Differentiation Rules
For we repeatedly apply the product rule, and obtain
The case of negative is obtained from this and the product rule applied to the identity .
From the power rule we obtain a formula for the derivative of a polynomial. Let where . Then
Suppose that is differentiable at and . We determine
From the definition we obtain:
Example 3.
The one-sided limits of the difference quotient have different signs at a local extremum. For example, for a local maximum it holds that \begin{eqnarray} \frac{f(x_0+h)-f(x_0)}{h} = \frac{\text{negative} }{\text{positive}}&\le& 0, \text{ when } h>0, \nonumber \\ \frac{f(x_0+h)-f(x_0)}{h} = \frac{\text{negative}}{\text{negative}}&\ge& 0, \text{ when } h<0 \nonumber \end{eqnarray} and is so small that is a maximum on the interval .
Derivatives of Trigonometric Functions
In this section, we give differentiation formulas for trigonometric functions , and .
The Chain Rule
In this section we learn a formula for finding the derivative of a composite function. This important formula is known as the Chain Rule.
The Chain Rule.
Proof.
Example 1.
The problem is to differentiate the function . We take and and differentiate the composite function . As we get
Example 2.
We need to differentiate the function . Take and , then differentiate the composite function .
Remark. Let and . Now Similarly, one may obtain even more complex rules for composites of multiple functions.
Extremal Value Problems
We will discuss the Intermediate Value Theorem for differentiable functions, and its connections to extremal value problems.
Definition: Local Maxima and Minima
A function has a local maximum at the point , if for some and for all such that , we have .
Similarly, a function has a local minimum at the point , if for some and for all such that , we have .
A local extremum is a local maximum or a local minimum.
Remark. If is a local maximum value and exists, then Hence .
We get:
Example 1.
Let be defined by Then and we can see that the local maximum and minimum of are obtained at the points and .
Finding the global extrema
In practice, when we are looking for the local extrema of a given function, we need to check three kinds of points:
the zeros of the derivative
the endpoints of the domain of definition (interval)
points where the function is not differentiable
If we know beforehand that the function has a minimum/maximum, then we start off by finding all the possible local extrema (the points described above), evaluate the function at these points, and pick the greatest/smallest of these values.
Example 2.
Let us find the smallest and greatest value of the function , . Since the function is continuous on a closed interval, then it has a maximum and a minimum. Since the function is differentiable, it is sufficient to examine the endpoints of the interval and the zeros of the derivative that are contained in the interval.
The zeros of the derivative: . Since , we only need to evaluate the function at three points, , and . From these we can see that the smallest value of the function is and the greatest value is , respectively.
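The same recipe can be sketched in code for a hypothetical function $f(x) = x^3 - 3x$ on $[-2, 3]$ (our example; the function in the text is elided): evaluate $f$ at the endpoints and at the zeros of $f'$, then pick the smallest and greatest values.

def f(x):
    return x**3 - 3 * x            # hypothetical function on [-2, 3]

# f'(x) = 3x^2 - 3 = 0  <=>  x = -1 or x = 1, both inside [-2, 3]
candidates = [-2.0, 3.0, -1.0, 1.0]   # endpoints + zeros of the derivative
values = [f(x) for x in candidates]
print(min(values), max(values))       # -2.0 and 18.0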
Next we will formulate a fundamental result for differentiable functions. The basic idea here is that a change over an interval can only happen if there is change at some point on the interval.
Theorem 2.
(The Intermediate Value Theorem for Differentiable Functions). Let be continuous in the interval and differentiable in the interval . Then for some
Let be continuous in the interval and differentiable in the interval . Let us define
Now and is differentiable in the interval . According to Rolle's Theorem, there exists such that . Hence
This result has an important application:
Theorem 3.
Example 3.
Example 4.
We need to find a rectangle so that its area is and it has the least possible perimeter.
Let and be the sides of the rectangle. Then and we get . Now the perimeter is At which point does the function attain its minimum value? The function is continuous and differentiable when , and using the quotient rule, we get Now , when but we have defined that and are therefore only interested in the case . Let's draw a table:
(Sign table: the function is decreasing before the critical point and increasing after it.)
As the function is continuous, we now know that it attains its minimum at the point . Now we calculate the other side of the rectangle: .
Thus, the rectangle with the least possible perimeter is actually a square whose sides have length .
Example 5.
We must make a one-litre measuring cup shaped as a right circular cylinder without a lid. The problem is to find the size of the bottom and the height so that we need the least possible amount of material to make the measure.
Let be the radius and the height of the cylinder. The volume of the cylinder is dm and we can write from which we get
The amount of material needed is the surface area
Let function be defined by We must find the minimum value for function , which is continuous and differentiable, when . Using the reciprocal rule, we get Now , when
Let's draw a table:
(Sign table: the function is decreasing before the critical point and increasing after it.)
As the function is continuous, we now know that it gets its minimum value at the point . Then
This means that the least amount of material is needed for a measure that is approximately dm dm cm in diameter and dm cm high.
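A numerical sketch of this optimization (assuming, as above, a volume of 1 dm³, so that $h = 1/(\pi r^2)$ and the surface area is $A(r) = \pi r^2 + 2/r$; SciPy is used for the minimization):

import math
from scipy.optimize import minimize_scalar

def area(r):
    # open cylinder of volume 1: bottom pi*r^2 plus lateral surface 2*pi*r*h with h = 1/(pi*r^2)
    return math.pi * r**2 + 2.0 / r

res = minimize_scalar(area, bounds=(0.1, 5.0), method="bounded")
r = res.x
h = 1.0 / (math.pi * r**2)
print(r, h)   # r ≈ 0.683 dm and h ≈ 0.683 dm, i.e. the height equals the radius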
5. Taylor polynomial
Taylor polynomial
Definition: Taylor polynomial
Let be times differentiable at the point . Then the Taylor polynomial \begin{align} P_n(x)&=P_n(x;x_0)\\\ &=f(x_0)+f'(x_0)(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+ \\ & \dots +\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n\\ &=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\\ \end{align} is the best polynomial approximation of degree (with respect to the derivative) for a function , close to the point .
Note. The special case is often called the Maclaurin polynomial.
If is times differentiable at , then the Taylor polynomial has the same derivatives at as the function , up to the order (of the derivative).
The reason (case ): Let so that \begin{align} P_n'(x)&=c_1+2c_2x+3c_3x^2+\dots +nc_nx^{n-1}, \\ P_n''(x)&=2c_2+3\cdot 2\, c_3x+\dots +n(n-1)c_nx^{n-2}, \\ P_n'''(x)&=3\cdot 2\, c_3+\dots +n(n-1)(n-2)c_nx^{n-3}, \\ &\;\;\vdots \\ P_n^{(k)}(x)&=k!\,c_k + \text{terms containing } x, \\ &\;\;\vdots \\ P_n^{(n)}(x)&=n!\,c_n, \\ P_n^{(n+1)}(x)&=0. \end{align}
In this way we obtain the coefficients one by one: \begin{align} c_0= P_n(0)=f(0) &\Rightarrow c_0=f(0) \\ c_1=P_n'(0)=f'(0) &\Rightarrow c_1=f'(0) \\ 2c_2=P_n''(0)=f''(0) &\Rightarrow c_2=\frac{1}{2}f''(0) \\ \vdots & \\ k!c_k=P_n^{(k)}(0)=f^{(k)}(0) &\Rightarrow c_k=\frac{1}{k!}f^{(k)}(0). \\ \vdots &\\ n!c_n=P_n^{(n)}(0)=f^{(n)}(0) &\Rightarrow c_n=\frac{1}{n!}f^{(n)}(0). \end{align} Starting from index we cannot pose any new conditions, since .
Taylor's Formula
If the derivative exists and is continuous on some interval , then and the error term satisfies at some point . If there is a constant (independent of ) such that for all , then as .
Proof omitted here (mathematical induction or integral).
Examples of Maclaurin polynomial approximations: \begin{align} \frac{1}{1-x} &\approx 1+x+x^2+\dots +x^n =\sum_{k=0}^{n}x^k\\ e^x&\approx 1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\dots + \frac{1}{n!}x^n =\sum_{k=0}^{n}\frac{x^k}{k!}\\ \ln (1+x)&\approx x-\frac{1}{2}x^2+\frac{1}{3}x^3-\dots + \frac{(-1)^{n-1}}{n}x^n =\sum_{k=1}^{n}\frac{(-1)^{k-1}}{k}x^k\\ \sin x &\approx x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\dots +\frac{(-1)^n}{(2n+1)!}x^{2n+1} =\sum_{k=0}^{n}\frac{(-1)^k}{(2k+1)!}x^{2k+1}\\ \cos x &\approx 1-\frac{1}{2!}x^2+\frac{1}{4!}x^4-\dots +\frac{(-1)^n}{(2n)!}x^{2n} =\sum_{k=0}^{n}\frac{(-1)^k}{(2k)!}x^{2k} \end{align}
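As a numerical illustration of the sine approximation above (a sketch):

import math

def sin_maclaurin(x, n):
    # partial sum  sum_{k=0}^{n} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1) for k in range(n + 1))

x = 1.0
for n in range(4):
    print(n, sin_maclaurin(x, n), math.sin(x))
# already n = 3 (degree 7) approximates sin(1) with an error below 3e-6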
Example
Which polynomial approximates the function in the interval so that the absolute value of the error is less than ?
We use Taylor's Formula for at . Then independently of and the point . Also, in the interval in question, we have . The requirement will be satisfied (at least) if This inequality must be solved by trying different values of ; it is true for .
The required approximation is achieved with , which for sine is the same as .
Check from graphs: is not enough, so the theoretical bound is sharp!
Taylor polynomial and extreme values
If , then also some higher derivatives may be zero: Then the behaviour of near is determined by the leading term (after the constant term ) of the Taylor polynomial.
This leads to the following result:
Extreme values
Newton's method
The first Taylor polynomial is the same as the linearization of at the point . This can be used in some simple approximations and numerical methods.
Newton's method
The equation can be solved approximately by choosing a starting point (e.g. by looking at the graph) and defining for This leads to a sequence , whose terms usually give better and better approximations for a zero of .
The recursion formula is based on the geometric idea of finding an approximative zero of by using its linearization (i.e. the tangent line).
Example
Find an approximate value of by using Newton's method.
We use Newton's method for the function and initial value . The recursion formula becomes from which we obtain , , and so on.
By experimenting with these values, we find that the number of correct decimal places doubles at each step, and gives already 100 correct decimal places, if intermediate steps are calculated with enough precision.
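A sketch of the computation, assuming the value sought is $\sqrt{2}$, i.e. $f(x) = x^2 - 2$ with starting value $x_0 = 2$ (these specifics are elided in the text and are our assumption):

def newton_step(x):
    # f(x) = x^2 - 2, f'(x) = 2x  =>  x_{n+1} = x_n - f(x_n)/f'(x_n) = (x_n + 2/x_n)/2
    return 0.5 * (x + 2.0 / x)

x = 2.0
for n in range(5):
    x = newton_step(x)
    print(n + 1, x)
# 1.5, 1.41666..., 1.41421568..., 1.41421356237468..., then the double-precision value of sqrt(2)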
Taylor series
Taylor series
If the error term in Taylor's Formula goes to zero as increases, then the limit of the Taylor polynomial is the Taylor series of (= Maclaurin series for ).
The Taylor series of is of the form This is an example of a power series.
The Taylor series can be formed as soon as has derivatives of all orders at and they are substituted into this formula. There are two problems related to this: Does the Taylor series converge for all values of ?
Answer: Not always; for example, the function has a Maclaurin series (= geometric series) converging only for , although the function is differentiable for all :
If the series converges for some , then does its sum equal ? Answer: Not always; for example, the function satisfies for all (elementary but difficult calculation). Thus its Maclaurin series is identically zero and converges to only at .
Conclusion: Taylor series should be studied carefully using the error terms. In practice, the series are formed by using some well known basic series.
Examples
\begin{align} \frac{1}{1-x} &= \sum_{k=0}^{\infty} x^k,\ \ |x|< 1 \\ e^x &= \sum_{k=0}^{\infty} \frac{1}{k!}x^k, \ \ x\in \mathbb{R} \\ \sin x &= \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} x^{2k+1}, \ \ x\in \mathbb{R} \\ \cos x &= \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k)!} x^{2k},\ \ x\in \mathbb{R} \\ (1+x)^r &= 1+\sum_{k=1}^{\infty} \frac{r(r-1)(r-2)\dots (r-k+1)}{k!}x^k, |x|<1 \end{align} The last is called the Binomial Series and is valid for all . If , then starting from , all the coefficients are zero and in the beginning
Compare this to the Binomial Theorem: for .
Power series
Definition: Power series
A power series is of the form The point is the centre and the are the coefficients of the series.
There are only three essentially different cases:
Abel's Theorem.
- The power series converges only for (and then it consists of the constant only)
- The power series converges for all
- The power series converges on some interval (and possibly in one or both of the end points), and diverges for other values of .
The number is the radius of convergence of the series. In the first two cases we say that or respectively.
Example
For which values of the variable does the power series converge?
We use the ratio test with . Then as . By the ratio test, the series converges for , and diverges for . In the border-line cases the general term of the series does not tend to zero, so the series diverges.
Result: The series converges for , and diverges otherwise.
Definition: Sum function
In the interval where the series converges, we can define a function by setting \begin{equation} \label{summafunktio} f(x) = \sum_{k=0}^{\infty} c_k(x-x_0)^k, \tag{1} \end{equation} which is called the sum function of the power series.
The sum function is continuous and differentiable on . Moreover, the derivative can be calculated by differentiating the sum function term by term: Note. The constant term disappears and the series starts with . The differentiated series converges in the same interval ; this may sound a bit surprising because of the extra coefficient .
Example
Find the sum function of the power series
This series is obtained by differentiating termwise the geometric series (with ). Therefore, \begin{align} 1+2x+3x^2+4x^3+\dots &= D(1+x+x^2+x^3+x^4+\dots ) \\ &= \frac{d}{dx}\left( \frac{1}{1-x}\right) = \frac{1}{(1-x)^2}. \end{align} Multiplying with we obtain which is valid for .
In the case we can also integrate the sum function termwise: Often the definite integral can be extended up to the end points of the interval of convergence, but this is not always the case.
Example
Calculate the sum of the alternating harmonic series.
Let us first substitute into the geometric series. This yields By integrating both sides from to we obtain Note. Extending the limit of integration all the way up to should be justified more rigorously here. We shall return to integration later in the course.
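A numerical sketch of the result (the partial sums approach $\ln 2 \approx 0.6931$, though slowly):

import math

s = sum((-1)**(k + 1) / k for k in range(1, 100001))
print(s, math.log(2))   # approx 0.693142 vs 0.693147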
6. Elementary functions
This chapter gives some background to the concept of a function. We also consider some elementary functions from a (possibly) new viewpoint. Many of these should already be familiar from high school mathematics, so in some cases we just list the main properties.
Functions
Definition: Function
A function is a rule that determines for each element exactly one element . We write .
Definition: Domain and codomain
In the above definition of a function is the domain (of definition) of the function and is called the codomain of .
Definition: Image of a function
The image of is the subset
of . An alternative name for image is range.
For example, , , has codomain , but its image is .
The function in the previous example can also be defined as , , and then the codomain is the same as the image. In principle, this modification can always be done, but it is not reasonable in practice.
Inverse functions
Observe: A function becomes surjective if all redundant points of the codomain are left out. A function becomes injective if the domain is reduced so that no value of the function is obtained more than once.
Another way of defining these concepts is based on the number of solutions to an equation:
Definition
Definition: Inverse function
If is bijective, then it has an inverse , which is uniquely determined by the condition
The inverse satisfies for all and for all .
The graph of the inverse is the mirror image of the graph of with respect to the line : A point lies on the graph of if and only if the point lies on the graph of . The geometric interpretation of is precisely the reflection with respect to .
If and is strictly monotone, then the function has an inverse.
If here is an interval and is continuous, then is also continuous in the set .
Theorem: Derivative of the inverse
Let be differentiable and bijective, so that it has an inverse . As the graphs and are mirror images of each other, it seems geometrically obvious that also is differentiable, and we actually have if .
Transcendental functions
Trigonometric functions
Unit of measurement of an angle = rad: the arc length of the arc on the unit circle that corresponds to the angle.
The functions are defined in terms of the unit circle so that , , is the point on the unit circle corresponding to the angle , measured counterclockwise from the point .
Proof: Pythagorean Theorem.
Addition formulas:
Basic properties (from the unit circle!)
Proof: Geometrically, or more easily with vectors and matrices.
Example
It follows that the functions and satisfy the differential equation that models harmonic oscillation. Here is the time variable and the constant is the angular frequency of the oscillation. We will see later that all the solutions of this differential equation are of the form with constants. They will be uniquely determined if we know the initial location and the initial velocity . All solutions are periodic and their period is .
Arcus functions
The trigonometric functions have inverses if their domain and codomains are chosen in a suitable way.
Here we will only prove the first result (1). By differentiating both sides of the equation for :
The last row follows also directly from the formula for the derivative of an inverse.
Example
Example
Derive the addition formula for tan, and show that
Solutions: Voluntary exercises. The first can be deduced by looking at a right triangle with the length of the hypotenuse equal to 1 and one leg of length .
Introduction: Radioactive decay
Let model the number of radioactive nuclei at time . During a short time interval the number of decaying nuclei is (approximately) directly proportional to the length of the interval, and also to the number of nuclei at time : The constant depends on the substance and is called the decay constant. From this we obtain and in the limit as we end up with the differential equation .
Exponential function
Definition: Exponential function
The Exponential function exp: This definition (using the series expansion) is based on the conditions and , which imply that for all , so the Maclaurin series is the one above.
The connections between different expressions are surprisingly tedious to prove, and we omit the details here. The main steps include the following:
From here on we write . Properties:
for all .
Differential equation
Theorem
Let be a constant. All solutions of the ordinary differential equation (ODE) are of the form , where is a constant. If we know the value of at some point , then the constant will be uniquely determined.
Euler's formula
Definition: Complex numbers
Imaginary unit : a strange creature satisfying . The complex numbers are of the form , where . We will return to these later.
Theorem: Euler's formula
If we substitute as a variable in the exponential function, and collect real terms separately, we obtain Euler's formula
As a special case we have Euler's identity . It connects the most important numbers , , , and , and the three basic operations: sum, multiplication, and power.
Logarithms
Note. The general logarithm with base is based on the condition for and .
Besides the natural logarithm, applications also use the Briggs logarithm with base 10, , and the binary logarithm with base 2, .
Usually (e.g. in mathematical software) is the same as .
Properties of the logarithm:
Hyperbolic functions
Definition: Hyperbolic functions
Hyperbolic sine (sinus hyperbolicus) , hyperbolic cosine (cosinus hyperbolicus) and hyperbolic tangent are defined as
Properties: ; all trigonometric formulas have their hyperbolic counterparts, which follow from the properties , . In these formulas, the sign of changes, but the other signs remain the same.
Hyperbolic inverse functions: the so-called area functions; the name area and the abbreviation ar refer to a certain geometric area related to the hyperbola :
7. Area
Area in the plane
We consider areas of plane sets bounded by closed curves. In the more general cases, the concept of area becomes theoretically very difficult.
The area of a planar set is defined by reducing it to the areas of simpler sets. The area cannot be "calculated" unless we first have a definition of "area" (although this is common practice in school mathematics).
Starting point
Polygon
A (simple) polygon is a plane set bounded by a closed curve that consists of a finite number of line segments without self-intersections.
Definition: Area of a polygon
The area of a polygon is defined by dividing it into a finite number of triangles (called a triangulation) and summing the areas of the triangles.
Theorem.
The sum of the areas of triangles in a triangulation of a polygon is the same for all triangulations.
General case
A surprise: The condition that is bounded by a closed curve (without self-intersections) does not guarantee that it has an area! Reason: The boundary curve can be so "wiggly" that it has positive "area". The first such example was constructed by [W.F. Osgood, 1903]:
Wikipedia: Osgood curve
8. Integral
From sum to integral
Definite integral
Geometric interpretation: Let be such that for all . How can we find the area of the region bounded by the function graph , the x-axis and the two lines and ?
The answer to this question is given by the definite integral Remark. The general definition of the integral does not necessitate the condition .
Integration of continuous functions
Definition: Partition
Let be continuous. A finite sequence of real numbers such that is called a partition of the interval .
Definition: Upper and lower sum
For each partition we define the related upper sum of the function as and the lower sum as
If is a positive function then the upper sum represents the total area of the rectangles circumscribing the function graph and similarly the lower sum is the total area of the inscribed rectangles.
Properties of partitions
Definition: Integrability
We say that a function is integrable if for every there exists a corresponding partition of such that
Definition: Integral
Integrability implies that there exists a unique real number such that for every partition . This is called the integral of over the interval and denoted by
Remark. This definition of the integral is sometimes referred to as the Darboux integral.
For non-negative functions this definition of the integral coincides with the idea of making the difference between the areas of the circumscribed and the inscribed rectangles arbitrarily small by using ever finer partitions.
Theorem.
A continuous function on a closed interval is integrable.
Here we will only provide the proof for continuous functions with bounded derivatives.
Suppose that is a continuous function and that there exists a constant such that for all . Let and define to be an equally spaced partition of such that Let and for some suitable points . The mean value theorem then states that and thus
Definition: Riemann integral
Suppose that is a continuous function and let be a partition of and be a sequence of real numbers such that for all . The partial sums are called the Riemann sums of . Suppose further that the partitions are such that as . The integral of can then be defined as the limit This definition of the integral is called the Riemann integral.
Remark. This definition of the integral turns out to be equivalent to that of the Darboux integral i.e. a function is Riemann-integrable if and only if it is Darboux-integrable and the values of the two integrals are always equal.
Example
Find the integral of over the interval using Riemann sums.
Let . Then , and for all . Thus the sequence is a proper partition of . This partition has the pleasant property that is a constant. Estimating the Riemann sums we now find that as and hence
This is of course the area of the triangular region bounded by the line , the -axis and the lines and .
Remark. Any interval can be partitioned into equally spaced subintervals by setting and .
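A small sketch of such a computation in Python, for a hypothetical choice $f(x) = x$ on $[0, 1]$ with equally spaced partitions and right endpoints as sample points (the concrete function and interval of the example above are elided):

def riemann_sum(f, a, b, n):
    # equally spaced partition; sample f at the right endpoint of each subinterval
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(1, n + 1))

for n in (10, 100, 10000):
    print(n, riemann_sum(lambda x: x, 0.0, 1.0, n))
# the sums tend to 1/2, the area of the triangle under y = x on [0, 1]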
Conventions
Piecewise-defined functions
Definition: Piecewise continuity
A function is called piecewise continuous if it is continuous except at a finite number of points and the one-sided limits of the function are defined and bounded at each of these points. It follows that the restriction of to each subinterval is continuous if the one-sided limits are taken to be the values of the function at the endpoints of the subinterval.
Definition: Piecewise integration
Let be a piecewise continuous function. Then where and is regarded as a continuous function on each subinterval . Usually functions which are continuous yet piecewise defined are also integrated using the same idea.
Important properties
Properties
Suppose that are piecewise continuous functions. The integral has the following properties
Fundamental theorem of calculus
Theorem: Mean value theorem
Let be a continuous function. Then there exists such that This is the mean value of on the interval and we denote it with .
Antiderivative
If on some open interval then is the antiderivative (or the primitive function) of . The fundamental theorem of calculus guarantees that for every continuous function there exists an antiderivative The antiderivative is not necessarily expressible as a combination of elementary functions even if were an elementary function, e.g. . Such primitives are called nonelementary antiderivatives.
Suppose that for all . Then the derivative of is identically zero and thus the difference is a constant.
(Second) Fundamental theorem of calculus
Let be a continuous function and an antiderivative of , then
Integrals of elementary functions
Constant Functions
Consider the constant function . We now determine the integral .
Solution by finding an antiderivative
From the previous chapter it is known that gives . This means that is an antiderivative for . So the following applies
Remark: Of course, a function would also be an antiderivative of , since the constant vanishes when differentiating. For the sake of simplicity can be used, since can be chosen as for definite integrals.
Solution by geometry
The area under the constant function forms a rectangle with height and length . Thus the area is and this corresponds to the solution of the integral. Illustrate this remark by a sketch.
Linear functions
Consider the linear function . We are looking for the integral .
Solution by finding an antiderivative
The antiderivative of a linear function is in any case a quadratic function, since the derivative of a quadratic function results in a linear function. Here, it is important to take the leading factor into account, as in . Thus the result is
Solution by geometry
The integral can be seen geometrically as subtracting the triangle with the edges , and from the triangle with the edges , and . Since the area of a triangle is given by , the area of the first triangle is and that of the second triangle is, analogously, . For the integral the result is . This is consistent with the integral calculated using the antiderivative. Illustrate this remark by a sketch.
Power functions
With constant and linear functions we have already seen that the exponent of a function decreases by one when it is differentiated. So it has to increase when integrating. The following applies: It follows that the antiderivative of must have the exponent , By multiplying the last equation by we get Finally the antiderivative is .
Examples
The formula is also valid if the exponent of the function is a real number not equal to .
Examples
Natural Exponential function
The natural exponential function is one of the easiest functions to differentiate and integrate. Since the derivative of is , it follows that
Example 1
Determine the value of the integral .
Example 2
Determine the value of the integral . Using the same considerations as above we get It is important here that we have to use the factor .
Natural Logarithm
The derivative of the natural logarithm function is for . It even applies to . Together these results give for the antiderivative of
An antiderivative can be specified for the natural logarithm:
Trigonometric function
The antiderivatives of and also follow logically by differentiating "backwards". We have since Furthermore we know since holds.
Example 1
What area is enclosed by the sine curve on the interval and the -axis? To determine the area we simply have to evaluate the integral That means Again, make a sketch for this example.
Example 2
How can the integral be expressed analytically?
To determine the integral we use the antiderivative of the cosine: . However, the inner derivative has to be taken into account in the given function and thus we get
Example 1
Solution. The antiderivative of is so we have that The antiderivative of is and thus
Example 2
Solution. The antiderivative might look something like , where we can find the factor through differentiation: hence if we get the correct antiderivative. Thus This integral can also be solved using integration by substitution; more on this method later.
Geometric applications
Area of a plane region
Suppose that and are piecewise continuous functions. The area of a region bounded by the graphs , and the vertical lines and is given by the integral
Especially if is a non-negative function on the interval and for all then the integral is the area of the region bounded by the graph , the -axis and the vertical lines and .
Arc length
The arc length of a planar curve between points and is given by the integral
Heuristic reasoning: On a small interval the arc length of the curve between and is approximately
Surface of revolution
The area of a surface generated by rotating the graph around the -axis on the interval is given by Heuristic reasoning: An area element of the surface is approximately
Solid of revolution
Suppose that the cross-sectional area of a solid is given by the function when . Then the volume of the solid is given by the integral If the graph is rotated around the -axis between the lines and the volume of the generated figure (the solid of revolution) is This follows from the fact that the cross-sectional area of the figure at is a circle with radius i.e. .
More generally: Let and suppose that the region bounded by and and the lines and is rotated around the -axis. The volume of this solid of revolution is
Improper integral
Definition: Improper integral
One limitation of improper integration is that the limit must be taken with respect to one endpoint at a time.
Example
Provided that both of the integrals on the right-hand side converge. If either of the two is divergent then so is the integral.
Definition
Let be a piecewise continuous function. Then provided that the limit exists and is finite. We say that the improper integral of converges over .
Likewise for we define provided that the limit exists and is finite.
Example
Solution. Notice that as . Thus the improper integral converges and
Definition
Let be a piecewise continuous function. Then if both of the two integrals on the right-hand side converge.
However, this doesn't apply in general. For example, let . Note that even though for all the improper integral does not converge.
Improper integrals of the 2nd kind are handled in a similar way using limits. As there are many different (but essentially rather similar) cases, we leave the matter to one example only.
Example
Find the value of the improper integral .
Solution. We get as . Thus the integral converges and its value is .
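As a numerical cross-check, the convergence of an improper integral of this type can be sketched with SciPy, here for the hypothetical integrand $1/x^2$ on $[1, \infty)$ (our example, not necessarily the one above):

import numpy as np
from scipy.integrate import quad

value, error_estimate = quad(lambda x: 1.0 / x**2, 1.0, np.inf)
print(value)   # 1.0, matching the limit of 1 - 1/R as R tends to infinity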
Comparison test
Example 2
Notice that and that the integral converges. Thus by the comparison test the integral also converges and its value is less than or equal to .
Example 3
Likewise and because converges so does and its value is less than or equal to .
Note. The choice of the dominating function depends on both the original function and the interval of integration.
Example 4
Determine whether the integral converges or diverges.
Solution. Notice that for all and therefore Now, because the integral diverges, by the comparison test so does the original integral.
Integration techniques
Logarithmic integration
For a quotient of differentiable functions, we know how to apply the quotient rule when differentiating. However, this is not so easy with integration. In this chapter we state rules only for a few special cases.
Logarithmic integration As we already know, the derivative of , i.e. the natural logarithm to the base , is equal to . According to the chain rule, the derivative of a differentiable function with positive function values is . This means that for a quotient of functions where the numerator is the derivative of the denominator we obtain the rule: \begin{equation} \int \frac{f'(x)}{f(x)}\, \mathrm{d} x= \ln \left(|f(x)|\right) +c,\,c\in\mathbb{R}.\end{equation} Using the absolute value of the function is important, since the logarithm is only defined on .
Examples
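For instance, since the derivative of $x^2+1$ is $2x$, the rule gives \begin{equation}\int \frac{2x}{x^2+1}\,\mathrm dx=\ln\bigl(x^2+1\bigr)+c,\end{equation} and since the derivative of $\cos x$ is $-\sin x$, \begin{equation}\int \tan x\,\mathrm dx=\int\frac{\sin x}{\cos x}\,\mathrm dx=-\ln\left|\cos x\right|+c,\qquad c\in\mathbb{R}.\end{equation}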
Integration of rational functions - partial fraction decomposition
Logarithmic integration works well in the special case of rational functions where the numerator is a multiple of the derivative of the denominator. However, other cases can sometimes be reduced to this one. The method used for this is called partial fraction decomposition, which represents a rational function as a sum of proper rational functions.
Example 1
The function cannot be integrated at first glance. However, the denominator can be written as , and by partial fraction decomposition the function can finally be written as . This expression can be integrated, as demonstrated now: \begin{eqnarray} \int \dfrac{1}{1-x^2} \,\mathrm dx &= & \int \dfrac{\frac{1}{2}}{1+x} + \dfrac{\frac{1}{2}}{1-x}\, \mathrm dx \\ & =& \frac{1}{2} \int \dfrac{1}{1+x}\, \mathrm dx - \frac{1}{2} \int \dfrac{-1}{1-x}\, \mathrm dx\\ & = &\frac{1}{2} \ln|1+x| +c_1 - \frac{1}{2} \ln|1-x| +c_2\\ &= &\frac{1}{2} \ln \left|\dfrac{1+x}{1-x}\right|+c,\,c\in\mathbb{R}. \end{eqnarray} This procedure is now described in more detail for some special cases.
Case 1: with . In this case, has the representation and can be transformed to Multiplying by yields
The coefficients and are now obtained by the method of equating coefficients.
Example 2
Determine the partial fraction decomposition of .
Start with the equation to get the parameters and . Multiplication by leads to Now we get the system of linear equations
\begin{eqnarray}A+B & = & 2 \\ 5A - 4 B &=& 3\end{eqnarray} with the solution $A=\frac{11}{9}$ and $B=\frac{7}{9}$. The representation with proper rational functions is An integral of the type is then no longer a mystery.
With the help of partial fraction decomposition, this integral can now be calculated in the following manner \begin{eqnarray}\int \frac{ax+b}{(x-\lambda_1)(x-\lambda_2)}\mathrm{d} x &=& \int\frac{A}{(x-\lambda_1)}+\frac{B}{(x-\lambda_2)}\mathrm{d} x \\ &=&A\int\frac{1}{(x-\lambda_1)}\mathrm{d} x +B\int\frac{1}{(x-\lambda_2)}\mathrm{d} x \\ & = & A\ln(|x-\lambda_1|) + B\ln(|x-\lambda_2|).\end{eqnarray}
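The decomposition and the resulting antiderivative can also be checked with a computer algebra system; the following sketch uses SymPy on the integrand of Example 1.

import sympy as sp

x = sp.symbols('x')
f = 1/(1 - x**2)

# Partial fraction decomposition, cf. Example 1: a sum of two simple fractions
print(sp.apart(f, x))

# Antiderivative; equivalent to (1/2)*ln|(1+x)/(1-x)| up to an additive constant
print(sp.integrate(f, x))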
Example 3
Determine the antiderivative for , i.e.
From the above example we already know:
Using the idea explained above, it immediately follows: So the result is
In this case, has the representation and the ansatz is used.
By multiplying the equation with we get Again equating the coefficients leads us to a system of linear equations in and
In this case, has the representation and the representation cannot be simplified.
Only the special case is now considered.
Integration by Parts
The derivative of a product of two continuously differentiable functions and is
This leads us to the following theorem:
Theorem: Integration by Parts
Let and be continuously differentiable functions on the interval . Then Likewise for the indefinite integral it holds that
It follows from the product rule that or, rearranging the terms, Integrating both sides of the equation with respect to and ignoring the constant of integration now yields
Example
Solution. Set and . Then and and the integration by parts gives
Notice that had we chosen and the other way around this would have led to an even more complicated integral.
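As a concrete instance of integration by parts (an example chosen here for illustration): taking $f(x)=x$ and $g'(x)=e^x$, so that $f'(x)=1$ and $g(x)=e^x$, gives \begin{equation}\int x e^{x}\,\mathrm dx = x e^{x}-\int e^{x}\,\mathrm dx = (x-1)e^{x}+c,\qquad c\in\mathbb{R}.\end{equation} Choosing the roles the other way around ($f(x)=e^x$, $g'(x)=x$) would lead to the integral $\int \tfrac{x^2}{2}e^x\,\mathrm dx$, which is more complicated than the one we started with.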
Integration by Substitution
Example 1
Find the value of the integral .
Solution. Making the substitution , when we have . Solving for the limits from the inverse formula , i.e. , we find that and . Hence
Here the latter integral was solved applying integration by parts in the previous example.
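As a further self-contained illustration of the substitution technique (with an integral chosen here for demonstration): to evaluate $\int_0^2 x e^{x^2}\,\mathrm dx$, substitute $u=x^2$, so that $\mathrm du=2x\,\mathrm dx$, $u(0)=0$ and $u(2)=4$. Then \begin{equation}\int_0^2 x e^{x^2}\,\mathrm dx=\frac12\int_0^4 e^{u}\,\mathrm du=\frac{e^{4}-1}{2}.\end{equation}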
Example 2
9. Differential equations
Introduction
A differential equation is an equation containing an unknown function, e.g. , and its derivatives . This kind of equation, where the unknown function depends on a single variable, is called an ordinary differential equation (ODE) or simply a differential equation. If the unknown function depends on several variables, the equation is called a partial differential equation; these are not covered in this course.
A typical application leading to a differential equation is radioactive decay. If is the number of radioactive nuclei present at time , then during a short time interval the change in this number is approximately , where is a positive constant depending on the radioactive substance. The approximation becomes better as , so that . It follows that the differential equation is a mathematical model for radioactive decay. In reality, the number of nuclei is an integer, so the function is not differentiable (or the derivative is mostly zero!). Therefore, the model describes the properties of some idealized smooth version of . This is a typical phenomenon in most models.
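Writing the decay constant as $\lambda>0$ (the symbol is chosen here; the constant is not named in the text above), the model reads $N'(t)=-\lambda N(t)$, which can be solved as a separable (or linear) first order equation using the methods below; its solutions are $N(t)=N(0)e^{-\lambda t}$, i.e. exponential decay from the initial amount $N(0)$.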
Order
The order of a differential equation is the highest order of the derivatives appearing in the equation.
For example, the order of the differential equation is 1. The order of the differential equation is 2.
Here the variable of the function is not visible; the equation is understood to determine implicitly.
Solutions of a differential equation
A differential equation of order n is of the form
A solution to an ODE is an n times differentiable function satisfying the above equation for all , where is an open interval on the real axis.
Typically the solution is not unique and there can be an infinite number of solutions. Consider the equation The equation has the solutions
Here , and are called particular solutions. The general solution is Particular solutions can be derived from the general solution by assigning the parameter to some value. Solutions that cannot be derived from the general solution are called special solutions.
Differential equations do not necessarily have any solutions at all. For example, the first order differential equation does not have any solutions. If a first order equation can be written in normal form , where is continuous, then a solution exists.
Initial condition
Constants in the general solution can be assigned specific values if the solution is required to satisfy additional properties. We may, for example, demand that the solution equals at by setting an initial condition With first order equations, only one condition is (usually) needed to make the solution unique. With second order equations, we correspondingly need two conditions. In this case, the initial conditions are of the form
In general, for an equation of order n, we need n extra conditions to make the solution unique. A differential equation together with a set of initial conditions is referred to as an initial value problem.
Example 1.
We saw above that the general solution to the differential equation is Therefore the solution to the initial value problem
Direction field
The differential equation can be interpreted geometrically: if the solution curve (i.e. the graph of a solution) goes through the point , then it holds that , i.e. we can find the slopes of the tangents of the curve even if we do not know the solution itself. A direction field or slope field is a vector field drawn through the points . The direction field provides a fairly accurate image of the behavior of the solution curves.
1st Order Ordinary Differential Equations
A persistent problem in the theory of differential equations is that there are only relatively few generally applicable methods for finding solutions. Even for a fairly simple differential equation a general solution formula does not usually exist, and especially for higher order differential equations it is rare to be able to find an analytic solution. For some equations it is possible, however, and here some of the most common cases are introduced.
Linear 1st order ODE
If a differential equation is of the form
then it is called a linear differential equation. The left side of the equation is a linear combination of the derivatives with coefficients . Thus a first order linear ODE is of the form
If for all , then the equation is called homogeneous. Otherwise the equation is nonhomogeneous.
Theorem 1.
Consider a normal form initial value problem
If the functions and are continuous in the interval containing the initial point , then the initial value problem has a unique solution.
The condition concerning the normality of the equation is crucial. For example, the equation may have either zero or an infinite number of solutions depending on the initial condition: substituting into the equation automatically forces the initial condition .
Solving a 1st order linear ODE
A first order linear ODE can be solved by using an integrating factor method. The idea of the method is to multiply both sides of the equation by the integrating factor , which allows the equation to be written in the form
Integrating both sides of the equation, we get
It is not advisable to try to remember the formula as it is, but rather keep in mind the idea of how the equation should be modified in order to proceed.
Example 1.
Let us solve the differential equation The integrating factor is so we multiply both sides by this expression:
Example 2.
Let us solve the initial value problem
First, we want to express the problem in normal form:
Now the integrating factor is Hence, we get
We have found the general solution. Because that is, the value of the function does not equal the given initial value, the problem does not have a solution. The main reason for this is that the initial condition is given at , where the normal form of the equation is not defined. Any other choice for will lead to a unique solution.
Example 3.
Let us solve the ODE given the initial conditions
From the form we see that the equation in question is a linear ODE. The integrating factor is
Multiplying by the integrating factor, we get
so the general solution to the ODE is . From the initial value it follows that , but the other condition leads to a contradiction . Therefore, the solution in the (a) part is , but a solution satisfying the initial condition of part b) does not exist: by substituting , the equation forces .
Separable equation
A first order differential equation is separable if it can be written in the form , where and are integrable functions in the domain of interest. Treating formally as a fraction, multiplying by and dividing by , we obtain . Integrating the left-hand side with respect to and the right-hand side with respect to , we get
This method gives the solution to the differential equation in implicit form, which we may further be able to solve explicitly for . The justification for this formal treatment comes from the change of variables formula for integrals.
Example 4.
Let us solve the differential equation by separating the variables. (We could also solve the equation by using the method of integrating factors.)
In the last step, we wrote for simplicity. The case is also allowed, since it leads to the trivial solution , see below.
Example 5.
Let us solve the initial value problem
Because the general solution is not required, we may take a little shortcut by applying integrals in the following way:
The trivial solutions of a separable ODE
The general solution obtained by applying the method for separable equations typically lacks information about solutions related to the zeros of the function . The reason for this is that in the separation method we need to assume that in order to be able to divide the expression by . We notice that for each zero of the function there exists a corresponding constant solution of the ODE , since . These solutions are called trivial solutions (in contrast to the general solution).
If the conditions of the following theorem hold, then all the solutions to a separable differential equation can be derived from either the general solution or the trivial solutions.
Theorem 2.
Let us consider the initial value problem .
- If is continuous (as a function of two variables), then there exists at least one solution in some interval containing the point .
- Also, if is continuously differentiable with respect to , then the solution satisfying the initial condition is unique.
- The uniqueness also holds, when in addition to (i) the function is continuously differentiable with respect to and .
The proof of the theorem is based on a technique known as Picard-Lindelöf iteration, which was invented by Émile Picard and further developed by the Finnish mathematician Ernst Lindelöf (1870-1946), and others.
Applying the previous theorem, we can formulate the following result for separable equations.
Theorem 3.
Let us consider a separable differential equation , where is continuous and is continuously differentiable.
The solution curves at each point of the domain of the equation are always unique. In particular, the curves cannot intersect and it is not possible for a single curve to split into two or several parts.
Therefore, the other solution curves of the ODE cannot intersect the curves corresponding to the trivial solutions. That is, for all the other solutions the condition automatically holds!
Example 6.
Let us solve the linear homogeneous differential equation by applying the method of separation.
The equation has the trivial solution . Since the other solutions never take the value 0, it holds that
Here, the expression has been replaced by a simpler constant .
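As one more self-contained illustration (with an equation chosen here for demonstration), consider $y'=y^{2}$. Separating the variables for $y\neq 0$ gives \begin{eqnarray} \int\frac{\mathrm dy}{y^{2}} &=& \int\mathrm dx\\ -\frac{1}{y} &=& x+C\\ y &=& -\frac{1}{x+C},\qquad C\in\mathbb{R}, \end{eqnarray} and in addition there is the trivial solution $y\equiv 0$, which cannot be obtained from the general solution for any value of $C$.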
Equations expressible as separable
Some differential equations can be made separable by using a suitable substitution.
i) ODEs of the form
Example 7.
Let us solve the differential equation The equation is not separable in this form, but we can make it separable by substituting , resulting in We get
Separating the variables and integrating both sides, we get
Substituting and simplifying yields
Here, it is not possible to derive an expression for y so we have to make do with just the implicit solution. The solutions can be visualized graphically:
As we can see, the solutions are spirals expanding in the positive direction that are suitably cut for demonstration purposes. This is clear from the solutions' polar coordinate representation which we obtain by using the substitution
Hence, the solution is
ii) ODEs of the form
Another type of differential equation that can be made separable are equations of the form
To rewrite the equation as separable, we use the substitution
Example 8.
Let us find the solution to the differential equation
Euler's method
In practice, it is usually not feasible to find analytical solutions to differential equations. In these cases, the only choice for us is to resort to numerical methods. A prominent example of this kind of technique is called Euler's method. The idea behind the method is the observation made earlier with direction fields: even if we do not know the solution itself, we are still able to determine the tangents of the solution curve. In other words, we are seeking solutions for the initial value problem
In Euler's method, we begin the solving process by choosing the step length and using the iteration formula
The iteration starts from the index by substituting the given initial value to the right side of the iteration formula. Since is the slope of the tangent of the solution at , on each step we move the distance expressed by the step length in the direction of the tangent. Because of this, an error occurs, which grows as the step length is increased.
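The iteration is straightforward to implement. Below is a minimal sketch in Python; the function name and the test problem $y'=y$, $y(0)=1$ are chosen here for illustration.

def euler(f, x0, y0, h, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0, using n Euler steps of length h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # move along the tangent: y_{k+1} = y_k + h*f(x_k, y_k)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Test problem y' = y, y(0) = 1, whose exact solution is y(x) = e^x
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])   # about 2.5937, compared with e = 2.71828...; the error shrinks as the step length decreases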
Example 9.
Use the gadget on the right to examine the solution to the initial value problem
obtained by using Euler's method and compare the result to the precise solution.
2nd and higher order ODEs
For higher order differential equations it is often impossible to find analytical solutions. In this section, we introduce some special cases for which analytical solutions can be found. Most of these cases are linear differential equations. Our focus is on second order differential equations, as they are more common in practical applications, and it is more likely that an analytical solution can be found for them than for third or higher order differential equations.
Solving a homogeneous ODE
For second order linear equations, there is no easy way to find a general solution. We begin by examining a homogeneous equation
where and are continuous functions on their domains. Then, it holds that
1) the equation has linearly independent solutions and , called fundamental solutions. Roughly speaking, linear independence means that the ratio is not constant, so that the solutions are essentially different from each other.
2) the general solution can be expressed by means of any linearly independent pair of solutions in the form , where and are constants.
3) if the initial values are fixed, then the solution is unique.
A general method for finding explicitly the fundamental solutions and does not exist. To find the solution, a typical approach is to try to make an educated guess about the solution's form and check the details by substituting this into the equation.
The above results can be generalized to higher order homogeneous equations as well, but then the number of required fundamental solutions and initial conditions increases with respect to the order of the equation.
Example 1.
The equation has solutions and These solutions are linearly independent, so the general solution is of the form
Equations with constant coefficients
As a relatively simple special case, let us consider the 2nd order equation
In order to solve the equation, we use the guess , where is an unknown constant. Substituting the guess into the equation yields
The last equation is called the characteristic equation of the ODE. Solving the characteristic equation allows us to find the solutions for the actual ODE. The roots of the characteristic equation can be divided into three cases:
1) The characteristic equation has two distinct real roots. Then, the ODE has the solutions and
2) The characteristic equation has a double root. Then, the ODE has the solutions and
3) The roots of the characteristic equation are of the form Then, the ODE has the solutions and
The second case can be justified by substitution into the original ODE, and the third case by using the Euler formula . With minor changes, these results can also be generalized to higher order differential equations.
Since the characteristic equation has exactly the same coefficients as the original ODE, it is not necessary to derive it again in concrete examples: just write it down by looking at the ODE!
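As a concrete illustration of the three cases (with equations chosen here for demonstration): for $y''-3y'+2y=0$ the characteristic equation $r^{2}-3r+2=0$ has the distinct real roots $r=1$ and $r=2$, so the general solution is $y=C_{1}e^{x}+C_{2}e^{2x}$; for $y''+2y'+y=0$ the characteristic equation $(r+1)^{2}=0$ has the double root $r=-1$, so $y=C_{1}e^{-x}+C_{2}xe^{-x}$; and for $y''+2y'+5y=0$ the roots are $r=-1\pm 2i$, so $y=e^{-x}\bigl(C_{1}\cos 2x+C_{2}\sin 2x\bigr)$.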
Example 2.
Let us solve the initial value problem
The characteristic equation is which has the roots and Thus, the general solution is The constants can be determined by using the initial conditions:
Hence, the general solution is
Example 3.
Let us have a look at how the above results hold in higher order equations by solving
Now, the characteristic equation is which has the roots and Thus, the fundamental solutions to the ODE are , , and . The general solution is
Example 4.
Let be a constant. The characteristic equation of the ODE is with roots . So and in Case 3). Since this ODE is a model for harmonic oscillation, we use time as the variable and obtain the general solution with constants. They will be uniquely determined if we know the initial location and the initial velocity . All solutions are periodic and their period is . In the animation to the right we have and you can choose and the initial displacement .
Euler's differential equation
Another relatively common type of 2nd order differential equation is Euler's differential equation
where and are constants. An equation of this form is solved by using the guess . Substituting the guess in the equation yields
Using the roots of this equation, we obtain the solutions for the ODE in the following way:
1) If the roots are distinct and real, then and .
2) If the equation has a double root, then and .
3) If the equation has roots of the form , then and .
Example 5.
Let us solve the equation Noticing that the equation is Euler's differential equation, we proceed by using the guess Substituting the guess into the equation, we get which yields Therefore, the general solution to the ODE is
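As a written-out instance of this technique (with an equation chosen here for illustration), consider $x^{2}y''+xy'-y=0$. Substituting $y=x^{r}$ gives \begin{equation} r(r-1)x^{r}+rx^{r}-x^{r}=\bigl(r^{2}-1\bigr)x^{r}=0,\end{equation} so $r=\pm 1$ and, by case 1), the general solution is $y=C_{1}x+\dfrac{C_{2}}{x}$.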
Nonhomogeneous linear differential equations
The general solution to a nonhomogeneous equation
is the general solution to the corresponding homogeneous equation plus a particular solution to the nonhomogeneous equation, i.e.
The particular solution is usually found by using a guess that is of the same form as with general coefficients. Substituting the guess into the ODE, we can solve these coefficients, but only if the guess is of the correct form.
In the table below, we list possible guesses for second order differential equations with constant coefficients. The form of the guess depends on what kinds of elementary functions consists of. If is a combination of several different elementary functions, then we need to include corresponding terms for all of these functions in our guess. The characteristic equation of the corresponding homogeneous differential equation is .
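For reference, the usual choices (stated here in their standard form) are: if the right-hand side is a polynomial of degree $n$, try a general polynomial of the same degree; if it is of the form $Ce^{ax}$, try $Ae^{ax}$; if it is of the form $C\cos(bx)$ or $C\sin(bx)$, try $A\cos(bx)+B\sin(bx)$. If such a guess already solves the homogeneous equation (i.e. $a$, or $\pm bi$, is a root of the characteristic equation), multiply the guess by $x$, and by $x^{2}$ in the case of a double root.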
Note. For roots of a second degree polynomial we have to keep in mind that
Example 6.
Let us find the general solution to the ODE , when
The solutions are of the form .
a) Substituting the guess we get , which solves for .
b) In this case a guess of the form is useless, as it is part of the general solution to the corresponding homogeneous equation and yields just zero when substituted into the left side of the ODE. Here, a correct guess is of the form . Substitution yields
Using these values for and , we can write the general solutions to the given differential equations.
Example 7.
Let us find the solution to the ODE with the initial conditions , .
Based on the previous example, the general solution is of the form . Differentiation yields . From the initial conditions, we get the following pair of equations:
which solves for and . Therefore, the solution to the initial value problem is .
Example 8.
A typical application of a second order nonhomogeneous ODE is an RLC circuit containing a resistor (with resistance ), an inductor (with inductance ), a capacitor (with capacitance ), and a time-dependent electromotive force . The electric current in the circuit satisfies the ODE Let us solve this ODE with artificially chosen numerical values in the form
The homogeneous part has characteristic equation of the form with solutions . This gives the solutions and for the homogeneous equation. For a particular solution we try . Substituting this into the nonhomogeneous ODE and collecting similar terms yields This equation will be satisfied for all (and only then) if
which solves for and . Therefore, the general solution is Note. The exponential terms go to zero very fast, and eventually the current oscillates in the form
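Solutions of constant-coefficient nonhomogeneous equations such as these can also be double-checked symbolically. The following sketch uses SymPy on an equation chosen here for illustration (not the RLC values above, which are not reproduced in this text).

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + 3*y' + 2*y = cos(t): characteristic roots -1 and -2, particular solution by undetermined coefficients
ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), sp.cos(t))
print(sp.dsolve(ode, y(t)))
# Expected: y(t) = C1*exp(-2*t) + C2*exp(-t) + 3*sin(t)/10 + cos(t)/10 (up to the ordering of terms)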