Mathematical methods for economic theory

Martin J. Osborne

8.2 First-order differential equations: existence of a solution

Definition
A first-order ordinary differential equation is an ordinary differential equation that may be written in the form
x'(t) = F(t, x(t))
for some function F of two variables.
As I discussed on the previous page, a differential equation generally has many solutions. For a first-order equation, adding the requirement that x take a specific value for some specific value of t generally pins down a single solution. Roughly, the additional requirement determines the level of x whereas the differential equation determines the rate of change of x.

A first-order differential equation plus a condition of this type—that is, a condition of the form x(t0) = x0, called an initial condition—is called a first-order initial value problem. Despite the use of the word “initial”, the value of t0 in such a problem cannot necessarily be interpreted as a starting point. (Indeed, as I remarked on the previous page, the variable t cannot necessarily be interpreted as time.)

Definition
A first-order initial value problem consists of a first-order ordinary differential equation
x'(t) = F(t, x(t))
and an initial condition
x(t0) = x0,
where t0 and x0 are numbers.

Existence of a solution

Before trying to find a solution of a first-order initial value problem, it is useful to know whether a solution exists.

A diagram known as a direction field or integral field helps us think about the existence of a solution. We plot t on the horizontal axis and x on the vertical axis, and for each of a set of pairs (t, x) we draw a short line segment through (t, x) with slope F(t, x), which from the equation is equal to x'(t). Such a diagram shows us the rate of change of x for each value of t and x(t).

Example
The direction field of the equation
x'(t) = tx(t)
is shown in the figure below. For example, at every point (t, 0) and every point (0, x) we have x'(t) = 0, so the slope of the line segment through every such point is 0. Similarly, at every point (1, x) we have x'(t) = x, so the slope of the line segment through (1, x) is x for each value of x. (The grid size in the figure is 1/2.)

[Figure: direction field of x'(t) = tx(t), with t on the horizontal axis and x on the vertical axis]
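The construction of the direction field in this example can be sketched numerically. The following is a minimal illustration (mine, not from the text): it tabulates the slope F(t, x) = tx at the grid points used in the figure.

```python
# Illustrative sketch: slopes of the direction field for x'(t) = t*x(t).
# At each grid point (t, x), the short line segment has slope F(t, x) = t*x.

def F(t, x):
    return t * x  # right-hand side of the differential equation

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]  # grid size 1/2, as in the figure
for x in grid:
    print(x, [F(t, x) for t in grid])

# Along the axes the slope is zero: F(t, 0) = 0 and F(0, x) = 0.
# Along the vertical line t = 1 the slope equals x: F(1, x) = x.
```

The printed rows confirm the observations in the example: every segment on either axis is horizontal, and on the line t = 1 the slope grows with x.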

If the first-order initial value problem
x'(t)  =  F(t, x(t))
x(t0)  =  x0
has a solution, then the direction field for the differential equation helps us to see what it looks like. The slope of the line segment at (t0, x0) gives the direction in which x is headed at t0. For instance, in the previous example, if the initial condition is x(−1) = 1, then for (t, x) = (−1, 1), the rate of change of x is −1. If we sketch a solution of the initial value problem starting at t = −1, we thus start by drawing a path that decreases, with slope −1. But then as we move along that path, the slopes of the line segments in the direction field change, and we need to adjust the slope of our solution to conform. When our solution crosses the vertical axis, for example, the rate of change of x is zero, so our solution path should be horizontal. The path we create in this way will look something like the blue line in the following figure (which is the solution of the initial value problem for (t0, x0) = (−1, 1)).

[Figure: direction field of x'(t) = tx(t), with the solution through (−1, 1) shown in blue]
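The procedure just described, following the local slopes step by step, is essentially Euler's method. Here is a rough numerical sketch (my illustration, not from the text) that traces the solution of x'(t) = tx(t) with x(−1) = 1; the exact solution of this problem is x(t) = e^((t^2 − 1)/2).

```python
import math

# Trace the solution path by repeatedly following the direction field:
# at each point, step a short distance with slope F(t, x) = t*x.

def euler(F, t0, x0, t_end, n):
    """Euler's method: n small steps from (t0, x0) to t_end."""
    h = (t_end - t0) / n
    t, x = t0, x0
    path = [(t, x)]
    for _ in range(n):
        x += h * F(t, x)   # follow the local line segment of the field
        t += h
        path.append((t, x))
    return path

path = euler(lambda t, x: t * x, -1.0, 1.0, 0.0, 10000)
t_end, x_end = path[-1]
print(x_end)                          # approximately e^(-1/2) ≈ 0.6065
print(math.exp((t_end**2 - 1) / 2))   # exact value at t = 0 for comparison
```

With a small step size the traced path closely matches the exact solution, which is what the figure's blue curve depicts.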

This construction suggests that any first-order initial value problem in which the slopes of the line segments in the direction field change continuously as (t, x) changes—that is, in which F is continuous—has a solution. If the partial derivative of F with respect to its second argument is continuous, then in fact the initial value problem has a unique solution.

Proposition
If F is a function of two variables that is continuous at (t0, x0) then there exists a number a > 0 and a continuously differentiable function x of a single variable defined on the interval (t0 − a, t0 + a) that solves the first-order initial value problem
x'(t)  =  F(t, x(t))
x(t0)  =  x0
for all t ∈ (t0 − a, t0 + a). If in addition the partial derivative of F with respect to x is continuous on an open rectangle containing (t0, x0) then there exists a number a > 0 and a unique function x defined on the interval (t0 − a, t0 + a) that solves the initial value problem for all t ∈ (t0 − a, t0 + a).
Source
For a proof of existence (the first sentence of the result) see Coddington and Levinson (1955), p. 6. For a proof of uniqueness (the second sentence of the result), see Boyce and DiPrima (1969), pp. 71–78.
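The idea behind the existence proof can be sketched numerically as successive approximation (Picard iteration): starting from the constant guess x(t) = x0, repeatedly apply x_{n+1}(t) = x0 + ∫ from t0 to t of F(s, x_n(s)) ds. The following is a rough illustration of mine, not from the text, for the problem x'(t) = x(t), x(0) = 1, whose solution is e^t.

```python
import math

# One Picard iteration: replace the current guess x_n by
# x_{n+1}(t) = x0 + integral from 0 to t of F(s, x_n(s)) ds,
# with the integral approximated by the trapezoid rule on a grid.

def picard_step(F, x0, ts, xs):
    new = [x0]
    acc = x0
    for i in range(1, len(ts)):
        h = ts[i] - ts[i - 1]
        acc += 0.5 * h * (F(ts[i - 1], xs[i - 1]) + F(ts[i], xs[i]))
        new.append(acc)
    return new

n = 1000
ts = [i / n for i in range(n + 1)]   # grid on [0, 1]
xs = [1.0] * len(ts)                 # initial guess: the constant x0 = 1
for _ in range(25):                  # iterate toward the fixed point
    xs = picard_step(lambda t, x: x, 1.0, ts, xs)

print(xs[-1])   # the iterates approach e ≈ 2.71828 at t = 1
```

Continuity of F is what makes each integral well defined; continuity of the partial derivative with respect to x makes the iteration a contraction near (t0, x0), which is the source of uniqueness.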
The condition guaranteeing a unique solution (that the partial derivative of F with respect to x be continuous) is relatively mild, and is satisfied in almost all the examples we study. After looking at some direction fields, you might wonder how even a first-order initial value problem that does not satisfy the condition could have more than one solution. Here is an example.
Example
Consider the first-order initial value problem
x'(t)  =  (x(t))^(1/2)
x(0)  =  0.
This problem does not satisfy the condition in the proposition for a unique solution, because the partial derivative of F with respect to x, namely 1/(2(x)^(1/2)), is not continuous (indeed, is not defined) at x = 0.

Two solutions of the problem are x(t) = 0 for all t and the function equal to 0 for t ≤ 0 and to (t/2)^2 for t ≥ 0.
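A quick numerical check (my illustration, not from the text) confirms that both functions satisfy x'(t) = (x(t))^(1/2) for t ≥ 0, along with the initial condition x(0) = 0.

```python
import math

# Residual of the equation x'(t) - (x(t))^(1/2); zero wherever it holds.
def residual(x, dx, t):
    return dx(t) - math.sqrt(x(t))

# Candidate solution 1: the zero function.
x_zero, dx_zero = lambda t: 0.0, lambda t: 0.0
# Candidate solution 2 (for t >= 0): x(t) = (t/2)^2, so x'(t) = t/2.
x_parab, dx_parab = lambda t: (t / 2) ** 2, lambda t: t / 2

for t in [0.0, 0.5, 1.0, 2.0]:
    print(residual(x_zero, dx_zero, t), residual(x_parab, dx_parab, t))
# both residuals vanish at every t >= 0, and both functions satisfy x(0) = 0
```

So the initial value problem genuinely has more than one solution, exactly as the failure of the uniqueness condition permits.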

Stability of solutions

Definition
If for some initial condition a first-order initial value problem has a solution that is a constant function (independent of t), the value of the constant is an equilibrium or stationary state of the associated differential equation.
Example
Consider the first-order initial value problem
x'(t) + x(t) = 2 with x(t0) = x0.
The solution of this problem, as we shall see later, is
x(t) = (x0 − 2)e^(t0 − t) + 2.
(You can verify that this function is a solution of the problem by calculating x'(t) and substituting it into the differential equation and by checking that x(t0) = x0.)

Thus for x0 = 2, the solution of the problem is x(t) = 2 for all t, so that 2 is an equilibrium of the differential equation. For no other value of x0 is the solution a constant function, so 2 is the only equilibrium.
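The verification the text invites can be carried out numerically; this small sketch (mine, not from the text) checks that x(t) = (x0 − 2)e^(t0 − t) + 2 satisfies the equation and the initial condition, and that x0 = 2 yields the constant solution.

```python
import math

# The claimed solution of x'(t) + x(t) = 2 with x(t0) = x0.
def x(t, t0, x0):
    return (x0 - 2) * math.exp(t0 - t) + 2

# Its derivative, computed by hand from the formula above.
def xprime(t, t0, x0):
    return -(x0 - 2) * math.exp(t0 - t)

t0, x0 = 1.0, 5.0   # an arbitrary initial condition for illustration
for t in [0.0, 1.0, 2.0, 3.0]:
    print(xprime(t, t0, x0) + x(t, t0, x0))   # equals 2 (up to rounding)

print(x(t0, t0, x0))       # the initial condition holds: equals x0 = 5
print(x(10.0, 1.0, 2.0))   # with x0 = 2 the solution stays at 2
```

The last line illustrates the equilibrium: when x0 = 2 the exponential term vanishes and the solution is the constant function 2.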

If the solution of a first-order initial value problem converges to an equilibrium of the associated differential equation for all initial conditions, then we say that the equilibrium is “stable”. Here is a more precise definition that distinguishes between two types of stability.
Definition
If, for all initial conditions, the solution of a first-order initial value problem converges to an equilibrium of the associated differential equation as t increases without bound, then the equilibrium is globally stable. If, for all values of x0 sufficiently close to the equilibrium, the solution converges to the equilibrium as t increases without bound, then the equilibrium is locally stable. An equilibrium that is not locally stable is unstable.
Example
The solution of the initial value problem in the previous example is
x(t) = (x0 − 2)e^(t0 − t) + 2.
For every value of (t0, x0), (x0 − 2)e^(t0 − t) converges to zero as t increases without bound, so the equilibrium x = 2 is globally stable.
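Global stability can be seen concretely: whatever the starting value x0, the gap between the solution and the equilibrium shrinks like e^(−t). This brief sketch (my illustration, not from the text) tabulates that gap for several initial values.

```python
import math

# Distance between the solution x(t) = (x0 - 2)e^(t0 - t) + 2
# and the equilibrium 2, for initial condition (t0, x0).
def gap(t, t0, x0):
    return abs((x0 - 2) * math.exp(t0 - t))

for x0 in [-10.0, 0.0, 7.0, 100.0]:
    # the gap at t = 0, t = 10 and t = 50, starting from t0 = 0
    print(x0, gap(0.0, 0.0, x0), gap(10.0, 0.0, x0), gap(50.0, 0.0, x0))
# each row shrinks toward 0 as t grows, for every starting value x0
```

No matter how far from 2 the solution starts, the exponential factor drives it to the equilibrium, which is precisely what "globally stable" asserts.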