Mathematical methods for economic theory

Martin J. Osborne

2.4 Differentials and comparative statics

Introduction

We may use the tool of implicit differentiation to study the dependence of a variable x on a list p of parameters when the variable is defined by an equation like f(x, p) = 0. Many models in economic theory involve several variables that satisfy several equations simultaneously. In such cases, another (closely related) method is useful.

Suppose that we have two variables, x and y, and two parameters, p and q, and for any values of p and q the values of the variables satisfy the two equations

f(x, y, p, q) = 0
g(x, y, p, q) = 0.
These two equations implicitly define x and y as functions of p and q. As in the case of a single equation, two questions arise:
  • Do the equations have solutions for x and y for any given values of p and q?
  • How do the solutions change as p or q, or possibly both, change?

Existence of a solution

We have seen that even a single equation in a single variable may not have a solution, and that when a solution does exist, the Intermediate Value Theorem may help us show that it does. Generalizations of the Intermediate Value Theorem, which I do not discuss, can be helpful in showing that a collection of equations in many variables has a solution.

A useful rough guideline for a set of equations to have a unique solution is that the number of equations be equal to the number of variables. This condition definitely is neither necessary nor sufficient, however. For example, the single equation x² = −1 in a single variable has no solution, while the single equation x² + y² = 0 in two variables has a unique solution ((x, y) = (0, 0)). But there is some presumption that if the condition is satisfied and the equations are all “independent” of each other, the system is likely to have a unique solution, and if it is not satisfied there is little chance that it has a unique solution.

Differentials

Now consider the question of how a solution, if it exists, depends on the parameters. A useful tool to address this question involves the notion of a differential.

Let f be a differentiable function of a single variable. If x increases by a small amount from a to a + Δx, by how much does f(x) increase? A differentiable function of a single variable is approximated around a by its tangent at a. Thus if Δx is very small then the approximate increase in f(x) is

f'(a)Δx
(where f'(a) is of course the derivative of f at a).

[Figure: the graph of f near a and its tangent at a, which has slope f'(a).]

For any change dx in x we define the differential of f(x) as follows.

Definition
Let f be a differentiable function of a single variable. For any real number dx, the differential of f(x) is

f'(x)dx.
By the argument above, if dx is small then the differential f'(x)dx is approximately equal to the change in the value of f when its argument increases or decreases by dx from x.
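
As a quick numerical illustration (my own example, not from the text), the following Python snippet compares the differential f'(a)dx with the exact change in f for the hypothetical function f(x) = x³ at a = 2:

    # Compare the differential with the exact change for f(x) = x**3 (hypothetical example).
    def f(x):
        return x**3

    def fprime(x):
        return 3 * x**2

    a, dx = 2.0, 0.01
    exact_change = f(a + dx) - f(a)    # actual change in f
    differential = fprime(a) * dx      # the differential f'(a)dx
    print(exact_change, differential)  # 0.1206... versus 0.12: close for small dx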

If f is a function of two variables, it is approximated by its tangent plane: for (x, y) close to (a, b) the approximate increase in f(x, y) when x changes by Δx and y changes by Δy is

f'1(a, b)Δx + f'2(a, b)Δy.

For a function of many variables, the differential is defined as follows.

Definition
Let f be a differentiable function of n variables. For any real numbers dx1, ..., dxn, the differential of f(x1, ..., xn) is

f'1(x1, ..., xn)dx1 + ... + f'n(x1, ..., xn)dxn.
As in the case of a function of a single variable, if dxi is small for each i = 1, ..., n, then the differential f'1(x1, ..., xn)dx1 + ... + f'n(x1, ..., xn)dxn is approximately equal to the change in the value of f when each argument xi changes by dxi.
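
The same kind of numerical check works for several variables. The sketch below (again a hypothetical example function, f(x1, x2) = x1²·x2, chosen only for illustration) compares the differential with the exact change:

    # Compare the differential with the exact change for f(x1, x2) = x1**2 * x2 (hypothetical example).
    def f(x1, x2):
        return x1**2 * x2

    def f1(x1, x2):    # partial derivative with respect to x1
        return 2 * x1 * x2

    def f2(x1, x2):    # partial derivative with respect to x2
        return x1**2

    x1, x2, dx1, dx2 = 1.0, 3.0, 0.01, -0.02
    exact_change = f(x1 + dx1, x2 + dx2) - f(x1, x2)
    differential = f1(x1, x2) * dx1 + f2(x1, x2) * dx2
    print(exact_change, differential)  # 0.0399 versus 0.04: close for small changes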

To find a differential we may simply find the partial derivatives with respect to each variable in turn. Alternatively we can use a set of rules that are analogous to those for derivatives. Denoting the differential of the function f by d(f), we have:

d(af + bg)  =  a·df + b·dg (where a and b are constants)
d(f·g)  =  g·df + f·dg
d(f/g)  =  (g·df − f·dg)/g²
if z = g(f(x, y)) then dz = g'(f(x, y))df
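
These rules can be checked directly from the definition of the differential. The following sketch, which assumes the SymPy library and uses the arbitrary example functions f = xy and g = x + y² (my own choices, not from the text), confirms the product rule d(f·g) = g·df + f·dg:

    # Verify d(f*g) = g*df + f*dg for example functions of two variables (illustration only).
    import sympy as sp

    x, y, dx, dy = sp.symbols('x y dx dy')
    f = x * y
    g = x + y**2

    def differential(h):
        # total differential of h(x, y): h_x*dx + h_y*dy
        return sp.diff(h, x) * dx + sp.diff(h, y) * dy

    lhs = differential(f * g)
    rhs = g * differential(f) + f * differential(g)
    print(sp.simplify(lhs - rhs))  # 0, so the two sides agree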

Comparative statics

Start with the simplest case: a single equation in one variable and one parameter:
f(x, p) = 0,
where x is the variable and p the parameter. We have previously seen how to use implicit differentiation to find the rate of change of x with respect to p. We may reach the same conclusion using differentials. The differential of the left-hand side of the equation is
f'1(x, p)dx + f'2(x, p)dp.
When p changes, the value of f(x, p) must remain the same for the equation f(x, p) = 0 to remain satisfied, so for small changes dx in x and dp in p we must have, approximately,
f'1(x, p)dx + f'2(x, p)dp = 0.
Rearranging this equation we have
dx/dp = −f'2(x, p)/f'1(x, p).
The entity on the left-hand side is the quotient of the small quantities dx and dp, not a derivative. However, we can in fact show that the right-hand side is the derivative of x with respect to p, as we found previously.
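
To illustrate, here is a SymPy sketch for the hypothetical equation f(x, p) = x³ + px − 1 = 0 (my own example). Treating x as a function of p and differentiating the identity reproduces the expression −f'2(x, p)/f'1(x, p):

    # Check dx/dp = -f'2(x, p)/f'1(x, p) for the hypothetical f(x, p) = x**3 + p*x - 1.
    import sympy as sp

    x, p = sp.symbols('x p')
    f = x**3 + p*x - 1                             # f'1 = 3*x**2 + p, f'2 = x

    dxdp_formula = -sp.diff(f, p) / sp.diff(f, x)  # -x/(3*x**2 + p)

    # Treat x as a function X(p) and differentiate the identity f(X(p), p) = 0 with respect to p.
    X = sp.Function('X')
    identity = f.subs(x, X(p))
    dXdp = sp.solve(sp.diff(identity, p), sp.Derivative(X(p), p))[0]

    print(sp.simplify(dXdp - dxdp_formula.subs(x, X(p))))  # 0: the two expressions agree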

This technique may be extended to systems of equations. Suppose, for example, that the variables, x and y, satisfy the following two equations, where p and q are parameters, as in the opening section above:

f(x, y, p, q) = 0
g(x, y, p, q) = 0.
Assume that the functions f and g are such that the two equations define two solution functions
x*(p, q) and y*(p, q).
That is, f(x*(p, q), y*(p, q), p, q) = 0 and g(x*(p, q), y*(p, q), p, q) = 0 for all p and all q.

How do x* and y* depend on the parameters p and q? Assuming that the functions x* and y* are differentiable, we can answer this question by calculating the differentials of the two sides of each equation defining them. For the equations to remain satisfied when p and q change by the small amounts dp and dq, these differentials must be equal, and since each right-hand side is the constant 0, the differential of each left-hand side must be (approximately) zero. That is,

f'1 · dx + f'2 · dy + f'3 · dp + f'4 · dq  = 0
g'1 · dx + g'2 · dy + g'3 · dp + g'4 · dq  = 0.
(To make these equations easier to read, I have omitted the arguments of the partial derivatives.)

To find the changes dx and dy in x and y necessary for these equations to be satisfied we need to solve the equations for dx and dy as functions of dp and dq, the changes in the parameters. (See the page on matrices and solutions of systems of simultaneous equations if you have forgotten how.) We obtain

dx = [−g'2 · (f'3 · dp + f'4 · dq) + f'2 · (g'3 · dp + g'4 · dq)]/(f'1 · g'2 − f'2 · g'1)
and
dy = [g'1 · (f'3 · dp + f'4 · dq) − f'1 · (g'3 · dp + g'4 · dq)]/(f'1 · g'2 − f'2 · g'1).
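
These expressions can be verified by solving the two linear equations in dx and dy symbolically. In the sketch below (my own check, assuming SymPy), the symbols f1, ..., f4 and g1, ..., g4 stand for the partial derivatives f'1, ..., f'4 and g'1, ..., g'4:

    # Solve the pair of linear equations for dx and dy and compare with the formulas above.
    import sympy as sp

    dx, dy, dp, dq = sp.symbols('dx dy dp dq')
    f1, f2, f3, f4, g1, g2, g3, g4 = sp.symbols('f1 f2 f3 f4 g1 g2 g3 g4')  # stand for the partials

    eqs = [f1*dx + f2*dy + f3*dp + f4*dq,
           g1*dx + g2*dy + g3*dp + g4*dq]
    sol = sp.solve(eqs, [dx, dy])

    D = f1*g2 - f2*g1
    print(sp.simplify(sol[dx] - (-g2*(f3*dp + f4*dq) + f2*(g3*dp + g4*dq))/D))  # 0
    print(sp.simplify(sol[dy] - ( g1*(f3*dp + f4*dq) - f1*(g3*dp + g4*dq))/D))  # 0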

Now, to determine the impact on x and y of a change in p, holding q constant, we set dq = 0 to get

dx = [(−g'2 · f'3 + f'2 · g'3) · dp]/(f'1 · g'2 − f'2 · g'1)
and
dy = [(g'1 · f'3 − f'1 · g'3) · dp]/(f'1 · g'2 − f'2 · g'1).
We can alternatively write the first equation, for example, as
∂x/∂p = (−g'2 · f'3 + f'2 · g'3)/(f'1 · g'2 − f'2 · g'1).
If we make some assumption about the signs of the partial derivatives of f and g, this expression may allow us to determine the sign of ∂x/∂p—that is, the direction in which the equilibrium value of x changes when the parameter p changes.

This technique allows us also to study the change in a variable when more than one parameter changes, as illustrated in the following economic example.

Example 2.4.1
Consider the macroeconomic model

Y  =  C + I + G
C  =  f(Y − T)
I  =  h(r)
r  =  m(M)
where the variables are Y (national income), C (consumption), I (investment) and r (the rate of interest), and the parameters are M (the money supply), T (the tax burden), and G (government spending). We want to find how the variables change with the parameters.

Take differentials:

dY  = dC + dI + dG
dC  = f'(Y − T)(dY − dT)
dI  = h'(r)dr
dr  = m'(M)dM
We need to solve for dY, dC, dI, and dr in terms of dM, dT, and dG. The system is too big to use Cramer's rule easily, but the simple structure allows us to proceed step-by-step.

From the last two equations we have

dI = h'(r)m'(M)dM.
Now substitute for dI in the first equation and rearrange the first two equations to get
dY − dC  =  h'(r)m'(M)dM + dG
f'(Y − T)dY − dC  =  f'(Y − T)dT
You can solve this system for dY and dC. For example,
dY = [h'(r)m'(M)/(1 − f'(Y − T))]dM − [f'(Y − T)/(1 − f'(Y − T))]dT + [1/(1 − f'(Y − T))]dG.
Thus, for example, if T changes while M and G remain constant (so that dM = dG = 0), then the rate of change of Y is given by
∂Y/∂T = −f'(Y − T)/(1 − f'(Y − T)).
That is, if 0 < f'(z) < 1 for all z then Y decreases as T increases. Further, we can deduce that if T and G increase by equal (small) amounts (dT = dG) while M remains constant (so that dM = 0), then the change in Y is
dY = [(1 − f'(Y − T))/(1 − f'(Y − T))]dT = dT.
That is, an equal (small) increase in T and G leads to an increase in Y of the same size.
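
As a check on this example, the following SymPy sketch (my own, not part of the original text) solves the four equations in the differentials for dY, dC, dI, and dr, with the symbols fp, hp, and mp standing for f'(Y − T), h'(r), and m'(M). It confirms the expression for dY and the result that equal small increases in T and G (with M constant) raise Y by dT:

    # Solve the linearized macroeconomic model for dY and check the conclusions above.
    import sympy as sp

    dY, dC, dI, dr = sp.symbols('dY dC dI dr')
    dM, dT, dG = sp.symbols('dM dT dG')
    fp, hp, mp = sp.symbols('fp hp mp')  # stand for f'(Y - T), h'(r), m'(M)

    eqs = [sp.Eq(dY, dC + dI + dG),
           sp.Eq(dC, fp*(dY - dT)),
           sp.Eq(dI, hp*dr),
           sp.Eq(dr, mp*dM)]
    sol = sp.solve(eqs, [dY, dC, dI, dr])

    expected_dY = (hp*mp/(1 - fp))*dM - (fp/(1 - fp))*dT + (1/(1 - fp))*dG
    print(sp.simplify(sol[dY] - expected_dY))          # 0

    # Equal small increases in T and G (dT = dG) with dM = 0 raise Y by exactly dT.
    print(sp.simplify(sol[dY].subs({dM: 0, dG: dT})))  # dT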