6.1.1 Optimization with an equality constraint: necessary conditions for an optimum for a function of two variables
Motivation
Consider the two-variable problem
max_{x,y} f(x, y) subject to g(x, y) = c.
Assuming that the functions f and g are differentiable, we see from the figure that at a solution (x*, y*) of the problem, the constraint curve is tangent to a level curve of f, so that (using the equation for a tangent),
−f'_{1}(x*, y*)/f'_{2}(x*, y*) = −g'_{1}(x*, y*)/g'_{2}(x*, y*),
or, equivalently,
f'_{1}(x*, y*)/g'_{1}(x*, y*) = f'_{2}(x*, y*)/g'_{2}(x*, y*).
Now introduce a new variable, λ, and set it equal to the common value of the quotients: λ = f'_{1}(x*, y*)/g'_{1}(x*, y*) = f'_{2}(x*, y*)/g'_{2}(x*, y*). You might think that introducing a new variable merely complicates the problem, but in fact it is a clever step that allows the condition for a maximum to be expressed in an appealing way. In addition, the variable turns out to have a very useful interpretation.
Given the definition of λ, the condition for (x*, y*) to solve the problem may be written as the two equations
f'_{1}(x*, y*) − λg'_{1}(x*, y*)  = 0 
f'_{2}(x*, y*) − λg'_{2}(x*, y*)  = 0. 
In addition, (x*, y*) satisfies the constraint g(x*, y*) = c. Thus we have three equations in the three unknowns x*, y*, and λ:
f'_{1}(x*, y*) − λg'_{1}(x*, y*)  = 0 
f'_{2}(x*, y*) − λg'_{2}(x*, y*)  = 0 
c − g(x*, y*)  = 0. 
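As a concrete check, the three-equation system can be verified numerically at a known solution. The instance below is an illustration of my own choosing, not from the text: maximize f(x, y) = x + y subject to x² + y² = 2, whose solution is (x*, y*) = (1, 1), with λ = 1/2 the common value of the quotients.

```python
# Hypothetical illustration (not from the text): maximize f(x, y) = x + y
# subject to g(x, y) = x**2 + y**2 = 2. The solution is (x*, y*) = (1, 1),
# and the common value of the quotients gives lambda = 1/2.
def f1(x, y): return 1.0           # partial derivative of f with respect to x
def f2(x, y): return 1.0           # partial derivative of f with respect to y
def g1(x, y): return 2.0 * x       # partial derivative of g with respect to x
def g2(x, y): return 2.0 * y       # partial derivative of g with respect to y

xs, ys, lam = 1.0, 1.0, 0.5
residuals = (
    f1(xs, ys) - lam * g1(xs, ys),  # first equation
    f2(xs, ys) - lam * g2(xs, ys),  # second equation
    2.0 - (xs**2 + ys**2),          # constraint: c - g(x*, y*)
)
print(residuals)  # each residual should be 0.0
```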
In summary, this argument suggests that if (x*, y*) solves the problem, then there is a number λ such that (x*, y*, λ) satisfies these three equations.
This method was developed by Joseph-Louis Lagrange (1736–1813).
Necessary conditions for an optimum
Precise conditions for an optimum are given in the following result which, unlike the preceding informal argument, requires only that either g'_{1}(x*, y*) ≠ 0 or g'_{2}(x*, y*) ≠ 0, not both of these conditions. (Recall that a continuously differentiable function is one whose partial derivatives all exist and are continuous.)
Proposition 6.1.1.1

Let f and g be functions of two variables defined on a set S that are continuously differentiable on the interior of S, let c be a number, and suppose that (x*, y*) is an interior point of S that
solves the problem
max_{x,y} f(x, y) subject to g(x, y) = c,
or the problem
min_{x,y} f(x, y) subject to g(x, y) = c,
or is a local maximizer or minimizer of f(x, y) subject to g(x, y) = c. Suppose also that either g'_{1}(x*, y*) ≠ 0 or g'_{2}(x*, y*) ≠ 0.
Then there is a unique number λ such that (x*, y*) is a stationary point of the Lagrangean
L(x, y) = f(x, y) − λ(g(x, y) − c).
That is, (x*, y*) satisfies the first-order conditions
L'_{1}(x*, y*) = f'_{1}(x*, y*) − λg'_{1}(x*, y*) = 0 
L'_{2}(x*, y*) = f'_{2}(x*, y*) − λg'_{2}(x*, y*) = 0.
 Source
 For proofs, see Simon and Blume (1994), pp. 478–480, and Sydsæter (1981), Theorem 5.20 (p. 275).
 Procedure for solving a two-variable maximization problem with an equality constraint

Let f and g be functions of two variables defined on a set S that are continuously differentiable on the interior of S, and let c be a number. If the problem max_{x,y}f(x, y) subject to
g(x, y) = c has a solution, it may be found as follows.
1. Find all the values of (x, y, λ) in which (a) (x, y) is an interior point of S and (b) (x, y, λ) satisfies the first-order conditions and the constraint (that is, the points (x, y, λ) for which f'_{1}(x, y) − λg'_{1}(x, y) = 0, f'_{2}(x, y) − λg'_{2}(x, y) = 0, and g(x, y) = c).
2. Find all the points (x, y) that satisfy g'_{1}(x, y) = 0, g'_{2}(x, y) = 0, and g(x, y) = c. (For most problems, there are no such values of (x, y). In particular, if g is linear there are no such values.)
3. If the set S has any boundary points, find all the points that solve the problem max_{x,y} f(x, y) subject to the two conditions g(x, y) = c and (x, y) is a boundary point of S.
4. Among all the points (x, y) found in the previous steps, those at which f(x, y) is largest are the solutions of the problem.
 Example 6.1.1.1

Consider the problem
max_{x,y} xy subject to x + y = 6,
where the objective and constraint functions are defined on the set of all 2-vectors, which has no boundary.
The constraint set is not bounded, so the Extreme Value Theorem does not imply that this problem has a solution.
The Lagrangean is
L(x, y) = xy − λ(x + y − 6),
so the first-order conditions are
L'_{1}(x, y) = y − λ = 0 
L'_{2}(x, y) = x − λ = 0.
These equations and the constraint x + y = 6 have a unique solution, (x, y, λ) = (3, 3, 3). We have g'_{1}(x, y) = 1 ≠ 0 and g'_{2}(x, y) = 1 ≠ 0 for all (x, y), so we conclude that if the problem has a solution, it is (x, y) = (3, 3).
(We can additionally argue that if the problem has a solution (x, y) then, given that x + y = 6, we have x ≥ 0 and y ≥ 0: the value of the objective function at (3, 3) is positive, whereas it is negative at any point (x, y) with x + y = 6 and either x < 0 or y < 0. Thus any solution of the problem with the additional constraints x ≥ 0 and y ≥ 0 is a solution of the original problem. Now, the problem with the additional constraints has a solution because its constraint set is closed and bounded. Thus the original problem has a solution, and hence this solution is (3, 3).)
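The first-order conditions of this example can be solved mechanically. A minimal sketch using sympy (the symbols and equations mirror the text; nothing beyond the example itself is assumed):

```python
import sympy as sp

# Example 6.1.1.1: solve the first-order conditions y - lam = 0 and
# x - lam = 0 together with the constraint x + y = 6.
x, y, lam = sp.symbols('x y lam', real=True)
solutions = sp.solve([y - lam, x - lam, x + y - 6], [x, y, lam], dict=True)
print(solutions)  # expect the single candidate x = y = lam = 3
```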
 Example 6.1.1.2

Consider the problem
max_{x,y} x^{2}y subject to 2x^{2} + y^{2} = 3,
where the objective and constraint functions are defined on the set of all 2-vectors, which has no boundary.
The constraint set is compact and the objective function is continuous, so the Extreme Value Theorem implies that the problem has a solution.
The Lagrangean is
L(x, y) = x^{2}y − λ(2x^{2} + y^{2} − 3),
so the first-order conditions are
L'_{1}(x, y) = 2xy − 4λx = 2x(y − 2λ) = 0 
L'_{2}(x, y) = x^{2} − 2λy = 0,
and the constraint is 2x^{2} + y^{2} = 3. To find the solutions of these three equations, first note that from the first equation we have either x = 0 or y = 2λ. We can check each possibility in turn.
 x = 0: we have y = 3^{1/2} and λ = 0, or y = −3^{1/2} and λ = 0.
 y = 2λ: we have x^{2} = y^{2} from the second equation, so either x = 1 or x = −1 from the third equation.
 x = 1: either y = 1 and λ = 1/2, or y = −1 and λ = −1/2.
 x = −1: either y = 1 and λ = 1/2, or y = −1 and λ = −1/2.
Thus the first-order conditions and the constraint have six solutions:
 (x, y, λ) = (0, 3^{1/2}, 0), with f(x, y) = 0.
 (x, y, λ) = (0, −3^{1/2},0), with f(x, y) = 0.
 (x, y, λ) = (1, 1, 1/2), with f(x, y) = 1.
 (x, y, λ) = (1, −1, −1/2), with f(x, y) = −1.
 (x, y, λ) = (−1, 1, 1/2), with f(x, y) = 1.
 (x, y, λ) = (−1, −1, −1/2), with f(x, y) = −1.
We conclude that the problem has two solutions, (x, y) = (1, 1) and (x, y) = (−1, 1).
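The case analysis above can be cross-checked by solving the system directly. A sketch using sympy, assuming only the equations stated in the example:

```python
import sympy as sp

# Example 6.1.1.2: solve the first-order conditions together with the
# constraint 2x^2 + y^2 = 3, then evaluate the objective x^2 * y at each
# candidate and keep the maximizers.
x, y, lam = sp.symbols('x y lam', real=True)
eqs = [2*x*y - 4*lam*x, x**2 - 2*lam*y, 2*x**2 + y**2 - 3]
candidates = sp.solve(eqs, [x, y, lam], dict=True)

values = [(s[x]**2 * s[y], s[x], s[y]) for s in candidates]
best = max(v[0] for v in values)
maximizers = sorted((v[1], v[2]) for v in values if v[0] == best)
print(best, maximizers)  # expect value 1 at (-1, 1) and (1, 1)
```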
 Example 6.1.1.3

Consider the problem
max_{x,y} x^{a}y^{b} subject to px + y = m,
where a > 0, b > 0, p > 0, and m > 0, and the objective and constraint functions are defined on the set of all points (x, y) with x ≥ 0 and y ≥ 0 (and are continuously differentiable on the interior of this set).
The Lagrangean is
L(x, y) = x^{a}y^{b} − λ(px + y − m),
so the first-order conditions are
ax^{a−1}y^{b} − λp = 0 
bx^{a}y^{b−1} − λ = 0,
and the constraint is px + y = m. From the first two conditions we have ay = pbx. Substituting into the constraint we obtain
x = am/((a + b)p) and y = bm/(a + b),
so that (x, y) is an interior point of the domain of the objective function and
λ = a^{a}b^{b}m^{a+b−1}/((a + b)^{a+b−1}p^{a}).
The value of the objective function at this point is [am/((a + b)p)]^{a}[bm/(a + b)]^{b}, which is positive.
We have g'_{1}(x, y) = p and g'_{2}(x, y) = 1, so there are no values of (x, y) for which g'_{1}(x, y) = g'_{2}(x, y) = 0.
The boundary of the set on which the objective function is defined is the set of points (x, y) with x = 0 or y = 0. At every such point the value of the objective function is 0.
We conclude that if the problem has a solution, it is (x, y) = (am/((a + b)p), bm/(a + b)).
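The candidate formulas can be spot-checked numerically. The parameter values below (a = 1, b = 1, p = 2, m = 8) are my own illustration, not from the text:

```python
# Numeric spot-check of Example 6.1.1.3 for illustrative parameter values
# of my own choosing: a = 1, b = 1, p = 2, m = 8.
a, b, p, m = 1, 1, 2, 8

x = a * m / ((a + b) * p)          # candidate x = am/((a+b)p)
y = b * m / (a + b)                # candidate y = bm/(a+b)
lam = (a**a * b**b / (a + b)**(a + b - 1)) * m**(a + b - 1) / p**a

# The first-order conditions and the constraint should hold at this point.
foc1 = a * x**(a - 1) * y**b - lam * p
foc2 = b * x**a * y**(b - 1) - lam
print(x, y, lam, foc1, foc2)  # expect 2.0 4.0 2.0 0.0 0.0
```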
 Example 6.1.1.4

Consider the problem
max_{x,y} x subject to x^{2} = 0,
where the objective and constraint functions are defined on the set of all points (x, y).
The Lagrangean is
L(x, y) = x − λx^{2},
so the first-order conditions are
1 − 2λx = 0 
0 = 0.
The constraint x^{2} = 0 implies x = 0, at which the first condition becomes 1 = 0. Thus the first-order conditions and the constraint have no solution, and step 1 of the procedure yields no candidates. The solutions are instead found in step 2: g'_{1}(x, y) = 2x and g'_{2}(x, y) = 0, so every point (0, y) satisfies g'_{1}(x, y) = 0, g'_{2}(x, y) = 0, and x^{2} = 0. Every such point solves the problem, since the objective function takes the value 0 at every point of the constraint set. (In fact the problem has a solution, though we cannot deduce that it does from the Extreme Value Theorem, because the constraint set, which consists of all pairs (0, y), for any value of y, is not compact.)
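The failure of step 1 here can be confirmed mechanically. A sketch using sympy, assuming only the equations of this example:

```python
import sympy as sp

# Example 6.1.1.4: the first-order condition 1 - 2*lam*x = 0 and the
# constraint x**2 = 0 have no common solution, because the constraint
# forces x = 0, at which the first-order condition reads 1 = 0.
x, lam = sp.symbols('x lam', real=True)
solutions = sp.solve([1 - 2*lam*x, x**2], [x, lam], dict=True)
print(solutions)  # expect an empty list
```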