Mathematical methods for economic theory

Martin J. Osborne

7.2 Optimization with inequality constraints: the necessity of the Kuhn-Tucker conditions

We have seen that a solution x* of an optimization problem with equality constraints is a stationary point of the Lagrangean if the constraints satisfy a condition sometimes referred to as a “regularity” condition. In the case of a single constraint g(x) = c, the condition is that the partial derivatives of g are not all zero at x*, or, in more compact notation, ∇g(x*) ≠ (0, ..., 0). In an optimization problem with inequality constraints a related regularity condition guarantees that a solution satisfies the Kuhn-Tucker conditions. The weakest forms of this regularity condition are difficult to verify. The next result gives three alternative strong forms that are much easier to verify.
Proposition 7.2.1 (Necessity of Kuhn-Tucker conditions)  
Let f and gj for j = 1, ..., m be differentiable functions of many variables defined on an open convex set S and let cj for j = 1, ..., m be constants. Suppose that x* ∈ S solves the problem
maxx∈S f(x) subject to gj(x) ≤ cj for j = 1, ..., m
and that
  • either each gj is concave
  • or each gj is convex and there exists x ∈ S such that gj(x) ≤ cj for every j for which gj is linear and gj(x) < cj for every j for which gj is not linear
  • or each gj is quasiconvex, ∇gj(x*) ≠ (0, ..., 0) for all j = 1, ..., m, and there exists x ∈ S such that gj(x) < cj for j = 1, ..., m.
Then there exists a vector λ = (λ1, ..., λm) such that (x*, λ) satisfies the Kuhn-Tucker conditions.
Source  
For proofs under the first two conditions, see Arrow, Hurwicz, and Uzawa (1961), Theorem 3 and Corollaries 1 and 3 (p. 183). See also Takayama (1985), pp. 106–111. (Note that in both cases the constraints are formulated as gj(x) ≥ 0 rather than gj(x) ≤ cj, so the conditions of concavity and convexity have to be interchanged to fit my statement of the result.) A proof under a slight variant of the second condition is given in Carter (2001), Theorem 5.4 (p. 577). The third condition is also included in Carter's Theorem 5.4; the proof is left as an exercise (5.42 on p. 579). The solution to the exercise is available on the author's website for the book.
Recall that a linear function is concave, so the conditions in the result are satisfied if each constraint function is linear.
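Because linear constraint functions are concave, the first condition holds automatically in a linearly constrained problem, and the Kuhn-Tucker conditions can be verified directly at the solution. The following sketch does this for a hypothetical problem (the problem and all names are illustrative, not from the text): maximize f(x, y) = x + y subject to x ≤ 1 and y ≤ 1, whose solution is clearly (1, 1).

```python
import numpy as np

# Hypothetical linear-constraint problem: maximize f(x, y) = x + y
# subject to g1(x, y) = x <= 1 and g2(x, y) = y <= 1.
# Linear g_j are concave, so by the first condition of
# Proposition 7.2.1 the solution must satisfy the KT conditions.

x_star = np.array([1.0, 1.0])        # the obvious solution
grad_f = np.array([1.0, 1.0])        # gradient of f (constant)
grad_g = np.array([[1.0, 0.0],       # gradient of g1
                   [0.0, 1.0]])      # gradient of g2
c = np.array([1.0, 1.0])

# Stationarity: grad_f = sum_j lam_j * grad_g_j, solved for the lam_j.
lam = np.linalg.solve(grad_g.T, grad_f)

g_vals = grad_g @ x_star             # g_j(x*) for linear constraints
slack = c - g_vals                   # feasibility slack c_j - g_j(x*)

print(lam)                           # the multipliers
print(np.all(lam >= 0))              # nonnegativity
print(np.allclose(lam * slack, 0))   # complementary slackness
```

Here both constraints bind at (1, 1), and the multipliers come out as λ = (1, 1), satisfying nonnegativity and complementary slackness.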

Note that the last part of the second and third conditions requires only that some point strictly satisfy all the constraints. This requirement is given in Slater (1950) and is known as Slater's condition.
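Slater's condition can be checked mechanically: one simply needs a single point at which every constraint holds strictly. The helper below is a hypothetical illustration (not from the text) that scans a grid of candidate points:

```python
# Hypothetical helper: Slater's condition asks whether some point
# strictly satisfies every constraint g_j(x) < c_j. Here we simply
# scan a grid of candidate points.

def slater_holds(constraints, candidates):
    """True if some candidate strictly satisfies all constraints."""
    return any(all(g(x) < c for g, c in constraints) for x in candidates)

candidates = [i / 10 for i in range(-20, 21)]

# g(x) = x**2 <= 0 (as in Example 7.2.2 below): x**2 < 0 is
# impossible, so Slater's condition fails.
print(slater_holds([(lambda x: x * x, 0.0)], candidates))   # False

# g(x) = x <= 1: any x < 1 strictly satisfies the constraint,
# so Slater's condition holds.
print(slater_holds([(lambda x: x, 1.0)], candidates))       # True
```

A grid scan is only a sketch, of course; in practice one typically exhibits a strictly feasible point directly.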

One way in which the conditions in the result may be weakened is sometimes useful: the conditions on the constraint functions need to be satisfied only by the binding constraints—those for which gj(x*) = cj.

See the next page for some examples illustrating how to use the result.

The next example shows that some conditions on the constraint functions are needed. The problem in the example has a solution, but this solution does not satisfy the Kuhn-Tucker conditions (which have no solution).

Example 7.2.1
Consider the problem
maxx,y x subject to y − (1 − x)³ ≤ 0 and y ≥ 0,
with the domain of the objective and constraint functions the set of all pairs (x, y).

The constraint functions do not satisfy any of the conditions in the proposition.

The solution is clearly (1, 0).

The Lagrangean is

L(x, y) = x − λ1(y − (1 − x)³) + λ2y.
The Kuhn-Tucker conditions are
1 − 3λ1(1 − x)² = 0
−λ1 + λ2 = 0
y − (1 − x)³ ≤ 0, λ1 ≥ 0, and λ1[y − (1 − x)³] = 0
−y ≤ 0, λ2 ≥ 0, and λ2[−y] = 0.
These conditions have no solution. From the last condition, either λ2 = 0 or y = 0. If λ2 = 0 then λ1 = 0 from the second condition, so that no value of x is compatible with the first condition. If y = 0 then from the third condition either λ1 = 0 or x = 1, both of which are incompatible with the first condition.
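The impossibility can also be seen mechanically at the solution itself: since (1 − x)² = 0 at x = 1, the first condition evaluates to 1, not 0, for every choice of λ1. A small illustrative sketch:

```python
# At the solution (x, y) = (1, 0) of Example 7.2.1, the first
# Kuhn-Tucker condition is 1 - 3*lam1*(1 - x)**2 = 0. Because
# (1 - x)**2 = 0 there, the left-hand side equals 1 for any lam1.

x, y = 1.0, 0.0

def first_condition(lam1):
    return 1 - 3 * lam1 * (1 - x) ** 2

# Whatever multiplier we try, the condition evaluates to 1, never 0.
for lam1 in [0.0, 1.0, 10.0, 1e6]:
    print(first_condition(lam1))   # 1.0 in every case
```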
The following variant of a previous example shows that the second part of the second condition cannot be dispensed with: convexity of the constraint functions alone does not guarantee that a solution of the problem satisfies the Kuhn-Tucker conditions.
Example 7.2.2
Consider the problem
maxx x subject to x² ≤ 0,
with the domain of both the objective function and the constraint function the set of all numbers x.

The constraint function is convex, but no value of x strictly satisfies the constraint.

The only value of x that satisfies the constraint is 0, so that is the solution of the problem.

The Lagrangean is

L(x) = x − λx²
so the Kuhn-Tucker conditions are
1 − 2λx = 0
λ ≥ 0, x² ≤ 0, and λx² = 0.
The condition x² ≤ 0 implies that x = 0. However, there is no value of λ for which (0, λ) satisfies the first condition.
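As in the previous example, the failure can be checked mechanically; this illustrative sketch evaluates the first condition at the only feasible point:

```python
# At the only feasible point x = 0 of Example 7.2.2, the first
# Kuhn-Tucker condition is 1 - 2*lam*x = 0; with x = 0 the
# left-hand side equals 1 for every multiplier lam >= 0.

x = 0.0

def first_condition(lam):
    return 1 - 2 * lam * x

for lam in [0.0, 0.5, 100.0]:
    print(first_condition(lam))   # 1.0 in every case, never 0
```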