7.2 Optimization with inequality constraints: the necessity of the Kuhn-Tucker conditions
- Proposition 7.2.1 (Necessity of Kuhn-Tucker conditions)
Let f and gj for j = 1, ..., m be differentiable functions of many variables defined on an open convex set S and let cj for j =
1, ..., m be constants. Suppose that x* ∈ S solves the problem
max_{x ∈ S} f(x) subject to gj(x) ≤ cj for j = 1, ..., m
and that
- either each gj is concave
- or each gj is convex and there exists x ∈ S such that gj(x) ≤ cj for every j for which gj is linear, and gj(x) < cj for every j for which gj is not linear
- or each gj is quasiconvex, ∇gj(x*) ≠ (0, ..., 0) for all j = 1, ..., m, and there exists x ∈ S such that gj(x) < cj for j = 1, ..., m.
Then there exists a vector λ = (λ1, ..., λm) such that x* and λ satisfy the Kuhn-Tucker conditions for the problem.
- Source
- For proofs under the first two conditions, see Arrow, Hurwicz, and Uzawa (1961), Theorem 3 and Corollaries 1 and 3 (p. 183). See also Takayama (1985), pp. 106–111. (Note that in both cases the constraints are formulated as gj(x) ≥ 0 rather than gj(x) ≤ cj, so the conditions of concavity and convexity have to be interchanged to fit my statement of the result.) A proof under a slight variant of the second condition is given in Carter (2001), Theorem 5.4 (p. 577). The third condition is also included in Carter's Theorem 5.4; the proof is left as an exercise (5.42 on p. 579). The solution to the exercise is available on the author's website for the book.
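To see what the result asserts in a concrete case, here is a minimal sympy sketch (my own illustration, not part of the source) for the problem max x + y subject to x² + y² ≤ 1, which falls under the second condition: the constraint function is convex and the point (0, 0) satisfies the constraint strictly. The proposition then guarantees that the solution satisfies the Kuhn-Tucker conditions for some λ ≥ 0; the code assumes the constraint binds at the solution, which the stationarity conditions force.

```python
# A minimal sketch (my illustration, not from the source): the problem
#   max x + y  subject to  x^2 + y^2 <= 1
# satisfies the second condition of the proposition (the constraint
# function is convex and (0, 0) satisfies the constraint strictly), so
# some lambda >= 0 must make the solution satisfy the Kuhn-Tucker
# conditions.  Stationarity forces lambda != 0, so the constraint binds.
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)

L = (x + y) - lam * (x**2 + y**2 - 1)          # Lagrangean

conditions = [sp.diff(L, x),                   # 1 - 2*lambda*x = 0
              sp.diff(L, y),                   # 1 - 2*lambda*y = 0
              x**2 + y**2 - 1]                 # binding constraint

solutions = sp.solve(conditions, [x, y, lam], dict=True)
kt = [s for s in solutions if s[lam] >= 0]     # keep lambda >= 0
print(kt)    # expect x = y = lambda = sqrt(2)/2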
Note that the last part of the second and third conditions requires only that some point strictly satisfy all the constraints. This requirement is given in Slater (1950) and is known as Slater's condition.
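When the constraint functions can be evaluated numerically, one rough way to check Slater's condition is to minimize the worst constraint violation max_j (gj(x) − cj) and see whether the minimum is negative. A sketch along these lines, with made-up constraint functions (not from the source):

```python
# Rough numerical check of Slater's condition (my own sketch, not from
# the source): the condition holds exactly when some point makes
# max_j (g_j(x) - c_j) negative, so minimize that quantity and look at
# the sign of the value found.  The constraint functions below are made up.
import numpy as np
from scipy.optimize import minimize

constraints = [
    (lambda z: z[0]**2 + z[1]**2, 1.0),   # g_1(x, y) = x^2 + y^2, c_1 = 1
    (lambda z: z[0] + z[1],       1.0),   # g_2(x, y) = x + y,     c_2 = 1
]

def worst_violation(z):
    """Largest value of g_j(z) - c_j over the constraints j."""
    return max(g(z) - c for g, c in constraints)

result = minimize(worst_violation, x0=np.zeros(2), method="Nelder-Mead")

# A strictly negative value exhibits a point satisfying every constraint
# strictly, so Slater's condition holds for these constraints.
print("candidate point:", result.x, "worst violation:", result.fun)
print("Slater's condition holds:", result.fun < 0)
```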
One way in which the conditions in the result may be weakened is sometimes useful: the conditions on the constraint functions need to be satisfied only by the binding constraints—those for which gj(x*) = cj.
See the next page for some examples illustrating how to use the result.
The next example shows that some conditions on the constraint functions are needed. The problem in the example has a solution, but this solution does not satisfy the Kuhn-Tucker conditions (which have no solution).
- Example 7.2.1
Consider the problem
max_{x,y} x subject to y − (1 − x)³ ≤ 0 and y ≥ 0,
with the domain of both the objective function and the constraint functions the set of all pairs (x, y).
The constraint functions do not satisfy any of the conditions in the proposition: the first constraint function, y − (1 − x)³, is neither concave, nor convex, nor quasiconvex.
The solution is clearly (1, 0): any feasible pair (x, y) satisfies 0 ≤ y ≤ (1 − x)³, so that x ≤ 1, and x = 1 is feasible only if y = 0.
The Lagrangean is
L(x, y) = x − λ1(y − (1 − x)³) + λ2y.
The Kuhn-Tucker conditions are
1 − 3λ1(1 − x)² = 0
−λ1 + λ2 = 0
y − (1 − x)³ ≤ 0, λ1 ≥ 0, and λ1[y − (1 − x)³] = 0
−y ≤ 0, λ2 ≥ 0, and λ2[−y] = 0.
These conditions have no solution: the first condition implies that λ1 > 0 and x ≠ 1, so the third line gives y = (1 − x)³ ≠ 0, while the second condition gives λ2 = λ1 > 0, so the fourth line gives y = 0, a contradiction. In particular, the solution (1, 0) of the problem does not satisfy the Kuhn-Tucker conditions for any values of λ1 and λ2.
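This emptiness can also be checked mechanically; the following sympy sketch (my own, not part of the source) solves the equation part of the conditions and finds no solution, so the inequality parts never come into play.

```python
# Mechanical check (my own sketch, not from the source) that the
# Kuhn-Tucker conditions of Example 7.2.1 have no solution: solve the
# equation part of the conditions and see that the solution set is empty.
import sympy as sp

x, y, l1, l2 = sp.symbols("x y lambda1 lambda2", real=True)

equations = [
    1 - 3 * l1 * (1 - x)**2,     # stationarity with respect to x
    -l1 + l2,                    # stationarity with respect to y
    l1 * (y - (1 - x)**3),       # complementary slackness, first constraint
    l2 * y,                      # complementary slackness, second constraint
]

# An empty list means that no (x, y, lambda1, lambda2) satisfies even the
# equalities, let alone the inequality parts of the conditions.
print(sp.solve(equations, [x, y, l1, l2], dict=True))   # expect []
```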
- Example 7.2.2
Consider the problem
max_x x subject to x² ≤ 0,
with the domain of both the objective function and the constraint function the set of all numbers x.
The constraint function is convex, but no value of x strictly satisfies the constraint.
The only value of x that satisfies the constraint is 0, so that is the solution of the problem.
The Lagrangean is
L(x) = x − λx²,
so the Kuhn-Tucker conditions are
1 − 2λx = 0
x² ≤ 0, λ ≥ 0, and λx² = 0.
These conditions have no solution: the constraint x² ≤ 0 implies that x = 0, in which case the first condition is 1 = 0. In particular, the solution x = 0 of the problem does not satisfy the Kuhn-Tucker conditions for any value of λ.
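A similar mechanical check (again my own sketch, not part of the source) adds the feasibility requirement x = 0 to the equations and confirms that nothing satisfies them.

```python
# Check for Example 7.2.2 (my own sketch): the only feasible point is
# x = 0, so add x = 0 to the stationarity and complementary-slackness
# equations and observe that the resulting system has no solution.
import sympy as sp

x, lam = sp.symbols("x lambda", real=True)

equations = [
    1 - 2 * lam * x,   # stationarity
    lam * x**2,        # complementary slackness
    x,                 # feasibility: x^2 <= 0 forces x = 0
]

print(sp.solve(equations, [x, lam], dict=True))   # expect []
```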