7.2 Optimization with inequality constraints: the necessity of the Kuhn-Tucker conditions
 Proposition (Necessity of Kuhn-Tucker conditions)

Let f and g_{j} for j = 1, ..., m be differentiable functions of many variables and let c_{j} for j = 1, ..., m be constants. Suppose that x* solves the problem
max_{x} f(x) subject to g_{j}(x) ≤ c_{j} for j = 1, ..., m

and that
 either each g_{j} is concave
 or each g_{j} is convex and there exists x such that g_{j}(x) ≤ c_{j} for every j for which g_{j} is linear and g_{j}(x) < c_{j} for every j for which g_{j} is not linear
 or each g_{j} is quasiconvex, ∇g_{j}(x*) ≠ (0, ..., 0) for all j = 1, ..., m, and there exists x such that g_{j}(x) < c_{j} for j = 1, ..., m.
 Source
 For proofs under the first two conditions, see Arrow, Hurwicz, and Uzawa (1961), Theorem 3 and Corollaries 1 and 3 (p. 183). See also Takayama (1985), pp. 106–111. (Note that in both cases the constraints are formulated as g_{j}(x) ≥ 0 rather than g_{j}(x) ≤ c_{j}, so the conditions of concavity and convexity have to be interchanged to fit my statement of the result.) A proof under a slight variant of the second condition is given in Carter (2001), Theorem 5.4 (p. 577). The third condition is also included in Carter's Theorem 5.4; the proof is left as an exercise (5.42 on p. 579). The solution to the exercise is available on the author's website for the book.
Note that the last part of the second and third conditions requires only that some point strictly satisfy all the constraints. This requirement is given in Slater (1950) and is known as Slater's condition.
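As a concrete instance where the second condition holds (my own illustration, not from the source), consider maximizing x + y subject to x² + y² ≤ 1: the constraint function is convex and (0, 0) satisfies the constraint strictly, so Slater's condition is met and the Kuhn-Tucker conditions must hold at the solution. A minimal numerical check in Python:

```python
import math

# Illustrative problem (not from the text): maximize f(x, y) = x + y
# subject to g(x, y) = x^2 + y^2 <= 1.  g is convex and (0, 0) satisfies
# the constraint strictly, so Slater's condition holds and the
# Kuhn-Tucker conditions are necessary at the solution.

# Analytic solution: the constraint binds, and stationarity
# (1, 1) = lam * (2x, 2y) gives x = y = 1/sqrt(2) and lam = 1/(2x).
x = y = 1 / math.sqrt(2)
lam = 1 / (2 * x)

stationarity_x = 1 - lam * 2 * x      # dL/dx = 0
stationarity_y = 1 - lam * 2 * y      # dL/dy = 0
slack = x**2 + y**2 - 1               # feasibility; binds here
comp_slack = lam * slack              # complementary slackness

print(abs(stationarity_x) < 1e-9, abs(stationarity_y) < 1e-9)
print(lam >= 0, slack <= 1e-9, abs(comp_slack) < 1e-9)
```

All four Kuhn-Tucker requirements (the two stationarity conditions, nonnegativity of the multiplier, and complementary slackness) check out at the solution, as the proposition guarantees.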
One way in which the conditions in the result may be weakened is sometimes useful: the conditions on the constraint functions need to be satisfied only by the binding constraints—those for which g_{j}(x*) = c_{j}.
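Applying this weakening requires knowing which constraints bind at the candidate solution. The following sketch (the helper `binding_constraints` is my own, not from the source) identifies them for the constraints of the first example below:

```python
# Sketch (my own helper, not from the text): given constraint functions
# g_j and bounds c_j, list the binding constraints at a candidate x*,
# i.e. those with g_j(x*) = c_j (up to a numerical tolerance).

def binding_constraints(x, gs, cs, tol=1e-9):
    """Return the indices j for which g_j(x) == c_j (within tol)."""
    return [j for j, (g, c) in enumerate(zip(gs, cs)) if abs(g(x) - c) <= tol]

# Constraints of the first example below, written as g_j(x, y) <= 0:
gs = [lambda p: p[1] - (1 - p[0])**3,  # g_1(x, y) = y - (1 - x)^3
      lambda p: -p[1]]                 # g_2(x, y) = -y  (i.e. y >= 0)
cs = [0, 0]

print(binding_constraints((1, 0), gs, cs))  # both constraints bind: [0, 1]
print(binding_constraints((0, 0), gs, cs))  # only the second binds: [1]
```

At (1, 0) both constraints bind, so the weakened result offers no escape in that example: the conditions would still have to hold for both constraint functions.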
See the next page for some examples illustrating how to use the result.
The next example shows that some conditions on the constraint functions are needed. The problem in the example has a solution, but this solution does not satisfy the Kuhn-Tucker conditions (indeed, those conditions have no solution at all).
 Example

Consider the problem
max_{x,y} x subject to y − (1 − x)^{3} ≤ 0 and y ≥ 0.

The constraint function y − (1 − x)^{3} satisfies none of the conditions in the proposition (it is neither concave, convex, nor quasiconvex).
The solution is clearly (1, 0).
The Lagrangean is
L(x, y) = x − λ_{1}(y − (1 − x)^{3}) + λ_{2}y.

The Kuhn-Tucker conditions are

1 − 3λ_{1}(1 − x)^{2} = 0
−λ_{1} + λ_{2} = 0
y − (1 − x)^{3} ≤ 0, λ_{1} ≥ 0, and λ_{1}[y − (1 − x)^{3}] = 0
−y ≤ 0, λ_{2} ≥ 0, and λ_{2}[−y] = 0.

These conditions have no solution: the first condition requires λ_{1} > 0 and x ≠ 1, so the third condition gives y = (1 − x)^{3}; the second condition gives λ_{2} = λ_{1} > 0, so the fourth condition gives y = 0 and hence x = 1, a contradiction. In particular, at the solution (1, 0) the first condition reduces to 1 = 0.
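Equivalently, stationarity fails at (1, 0) because ∇f cannot be written as a nonnegative combination of the gradients of the binding constraint functions. A small Python check (my own sketch):

```python
# At the solution (x, y) = (1, 0), grad f = (1, 0), while the gradients
# of the two binding constraint functions are
#   grad g_1 = (3(1 - x)^2, 1) = (0, 1)   and   grad g_2 = (0, -1).
# Stationarity would need (1, 0) = lam1*(0, 1) + lam2*(0, -1), whose
# first component reads 1 = 0: no multipliers work.
x, y = 1, 0
grad_f = (1, 0)
grad_g1 = (3 * (1 - x)**2, 1)
grad_g2 = (0, -1)

# The first component of lam1*grad_g1 + lam2*grad_g2 is 0 for every
# choice of (lam1, lam2), so it can never match grad_f's first component.
print(grad_f[0] == 1 and grad_g1[0] == 0 and grad_g2[0] == 0)  # True
```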
 Example

Consider the problem
max_{x} x subject to x^{2} ≤ 0,

in which the constraint function is convex but no value of x satisfies the constraint strictly.
The only value of x that satisfies the constraint is 0, so that is the solution of the problem.
The Lagrangean is
L(x) = x − λx^{2},

so the Kuhn-Tucker conditions are

1 − 2λx = 0
x^{2} ≤ 0, λ ≥ 0, and λx^{2} = 0.

These conditions have no solution: at the only feasible point, x = 0, the first condition reduces to 1 = 0.
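The impossibility at x = 0 can be confirmed directly (a sketch of my own): the gradient of the constraint function, 2x, vanishes at the only feasible point, so no multiplier can make the stationarity condition hold.

```python
# At the only feasible point x = 0, grad f = 1 but grad g = 2x = 0, so
# the stationarity condition 1 - 2*lam*x = 0 reads 1 = 0 for every lam.
x = 0
for lam in (0.0, 1.0, 10.0, 1e6):
    # 1 - 2*lam*x stays equal to 1, never 0, whatever lam >= 0 we try.
    assert 1 - 2 * lam * x == 1
print("no multiplier satisfies 1 - 2*lam*x = 0 at x = 0")
```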