5.1 Necessary conditions for an interior optimum
One variable
From your previous study of mathematics, you probably know that if the function f of a single variable is differentiable and I is an interval of numbers, then there is a relationship between the solutions of the problem max_{x} f(x) subject to x ∈ I and the points of I at which the derivative of f is zero.
Definition
 Let f be a function of a single variable defined on a set S. A point x ∈ S at which f is differentiable and f'(x) = 0 is a stationary point of f.
 The function in the left figure has a unique stationary point x*, which is the global maximizer of the function.
 The function in the middle figure has three stationary points, x*, x', and x". The point x* is the global maximizer of the function, while x' is a local (though not global) minimizer and x" is a local (but not global) maximizer.
 The function in the right figure has two stationary points, x' and x". The point x' is neither a local maximizer nor a local minimizer; x" is a global minimizer.
These examples show that
 a stationary point is not necessarily a global maximizer, or even a local optimizer of any sort, maximizer or minimizer (consider x' in the right-hand figure), and
 a global maximizer is not necessarily a stationary point (consider a in the right-hand figure).
Although a maximizer may not be a stationary point, the only case in which it is not is when it is one of the endpoints of the interval I on which f is defined. That is, any point interior to this interval that is a maximizer must be a stationary point.
 Proposition
 Let f be a function of a single variable defined on the interval I. If a point x in the interior of I is a local maximizer or minimizer of f and f is differentiable at x then f'(x) = 0.
 Proof
 Suppose that x is a local maximizer of f. Because x is in the interior of I, for h > 0 sufficiently small we have x + h ∈ I, so that f(x + h) is defined. Thus because x is a local maximizer of f, for small enough values of h we have f(x + h) ≤ f(x), and hence (f(x + h) − f(x))/h ≤ 0. The limit of the left-hand side of this inequality as h → 0 is f'(x) (by the definition of the derivative), so that f'(x) ≤ 0. A symmetric argument using h < 0 shows that f'(x) ≥ 0. Thus f'(x) = 0. A similar argument applies when x is a local minimizer of f.
Thus among all the points in the interval I, only the endpoints (if any) and the stationary points of f can be maximizers of f. Most functions have a relatively small number of stationary points, so the following procedure to find the maximizers is useful.
 Procedure for solving a single-variable maximization problem on an interval

Let f be a differentiable function of a single variable and let I be an interval. If the problem max_{x}f(x) subject to x ∈ I has a solution, it may be found as follows.
 Find all the stationary points of f (the points x for which f'(x) = 0) that are in I, and calculate the values of f at each such point.
 Find the values of f at the endpoints, if any, of I.
 The points x you have found at which the value f(x) is largest are the maximizers of f.
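This procedure can be sketched in code. The following is a minimal Python sketch (the function `maximize_on_interval` and its argument names are illustrative, not from the text); it assumes the stationary points have already been found analytically by solving f'(x) = 0.

```python
def maximize_on_interval(f, stationary_points, a, b):
    """Compare f at the interior stationary points and at the
    endpoints of [a, b]; the largest value identifies the maximizers."""
    candidates = [x for x in stationary_points if a <= x <= b] + [a, b]
    best = max(f(x) for x in candidates)
    return sorted(x for x in candidates if f(x) == best), best

# f(x) = x^2 on [-1, 2]: f'(x) = 2x has the single root x = 0
maximizers, value = maximize_on_interval(lambda x: x**2, [0], -1, 2)
```

For f(x) = x² on [−1, 2] this returns the maximizer 2 with value 4, matching the first example below.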
 Example

Consider the problem
max_{x} x^{2} subject to x ∈ [−1, 2].
This problem satisfies the conditions of the extreme value theorem, and hence has a solution. Let f(x) = x^{2}. We have f'(x) = 2x, so the function has a single stationary point, x = 0, which is in the constraint set. The value of the function at this point is f(0) = 0. The values of f at the endpoints of the interval on which it is defined are f(−1) = 1 and f(2) = 4. Thus the global maximizer of the function on [−1, 2] is x = 2 and the global minimizer is x = 0.
 Example

Consider the problem
max_{x} −x^{2} subject to x ∈ (−∞, ∞).
This problem does not satisfy the conditions of the extreme value theorem, so that the theorem does not tell us whether it has a solution. Let f(x) = −x^{2}. We have f'(x) = −2x, so that the function has a single stationary point, x = 0, which is in the constraint set. The constraint set has no endpoints, so x = 0 is the only candidate for a solution to the problem. We conclude that if the problem has a solution then the solution is x = 0. In fact, the problem does have a solution: we have f(x) ≤ 0 for all x and f(0) = 0, so the solution is indeed x = 0.
 Example

Consider the problem
max_{x} x^{2} subject to x ∈ (−∞, ∞).
Like the problem in the previous example, this problem does not satisfy the conditions of the extreme value theorem, so that the theorem does not tell us whether it has a solution. Let f(x) = x^{2}. We have f'(x) = 2x, so that the function has a single stationary point, x = 0, which is in the constraint set. The constraint set has no endpoints, so x = 0 is the only candidate for a solution to the problem. We conclude that if the problem has a solution then the solution is x = 0. In fact, the problem does not have a solution: the function f increases without bound as x increases (or decreases) without bound.
Many variables
Consider a maximum of a function of two variables. At this maximum the function cannot increase in any direction (otherwise the point would not be a maximum!). In particular, the maximum must be a maximum along a line parallel to the x-axis and also a maximum along a line parallel to the y-axis. Hence, given the result for a function of a single variable, at the maximum both the partial derivative with respect to x and the partial derivative with respect to y must be zero. Extending this idea to many dimensions gives us the following result, where f'_{i} is the partial derivative of f with respect to its ith argument.
Proposition

Let f be a function of n variables defined on the set S. If the point x in the interior of S is a local maximizer or minimizer of f and the partial derivative of f with respect to its ith argument exists at x, then
f'_{i}(x) = 0.
In particular, if all of the partial derivatives of f exist at x, then
f'_{j}(x) = 0 for j = 1, ..., n.
 Proof

Suppose that x is a local maximizer of f. Because x is in the interior of S, for some r > 0 the set of all points x + h, where h is an n-vector with ‖h‖ < r, is a subset of S. Define the function g of a single variable by g(z_{i}) =
f(x_{1}, ..., x_{i−1}, z_{i}, x_{i+1}, ..., x_{n}) for z_{i} ∈ (x_{i} − r, x_{i} + r). The fact that the partial derivative of f with respect to its ith argument exists at x means that g is differentiable at x_{i}, by the definition of a partial derivative, and g'(x_{i}) = f'_{i}(x). Also, given that x is a local maximizer of f, x_{i} is a local maximizer of g. Thus by the previous result, g'(x_{i}) = 0, and hence f'_{i}(x) = 0. A similar argument applies when x is a local minimizer of f.
 Definition
 Let f be a function of n variables defined on a set S. A point x ∈ S at which f is differentiable and f'_{i}(x) = 0 for i = 1, ..., n is a stationary point of f.
 Procedure for solving a many-variable maximization problem on a set

Let f be a differentiable function of n variables and let S be a set of nvectors. If the problem max_{x}f(x) subject to x ∈ S has a solution, it may be found as follows.
 Find all the stationary points of f (the points x for which f'_{i}(x) = 0 for i = 1, ..., n) in S and calculate the value of f at each point.
 Find the largest and smallest values of f on the boundary of S.
 The points x you have found at which the value of f is largest are the maximizers of f.
The second step of this procedure, finding the largest and smallest values of f on the boundary of S, may itself be a difficult optimization problem. Here are some examples, however, where the method may be fairly easily applied.
 Example

Consider the problem
max_{x,y}[−(x − 1)^{2} − (y + 2)^{2}] subject to −∞ < x < ∞ and −∞ < y < ∞.
This problem does not satisfy the conditions of the extreme value theorem (because the constraint set is not bounded), so the theorem does not tell us whether the problem has a solution. The first-order conditions are
−2(x − 1) = 0 and −2(y + 2) = 0,
which have a unique solution, (x, y) = (1, −2), a point in the constraint set. The constraint set has no boundary, so this point is the only candidate for a solution. In fact, the problem does have a solution: the value of the objective function is nonpositive for all (x, y) and is 0 at (1, −2), so the solution is (x, y) = (1, −2).
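As a quick numeric check (an illustrative sketch, not part of the text), the first-order conditions pin down the point (1, −2), and no nearby point gives a larger value:

```python
def f(x, y):
    return -(x - 1)**2 - (y + 2)**2

def grad(x, y):
    # the two first-order conditions: both partial derivatives zero
    return (-2 * (x - 1), -2 * (y + 2))

assert grad(1, -2) == (0, 0)          # (1, -2) is the stationary point
assert f(1, -2) == 0                  # the value at the candidate
assert all(f(1 + dx, -2 + dy) <= 0    # no sampled point does better
           for dx in (-1, 0, 1) for dy in (-1, 0, 1))
```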
 Example

Consider the problem
max_{x,y}[(x − 1)^{2} + (y − 1)^{2}] subject to 0 ≤ x ≤ 2 and −1 ≤ y ≤ 3.
This problem satisfies the conditions of the extreme value theorem, and hence has a solution. The first-order conditions are
2(x − 1) = 0 and 2(y − 1) = 0,
which have a unique solution, (x, y) = (1, 1), a point in the constraint set; the value of the objective function at this point is 0. Now consider the behavior of the objective function on the boundary of the constraint set, which is a rectangle.
 If x = 0 and −1 ≤ y ≤ 3 then the value of the objective function is 1 + (y − 1)^{2}. The problem of finding y to maximize this function subject to −1 ≤ y ≤ 3 satisfies the conditions of the extreme value theorem, and thus has a solution. The firstorder condition is 2(y − 1) = 0, which has a unique solution y = 1, which is in the constraint set. The value of the objective function at this point is 1. On the boundary of the set {(0, y): −1 ≤ y ≤ 3}—namely at the points (0, −1) and (0, 3)—the value of the objective function is 5. Thus on this part of the boundary, the points (0, −1) and (0, 3) are the only candidates for a solution of the original problem.
 A similar analysis leads to the conclusion that the points (2, −1) and (2, 3) are the only candidates for a maximizer on the part of the boundary for which x = 2 and −1 ≤ y ≤ 3, the points (0, −1) and (2, −1) are the only candidates for a maximizer on the part of the boundary for which 0 ≤ x ≤ 2 and y = −1, and the points (0, 3) and (2, 3) are the only candidates for a maximizer on the part of the boundary for which 0 ≤ x ≤ 2 and y = 3.
 The value of the objective function at all these candidates for a solution on the boundary of the constraint set is 5, which exceeds its value, 0, at the stationary point (1, 1). Thus the maximizers are the four corners of the rectangle: (0, −1), (0, 3), (2, −1), and (2, 3).
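The comparison of candidates in this example can be verified with a short sketch (illustrative only; the variable names are not from the text):

```python
def f(x, y):
    return (x - 1)**2 + (y - 1)**2

stationary = (1, 1)                      # from the first-order conditions
corners = [(0, -1), (0, 3), (2, -1), (2, 3)]

assert f(*stationary) == 0               # value at the interior candidate
assert all(f(*p) == 5 for p in corners)  # every corner gives the value 5
```

Since 5 > 0, the four corners are the maximizers.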
 Example

Consider the problems
max_{x,y} x^{2} + y^{2} + y − 1 subject to x^{2} + y^{2} ≤ 1
and
min_{x,y} x^{2} + y^{2} + y − 1 subject to x^{2} + y^{2} ≤ 1.
In each case the constraint set, {(x, y): x^{2} + y^{2} ≤ 1}, is compact. The objective function is continuous, so by the extreme value theorem each problem has a solution.
We apply the procedure as follows, denoting the objective function by f.
 We have f'_{1}(x, y) = 2x and f'_{2}(x, y) = 2y + 1, so the stationary points are the solutions of 2x = 0 and 2y + 1 = 0. Thus the function has a single stationary point, (x, y) = (0, −1/2), which is in the constraint set. The value of the function at this point is f(0, −1/2) = −5/4.
 The boundary of the constraint set is the set of points (x, y) such that x^{2} + y^{2} = 1, as shown in the following figure.
Thus for a point (x, y) on the boundary we have f(x, y) = x^{2} + 1 − x^{2} + y − 1 = y. We have −1 ≤ y ≤ 1 on the boundary, so the maximum of the function on the boundary is 1, which is achieved at (x, y) = (0, 1), and the minimum is −1, achieved at (x, y) = (0, −1).
 Looking at all the values we have found, we see that the global maximum of f is 1, achieved at (0, 1), and the global minimum is −5/4, achieved at (0, −1/2).
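The reduction of f to y on the boundary circle, and the values found above, can be checked with a short sketch (illustrative only):

```python
import math

def f(x, y):
    return x**2 + y**2 + y - 1

# interior stationary point from 2x = 0 and 2y + 1 = 0
assert f(0, -0.5) == -1.25

# on the boundary x^2 + y^2 = 1, f(x, y) collapses to y
for k in range(360):
    t = math.radians(k)
    x, y = math.cos(t), math.sin(t)
    assert abs(f(x, y) - y) < 1e-12
```

So the boundary maximum is y = 1 at (0, 1) and the boundary minimum is y = −1 at (0, −1), consistent with the conclusion above.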