4.2 Optimization: definitions
To be very clear about what we mean by the “max” operator, here is a precise definition.
Definition
Let f be a function of many variables defined on a set X and let S be a subset of X. The point x* ∈ S solves the problem

max_{x ∈ X} f(x) subject to x ∈ S

if

f(x) ≤ f(x*) for all x ∈ S.

In this case we say that x* is a maximizer of f(x) subject to the constraint x ∈ S, and that f(x*) is the maximum (or maximum value) of f(x) subject to the constraint x ∈ S.
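On a finite constraint set, the definition can be checked by direct comparison. Here is a minimal Python sketch; the function f and the set S in the example are illustrative assumptions of mine, not taken from the figure below.

```python
def maximizers(f, S):
    """Return all maximizers of f on a finite set S, and the maximum value.

    By the definition above, x* is a maximizer iff f(x) <= f(x*) for all
    x in S, so on a finite set it suffices to collect the points whose
    value equals max f.
    """
    M = max(f(x) for x in S)                # the maximum value of f on S
    return [x for x in S if f(x) == M], M   # possibly several maximizers

# Illustrative example: f(x) = -(x - 2)**2 on S = {0, 1, 2, 3, 4}
# has the unique maximizer x* = 2, with maximum value 0.
print(maximizers(lambda x: -(x - 2)**2, [0, 1, 2, 3, 4]))  # ([2], 0)
```

Note that the function returns a list: as the figure below illustrates, a problem may have several maximizers, all attaining the same maximum value.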
Consider the function f of one variable shown in the following figure, with S equal to the blue line segment. Both x* and x** are maximizers of f(x) subject to the constraint x ∈ S, and x" is a minimizer. (The graph of f is horizontal between x3 and x4 and between x5 and x6.) The number M is the maximum of f(x) subject to x ∈ S and the number m is the minimum.
What about the point x'? This point is decidedly not a maximizer, because f(x*) > f(x'), for example. But the value of f at x' is at least as large as its value at every nearby point in S. We call such a point a “local maximizer”. Recall that the distance between two n-vectors x and y is the square root of ∑_{i=1}^{n} (xi − yi)² (the length of the straight line joining x and y).
Definition
Let f be a function of many variables defined on a set X and let S be a subset of X. The point x* ∈ S is a local maximizer of f(x) subject to the constraint x ∈ S if there is a number ε > 0 such that

f(x) ≤ f(x*) for all x ∈ S for which the distance between x and x* is at most ε.
For the function f in the figure above we have f(x) ≤ f(x') for all x between x1 and x2 (for example), so that x' is a local maximizer of f(x); because x' − x1 = x2 − x', we can take ε = x' − x1 in the definition.
But note that the point x" in this figure is also a local maximizer of f(x), even though it is a global minimizer. The function is constant between x3 and x4. The point x4 is (slightly) closer to x" than is the point x3, so we can take the ε in the definition of a local maximizer to be x4 − x". For every point x within the distance ε of x", we have f(x) = f(x"), so that in particular f(x) ≤ f(x"). The point is that the definition of a local maximizer does not require the value of the function at nearby points to be less than f(x*), only less than or equal to f(x*).
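The role of ε can be made concrete numerically. The sketch below is a minimal illustration, not part of the tutorial's formal apparatus: the specific function, which is constant on [3, 4] in the spirit of the flat stretch between x3 and x4 in the figure, is my own assumption. It checks the defining inequality at sampled points of S within distance ε of a candidate.

```python
import numpy as np

def is_local_maximizer(f, x_star, S, eps):
    """Check f(x) <= f(x_star) for every sampled point of S within eps of x_star."""
    near = S[np.abs(S - x_star) <= eps]
    return bool(np.all(f(near) <= f(x_star)))

# f equals 0 on [3, 4] and is larger outside, so the points of [3, 4]
# minimize f; yet x_star = 3.5 is a local maximizer: with eps = 0.4 every
# nearby value of f *equals* f(3.5), and the definition only requires <=.
f = lambda x: np.maximum(np.maximum(0.0, 3.0 - x), x - 4.0)
S = np.linspace(2.0, 5.0, 3001)
print(is_local_maximizer(f, 3.5, S, eps=0.4))  # True
```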
In economic theory we are almost always interested in global maximizers, not merely local maximizers. For example, the standard theory is that a consumer chooses the bundle she most prefers among all those that are available; she is not satisfied by a bundle that is merely better than the other bundles that are nearby. Similarly, a firm is assumed to choose the input-output vector that maximizes its profit among all those that are feasible; it is not satisfied by an input-output vector that merely yields a higher profit than do similar vectors.
Transforming the objective function
Let g be an increasing function of a single variable. (That is, if z' > z then g(z') > g(z).) Then the set of solutions to the problem

max_{x ∈ X} f(x) subject to x ∈ S

is identical to the set of solutions to the problem

max_{x ∈ X} g(f(x)) subject to x ∈ S.

This fact is sometimes useful: for example, if the objective function is f(x1, x2) = x1^α x2^β and x1 and x2 are positive for all points in the constraint set, it may be easier to work with the logarithm of this function, namely α ln x1 + β ln x2. Since ln z is an increasing function, the set of solutions of the problem in which x1^α x2^β is the objective function is the same as the set of solutions of the problem in which α ln x1 + β ln x2 is the objective function.
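This invariance is easy to verify numerically. In the sketch below, the exponents α = 0.3, β = 0.7 and the budget-line constraint set are illustrative choices of mine; the grid point that maximizes x1^α x2^β is exactly the grid point that maximizes α ln x1 + β ln x2.

```python
import numpy as np

alpha, beta = 0.3, 0.7             # illustrative exponents
x1 = np.linspace(0.5, 9.5, 91)     # positive bundles on the line
x2 = 10.0 - x1                     #   x1 + x2 = 10 (the set S)

f    = x1**alpha * x2**beta                   # original objective
ln_f = alpha*np.log(x1) + beta*np.log(x2)     # ln of the objective

# Because ln is increasing, it preserves the ordering of the values of f,
# so both problems pick out the same grid point (here x1 = 3.0).
print(x1[np.argmax(f)], x1[np.argmax(ln_f)])  # 3.0 3.0
```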
Minimization problems
Throughout the tutorial, I concentrate on maximization problems. What about minimization problems? Any discussion of maximization problems encompasses minimization problems because of a simple fact: any minimization problem can be turned into a maximization problem by multiplying the objective function by minus one. That is, the problem

min_{x ∈ X} f(x) subject to x ∈ S

has the same set of solutions as the problem

max_{x ∈ X} −f(x) subject to x ∈ S.

Although I emphasize maximization problems, I usually state results separately for maximization problems and for minimization problems.
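As a quick numerical check of this equivalence (the quadratic objective and the grid standing in for S are illustrative assumptions): the point that minimizes f is the point that maximizes −f, and the optimal values differ only in sign.

```python
import numpy as np

f = lambda x: (x - 1.0)**2 + 2.0      # illustrative objective
S = np.linspace(-5.0, 5.0, 1001)      # a finite stand-in for the constraint set

x_min = S[np.argmin(f(S))]            # solves  min f(x)  subject to x in S
x_max = S[np.argmax(-f(S))]           # solves  max -f(x) subject to x in S

print(x_min, x_max)                   # 1.0 1.0: the same solution
print(f(x_min), -np.max(-f(S)))       # 2.0 2.0: min of f = -(max of -f)
```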