Mathematical methods for economic theory

Martin J. Osborne

1.2 Matrices: determinant, inverse, and rank

I assume that you are familiar with vectors and matrices and know, in particular, how to multiply them together. (Do the first few exercises to check your knowledge.) On this page I present only the properties of matrices that you need to know to understand the material in the rest of the tutorial.

Transpose

Definition
Let A be an n × m matrix (i.e. a matrix with n rows and m columns). The transpose A' of A is the m × n matrix in which, for i = 1, ..., m, the ith row of A' is the ith column of A.
In particular, if x is a column vector (n × 1 matrix) then x' is a row vector (1 × n matrix).
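This operation translates directly into code. Here is a minimal Python sketch (not part of the original text), representing a matrix as a list of rows:

```python
def transpose(a):
    """Return the transpose of matrix a, given as a list of rows."""
    # Row i of the transpose is column i of a.
    return [[a[r][c] for r in range(len(a))] for c in range(len(a[0]))]

# A 2 x 3 matrix becomes a 3 x 2 matrix.
transpose([[1, 2, 3], [4, 5, 6]])  # → [[1, 4], [2, 5], [3, 6]]
# A column vector (n x 1) becomes a row vector (1 x n).
transpose([[1], [2]])  # → [[1, 2]]
```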

Determinant

An important characteristic of a square matrix (a matrix with the same number of rows as columns) is its “determinant”. We can conveniently define the determinant of a matrix inductively, in terms of the determinants of smaller matrices.
Definition
The determinant of a 1 × 1 matrix is the single number in the matrix. For any n ≥ 2, the determinant of the n × n matrix A is
|A| = ∑_{j=1}^{n} (−1)^{1+j} a1j|A1j|,
where a1j is the number in the first row and jth column of A and A1j is the (n − 1) × (n − 1) matrix obtained by deleting the first row and jth column of A.
Notice that the term (−1)^{1+j} in the sum in the definition is equal to 1 if 1 + j is even and −1 if 1 + j is odd, so that the signs of the a1j|A1j| terms in the sum alternate between positive and negative. If we expand the sum we get
|A| = a11|A11| − a12|A12| + a13|A13| − ... + (−1)^{1+n}a1n|A1n|.
That is, plus a11|A11|, minus a12|A12|, plus a13|A13|, and so on.

To use the definition to find the determinant of an n × n matrix, you first write down the expression it gives for the determinant as a sum involving the determinants of a collection of (n − 1) × (n − 1) matrices. Then, for each of these determinants, you substitute the expression the definition gives as a sum involving the determinants of a collection of (n − 2) × (n − 2) matrices. You continue in the same way until you get to an expression involving only the determinants of 1 × 1 matrices, which the definition says are simply the single elements of those matrices. If n is bigger than 3, this process involves a lot of bytes (or paper). But in principle it is possible. The following examples illustrate it.

Example 1.2.1
Let A be the 2 × 2 matrix
( a  b )
( c  d ).
The matrix A11 is the 1 × 1 matrix consisting of the number d and the matrix A12 is the 1 × 1 matrix consisting of the number c. Thus, using the expression in the definition, the determinant of A is ad − bc.
Example 1.2.2
Let A be the 3 × 3 matrix
( a  b  c )
( d  e  f )
( g  h  i ).
The matrix A11 is the 2 × 2 matrix
( e  f )
( h  i ),
whose determinant is ei − fh. Similarly |A12| = di − fg and |A13| = dh − eg. Thus, using the expression in the definition, the determinant of A is
a(ei − fh) − b(di − fg) + c(dh − eg).
You have probably noticed an odd asymmetry in the definition of the determinant: all the minors are obtained by deleting the first row of the matrix. What is special about the first row? Nothing! We can show that the determinant of any square matrix is equal to a sum like the one in the definition except that the first row is replaced by any other row. There is nothing special either about rows rather than columns; we can switch their roles in the sum. The next result gives the exact expressions.
Proposition 1.2.1
The determinant of the n × n matrix A is equal to
∑_{j=1}^{n} (−1)^{i+j} aij|Aij|
for any i = 1, ..., n, and is also equal to
∑_{i=1}^{n} (−1)^{i+j} aij|Aij|
for any j = 1, ..., n, where Aij is the matrix obtained by deleting the ith row and jth column of A.
Source  
For a proof, see Simon and Blume (1994), Theorem 26.1 on p. 743.
An immediate implication of this result is that the determinant of a matrix for which one or more columns or rows consists entirely of 0's is 0.

Notice that, as in the original definition, the coefficients (−1)^{i+j} in each of these sums alternate in sign. Notice also that the sign of the first term in either sum is positive if the exponent of −1 in that term is even and negative if the exponent is odd. In particular, it is not always positive. For example, the coefficient (−1)^{i+1} of the first term in the first sum is negative if i is even.

The first expression in the proposition is called the expansion along the ith row of the matrix; the second expression is called the expansion along the jth column of the matrix.

The next example verifies the proposition for an arbitrary 3 × 3 matrix.
Example 1.2.3
Let A be a 3 × 3 matrix; denote its elements as in an earlier example (to avoid a dizzying collection of subscripts). The first sum in the proposition is
−a21|A21| + a22|A22| − a23|A23| = −d(bi − ch) + e(ai − cg) − f(ah − bg)
for i = 2 and
a31|A31| − a32|A32| + a33|A33| = g(bf − ce) − h(af − cd) + i(ae − bd)
for i = 3, both of which are equal to the value of |A| calculated in the earlier example. The second sum in the proposition is
a11|A11| − a21|A21| + a31|A31| = a(ei − fh) − d(bi − ch) + g(bf − ce)
for j = 1,
−a12|A12| + a22|A22| − a32|A32| = −b(di − fg) + e(ai − cg) − h(af − cd)
for j = 2, and
a13|A13| − a23|A23| + a33|A33| = c(dh − eg) − f(ah − bg) + i(ae − bd)
for j = 3, all of which are equal to the value of |A| calculated in the earlier example.
Example 1.2.4
Let A be an n × n matrix in which all the elements are zero except the ones on the “main diagonal”—that is, the elements akk in the kth row and kth column for k = 1, ..., n. Applying the definition, the determinant of A is simply a11 times the determinant of the matrix obtained by deleting the first row and first column of A (because all the other elements in the first row of A are zero). This latter matrix has the same structure as A, and its determinant is a22 times the determinant of the matrix obtained by deleting the first two rows and first two columns of A. Proceeding in the same way, we see that the determinant of A is a11a22···ann, the product of the elements on the main diagonal of the matrix (the only elements that may be nonzero).

Inverse

Definition
A square matrix is nonsingular if its determinant is not zero.
Example 1.2.5
The determinant of the matrix
( a  b )
( c  d )
is ad − bc by an earlier example. Thus the matrix is nonsingular if ad − bc ≠ 0.
If a, x, and b are numbers with a ≠ 0, we solve the equation ax = b for x by dividing both sides by a. Or, if you like, by multiplying each side by a−1, which is the inverse operation to multiplication by a. If A is an n × n matrix, x is an n × 1 matrix (column vector), and b is an n × 1 matrix, then we'd like to solve for x in the same way, by multiplying both sides by an “inverse” of A. What exactly do we mean by “inverse”?
Definition
Let A be an n × n matrix. If there is an n × n matrix B such that
BA = AB = I,
where I is the n × n identity matrix (in which every entry on the main diagonal is 1 and all other entries are 0), then B is an inverse of A, denoted A−1.
This definition does not rule out the possibility that a matrix has more than one inverse. But in fact, as the following result shows, any matrix has at most one inverse.
Proposition 1.2.2
A square (n × n) matrix has at most one inverse.
Proof  
Let A be an n × n matrix, and suppose that B and C are both inverses of A. Then by the definition of an inverse, BA = AB = I and CA = AC = I, where I is the n × n identity matrix. Thus C = CI = C(AB) = (CA)B = IB = B, so that C and B are the same.
The next result gives a useful condition for a matrix to have an inverse.
Proposition 1.2.3
A matrix has an inverse if and only if it is nonsingular.
Source  
For a proof, see Simon and Blume (1994), Theorem 26.3 on p. 732.
And here is an explicit formula for the inverse.
Proposition 1.2.4
The inverse of the nonsingular n × n matrix A is the n × n matrix for which the (i,j)th component is
(−1)^{i+j}|Aji|/|A|,
where Aji is the matrix obtained by deleting the jth row and ith column of A.
Source  
For a proof, see Simon and Blume (1994), Theorem 26.7 (which follows from Theorem 26.6) on p. 736.
Notice that the (i,j)th component of the inverse involves the determinant of the matrix Aji, not the determinant of the matrix Aij.
Example 1.2.6
For the 2 × 2 matrix
A =
( a  b )
( c  d )
we have |A11| = d, |A12| = c, |A21| = b, and |A22| = a, and the determinant is ad − bc. Thus if ad − bc ≠ 0 then the inverse of A is
(1/(ad − bc)) ×
(  d  −b )
( −c   a ).
(You can check that the product of the matrix and its inverse is the identity matrix.)
Example 1.2.7
The inverse of the matrix
A = 
left parenthesis a b c right parenthesis
d e f
g h i
is
1
|A|
left parenthesis |A11| −|A21| |A31| right parenthesis
−|A12| |A22| −|A32|
|A13| −|A23| |A33|
if |A| ≠ 0. We have |A11| = ei − fh, |A21| = bi − ch, |A31| = bf − ce, |A12| = di − fg, |A22| = ai − cg, |A32| = af − cd, |A13| = dh − eg, |A23| = ah − bg, and |A33| = ae − bd. (Again, you can check that the product of the matrix and its inverse is the identity matrix.)

Rank

The “rank” of a matrix is usually defined as the maximal number of linearly independent column vectors in the matrix. But to give this definition, I would need to define the concept of linear independence, which is not otherwise needed in this tutorial. So instead I give a definition that uses only concepts defined so far. (This definition is usually given as a result, following a definition in terms of linear independence.) Note that the matrix in the definition is not required to be square.
Definition
The rank of a matrix A is the number of rows (equivalently, the number of columns) in the largest square matrix with a nonzero determinant that can be obtained by deleting rows and columns of A.
An immediate consequence of this definition is that the rank of an n × n nonsingular matrix is n.
Example 1.2.8
The rank of the matrix
( 1  0 )
( 1  0 )
is 1 because the determinant of the matrix is 0 and the determinant of the 1 × 1 matrix obtained by deleting the second row and second column is 1 ≠ 0.
Example 1.2.9
The rank of the matrix
( 1  0  2 )
( 0  2  4 )
is 2 because the determinant of the 2 × 2 matrix obtained by deleting the last column is 2 ≠ 0, and no larger square matrix can be obtained from this 2 × 3 matrix.
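The definition suggests a brute-force computation: test every square submatrix, from the largest down, until one has a nonzero determinant. A minimal Python sketch (an illustration, not from the original text; the number of submatrices grows very quickly, so this is only practical for small matrices):

```python
from itertools import combinations

def det(a):
    """Determinant by expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def rank(a):
    """Size of the largest square submatrix of a with a nonzero determinant."""
    n_rows, n_cols = len(a), len(a[0])
    for k in range(min(n_rows, n_cols), 0, -1):
        for rows in combinations(range(n_rows), k):
            for cols in combinations(range(n_cols), k):
                if det([[a[r][c] for c in cols] for r in rows]) != 0:
                    return k
    return 0  # only the zero matrix has no nonzero submatrix

rank([[1, 0], [1, 0]])        # → 1, as in Example 1.2.8
rank([[1, 0, 2], [0, 2, 4]])  # → 2, as in Example 1.2.9
```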