Linear Algebra Review and Reference for Machine Learning

1. Basic Concepts and Notation

Linear algebra provides a way of compactly representing and operating on sets of linear equations. For example, consider the following system of equations:
\begin{align*} 4 x_{1}-5 x_{2} &=-13 \\ -2 x_{1}+3 x_{2} &=9. \end{align*}
This is two equations and two variables, so as you know from high school algebra, you can find a unique solution for x 1 x 1 x_(1)x_{1} and x 2 x 2 x_(2)x_{2} (unless the equations are somehow degenerate, for example if the second equation is simply a multiple of the first, but in the case above there is in fact a unique solution). In matrix notation, we can write the system more compactly as
A x = b A x = b Ax=bA x=b
with
A = [ 4 5 2 3 ] , b = [ 13 9 ] . A = 4 5 2 3 , b = 13 9 . A=[[4,-5],[-2,3]],quad b=[[-13],[9]].A=\left[\begin{array}{cc} 4 & -5 \\ -2 & 3 \end{array}\right], \quad b=\left[\begin{array}{c} -13 \\ 9 \end{array}\right] .
As we will see shortly, there are many advantages (including the obvious space savings) to analyzing linear equations in this form.
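As a quick numerical sanity check, here is a minimal sketch (using NumPy, which is our choice of tool here rather than anything the notes prescribe) that solves this particular system:

```python
import numpy as np

# The system 4x1 - 5x2 = -13, -2x1 + 3x2 = 9 in matrix form Ax = b.
A = np.array([[4.0, -5.0],
              [-2.0, 3.0]])
b = np.array([-13.0, 9.0])

x = np.linalg.solve(A, b)             # solves Ax = b for square, invertible A
print(x)                              # [3. 5.], i.e. x1 = 3, x2 = 5
np.testing.assert_allclose(A @ x, b)  # verify the solution satisfies the system
```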

1.1. Basic Notation

We use the following notation:
  • By A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} we denote a matrix with m m mm rows and n n nn columns, where the entries of A A AA are real numbers.
  • By x R n x R n x inR^(n)x \in \mathbb{R}^{n}, we denote a vector with n n nn entries. By convention, an n n nn-dimensional vector is often thought of as a matrix with n n nn rows and 1 column, known as a column vector. If we want to explicitly represent a row vector – a matrix with 1 row and n n nn columns – we typically write x T x T x^(T)x^{T} (here x T x T x^(T)x^{T} denotes the transpose of x x xx, which we will define shortly).
  • The i i iith element of a vector x x xx is denoted x i x i x_(i)x_{i}: x = [ x 1 x 2 x n ] . x = x 1 x 2 x n . x=[[x_(1)],[x_(2)],[vdots],[x_(n)]].x=\left[\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array}\right] .
  • We use the notation a i j a i j a_(ij)a_{i j} (or A i j , A i , j A i j , A i , j A_(ij),A_(i,j)A_{i j}, A_{i, j}, etc) to denote the entry of A A AA in the i i iith row and j j jjth column: A = [ a 11 a 12 a 1 n a 21 a 22 a 2 n a m 1 a m 2 a m n ] . A = a 11 a 12 a 1 n a 21 a 22 a 2 n a m 1 a m 2 a m n . A=[[a_(11),a_(12),cdots,a_(1n)],[a_(21),a_(22),cdots,a_(2n)],[vdots,vdots,ddots,vdots],[a_(m1),a_(m2),cdots,a_(mn)]].A=\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 n} \\ a_{21} & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m 1} & a_{m 2} & \cdots & a_{m n} \end{array}\right].
  • We denote the j j jjth column of A A AA by a j a j a^(j)a^{j} or A : , j A : , j A_(:,j)A_{:, j}: A = [ | | | a 1 a 2 a n | | | ] . A = | | | a 1 a 2 a n | | | . A=[[|,|,,|],[a^(1),a^(2),cdots,a^(n)],[|,|,,|]].A=\left[\begin{array}{cccc} | & | & & | \\ a^{1} & a^{2} & \cdots & a^{n} \\ | & | & & | \end{array}\right].
  • We denote the ith row of A by a_{i}^{T} or A_{i,:}: A=\left[\begin{array}{ccc} - & a_{1}^{T} & - \\ - & a_{2}^{T} & - \\ & \vdots & \\ - & a_{m}^{T} & - \end{array}\right].
  • Viewing a matrix as a collection of column vectors or row vectors is very important and convenient in many cases; in general, it is mathematically (and conceptually) cleaner to operate at the level of vectors rather than individual scalars. There is no universal convention for denoting the columns or rows of a matrix, so feel free to use whatever notation you like, as long as it is explicitly defined.
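To connect this notation to code, here is a small NumPy sketch (our own illustration, not part of the notation above) showing entries, columns, and rows of a matrix; note that NumPy indices are 0-based while the math above is 1-based:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # A in R^{2x3}: m = 2 rows, n = 3 columns

a_12  = A[0, 1]             # entry a_{12} (row 1, column 2) -> 2
col_2 = A[:, 1]             # 2nd column, A_{:,2} -> [2, 5]
row_1 = A[0, :]             # 1st row, A_{1,:}   -> [1, 2, 3]
print(a_12, col_2, row_1)
```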

2. Matrix Multiplication

The product of two matrices A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} and B R n × p B R n × p B inR^(n xx p)B \in \mathbb{R}^{n \times p} is the matrix
C = A B R m × p , C = A B R m × p , C=AB inR^(m xx p),C=A B \in \mathbb{R}^{m \times p},
where
C i j = k = 1 n A i k B k j . C i j = k = 1 n A i k B k j . C_(ij)=sum_(k=1)^(n)A_(ik)B_(kj).C_{i j}=\sum_{k=1}^{n} A_{i k} B_{k j} .
Note that in order for the matrix product to exist, the number of columns in A A AA must equal the number of rows in B B BB. There are many other ways of looking at matrix multiplication that may be more convenient and insightful than the standard definition above, and we'll start by examining a few special cases.
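The entrywise definition can be checked directly against NumPy's built-in matrix product; this is purely an illustrative sketch:

```python
import numpy as np

m, n, p = 3, 4, 2
rng = np.random.default_rng(0)
A, B = rng.standard_normal((m, n)), rng.standard_normal((n, p))

# Entrywise definition: C_ij = sum_k A_ik B_kj
C = np.zeros((m, p))
for i in range(m):
    for j in range(p):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

np.testing.assert_allclose(C, A @ B)   # matches the built-in product
```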

2.1. Vector-Vector Products

Given two vectors x , y R n x , y R n x,y inR^(n)x, y \in \mathbb{R}^{n}, the quantity x T y x T y x^(T)yx^{T} y, sometimes called the inner product or dot product of the vectors, is a real number given by
x T y R = [ x 1 x 2 x n ] [ y 1 y 2 y n ] = i = 1 n x i y i . x T y R = x 1      x 2           x n y 1 y 2 y n = i = 1 n x i y i . x^(T)y inR=[[x_(1),x_(2),cdots,x_(n)]][[y_(1)],[y_(2)],[vdots],[y_(n)]]=sum_(i=1)^(n)x_(i)y_(i).x^{T} y \in \mathbb{R}=\left[\begin{array}{llll} x_{1} & x_{2} & \cdots & x_{n} \end{array}\right]\left[\begin{array}{c} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{array}\right]=\sum_{i=1}^{n} x_{i} y_{i} .
Observe that inner products are really just a special case of matrix multiplication. Note that it is always the case that x^{T} y=y^{T} x.
Given vectors x R m , y R n x R m , y R n x inR^(m),y inR^(n)x \in \mathbb{R}^{m}, y \in \mathbb{R}^{n} (not necessarily of the same size), x y T R m × n x y T R m × n xy^(T)inR^(m xx n)x y^{T} \in \mathbb{R}^{m \times n} is called the outer product of the vectors. It is a matrix whose entries are given by ( x y T ) i j = x i y j x y T i j = x i y j (xy^(T))_(ij)=x_(i)y_(j)\left(x y^{T}\right)_{i j}=x_{i} y_{j}, i.e.,
x y T R m × n = [ x 1 x 2 x m ] [ y 1 y 2 y n ] = [ x 1 y 1 x 1 y 2 x 1 y n x 2 y 1 x 2 y 2 x 2 y n x m y 1 x m y 2 x m y n ] . x y T R m × n = x 1 x 2 x m y 1      y 2           y n = x 1 y 1 x 1 y 2 x 1 y n x 2 y 1 x 2 y 2 x 2 y n x m y 1 x m y 2 x m y n . xy^(T)inR^(m xx n)=[[x_(1)],[x_(2)],[vdots],[x_(m)]][[y_(1),y_(2),cdots,y_(n)]]=[[x_(1)y_(1),x_(1)y_(2),cdots,x_(1)y_(n)],[x_(2)y_(1),x_(2)y_(2),cdots,x_(2)y_(n)],[vdots,vdots,ddots,vdots],[x_(m)y_(1),x_(m)y_(2),cdots,x_(m)y_(n)]].x y^{T} \in \mathbb{R}^{m \times n}=\left[\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{m} \end{array}\right]\left[\begin{array}{llll} y_{1} & y_{2} & \cdots & y_{n} \end{array}\right]=\left[\begin{array}{cccc} x_{1} y_{1} & x_{1} y_{2} & \cdots & x_{1} y_{n} \\ x_{2} y_{1} & x_{2} y_{2} & \cdots & x_{2} y_{n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m} y_{1} & x_{m} y_{2} & \cdots & x_{m} y_{n} \end{array}\right].
As an example of how the outer product can be useful, let 1 R n 1 R n 1inR^(n)\mathbf{1} \in \mathbb{R}^{n} denote an n n nn-dimensional vector whose entries are all equal to 1 1 11. Furthermore, consider the matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} whose columns are all equal to some vector x R m x R m x inR^(m)x \in \mathbb{R}^{m}. Using outer products, we can represent A A AA compactly as,
A = [ | | | x x x | | | ] = [ x 1 x 1 x 1 x 2 x 2 x 2 x m x m x m ] = [ x 1 x 2 x m ] [ 1 1 1 ] = x 1 T . A = | | | x x x | | | = x 1 x 1 x 1 x 2 x 2 x 2 x m x m x m = x 1 x 2 x m 1      1           1 = x 1 T . A=[[|,|,,|],[x,x,cdots,x],[|,|,,|]]=[[x_(1),x_(1),cdots,x_(1)],[x_(2),x_(2),cdots,x_(2)],[vdots,vdots,ddots,vdots],[x_(m),x_(m),cdots,x_(m)]]=[[x_(1)],[x_(2)],[vdots],[x_(m)]][[1,1,cdots,1]]=x1^(T).A=\left[\begin{array}{cccc} | & | & & | \\ x & x & \cdots & x \\ | & | & & | \end{array}\right]=\left[\begin{array}{cccc} x_{1} & x_{1} & \cdots & x_{1} \\ x_{2} & x_{2} & \cdots & x_{2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m} & x_{m} & \cdots & x_{m} \end{array}\right]=\left[\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{m} \end{array}\right]\left[\begin{array}{llll} 1 & 1 & \cdots & 1 \end{array}\right]=x \mathbf{1}^{T}.
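A short NumPy sketch of the inner product, the outer product, and the x\mathbf{1}^{T} construction above (again, our own illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y              # x^T y = 1*4 + 2*5 + 3*6 = 32.0
outer = np.outer(x, y)     # x y^T, a 3x3 matrix with entries x_i * y_j

ones = np.ones(4)
A = np.outer(x, ones)      # x 1^T: every column of A equals x
print(inner, outer.shape, A)
```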

2.2. Matrix-Vector Products

Given a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} and a vector x R n x R n x inR^(n)x \in \mathbb{R}^{n}, their product is a vector y = A x R m y = A x R m y=Ax inR^(m)y=A x \in \mathbb{R}^{m}. There are a couple ways of looking at matrix-vector multiplication, and we will look at each of them in turn.
If we write A A AA by rows, then we can express A x A x AxA x as,
y = A x = [ a 1 T a 2 T a m T ] x = [ a 1 T x a 2 T x a m T x ] . y = A x = a 1 T a 2 T a m T x = a 1 T x a 2 T x a m T x . y=Ax=[[-,a_(1)^(T),-],[-,a_(2)^(T),-],[,vdots,],[-,a_(m)^(T),-]]x=[[a_(1)^(T)x],[a_(2)^(T)x],[vdots],[a_(m)^(T)x]].y=A x=\left[\begin{array}{ccc} - & a_{1}^{T} & - \\ - & a_{2}^{T} & - \\ &\vdots& \\ - & a_{m}^{T}&- \end{array}\right] x=\left[\begin{array}{c} a_{1}^{T} x \\ a_{2}^{T} x \\ \vdots \\ a_{m}^{T} x \end{array}\right] .
In other words, the i i iith entry of y y yy is equal to the inner product of the i i iith row of A A AA and x x xx, y i = a i T x y i = a i T x y_(i)=a_(i)^(T)xy_{i}=a_{i}^{T} x.
Alternatively, let's write A A AA in column form. In this case we see that,
(1) y = A x = [ | | | a 1 a 2 a n | | | ] [ x 1 x 2 x n ] = [ a 1 ] x 1 + [ a 2 ] x 2 + + [ a n ] x n . (1) y = A x = | | | a 1 a 2 a n | | | x 1 x 2 x n = a 1 x 1 + a 2 x 2 + + a n x n . {:(1)y=Ax=[[|,|,,|],[a^(1),a^(2),cdots,a^(n)],[|,|,,|]][[x_(1)],[x_(2)],[vdots],[x_(n)]]=[a^(1)]x_(1)+[a^(2)]x_(2)+dots+[a^(n)]x_(n).:}\begin{equation} y=A x=\left[\begin{array}{cccc} | & | & & | \\ a^{1} & a^{2} & \cdots & a^{n} \\ | & | & & | \end{array}\right]\left[\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array}\right]=\left[\begin{array}{l} a^{1} \end{array}\right] x_{1}+\left[a^{2}\right] x_{2}+\ldots+\left[a^{n}\right] x_{n}. \end{equation}
In other words, y is a linear combination of the columns of A A AA, where the coefficients of the linear combination are given by the entries of x x xx.
So far we have been multiplying on the right by a column vector, but it is also possible to multiply on the left by a row vector. This is written y^{T}=x^{T} A for A \in \mathbb{R}^{m \times n}, x \in \mathbb{R}^{m}, and y \in \mathbb{R}^{n}. As before, we can express y^{T} in two obvious ways, depending on whether we express A in terms of its columns or its rows. In the first case we express A in terms of its columns, which gives
y T = x T A = x T [ | | | a 1 a 2 a n | | | ] = [ x T a 1 x T a 2 x T a n ] y T = x T A = x T | | | a 1 a 2 a n | | | = x T a 1      x T a 2           x T a n y^(T)=x^(T)A=x^(T)[[|,|,,|],[a^(1),a^(2),cdots,a^(n)],[|,|,,|]]=[[x^(T)a^(1),x^(T)a^(2),cdots,x^(T)a^(n)]]y^{T}=x^{T} A=x^{T}\left[\begin{array}{cccc} | & | & & | \\ a^{1} & a^{2} & \cdots & a^{n} \\ | & | & & | \end{array}\right]=\left[\begin{array}{llll} x^{T} a^{1} & x^{T} a^{2} & \cdots & x^{T} a^{n} \end{array}\right]
which demonstrates that the i i iith entry of y T y T y^(T)y^{T} is equal to the inner product of x x xx and the i i iith column of A A AA.
Finally, expressing A A AA in terms of rows we get the final representation of the vector-matrix product,
\begin{align*} y^{T} &=x^{T} A \\ &=\left[\begin{array}{llll} x_{1} & x_{2} & \cdots & x_{m} \end{array}\right]\left[\begin{array}{ccc} - & a_{1}^{T} & - \\ - & a_{2}^{T} & - \\ &\vdots & \\ -&a_{m}^{T} & - \end{array}\right] \\ &=x_{1}\left[\begin{array}{lll} - & a_{1}^{T} & - \end{array}\right]+x_{2}\left[\begin{array}{lll} - & a_{2}^{T} & - \end{array}\right]+\ldots+x_{m}\left[\begin{array}{lll} - & a_{m}^{T} & - \end{array}\right] \end{align*}
so we see that y T y T y^(T)y^{T} is a linear combination of the rows of A A AA, where the coefficients for the linear combination are given by the entries of x x xx.
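Both viewpoints of Ax (inner products with the rows, and a linear combination of the columns), as well as the analogous viewpoints for x^{T}A, are easy to check numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # A in R^{3x4}
x = rng.standard_normal(4)

y = A @ x
# Row view: y_i = a_i^T x
y_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])
# Column view: y = sum_j x_j * a^j (linear combination of the columns)
y_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))
np.testing.assert_allclose(y, y_rows)
np.testing.assert_allclose(y, y_cols)

# Left-multiplication: w^T A is a linear combination of the rows of A.
w = rng.standard_normal(3)
np.testing.assert_allclose(w @ A, sum(w[i] * A[i, :] for i in range(3)))
```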

2.3. Matrix-Matrix Products

Armed with this knowledge, we can now look at four different (but, of course, equivalent) ways of viewing the matrix-matrix multiplication C = A B C = A B C=ABC=A B as defined at the beginning of this section.
First, we can view matrix-matrix multiplication as a set of vector-vector products. The most obvious viewpoint, which follows immediately from the definition, is that the ( i , j ) ( i , j ) (i,j)(i, j)th entry of C C CC is equal to the inner product of the i i iith row of A A AA and the j j jjth column of B B BB. Symbolically, this looks like the following,
C = A B = [ a 1 T a 2 T a m T ] [ | | | b 1 b 2 b p | | | ] = [ a 1 T b 1 a 1 T b 2 a 1 T b p a 2 T b 1 a 2 T b 2 a 2 T b p a m T b 1 a m T b 2 a m T b p ] . C = A B = a 1 T a 2 T a m T | | | b 1 b 2 b p | | | = a 1 T b 1 a 1 T b 2 a 1 T b p a 2 T b 1 a 2 T b 2 a 2 T b p a m T b 1 a m T b 2 a m T b p . C=AB=[[-,a_(1)^(T),-],[-,a_(2)^(T),-],[,vdots,],[-,a_(m)^(T),-]][[|,|,,|],[b^(1),b^(2),cdots,b^(p)],[|,|,,|]]=[[a_(1)^(T)b^(1),a_(1)^(T)b^(2),cdots,a_(1)^(T)b^(p)],[a_(2)^(T)b^(1),a_(2)^(T)b^(2),cdots,a_(2)^(T)b^(p)],[vdots,vdots,ddots,vdots],[a_(m)^(T)b^(1),a_(m)^(T)b^(2),cdots,a_(m)^(T)b^(p)]].C=AB=\left[\begin{array}{ccc} - &a_{1}^{T}&- \\ -&a_{2}^{T}&- \\ &\vdots& \\ -&a_{m}^{T}&- \end{array} \right] \left[ \begin{array}{cccc} |&|&&| \\ b^{1}&b^{2}&\cdots &b^{p}\\ |&|&&| \end{array} \right]=\left[\begin{array}{cccc}a_{1}^{T}b^{1}&a_{1}^{T}b^{2}&\cdots&a_{1}^{T}b^{p}\\ a_{2}^{T}b^{1}&a_{2}^{T}b^{2}&\cdots&a_{2}^{T}b^{p} \\ \vdots&\vdots&\ddots&\vdots \\ a_{m}^{T}b^{1}&a_{m}^{T}b^{2}&\cdots&a_{m}^{T}b^{p} \end{array} \right].
Remember that since A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} and B R n × p , a i R n B R n × p , a i R n B inR^(n xx p),a_(i)inR^(n)B \in \mathbb{R}^{n \times p}, a_{i} \in \mathbb{R}^{n} and b j R n b j R n b^(j)inR^(n)b^{j} \in \mathbb{R}^{n}, so these inner products all make sense. This is the most "natural" representation when we represent A A AA by rows and B B BB by columns. Alternatively, we can represent A A AA by columns, and B B BB by rows. This representation leads to a much trickier interpretation of A B A B ABA B as a sum of outer products. Symbolically,
C = A B = [ | | | a 1 a 2 a n | | | ] [ b 1 T b 2 T b n T ] = i = 1 n a i b i T . C = A B = | | | a 1 a 2 a n | | | b 1 T b 2 T b n T = i = 1 n a i b i T . C=AB=[[|,|,,|],[a^(1),a^(2),cdots,a^(n)],[|,|,,|]][[-,b_(1)^(T),-],[-,b_(2)^(T),-],[,vdots,],[-,b_(n)^(T),-]]=sum_(i=1)^(n)a^(i)b_(i)^(T).C=A B=\left[\begin{array}{cccc} | & | & & | \\ a^{1} & a^{2} & \cdots & a^{n} \\ | & | & & | \end{array}\right]\left[\begin{array}{ccc} - & b_{1}^{T} & - \\ - & b_{2}^{T} & - \\ &\vdots& \\ - & b_{n}^{T} & - \end{array}\right]=\sum_{i=1}^{n} a^{i} b_{i}^{T}.
Put another way, A B A B ABA B is equal to the sum, over all i i ii, of the outer product of the i i iith column of A A AA and the i i iith row of B B BB. Since, in this case, a i R m a i R m a^(i)inR^(m)a^{i} \in \mathbb{R}^{m} and b i R p b i R p b_(i)inR^(p)b_{i} \in \mathbb{R}^{p}, the dimension of the outer product a i b i T a i b i T a^(i)b_(i)^(T)a^{i} b_{i}^{T} is m × p m × p m xx pm \times p, which coincides with the dimension of C C CC. Chances are, the last equality above may appear confusing to you. If so, take the time to check it for yourself!
Second, we can also view matrix-matrix multiplication as a set of matrix-vector products. Specifically, if we represent B B BB by columns, we can view the columns of C C CC as matrix-vector products between A A AA and the columns of B B BB. Symbolically,
(2) C = A B = A [ | | | b 1 b 2 b p | | | ] = [ | | | A b 1 A b 2 A b p | | | ] . (2) C = A B = A | | | b 1 b 2 b p | | | = | | | A b 1 A b 2 A b p | | | . {:(2)C=AB=A[[|,|,,|],[b^(1),b^(2),cdots,b^(p)],[|,|,,|]]=[[|,|,,|],[Ab^(1),Ab^(2),cdots,Ab^(p)],[|,|,,|]].:}\begin{equation} C=A B=A\left[\begin{array}{cccc} |& | & & |\\ b^{1} & b^{2} & \cdots & b^{p} \\ | & | & & | \end{array}\right]=\left[\begin{array}{cccc} | & | & & | \\ A b^{1} & A b^{2} & \cdots & A b^{p} \\ | & | & & | \end{array}\right]. \end{equation}
Here the ith column of C is given by the matrix-vector product with the vector on the right, c^{i}=A b^{i}. These matrix-vector products can in turn be interpreted using both viewpoints given in the previous subsection. Finally, we have the analogous viewpoint, where we represent A by rows, and view the rows of C as the vector-matrix products between the rows of A and B. Symbolically,
C = A B = [ a 1 T a 2 T a m T ] B = [ a 1 T B a 2 T B a m T B ] . C = A B = a 1 T a 2 T a m T B = a 1 T B a 2 T B a m T B . C=AB=[[-,a_(1)^(T),-],[-,a_(2)^(T),-],[,vdots,],[-,a_(m)^(T),-]]B=[[-,a_(1)^(T)B,-],[-,a_(2)^(T)B,-],[,vdots,],[-,a_(m)^(T)B,-]].C=A B=\left[\begin{array}{ccc} - & a_{1}^{T} & - \\ - & a_{2}^{T}&- \\ &\vdots& \\ - & a_{m}^{T} & - \end{array}\right] B=\left[\begin{array}{ccc} - & a_{1}^{T} B & - \\ - & a_{2}^{T} B & - \\ &\vdots& \\ - & a_{m}^{T} B & - \end{array}\right].
Here the i i iith row of C C CC is given by the matrix-vector product with the vector on the left, c i T = a i T B c i T = a i T B c_(i)^(T)=a_(i)^(T)Bc_{i}^{T}=a_{i}^{T} B.
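All four viewpoints produce the same product, which can be verified numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 3, 4, 2
A, B = rng.standard_normal((m, n)), rng.standard_normal((n, p))
C = A @ B

# (i, j) entries as inner products of rows of A with columns of B
C1 = np.array([[A[i, :] @ B[:, j] for j in range(p)] for i in range(m)])
# Sum of outer products of columns of A with rows of B
C2 = sum(np.outer(A[:, k], B[k, :]) for k in range(n))
# Columns of C as matrix-vector products A b^j
C3 = np.column_stack([A @ B[:, j] for j in range(p)])
# Rows of C as vector-matrix products a_i^T B
C4 = np.vstack([A[i, :] @ B for i in range(m)])

for Ck in (C1, C2, C3, C4):
    np.testing.assert_allclose(C, Ck)
```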
It may seem like overkill to dissect matrix multiplication to such a large degree, especially when all these viewpoints follow immediately from the initial definition we gave (in about a line of math) at the beginning of this section. The direct advantage of these various viewpoints is that they allow you to operate at the level of vectors instead of scalars. To fully understand linear algebra without getting lost in complicated index manipulation, the key is to operate with blocks (vectors and matrices) that are as large as possible.[1]
Virtually all of linear algebra deals with matrix multiplications of some kind, and it is worthwhile to spend some time trying to develop an intuitive understanding of the viewpoints presented here.
In addition to this, it is useful to know a few basic properties of matrix multiplication at a higher level:
  • Matrix multiplication is associative: ( A B ) C = A ( B C ) ( A B ) C = A ( B C ) (AB)C=A(BC)(A B) C=A(B C).
  • Matrix multiplication is distributive: A ( B + C ) = A B + A C A ( B + C ) = A B + A C A(B+C)=AB+ACA(B+C)=A B+A C.
  • Matrix multiplication is, in general, not commutative; that is, it can be the case that A B B A A B B A AB!=BAA B \neq B A. (For example, if A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} and B R n × q B R n × q B inR^(n xx q)B \in \mathbb{R}^{n \times q}, the matrix product B A B A BAB A does not even exist if m m mm and q q qq are not equal!)
If you are not familiar with these properties, take the time to verify them for yourself. For example, to check the associativity of matrix multiplication, suppose that A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n}, B R n × p B R n × p B inR^(n xx p)B \in \mathbb{R}^{n \times p}, and C R p × q C R p × q C inR^(p xx q)C \in \mathbb{R}^{p \times q}. Note that A B R m × p A B R m × p AB inR^(m xx p)A B \in \mathbb{R}^{m \times p}, so ( A B ) C R m × q ( A B ) C R m × q (AB)C inR^(m xx q)(A B) C \in \mathbb{R}^{m \times q}. Similarly, B C R n × q B C R n × q BC inR^(n xx q)B C \in \mathbb{R}^{n \times q}, so A ( B C ) R m × q A ( B C ) R m × q A(BC)inR^(m xx q)A(B C) \in \mathbb{R}^{m \times q}. Thus, the dimensions of the resulting matrices agree. To show that matrix multiplication is associative, it suffices to check that the ( i , j ) ( i , j ) (i,j)(i, j)th entry of ( A B ) C ( A B ) C (AB)C(A B) C is equal to the ( i , j ) ( i , j ) (i,j)(i, j)th entry of A ( B C ) A ( B C ) A(BC)A(B C). We can verify this directly using the definition of matrix multiplication:
( ( A B ) C ) i j = k = 1 p ( A B ) i k C k j = k = 1 p ( l = 1 n A i l B l k ) C k j = k = 1 p ( l = 1 n A i l B l k C k j ) = l = 1 n ( k = 1 p A i l B l k C k j ) = l = 1 n A i l ( k = 1 p B l k C k j ) = l = 1 n A i l ( B C ) l j = ( A ( B C ) ) i j . ( ( A B ) C ) i j = k = 1 p ( A B ) i k C k j = k = 1 p l = 1 n A i l B l k C k j = k = 1 p l = 1 n A i l B l k C k j = l = 1 n k = 1 p A i l B l k C k j = l = 1 n A i l k = 1 p B l k C k j = l = 1 n A i l ( B C ) l j = ( A ( B C ) ) i j . {:[((AB)C)_(ij)=sum_(k=1)^(p)(AB)_(ik)C_(kj)=sum_(k=1)^(p)(sum_(l=1)^(n)A_(il)B_(lk))C_(kj)],[=sum_(k=1)^(p)(sum_(l=1)^(n)A_(il)B_(lk)C_(kj))=sum_(l=1)^(n)(sum_(k=1)^(p)A_(il)B_(lk)C_(kj))],[=sum_(l=1)^(n)A_(il)(sum_(k=1)^(p)B_(lk)C_(kj))=sum_(l=1)^(n)A_(il)(BC)_(lj)=(A(BC))_(ij).]:}\begin{align*} ((A B) C)_{i j} &=\sum_{k=1}^{p}(A B)_{i k} C_{k j}=\sum_{k=1}^{p}\left(\sum_{l=1}^{n} A_{i l} B_{l k}\right) C_{k j} \\ &=\sum_{k=1}^{p}\left(\sum_{l=1}^{n} A_{i l} B_{l k} C_{k j}\right)=\sum_{l=1}^{n}\left(\sum_{k=1}^{p} A_{i l} B_{l k} C_{k j}\right) \\ &=\sum_{l=1}^{n} A_{i l}\left(\sum_{k=1}^{p} B_{l k} C_{k j}\right)=\sum_{l=1}^{n} A_{i l}(B C)_{l j}=(A(B C))_{i j} . \end{align*}
Here, the first and last two equalities simply use the definition of matrix multiplication, the third and fifth equalities use the distributive property for scalar multiplication over addition, and the fourth equality uses the commutativity and associativity of scalar addition. This technique for proving matrix properties by reduction to simple scalar properties will come up often, so make sure you're familiar with it.

3. Operations and Properties

In this section we present several operations and properties of matrices and vectors. Hopefully a great deal of this will be review for you, so the notes can just serve as a reference for these topics.

3.1. The Identity Matrix and Diagonal Matrices

The identity matrix, denoted I R n × n I R n × n I inR^(n xx n)I \in \mathbb{R}^{n \times n}, is a square matrix with ones on the diagonal and zeros everywhere else. That is,
I i j = { 1 i = j 0 i j I i j = 1 i = j 0 i j I_(ij)={[1,i=j],[0,i!=j]:}I_{i j}=\left\{\begin{array}{cc} 1 & i=j \\ 0 & i \neq j \end{array}\right.
It has the property that for all A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n},
A I = A = I A . A I = A = I A . AI=A=IA.A I=A=I A.
Note that in some sense, the notation for the identity matrix is ambiguous, since it does not specify the dimension of I I II. Generally, the dimensions of I I II are inferred from context so as to make matrix multiplication possible. For example, in the equation above, the I I II in A I = A A I = A AI=AA I=A is an n × n n × n n xx nn \times n matrix, whereas the I I II in A = I A A = I A A=IAA=I A is an m × m m × m m xx mm \times m matrix.
A diagonal matrix is a matrix where all non-diagonal elements are 0 0 00. This is typically denoted D = diag ( d 1 , d 2 , , d n ) D = diag d 1 , d 2 , , d n D=diag(d_(1),d_(2),dots,d_(n))D=\operatorname{diag}\left(d_{1}, d_{2}, \ldots, d_{n}\right), with
D i j = { d i i = j 0 i j D i j = d i i = j 0 i j D_(ij)={[d_(i),i=j],[0,i!=j]:}D_{i j}=\left\{\begin{array}{cc} d_{i} & i=j \\ 0 & i \neq j \end{array}\right.
Clearly, I = diag ( 1 , 1 , , 1 ) I = diag ( 1 , 1 , , 1 ) I=diag(1,1,dots,1)I=\operatorname{diag}(1,1, \ldots, 1).

3.2. The Transpose

The transpose of a matrix results from "flipping" the rows and columns. Given a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n}, its transpose, written A T R n × m A T R n × m A^(T)inR^(n xx m)A^{T} \in \mathbb{R}^{n \times m}, is the n × m n × m n xx mn \times m matrix whose entries are given by
( A T ) i j = A j i . A T i j = A j i . (A^(T))_(ij)=A_(ji).\left(A^{T}\right)_{i j}=A_{j i} .
We have in fact already been using the transpose when describing row vectors, since the transpose of a column vector is naturally a row vector.
The following properties of transposes are easily verified:
  • ( A T ) T = A A T T = A (A^(T))^(T)=A\left(A^{T}\right)^{T}=A
  • ( A B ) T = B T A T ( A B ) T = B T A T (AB)^(T)=B^(T)A^(T)(A B)^{T}=B^{T} A^{T}
  • ( A + B ) T = A T + B T ( A + B ) T = A T + B T (A+B)^(T)=A^(T)+B^(T)(A+B)^{T}=A^{T}+B^{T}

3.3. Symmetric Matrices

A square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} is symmetric if A = A T A = A T A=A^(T)A=A^{T}. It is anti-symmetric if A = A T A = A T A=-A^(T)A=-A^{T}. It is easy to show that for any matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, the matrix A + A T A + A T A+A^(T)A+A^{T} is symmetric and the matrix A A T A A T A-A^(T)A-A^{T} is anti-symmetric. From this it follows that any square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} can be represented as a sum of a symmetric matrix and an anti-symmetric matrix, since
A = 1 2 ( A + A T ) + 1 2 ( A A T ) A = 1 2 A + A T + 1 2 A A T A=(1)/(2)(A+A^(T))+(1)/(2)(A-A^(T))A=\frac{1}{2}\left(A+A^{T}\right)+\frac{1}{2}\left(A-A^{T}\right)
and the first matrix on the right is symmetric, while the second is anti-symmetric. It turns out that symmetric matrices occur a great deal in practice, and they have many nice properties which we will look at shortly. It is common to denote the set of all symmetric matrices of size n as \mathbb{S}^{n}, so that A \in \mathbb{S}^{n} means that A is a symmetric n \times n matrix.
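This decomposition is easy to verify numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

sym  = 0.5 * (A + A.T)          # symmetric part
skew = 0.5 * (A - A.T)          # anti-symmetric part
np.testing.assert_allclose(sym, sym.T)
np.testing.assert_allclose(skew, -skew.T)
np.testing.assert_allclose(A, sym + skew)
```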

3.4. The Trace

The trace of a square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, denoted tr ( A ) tr ( A ) tr(A)\operatorname{tr}(A) (or just tr A tr A tr A\operatorname{tr} A if the parentheses are obviously implied), is the sum of diagonal elements in the matrix:
tr A = i = 1 n A i i . tr A = i = 1 n A i i . tr A=sum_(i=1)^(n)A_(ii).\operatorname{tr} A=\sum_{i=1}^{n} A_{i i} .
As described in the CS229 lecture notes, the trace has the following properties (included here for the sake of completeness):
  • For A R n × n , tr A = tr A T A R n × n , tr A = tr A T A inR^(n xx n),tr A=tr A^(T)A \in \mathbb{R}^{n \times n}, \operatorname{tr} A=\operatorname{tr} A^{T}.
  • For A , B R n × n , tr ( A + B ) = tr A + tr B A , B R n × n , tr ( A + B ) = tr A + tr B A,B inR^(n xx n),tr(A+B)=tr A+tr BA, B \in \mathbb{R}^{n \times n}, \operatorname{tr}(A+B)=\operatorname{tr} A+\operatorname{tr} B.
  • For A R n × n , t R , tr ( t A ) = t tr A A R n × n , t R , tr ( t A ) = t tr A A inR^(n xx n),t inR,tr(tA)=t tr AA \in \mathbb{R}^{n \times n}, t \in \mathbb{R}, \operatorname{tr}(t A)=t \operatorname{tr} A.
  • For A , B A , B A,BA, B such that A B A B ABA B is square, tr A B = tr B A tr A B = tr B A tr AB=tr BA\operatorname{tr} A B=\operatorname{tr} B A.
  • For A , B , C A , B , C A,B,CA, B, C such that A B C A B C ABCA B C is square, tr A B C = tr B C A = tr C A B tr A B C = tr B C A = tr C A B tr ABC=tr BCA=tr CAB\operatorname{tr} A B C=\operatorname{tr} B C A=\operatorname{tr} C A B, and so on for the product of more matrices.
As an example of how these properties can be proven, we'll consider the fourth property given above. Suppose that A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} and B R n × m B R n × m B inR^(n xx m)B \in \mathbb{R}^{n \times m} (so that A B R m × m A B R m × m AB inR^(m xx m)A B \in \mathbb{R}^{m \times m} is a square matrix). Observe that B A R n × n B A R n × n BA inR^(n xx n)B A \in \mathbb{R}^{n \times n} is also a square matrix, so it makes sense to apply the trace operator to it. To verify that tr A B = tr B A tr A B = tr B A tr AB=tr BA\operatorname{tr} A B=\operatorname{tr} B A, note that
tr A B = i = 1 m ( A B ) i i = i = 1 m ( j = 1 n A i j B j i ) = i = 1 m j = 1 n A i j B j i = j = 1 n i = 1 m B j i A i j = j = 1 n ( i = 1 m B j i A i j ) = j = 1 n ( B A ) j j = tr B A . tr A B = i = 1 m ( A B ) i i = i = 1 m j = 1 n A i j B j i = i = 1 m j = 1 n A i j B j i = j = 1 n i = 1 m B j i A i j = j = 1 n i = 1 m B j i A i j = j = 1 n ( B A ) j j = tr B A . {:[tr AB=sum_(i=1)^(m)(AB)_(ii)=sum_(i=1)^(m)(sum_(j=1)^(n)A_(ij)B_(ji))],[=sum_(i=1)^(m)sum_(j=1)^(n)A_(ij)B_(ji)=sum_(j=1)^(n)sum_(i=1)^(m)B_(ji)A_(ij)],[=sum_(j=1)^(n)(sum_(i=1)^(m)B_(ji)A_(ij))=sum_(j=1)^(n)(BA)_(jj)=tr BA.]:}\begin{aligned} \operatorname{tr} A B &=\sum_{i=1}^{m}(A B)_{i i}=\sum_{i=1}^{m}\left(\sum_{j=1}^{n} A_{i j} B_{j i}\right) \\ &=\sum_{i=1}^{m} \sum_{j=1}^{n} A_{i j} B_{j i}=\sum_{j=1}^{n} \sum_{i=1}^{m} B_{j i} A_{i j} \\ &=\sum_{j=1}^{n}\left(\sum_{i=1}^{m} B_{j i} A_{i j}\right)=\sum_{j=1}^{n}(B A)_{j j}=\operatorname{tr} B A . \end{aligned}
Here, the first and last two equalities use the definition of the trace operator and matrix multiplication. The fourth equality, where the main work occurs, uses the commutativity of scalar multiplication in order to reverse the order of the terms in each product, and the commutativity and associativity of scalar addition in order to rearrange the order of the summation.
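A quick numerical check of tr AB = tr BA, which holds even when A and B are not square (as long as both products are defined); a sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))   # A in R^{3x5}
B = rng.standard_normal((5, 3))   # B in R^{5x3}

# AB is 3x3 and BA is 5x5, yet their traces coincide.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```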

3.5. Norms

A norm of a vector x x ||x||\|x\| is informally a measure of the "length" of the vector. For example, we have the commonly-used Euclidean or 2 2 ℓ_(2)\ell_{2} norm,
x 2 = i = 1 n x i 2 . x 2 = i = 1 n x i 2 . ||x||_(2)=sqrt(sum_(i=1)^(n)x_(i)^(2)).\|x\|_{2}=\sqrt{\sum_{i=1}^{n} x_{i}^{2}}.
Note that x 2 2 = x T x x 2 2 = x T x ||x||_(2)^(2)=x^(T)x\|x\|_{2}^{2}=x^{T} x.
More formally, a norm is any function f: \mathbb{R}^{n} \rightarrow \mathbb{R} that satisfies four properties:
  1. For all x R n , f ( x ) 0 x R n , f ( x ) 0 x inR^(n),f(x) >= 0x \in \mathbb{R}^{n}, f(x) \geq 0 (non-negativity).
  2. f ( x ) = 0 f ( x ) = 0 f(x)=0f(x)=0 if and only if x = 0 x = 0 x=0x=0 (definiteness).
  3. For all x R n , t R , f ( t x ) = | t | f ( x ) x R n , t R , f ( t x ) = | t | f ( x ) x inR^(n),t inR,f(tx)=|t|f(x)x \in \mathbb{R}^{n}, t \in \mathbb{R}, f(t x)=|t| f(x) (homogeneity).
  4. For all x , y R n , f ( x + y ) f ( x ) + f ( y ) x , y R n , f ( x + y ) f ( x ) + f ( y ) x,y inR^(n),f(x+y) <= f(x)+f(y)x, y \in \mathbb{R}^{n}, f(x+y) \leq f(x)+f(y) (triangle inequality).
Other examples of norms are the 1 1 ℓ_(1)\ell_{1} norm,
x 1 = i = 1 n | x i | x 1 = i = 1 n x i ||x||_(1)=sum_(i=1)^(n)|x_(i)|\|x\|_{1}=\sum_{i=1}^{n}\left|x_{i}\right|
and the ℓ_(oo)\ell_{\infty} norm,
x = max i | x i | . x = max i x i . ||x||_(oo)=max_(i)|x_(i)|.\|x\|_{\infty}=\max _{i}\left|x_{i}\right| .
In fact, all three norms presented so far are examples of the family of p p ℓ_(p)\ell_{p} norms, which are parameterized by a real number p 1 p 1 p >= 1p \geq 1, and defined as
x p = ( i = 1 n | x i | p ) 1 / p . x p = i = 1 n x i p 1 / p . ||x||_(p)=(sum_(i=1)^(n)|x_(i)|^(p))^(1//p).\|x\|_{p}=\left(\sum_{i=1}^{n}\left|x_{i}\right|^{p}\right)^{1 / p}.
Norms can also be defined for matrices, such as the Frobenius norm,
A F = i = 1 m j = 1 n A i j 2 = tr ( A T A ) . A F = i = 1 m j = 1 n A i j 2 = tr A T A . ||A||_(F)=sqrt(sum_(i=1)^(m)sum_(j=1)^(n)A_(ij)^(2))=sqrt(tr(A^(T)A)).\|A\|_{F}=\sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} A_{i j}^{2}}=\sqrt{\operatorname{tr}\left(A^{T} A\right)} .
Many other norms exist, but they are beyond the scope of this review.
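The vector norms above and the Frobenius norm are all available through numpy.linalg.norm; a sketch:

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))        # l1 norm: |3| + |-4| = 7
print(np.linalg.norm(x, 2))        # l2 norm: sqrt(9 + 16) = 5
print(np.linalg.norm(x, np.inf))   # l_inf norm: max(|3|, |-4|) = 4

A = np.array([[1.0, 2.0], [3.0, 4.0]])
fro = np.linalg.norm(A, 'fro')                   # Frobenius norm
assert np.isclose(fro, np.sqrt(np.trace(A.T @ A)))
```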

3.6. Linear Independence and Rank

A set of vectors { x 1 , x 2 , x n } R m x 1 , x 2 , x n R m {x_(1),x_(2),dotsx_(n)}subR^(m)\left\{x_{1}, x_{2}, \ldots x_{n}\right\} \subset \mathbb{R}^{m} is said to be (linearly) independent if no vector can be represented as a linear combination of the remaining vectors. Conversely, if one vector belonging to the set can be represented as a linear combination of the remaining vectors, then the vectors are said to be (linearly) dependent. That is, if
x n = i = 1 n 1 α i x i x n = i = 1 n 1 α i x i x_(n)=sum_(i=1)^(n-1)alpha_(i)x_(i)x_{n}=\sum_{i=1}^{n-1} \alpha_{i} x_{i}
for some scalar values α 1 , , α n 1 R α 1 , , α n 1 R alpha_(1),dots,alpha_(n-1)inR\alpha_{1}, \ldots, \alpha_{n-1} \in \mathbb{R}, then we say that the vectors x 1 , , x n x 1 , , x n x_(1),dots,x_(n)x_{1}, \ldots, x_{n} are linearly dependent; otherwise, the vectors are linearly independent. For example, the vectors
x 1 = [ 1 2 3 ] x 2 = [ 4 1 5 ] x 3 = [ 2 3 1 ] x 1 = 1 2 3 x 2 = 4 1 5 x 3 = 2 3 1 x_(1)=[[1],[2],[3]]quadx_(2)=[[4],[1],[5]]quadx_(3)=[[2],[-3],[-1]]x_{1}=\left[\begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right] \quad x_{2}=\left[\begin{array}{l} 4 \\ 1 \\ 5 \end{array}\right] \quad x_{3}=\left[\begin{array}{c} 2 \\ -3 \\ -1 \end{array}\right]
are linearly dependent because x 3 = 2 x 1 + x 2 x 3 = 2 x 1 + x 2 x_(3)=-2x_(1)+x_(2)x_{3}=-2 x_{1}+x_{2}.
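We can confirm this dependence, and compute the rank of the corresponding matrix, numerically; a sketch:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 1.0, 5.0])
x3 = np.array([2.0, -3.0, -1.0])

np.testing.assert_allclose(x3, -2 * x1 + x2)   # the dependence x3 = -2 x1 + x2

X = np.column_stack([x1, x2, x3])              # the vectors as columns of a 3x3 matrix
print(np.linalg.matrix_rank(X))                # 2, not 3, since the set is dependent
```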
The column rank of a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} is the size of the largest subset of columns of A A AA that constitute a linearly independent set. With some abuse of terminology, this is often referred to simply as the number of linearly independent columns of A A AA. In the same way, the row rank is the largest number of rows of A A AA that constitute a linearly independent set.
For any matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n}, it turns out that the column rank of A A AA is equal to the row rank of A A AA (though we will not prove this), and so both quantities are referred to collectively as the rank of A A AA, denoted as rank ( A ) rank ( A ) rank(A)\operatorname{rank}(A). The following are some basic properties of the rank:
  • For A R m × n , rank ( A ) min ( m , n ) A R m × n , rank ( A ) min ( m , n ) A inR^(m xx n),rank(A) <= min(m,n)A \in \mathbb{R}^{m \times n}, \operatorname{rank}(A) \leq \min (m, n). If rank ( A ) = min ( m , n ) rank ( A ) = min ( m , n ) rank(A)=min(m,n)\operatorname{rank}(A)=\min (m, n), then A A AA is said to be full rank.
  • For A R m × n , rank ( A ) = rank ( A T ) A R m × n , rank ( A ) = rank A T A inR^(m xx n),rank(A)=rank(A^(T))A \in \mathbb{R}^{m \times n}, \operatorname{rank}(A)=\operatorname{rank}\left(A^{T}\right).
  • For A R m × n , B R n × p , rank ( A B ) min ( rank ( A ) , rank ( B ) ) A R m × n , B R n × p , rank ( A B ) min ( rank ( A ) , rank ( B ) ) A inR^(m xx n),B inR^(n xx p),rank(AB) <= min(rank(A),rank(B))A \in \mathbb{R}^{m \times n}, B \in \mathbb{R}^{n \times p}, \operatorname{rank}(A B) \leq \min (\operatorname{rank}(A), \operatorname{rank}(B)).
  • For A , B R m × n , rank ( A + B ) rank ( A ) + rank ( B ) A , B R m × n , rank ( A + B ) rank ( A ) + rank ( B ) A,B inR^(m xx n),rank(A+B) <= rank(A)+rank(B)A, B \in \mathbb{R}^{m \times n}, \operatorname{rank}(A+B) \leq \operatorname{rank}(A)+\operatorname{rank}(B).

3.7. The Inverse of a Square Matrix

The inverse of a square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} is denoted A 1 A 1 A^(-1)A^{-1}, and is the unique matrix such that
A 1 A = I = A A 1 . A 1 A = I = A A 1 . A^(-1)A=I=AA^(-1).A^{-1} A=I=A A^{-1} .
Note that not all matrices have inverses. Non-square matrices, for example, do not have inverses by definition. However, for some square matrices A, the inverse A^{-1} may still fail to exist. In particular, we say that A is invertible or non-singular if A^{-1} exists, and non-invertible or singular otherwise.[2]
In order for a square matrix A to have an inverse A^{-1}, A must be full rank. We will soon see that there are many alternative necessary and sufficient conditions for invertibility, in addition to full rank.
The following are properties of the inverse; all assume that A , B R n × n A , B R n × n A,B inR^(n xx n)A, B \in \mathbb{R}^{n \times n} are non-singular:
  • ( A 1 ) 1 = A A 1 1 = A (A^(-1))^(-1)=A\left(A^{-1}\right)^{-1}=A
  • ( A B ) 1 = B 1 A 1 ( A B ) 1 = B 1 A 1 (AB)^(-1)=B^(-1)A^(-1)(A B)^{-1}=B^{-1} A^{-1}
  • ( A 1 ) T = ( A T ) 1 A 1 T = A T 1 (A^(-1))^(T)=(A^(T))^(-1)\left(A^{-1}\right)^{T}=\left(A^{T}\right)^{-1}. For this reason this matrix is often denoted A T A T A^(-T)A^{-T}.
As an example of how the inverse is used, consider the linear system of equations, A x = b A x = b Ax=bA x=b where A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, and x , b R n x , b R n x,b inR^(n)x, b \in \mathbb{R}^{n}. If A A AA is nonsingular (i.e., invertible), then x = A 1 b x = A 1 b x=A^(-1)bx=A^{-1} b.
(What if A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} is not a square matrix? Does this work?)
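In code, the inverse is available via numpy.linalg.inv, although for solving Ax = b it is usually better (faster and more numerically stable) to call numpy.linalg.solve directly; a sketch:

```python
import numpy as np

A = np.array([[4.0, -5.0],
              [-2.0, 3.0]])
b = np.array([-13.0, 9.0])

A_inv = np.linalg.inv(A)
np.testing.assert_allclose(A_inv @ A, np.eye(2), atol=1e-12)

x_via_inverse = A_inv @ b            # x = A^{-1} b
x_via_solve   = np.linalg.solve(A, b)
np.testing.assert_allclose(x_via_inverse, x_via_solve)
```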

3.8. Orthogonal Matrices

Two vectors x , y R n x , y R n x,y inR^(n)x, y \in \mathbb{R}^{n} are orthogonal if x T y = 0 x T y = 0 x^(T)y=0x^{T} y=0. A vector x R n x R n x inR^(n)x \in \mathbb{R}^{n} is normalized if x 2 = 1 x 2 = 1 ||x||_(2)=1\|x\|_{2}=1. A square matrix U R n × n U R n × n U inR^(n xx n)U \in \mathbb{R}^{n \times n} is orthogonal (note the different meanings when talking about vectors versus matrices) if all its columns are orthogonal to each other and are normalized (the columns are then referred to as being orthonormal).
It follows immediately from the definition of orthogonality and normality that
U T U = I = U U T . U T U = I = U U T . U^(T)U=I=UU^(T).U^{T} U=I=U U^{T} .
In other words, the inverse of an orthogonal matrix is its transpose. Note that if U U UU is not square – i.e., U R m × n , n < m U R m × n , n < m U inR^(m xx n),quad n < mU \in \mathbb{R}^{m \times n}, \quad n<m – but its columns are still orthonormal, then U T U = I U T U = I U^(T)U=IU^{T} U=I, but U U T I U U T I UU^(T)!=IU U^{T} \neq I. We generally only use the term orthogonal to describe the previous case, where U U UU is square.
Another nice property of orthogonal matrices is that operating on a vector with an orthogonal matrix will not change its Euclidean norm, i.e.,
(3) U x 2 = x 2 (3) U x 2 = x 2 {:(3)||Ux||_(2)=||x||_(2):}\begin{equation} \|U x\|_{2}=\|x\|_{2} \end{equation}
for any x R n , U R n × n x R n , U R n × n x inR^(n),U inR^(n xx n)x \in \mathbb{R}^{n}, U \in \mathbb{R}^{n \times n} orthogonal.
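A convenient way to obtain an orthogonal matrix in code is the Q factor of a QR decomposition; the sketch below checks U^{T}U = UU^{T} = I and the norm-preservation property (3):

```python
import numpy as np

rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q factor of a square matrix is orthogonal

np.testing.assert_allclose(U.T @ U, np.eye(4), atol=1e-12)
np.testing.assert_allclose(U @ U.T, np.eye(4), atol=1e-12)

x = rng.standard_normal(4)
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))   # ||Ux||_2 = ||x||_2
```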

3.9. Range and Nullspace of a Matrix

The span of a set of vectors { x 1 , x 2 , x n } x 1 , x 2 , x n {x_(1),x_(2),dotsx_(n)}\left\{x_{1}, x_{2}, \ldots x_{n}\right\} is the set of all vectors that can be expressed as a linear combination of { x 1 , , x n } x 1 , , x n {x_(1),dots,x_(n)}\left\{x_{1}, \ldots, x_{n}\right\}. That is,
span ( { x 1 , x n } ) = { v : v = i = 1 n α i x i , α i R } . span x 1 , x n = v : v = i = 1 n α i x i , α i R . span({x_(1),dotsx_(n)})={v:v=sum_(i=1)^(n)alpha_(i)x_(i),quadalpha_(i)inR}.\operatorname{span}\left(\left\{x_{1}, \ldots x_{n}\right\}\right)=\left\{v: v=\sum_{i=1}^{n} \alpha_{i} x_{i}, \quad \alpha_{i} \in \mathbb{R}\right\} .
It can be shown that if { x 1 , , x n } x 1 , , x n {x_(1),dots,x_(n)}\left\{x_{1}, \ldots, x_{n}\right\} is a set of n n nn linearly independent vectors, where each x i R n x i R n x_(i)inR^(n)x_{i} \in \mathbb{R}^{n}, then span ( { x 1 , x n } ) = R n span x 1 , x n = R n span({x_(1),dotsx_(n)})=R^(n)\operatorname{span}\left(\left\{x_{1}, \ldots x_{n}\right\}\right)=\mathbb{R}^{n}. In other words, any vector v R n v R n v inR^(n)v \in \mathbb{R}^{n} can be written as a linear combination of x 1 x 1 x_(1)x_{1} through x n x n x_(n)x_{n}. The projection of a vector y R m y R m y inR^(m)y \in \mathbb{R}^{m} onto the span of { x 1 , , x n } x 1 , , x n {x_(1),dots,x_(n)}\left\{x_{1}, \ldots, x_{n}\right\} (here we assume x i R m ) x i R m {:x_(i)inR^(m))\left.x_{i} \in \mathbb{R}^{m}\right) is the vector v span ( { x 1 , x n } ) v span x 1 , x n v in span({x_(1),dotsx_(n)})v \in \operatorname{span}\left(\left\{x_{1}, \ldots x_{n}\right\}\right), such that v v vv is as close as possible to y y yy, as measured by the Euclidean norm v y 2 v y 2 ||v-y||_(2)\|v-y\|_{2}. We denote the projection as Proj ( y ; { x 1 , , x n } ) Proj y ; x 1 , , x n Proj(y;{x_(1),dots,x_(n)})\operatorname{Proj}\left(y ;\left\{x_{1}, \ldots, x_{n}\right\}\right) and can define it formally as,
Proj ( y ; { x 1 , x n } ) = argmin v span ( { x 1 , , x n } ) y v 2 . Proj y ; x 1 , x n = argmin v span x 1 , , x n y v 2 . Proj(y;{x_(1),dotsx_(n)})=argmin_(v in span({x_(1),dots,x_(n)}))||y-v||_(2).\operatorname{Proj}\left(y ;\left\{x_{1}, \ldots x_{n}\right\}\right)=\operatorname{argmin}_{v \in \operatorname{span}\left(\left\{x_{1}, \ldots, x_{n}\right\}\right)}\|y-v\|_{2} .
The range (sometimes also called the columnspace) of a matrix A \in \mathbb{R}^{m \times n}, denoted \mathcal{R}(A), is the span of the columns of A. In other words,
R ( A ) = { v R m : v = A x , x R n } . R ( A ) = v R m : v = A x , x R n . R(A)={v inR^(m):v=Ax,x inR^(n)}.\mathcal{R}(A)=\left\{v \in \mathbb{R}^{m}: v=A x, x \in \mathbb{R}^{n}\right\} .
Making a few technical assumptions (namely that A A AA is full rank and that n < m n < m n < mn<m), the projection of a vector y R m y R m y inR^(m)y \in \mathbb{R}^{m} onto the range of A A AA is given by,
Proj ( y ; A ) = argmin v R ( A ) v y 2 = A ( A T A ) 1 A T y . Proj ( y ; A ) = argmin v R ( A ) v y 2 = A A T A 1 A T y . Proj(y;A)=argmin_(v inR(A))||v-y||_(2)=A(A^(T)A)^(-1)A^(T)y.\operatorname{Proj}(y ; A)=\operatorname{argmin}_{v \in \mathcal{R}(A)}\|v-y\|_{2}=A\left(A^{T} A\right)^{-1} A^{T} y .
This last equation should look extremely familiar, since it is almost the same formula we derived in class (and which we will soon derive again) for the least squares estimation of parameters. Looking at the definition for the projection, it should not be too hard to convince yourself that this is in fact the same objective that we minimized in our least squares problem (except for a squaring of the norm, which doesn't affect the optimal point) and so these problems are naturally very connected. When A A AA contains only a single column, a R m a R m a inR^(m)a \in \mathbb{R}^{m}, this gives the special case for a projection of a vector on to a line:
Proj ( y ; a ) = a a T a T a y . Proj ( y ; a ) = a a T a T a y . Proj(y;a)=(aa^(T))/(a^(T)a)y.\operatorname{Proj}(y ; a)=\frac{a a^{T}}{a^{T} a} y .
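The projection formula can be implemented directly; the sketch below also shows the least-squares connection (in practice one would use a least-squares solver rather than forming (A^{T}A)^{-1} explicitly):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 2))    # full rank, with n < m
y = rng.standard_normal(5)

proj = A @ np.linalg.inv(A.T @ A) @ A.T @ y      # Proj(y; A)

# Same result via least squares: proj = A x*, where x* = argmin ||Ax - y||_2
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
np.testing.assert_allclose(proj, A @ x_star)

# The residual y - proj is orthogonal to the range of A.
np.testing.assert_allclose(A.T @ (y - proj), np.zeros(2), atol=1e-10)
```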
The nullspace of a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n}, denoted N ( A ) N ( A ) N(A)\mathcal{N}(A) is the set of all vectors that equal 0 0 00 when multiplied by A A AA, i.e.,
N ( A ) = { x R n : A x = 0 } . N ( A ) = x R n : A x = 0 . N(A)={x inR^(n):Ax=0}.\mathcal{N}(A)=\left\{x \in \mathbb{R}^{n}: A x=0\right\} .
Note that vectors in \mathcal{R}(A) are of size m, while vectors in \mathcal{N}(A) are of size n, so vectors in \mathcal{R}(A^{T}) and \mathcal{N}(A) are both in \mathbb{R}^{n}. In fact, we can say much more. It turns out that
{ w : w = u + v , u R ( A T ) , v N ( A ) } = R n and R ( A T ) N ( A ) = { 0 } . w : w = u + v , u R A T , v N ( A ) = R n  and  R A T N ( A ) = { 0 } . {w:w=u+v,u inR(A^(T)),v inN(A)}=R^(n)" and "R(A^(T))nnN(A)={0}.\left\{w: w=u+v, u \in \mathcal{R}\left(A^{T}\right), v \in \mathcal{N}(A)\right\}=\mathbb{R}^{n} \text { and } \mathcal{R}\left(A^{T}\right) \cap \mathcal{N}(A)=\{\mathbf{0}\} .
In other words, R ( A T ) R A T R(A^(T))\mathcal{R}\left(A^{T}\right) and N ( A ) N ( A ) N(A)\mathcal{N}(A) are disjoint subsets that together span the entire space of R n R n R^(n)\mathbb{R}^{n}. Sets of this type are called orthogonal complements, and we denote this R ( A T ) = R A T = R(A^(T))=\mathcal{R}\left(A^{T}\right)= N ( A ) N ( A ) N(A)^(_|_)\mathcal{N}(A)^{\perp}.

3.10. The Determinant

The determinant of a square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, is a function det: R n × n R R n × n R R^(n xx n)rarrR\mathbb{R}^{n \times n} \rightarrow \mathbb{R}, and is denoted | A | | A | |A||A| or det A det A det A\det A (like the trace operator, we usually omit parentheses). Algebraically, one could write down an explicit formula for the determinant of A A AA, but this unfortunately gives little intuition about its meaning. Instead, we'll start out by providing a geometric interpretation of the determinant and then visit some of its specific algebraic properties afterwards.
Given a matrix
[ a 1 T a 2 T a n T ] , a 1 T a 2 T a n T , [[-,a_(1)^(T),-],[-,a_(2)^(T),-],[,vdots,],[-,a_(n)^(T),-]],\left[\begin{array}{ccc} - & a_{1}^{T} & - \\ - & a_{2}^{T} & - \\ & \vdots & \\ - & a_{n}^{T} & - \end{array}\right],
consider the set of points S R n S R n S subR^(n)S \subset \mathbb{R}^{n} formed by taking all possible linear combinations of the row vectors a 1 , , a n R n a 1 , , a n R n a_(1),dots,a_(n)inR^(n)a_{1}, \ldots, a_{n} \in \mathbb{R}^{n} of A A AA, where the coefficients of the linear combination are all between 0 and 1 ; that is, the set S S SS is the restriction of span ( { a 1 , , a n } ) span a 1 , , a n span({a_(1),dots,a_(n)})\operatorname{span}\left(\left\{a_{1}, \ldots, a_{n}\right\}\right) to only those linear combinations whose coefficients α 1 , , α n α 1 , , α n alpha_(1),dots,alpha_(n)\alpha_{1}, \ldots, \alpha_{n} satisfy 0 α i 1 , i = 1 , , n 0 α i 1 , i = 1 , , n 0 <= alpha_(i) <= 1,i=1,dots,n0 \leq \alpha_{i} \leq 1, i=1, \ldots, n. Formally,
S = { v R n : v = i = 1 n α i a i where 0 α i 1 , i = 1 , , n } . S = { v R n : v = i = 1 n α i a i  where  0 α i 1 , i = 1 , , n } . S={v inR^(n):v=sum_(i=1)^(n)alpha_(i)a_(i)" where "0 <= alpha_(i) <= 1,i=1,dots,n}.S=\{v \in \mathbb{R}^{n}: v=\sum_{i=1}^{n} \alpha_{i} a_{i} \text { where } 0 \leq \alpha_{i} \leq 1, i=1, \ldots, n\} .
The absolute value of the determinant of A A AA, it turns out, is a measure of the "volume" of the set S S SS.[3]
For example, consider the 2 × 2 2 × 2 2xx22 \times 2 matrix,
(4) A = [ 1 3 3 2 ] . (4) A = 1 3 3 2 . {:(4)A=[[1,3],[3,2]].:}\begin{equation} A=\left[\begin{array}{ll} 1 & 3 \\ 3 & 2 \end{array}\right]. \end{equation}
Here, the rows of the matrix are
a 1 = [ 1 3 ] a 2 = [ 3 2 ] . a 1 = 1 3 a 2 = 3 2 . a_(1)=[[1],[3]]quada_(2)=[[3],[2]].a_{1}=\left[\begin{array}{l} 1 \\ 3 \end{array}\right] \quad a_{2}=\left[\begin{array}{l} 3 \\ 2 \end{array}\right] .
The set S corresponding to these rows is shown in Figure 1. For two-dimensional matrices, S generally has the shape of a parallelogram. In our example, the value of the determinant is |A|=-7 (as can be computed using the formulas shown later in this section), so the area of the parallelogram is 7. (Verify this for yourself!)
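A quick check of this determinant and the parallelogram area in code; a sketch:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 2.0]])

det = np.linalg.det(A)       # 1*2 - 3*3 = -7
print(det, abs(det))         # approximately -7.0; the area is |det| = 7.0
```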
In three dimensions, the set S corresponds to an object known as a parallelepiped (a three-dimensional box with skewed sides, such that every face has the shape of a parallelogram). The absolute value of the determinant of the 3 \times 3 matrix whose rows define S gives the three-dimensional volume of the parallelepiped. In even higher dimensions, the set S is an object known as an n-dimensional parallelotope.
Figure 1: Illustration of the determinant for the 2 × 2 2 × 2 2xx22 \times 2 matrix A A AA given in (4). Here, a 1 a 1 a_(1)a_{1} and a 2 a 2 a_(2)a_{2} are vectors corresponding to the rows of A A AA, and the set S S SS corresponds to the shaded region (i.e., the parallelogram). The absolute value of the determinant, | det A | = 7 | det A | = 7 |det A|=7|\operatorname{det} A|=7, is the area of the parallelogram.
Algebraically, the determinant satisfies the following three properties (from which all other properties follow, including the general formula):
  1. The determinant of the identity is 1 , | I | = 1 1 , | I | = 1 1,|I|=11,|I|=1. (Geometrically, the volume of a unit hypercube is 1 1 11).
  2. Given a matrix A \in \mathbb{R}^{n \times n}, if we multiply a single row of A by a scalar t \in \mathbb{R}, then the determinant of the new matrix is t|A|: \left| \left[\begin{array}{ccc}-&t a_{1}^{T}&- \\- &a_{2}^{T}&- \\ &\vdots& \\ -&a_{n}^{T}&- \end{array} \right]\right|=t|A|. (Geometrically, multiplying one of the sides of the set S by a factor t causes the volume to increase by a factor t.)
  3. If we exchange any two rows a_{i}^{T} and a_{j}^{T} of A, then the determinant of the new matrix is -|A|, for example \left|\left[\begin{array}{ccc} - & a_{2}^{T} & - \\ - & a_{1}^{T} & - \\ &\vdots & \\ - & a_{n}^{T} & - \end{array}\right]\right|=-|A|.
In case you are wondering, it is not immediately obvious that a function satisfying the above three properties exists. In fact, though, such a function does exist, and is unique (which we will not prove here).
Several properties that follow from the three properties above include:
  • For A R n × n , | A | = | A T | A R n × n , | A | = A T A inR^(n xx n),|A|=|A^(T)|A \in \mathbb{R}^{n \times n},|A|=\left|A^{T}\right|.
  • For A , B R n × n , | A B | = | A | | B | A , B R n × n , | A B | = | A | | B | A,B inR^(n xx n),|AB|=|A||B|A, B \in \mathbb{R}^{n \times n},|A B|=|A||B|.
  • For A R n × n , | A | = 0 A R n × n , | A | = 0 A inR^(n xx n),|A|=0A \in \mathbb{R}^{n \times n},|A|=0 if and only if A A AA is singular (i.e., non-invertible). (If A A AA is singular then it does not have full rank, and hence its columns are linearly dependent. In this case, the set S S SS corresponds to a "flat sheet" within the n n nn-dimensional space and hence has zero volume.)
  • For A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} and A A AA non-singular, | A 1 | = 1 / | A | A 1 = 1 / | A | |A^(-1)|=1//|A|\left|A^{-1}\right|=1 /|A|.
Before giving the general definition for the determinant, we define, for A R n × n , A i , j A R n × n , A i , j A inR^(n xx n),A_(\\i,\\j)inA \in \mathbb{R}^{n \times n}, A_{\backslash i, \backslash j} \in R ( n 1 ) × ( n 1 ) R ( n 1 ) × ( n 1 ) R^((n-1)xx(n-1))\mathbb{R}^{(n-1) \times(n-1)} to be the matrix that results from deleting the i i iith row and j j jjth column from A A AA. The general (recursive) formula for the determinant is
| A | = i = 1 n ( 1 ) i + j a i j | A i , j | ( for any j 1 , , n ) = j = 1 n ( 1 ) i + j a i j | A i , j | ( for any i 1 , , n ) | A | = i = 1 n ( 1 ) i + j a i j A i , j for any  j 1 , , n = j = 1 n ( 1 ) i + j a i j A i , j for any  i 1 , , n {:[|A|=sum_(i=1)^(n)(-1)^(i+j)a_(ij)|A_(\\i,\\j)|quad("for any "j in1,dots,n)],[=sum_(j=1)^(n)(-1)^(i+j)a_(ij)|A_(\\i,\\j)|quad("for any "i in1,dots,n)]:}\begin{align*} |A| & =\sum_{i=1}^{n}(-1)^{i+j} a_{i j}\left|A_{\backslash i, \backslash j}\right| \quad \left(\text {for any } j \in 1, \ldots, n\right) \\ & =\sum_{j=1}^{n}(-1)^{i+j} a_{i j}\left|A_{\backslash i, \backslash j}\right| \quad \left(\text {for any } i \in 1, \ldots, n\right) \end{align*}
with the initial case that |A|=a_{11} for A \in \mathbb{R}^{1 \times 1}. If we were to expand this formula completely for A \in \mathbb{R}^{n \times n}, there would be a total of n! (n factorial) different terms. For this reason, we hardly ever explicitly write the complete equation of the determinant for matrices bigger than 3 \times 3. However, the equations for determinants of matrices up to size 3 \times 3 are fairly common, and it is good to know them:
| [ a 11 ] | = a 11 | [ a 11 a 12 a 21 a 22 ] | = a 11 a 22 a 12 a 21 | [ a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 ] | = a 11 a 22 a 33 + a 12 a 23 a 31 + a 13 a 21 a 32 a 11 a 23 a 32 a 12 a 21 a 33 a 13 a 22 a 31 a 11 = a 11 a 11 a 12 a 21 a 22 = a 11 a 22 a 12 a 21 a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 = a 11 a 22 a 33 + a 12 a 23 a 31 + a 13 a 21 a 32 a 11 a 23 a 32 a 12 a 21 a 33 a 13 a 22 a 31 {:[|[a_(11)]|=a_(11)],[|[[a_(11),a_(12)],[a_(21),a_(22)]]|=a_(11)a_(22)-a_(12)a_(21)],[|[[a_(11),a_(12),a_(13)],[a_(21),a_(22),a_(23)],[a_(31),a_(32),a_(33)]]|={:[a_(11)a_(22)a_(33)+a_(12)a_(23)a_(31)+a_(13)a_(21)a_(32)],[-a_(11)a_(23)a_(32)-a_(12)a_(21)a_(33)-a_(13)a_(22)a_(31)]:}]:}\begin{aligned} \left|\left[a_{11}\right]\right|&=a_{11} \\ \left|\left[\begin{array}{cc}a_{11} & a_{12} \\a_{21} & a_{22}\end{array}\right]\right|&=a_{11} a_{22}-a_{12} a_{21} \\ \left|\left[\begin{array}{lll}a_{11} & a_{12} & a_{13} \\a_{21} & a_{22} & a_{23} \\a_{31} & a_{32} & a_{33}\end{array}\right]\right| &=\begin{aligned}a_{11} &a_{22} a_{33}+a_{12} a_{23} a_{31}+a_{13} a_{21} a_{32} \\ &-a_{11} a_{23} a_{32}-a_{12} a_{21} a_{33}-a_{13} a_{22} a_{31}\end{aligned} \end{aligned}
The classical adjoint (often just called the adjoint) of a matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, is denoted adj ( A ) adj ( A ) adj(A)\operatorname{adj}(A), and defined as
adj ( A ) R n × n , ( adj ( A ) ) i j = ( 1 ) i + j | A j , i | adj ( A ) R n × n , ( adj ( A ) ) i j = ( 1 ) i + j A j , i adj(A)inR^(n xx n),quad(adj(A))_(ij)=(-1)^(i+j)|A_(\\j,\\i)|\operatorname{adj}(A) \in \mathbb{R}^{n \times n}, \quad(\operatorname{adj}(A))_{i j}=(-1)^{i+j}\left|A_{\backslash j, \backslash i}\right|
(note the switch in the indices A j , i A j , i A_(\\j,\\i)A_{\backslash j, \backslash i}). It can be shown that for any nonsingular A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n},
A 1 = 1 | A | adj ( A ) . A 1 = 1 | A | adj ( A ) . A^(-1)=(1)/(|A|)adj(A).A^{-1}=\frac{1}{|A|} \operatorname{adj}(A) .
While this is a nice "explicit" formula for the inverse of a matrix, we should note that, numerically, there are in fact much more efficient ways of computing the inverse.

3.11. Quadratic Forms and Positive Semidefinite Matrices

Given a square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} and a vector x R n x R n x inR^(n)x \in \mathbb{R}^{n}, the scalar value x T A x x T A x x^(T)Axx^{T} A x is called a quadratic form. Written explicitly, we see that
x T A x = i = 1 n x i ( A x ) i = i = 1 n x i ( j = 1 n A i j x j ) = i = 1 n j = 1 n A i j x i x j . x T A x = i = 1 n x i ( A x ) i = i = 1 n x i j = 1 n A i j x j = i = 1 n j = 1 n A i j x i x j . x^(T)Ax=sum_(i=1)^(n)x_(i)(Ax)_(i)=sum_(i=1)^(n)x_(i)(sum_(j=1)^(n)A_(ij)x_(j))=sum_(i=1)^(n)sum_(j=1)^(n)A_(ij)x_(i)x_(j).x^{T} A x=\sum_{i=1}^{n} x_{i}(A x)_{i}=\sum_{i=1}^{n} x_{i}\left(\sum_{j=1}^{n} A_{i j} x_{j}\right)=\sum_{i=1}^{n} \sum_{j=1}^{n} A_{i j} x_{i} x_{j}.
Note that,
x T A x = ( x T A x ) T = x T A T x = x T ( 1 2 A + 1 2 A T ) x , x T A x = x T A x T = x T A T x = x T 1 2 A + 1 2 A T x , x^(T)Ax=(x^(T)Ax)^(T)=x^(T)A^(T)x=x^(T)((1)/(2)A+(1)/(2)A^(T))x,x^{T} A x=\left(x^{T} A x\right)^{T}=x^{T} A^{T} x=x^{T}\left(\frac{1}{2} A+\frac{1}{2} A^{T}\right) x,
where the first equality follows from the fact that the transpose of a scalar is equal to itself, and the second equality follows from the fact that we are averaging two quantities which are themselves equal. From this, we can conclude that only the symmetric part of A A AA contributes to the quadratic form. For this reason, we often implicitly assume that the matrices appearing in a quadratic form are symmetric.
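A short numerical sketch showing that only the symmetric part of A contributes to the quadratic form:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))     # generally not symmetric
x = rng.standard_normal(4)

sym = 0.5 * (A + A.T)               # symmetric part of A
assert np.isclose(x @ A @ x, x @ sym @ x)   # x^T A x = x^T ((A + A^T)/2) x
```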
We give the following definitions:
  • A symmetric matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n} is positive definite (PD) if for all non-zero vectors x R n , x T A x > 0 x R n , x T A x > 0 x inR^(n),x^(T)Ax > 0x \in \mathbb{R}^{n}, x^{T} A x>0. This is usually denoted A 0 A 0 A>-0A \succ 0 (or just A > 0 A > 0 A > 0A>0 ), and often times the set of all positive definite matrices is denoted S + + n S + + n S_(++)^(n)\mathbb{S}_{++}^{n}.
  • A symmetric matrix A \in \mathbb{S}^{n} is positive semidefinite (PSD) if for all vectors x \in \mathbb{R}^{n}, x^{T} A x \geq 0. This is written A \succeq 0 (or just A \geq 0), and the set of all positive semidefinite matrices is often denoted \mathbb{S}_{+}^{n}.
  • Likewise, a symmetric matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n} is negative definite (ND), denoted A 0 A 0 A-<0A \prec 0 (or just A < 0 ) A < 0 ) A < 0)A<0) if for all non-zero x R n , x T A x < 0 x R n , x T A x < 0 x inR^(n),x^(T)Ax < 0x \in \mathbb{R}^{n}, x^{T} A x<0.
  • Similarly, a symmetric matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n} is negative semidefinite (NSD), denoted A 0 A 0 A-<=0A \preceq 0 (or just A 0 A 0 A <= 0A \leq 0 ) if for all x R n , x T A x 0 x R n , x T A x 0 x inR^(n),x^(T)Ax <= 0x \in \mathbb{R}^{n}, x^{T} A x \leq 0.
  • Finally, a symmetric matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n} is indefinite, if it is neither positive semidefinite nor negative semidefinite – i.e., if there exists x 1 , x 2 R n x 1 , x 2 R n x_(1),x_(2)inR^(n)x_{1}, x_{2} \in \mathbb{R}^{n} such that x 1 T A x 1 > 0 x 1 T A x 1 > 0 x_(1)^(T)Ax_(1) > 0x_{1}^{T} A x_{1}>0 and x 2 T A x 2 < 0 x 2 T A x 2 < 0 x_(2)^(T)Ax_(2) < 0x_{2}^{T} A x_{2}<0.
It should be obvious that if A A AA is positive definite, then A A -A-A is negative definite and vice versa. Likewise, if A A AA is positive semidefinite then A A -A-A is negative semidefinite and vice versa. If A A AA is indefinite, then so is A A -A-A.
One important property of positive definite and negative definite matrices is that they are always full rank, and hence, invertible. To see why this is the case, suppose that some matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} is not full rank. Then, suppose that the j j jjth column of A A AA is expressible as a linear combination of the other n 1 n 1 n-1n-1 columns:
a j = i j x i a i , a j = i j x i a i , a_(j)=sum_(i!=j)x_(i)a_(i),a_{j}=\sum_{i \neq j} x_{i} a_{i},
for some x 1 , , x j 1 , x j + 1 , , x n R x 1 , , x j 1 , x j + 1 , , x n R x_(1),dots,x_(j-1),x_(j+1),dots,x_(n)inRx_{1}, \ldots, x_{j-1}, x_{j+1}, \ldots, x_{n} \in \mathbb{R}. Setting x j = 1 x j = 1 x_(j)=-1x_{j}=-1, we have
A x = i = 1 n x i a i = 0 . A x = i = 1 n x i a i = 0 . Ax=sum_(i=1)^(n)x_(i)a_(i)=0.A x=\sum_{i=1}^{n} x_{i} a_{i}=0 .
But this implies x T A x = 0 x T A x = 0 x^(T)Ax=0x^{T} A x=0 for some non-zero vector x x xx, so A A AA must be neither positive definite nor negative definite. Therefore, if A A AA is either positive definite or negative definite, it must be full rank.
Finally, there is one type of positive definite matrix that comes up frequently, and so deserves some special mention. Given any matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} (not necessarily symmetric or even square), the matrix G = A T A G = A T A G=A^(T)AG=A^{T} A (sometimes called a Gram matrix) is always positive semidefinite. Further, if m n m n m >= nm \geq n (and we assume for convenience that A A AA is full rank), then G = A T A G = A T A G=A^(T)AG=A^{T} A is positive definite.
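The following sketch (ours, not from the notes) checks this numerically: the eigenvalues of G = A^T A for a random tall, full-rank A are all nonnegative (in fact positive here).

```python
# Sketch: the Gram matrix G = A^T A is positive semidefinite -- all of its
# eigenvalues are >= 0 -- and positive definite when A (m >= n) has full
# column rank.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))      # m = 6 >= n = 3; full rank with high probability
G = A.T @ A

eigvals = np.linalg.eigvalsh(G)      # G is symmetric, so eigvalsh applies
print(np.all(eigvals >= -1e-10))     # expect True: PSD (indeed PD here)
```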

3.12. Eigenvalues and Eigenvectors

Given a square matrix A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, we say that λ C λ C lambda inC\lambda \in \mathbb{C} is an eigenvalue of A A AA and x C n x C n x inC^(n)x \in \mathbb{C}^{n} is the corresponding eigenvector[4] if
A x = λ x , x 0. A x = λ x , x 0. Ax=lambda x,quad x!=0.A x=\lambda x, \quad x \neq 0.
Intuitively, this definition means that multiplying A A AA by the vector x x xx results in a new vector that points in the same direction as x x xx, but scaled by a factor λ λ lambda\lambda. Also note that for any eigenvector x C n x C n x inC^(n)x \in \mathbb{C}^{n} and scalar c C , A ( c x ) = c A x = c λ x = λ ( c x ) c inC,A(cx)=cAx=c lambda x=lambda(cx)c \in \mathbb{C}, A(c x)=c A x=c \lambda x=\lambda(c x), so c x c x cxc x is also an eigenvector. For this reason, when we talk about "the" eigenvector associated with λ λ lambda\lambda, we usually assume that the eigenvector is normalized to have length 1 1 11 (this still creates some ambiguity, since x x xx and x x -x-x will both be eigenvectors, but we will have to live with this).
We can rewrite the equation above to state that ( λ , x ) ( λ , x ) (lambda,x)(\lambda, x) is an eigenvalue-eigenvector pair of A A AA if,
( λ I A ) x = 0 , x 0 . ( λ I A ) x = 0 , x 0 . (lambda I-A)x=0,quad x!=0.(\lambda I-A) x=0, \quad x \neq 0 .
But ( λ I A ) x = 0 ( λ I A ) x = 0 (lambda I-A)x=0(\lambda I-A) x=0 has a non-zero solution x x xx if and only if ( λ I A ) ( λ I A ) (lambda I-A)(\lambda I-A) has a non-trivial nullspace, which is only the case if ( λ I A ) ( λ I A ) (lambda I-A)(\lambda I-A) is singular, i.e.,
| ( λ I A ) | = 0 . | ( λ I A ) | = 0 . |(lambda I-A)|=0.|(\lambda I-A)|=0 .
We can now use the previous definition of the determinant to expand this expression | ( λ I A ) | | ( λ I A ) | |(lambda I-A)||(\lambda I-A)| into a (very large) polynomial in λ λ lambda\lambda, which will have degree n n nn. It is often called the characteristic polynomial of the matrix A A AA.
We then find the n n nn (possibly complex) roots of this characteristic polynomial and denote them by λ 1 , , λ n λ 1 , , λ n lambda_(1),dots,lambda_(n)\lambda_{1}, \ldots, \lambda_{n}. These are all the eigenvalues of the matrix A A AA, but we note that they may not be distinct. To find the eigenvector corresponding to the eigenvalue λ i λ i lambda_(i)\lambda_{i}, we simply solve the linear equation ( λ i I A ) x = 0 λ i I A x = 0 (lambda_(i)I-A)x=0\left(\lambda_{i} I-A\right) x=0, which is guaranteed to have a non-zero solution because λ i I A λ i I A lambda_(i)I-A\lambda_{i} I-A is singular (though there could be multiple or infinitely many solutions).
It should be noted that this is not the method which is actually used in practice to numerically compute the eigenvalues and eigenvectors (remember that the complete expansion of the determinant has n ! n ! n!n! terms); it is rather a mathematical argument.
The following are properties of eigenvalues and eigenvectors (in all cases assume A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n} has eigenvalues λ 1 , , λ n λ 1 , , λ n lambda_(1),dots,lambda_(n)\lambda_{1}, \ldots, \lambda_{n}); a brief numerical check of the first two properties follows the list:
  • The trace of A A AA is equal to the sum of its eigenvalues, tr A = i = 1 n λ i . tr A = i = 1 n λ i . tr A=sum_(i=1)^(n)lambda_(i).\operatorname{tr} A=\sum_{i=1}^{n} \lambda_{i} .
  • The determinant of A A AA is equal to the product of its eigenvalues, | A | = i = 1 n λ i . | A | = i = 1 n λ i . |A|=prod_(i=1)^(n)lambda_(i).|A|=\prod_{i=1}^{n} \lambda_{i} .
  • The rank of A A AA is equal to the number of non-zero eigenvalues of A A AA.
  • Suppose A A AA is non-singular with eigenvalue λ λ lambda\lambda and an associated eigenvector x x xx. Then 1 / λ 1 / λ 1//lambda1 / \lambda is an eigenvalue of A 1 A 1 A^(-1)A^{-1} with an associated eigenvector x x xx, i.e., A 1 x = ( 1 / λ ) x A 1 x = ( 1 / λ ) x A^(-1)x=(1//lambda)xA^{-1} x=(1 / \lambda) x. (To prove this, take the eigenvector equation, A x = λ x A x = λ x Ax=lambda xA x=\lambda x and left-multiply each side by A 1 A 1 A^(-1)A^{-1}.)
  • The eigenvalues of a diagonal matrix D = diag ( d 1 , d n ) D = diag d 1 , d n D=diag(d_(1),dotsd_(n))D=\operatorname{diag}\left(d_{1}, \ldots d_{n}\right) are just the diagonal entries d 1 , d n d 1 , d n d_(1),dotsd_(n)d_{1}, \ldots d_{n}.
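Here is the numerical check referred to above, a small sketch of our own using numpy: the trace and determinant of a random matrix match the sum and product of its (possibly complex) eigenvalues.

```python
# Sketch: trace = sum of eigenvalues, determinant = product of eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

lam = np.linalg.eigvals(A)  # eigenvalues, possibly complex
print(np.isclose(np.trace(A), lam.sum().real))        # expect True
print(np.isclose(np.linalg.det(A), lam.prod().real))  # expect True
```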

3.13. Eigenvalues and Eigenvectors of Symmetric Matrices

In general, the structures of the eigenvalues and eigenvectors of a general square matrix can be subtle to characterize. Fortunately, in most of the cases in machine learning, it suffices to deal with symmetric real matrices, whose eigenvalues and eigenvectors have remarkable properties.
Throughout this section, let's assume that A A AA is a symmetric real matrix. We have the following properties:
  1. All eigenvalues of A A AA are real numbers. We denote them by λ 1 , , λ n λ 1 , , λ n lambda_(1),dots,lambda_(n)\lambda_{1}, \ldots, \lambda_{n}.
  2. There exists a set of eigenvectors u 1 , , u n u 1 , , u n u_(1),dots,u_(n)u_{1}, \ldots, u_{n} such that a) for all i , u i i , u i i,u_(i)i, u_{i} is an eigenvector with eigenvalue λ i λ i lambda_(i)\lambda_{i} and b) u 1 , , u n u 1 , , u n u_(1),dots,u_(n)u_{1}, \ldots, u_{n} are unit vectors and orthogonal to each other.[5]
Let U U UU be the orthonormal matrix that contains u i u i u_(i)u_{i}'s as columns:[6]
(5) U = [ | | | u 1 u 2 u n | | | ] (5) U = | | | u 1 u 2 u n | | | {:(5)U=[[|,|,,|],[u_(1),u_(2),cdots,u_(n)],[|,|,,|]]:}\begin{equation} U=\left[\begin{array}{cccc} | & | & & | \\ u_{1} & u_{2} & \cdots & u_{n} \\ | & | & & | \end{array}\right] \end{equation}
Let Λ = diag ( λ 1 , , λ n ) Λ = diag λ 1 , , λ n Lambda=diag(lambda_(1),dots,lambda_(n))\Lambda=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{n}\right) be the diagonal matrix that contains λ 1 , , λ n λ 1 , , λ n lambda_(1),dots,lambda_(n)\lambda_{1}, \ldots, \lambda_{n} as entries on the diagonal. Using the view of matrix-matrix vector multiplication in equation (2) of Section 2.3, we can verify that
A U = [ | | | A u 1 A u 2 A u n | | | ] = [ | | | λ 1 u 1 λ 2 u 2 λ n u n | | | ] = U diag ( λ 1 , , λ n ) = U Λ A U = | | | A u 1 A u 2 A u n | | | = | | | λ 1 u 1 λ 2 u 2 λ n u n | | | = U diag λ 1 , , λ n = U Λ AU=[[|,|,,|],[Au_(1),Au_(2),cdots,Au_(n)],[|,|,,|]]=[[|,|,,|],[lambda_(1)u_(1),lambda_(2)u_(2),cdots,lambda_(n)u_(n)],[|,|,,|]]=U diag(lambda_(1),dots,lambda_(n))=U LambdaA U=\left[\begin{array}{cccc}| & | & & | \\ A u_{1} & A u_{2} & \cdots & A u_{n} \\ | & | & & |\end{array}\right]=\left[\begin{array}{cccc}| & | & & | \\ \lambda_{1} u_{1} & \lambda_{2} u_{2} & \cdots & \lambda_{n} u_{n} \\ | & | & & |\end{array}\right]=U \operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{n}\right)=U \Lambda
Recalling that an orthonormal matrix U U UU satisfies U U T = I U U T = I UU^(T)=IU U^{T}=I, and using the equation above, we have
(6) A = A U U T = U Λ U T (6) A = A U U T = U Λ U T {:(6)A=AUU^(T)=U LambdaU^(T):}\begin{equation} A=A U U^{T}=U \Lambda U^{T} \end{equation}
This new representation of A A AA as U Λ U T U Λ U T U LambdaU^(T)U \Lambda U^{T} is often called the diagonalization of the matrix A A AA. The term diagonalization comes from the fact that with such a representation, we can often effectively treat a symmetric matrix A A AA as a diagonal matrix – which is much easier to understand – w.r.t the basis defined by the eigenvectors U U UU. We will elaborate on this below with several examples.
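As an illustrative sketch of our own (assuming numpy.linalg.eigh is available), the decomposition can be recovered numerically for a random symmetric matrix.

```python
# Sketch: the diagonalization A = U Lambda U^T of a symmetric matrix,
# recovered with numpy.linalg.eigh.
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = 0.5 * (B + B.T)                      # construct a symmetric matrix

lam, U = np.linalg.eigh(A)               # real eigenvalues and orthonormal eigenvectors
Lambda = np.diag(lam)

print(np.allclose(U @ U.T, np.eye(4)))   # U is orthonormal
print(np.allclose(A, U @ Lambda @ U.T))  # A = U Lambda U^T
```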
Background: representing a vector w.r.t. another basis. Any orthonormal matrix U = [ | | | u 1 u 2 u n | | | ] U = | | | u 1 u 2 u n | | | U=[[|,|,,|],[u_(1),u_(2),cdots,u_(n)],[|,|,,|]]U=\left[\begin{array}{cccc}| & | & & | \\ u_{1} & u_{2} & \cdots & u_{n} \\ | & | & & |\end{array}\right] defines a new basis (coordinate system) of R n R n R^(n)\mathbb{R}^{n} in the following sense. Any vector x R n x R n x inR^(n)x \in \mathbb{R}^{n} can be represented as a linear combination of u 1 , , u n u 1 , , u n u_(1),dots,u_(n)u_{1}, \ldots, u_{n} with coefficients x ^ 1 , , x ^ n x ^ 1 , , x ^ n hat(x)_(1),dots, hat(x)_(n)\hat{x}_{1}, \ldots, \hat{x}_{n}:
x = x ^ 1 u 1 + + x ^ n u n = U x ^ x = x ^ 1 u 1 + + x ^ n u n = U x ^ x= hat(x)_(1)u_(1)+cdots+ hat(x)_(n)u_(n)=U hat(x)x=\hat{x}_{1} u_{1}+\cdots+\hat{x}_{n} u_{n}=U \hat{x}
where in the second equality we use the view of equation (1). Indeed, such an x ^ x ^ hat(x)\hat{x} exists and is unique:
x = U x ^ U T x = x ^ x = U x ^ U T x = x ^ x=U hat(x)<=>U^(T)x= hat(x)x=U \hat{x} \Leftrightarrow U^{T} x=\hat{x}
In other words, the vector x ^ = U T x x ^ = U T x hat(x)=U^(T)x\hat{x}=U^{T} x can serve as another representation of the vector x x xx w.r.t the basis defined by U U UU.
"Diagonalizing" matrix-vector multiplication. With the setup above, we will see that left-multiplying matrix A A AA can be viewed as left-multiplying a diagonal matrix w.r.t the basic of the eigenvectors. Suppose x x xx is a vector and x ^ x ^ hat(x)\hat{x} is its representation w.r.t to the basis of U U UU. Let z = A x z = A x z=Axz=A x be the matrix-vector product. Now let's compute the representation z z zz w.r.t the basis of U U UU:
Then, again using the fact that U U T = U T U = I U U T = U T U = I UU^(T)=U^(T)U=IU U^{T}=U^{T} U=I and equation (6), we have that
z ^ = U T z = U T A x = U T U Λ U T x = Λ x ^ = [ λ 1 x ^ 1 λ 2 x ^ 2 λ n x ^ n ] z ^ = U T z = U T A x = U T U Λ U T x = Λ x ^ = λ 1 x ^ 1 λ 2 x ^ 2 λ n x ^ n hat(z)=U^(T)z=U^(T)Ax=U^(T)U LambdaU^(T)x=Lambda hat(x)=[[lambda_(1) hat(x)_(1)],[lambda_(2) hat(x)_(2)],[vdots],[lambda_(n) hat(x)_(n)]]\hat{z}=U^{T} z=U^{T} A x=U^{T} U \Lambda U^{T} x=\Lambda \hat{x}=\left[\begin{array}{c} \lambda_{1} \hat{x}_{1} \\ \lambda_{2} \hat{x}_{2} \\ \vdots \\ \lambda_{n} \hat{x}_{n} \end{array}\right]
We see that left-multiplying matrix A A AA in the original space is equivalent to left-multiplying the diagonal matrix Λ Λ Lambda\Lambda w.r.t the new basis, which is merely scaling each coordinate by the corresponding eigenvalue.
Under the new basis, multiplying a matrix multiple times becomes much simpler as well. For example, suppose q = A A A x q = A A A x q=AAAxq=A A A x. Deriving the analytical form of q q qq in terms of the entries of A A AA may be a nightmare under the original basis, but it is much easier under the new one:
(7) q ^ = U T q = U T A A A x = U T U Λ U T U Λ U T U Λ U T x = Λ 3 x ^ = [ λ 1 3 x ^ 1 λ 2 3 x ^ 2 λ n 3 x ^ n ] (7) q ^ = U T q = U T A A A x = U T U Λ U T U Λ U T U Λ U T x = Λ 3 x ^ = λ 1 3 x ^ 1 λ 2 3 x ^ 2 λ n 3 x ^ n {:(7) hat(q)=U^(T)q=U^(T)AAAx=U^(T)U LambdaU^(T)U LambdaU^(T)U LambdaU^(T)x=Lambda^(3) hat(x)=[[lambda_(1)^(3) hat(x)_(1)],[lambda_(2)^(3) hat(x)_(2)],[vdots],[lambda_(n)^(3) hat(x)_(n)]]:}\begin{equation} \hat{q}=U^{T} q=U^{T} A A A x=U^{T} U \Lambda U^{T} U \Lambda U^{T} U \Lambda U^{T} x=\Lambda^{3} \hat{x}=\left[\begin{array}{c} \lambda_{1}^{3} \hat{x}_{1} \\ \lambda_{2}^{3} \hat{x}_{2} \\ \vdots \\ \lambda_{n}^{3} \hat{x}_{n} \end{array}\right] \end{equation}
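A small numerical sketch of this (ours, not part of the notes): computing q = AAAx directly and via the eigenbasis gives the same answer.

```python
# Sketch: q = A A A x computed directly and as U (Lambda^3 x_hat).
import numpy as np

rng = np.random.default_rng(9)
B = rng.standard_normal((4, 4))
A = 0.5 * (B + B.T)                      # symmetric matrix
x = rng.standard_normal(4)

lam, U = np.linalg.eigh(A)
x_hat = U.T @ x                          # representation of x w.r.t. the eigenbasis
q_hat = lam**3 * x_hat                   # Lambda^3 x_hat, coordinate-wise

print(np.allclose(U @ q_hat, A @ A @ A @ x))  # expect True
```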
"Diagonalizing" quadratic form. As a directly corollary, the quadratic form x T A x x T A x x^(T)Axx^{T} A x can also be simplified under the new basis
(8) x T A x = x T U Λ U T x = x ^ T Λ x ^ = i = 1 n λ i x ^ i 2 (8) x T A x = x T U Λ U T x = x ^ T Λ x ^ = i = 1 n λ i x ^ i 2 {:(8)x^(T)Ax=x^(T)U LambdaU^(T)x= hat(x)^(T)Lambda hat(x)=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2):}\begin{equation} x^{T} A x=x^{T} U \Lambda U^{T} x=\hat{x}^{T} \Lambda \hat{x}=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2} \end{equation}
(Recall that with the old representation, x T A x = i = 1 , j = 1 n x i x j A i j x T A x = i = 1 , j = 1 n x i x j A i j x^(T)Ax=sum_(i=1,j=1)^(n)x_(i)x_(j)A_(ij)x^{T} A x=\sum_{i=1, j=1}^{n} x_{i} x_{j} A_{i j} involves a sum of n 2 n 2 n^(2)n^{2} terms instead of the n n nn terms in the equation above.) With this viewpoint, we can also show that the definiteness of the matrix A A AA depends entirely on the sign of its eigenvalues (a small classification sketch follows the list):
  1. If all λ i > 0 λ i > 0 lambda_(i) > 0\lambda_{i}>0, then the matrix A A AA is positive definite because x T A x = i = 1 n λ i x ^ i 2 > 0 x T A x = i = 1 n λ i x ^ i 2 > 0 x^(T)Ax=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2) > 0x^{T} A x=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2}>0 for any x ^ 0 . x ^ 0 . hat(x)!=0.\hat{x} \neq 0 .[7]
  2. If all λ i 0 λ i 0 lambda_(i) >= 0\lambda_{i} \geq 0, it is positive semidefinite because x T A x = i = 1 n λ i x ^ i 2 0 x T A x = i = 1 n λ i x ^ i 2 0 x^(T)Ax=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2) >= 0x^{T} A x=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2} \geq 0 for all x ^ x ^ hat(x)\hat{x}.
  3. Likewise, if all λ i < 0 λ i < 0 lambda_(i) < 0\lambda_{i}<0 or λ i 0 λ i 0 lambda_(i) <= 0\lambda_{i} \leq 0, then A A AA is negative definite or negative semidefinite respectively.
  4. Finally, if A A AA has both positive and negative eigenvalues, say λ i > 0 λ i > 0 lambda_(i) > 0\lambda_{i}>0 and λ j < 0 λ j < 0 lambda_(j) < 0\lambda_{j}<0, then it is indefinite. This is because if we let x ^ x ^ hat(x)\hat{x} satisfy x ^ i = 1 x ^ i = 1 hat(x)_(i)=1\hat{x}_{i}=1 and x ^ k = 0 , k i x ^ k = 0 , k i hat(x)_(k)=0,AA k!=i\hat{x}_{k}=0, \forall k \neq i, then x T A x = i = 1 n λ i x ^ i 2 > 0 x T A x = i = 1 n λ i x ^ i 2 > 0 x^(T)Ax=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2) > 0x^{T} A x=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2}>0. Similarly we can let x ^ x ^ hat(x)\hat{x} satisfy x ^ j = 1 x ^ j = 1 hat(x)_(j)=1\hat{x}_{j}=1 and x ^ k = 0 , k j x ^ k = 0 , k j hat(x)_(k)=0,AA k!=j\hat{x}_{k}=0, \forall k \neq j, then x T A x = i = 1 n λ i x ^ i 2 < 0 . x T A x = i = 1 n λ i x ^ i 2 < 0 . x^(T)Ax=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2) < 0.x^{T} A x=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2}<0 .[8]
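The classification sketch referred to above (our own illustration; the function name definiteness and the tolerance are our choices) implements this case analysis directly from the eigenvalue signs.

```python
# Sketch: classifying the definiteness of a symmetric matrix from the signs
# of its eigenvalues.
import numpy as np

def definiteness(A, tol=1e-10):
    lam = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

print(definiteness(np.array([[2.0, 0.0], [0.0, 1.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite
```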
An application where eigenvalues and eigenvectors come up frequently is in maximizing some function of a matrix. In particular, for a matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n}, consider the following maximization problem,
(9) max x R n x T A x subject to x 2 2 = 1 (9) max x R n x T A x subject to  x 2 2 = 1 {:(9)max_(x inR^(n))x^(T)Ax quad"subject to "||x||_(2)^(2)=1:}\begin{equation} \max _{x \in \mathbb{R}^{n}} x^{T} A x \quad \text {subject to }\|x\|_{2}^{2}=1 \end{equation}
i.e., we want to find the vector (of norm 1 1 11) which maximizes the quadratic form. Assuming the eigenvalues are ordered as λ 1 λ 2 λ n λ 1 λ 2 λ n lambda_(1) >= lambda_(2) >= dots >= lambda_(n)\lambda_{1} \geq \lambda_{2} \geq \ldots \geq \lambda_{n}, the optimal value of this optimization problem is λ 1 λ 1 lambda_(1)\lambda_{1} and any eigenvector u 1 u 1 u_(1)u_{1} corresponding to λ 1 λ 1 lambda_(1)\lambda_{1} is one of the maximizers. (If λ 1 > λ 2 λ 1 > λ 2 lambda_(1) > lambda_(2)\lambda_{1}>\lambda_{2}, then there is a unique eigenvector corresponding to eigenvalue λ 1 λ 1 lambda_(1)\lambda_{1}, which is the unique maximizer of the optimization problem (9).)
We can show this by using the diagonalization technique: Note that x 2 = x ^ 2 x 2 = x ^ 2 ||x||_(2)=|| hat(x)||_(2)\|x\|_{2}=\|\hat{x}\|_{2} by equation (3), and using equation (8), we can rewrite the optimization (9) as
(10) max x ^ R n x ^ T Λ x ^ = i = 1 n λ i x ^ i 2 subject to x ^ 2 2 = 1 (10) max x ^ R n x ^ T Λ x ^ = i = 1 n λ i x ^ i 2  subject to  x ^ 2 2 = 1 {:(10)max_( hat(x)inR^(n)) hat(x)^(T)Lambda hat(x)=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2)quad" subject to "|| hat(x)||_(2)^(2)=1:}\begin{equation} \max _{\hat{x} \in \mathbb{R}^{n}} \hat{x}^{T} \Lambda \hat{x}=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2} \quad \text { subject to }\|\hat{x}\|_{2}^{2}=1 \end{equation}
Then, we have that the objective is upper bounded by λ 1 λ 1 lambda_(1)\lambda_{1}:
(11) x ^ T Λ x ^ = i = 1 n λ i x ^ i 2 i = 1 n λ 1 x ^ i 2 = λ 1 (11) x ^ T Λ x ^ = i = 1 n λ i x ^ i 2 i = 1 n λ 1 x ^ i 2 = λ 1 {:(11) hat(x)^(T)Lambda hat(x)=sum_(i=1)^(n)lambda_(i) hat(x)_(i)^(2) <= sum_(i=1)^(n)lambda_(1) hat(x)_(i)^(2)=lambda_(1):}\begin{equation} \hat{x}^{T} \Lambda \hat{x}=\sum_{i=1}^{n} \lambda_{i} \hat{x}_{i}^{2} \leq \sum_{i=1}^{n} \lambda_{1} \hat{x}_{i}^{2}=\lambda_{1} \end{equation}
Moreover, setting x ^ = [ 1 0 0 ] x ^ = 1 0 0 hat(x)=[[1],[0],[vdots],[0]]\hat{x}=\left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right] achieves the equality in the equation above, and this corresponds to setting x = u 1 x = u 1 x=u_(1)x=u_{1}.
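A numerical sketch of this claim (ours, assuming numpy): the top eigenvector attains λ_1, and random unit vectors never exceed it.

```python
# Sketch: max of x^T A x over unit vectors equals the largest eigenvalue,
# attained at a corresponding eigenvector.
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
A = 0.5 * (B + B.T)

lam, U = np.linalg.eigh(A)               # eigenvalues in ascending order
u_top = U[:, -1]                         # eigenvector for the largest eigenvalue

print(np.isclose(u_top @ A @ u_top, lam[-1]))   # attains lambda_1

# Random unit vectors never exceed lambda_1 (up to numerical tolerance).
xs = rng.standard_normal((1000, 5))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
quad = np.einsum('ij,jk,ik->i', xs, A, xs)      # x^T A x for each row x
print(np.all(quad <= lam[-1] + 1e-10))          # expect True
```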

4. Matrix Calculus

While the topics in the previous sections are typically covered in a standard course on linear algebra, one topic that does not seem to be covered very often (and which we will use extensively) is the extension of calculus to the vector setting. Despite the fact that all the actual calculus we use is relatively trivial, the notation can often make things look much more difficult than they are. In this section we present some basic definitions of matrix calculus and provide a few examples.

4.1. The Gradient

Suppose that f : R m × n R f : R m × n R f:R^(m xx n)rarrRf: \mathbb{R}^{m \times n} \rightarrow \mathbb{R} is a function that takes as input a matrix A A AA of size m × n m × n m xx nm \times n and returns a real value. Then the gradient of f f ff (with respect to A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} ) is the matrix of partial derivatives, defined as:
A f ( A ) R m × n = [ f ( A ) A 11 f ( A ) A 12 f ( A ) A 1 n f ( A ) A 21 f ( A ) A 22 f ( A ) A 2 n f ( A ) A m 1 f ( A ) A m 2 f ( A ) A m n ] A f ( A ) R m × n = f ( A ) A 11 f ( A ) A 12 f ( A ) A 1 n f ( A ) A 21 f ( A ) A 22 f ( A ) A 2 n f ( A ) A m 1 f ( A ) A m 2 f ( A ) A m n grad_(A)f(A)inR^(m xx n)=[[(del f(A))/(delA_(11)),(del f(A))/(delA_(12)),cdots,(del f(A))/(delA_(1n))],[(del f(A))/(delA_(21)),(del f(A))/(delA_(22)),cdots,(del f(A))/(delA_(2n))],[vdots,vdots,ddots,vdots],[(del f(A))/(delA_(m1)),(del f(A))/(delA_(m2)),cdots,(del f(A))/(delA_(mn))]]\nabla_{A} f(A) \in \mathbb{R}^{m \times n}=\left[\begin{array}{cccc} \frac{\partial f(A)}{\partial A_{11}} & \frac{\partial f(A)}{\partial A_{12}} & \cdots & \frac{\partial f(A)}{\partial A_{1n}} \\ \frac{\partial f(A)}{\partial A_{21}} & \frac{\partial f(A)}{\partial A_{22}} & \cdots & \frac{\partial f(A)}{\partial A_{2 n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f(A)}{\partial A_{m 1}} & \frac{\partial f(A)}{\partial A_{m 2}} & \cdots & \frac{\partial f(A)}{\partial A_{m n}} \end{array}\right]
i.e., an m × n m × n m xx nm \times n matrix with
( A f ( A ) ) i j = f ( A ) A i j . A f ( A ) i j = f ( A ) A i j . (grad_(A)f(A))_(ij)=(del f(A))/(delA_(ij)).\left(\nabla_{A} f(A)\right)_{i j}=\frac{\partial f(A)}{\partial A_{i j}} .
Note that the size of A f ( A ) A f ( A ) grad_(A)f(A)\nabla_{A} f(A) is always the same as the size of A A AA. So if, in particular, A A AA is just a vector x R n x R n x inR^(n)x \in \mathbb{R}^{n},
x f ( x ) = [ f ( x ) x 1 f ( x ) x 2 f ( x ) x n ] . x f ( x ) = f ( x ) x 1 f ( x ) x 2 f ( x ) x n . grad_(x)f(x)=[[(del f(x))/(delx_(1))],[(del f(x))/(delx_(2))],[vdots],[(del f(x))/(delx_(n))]].\nabla_{x} f(x)=\left[\begin{array}{c} \frac{\partial f(x)}{\partial x_{1}} \\ \frac{\partial f(x)}{\partial x_{2}} \\ \vdots \\ \frac{\partial f(x)}{\partial x_{n}} \end{array}\right] .
It is very important to remember that the gradient of a function is only defined if the function is real-valued, that is, if it returns a scalar value. We can not, for example, take the gradient of A x , A R n × n A x , A R n × n Ax,A inR^(n xx n)A x, A \in \mathbb{R}^{n \times n} with respect to x x xx, since this quantity is vector-valued. It follows directly from the equivalent properties of partial derivatives that:
  • x ( f ( x ) + g ( x ) ) = x f ( x ) + x g ( x ) x ( f ( x ) + g ( x ) ) = x f ( x ) + x g ( x ) grad_(x)(f(x)+g(x))=grad_(x)f(x)+grad_(x)g(x)\nabla_{x}(f(x)+g(x))=\nabla_{x} f(x)+\nabla_{x} g(x).
  • For t R , x ( t f ( x ) ) = t x f ( x ) t R , x ( t f ( x ) ) = t x f ( x ) t inR,grad_(x)(tf(x))=tgrad_(x)f(x)t \in \mathbb{R}, \nabla_{x}(t f(x))=t \nabla_{x} f(x).
In principle, gradients are a natural extension of partial derivatives to functions of multiple variables. In practice, however, working with gradients can sometimes be tricky for notational reasons. For example, suppose that A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} is a matrix of fixed coefficients and suppose that b R m b R m b inR^(m)b \in \mathbb{R}^{m} is a vector of fixed coefficients. Let f : R m R f : R m R f:R^(m)rarrRf: \mathbb{R}^{m} \rightarrow \mathbb{R} be the function defined by f ( z ) = z T z f ( z ) = z T z f(z)=z^(T)zf(z)=z^{T} z, such that z f ( z ) = 2 z z f ( z ) = 2 z grad_(z)f(z)=2z\nabla_{z} f(z)=2 z. But now, consider the expression,
f ( A x ) . f ( A x ) . grad f(Ax).\nabla f(A x) .
How should this expression be interpreted? There are at least two possibilities:
  1. In the first interpretation, recall that z f ( z ) = 2 z z f ( z ) = 2 z grad_(z)f(z)=2z\nabla_{z} f(z)=2 z. Here, we interpret f ( A x ) f ( A x ) grad f(Ax)\nabla f(A x) as evaluating the gradient at the point A x A x AxA x, hence, f ( A x ) = 2 ( A x ) = 2 A x R m . f ( A x ) = 2 ( A x ) = 2 A x R m . grad f(Ax)=2(Ax)=2Ax inR^(m).\nabla f(A x)=2(A x)=2 A x \in \mathbb{R}^{m} .
  2. In the second interpretation, we consider the quantity f ( A x ) f ( A x ) f(Ax)f(A x) as a function of the input variables x x xx. More formally, let g ( x ) = f ( A x ) g ( x ) = f ( A x ) g(x)=f(Ax)g(x)=f(A x). Then in this interpretation, f ( A x ) = x g ( x ) R n . f ( A x ) = x g ( x ) R n . grad f(Ax)=grad_(x)g(x)inR^(n).\nabla f(A x)=\nabla_{x} g(x) \in \mathbb{R}^{n} .
Here, we can see that these two interpretations are indeed different. One interpretation yields an m m mm-dimensional vector as a result, while the other interpretation yields an n n nn-dimensional vector as a result! How can we resolve this?
Here, the key is to make explicit the variables which we are differentiating with respect to. In the first case, we are differentiating the function f f ff with respect to its arguments z z zz and then substituting the argument A x A x AxA x. In the second case, we are differentiating the composite function g ( x ) = f ( A x ) g ( x ) = f ( A x ) g(x)=f(Ax)g(x)=f(A x) with respect to x x xx directly. We denote the first case as z f ( A x ) z f ( A x ) grad_(z)f(Ax)\nabla_{z} f(A x) and the second case as x f ( A x ) x f ( A x ) grad_(x)f(Ax)\nabla_{x} f(A x).[9] Keeping the notation clear is extremely important (as you'll find out in your homework, in fact!).
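To make the distinction concrete, here is a small sketch of our own computing both quantities for f(z) = z^T z; note the differing dimensions of the two results.

```python
# Sketch: the two interpretations of "gradient of f(Ax)" for f(z) = z^T z.
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 2
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

grad_z = 2 * A @ x            # nabla_z f evaluated at z = Ax        (an m-vector)
grad_x = 2 * A.T @ A @ x      # nabla_x of g(x) = f(Ax) = x^T A^T A x (an n-vector)

print(grad_z.shape, grad_x.shape)  # (3,) (2,)
```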

4.2. The Hessian

Suppose that f : R n R f : R n R f:R^(n)rarrRf: \mathbb{R}^{n} \rightarrow \mathbb{R} is a function that takes a vector in R n R n R^(n)\mathbb{R}^{n} and returns a real number. Then the Hessian matrix with respect to x x xx, written x 2 f ( x ) x 2 f ( x ) grad_(x)^(2)f(x)\nabla_{x}^{2} f(x) or simply as H H HH is the n × n n × n n xx nn \times n matrix of partial derivatives,
x 2 f ( x ) R n × n = [ 2 f ( x ) x 1 2 2 f ( x ) x 1 x 2 2 f ( x ) x 1 x n 2 f ( x ) x 2 x 1 2 f ( x ) x 2 2 2 f ( x ) x 2 x n 2 f ( x ) x n x 1 2 f ( x ) x n x 2 2 f ( x ) x n 2 ] . x 2 f ( x ) R n × n = 2 f ( x ) x 1 2 2 f ( x ) x 1 x 2 2 f ( x ) x 1 x n 2 f ( x ) x 2 x 1 2 f ( x ) x 2 2 2 f ( x ) x 2 x n 2 f ( x ) x n x 1 2 f ( x ) x n x 2 2 f ( x ) x n 2 . grad_(x)^(2)f(x)inR^(n xx n)=[[(del^(2)f(x))/(delx_(1)^(2)),(del^(2)f(x))/(delx_(1)delx_(2)),cdots,(del^(2)f(x))/(delx_(1)delx_(n))],[(del^(2)f(x))/(delx_(2)delx_(1)),(del^(2)f(x))/(delx_(2)^(2)),cdots,(del^(2)f(x))/(delx_(2)delx_(n))],[vdots,vdots,ddots,vdots],[(del^(2)f(x))/(delx_(n)delx_(1)),(del^(2)f(x))/(delx_(n)delx_(2)),cdots,(del^(2)f(x))/(delx_(n)^(2))]].\nabla_{x}^{2} f(x) \in \mathbb{R}^{n \times n}=\left[\begin{array}{cccc} \frac{\partial^{2} f(x)}{\partial x_{1}^{2}} & \frac{\partial^{2} f(x)}{\partial x_{1} \partial x_{2}} & \cdots & \frac{\partial^{2} f(x)}{\partial x_{1} \partial x_{n}} \\ \frac{\partial^{2} f(x)}{\partial x_{2} \partial x_{1}} & \frac{\partial^{2} f(x)}{\partial x_{2}^{2}} & \cdots & \frac{\partial^{2} f(x)}{\partial x_{2} \partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^{2} f(x)}{\partial x_{n} \partial x_{1}} & \frac{\partial^{2} f(x)}{\partial x_{n} \partial x_{2}} & \cdots & \frac{\partial^{2} f(x)}{\partial x_{n}^{2}} \end{array}\right] .
In other words, x 2 f ( x ) R n × n x 2 f ( x ) R n × n grad_(x)^(2)f(x)inR^(n xx n)\nabla_{x}^{2} f(x) \in \mathbb{R}^{n \times n}, with
( x 2 f ( x ) ) i j = 2 f ( x ) x i x j . ( x 2 f ( x ) ) i j = 2 f ( x ) x i x j . (grad_(x)^(2)f(x))_(ij)=(del^(2)f(x))/(delx_(i)delx_(j)).(\nabla_{x}^{2} f(x))_{i j}=\frac{\partial^{2} f(x)}{\partial x_{i} \partial x_{j}} .
Note that the Hessian is always symmetric, since
2 f ( x ) x i x j = 2 f ( x ) x j x i . 2 f ( x ) x i x j = 2 f ( x ) x j x i . (del^(2)f(x))/(delx_(i)delx_(j))=(del^(2)f(x))/(delx_(j)delx_(i)).\frac{\partial^{2} f(x)}{\partial x_{i} \partial x_{j}}=\frac{\partial^{2} f(x)}{\partial x_{j} \partial x_{i}} .
Similar to the gradient, the Hessian is defined only when f ( x ) f ( x ) f(x)f(x) is real-valued.
It is natural to think of the gradient as the analogue of the first derivative for functions of vectors, and the Hessian as the analogue of the second derivative (and the symbols we use also suggest this relation). This intuition is generally correct, but there are a few caveats to keep in mind.
First, for real-valued functions of one variable f : R R f : R R f:RrarrRf: \mathbb{R} \rightarrow \mathbb{R}, it is a basic definition that the second derivative is the derivative of the first derivative, i.e.,
2 f ( x ) x 2 = x x f ( x ) . 2 f ( x ) x 2 = x x f ( x ) . (del^(2)f(x))/(delx^(2))=(del)/(del x)(del)/(del x)f(x).\frac{\partial^{2} f(x)}{\partial x^{2}}=\frac{\partial}{\partial x} \frac{\partial}{\partial x} f(x).
However, for functions of a vector, the gradient of the function is a vector, and we cannot take the gradient of a vector – i.e.,
x x f ( x ) = x [ f ( x ) x 1 f ( x ) x 2 f ( x ) x n ] x x f ( x ) = x f ( x ) x 1 f ( x ) x 2 f ( x ) x n grad_(x)grad_(x)f(x)=grad_(x)[[(del f(x))/(delx_(1))],[(del f(x))/(delx_(2))],[vdots],[(del f(x))/(delx_(n))]]\nabla_{x} \nabla_{x} f(x)=\nabla_{x}\left[\begin{array}{c} \frac{\partial f(x)}{\partial x_{1}} \\ \frac{\partial f(x)}{\partial x_{2}} \\ \vdots \\ \frac{\partial f(x)}{\partial x_{n}} \end{array}\right]
and this expression is not defined. Therefore, it is not the case that the Hessian is the gradient of the gradient. However, this is almost true, in the following sense: If we look at the i i iith entry of the gradient ( x f ( x ) ) i = f ( x ) / x i x f ( x ) i = f ( x ) / x i (grad_(x)f(x))_(i)=del f(x)//delx_(i)\left(\nabla_{x} f(x)\right)_{i}=\partial f(x) / \partial x_{i}, and take the gradient with respect to x x xx we get
x f ( x ) x i = [ 2 f ( x ) x i x 1 2 f ( x ) x i x 2 2 f ( x ) x i x n ] x f ( x ) x i = 2 f ( x ) x i x 1 2 f ( x ) x i x 2 2 f ( x ) x i x n grad_(x)(del f(x))/(delx_(i))=[[(del^(2)f(x))/(delx_(i)delx_(1))],[(del^(2)f(x))/(delx_(i)delx_(2))],[vdots],[(del^(2)f(x))/(delx_(i)delx_(n))]]\nabla_{x} \frac{\partial f(x)}{\partial x_{i}}=\left[\begin{array}{c} \frac{\partial^{2} f(x)}{\partial x_{i} \partial x_{1}} \\ \frac{\partial^{2} f(x)}{\partial x_{i} \partial x_{2}} \\ \vdots \\ \frac{\partial^{2} f(x)}{\partial x_{i} \partial x_{n}} \end{array}\right]
which is the i i iith column (or row) of the Hessian. Therefore,
x 2 f ( x ) = [ x ( x f ( x ) ) 1 x ( x f ( x ) ) 2 x ( x f ( x ) ) n ] . x 2 f ( x ) = x x f ( x ) 1      x x f ( x ) 2           x x f ( x ) n . grad_(x)^(2)f(x)=[[grad_(x)(grad_(x)f(x))_(1),grad_(x)(grad_(x)f(x))_(2),cdots,grad_(x)(grad_(x)f(x))_(n)]].\nabla_{x}^{2} f(x)=\left[\begin{array}{llll} \nabla_{x}\left(\nabla_{x} f(x)\right)_{1} & \nabla_{x}\left(\nabla_{x} f(x)\right)_{2} & \cdots & \nabla_{x}\left(\nabla_{x} f(x)\right)_{n} \end{array}\right] .
If we don't mind being a little bit sloppy we can say that (essentially) x 2 f ( x ) = x ( x f ( x ) ) T x 2 f ( x ) = x x f ( x ) T grad_(x)^(2)f(x)=grad_(x)(grad_(x)f(x))^(T)\nabla_{x}^{2} f(x)=\nabla_{x}\left(\nabla_{x} f(x)\right)^{T}, so long as we understand that this really means taking the gradient of each entry of ( x f ( x ) ) T x f ( x ) T (grad_(x)f(x))^(T)\left(\nabla_{x} f(x)\right)^{T}, not the gradient of the whole vector.
Finally, note that while we can take the gradient with respect to a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n}, for the purposes of this class we will only consider taking the Hessian with respect to a vector x R n x R n x inR^(n)x \in \mathbb{R}^{n}. This is simply a matter of convenience (and the fact that none of the calculations we do require us to find the Hessian with respect to a matrix), since the Hessian with respect to a matrix would have to represent all the partial derivatives 2 f ( A ) / ( A i j A k ) 2 f ( A ) / A i j A k del^(2)f(A)//(delA_(ij)delA_(kℓ))\partial^{2} f(A) /\left(\partial A_{i j} \partial A_{k \ell}\right), and it is rather cumbersome to represent this as a matrix.

4.3. Gradients and Hessians of Quadratic and Linear Functions

Now let's try to determine the gradient and Hessian matrices for a few simple functions. It should be noted that all the gradients given here are special cases of the gradients given in the CS229 lecture notes.
For x R n x R n x inR^(n)x \in \mathbb{R}^{n}, let f ( x ) = b T x f ( x ) = b T x f(x)=b^(T)xf(x)=b^{T} x for some known vector b R n b R n b inR^(n)b \in \mathbb{R}^{n}. Then
f ( x ) = i = 1 n b i x i f ( x ) = i = 1 n b i x i f(x)=sum_(i=1)^(n)b_(i)x_(i)f(x)=\sum_{i=1}^{n} b_{i} x_{i}
so
f ( x ) x k = x k i = 1 n b i x i = b k . f ( x ) x k = x k i = 1 n b i x i = b k . (del f(x))/(delx_(k))=(del)/(delx_(k))sum_(i=1)^(n)b_(i)x_(i)=b_(k).\frac{\partial f(x)}{\partial x_{k}}=\frac{\partial}{\partial x_{k}} \sum_{i=1}^{n} b_{i} x_{i}=b_{k} .
From this we can easily see that x b T x = b x b T x = b grad_(x)b^(T)x=b\nabla_{x} b^{T} x=b. This should be compared to the analogous situation in single variable calculus, where / ( x ) a x = a / ( x ) a x = a del//(del x)ax=a\partial /(\partial x) a x=a.
Now consider the quadratic function f ( x ) = x T A x f ( x ) = x T A x f(x)=x^(T)Axf(x)=x^{T} A x for A S n A S n A inS^(n)A \in \mathbb{S}^{n}. Remember that
f ( x ) = i = 1 n j = 1 n A i j x i x j . f ( x ) = i = 1 n j = 1 n A i j x i x j . f(x)=sum_(i=1)^(n)sum_(j=1)^(n)A_(ij)x_(i)x_(j).f(x)=\sum_{i=1}^{n} \sum_{j=1}^{n} A_{i j} x_{i} x_{j} .
To take the partial derivative, we'll consider the terms including x k x k x_(k)x_{k} and x k 2 x k 2 x_(k)^(2)x_{k}^{2} factors separately:
f ( x ) x k = x k i = 1 n j = 1 n A i j x i x j = x k [ i k j k A i j x i x j + i k A i k x i x k + j k A k j x k x j + A k k x k 2 ] = i k A i k x i + j k A k j x j + 2 A k k x k = i = 1 n A i k x i + j = 1 n A k j x j = 2 i = 1 n A k i x i , f ( x ) x k = x k i = 1 n j = 1 n A i j x i x j = x k i k j k A i j x i x j + i k A i k x i x k + j k A k j x k x j + A k k x k 2 = i k A i k x i + j k A k j x j + 2 A k k x k = i = 1 n A i k x i + j = 1 n A k j x j = 2 i = 1 n A k i x i , {:[(del f(x))/(delx_(k))=(del)/(delx_(k))sum_(i=1)^(n)sum_(j=1)^(n)A_(ij)x_(i)x_(j)],[=(del)/(delx_(k))[sum_(i!=k)sum_(j!=k)A_(ij)x_(i)x_(j)+sum_(i!=k)A_(ik)x_(i)x_(k)+sum_(j!=k)A_(kj)x_(k)x_(j)+A_(kk)x_(k)^(2)]],[=sum_(i!=k)A_(ik)x_(i)+sum_(j!=k)A_(kj)x_(j)+2A_(kk)x_(k)],[=sum_(i=1)^(n)A_(ik)x_(i)+sum_(j=1)^(n)A_(kj)x_(j)=2sum_(i=1)^(n)A_(ki)x_(i)","]:}\begin{aligned} \frac{\partial f(x)}{\partial x_{k}} &=\frac{\partial}{\partial x_{k}} \sum_{i=1}^{n} \sum_{j=1}^{n} A_{i j} x_{i} x_{j} \\ &=\frac{\partial}{\partial x_{k}}\left[\sum_{i \neq k} \sum_{j \neq k} A_{i j} x_{i} x_{j}+\sum_{i \neq k} A_{i k} x_{i} x_{k}+\sum_{j \neq k} A_{k j} x_{k} x_{j}+A_{k k} x_{k}^{2}\right] \\ &=\sum_{i \neq k} A_{i k} x_{i}+\sum_{j \neq k} A_{k j} x_{j}+2 A_{k k} x_{k} \\ &=\sum_{i=1}^{n} A_{i k} x_{i}+\sum_{j=1}^{n} A_{k j} x_{j}=2 \sum_{i=1}^{n} A_{k i} x_{i}, \end{aligned}
where the last equality follows since A A AA is symmetric (which we can safely assume, since it is appearing in a quadratic form). Note that the k k kkth entry of x f ( x ) x f ( x ) grad_(x)f(x)\nabla_{x} f(x) is just the inner product of the k k kkth row of A A AA and x x xx. Therefore, x x T A x = 2 A x x x T A x = 2 A x grad_(x)x^(T)Ax=2Ax\nabla_{x} x^{T} A x=2 A x. Again, this should remind you of the analogous fact in single-variable calculus, that / ( x ) a x 2 = 2 a x / ( x ) a x 2 = 2 a x del//(del x)ax^(2)=2ax\partial /(\partial x) a x^{2}=2 a x.
Finally, let's look at the Hessian of the quadratic function f ( x ) = x T A x f ( x ) = x T A x f(x)=x^(T)Axf(x)=x^{T} A x (it should be obvious that the Hessian of a linear function b T x b T x b^(T)xb^{T} x is zero). In this case,
2 f ( x ) x k x = x k [ f ( x ) x ] = x k [ 2 i = 1 n A i x i ] = 2 A k = 2 A k . 2 f ( x ) x k x = x k f ( x ) x = x k 2 i = 1 n A i x i = 2 A k = 2 A k . (del^(2)f(x))/(delx_(k)delx_(ℓ))=(del)/(delx_(k))[(del f(x))/(delx_(ℓ))]=(del)/(delx_(k))[2sum_(i=1)^(n)A_(ℓi)x_(i)]=2A_(ℓk)=2A_(kℓ).\frac{\partial^{2} f(x)}{\partial x_{k} \partial x_{\ell}}=\frac{\partial}{\partial x_{k}}\left[\frac{\partial f(x)}{\partial x_{\ell}}\right]=\frac{\partial}{\partial x_{k}}\left[2 \sum_{i=1}^{n} A_{\ell i} x_{i}\right]=2 A_{\ell k}=2 A_{k \ell} .
Therefore, it should be clear that x 2 x T A x = 2 A x 2 x T A x = 2 A grad_(x)^(2)x^(T)Ax=2A\nabla_{x}^{2} x^{T} A x=2 A, which should be entirely expected (and again analogous to the single-variable fact that 2 / ( x 2 ) a x 2 = 2 a ) 2 / x 2 a x 2 = 2 a {:del^(2)//(delx^(2))ax^(2)=2a)\left.\partial^{2} /\left(\partial x^{2}\right) a x^{2}=2 a\right).
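Before the recap, here is a finite-difference sketch (our own illustration, not from the notes) confirming the two gradient identities derived above; the helper num_grad is a name of our choosing.

```python
# Sketch: finite-difference check of nabla_x b^T x = b and
# nabla_x x^T A x = 2 A x (A symmetric).
import numpy as np

rng = np.random.default_rng(6)
n = 4
b = rng.standard_normal(n)
B = rng.standard_normal((n, n))
A = 0.5 * (B + B.T)
x = rng.standard_normal(n)

def num_grad(f, x, eps=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

print(np.allclose(num_grad(lambda v: b @ v, x), b))              # expect True
print(np.allclose(num_grad(lambda v: v @ A @ v, x), 2 * A @ x))  # expect True
```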
To recap,
  • x b T x = b x b T x = b grad_(x)b^(T)x=b\nabla_{x} b^{T} x=b
  • x x T A x = 2 A x x x T A x = 2 A x grad_(x)x^(T)Ax=2Ax\nabla_{x} x^{T} A x=2 A x (if A A AA symmetric)
  • x 2 x T A x = 2 A x 2 x T A x = 2 A grad_(x)^(2)x^(T)Ax=2A\nabla_{x}^{2} x^{T} A x=2 A (if A A AA symmetric)

4.4. Least Squares

Let's apply the equations we obtained in the last section to derive the least squares equations. Suppose we are given a matrix A R m × n A R m × n A inR^(m xx n)A \in \mathbb{R}^{m \times n} (for simplicity we assume A A AA is full rank) and a vector b R m b R m b inR^(m)b \in \mathbb{R}^{m} such that b R ( A ) b R ( A ) b!inR(A)b \notin \mathcal{R}(A). In this situation we will not be able to find a vector x R n x R n x inR^(n)x \in \mathbb{R}^{n} such that A x = b A x = b Ax=bA x=b, so instead we want to find a vector x x xx such that A x A x AxA x is as close as possible to b b bb, as measured by the square of the Euclidean norm A x b 2 2 A x b 2 2 ||Ax-b||_(2)^(2)\|A x-b\|_{2}^{2}.
Using the fact that x 2 2 = x T x x 2 2 = x T x ||x||_(2)^(2)=x^(T)x\|x\|_{2}^{2}=x^{T} x, we have
A x b 2 2 = ( A x b ) T ( A x b ) = x T A T A x 2 b T A x + b T b A x b 2 2 = ( A x b ) T ( A x b ) = x T A T A x 2 b T A x + b T b {:[||Ax-b||_(2)^(2)=(Ax-b)^(T)(Ax-b)],[=x^(T)A^(T)Ax-2b^(T)Ax+b^(T)b]:}\begin{aligned} \|A x-b\|_{2}^{2} &=(A x-b)^{T}(A x-b) \\ &=x^{T} A^{T} A x-2 b^{T} A x+b^{T} b \end{aligned}
Taking the gradient with respect to x x xx, and using the properties we derived in the previous section, we have
x ( x T A T A x 2 b T A x + b T b ) = x x T A T A x x 2 b T A x + x b T b = 2 A T A x 2 A T b x x T A T A x 2 b T A x + b T b = x x T A T A x x 2 b T A x + x b T b = 2 A T A x 2 A T b {:[grad_(x)(x^(T)A^(T)Ax-2b^(T)Ax+b^(T)b)=grad_(x)x^(T)A^(T)Ax-grad_(x)2b^(T)Ax+grad_(x)b^(T)b],[=2A^(T)Ax-2A^(T)b]:}\begin{aligned} \nabla_{x}\left(x^{T} A^{T} A x-2 b^{T} A x+b^{T} b\right) &=\nabla_{x} x^{T} A^{T} A x-\nabla_{x} 2 b^{T} A x+\nabla_{x} b^{T} b \\ &=2 A^{T} A x-2 A^{T} b \end{aligned}
Setting this last expression equal to zero and solving for x x xx gives the normal equations
x = ( A T A ) 1 A T b x = A T A 1 A T b x=(A^(T)A)^(-1)A^(T)bx=\left(A^{T} A\right)^{-1} A^{T} b
which is the same as what we derived in class.
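As a sketch of our own, the normal-equation solution can be compared against numpy's built-in least-squares solver, which is the numerically preferable route in practice.

```python
# Sketch: least squares via the normal equations derived above vs. numpy's
# built-in solver.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((10, 3))          # full-rank with m > n
b = rng.standard_normal(10)               # generically not in the range of A

x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # (A^T A)^{-1} A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # numerically preferred

print(np.allclose(x_normal, x_lstsq))     # expect True
```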

4.5. Gradients of the Determinant

Now let's consider a situation where we find the gradient of a function with respect to a matrix, namely for A R n × n A R n × n A inR^(n xx n)A \in \mathbb{R}^{n \times n}, we want to find A | A | A | A | grad_(A)|A|\nabla_{A}|A|. Recall from our discussion of determinants that
| A | = i = 1 n ( 1 ) i + j A i j | A i , j | ( for any j 1 , , n ) | A | = i = 1 n ( 1 ) i + j A i j A i , j ( for any  j 1 , , n ) |A|=sum_(i=1)^(n)(-1)^(i+j)A_(ij)|A_(\\i,\\j)|quad("for any "j in1,dots,n)|A|=\sum_{i=1}^{n}(-1)^{i+j} A_{i j}\left|A_{\backslash i, \backslash j}\right| \quad(\text {for any } j \in 1, \ldots, n)
so
A k | A | = A k i = 1 n ( 1 ) i + j A i j | A i , j | = ( 1 ) k + | A k , | = ( adj ( A ) ) k . A k | A | = A k i = 1 n ( 1 ) i + j A i j A i , j = ( 1 ) k + A k , = ( adj ( A ) ) k . (del)/(delA_(kℓ))|A|=(del)/(delA_(kℓ))sum_(i=1)^(n)(-1)^(i+j)A_(ij)|A_(\\i,\\j)|=(-1)^(k+ℓ)|A_(\\k,\\ℓ)|=(adj(A))_(ℓk).\frac{\partial}{\partial A_{k \ell}}|A|=\frac{\partial}{\partial A_{k \ell}} \sum_{i=1}^{n}(-1)^{i+j} A_{i j}\left|A_{\backslash i, \backslash j}\right|=(-1)^{k+\ell}\left|A_{\backslash k, \backslash \ell}\right|=(\operatorname{adj}(A))_{\ell k} .
From this it immediately follows from the properties of the adjoint that
A | A | = ( adj ( A ) ) T = | A | A T . A | A | = ( adj ( A ) ) T = | A | A T . grad_(A)|A|=(adj(A))^(T)=|A|A^(-T).\nabla_{A}|A|=(\operatorname{adj}(A))^{T}=|A| A^{-T} .
Now let's consider the function f : S + + n R , f ( A ) = log | A | f : S + + n R , f ( A ) = log | A | f:S_(++)^(n)rarrR,f(A)=log |A|f: \mathbb{S}_{++}^{n} \rightarrow \mathbb{R}, f(A)=\log |A|. Note that we have to restrict the domain of f f ff to be the positive definite matrices, since this ensures that | A | > 0 | A | > 0 |A| > 0|A|>0, so that the log of | A | | A | |A||A| is a real number. In this case we can use the chain rule (nothing fancy, just the ordinary chain rule from single-variable calculus) to see that
log | A | A i j = log | A | | A | | A | A i j = 1 | A | | A | A i j . log | A | A i j = log | A | | A | | A | A i j = 1 | A | | A | A i j . (del log |A|)/(delA_(ij))=(del log |A|)/(del|A|)(del|A|)/(delA_(ij))=(1)/(|A|)(del|A|)/(delA_(ij)).\frac{\partial \log |A|}{\partial A_{i j}}=\frac{\partial \log |A|}{\partial|A|} \frac{\partial|A|}{\partial A_{i j}}=\frac{1}{|A|} \frac{\partial|A|}{\partial A_{i j}} .
From this it should be obvious that
A log | A | = 1 | A | A | A | = A 1 , A log | A | = 1 | A | A | A | = A 1 , grad_(A)log |A|=(1)/(|A|)grad_(A)|A|=A^(-1),\nabla_{A} \log |A|=\frac{1}{|A|} \nabla_{A}|A|=A^{-1},
where we can drop the transpose in the last expression because A A AA is symmetric. Note the similarity to the single-variable case, where / ( x ) log x = 1 / x / ( x ) log x = 1 / x del//(del x)log x=1//x\partial /(\partial x) \log x=1 / x.
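A finite-difference sketch of this identity (our own, for a random symmetric positive definite matrix):

```python
# Sketch: numerically checking that nabla_A log|A| = A^{-1} for a symmetric
# positive definite A.
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((3, 3))
A = B @ B.T + 3 * np.eye(3)               # symmetric positive definite

eps = 1e-6
G = np.zeros_like(A)                      # finite-difference gradient of log|A|
for i in range(3):
    for j in range(3):
        E = np.zeros_like(A)
        E[i, j] = eps
        G[i, j] = (np.log(np.linalg.det(A + E)) - np.log(np.linalg.det(A - E))) / (2 * eps)

print(np.allclose(G, np.linalg.inv(A), atol=1e-5))  # expect True
```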

4.6. Eigenvalues as Optimization

Finally, we use matrix calculus to solve an optimization problem in a way that leads directly to eigenvalue/eigenvector analysis. Consider the following, equality constrained optimization problem:
max x R n x T A x subject to x 2 2 = 1 max x R n x T A x subject to  x 2 2 = 1 max_(x inR^(n))x^(T)Ax quad"subject to "||x||_(2)^(2)=1\max _{x \in \mathbb{R}^{n}} x^{T} A x \quad \text {subject to }\|x\|_{2}^{2}=1
for a symmetric matrix A S n A S n A inS^(n)A \in \mathbb{S}^{n}. A standard way of solving optimization problems with equality constraints is by forming the Lagrangian, an objective function that includes the equality constraints.[10] The Lagrangian in this case can be given by
L ( x , λ ) = x T A x λ ( x T x 1 ) L ( x , λ ) = x T A x λ x T x 1 L(x,lambda)=x^(T)Ax-lambda(x^(T)x-1)\mathcal{L}(x, \lambda)=x^{T} A x-\lambda\left(x^{T} x-1\right)
where λ λ lambda\lambda is called the Lagrange multiplier associated with the equality constraint. It can be established that for x x x^(**)x^{*} to be an optimal point of the problem, the gradient of the Lagrangian has to be zero at x x x^(**)x^{*} (this is not the only condition, but it is required). That is,
x L ( x , λ ) = x ( x T A x λ x T x ) = 2 A T x 2 λ x = 0 . x L ( x , λ ) = x x T A x λ x T x = 2 A T x 2 λ x = 0 . grad_(x)L(x,lambda)=grad_(x)(x^(T)Ax-lambdax^(T)x)=2A^(T)x-2lambda x=0.\nabla_{x} \mathcal{L}(x, \lambda)=\nabla_{x}\left(x^{T} A x-\lambda x^{T} x\right)=2 A^{T} x-2 \lambda x=0 .
Notice that this is just the linear equation A x = λ x A x = λ x Ax=lambda xA x=\lambda x. This shows that the only points which can possibly maximize (or minimize) x T A x x T A x x^(T)Axx^{T} A x assuming x T x = 1 x T x = 1 x^(T)x=1x^{T} x=1 are the eigenvectors of A A AA.

  1. E.g., if you could write all your math derivations with matrices or vectors, it would be better than doing them with scalar elements. ↩︎
  2. It's easy to get confused and think that non-singular means non-invertible. But in fact, it means the opposite! Watch out! ↩︎
  3. Admittedly, we have not actually defined what we mean by "volume" here, but hopefully the intuition should be clear enough. When n = 2 n = 2 n=2n=2, our notion of "volume" corresponds to the area of S S SS in the Cartesian plane. When n = 3 n = 3 n=3n=3, "volume" corresponds with our usual notion of volume for a three-dimensional object. ↩︎
  4. Note that λ λ lambda\lambda and the entries of x x xx are actually in C C C\mathbb{C}, the set of complex numbers, not just the reals; we will see shortly why this is necessary. Don't worry about this technicality for now, you can think of complex vectors in the same way as real vectors. ↩︎
  5. Mathematically, we have i , A u i = λ i u i , u i 2 = 1 i , A u i = λ i u i , u i 2 = 1 AA i,Au_(i)=lambda_(i)u_(i),||u_(i)||_(2)=1\forall i, A u_{i}=\lambda_{i} u_{i},\left\|u_{i}\right\|_{2}=1, and j i , u i T u j = 0 j i , u i T u j = 0 AA j!=i,u_(i)^(T)u_(j)=0\forall j \neq i, u_{i}^{T} u_{j}=0. Moreover, we remark that it is not true for every matrix A A AA that eigenvectors u 1 , , u n u 1 , , u n u_(1),dots,u_(n)u_{1}, \ldots, u_{n} satisfying a) are orthogonal to each other, because eigenvalues can be repeated, and so can eigenvectors. ↩︎
  6. Here for notational simplicity, we deviate from the notational convention for columns of matrices in the previous sections. ↩︎
  7. Note that x ^ 0 x 0 x ^ 0 x 0 hat(x)!=0<=>x!=0\hat{x} \neq 0 \Leftrightarrow x \neq 0. ↩︎
  8. Note that x = U x ^ x = U x ^ x=U hat(x)x=U \hat{x} and therefore constructing x ^ x ^ hat(x)\hat{x} gives an implicit construction of x x xx. ↩︎
  9. A drawback to this notation that we will have to live with is the fact that in the first case, z f ( A x ) z f ( A x ) grad_(z)f(Ax)\nabla_{z} f(A x) it appears that we are differentiating with respect to a variable that does not even appear in the expression being differentiated! For this reason, the first case is often written as f ( A x ) f ( A x ) grad f(Ax)\nabla f(A x), and the fact that we are differentiating with respect to the arguments of f f ff is understood. However, the second case is always written as x f ( A x ) x f ( A x ) grad_(x)f(Ax)\nabla_{x} f(A x). ↩︎
  10. Don't worry if you haven't seen Lagrangians before, as we will cover them in greater detail later in CS229. ↩︎
