A vector-valued random variable $X = [X_1 \; \cdots \; X_n]^T$ is said to have a multivariate normal (or Gaussian) distribution with mean $\mu \in \mathbf{R}^n$ and covariance matrix $\Sigma \in \mathbf{S}_{++}^n$[1] if its probability density function[2] is given by

$$p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right).$$

We write this as $X \sim \mathcal{N}(\mu, \Sigma)$.
For comparison, recall that the density of a univariate Gaussian with mean $\mu$ and variance $\sigma^2$ is

$$p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{1}{2\sigma^2}(x-\mu)^2\right).$$

Here, the argument of the exponential function, $-\frac{1}{2\sigma^2}(x-\mu)^2$, is a quadratic function of the variable $x$. Furthermore, the parabola points downwards, as the coefficient of the quadratic term is negative. The coefficient in front, $\frac{1}{\sqrt{2\pi}\sigma}$, is a constant that does not depend on $x$; hence, we can think of it as simply a "normalization factor" used to ensure that

$$\frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\sigma^2}(x-\mu)^2\right) dx = 1.$$
Figure 1: The figure on the left shows a univariate Gaussian density for a single variable $X$. The figure on the right shows a multivariate Gaussian density over two variables $X_1$ and $X_2$.
In the case of the multivariate Gaussian density, the argument of the exponential function, $-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)$, is a quadratic form in the vector variable $x$. Since $\Sigma$ is positive definite, and since the inverse of any positive definite matrix is also positive definite, then for any non-zero vector $z$, $z^T \Sigma^{-1} z > 0$. This implies that for any vector $x \neq \mu$,

$$(x-\mu)^T \Sigma^{-1} (x-\mu) > 0, \qquad -\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu) < 0.$$
Like in the univariate case, you can think of the argument of the exponential function as being a downward opening quadratic bowl. The coefficient in front (i.e., $\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}$) has an even more complicated form than in the univariate case. However, it still does not depend on $x$, and hence it is again simply a normalization factor used to ensure that
$$\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) dx_1\, dx_2 \cdots dx_n = 1.$$
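As a quick numerical sanity check, the sketch below (a minimal illustration of our own; the helper name `gaussian_density` and the example values are not from the notes) evaluates the density directly from the formula and compares the result against `scipy.stats.multivariate_normal`.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_density(x, mu, Sigma):
    """Evaluate the multivariate Gaussian density p(x; mu, Sigma) from the formula."""
    n = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma)))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

mu = np.array([3.0, 2.0])
Sigma = np.array([[10.0, 5.0], [5.0, 5.0]])   # covariance matrix from Figure 2
x = np.array([4.0, 1.0])

print(gaussian_density(x, mu, Sigma))                   # direct evaluation
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))   # reference value from scipy
```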
2. The covariance matrix
The concept of the covariance matrix is vital to understanding multivariate Gaussian distributions. Recall that for a pair of random variables $X$ and $Y$, their covariance is defined as

$$\mathrm{Cov}[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y].$$
When working with multiple variables, the covariance matrix provides a succinct way to summarize the covariances of all pairs of variables. In particular, the covariance matrix, which we usually denote as $\Sigma$, is the $n \times n$ matrix whose $(i, j)$th entry is $\mathrm{Cov}[X_i, X_j]$.
The following proposition (whose proof is provided in Appendix A.1) gives an alternative way to characterize the covariance matrix of a random vector $X$:
Proposition 1. For any random vector $X$ with mean $\mu$ and covariance matrix $\Sigma$,

$$\Sigma = E\left[(X-\mu)(X-\mu)^T\right] = E[XX^T] - \mu\mu^T. \tag{1}$$
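A brief empirical check of Proposition 1 (this sketch is our own; the sample size and example parameters are arbitrary): draw many samples of $X$ and estimate both expressions in (1).

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([3.0, 2.0])
Sigma = np.array([[10.0, 5.0], [5.0, 5.0]])

# Draw samples of X ~ N(mu, Sigma) and estimate both sides of equation (1).
X = rng.multivariate_normal(mu, Sigma, size=200_000)

centered = X - mu
cov_centered = centered.T @ centered / X.shape[0]          # E[(X - mu)(X - mu)^T]
cov_uncentered = X.T @ X / X.shape[0] - np.outer(mu, mu)   # E[X X^T] - mu mu^T

print(np.round(cov_centered, 2))     # both estimates approach Sigma
print(np.round(cov_uncentered, 2))
```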
In the definition of multivariate Gaussians, we required that the covariance matrix $\Sigma$ be symmetric positive definite (i.e., $\Sigma \in \mathbf{S}_{++}^n$). Why does this restriction exist? As seen in the following proposition, the covariance matrix of any random vector must always be symmetric positive semidefinite:
Proposition 2. Suppose that $\Sigma$ is the covariance matrix corresponding to some random vector $X$. Then $\Sigma$ is symmetric positive semidefinite.
Proof. The symmetry of $\Sigma$ follows immediately from its definition. Next, for any vector $z \in \mathbf{R}^n$, observe that

$$z^T \Sigma z = \sum_{i=1}^n \sum_{j=1}^n \Sigma_{ij} z_i z_j \tag{2}$$
$$= \sum_{i=1}^n \sum_{j=1}^n \mathrm{Cov}[X_i, X_j] \cdot z_i z_j = \sum_{i=1}^n \sum_{j=1}^n E\left[(X_i - E[X_i])(X_j - E[X_j])\right] \cdot z_i z_j$$
$$= E\left[\sum_{i=1}^n \sum_{j=1}^n (X_i - E[X_i])(X_j - E[X_j])\, z_i z_j\right]. \tag{3}$$
Here, (2) follows from the formula for expanding a quadratic form (see section notes on linear algebra), and (3) follows by linearity of expectations (see probability notes).
To complete the proof, observe that the quantity inside the brackets is of the form $\sum_i \sum_j x_i x_j z_i z_j = (x^T z)^2 \geq 0$ (see problem set #1). Therefore, the quantity inside the expectation is always nonnegative, and hence the expectation itself must be nonnegative. We conclude that $z^T \Sigma z \geq 0$.
From the above proposition it follows that $\Sigma$ must be symmetric positive semidefinite in order for it to be a valid covariance matrix. However, in order for $\Sigma^{-1}$ to exist (as required in the definition of the multivariate Gaussian density), $\Sigma$ must be invertible and hence full rank. Since any full rank symmetric positive semidefinite matrix is necessarily symmetric positive definite, it follows that $\Sigma$ must be symmetric positive definite.
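Proposition 2 is also easy to see empirically. The following sketch (an illustration of our own; the data and dimensions are arbitrary) forms the sample covariance matrix of some deliberately non-Gaussian data and confirms that its eigenvalues are nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)

# The covariance matrix of *any* random vector is positive semidefinite.
data = rng.exponential(scale=2.0, size=(10_000, 4))   # deliberately non-Gaussian data
Sigma_hat = np.cov(data.T)                            # sample covariance matrix

print(np.linalg.eigvalsh(Sigma_hat))   # all eigenvalues are >= 0 (up to round-off)

z = rng.standard_normal(4)
print(z @ Sigma_hat @ z >= 0)          # z^T Sigma z >= 0 for any direction z
```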
3. The diagonal covariance matrix case
To get an intuition for what a multivariate Gaussian is, consider the simple case where $n = 2$, and where the covariance matrix $\Sigma$ is diagonal, i.e.,

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}.$$

In this case, the multivariate Gaussian density has the form

$$p(x; \mu, \Sigma) = \frac{1}{2\pi \left|\begin{matrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{matrix}\right|^{1/2}} \exp\left(-\frac{1}{2}\begin{bmatrix} x_1 - \mu_1 & x_2 - \mu_2 \end{bmatrix} \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}^{-1} \begin{bmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{bmatrix}\right)$$
$$= \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left(-\frac{1}{2}\begin{bmatrix} x_1 - \mu_1 & x_2 - \mu_2 \end{bmatrix} \begin{bmatrix} \frac{1}{\sigma_1^2} & 0 \\ 0 & \frac{1}{\sigma_2^2} \end{bmatrix} \begin{bmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{bmatrix}\right),$$
where we have relied on the explicit formula for the determinant of a $2 \times 2$ matrix[3], and the fact that the inverse of a diagonal matrix is simply found by taking the reciprocal of each diagonal entry. Continuing,

$$p(x; \mu, \Sigma) = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left(-\frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2 - \frac{1}{2\sigma_2^2}(x_2 - \mu_2)^2\right) \tag{4}$$
$$= \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left(-\frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2\right) \cdot \frac{1}{\sqrt{2\pi}\sigma_2} \exp\left(-\frac{1}{2\sigma_2^2}(x_2 - \mu_2)^2\right).$$
We recognize the last equation as simply the product of two independent Gaussian densities, one with mean $\mu_1$ and variance $\sigma_1^2$, and the other with mean $\mu_2$ and variance $\sigma_2^2$.
More generally, one can show that an $n$-dimensional Gaussian with mean $\mu \in \mathbf{R}^n$ and diagonal covariance matrix $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2)$ is the same as a collection of $n$ independent Gaussian random variables with mean $\mu_i$ and variance $\sigma_i^2$, respectively.
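This factorization is easy to check numerically. The sketch below (our own illustration; the particular means and variances are arbitrary) evaluates the joint density with a diagonal covariance matrix and compares it against the product of the corresponding univariate densities.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.array([3.0, 2.0])
sigma = np.array([5.0, 3.0])       # standard deviations sigma_1, sigma_2
Sigma = np.diag(sigma ** 2)        # diagonal covariance matrix
x = np.array([4.0, -1.0])

joint = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
product = norm(mu[0], sigma[0]).pdf(x[0]) * norm(mu[1], sigma[1]).pdf(x[1])

print(joint, product)   # identical up to floating-point error
```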
4. Isocontours
4.1. Shape of isocontours

Another way to understand a multivariate Gaussian conceptually is to understand the shape of its isocontours.[4] For a function $f: \mathbf{R}^2 \rightarrow \mathbf{R}$, an isocontour is a set of the form

$$\{x \in \mathbf{R}^2 : f(x) = c\}$$

for some $c \in \mathbf{R}$.
Now, let's consider the level set consisting of all points where $p(x; \mu, \Sigma) = c$ for some constant $c \in \mathbf{R}$. In particular, consider the set of all $x_1, x_2 \in \mathbf{R}$ such that

$$c = \frac{1}{2\pi \sigma_1 \sigma_2} \exp\left(-\frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2 - \frac{1}{2\sigma_2^2}(x_2 - \mu_2)^2\right).$$

Taking logarithms and rearranging gives

$$1 = \frac{(x_1 - \mu_1)^2}{2\sigma_1^2 \log\frac{1}{2\pi c \sigma_1 \sigma_2}} + \frac{(x_2 - \mu_2)^2}{2\sigma_2^2 \log\frac{1}{2\pi c \sigma_1 \sigma_2}}.$$

Defining

$$r_1 = \sqrt{2\sigma_1^2 \log\frac{1}{2\pi c \sigma_1 \sigma_2}}, \qquad r_2 = \sqrt{2\sigma_2^2 \log\frac{1}{2\pi c \sigma_1 \sigma_2}},$$

this becomes

$$\left(\frac{x_1 - \mu_1}{r_1}\right)^2 + \left(\frac{x_2 - \mu_2}{r_2}\right)^2 = 1. \tag{5}$$
Equation (5) should be familiar to you from high school analytic geometry: it is the equation of an axis-aligned ellipse, with center $(\mu_1, \mu_2)$, where the $x_1$ axis has length $2r_1$ and the $x_2$ axis has length $2r_2$!
4.2. Length of axes
To get a better understanding of how the shape of the level curves varies as a function of the variances of the multivariate Gaussian distribution, suppose that we are interested in the values of $r_1$ and $r_2$ at which $c$ is equal to a fraction $1/e$ of the peak height of the Gaussian density.
Figure 2:
The figure on the left shows a heatmap indicating values of the density function for an axis-aligned multivariate Gaussian with mean $\mu = \begin{bmatrix}3\\2\end{bmatrix}$ and diagonal covariance matrix $\Sigma = \begin{bmatrix}25 & 0\\ 0 & 9\end{bmatrix}$. Notice that the Gaussian is centered at $(3,2)$, and that the isocontours are all elliptically shaped with major/minor axis lengths in a 5:3 ratio. The figure on the right shows a heatmap indicating values of the density function for a non-axis-aligned multivariate Gaussian with mean $\mu = \begin{bmatrix}3\\2\end{bmatrix}$ and covariance matrix $\Sigma = \begin{bmatrix}10 & 5\\ 5 & 5\end{bmatrix}$. Here, the ellipses are again centered at $(3,2)$, but now the major and minor axes have been rotated via a linear transformation.
First, observe that the maximum of Equation (4) occurs where $x_1 = \mu_1$ and $x_2 = \mu_2$. Substituting these values into Equation (4), we see that the peak height of the Gaussian density is $\frac{1}{2\pi \sigma_1 \sigma_2}$.
Second, we substitute $c = \frac{1}{e}\left(\frac{1}{2\pi \sigma_1 \sigma_2}\right)$ into the equations for $r_1$ and $r_2$ to obtain

$$r_1 = \sigma_1 \sqrt{2}, \qquad r_2 = \sigma_2 \sqrt{2}.$$
From this, it follows that the axis length needed to reach a fraction $1/e$ of the peak height of the Gaussian density in the $i$th dimension grows in proportion to the standard deviation $\sigma_i$. Intuitively, this again makes sense: the smaller the variance of some random variable $x_i$, the more "tightly" peaked the Gaussian distribution in that dimension, and hence the smaller the radius $r_i$.
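As a concrete check of this result (a sketch of our own, reusing the diagonal covariance from Figure 2), the density evaluated at any point on the ellipse with $r_i = \sqrt{2}\,\sigma_i$ should be exactly $1/e$ of the peak height.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([3.0, 2.0])
sigma = np.array([5.0, 3.0])                  # sigma_1 = 5, sigma_2 = 3, as in Figure 2
dist = multivariate_normal(mean=mu, cov=np.diag(sigma ** 2))

peak = dist.pdf(mu)                           # peak height 1 / (2*pi*sigma_1*sigma_2)
r = np.sqrt(2.0) * sigma                      # axis radii r_i = sqrt(2) * sigma_i

# Sample points on the ellipse (x1-mu1)^2/r1^2 + (x2-mu2)^2/r2^2 = 1.
theta = np.linspace(0.0, 2 * np.pi, 8)
points = mu + np.column_stack([r[0] * np.cos(theta), r[1] * np.sin(theta)])

print(dist.pdf(points) / peak)                # every ratio equals 1/e ~= 0.3679
```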
4.3. Non-diagonal case, higher dimensions
Clearly, the above derivations rely on the assumption that $\Sigma$ is a diagonal matrix. However, in the non-diagonal case, it turns out that the picture is not all that different. Instead of being axis-aligned ellipses, the isocontours turn out to be simply rotated ellipses. Furthermore, in the $n$-dimensional case, the level sets form geometrical structures known as ellipsoids in $\mathbf{R}^n$.
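One way to see where the rotation comes from (this observation and the sketch below are our own addition, using the non-diagonal covariance matrix from Figure 2): the eigenvectors of $\Sigma$ give the directions of the ellipse axes, and the square roots of the eigenvalues play the role of the $\sigma_i$ along those directions.

```python
import numpy as np

# Non-axis-aligned covariance matrix from Figure 2.
Sigma = np.array([[10.0, 5.0], [5.0, 5.0]])

eigvals, eigvecs = np.linalg.eigh(Sigma)

# Each column of eigvecs is a principal-axis direction of the isocontour ellipses.
for val, vec in zip(eigvals, eigvecs.T):
    angle = np.degrees(np.arctan2(vec[1], vec[0]))
    print(f"axis direction {vec}, angle {angle:.1f} deg, spread {np.sqrt(val):.2f}")
```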
5. Linear transformation interpretation
In the last few sections, we focused primarily on providing an intuition for how multivariate Gaussians with diagonal covariance matrices behave. In particular, we found that an $n$-dimensional multivariate Gaussian with diagonal covariance matrix could be viewed simply as a collection of $n$ independent Gaussian-distributed random variables with means and variances $\mu_i$ and $\sigma_i^2$, respectively. In this section, we dig a little deeper and provide a quantitative interpretation of multivariate Gaussians when the covariance matrix is not diagonal.
The key result of this section is the following theorem (see proof in Appendix A.2).
Theorem 1. Let $X \sim \mathcal{N}(\mu, \Sigma)$ for some $\mu \in \mathbf{R}^n$ and $\Sigma \in \mathbf{S}_{++}^n$. Then, there exists a matrix $B \in \mathbf{R}^{n \times n}$ such that if we define $Z = B^{-1}(X - \mu)$, then $Z \sim \mathcal{N}(0, I)$.
To understand the meaning of this theorem, note that if $Z \sim \mathcal{N}(0, I)$, then using the analysis from Section 3, $Z$ can be thought of as a collection of $n$ independent standard normal random variables (i.e., $Z_i \sim \mathcal{N}(0, 1)$). Furthermore, if $Z = B^{-1}(X - \mu)$, then $X = BZ + \mu$ follows from simple algebra.
Consequently, the theorem states that any random variable $X$ with a multivariate Gaussian distribution can be interpreted as the result of applying a linear transformation ($X = BZ + \mu$) to some collection of $n$ independent standard normal random variables ($Z$).
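The theorem also suggests a standard recipe for sampling from $\mathcal{N}(\mu, \Sigma)$. The sketch below (our own illustration; choosing the Cholesky factor for $B$ is one convenient option satisfying $BB^T = \Sigma$, not the particular factor constructed in the proof) draws independent standard normals $Z$, applies $X = BZ + \mu$, and checks the empirical mean and covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([3.0, 2.0])
Sigma = np.array([[10.0, 5.0], [5.0, 5.0]])

# Any B with B B^T = Sigma works; the Cholesky factor is a convenient choice.
B = np.linalg.cholesky(Sigma)

Z = rng.standard_normal((200_000, 2))   # rows of independent standard normals
X = Z @ B.T + mu                        # X = B Z + mu, applied row by row

print(np.round(X.mean(axis=0), 2))      # approximately mu
print(np.round(np.cov(X.T), 2))         # approximately Sigma
```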
6. Appendix A.1
Proof. We prove the first of the two equalities in (1); the proof of the other equality is similar. Writing out the covariance matrix entrywise,

$$\Sigma = \begin{bmatrix} \mathrm{Cov}[X_1, X_1] & \cdots & \mathrm{Cov}[X_1, X_n] \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}[X_n, X_1] & \cdots & \mathrm{Cov}[X_n, X_n] \end{bmatrix} = \begin{bmatrix} E[(X_1 - \mu_1)^2] & \cdots & E[(X_1 - \mu_1)(X_n - \mu_n)] \\ \vdots & \ddots & \vdots \\ E[(X_n - \mu_n)(X_1 - \mu_1)] & \cdots & E[(X_n - \mu_n)^2] \end{bmatrix}$$
$$= E\begin{bmatrix} (X_1 - \mu_1)^2 & \cdots & (X_1 - \mu_1)(X_n - \mu_n) \\ \vdots & \ddots & \vdots \\ (X_n - \mu_n)(X_1 - \mu_1) & \cdots & (X_n - \mu_n)^2 \end{bmatrix} \tag{6}$$
$$= E\left[(X - \mu)(X - \mu)^T\right]. \tag{7}$$
Here, (6) follows from the fact that the expectation of a matrix is simply the matrix found by taking the componentwise expectation of each entry. Also, (7) follows from the fact that for any vector $z \in \mathbf{R}^n$,

$$zz^T = \begin{bmatrix} z_1 z_1 & z_1 z_2 & \cdots & z_1 z_n \\ z_2 z_1 & z_2 z_2 & \cdots & z_2 z_n \\ \vdots & \vdots & \ddots & \vdots \\ z_n z_1 & z_n z_2 & \cdots & z_n z_n \end{bmatrix}.$$
7. Appendix A.2

Theorem 1. Let $X \sim \mathcal{N}(\mu, \Sigma)$ for some $\mu \in \mathbf{R}^n$ and $\Sigma \in \mathbf{S}_{++}^n$. Then, there exists a matrix $B \in \mathbf{R}^{n \times n}$ such that if we define $Z = B^{-1}(X - \mu)$, then $Z \sim \mathcal{N}(0, I)$.
The derivation of this theorem requires some advanced linear algebra and probability theory and can be skipped for the purposes of this class. Our argument will consist of two parts. First, we will show that the covariance matrix $\Sigma$ can be factorized as $\Sigma = BB^T$ for some invertible matrix $B$. Second, we will perform a "change-of-variables" from $X$ to a different vector-valued random variable $Z$ using the relation $Z = B^{-1}(X - \mu)$.
Step 1: Factorizing the covariance matrix. Recall the following two properties of symmetric matrices from the notes on linear algebra[5]:

- Any real symmetric matrix $A \in \mathbf{R}^{n \times n}$ can always be represented as $A = U \Lambda U^T$, where $U$ is a full rank orthogonal matrix containing the eigenvectors of $A$ as its columns, and $\Lambda$ is a diagonal matrix containing $A$'s eigenvalues.
- If $A$ is symmetric positive definite, all its eigenvalues are positive.
Since the covariance matrix $\Sigma$ is positive definite, using the first fact, we can write $\Sigma = U \Lambda U^T$ for some appropriately defined matrices $U$ and $\Lambda$. Using the second fact, we can define $\Lambda^{1/2} \in \mathbf{R}^{n \times n}$ to be the diagonal matrix whose entries are the square roots of the corresponding entries from $\Lambda$. Since $\Lambda = \Lambda^{1/2} (\Lambda^{1/2})^T$, we have

$$\Sigma = U \Lambda U^T = U \Lambda^{1/2} (\Lambda^{1/2})^T U^T = U \Lambda^{1/2} (U \Lambda^{1/2})^T = B B^T,$$
where $B = U \Lambda^{1/2}$.[6] In this case, $\Sigma^{-1} = B^{-T} B^{-1}$, so we can rewrite the standard formula for the density of a multivariate Gaussian as

$$p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |BB^T|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^T B^{-T} B^{-1} (x - \mu)\right).$$
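For concreteness, here is a short sketch (our own; the example covariance matrix is the one from Figure 2) that builds $B = U \Lambda^{1/2}$ from the eigendecomposition and checks that $BB^T = \Sigma$ and that $B$ is invertible.

```python
import numpy as np

Sigma = np.array([[10.0, 5.0], [5.0, 5.0]])

# Spectral decomposition Sigma = U Lambda U^T (eigh returns eigenvalues in ascending order).
eigvals, U = np.linalg.eigh(Sigma)
Lambda_sqrt = np.diag(np.sqrt(eigvals))   # valid because Sigma is positive definite

B = U @ Lambda_sqrt                       # B = U Lambda^{1/2}

print(np.allclose(B @ B.T, Sigma))        # True: Sigma = B B^T
print(np.linalg.matrix_rank(B))           # 2, so B is invertible
```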
Step 2: Change of variables. Now, define the vector-valued random variable $Z = B^{-1}(X - \mu)$. A basic formula of probability theory, which we did not introduce in the section notes on probability theory, is the "change-of-variables" formula for relating vector-valued random variables:
Suppose that $X = [X_1 \; \cdots \; X_n]^T \in \mathbf{R}^n$ is a vector-valued random variable with joint density function $f_X: \mathbf{R}^n \rightarrow \mathbf{R}$. If $Z = H(X) \in \mathbf{R}^n$ where $H$ is a bijective, differentiable function, then $Z$ has joint density $f_Z: \mathbf{R}^n \rightarrow \mathbf{R}$, where

$$f_Z(z) = f_X\left(H^{-1}(z)\right) \cdot \left|\det\left(\frac{\partial H^{-1}(z)}{\partial z}\right)\right|.$$
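As a sketch of how the proof concludes from here (this completion is our own addition), apply the change-of-variables formula with $H(x) = B^{-1}(x - \mu)$, so that $H^{-1}(z) = Bz + \mu$ and the Jacobian is simply $B$:

$$f_Z(z) = f_X(Bz + \mu)\,|\det B| = \frac{|\det B|}{(2\pi)^{n/2} |BB^T|^{1/2}} \exp\left(-\frac{1}{2}(Bz)^T B^{-T} B^{-1} (Bz)\right) = \frac{1}{(2\pi)^{n/2}} \exp\left(-\frac{1}{2} z^T z\right),$$

since $|BB^T|^{1/2} = |\det B|$ and $(Bz)^T B^{-T} B^{-1} (Bz) = z^T z$. This is exactly the density of $\mathcal{N}(0, I)$, which is the claim of the theorem.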
This is the first of a two-part overview of multivariate Gaussians from Andrew Ng's CS229 course on machine learning. Click here to read part two.
[1] Recall from the section notes on linear algebra that $\mathbf{S}_{++}^n$ is the space of symmetric positive definite $n \times n$ matrices, defined as $\mathbf{S}_{++}^n = \{A \in \mathbf{R}^{n \times n} : A = A^T \text{ and } x^T A x > 0 \text{ for all } x \in \mathbf{R}^n \text{ such that } x \neq 0\}$.
[2] In these notes, we use the notation $p(\cdot)$ to denote density functions, instead of $f_X(\cdot)$ (as in the section notes on probability theory).
[3] Namely, $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$.
[4] Isocontours are often also known as level curves. More generally, a level set of a function $f: \mathbf{R}^n \rightarrow \mathbf{R}$ is a set of the form $\{x \in \mathbf{R}^n : f(x) = c\}$ for some $c \in \mathbf{R}$.
[5] See the section on "Eigenvalues and Eigenvectors of Symmetric Matrices."
[6] To show that $B$ is invertible, it suffices to observe that $U$ is an invertible matrix, and right-multiplying $U$ by a diagonal matrix (with no zero diagonal entries) will rescale its columns but will not change its rank.