You can read the notes from the previous lecture from Tengyu Ma and Andrew Ng's CS229 course on the EM algorithm here.
When we have data $x^{(i)} \in \mathbb{R}^{d}$ that comes from a mixture of several Gaussians, the EM algorithm can be applied to fit a mixture model. In this setting, we usually imagine problems where we have sufficient data to be able to discern the multiple-Gaussian structure in the data. For instance, this would be the case if our training set size $n$ was significantly larger than the dimension $d$ of the data.
Now, consider a setting in which $d \gg n$. In such a problem, it might be difficult to model the data even with a single Gaussian, much less a mixture of Gaussians. Specifically, since the $n$ data points span only a low-dimensional subspace of $\mathbb{R}^{d}$, if we model the data as Gaussian, and estimate the mean and covariance using the usual maximum likelihood estimators,

$$\begin{aligned}
\mu &= \frac{1}{n} \sum_{i=1}^{n} x^{(i)} \\
\Sigma &= \frac{1}{n} \sum_{i=1}^{n} \left(x^{(i)}-\mu\right)\left(x^{(i)}-\mu\right)^{T},
\end{aligned}$$
we would find that the matrix $\Sigma$ is singular. This means that $\Sigma^{-1}$ does not exist, and $1/|\Sigma|^{1/2} = 1/0$. But both of these terms are needed in computing the usual density of a multivariate Gaussian distribution. Another way of stating this difficulty is that maximum likelihood estimates of the parameters result in a Gaussian that places all of its probability in the affine space spanned by the data,[1] and this corresponds to a singular covariance matrix.
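To make this concrete, here is a minimal NumPy sketch (with arbitrary, made-up sizes) showing that when $n < d$, the maximum likelihood covariance estimate has rank at most $n-1$, so its determinant is zero and it has no inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 30  # far fewer points than dimensions
X = rng.normal(size=(n, d))

mu = X.mean(axis=0)
# Maximum likelihood estimate of the covariance (note the 1/n factor)
Sigma = (X - mu).T @ (X - mu) / n

print(np.linalg.matrix_rank(Sigma))  # at most n - 1 = 9, far below d = 30
print(np.linalg.det(Sigma))          # 0.0 (up to floating point): Sigma is singular
```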
More generally, unless $n$ exceeds $d$ by some reasonable amount, the maximum likelihood estimates of the mean and covariance may be quite poor. Nonetheless, we would still like to be able to fit a reasonable Gaussian model to the data, and perhaps capture some interesting covariance structure in the data. How can we do this?
In the next section, we begin by reviewing two possible restrictions on $\Sigma$ that allow us to fit $\Sigma$ with small amounts of data, but neither of which gives a satisfactory solution to our problem. We next discuss some properties of Gaussians that will be needed later; specifically, how to find marginal and conditional distributions of Gaussians. Finally, we present the factor analysis model, and EM for it.
1. Restrictions of $\Sigma$
If we do not have sufficient data to fit a full covariance matrix, we may place some restrictions on the space of matrices $\Sigma$ that we will consider. For instance, we may choose to fit a covariance matrix $\Sigma$ that is diagonal. In this setting, the reader may easily verify that the maximum likelihood estimate of the covariance matrix is given by the diagonal matrix $\Sigma$ satisfying

$$\Sigma_{jj} = \frac{1}{n} \sum_{i=1}^{n} \left(x_{j}^{(i)}-\mu_{j}\right)^{2}.$$
Thus, $\Sigma_{jj}$ is just the empirical estimate of the variance of the $j$-th coordinate of the data.
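As an illustrative sketch (not part of the original notes), this estimate is one line of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # n = 100 points in d = 5 dimensions

mu = X.mean(axis=0)
Sigma_diag = ((X - mu) ** 2).mean(axis=0)  # Sigma_jj: empirical variance of coordinate j
Sigma = np.diag(Sigma_diag)                # the diagonal covariance estimate
```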
Recall that the contours of a Gaussian density are ellipses. A diagonal $\Sigma$ corresponds to a Gaussian where the major axes of these ellipses are axis-aligned.
Sometimes, we may place a further restriction on the covariance matrix that not only must it be diagonal, but its diagonal entries must all be equal. In this setting, we have $\Sigma = \sigma^{2} I$, where $\sigma^{2}$ is the parameter under our control. The maximum likelihood estimate of $\sigma^{2}$ can be found to be:

$$\sigma^{2} = \frac{1}{nd} \sum_{j=1}^{d} \sum_{i=1}^{n} \left(x_{j}^{(i)}-\mu_{j}\right)^{2}.$$
This model corresponds to using Gaussians whose densities have contours that are circles (in 2 dimensions; or spheres/hyperspheres in higher dimensions).
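A corresponding sketch for the spherical estimate, which simply averages the per-coordinate variances into one scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # n = 100 points in d = 5 dimensions

mu = X.mean(axis=0)
sigma2 = ((X - mu) ** 2).mean()      # 1/(nd) * sum over all i, j
Sigma = sigma2 * np.eye(X.shape[1])  # the spherical estimate sigma^2 * I
```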
If we are fitting a full, unconstrained covariance matrix $\Sigma$ to data, it is necessary that $n \geq d+1$ in order for the maximum likelihood estimate of $\Sigma$ not to be singular. Under either of the two restrictions above, we may obtain a non-singular $\Sigma$ when $n \geq 2$.
However, restricting $\Sigma$ to be diagonal also means modeling the different coordinates $x_{i}, x_{j}$ of the data as being uncorrelated and independent. Often, it would be nice to be able to capture some interesting correlation structure in the data. If we were to use either of the restrictions on $\Sigma$ described above, we would fail to do so. In this set of notes, we will describe the factor analysis model, which uses more parameters than the diagonal $\Sigma$ and captures some correlations in the data, but without having to fit a full covariance matrix.
2. Marginals and conditionals of Gaussians
Before describing factor analysis, we digress to talk about how to find conditional and marginal distributions of random variables with a joint multivariate Gaussian distribution.
Suppose we have a vector-valued random variable

$$x = \left[\begin{array}{l}
x_{1} \\
x_{2}
\end{array}\right],$$

where $x_{1} \in \mathbb{R}^{r}$, $x_{2} \in \mathbb{R}^{s}$, and $x \in \mathbb{R}^{r+s}$. Suppose $x \sim \mathcal{N}(\mu, \Sigma)$, where

$$\mu = \left[\begin{array}{l}
\mu_{1} \\
\mu_{2}
\end{array}\right], \qquad \Sigma = \left[\begin{array}{ll}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{array}\right].$$
Here, $\mu_{1} \in \mathbb{R}^{r}$, $\mu_{2} \in \mathbb{R}^{s}$, $\Sigma_{11} \in \mathbb{R}^{r \times r}$, $\Sigma_{12} \in \mathbb{R}^{r \times s}$, and so on. Note that since covariance matrices are symmetric, $\Sigma_{12} = \Sigma_{21}^{T}$.
Under our assumptions, $x_{1}$ and $x_{2}$ are jointly multivariate Gaussian. What is the marginal distribution of $x_{1}$? It is not hard to see that $\mathrm{E}[x_{1}] = \mu_{1}$, and that $\operatorname{Cov}(x_{1}) = \mathrm{E}\left[(x_{1}-\mu_{1})(x_{1}-\mu_{1})^{T}\right] = \Sigma_{11}$. To see that the latter is true, note that by definition of the joint covariance of $x_{1}$ and $x_{2}$, we have that

$$\begin{aligned}
\operatorname{Cov}(x) &= \Sigma = \left[\begin{array}{ll}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{array}\right] \\
&= \mathrm{E}\left[(x-\mu)(x-\mu)^{T}\right] \\
&= \mathrm{E}\left[\left(\begin{array}{l}
x_{1}-\mu_{1} \\
x_{2}-\mu_{2}
\end{array}\right)\left(\begin{array}{l}
x_{1}-\mu_{1} \\
x_{2}-\mu_{2}
\end{array}\right)^{T}\right] \\
&= \mathrm{E}\left[\begin{array}{ll}
\left(x_{1}-\mu_{1}\right)\left(x_{1}-\mu_{1}\right)^{T} & \left(x_{1}-\mu_{1}\right)\left(x_{2}-\mu_{2}\right)^{T} \\
\left(x_{2}-\mu_{2}\right)\left(x_{1}-\mu_{1}\right)^{T} & \left(x_{2}-\mu_{2}\right)\left(x_{2}-\mu_{2}\right)^{T}
\end{array}\right].
\end{aligned}$$
Matching the upper-left subblocks in the matrices in the second and the last lines above gives the result.
Since marginal distributions of Gaussians are themselves Gaussian, we therefore have that the marginal distribution of $x_{1}$ is given by $x_{1} \sim \mathcal{N}\left(\mu_{1}, \Sigma_{11}\right)$.
Also, we can ask, what is the conditional distribution of $x_{1}$ given $x_{2}$? By referring to the definition of the multivariate Gaussian distribution, it can be shown that $x_{1} \mid x_{2} \sim \mathcal{N}\left(\mu_{1 \mid 2}, \Sigma_{1 \mid 2}\right)$, where

$$\mu_{1 \mid 2} = \mu_{1}+\Sigma_{12} \Sigma_{22}^{-1}\left(x_{2}-\mu_{2}\right) \tag{1}$$

$$\Sigma_{1 \mid 2} = \Sigma_{11}-\Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}. \tag{2}$$
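Here is a small sketch of these two formulas in NumPy (the function name `gaussian_conditional` and the test values are our own, for illustration):

```python
import numpy as np

def gaussian_conditional(mu, Sigma, x2, r):
    """Return the mean and covariance of x1 | x2 when x ~ N(mu, Sigma),
    where x1 is the first r coordinates of x. Implements Equations (1) and (2)."""
    mu1, mu2 = mu[:r], mu[r:]
    S11, S12 = Sigma[:r, :r], Sigma[:r, r:]
    S21, S22 = Sigma[r:, :r], Sigma[r:, r:]
    K = S12 @ np.linalg.inv(S22)    # Sigma_12 Sigma_22^{-1}
    mu_cond = mu1 + K @ (x2 - mu2)  # Equation (1)
    Sigma_cond = S11 - K @ S21      # Equation (2)
    return mu_cond, Sigma_cond

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
m, S = gaussian_conditional(mu, Sigma, x2=np.array([1.5, 1.8]), r=1)
```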
When we work with the factor analysis model in the next section, these formulas for finding conditional and marginal distributions of Gaussians will be very useful.
3. The factor analysis model
In the factor analysis model, we posit a joint distribution on $(x, z)$ as follows, where $z \in \mathbb{R}^{k}$ is a latent random variable:
$$\begin{aligned}
z & \sim \mathcal{N}(0, I) \\
x \mid z & \sim \mathcal{N}(\mu+\Lambda z, \Psi).
\end{aligned}$$
Here, the parameters of our model are the vector $\mu \in \mathbb{R}^{d}$, the matrix $\Lambda \in \mathbb{R}^{d \times k}$, and the diagonal matrix $\Psi \in \mathbb{R}^{d \times d}$. The value of $k$ is usually chosen to be smaller than $d$.
Thus, we imagine that each datapoint $x^{(i)}$ is generated by sampling a $k$-dimensional multivariate Gaussian $z^{(i)}$. Then, it is mapped to a $k$-dimensional affine space of $\mathbb{R}^{d}$ by computing $\mu+\Lambda z^{(i)}$. Lastly, $x^{(i)}$ is generated by adding noise with covariance $\Psi$ to $\mu+\Lambda z^{(i)}$.
Equivalently (convince yourself that this is the case), we can therefore also define the factor analysis model according to
$$\begin{aligned}
z & \sim \mathcal{N}(0, I) \\
\epsilon & \sim \mathcal{N}(0, \Psi) \\
x & = \mu+\Lambda z+\epsilon,
\end{aligned}$$
where $\epsilon$ and $z$ are independent.
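The generative story above is easy to simulate. The following sketch (with made-up parameter values) draws samples and checks empirically that $\operatorname{Cov}(x)$ approaches $\Lambda \Lambda^{T}+\Psi$, as derived below:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 2, 100_000

# Made-up "true" parameters, for illustration only
mu = rng.normal(size=d)
Lam = rng.normal(size=(d, k))   # Lambda, d x k
psi = rng.uniform(0.1, 0.5, d)  # diagonal of Psi

z = rng.normal(size=(n, k))                   # z ~ N(0, I)
eps = rng.normal(size=(n, d)) * np.sqrt(psi)  # eps ~ N(0, Psi), Psi diagonal
X = mu + z @ Lam.T + eps                      # x = mu + Lambda z + eps

# Empirical covariance should be close to Lambda Lambda^T + Psi
print(np.allclose(np.cov(X.T, bias=True), Lam @ Lam.T + np.diag(psi), atol=0.1))
```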
Let's work out exactly what distribution our model defines. Our random variables $z$ and $x$ have a joint Gaussian distribution
$$\left[\begin{array}{l}
z \\
x
\end{array}\right] \sim \mathcal{N}\left(\mu_{z x}, \Sigma\right).$$
We will now find $\mu_{z x}$ and $\Sigma$.
We know that $\mathrm{E}[z]=0$, from the fact that $z \sim \mathcal{N}(0, I)$. Also, we have that

$$\mathrm{E}[x] = \mathrm{E}[\mu+\Lambda z+\epsilon] = \mu+\Lambda \mathrm{E}[z]+\mathrm{E}[\epsilon] = \mu.$$

Putting these together, we obtain

$$\mu_{z x} = \left[\begin{array}{l}
\vec{0} \\
\mu
\end{array}\right].$$
Next, to find $\Sigma$, we need to calculate $\Sigma_{zz}=\mathrm{E}\left[(z-\mathrm{E}[z])(z-\mathrm{E}[z])^{T}\right]$ (the upper-left block of $\Sigma$), $\Sigma_{zx}=\mathrm{E}\left[(z-\mathrm{E}[z])(x-\mathrm{E}[x])^{T}\right]$ (upper-right block), and $\Sigma_{xx}=\mathrm{E}\left[(x-\mathrm{E}[x])(x-\mathrm{E}[x])^{T}\right]$ (lower-right block).
Now, since $z \sim \mathcal{N}(0, I)$, we easily find that $\Sigma_{zz}=\operatorname{Cov}(z)=I$. Also,

$$\mathrm{E}\left[(z-\mathrm{E}[z])(x-\mathrm{E}[x])^{T}\right] = \mathrm{E}\left[z(\mu+\Lambda z+\epsilon-\mu)^{T}\right] = \mathrm{E}\left[z z^{T}\right] \Lambda^{T}+\mathrm{E}\left[z \epsilon^{T}\right] = \Lambda^{T}.$$
In the last step, we used the fact that $\mathrm{E}\left[z z^{T}\right]=\operatorname{Cov}(z)$ (since $z$ has zero mean), and $\mathrm{E}\left[z \epsilon^{T}\right]=\mathrm{E}[z] \mathrm{E}\left[\epsilon^{T}\right]=0$ (since $z$ and $\epsilon$ are independent, and hence the expectation of their product is the product of their expectations). Similarly, we can find $\Sigma_{xx}$ as follows:

$$\begin{aligned}
\mathrm{E}\left[(x-\mathrm{E}[x])(x-\mathrm{E}[x])^{T}\right] &= \mathrm{E}\left[(\mu+\Lambda z+\epsilon-\mu)(\mu+\Lambda z+\epsilon-\mu)^{T}\right] \\
&= \mathrm{E}\left[\Lambda z z^{T} \Lambda^{T}+\epsilon z^{T} \Lambda^{T}+\Lambda z \epsilon^{T}+\epsilon \epsilon^{T}\right] \\
&= \Lambda \mathrm{E}\left[z z^{T}\right] \Lambda^{T}+\mathrm{E}\left[\epsilon \epsilon^{T}\right] \\
&= \Lambda \Lambda^{T}+\Psi.
\end{aligned}$$
Putting everything together, we therefore have that
$$\left[\begin{array}{l}
z \\
x
\end{array}\right] \sim \mathcal{N}\left(\left[\begin{array}{l}
\vec{0} \\
\mu
\end{array}\right],\left[\begin{array}{cc}
I & \Lambda^{T} \\
\Lambda & \Lambda \Lambda^{T}+\Psi
\end{array}\right]\right). \tag{3}$$
Hence, we also see that the marginal distribution of $x$ is given by $x \sim \mathcal{N}\left(\mu, \Lambda \Lambda^{T}+\Psi\right)$. Thus, given a training set $\left\{x^{(i)} ; i=1, \ldots, n\right\}$, we can write down the log likelihood of the parameters:

$$\ell(\mu, \Lambda, \Psi) = \log \prod_{i=1}^{n} \frac{1}{(2 \pi)^{d / 2}\left|\Lambda \Lambda^{T}+\Psi\right|^{1 / 2}} \exp \left(-\frac{1}{2}\left(x^{(i)}-\mu\right)^{T}\left(\Lambda \Lambda^{T}+\Psi\right)^{-1}\left(x^{(i)}-\mu\right)\right).$$
To perform maximum likelihood estimation, we would like to maximize this quantity with respect to the parameters. But maximizing this formula explicitly is hard (try it yourself), and we are aware of no algorithm that does so in closed form. So, we will instead use the EM algorithm. In the next section, we derive EM for factor analysis.
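For concreteness, here is a direct NumPy sketch of this log likelihood (the function name is ours; a practical implementation would exploit the low-rank-plus-diagonal structure of $\Lambda \Lambda^{T}+\Psi$ rather than inverting it directly):

```python
import numpy as np

def fa_log_likelihood(X, mu, Lam, psi):
    """Log likelihood of X (n x d) under x ~ N(mu, Lambda Lambda^T + Psi),
    where psi holds the diagonal entries of Psi."""
    n, d = X.shape
    cov = Lam @ Lam.T + np.diag(psi)
    sign, logdet = np.linalg.slogdet(cov)
    diff = X - mu
    # Quadratic form (x - mu)^T cov^{-1} (x - mu), one value per data point
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return -0.5 * (n * d * np.log(2 * np.pi) + n * logdet + quad.sum())
```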
4. EM for factor analysis
The derivation for the E-step is easy. We need to compute $Q_{i}(z^{(i)}) = p(z^{(i)} \mid x^{(i)} ; \mu, \Lambda, \Psi)$. By substituting the distribution given in Equation (3) into the formulas (1-2) used for finding the conditional distribution of a Gaussian, we find that $z^{(i)} \mid x^{(i)} ; \mu, \Lambda, \Psi \sim \mathcal{N}\left(\mu_{z^{(i)} \mid x^{(i)}}, \Sigma_{z^{(i)} \mid x^{(i)}}\right)$, where

$$\begin{aligned}
\mu_{z^{(i)} \mid x^{(i)}} &= \Lambda^{T}\left(\Lambda \Lambda^{T}+\Psi\right)^{-1}\left(x^{(i)}-\mu\right), \\
\Sigma_{z^{(i)} \mid x^{(i)}} &= I-\Lambda^{T}\left(\Lambda \Lambda^{T}+\Psi\right)^{-1} \Lambda.
\end{aligned}$$
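In code, the E-step is just these two formulas; note that the posterior covariance is the same for every data point, since it does not depend on $x^{(i)}$. A minimal sketch (function name ours):

```python
import numpy as np

def e_step(X, mu, Lam, psi):
    """Posterior p(z|x) for each row of X: returns the n x k matrix of posterior
    means (row i is mu_{z^(i)|x^(i)}^T) and the shared k x k posterior covariance."""
    k = Lam.shape[1]
    G_inv = np.linalg.inv(Lam @ Lam.T + np.diag(psi))  # (Lambda Lambda^T + Psi)^{-1}
    mu_post = (X - mu) @ G_inv @ Lam                   # Lambda^T G^{-1} (x - mu), row-wise
    Sigma_post = np.eye(k) - Lam.T @ G_inv @ Lam       # I - Lambda^T G^{-1} Lambda
    return mu_post, Sigma_post
```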
Let's now work out the M-step. Here, we need to maximize

$$\sum_{i=1}^{n} \int_{z^{(i)}} Q_{i}\left(z^{(i)}\right) \log \frac{p\left(x^{(i)}, z^{(i)} ; \mu, \Lambda, \Psi\right)}{Q_{i}\left(z^{(i)}\right)} d z^{(i)}$$

with respect to the parameters $\mu, \Lambda, \Psi$. We will work out only the optimization with respect to $\Lambda$, and leave the derivations of the updates for $\mu$ and $\Psi$ as an exercise to the reader.
Here, the "z^((i))∼Q_(i)z^{(i)} \sim Q_{i}" subscript indicates that the expectation is with respect to z^((i))z^{(i)} drawn from Q_(i)Q_{i}. In the subsequent development, we will omit this subscript when there is no risk of ambiguity. Dropping terms that do not depend on the parameters, we find that we need to maximize:
Let's maximize this with respect to $\Lambda$. Only the last term above depends on $\Lambda$. Taking derivatives, and using the facts that $\operatorname{tr} a=a$ (for $a \in \mathbb{R}$), $\operatorname{tr} AB=\operatorname{tr} BA$, and $\nabla_{A} \operatorname{tr} ABA^{T}C = CAB+C^{T}AB^{T}$, we get:

$$\begin{aligned}
\nabla_{\Lambda} \sum_{i=1}^{n} & -\mathrm{E}\left[\frac{1}{2}\left(x^{(i)}-\mu-\Lambda z^{(i)}\right)^{T} \Psi^{-1}\left(x^{(i)}-\mu-\Lambda z^{(i)}\right)\right] \\
&= \sum_{i=1}^{n} \nabla_{\Lambda} \mathrm{E}\left[-\operatorname{tr} \frac{1}{2} z^{(i) T} \Lambda^{T} \Psi^{-1} \Lambda z^{(i)}+\operatorname{tr} z^{(i) T} \Lambda^{T} \Psi^{-1}\left(x^{(i)}-\mu\right)\right] \\
&= \sum_{i=1}^{n} \nabla_{\Lambda} \mathrm{E}\left[-\operatorname{tr} \frac{1}{2} \Lambda^{T} \Psi^{-1} \Lambda z^{(i)} z^{(i) T}+\operatorname{tr} \Lambda^{T} \Psi^{-1}\left(x^{(i)}-\mu\right) z^{(i) T}\right] \\
&= \sum_{i=1}^{n} \mathrm{E}\left[-\Psi^{-1} \Lambda z^{(i)} z^{(i) T}+\Psi^{-1}\left(x^{(i)}-\mu\right) z^{(i) T}\right].
\end{aligned}$$

Setting this to zero and simplifying, we get:

$$\sum_{i=1}^{n} \Lambda \mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i)} z^{(i) T}\right] = \sum_{i=1}^{n}\left(x^{(i)}-\mu\right) \mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i) T}\right].$$

Hence, solving for $\Lambda$, we obtain

$$\Lambda = \left(\sum_{i=1}^{n}\left(x^{(i)}-\mu\right) \mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i) T}\right]\right)\left(\sum_{i=1}^{n} \mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i)} z^{(i) T}\right]\right)^{-1}. \tag{7}$$

It is interesting to note the close relationship between this equation and the normal equation that we had derived for least squares regression, $\theta^{T}=\left(y^{T} X\right)\left(X^{T} X\right)^{-1}$.
The analogy is that here, the $x$'s are a linear function of the $z$'s (plus noise). Given the "guesses" for $z$ that the E-step has found, we will now try to estimate the unknown linearity $\Lambda$ relating the $x$'s and $z$'s. It is therefore no surprise that we obtain something similar to the normal equation. There is, however, one important difference between this and an algorithm that performs least squares using just the "best guesses" of the $z$'s; we will see this difference shortly.
To complete our M-step update, let's work out the values of the expectations in Equation (7). From our definition of $Q_{i}$ being Gaussian with mean $\mu_{z^{(i)} \mid x^{(i)}}$ and covariance $\Sigma_{z^{(i)} \mid x^{(i)}}$, we easily find

$$\begin{aligned}
\mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i) T}\right] &= \mu_{z^{(i)} \mid x^{(i)}}^{T}, \\
\mathrm{E}_{z^{(i)} \sim Q_{i}}\left[z^{(i)} z^{(i) T}\right] &= \mu_{z^{(i)} \mid x^{(i)}} \mu_{z^{(i)} \mid x^{(i)}}^{T}+\Sigma_{z^{(i)} \mid x^{(i)}}.
\end{aligned}$$
The latter comes from the fact that, for a random variable $Y$, $\operatorname{Cov}(Y)=\mathrm{E}\left[Y Y^{T}\right]-\mathrm{E}[Y] \mathrm{E}[Y]^{T}$, and hence $\mathrm{E}\left[Y Y^{T}\right]=\mathrm{E}[Y] \mathrm{E}[Y]^{T}+\operatorname{Cov}(Y)$. Substituting this back into Equation (7), we get the M-step update for $\Lambda$:

$$\Lambda = \left(\sum_{i=1}^{n}\left(x^{(i)}-\mu\right) \mu_{z^{(i)} \mid x^{(i)}}^{T}\right)\left(\sum_{i=1}^{n} \mu_{z^{(i)} \mid x^{(i)}} \mu_{z^{(i)} \mid x^{(i)}}^{T}+\Sigma_{z^{(i)} \mid x^{(i)}}\right)^{-1}.$$
It is important to note the presence of the $\Sigma_{z^{(i)} \mid x^{(i)}}$ on the right hand side of this equation. This is the covariance in the posterior distribution $p\left(z^{(i)} \mid x^{(i)}\right)$ of $z^{(i)}$ given $x^{(i)}$, and the M-step must take into account this uncertainty about $z^{(i)}$ in the posterior. A common mistake in deriving EM is to assume that in the E-step, we need to calculate only the expectation $\mathrm{E}[z]$ of the latent random variable $z$, and then plug that into the optimization in the M-step everywhere $z$ occurs. While this worked for simple problems such as the mixture of Gaussians, in our derivation for factor analysis, we needed $\mathrm{E}\left[z z^{T}\right]$ as well as $\mathrm{E}[z]$; and as we saw, $\mathrm{E}\left[z z^{T}\right]$ and $\mathrm{E}[z] \mathrm{E}[z]^{T}$ differ by the quantity $\Sigma_{z \mid x}$. Thus, the M-step update must take into account the covariance of $z$ in the posterior distribution $p\left(z^{(i)} \mid x^{(i)}\right)$.
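As a sketch, the $\Lambda$ update takes only a few lines given the E-step outputs above; the `n * Sigma_post` term is exactly the posterior-covariance correction that a plain least-squares fit to the "best guesses" would miss (names and shapes follow the `e_step` sketch above):

```python
import numpy as np

def m_step_Lambda(X, mu, mu_post, Sigma_post):
    """M-step update for Lambda. mu_post is n x k (row i is mu_{z^(i)|x^(i)}^T);
    Sigma_post is the shared k x k posterior covariance."""
    n = X.shape[0]
    A = (X - mu).T @ mu_post                  # sum_i (x^(i) - mu) mu_{z|x}^T
    B = mu_post.T @ mu_post + n * Sigma_post  # sum_i E[z^(i) z^(i)T]
    return np.linalg.solve(B, A.T).T          # A B^{-1} (B is symmetric)
```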
Lastly, we can also find the M-step optimizations for the parameters $\mu$ and $\Psi$. It is not hard to show that the first is given by

$$\mu = \frac{1}{n} \sum_{i=1}^{n} x^{(i)}.$$
Since this doesn't change as the parameters are varied (i.e., unlike the update for $\Lambda$, the right hand side does not depend on $Q_{i}\left(z^{(i)}\right)=p\left(z^{(i)} \mid x^{(i)} ; \mu, \Lambda, \Psi\right)$, which in turn depends on the parameters), this can be calculated just once and need not be further updated as the algorithm is run. Similarly, the diagonal $\Psi$ can be found by calculating

$$\Phi = \frac{1}{n} \sum_{i=1}^{n}\left(x^{(i)}-\mu\right)\left(x^{(i)}-\mu\right)^{T}-\left(x^{(i)}-\mu\right) \mu_{z^{(i)} \mid x^{(i)}}^{T} \Lambda^{T}-\Lambda \mu_{z^{(i)} \mid x^{(i)}}\left(x^{(i)}-\mu\right)^{T}+\Lambda\left(\mu_{z^{(i)} \mid x^{(i)}} \mu_{z^{(i)} \mid x^{(i)}}^{T}+\Sigma_{z^{(i)} \mid x^{(i)}}\right) \Lambda^{T},$$
and setting $\Psi_{ii}=\Phi_{ii}$ (i.e., letting $\Psi$ be the diagonal matrix containing only the diagonal entries of $\Phi$).
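Putting all the pieces together, the following is a compact sketch of the full EM loop for factor analysis. The initialization choices are our own, and a careful implementation would also track the log likelihood to test for convergence:

```python
import numpy as np

def factor_analysis_em(X, k, n_iters=100, seed=0):
    """Fit mu (d,), Lambda (d x k), and the diagonal psi (d,) of Psi by EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    mu = X.mean(axis=0)  # closed form; computed once (see text)
    Lam = rng.normal(scale=0.1, size=(d, k))
    psi = X.var(axis=0)  # start Psi at the per-coordinate variances
    Xc = X - mu

    for _ in range(n_iters):
        # E-step: p(z^(i) | x^(i)) = N(mu_post[i], Sigma_post)
        G_inv = np.linalg.inv(Lam @ Lam.T + np.diag(psi))
        mu_post = Xc @ G_inv @ Lam                    # n x k posterior means
        Sigma_post = np.eye(k) - Lam.T @ G_inv @ Lam  # shared posterior covariance

        # M-step for Lambda
        A = Xc.T @ mu_post                        # sum_i (x - mu) E[z]^T
        B = mu_post.T @ mu_post + n * Sigma_post  # sum_i E[z z^T]
        Lam = np.linalg.solve(B, A.T).T           # A B^{-1}

        # M-step for Psi: diagonal of Phi, the expected residual second moments
        P = mu_post @ Lam.T                       # Lambda E[z^(i)], row-wise
        psi = ((Xc - P) ** 2).mean(axis=0) \
              + np.einsum('ja,ab,jb->j', Lam, Sigma_post, Lam)

    return mu, Lam, psi
```

Note that $\Lambda$ is identifiable only up to rotation: for any orthogonal $R$, replacing $\Lambda$ with $\Lambda R$ leaves $\Lambda \Lambda^{T}+\Psi$, and hence the likelihood, unchanged. So a fit should be compared to ground truth through $\Lambda \Lambda^{T}+\Psi$ rather than entrywise.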
You can read the notes from the next lecture from Andrew Ng's CS229 course on Principal Components Analysis here.
This is the set of points $x$ satisfying $x=\sum_{i=1}^{n} \alpha_{i} x^{(i)}$, for some $\alpha_{i}$'s so that $\sum_{i=1}^{n} \alpha_{i}=1$. ↩︎