Mixtures of Gaussians and the EM algorithm

You can read the notes from the previous lecture of Andrew Ng's CS229 course, on the k-means clustering algorithm, here.
In this set of notes, we discuss the EM (Expectation-Maximization) algorithm for density estimation.
Suppose that we are given a training set $\{x^{(1)}, \ldots, x^{(n)}\}$ as usual. Since we are in the unsupervised learning setting, these points do not come with any labels.
We wish to model the data by specifying a joint distribution $p(x^{(i)}, z^{(i)}) = p(x^{(i)} \mid z^{(i)})\, p(z^{(i)})$. Here, $z^{(i)} \sim \operatorname{Multinomial}(\phi)$ (where $\phi_j \geq 0$, $\sum_{j=1}^{k} \phi_j = 1$, and the parameter $\phi_j$ gives $p(z^{(i)} = j)$), and $x^{(i)} \mid z^{(i)} = j \sim \mathcal{N}(\mu_j, \Sigma_j)$. We let $k$ denote the number of values that the $z^{(i)}$'s can take on. Thus, our model posits that each $x^{(i)}$ was generated by randomly choosing $z^{(i)}$ from $\{1, \ldots, k\}$, and then $x^{(i)}$ was drawn from one of $k$ Gaussians depending on $z^{(i)}$. This is called the mixture of Gaussians model. Also, note that the $z^{(i)}$'s are latent random variables, meaning that they're hidden/unobserved. This is what will make our estimation problem difficult.
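To make this generative story concrete, here is a minimal sampling sketch in Python/NumPy. The parameter values ($k = 3$ components in two dimensions) are made up for illustration and are not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for a k = 3 mixture in 2 dimensions (made-up values).
phi = np.array([0.5, 0.3, 0.2])                       # mixing proportions, sum to 1
mu = np.array([[0.0, 0.0], [4.0, 4.0], [-4.0, 4.0]])  # one mean per component
Sigma = np.array([np.eye(2), 0.5 * np.eye(2), 2.0 * np.eye(2)])  # one covariance per component

def sample_gmm(n):
    """Draw n points from the mixture: z ~ Multinomial(phi), then x | z = j ~ N(mu_j, Sigma_j)."""
    z = rng.choice(len(phi), size=n, p=phi)           # latent component labels
    x = np.array([rng.multivariate_normal(mu[j], Sigma[j]) for j in z])
    return x, z

X, Z = sample_gmm(500)  # Z is latent: in the density-estimation problem we only observe X
```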
The parameters of our model are thus $\phi$, $\mu$ and $\Sigma$. To estimate them, we can write down the likelihood of our data:
( ϕ , μ , Σ ) = i = 1 n log p ( x ( i ) ; ϕ , μ , Σ ) = i = 1 n log z ( i ) = 1 k p ( x ( i ) | z ( i ) ; μ , Σ ) p ( z ( i ) ; ϕ ) . ( ϕ , μ , Σ ) = i = 1 n log p ( x ( i ) ; ϕ , μ , Σ ) = i = 1 n log z ( i ) = 1 k p ( x ( i ) | z ( i ) ; μ , Σ ) p ( z ( i ) ; ϕ ) . {:[ℓ(phi","mu","Sigma)=sum_(i=1)^(n)log p(x^((i));phi","mu","Sigma)],[=sum_(i=1)^(n)log sum_(z^((i))=1)^(k)p(x^((i))|z^((i));mu","Sigma)p(z^((i));phi).]:}\begin{aligned} \ell(\phi, \mu, \Sigma) &=\sum_{i=1}^{n} \log p(x^{(i)} ; \phi, \mu, \Sigma) \\ &=\sum_{i=1}^{n} \log \sum_{z^{(i)}=1}^{k} p(x^{(i)} | z^{(i)} ; \mu, \Sigma) p(z^{(i)} ; \phi) . \end{aligned}
However, if we set to zero the derivatives of this formula with respect to the parameters and try to solve, we'll find that it is not possible to find the maximum likelihood estimates of the parameters in closed form. (Try this yourself at home.)
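As a sanity check on this expression, the sketch below evaluates $\ell(\phi, \mu, \Sigma)$ directly, summing the weighted component densities inside the log for each point. It uses scipy's multivariate normal density; in practice one would usually work in log space for numerical stability, but the naive form mirrors the formula.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(X, phi, mu, Sigma):
    """Naive evaluation of  sum_i log sum_j p(x^(i) | z^(i)=j; mu, Sigma) p(z^(i)=j; phi)."""
    k = len(phi)
    # per_comp[i, j] = phi_j * N(x^(i); mu_j, Sigma_j)
    per_comp = np.column_stack([
        phi[j] * multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j])
        for j in range(k)
    ])
    return np.sum(np.log(per_comp.sum(axis=1)))

# e.g. log_likelihood(X, phi, mu, Sigma) with the samples and parameters from the previous snippet
```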
The random variables $z^{(i)}$ indicate which of the $k$ Gaussians each $x^{(i)}$ had come from. Note that if we knew what the $z^{(i)}$'s were, the maximum likelihood problem would have been easy. Specifically, we could then write down the likelihood as
( ϕ , μ , Σ ) = i = 1 n log p ( x ( i ) | z ( i ) ; μ , Σ ) + log p ( z ( i ) ; ϕ ) . ( ϕ , μ , Σ ) = i = 1 n log p ( x ( i ) | z ( i ) ; μ , Σ ) + log p ( z ( i ) ; ϕ ) . ℓ(phi,mu,Sigma)=sum_(i=1)^(n)log p(x^((i))|z^((i));mu,Sigma)+log p(z^((i));phi).\ell(\phi, \mu, \Sigma)=\sum_{i=1}^{n} \log p(x^{(i)} | z^{(i)} ; \mu, \Sigma)+\log p(z^{(i)} ; \phi).
Maximizing this with respect to $\phi$, $\mu$ and $\Sigma$ gives the parameters:
$$
\begin{aligned}
\phi_j &= \frac{1}{n} \sum_{i=1}^{n} 1\left\{z^{(i)} = j\right\}, \\
\mu_j &= \frac{\sum_{i=1}^{n} 1\left\{z^{(i)} = j\right\} x^{(i)}}{\sum_{i=1}^{n} 1\left\{z^{(i)} = j\right\}}, \\
\Sigma_j &= \frac{\sum_{i=1}^{n} 1\left\{z^{(i)} = j\right\} \left(x^{(i)} - \mu_j\right)\left(x^{(i)} - \mu_j\right)^{T}}{\sum_{i=1}^{n} 1\left\{z^{(i)} = j\right\}}.
\end{aligned}
$$
Indeed, we see that if the $z^{(i)}$'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the $z^{(i)}$'s play the role of the class labels.[1]
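As a hedged sketch, these three closed-form estimates translate directly into code when the labels are observed (for instance the `Z` returned by the earlier sampling snippet). The function name is ours, and it assumes every component is assigned at least one point.

```python
import numpy as np

def mle_with_labels(X, Z, k):
    """Closed-form maximum likelihood estimates of (phi, mu, Sigma) when each z^(i) is observed."""
    n, d = X.shape
    phi = np.zeros(k)
    mu = np.zeros((k, d))
    Sigma = np.zeros((k, d, d))
    for j in range(k):
        mask = (Z == j)                 # the indicator 1{z^(i) = j}
        n_j = mask.sum()                # number of points assigned to Gaussian j
        phi[j] = n_j / n
        mu[j] = X[mask].mean(axis=0)
        diff = X[mask] - mu[j]
        Sigma[j] = diff.T @ diff / n_j  # maximum likelihood (not unbiased) covariance
    return phi, mu, Sigma
```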
However, in our density estimation problem, the $z^{(i)}$'s are not known. What can we do?
The EM algorithm is an iterative algorithm that has two main steps. Applied to our problem, in the E-step, it tries to "guess" the values of the $z^{(i)}$'s. In the M-step, it updates the parameters of our model based on our guesses. Since in the M-step we are pretending that the guesses in the first part were correct, the maximization becomes easy. Here's the algorithm:
Repeat until convergence: {

(E-step) For each $i, j$, set

$$
w_j^{(i)} := p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)
$$

(M-step) Update the parameters:

$$
\begin{aligned}
\phi_j &:= \frac{1}{n} \sum_{i=1}^{n} w_j^{(i)}, \\
\mu_j &:= \frac{\sum_{i=1}^{n} w_j^{(i)} x^{(i)}}{\sum_{i=1}^{n} w_j^{(i)}}, \\
\Sigma_j &:= \frac{\sum_{i=1}^{n} w_j^{(i)} \left(x^{(i)} - \mu_j\right)\left(x^{(i)} - \mu_j\right)^{T}}{\sum_{i=1}^{n} w_j^{(i)}}
\end{aligned}
$$

}
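To connect the pseudocode to something runnable, below is a minimal Python/NumPy sketch of the loop above (not from the original notes). The initialization, stopping rule, and the use of scipy's multivariate normal density are illustrative choices of ours, and no effort is made at numerical hardening such as log-space computation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=100, tol=1e-6, seed=0):
    """EM for a mixture of k Gaussians. Returns (phi, mu, Sigma, w)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Illustrative initialization: uniform weights, random points as means, shared data covariance.
    phi = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, size=k, replace=False)]
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])

    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: w[i, j] = p(z^(i) = j | x^(i); phi, mu, Sigma) via Bayes' rule.
        dens = np.column_stack([
            phi[j] * multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j])
            for j in range(k)
        ])
        w = dens / dens.sum(axis=1, keepdims=True)

        # M-step: weighted versions of the known-z maximum likelihood formulas.
        n_j = w.sum(axis=0)                    # effective number of points per component
        phi = n_j / n
        mu = (w.T @ X) / n_j[:, None]
        for j in range(k):
            diff = X - mu[j]
            Sigma[j] = (w[:, j, None] * diff).T @ diff / n_j[j]

        ll = np.sum(np.log(dens.sum(axis=1)))  # monitor the log-likelihood for convergence
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return phi, mu, Sigma, w
```

With the sampled data from earlier, a call such as `phi_hat, mu_hat, Sigma_hat, w = em_gmm(X, k=3)` fits the mixture; `w[i, j]` is the soft guess $w_j^{(i)}$.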
In the E-step, we calculate the posterior probability of the $z^{(i)}$'s, given the $x^{(i)}$'s and using the current setting of our parameters. I.e., using Bayes' rule, we obtain:
$$
p\left(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma\right) = \frac{p\left(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma\right) p\left(z^{(i)} = j; \phi\right)}{\sum_{l=1}^{k} p\left(x^{(i)} \mid z^{(i)} = l; \mu, \Sigma\right) p\left(z^{(i)} = l; \phi\right)}
$$
Here, $p\left(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma\right)$ is given by evaluating the density of a Gaussian with mean $\mu_j$ and covariance $\Sigma_j$ at $x^{(i)}$; $p\left(z^{(i)} = j; \phi\right)$ is given by $\phi_j$, and so on. The values $w_j^{(i)}$ calculated in the E-step represent our "soft" guesses[2] for the values of $z^{(i)}$.
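Spelling out this Bayes-rule computation for a single point, a small sketch (again using scipy's Gaussian density; the function name is ours):

```python
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(x, phi, mu, Sigma):
    """w_j = p(z = j | x; phi, mu, Sigma) for a single point x, computed by Bayes' rule."""
    numer = np.array([
        multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j]) * phi[j]  # p(x | z=j) p(z=j)
        for j in range(len(phi))
    ])
    return numer / numer.sum()  # denominator sums over l = 1, ..., k
```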
Also, you should contrast the updates in the M-step with the formulas we had when the $z^{(i)}$'s were known exactly. They are identical, except that instead of the indicator functions "$1\{z^{(i)} = j\}$" indicating from which Gaussian each datapoint had come, we now instead have the $w_j^{(i)}$'s.
The EM algorithm is also reminiscent of the k-means clustering algorithm, except that instead of the "hard" cluster assignments $c^{(i)}$, we instead have the "soft" assignments $w_j^{(i)}$. Similar to k-means, it is also susceptible to local optima, so reinitializing at several different initial parameters may be a good idea.
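For completeness, here is a brief sketch (reusing the hypothetical `em_gmm` and `log_likelihood` functions from the earlier snippets) of turning the soft weights into k-means-style hard assignments and of the reinitialization heuristic just mentioned:

```python
import numpy as np

# Hard, k-means-style assignments: the most probable component under the soft weights.
c = w.argmax(axis=1)  # w as returned by the em_gmm sketch above

# Guard against local optima: rerun EM from several initializations and keep the best fit.
fits = [em_gmm(X, k=3, seed=s) for s in range(5)]
best = max(fits, key=lambda fit: log_likelihood(X, fit[0], fit[1], fit[2]))
```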
It's clear that the EM algorithm has a very natural interpretation of repeatedly trying to guess the unknown $z^{(i)}$'s; but how did it come about, and can we make any guarantees about it, such as regarding its convergence? In the next set of notes, we will describe a more general view of EM, one that will allow us to easily apply it to other estimation problems in which there are also latent variables, and which will allow us to give a convergence guarantee.
You can read the notes from the next lecture of Andrew Ng's CS229 course, on the Expectation-Maximization algorithm, here.

1. There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the $z^{(i)}$'s to be multinomial rather than Bernoulli, and second because here we are using a different $\Sigma_j$ for each Gaussian. ↩︎
  2. "The term "soft" refers to our guesses being probabilities and taking values in [ 0 , 1 ] ; [ 0 , 1 ] ; [0,1];[0,1] ; in contrast, a "hard" guess is one that represents a single best guess (such as taking values in { 0 , 1 } { 0 , 1 } {0,1}\{0,1\} or { 1 , , k } { 1 , , k } {1,dots,k}\{1, \ldots, k\}). ↩︎
