Introduction to Generative Learning Algorithms

This is the second lecture from Andrew Ng's CS229 course on Machine Learning. You can read the notes from the first lecture on Supervised Learning here.
So far, we've mainly been talking about learning algorithms that model $p(y|x;\theta)$, the conditional distribution of $y$ given $x$. For instance, logistic regression modeled $p(y|x;\theta)$ as $h_\theta(x) = g(\theta^T x)$, where $g$ is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish between elephants ($y=1$) and dogs ($y=0$), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line, that is, a decision boundary, that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.
Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.
Algorithms that try to learn $p(y|x)$ directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs $\mathcal{X}$ to the labels $\{0,1\}$ (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model $p(x|y)$ (and $p(y)$). These algorithms are called generative learning algorithms. For instance, if $y$ indicates whether an example is a dog (0) or an elephant (1), then $p(x|y=0)$ models the distribution of dogs' features, and $p(x|y=1)$ models the distribution of elephants' features.
After modeling $p(y)$ (called the class priors) and $p(x \mid y)$, our algorithm can then use Bayes rule to derive the posterior distribution on $y$ given $x$:
$$p(y|x) = \frac{p(x|y)\,p(y)}{p(x)}.$$
Here, the denominator is given by $p(x) = p(x|y=1)\,p(y=1) + p(x|y=0)\,p(y=0)$ (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities $p(x \mid y)$ and $p(y)$ that we've learned. Actually, if we're calculating $p(y \mid x)$ in order to make a prediction, then we don't need to calculate the denominator, since
$$
\begin{aligned}
\arg \max_{y} p(y|x) &= \arg \max_{y} \frac{p(x|y)\,p(y)}{p(x)} \\
&= \arg \max_{y} p(x|y)\,p(y).
\end{aligned}
$$
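As a quick numerical illustration (with made-up numbers, not from the lecture): suppose the prior is $p(y=1) = 0.3$ and, for a particular new input $x$, the class-conditional densities evaluate to $p(x \mid y=1) = 0.5$ and $p(x \mid y=0) = 0.1$. Then

$$p(y=1 \mid x) = \frac{0.5 \cdot 0.3}{0.5 \cdot 0.3 + 0.1 \cdot 0.7} = \frac{0.15}{0.22} \approx 0.68,$$

so we predict $y=1$; comparing just the unnormalized products $0.15$ and $0.07$ gives the same decision, which is exactly the $\arg\max$ shortcut above.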

1. Gaussian discriminant analysis

The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that $p(x|y)$ is distributed according to a multivariate normal distribution. Let's talk briefly about the properties of multivariate normal distributions before moving on to the GDA model itself.

1.1. The multivariate normal distribution

The multivariate normal distribution in $d$ dimensions, also called the multivariate Gaussian distribution, is parameterized by a mean vector $\mu \in \mathbb{R}^d$ and a covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$, where $\Sigma \geq 0$ is symmetric and positive semi-definite. Also written "$\mathcal{N}(\mu, \Sigma)$", its density is given by:
$$p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right).$$
In the equation above, "$|\Sigma|$" denotes the determinant of the matrix $\Sigma$.
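As a quick numerical check of this density formula, here is a minimal NumPy/SciPy sketch (not part of the original notes; the example values of $\mu$, $\Sigma$, and $x$ are made up), assuming SciPy is available:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_density(x, mu, Sigma):
    """Evaluate the multivariate normal density N(mu, Sigma) at x, directly from the formula."""
    d = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / ((2 * np.pi) ** (d / 2) * np.linalg.det(Sigma) ** 0.5)
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
    return norm_const * np.exp(-0.5 * quad)

mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.0, 0.0])

print(gaussian_density(x, mu, Sigma))                  # direct evaluation of the formula
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # should agree with the line above
```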
For a random variable $X$ distributed $\mathcal{N}(\mu, \Sigma)$, the mean is (unsurprisingly) given by $\mu$:
$$\mathrm{E}[X] = \int_x x\, p(x; \mu, \Sigma)\, dx = \mu$$
The covariance of a vector-valued random variable $Z$ is defined as $\operatorname{Cov}(Z) = \mathrm{E}\left[(Z - \mathrm{E}[Z])(Z - \mathrm{E}[Z])^T\right]$. This generalizes the notion of the variance of a real-valued random variable. The covariance can also be defined as $\operatorname{Cov}(Z) = \mathrm{E}[ZZ^T] - (\mathrm{E}[Z])(\mathrm{E}[Z])^T$. (You should be able to prove to yourself that these two definitions are equivalent.) If $X \sim \mathcal{N}(\mu, \Sigma)$, then
$$\operatorname{Cov}(X) = \Sigma.$$
Here are some examples of what the density of a Gaussian distribution looks like:
The left-most figure shows a Gaussian with mean zero (that is, the $2 \times 1$ zero vector) and covariance matrix $\Sigma = I$ (the $2 \times 2$ identity matrix). A Gaussian with zero mean and identity covariance is also called the standard normal distribution. The middle figure shows the density of a Gaussian with zero mean and $\Sigma = 0.6I$; and the rightmost figure shows one with $\Sigma = 2I$. We see that as $\Sigma$ becomes larger, the Gaussian becomes more "spread-out," and as it becomes smaller, the distribution becomes more "compressed." Let's look at some more examples.
The figures above show Gaussians with mean $0$, and with covariance matrices respectively
$$\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}; \quad \Sigma = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}; \quad \Sigma = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix}.$$
The leftmost figure shows the familiar standard normal distribution, and we see that as we increase the off-diagonal entry in $\Sigma$, the density becomes more "compressed" towards the $45^\circ$ line (given by $x_1 = x_2$). We can see this more clearly when we look at the contours of the same three densities:
Here's one last set of examples generated by varying $\Sigma$:
The plots above used, respectively,
$$\Sigma = \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}; \quad \Sigma = \begin{bmatrix} 1 & -0.8 \\ -0.8 & 1 \end{bmatrix}; \quad \Sigma = \begin{bmatrix} 3 & 0.8 \\ 0.8 & 1 \end{bmatrix}.$$
From the leftmost and middle figures, we see that by decreasing the off-diagonal elements of the covariance matrix, the density now becomes "compressed" again, but in the opposite direction. Lastly, as we vary the parameters, more generally the contours will form ellipses (the rightmost figure showing an example).
As our last set of examples, fixing $\Sigma = I$ and varying $\mu$, we can also move the mean of the density around.
The figures above were generated using $\Sigma = I$, and respectively
$$\mu = \begin{bmatrix} 1 \\ 0 \end{bmatrix}; \quad \mu = \begin{bmatrix} -0.5 \\ 0 \end{bmatrix}; \quad \mu = \begin{bmatrix} -1 \\ -1.5 \end{bmatrix}.$$

1.2. The Gaussian Discriminant Analysis model

When we have a classification problem in which the input features $x$ are continuous-valued random variables, we can then use the Gaussian Discriminant Analysis (GDA) model, which models $p(x|y)$ using a multivariate normal distribution. The model is:
$$
\begin{aligned}
y &\sim \operatorname{Bernoulli}(\phi) \\
x \mid y=0 &\sim \mathcal{N}(\mu_0, \Sigma) \\
x \mid y=1 &\sim \mathcal{N}(\mu_1, \Sigma)
\end{aligned}
$$
Writing out the distributions, this is:
$$
\begin{aligned}
p(y) &= \phi^y (1-\phi)^{1-y} \\
p(x \mid y=0) &= \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu_0)^T \Sigma^{-1}(x-\mu_0)\right) \\
p(x \mid y=1) &= \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu_1)^T \Sigma^{-1}(x-\mu_1)\right)
\end{aligned}
$$
Here, the parameters of our model are $\phi$, $\Sigma$, $\mu_0$ and $\mu_1$. (Note that while there're two different mean vectors $\mu_0$ and $\mu_1$, this model is usually applied using only one covariance matrix $\Sigma$.) The log-likelihood of the data is given by
( ϕ , μ 0 , μ 1 , Σ ) = log i = 1 n p ( x ( i ) , y ( i ) ; ϕ , μ 0 , μ 1 , Σ ) = log i = 1 n p ( x ( i ) y ( i ) ; μ 0 , μ 1 , Σ ) p ( y ( i ) ; ϕ ) . ( ϕ , μ 0 , μ 1 , Σ ) = log i = 1 n p ( x ( i ) , y ( i ) ; ϕ , μ 0 , μ 1 , Σ ) = log i = 1 n p ( x ( i ) y ( i ) ; μ 0 , μ 1 , Σ ) p ( y ( i ) ; ϕ ) . {:[ℓ(phi","mu_(0)","mu_(1)","Sigma)=log prod_(i=1)^(n)p(x^((i))","y^((i));phi","mu_(0)","mu_(1)","Sigma)],[=log prod_(i=1)^(n)p(x^((i))∣y^((i));mu_(0)","mu_(1)","Sigma)p(y^((i));phi).]:}\begin{aligned} \ell(\phi, \mu_{0}, \mu_{1}, \Sigma) &=\log \prod_{i=1}^{n} p(x^{(i)}, y^{(i)} ; \phi, \mu_{0}, \mu_{1}, \Sigma) \\ &=\log \prod_{i=1}^{n} p(x^{(i)} \mid y^{(i)} ; \mu_{0}, \mu_{1}, \Sigma) p(y^{(i)} ; \phi). \end{aligned}
By maximizing $\ell$ with respect to the parameters, we find the maximum likelihood estimate of the parameters (see problem set 1) to be:
$$
\begin{aligned}
\phi &= \frac{1}{n} \sum_{i=1}^{n} 1\{y^{(i)} = 1\} \\
\mu_0 &= \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}} \\
\mu_1 &= \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}} \\
\Sigma &= \frac{1}{n} \sum_{i=1}^{n} (x^{(i)} - \mu_{y^{(i)}})(x^{(i)} - \mu_{y^{(i)}})^T.
\end{aligned}
$$
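For concreteness, here is a minimal NumPy sketch of these closed-form estimates and of the resulting classification rule (my own code, not from the notes; it assumes `X` is an $n \times d$ array of features and `y` an array of 0/1 labels):

```python
import numpy as np

def fit_gda(X, y):
    """Closed-form maximum likelihood estimates for GDA with a shared covariance matrix."""
    n, d = X.shape
    phi = np.mean(y == 1)
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Center every example by the mean of its own class, then average the outer products.
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = (centered.T @ centered) / n
    return phi, mu0, mu1, Sigma

def predict_gda(X, phi, mu0, mu1, Sigma):
    """Predict by comparing log p(x|y) + log p(y) for the two classes."""
    Sigma_inv = np.linalg.inv(Sigma)
    def log_joint(X, mu, prior):
        diff = X - mu
        # The shared Gaussian normalization constant cancels in the comparison, so it is omitted.
        return -0.5 * np.einsum('ij,jk,ik->i', diff, Sigma_inv, diff) + np.log(prior)
    return (log_joint(X, mu1, phi) > log_joint(X, mu0, 1 - phi)).astype(int)
```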
Pictorially, what the algorithm is doing can be seen as follows:
Shown in the figure are the training set, as well as the contours of the two Gaussian distributions that have been fit to the data in each of the two classes. Note that the two Gaussians have contours that are the same shape and orientation, since they share a covariance matrix $\Sigma$, but they have different means $\mu_0$ and $\mu_1$. Also shown in the figure is the straight line giving the decision boundary at which $p(y=1|x) = 0.5$. On one side of the boundary, we'll predict $y=1$ to be the most likely outcome, and on the other side, we'll predict $y=0$.

1.3. Discussion: GDA and logistic regression

The GDA model has an interesting relationship to logistic regression. If we view the quantity $p(y=1 | x; \phi, \mu_0, \mu_1, \Sigma)$ as a function of $x$, we'll find that it can be expressed in the form
$$p(y=1 | x; \phi, \Sigma, \mu_0, \mu_1) = \frac{1}{1 + \exp(-\theta^T x)},$$
where $\theta$ is some appropriate function of $\phi, \Sigma, \mu_0, \mu_1$.[1] This is exactly the form that logistic regression, a discriminative algorithm, used to model $p(y=1 | x)$.
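The algebra behind this is a standard calculation that the notes do not spell out; sketching it under the shared-$\Sigma$ assumption: by Bayes rule,

$$p(y=1 \mid x) = \frac{\phi\,\mathcal{N}(x;\mu_1,\Sigma)}{\phi\,\mathcal{N}(x;\mu_1,\Sigma) + (1-\phi)\,\mathcal{N}(x;\mu_0,\Sigma)} = \frac{1}{1+\exp(-a(x))},$$

and because both classes share $\Sigma$, the quadratic terms $x^T\Sigma^{-1}x$ cancel in the exponent, leaving

$$a(x) = (\mu_1-\mu_0)^T\Sigma^{-1}x + \frac{1}{2}\mu_0^T\Sigma^{-1}\mu_0 - \frac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 + \log\frac{\phi}{1-\phi},$$

which is linear in $x$; absorbing the constant term into $\theta$ via the intercept convention of footnote [1] gives exactly the logistic form above.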
When would we prefer one model over another? GDA and logistic regression will, in general, give different decision boundaries when trained on the same dataset. Which is better?
We just argued that if $p(x|y)$ is multivariate Gaussian (with shared $\Sigma$), then $p(y|x)$ necessarily follows a logistic function. The converse, however, is not true; i.e., $p(y|x)$ being a logistic function does not imply $p(x \mid y)$ is multivariate Gaussian. This shows that GDA makes stronger modeling assumptions about the data than does logistic regression. It turns out that when these modeling assumptions are correct, then GDA will find better fits to the data, and is a better model. Specifically, when $p(x|y)$ is indeed Gaussian (with shared $\Sigma$), then GDA is asymptotically efficient. Informally, this means that in the limit of very large training sets (large $n$), there is no algorithm that is strictly better than GDA (in terms of, say, how accurately they estimate $p(y|x)$). In particular, it can be shown that in this setting, GDA will be a better algorithm than logistic regression; and more generally, even for small training set sizes, we would generally expect GDA to do better.
In contrast, by making significantly weaker assumptions, logistic regression is also more robust and less sensitive to incorrect modeling assumptions. There are many different sets of assumptions that would lead to $p(y|x)$ taking the form of a logistic function. For example, if $x|y=0 \sim \operatorname{Poisson}(\lambda_0)$ and $x|y=1 \sim \operatorname{Poisson}(\lambda_1)$, then $p(y|x)$ will be logistic. Logistic regression will also work well on Poisson data like this. But if we were to use GDA on such data, and fit Gaussian distributions to such non-Gaussian data, then the results will be less predictable, and GDA may (or may not) do well.
To summarize: GDA makes stronger modeling assumptions, and is more data efficient (i.e., requires less training data to learn "well") when the modeling assumptions are correct or at least approximately correct. Logistic regression makes weaker assumptions, and is significantly more robust to deviations from modeling assumptions. Specifically, when the data is indeed non-Gaussian, then in the limit of large datasets, logistic regression will almost always do better than GDA. For this reason, in practice logistic regression is used more often than GDA. (Some related considerations about discriminative vs. generative models also apply for the Naive Bayes algorithm that we discuss next, but the Naive Bayes algorithm is still considered a very good, and is certainly also a very popular, classification algorithm.)

2. Naive Bayes

In GDA, the feature vectors $x$ were continuous, real-valued vectors. Let's now talk about a different learning algorithm in which the $x_j$'s are discrete-valued.
For our motivating example, consider building an email spam filter using machine learning. Here, we wish to classify messages according to whether they are unsolicited commercial (spam) email, or non-spam email. After learning to do this, we can then have our mail reader automatically filter out the spam messages and perhaps place them in a separate mail folder. Classifying emails is one example of a broader set of problems called text classification.
Let's say we have a training set (a set of emails labeled as spam or non-spam). We'll begin our construction of our spam filter by specifying the features $x_j$ used to represent an email.
We will represent an email via a feature vector whose length is equal to the number of words in the dictionary. Specifically, if an email contains the $j$-th word of the dictionary, then we will set $x_j = 1$; otherwise, we let $x_j = 0$. For instance, the vector
$$
x = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}
\quad
\begin{array}{l} \text{a} \\ \text{aardvark} \\ \text{aardwolf} \\ \vdots \\ \text{buy} \\ \vdots \\ \text{zygmurgy} \end{array}
$$
is used to represent an email that contains the words "a" and "buy," but not "aardvark," "aardwolf" or "zygmurgy."[2] The set of words encoded into the feature vector is called the vocabulary, so the dimension of $x$ is equal to the size of the vocabulary.
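As an illustration, here is a minimal sketch of how such a binary feature vector might be constructed (the toy vocabulary and email below are made up; a real vocabulary would be built from the training set, as footnote [2] describes):

```python
import numpy as np

# A toy vocabulary; in practice this would come from scanning the training set.
vocab = ["a", "aardvark", "aardwolf", "buy", "zygmurgy"]
word_to_index = {w: j for j, w in enumerate(vocab)}

def email_to_features(email_text):
    """Map an email to a binary vector: x_j = 1 iff vocabulary word j appears in it."""
    words = set(email_text.lower().split())
    x = np.zeros(len(vocab), dtype=int)
    for w in words:
        if w in word_to_index:
            x[word_to_index[w]] = 1
    return x

print(email_to_features("Buy now a great deal"))   # -> [1 0 0 1 0]
```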
Having chosen our feature vector, we now want to build a generative model. So, we have to model $p(x|y)$. But if we have, say, a vocabulary of 50000 words, then $x \in \{0,1\}^{50000}$ ($x$ is a 50000-dimensional vector of 0's and 1's), and if we were to model $x$ explicitly with a multinomial distribution over the $2^{50000}$ possible outcomes, then we'd end up with a $(2^{50000} - 1)$-dimensional parameter vector. This is clearly too many parameters.
To model $p(x|y)$, we will therefore make a very strong assumption. We will assume that the $x_j$'s are conditionally independent given $y$. This assumption is called the Naive Bayes (NB) assumption, and the resulting algorithm is called the Naive Bayes classifier. For instance, if $y=1$ means spam email, "buy" is word 2087, and "price" is word 39831, then we are assuming that if I tell you $y=1$ (that a particular piece of email is spam), then knowledge of $x_{2087}$ (knowledge of whether "buy" appears in the message) will have no effect on your beliefs about the value of $x_{39831}$ (whether "price" appears). More formally, this can be written $p(x_{2087} | y) = p(x_{2087} | y, x_{39831})$. (Note that this is not the same as saying that $x_{2087}$ and $x_{39831}$ are independent, which would have been written "$p(x_{2087}) = p(x_{2087} | x_{39831})$"; rather, we are only assuming that $x_{2087}$ and $x_{39831}$ are conditionally independent given $y$.)
We now have:
$$
\begin{aligned}
p(x_1, \ldots, x_{50000} \mid y) &= p(x_1 \mid y)\, p(x_2 \mid y, x_1)\, p(x_3 \mid y, x_1, x_2) \cdots p(x_{50000} \mid y, x_1, \ldots, x_{49999}) \\
&= p(x_1 \mid y)\, p(x_2 \mid y)\, p(x_3 \mid y) \cdots p(x_{50000} \mid y) \\
&= \prod_{j=1}^{d} p(x_j \mid y)
\end{aligned}
$$
The first equality simply follows from the usual properties of probabilities, and the second equality used the NB assumption. We note that even though the Naive Bayes assumption is an extremely strong assumption, the resulting algorithm works well on many problems.
Our model is parameterized by $\phi_{j|y=1} = p(x_j = 1 | y=1)$, $\phi_{j|y=0} = p(x_j = 1 | y=0)$, and $\phi_y = p(y=1)$. As usual, given a training set $\{(x^{(i)}, y^{(i)}); i = 1, \ldots, n\}$, we can write down the joint likelihood of the data:
$$\mathcal{L}(\phi_y, \phi_{j \mid y=0}, \phi_{j \mid y=1}) = \prod_{i=1}^{n} p(x^{(i)}, y^{(i)}).$$
Maximizing this with respect to $\phi_y$, $\phi_{j \mid y=0}$ and $\phi_{j \mid y=1}$ gives the maximum likelihood estimates:
$$
\begin{aligned}
\phi_{j|y=1} &= \frac{\sum_{i=1}^{n} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}} \\
\phi_{j|y=0} &= \frac{\sum_{i=1}^{n} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}} \\
\phi_y &= \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}}{n}
\end{aligned}
$$
In the equations above, the "$\wedge$" symbol means "and." The parameters have a very natural interpretation. For instance, $\phi_{j \mid y=1}$ is just the fraction of the spam ($y=1$) emails in which word $j$ does appear.
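Since these estimates are just per-class word frequencies, they are easy to compute. Here is a minimal NumPy sketch (my own notation, not from the notes; `X` is an $n \times d$ binary matrix and `y` a 0/1 label vector; Laplace smoothing, discussed in Section 2.1, is omitted here):

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Maximum likelihood estimates for the Bernoulli Naive Bayes model."""
    phi_y = np.mean(y == 1)               # p(y = 1)
    phi_j_y1 = X[y == 1].mean(axis=0)     # p(x_j = 1 | y = 1), one entry per vocabulary word
    phi_j_y0 = X[y == 0].mean(axis=0)     # p(x_j = 1 | y = 0)
    return phi_y, phi_j_y0, phi_j_y1
```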
Having fit all these parameters, to make a prediction on a new example with features $x$, we then simply calculate
$$
\begin{aligned}
p(y=1|x) &= \frac{p(x|y=1)\, p(y=1)}{p(x)} \\
&= \frac{\left(\prod_{j=1}^{d} p(x_j | y=1)\right) p(y=1)}{\left(\prod_{j=1}^{d} p(x_j | y=1)\right) p(y=1) + \left(\prod_{j=1}^{d} p(x_j | y=0)\right) p(y=0)},
\end{aligned}
$$
and pick whichever class has the higher posterior probability.
Lastly, we note that while we have developed the Naive Bayes algorithm mainly for the case of problems where the features $x_j$ are binary-valued, the generalization to where $x_j$ can take values in $\{1, 2, \ldots, k_j\}$ is straightforward. Here, we would simply model $p(x_j \mid y)$ as multinomial rather than as Bernoulli. Indeed, even if some original input attribute (say, the living area of a house, as in our earlier example) were continuous valued, it is quite common to discretize it, that is, turn it into a small set of discrete values, and apply Naive Bayes. For instance, if we use some feature $x_j$ to represent living area, we might discretize the continuous values as follows:
| Living area (sq. feet) | $< 400$ | 400-800 | 800-1200 | 1200-1600 | $> 1600$ |
|---|---|---|---|---|---|
| $x_j$ | 1 | 2 | 3 | 4 | 5 |
Thus, for a house with living area 890 square feet, we would set the value of the corresponding feature $x_j$ to 3. We can then apply the Naive Bayes algorithm, and model $p(x_j \mid y)$ with a multinomial distribution, as described previously. When the original, continuous-valued attributes are not well-modeled by a multivariate normal distribution, discretizing the features and using Naive Bayes (instead of GDA) will often result in a better classifier.
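For example, the bucketing in the table could be implemented with `np.digitize` (a small sketch using the bin edges above):

```python
import numpy as np

# Bin edges from the table: <400, 400-800, 800-1200, 1200-1600, >1600.
edges = [400, 800, 1200, 1600]

def discretize_living_area(sq_feet):
    """Map a continuous living area to a discrete feature value in {1, ..., 5}."""
    return int(np.digitize(sq_feet, edges)) + 1

print(discretize_living_area(890))   # -> 3
```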

2.1. Laplace smoothing

The Naive Bayes algorithm as we have described it will work fairly well for many problems, but there is a simple change that makes it work much better, especially for text classification. Let's briefly discuss a problem with the algorithm in its current form, and then talk about how we can fix it.
Consider spam/email classification, and let's suppose that we are in the year 20xx: after completing CS229 and having done excellent work on the project, you decide around May 20xx to submit the work you did to the NeurIPS conference for publication.[3] Because you end up discussing the conference in your emails, you also start getting messages with the word "neurips" in them. But this is your first NeurIPS paper, and until this time, you had not previously seen any emails containing the word "neurips"; in particular, "neurips" did not ever appear in your training set of spam/non-spam emails. Assuming that "neurips" was the 35000th word in the dictionary, your Naive Bayes spam filter therefore had picked its maximum likelihood estimates of the parameters $\phi_{35000|y}$ to be
$$
\begin{aligned}
\phi_{35000|y=1} &= \frac{\sum_{i=1}^{n} 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}} = 0 \\
\phi_{35000|y=0} &= \frac{\sum_{i=1}^{n} 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}} = 0
\end{aligned}
$$
I.e., because it has never seen "neurips" before in either spam or non-spam training examples, it thinks the probability of seeing it in either type of email is zero. Hence, when trying to decide if one of these messages containing "neurips" is spam, it calculates the class posterior probabilities, and obtains
$$
\begin{aligned}
p(y=1|x) &= \frac{\prod_{j=1}^{d} p(x_j | y=1)\, p(y=1)}{\prod_{j=1}^{d} p(x_j | y=1)\, p(y=1) + \prod_{j=1}^{d} p(x_j | y=0)\, p(y=0)} \\
&= \frac{0}{0}.
\end{aligned}
$$
This is because each of the terms "$\prod_{j=1}^{d} p(x_j | y)$" includes a term $p(x_{35000} \mid y) = 0$ that is multiplied into it. Hence, our algorithm obtains $0/0$, and doesn't know how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to estimate the probability of some event to be zero just because you haven't seen it before in your finite training set. Take the problem of estimating the mean of a multinomial random variable $z$ taking values in $\{1, \ldots, k\}$. We can parameterize our multinomial with $\phi_j = p(z = j)$. Given a set of $n$ independent observations $\{z^{(1)}, \ldots, z^{(n)}\}$, the maximum likelihood estimates are given by
$$\phi_j = \frac{\sum_{i=1}^{n} 1\{z^{(i)} = j\}}{n}.$$
As we saw previously, if we were to use these maximum likelihood estimates, then some of the $\phi_j$'s might end up as zero, which was a problem. To avoid this, we can use Laplace smoothing, which replaces the above estimate with
$$\phi_j = \frac{1 + \sum_{i=1}^{n} 1\{z^{(i)} = j\}}{k + n}.$$
Here, we've added 1 to the numerator, and $k$ to the denominator. Note that $\sum_{j=1}^{k} \phi_j = 1$ still holds (check this yourself!), which is a desirable property since the $\phi_j$'s are estimates for probabilities that we know must sum to 1. Also, $\phi_j \neq 0$ for all values of $j$, solving our problem of probabilities being estimated as zero. Under certain (arguably quite strong) conditions, it can be shown that Laplace smoothing actually gives the optimal estimator of the $\phi_j$'s.
Returning to our Naive Bayes classifier, with Laplace smoothing, we therefore obtain the following estimates of the parameters:
$$
\begin{aligned}
\phi_{j \mid y=1} &= \frac{1 + \sum_{i=1}^{n} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\}}{2 + \sum_{i=1}^{n} 1\{y^{(i)} = 1\}} \\
\phi_{j \mid y=0} &= \frac{1 + \sum_{i=1}^{n} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\}}{2 + \sum_{i=1}^{n} 1\{y^{(i)} = 0\}}
\end{aligned}
$$
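Putting the pieces together, here is a hedged sketch (my own code, not from the notes) of the smoothed estimates along with a prediction step; working with log-probabilities avoids the numerical underflow that would come from multiplying tens of thousands of small factors as in the formula of the previous section:

```python
import numpy as np

def fit_naive_bayes_smoothed(X, y):
    """Bernoulli Naive Bayes with Laplace smoothing on the per-word parameters."""
    phi_y = np.mean(y == 1)   # left unsmoothed, as the remark below suggests is usually fine
    phi_j_y1 = (1 + X[y == 1].sum(axis=0)) / (2 + np.sum(y == 1))
    phi_j_y0 = (1 + X[y == 0].sum(axis=0)) / (2 + np.sum(y == 0))
    return phi_y, phi_j_y0, phi_j_y1

def predict(X, phi_y, phi_j_y0, phi_j_y1):
    """Compare log p(x|y) + log p(y) for the two classes, in log space."""
    log_p1 = X @ np.log(phi_j_y1) + (1 - X) @ np.log(1 - phi_j_y1) + np.log(phi_y)
    log_p0 = X @ np.log(phi_j_y0) + (1 - X) @ np.log(1 - phi_j_y0) + np.log(1 - phi_y)
    return (log_p1 > log_p0).astype(int)
```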
(In practice, it usually doesn't matter much whether we apply Laplace smoothing to $\phi_y$ or not, since we will typically have a fair fraction each of spam and non-spam messages, so $\phi_y$ will be a reasonable estimate of $p(y=1)$ and will be quite far from 0 anyway.)

2.2. Event models for text classification

To close off our discussion of generative learning algorithms, let's talk about one more model that is specifically for text classification. While Naive Bayes as we've presented it will work well for many classification problems, for text classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses what's called the Bernoulli event model (or sometimes multi-variate Bernoulli event model). In this model, we assumed that the way an email is generated is that first it is randomly determined (according to the class priors $p(y)$) whether a spammer or non-spammer will send you your next message. Then, the person sending the email runs through the dictionary, deciding whether to include each word $j$ in that email independently and according to the probabilities $p(x_j = 1 | y) = \phi_{j|y}$. Thus, the probability of a message was given by $p(y) \prod_{j=1}^{d} p(x_j | y)$.
Here's a different model, called the multinomial event model. To describe this model, we will use a different notation and set of features for representing emails. We let $x_j$ denote the identity of the $j$-th word in the email. Thus, $x_j$ is now an integer taking values in $\{1, \ldots, |V|\}$, where $|V|$ is the size of our vocabulary (dictionary). An email of $d$ words is now represented by a vector $(x_1, x_2, \ldots, x_d)$ of length $d$; note that $d$ can vary for different documents. For instance, if an email starts with "A NeurIPS ...," then $x_1 = 1$ ("a" is the first word in the dictionary), and $x_2 = 35000$ (if "neurips" is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is generated is via a random process in which spam/non-spam is first determined (according to $p(y)$) as before. Then, the sender of the email writes the email by first generating $x_1$ from some multinomial distribution over words ($p(x_1 \mid y)$). Next, the second word $x_2$ is chosen independently of $x_1$ but from the same multinomial distribution, and similarly for $x_3$, $x_4$, and so on, until all $d$ words of the email have been generated. Thus, the overall probability of a message is given by $p(y) \prod_{j=1}^{d} p(x_j | y)$. Note that this formula looks like the one we had earlier for the probability of a message under the Bernoulli event model, but that the terms in the formula now mean very different things. In particular, $x_j | y$ is now a multinomial, rather than a Bernoulli, distribution.
The parameters for our new model are $\phi_y = p(y)$ as before, $\phi_{k|y=1} = p(x_j = k | y=1)$ (for any $j$) and $\phi_{k|y=0} = p(x_j = k | y=0)$. Note that we have assumed that $p(x_j | y)$ is the same for all values of $j$ (i.e., that the distribution according to which a word is generated does not depend on its position $j$ within the email).
If we are given a training set $\{(x^{(i)}, y^{(i)}); i = 1, \ldots, n\}$ where $x^{(i)} = (x_1^{(i)}, x_2^{(i)}, \ldots, x_{d_i}^{(i)})$ (here, $d_i$ is the number of words in the $i$-th training example), the likelihood of the data is given by
$$
\begin{aligned}
\mathcal{L}(\phi_y, \phi_{k \mid y=0}, \phi_{k \mid y=1}) &= \prod_{i=1}^{n} p(x^{(i)}, y^{(i)}) \\
&= \prod_{i=1}^{n} \left(\prod_{j=1}^{d_i} p(x_j^{(i)} | y; \phi_{k|y=0}, \phi_{k|y=1})\right) p(y^{(i)}; \phi_y).
\end{aligned}
$$
Maximizing this yields the maximum likelihood estimates of the parameters:
$$
\begin{aligned}
\phi_{k|y=1} &= \frac{\sum_{i=1}^{n} \sum_{j=1}^{d_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 1\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}\, d_i} \\
\phi_{k|y=0} &= \frac{\sum_{i=1}^{n} \sum_{j=1}^{d_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 0\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}\, d_i} \\
\phi_y &= \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}}{n}.
\end{aligned}
$$
If we were to apply Laplace smoothing (which is needed in practice for good performance) when estimating $\phi_{k|y=0}$ and $\phi_{k|y=1}$, we add 1 to the numerators and $|V|$ to the denominators, and obtain:
$$
\begin{aligned}
\phi_{k \mid y=1} &= \frac{1 + \sum_{i=1}^{n} \sum_{j=1}^{d_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 1\}}{|V| + \sum_{i=1}^{n} 1\{y^{(i)} = 1\}\, d_i} \\
\phi_{k \mid y=0} &= \frac{1 + \sum_{i=1}^{n} \sum_{j=1}^{d_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 0\}}{|V| + \sum_{i=1}^{n} 1\{y^{(i)} = 0\}\, d_i}.
\end{aligned}
$$
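A minimal sketch of these smoothed counts (my own code, not from the notes; here each document is represented as a list of 0-based word indices, and `vocab_size` plays the role of $|V|$):

```python
import numpy as np

def fit_multinomial_nb(docs, y, vocab_size):
    """Multinomial event model with Laplace smoothing.

    docs: list of n documents, each a list/array of word indices (0-based).
    y:    array of n labels in {0, 1}.
    """
    y = np.asarray(y)
    counts = np.zeros((2, vocab_size))   # word counts per class
    totals = np.zeros(2)                 # total number of word positions per class
    for doc, label in zip(docs, y):
        np.add.at(counts[label], doc, 1)   # accumulate counts, handling repeated words
        totals[label] += len(doc)
    phi_k_y0 = (1 + counts[0]) / (vocab_size + totals[0])
    phi_k_y1 = (1 + counts[1]) / (vocab_size + totals[1])
    phi_y = np.mean(y == 1)
    return phi_y, phi_k_y0, phi_k_y1
```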
While not necessarily the very best classification algorithm, the Naive Bayes classifier often works surprisingly well. It is often also a very good "first thing to try," given its simplicity and ease of implementation.
You can read the notes from the next CS229 lecture on Kernel Methods here.

  1. This uses the convention of redefining the $x^{(i)}$'s on the right-hand side to be $(d+1)$-dimensional vectors by adding the extra coordinate $x_0^{(i)} = 1$; see problem set 1.
  2. Actually, rather than looking through an English dictionary for the list of all English words, in practice it is more common to look through our training set and encode in our feature vector only the words that occur at least once there. Apart from reducing the number of words modeled and hence reducing our computational and space requirements, this also has the advantage of allowing us to model/include as a feature many words that may appear in your email (such as "cs229") but that you won't find in a dictionary. Sometimes (as in the homework), we also exclude the very high frequency words (which will be words like "the," "of," "and"; these high frequency, "content free" words are called stop words) since they occur in so many documents and do little to indicate whether an email is spam or non-spam.
  3. NeurIPS is one of the top machine learning conferences. The deadline for submitting a paper is typically in May-June.
