
The k-means Clustering Algorithm

In the clustering problem, we are given a training set $\{x^{(1)}, \ldots, x^{(n)}\}$, and want to group the data into a few cohesive "clusters." Here, $x^{(i)} \in \mathbb{R}^{d}$ as usual, but no labels $y^{(i)}$ are given. So, this is an unsupervised learning problem.
The $k$-means clustering algorithm is as follows:
  1. Initialize cluster centroids $\mu_{1}, \mu_{2}, \ldots, \mu_{k} \in \mathbb{R}^{d}$ randomly.
  2. Repeat until convergence: {

     For every $i$, set
     $$c^{(i)} := \arg\min_{j} \left\|x^{(i)} - \mu_{j}\right\|^{2}.$$
     For each $j$, set
     $$\mu_{j} := \frac{\sum_{i=1}^{n} 1\{c^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{c^{(i)} = j\}}.$$

     }
In the algorithm above, $k$ (a parameter of the algorithm) is the number of clusters we want to find, and the cluster centroids $\mu_{j}$ represent our current guesses for the positions of the centers of the clusters. To initialize the cluster centroids (in step 1 of the algorithm above), we could choose $k$ training examples randomly, and set the cluster centroids to be equal to the values of these $k$ examples. (Other initialization methods are also possible.)
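For concreteness, here is a minimal NumPy sketch of the algorithm (the helper name `kmeans` and its defaults are illustrative choices, not standard API); it initializes the centroids by choosing $k$ training examples at random, as described above.

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Minimal k-means sketch: X has shape (n, d); returns (assignments c, centroids mu)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Step 1: initialize centroids to k randomly chosen training examples.
    mu = X[rng.choice(n, size=k, replace=False)].copy()

    for _ in range(max_iters):
        # Step 2(i): assign each example to its closest centroid.
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # shape (n, k)
        c = dists.argmin(axis=1)

        # Step 2(ii): move each centroid to the mean of the points assigned to it
        # (if a cluster is empty, keep its centroid where it is).
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])

        # Stop once the centroids no longer move.
        if np.allclose(new_mu, mu):
            mu = new_mu
            break
        mu = new_mu

    return c, mu
```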
The inner loop of the algorithm repeatedly carries out two steps: (i) "assigning" each training example $x^{(i)}$ to the closest cluster centroid $\mu_{j}$, and (ii) moving each cluster centroid $\mu_{j}$ to the mean of the points assigned to it. Figure 1 shows an illustration of running $k$-means.
Figure 1: K-means algorithm. Training examples are shown as dots, and cluster centroids are shown as crosses. (a) Original dataset. (b) Random initial cluster centroids (in this instance, not chosen to be equal to two training examples). (c-f) Illustration of running two iterations of k-means. In each iteration, we assign each training example to the closest cluster centroid (shown by "painting" the training examples the same color as the cluster centroid to which it is assigned); then we move each cluster centroid to the mean of the points assigned to it. (Best viewed in color.) Images courtesy of Michael Jordan.
Is the $k$-means algorithm guaranteed to converge? Yes it is, in a certain sense. In particular, let us define the distortion function to be:
$$J(c, \mu) = \sum_{i=1}^{n} \left\|x^{(i)} - \mu_{c^{(i)}}\right\|^{2}$$
Thus, $J$ measures the sum of squared distances between each training example $x^{(i)}$ and the cluster centroid $\mu_{c^{(i)}}$ to which it has been assigned. It can be shown that $k$-means is exactly coordinate descent on $J$. Specifically, the inner loop of $k$-means repeatedly minimizes $J$ with respect to $c$ while holding $\mu$ fixed, and then minimizes $J$ with respect to $\mu$ while holding $c$ fixed. Thus, $J$ must monotonically decrease, and the value of $J$ must converge. (Usually, this implies that $c$ and $\mu$ will converge too. In theory, it is possible for $k$-means to oscillate between a few different clusterings, i.e., a few different values for $c$ and/or $\mu$ that have exactly the same value of $J$, but this almost never happens in practice.)
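The distortion can be computed directly from the assignments and centroids; below is a small helper in the same style as the sketch above (the name `distortion` is illustrative).

```python
def distortion(X, c, mu):
    """J(c, mu): sum of squared distances from each example to its assigned centroid."""
    return ((X - mu[c]) ** 2).sum()
```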
The distortion function $J$ is a non-convex function, and so coordinate descent on $J$ is not guaranteed to converge to the global minimum. In other words, $k$-means can be susceptible to local optima. Very often $k$-means will work fine and come up with very good clusterings despite this. But if you are worried about getting stuck in bad local minima, one common thing to do is run $k$-means many times (using different random initial values for the cluster centroids $\mu_{j}$). Then, out of all the different clusterings found, pick the one that gives the lowest distortion $J(c, \mu)$.
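A minimal sketch of this restart strategy, reusing the illustrative `kmeans` and `distortion` helpers from above:

```python
def kmeans_best_of(X, k, n_restarts=10):
    """Run k-means several times from different random initializations
    and keep the clustering with the lowest distortion J."""
    best = None
    for seed in range(n_restarts):
        c, mu = kmeans(X, k, seed=seed)
        J = distortion(X, c, mu)
        if best is None or J < best[0]:
            best = (J, c, mu)
    return best  # (lowest J found, assignments, centroids)
```

Each restart only changes the random initialization; the clustering that is kept is simply the one with the smallest value of $J(c, \mu)$.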
