Introduction to Kernel Methods

You can read the notes from the previous lecture from Andrew Ng's CS229 course on Generative Learning Algorithms here.

1. Kernel Methods

1.1. Feature maps

Recall that in our discussion about linear regression, we considered the problem of predicting the price of a house (denoted by $y$) from the living area of the house (denoted by $x$), and we fit a linear function of $x$ to the training data. What if the price $y$ can be more accurately represented as a non-linear function of $x$? In this case, we need a more expressive family of models than linear models.

We start by considering fitting cubic functions $y=\theta_{3} x^{3}+\theta_{2} x^{2}+\theta_{1} x+\theta_{0}$. It turns out that we can view the cubic function as a linear function over a different set of feature variables (defined below). Concretely, let the function $\phi: \mathbb{R} \rightarrow \mathbb{R}^{4}$ be defined as
\begin{equation} \phi(x)=\left[\begin{array}{c} 1 \\ x \\ x^{2} \\ x^{3} \end{array}\right] \in \mathbb{R}^{4} \end{equation}
Let $\theta \in \mathbb{R}^{4}$ be the vector containing $\theta_{0}, \theta_{1}, \theta_{2}, \theta_{3}$ as entries. Then we can rewrite the cubic function in $x$ as:
$$\theta_{3} x^{3}+\theta_{2} x^{2}+\theta_{1} x+\theta_{0}=\theta^{T} \phi(x)$$
Thus, a cubic function of the variable $x$ can be viewed as a linear function over the variables $\phi(x)$. To distinguish between these two sets of variables, in the context of kernel methods, we will call the "original" input value the input attributes of a problem (in this case, $x$, the living area). When the original input is mapped to some new set of quantities $\phi(x)$, we will call those new quantities the feature variables. (Unfortunately, different authors use different terms to describe these two things in different contexts.) We will call $\phi$ a feature map, which maps the attributes to the features.
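To make the feature-map idea concrete, here is a minimal NumPy sketch of the cubic feature map in equation (1); the function name `phi_cubic` and the numeric values are our own illustrative choices, not part of the notes.

```python
import numpy as np

def phi_cubic(x: float) -> np.ndarray:
    """Cubic feature map phi: R -> R^4 from equation (1)."""
    return np.array([1.0, x, x**2, x**3])

# A cubic function of x is a linear function of phi(x):
theta = np.array([5.0, 2.0, -1.0, 0.5])   # [theta_0, theta_1, theta_2, theta_3]
x = 3.0
assert np.isclose(theta @ phi_cubic(x), 0.5 * x**3 - 1.0 * x**2 + 2.0 * x + 5.0)
```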

1.2. LMS (least mean squares) with features

We will derive the gradient descent algorithm for fitting the model $\theta^{T} \phi(x)$. First recall that for the ordinary least squares problem where we were to fit $\theta^{T} x$, the batch gradient descent update is (see the first lecture note for its derivation):
\begin{align} \theta &:=\theta+\alpha \sum_{i=1}^{n}\left(y^{(i)}-h_{\theta}(x^{(i)})\right) x^{(i)}\nonumber \\ &:=\theta+\alpha \sum_{i=1}^{n}\left(y^{(i)}-\theta^{T} x^{(i)}\right) x^{(i)} \end{align}
Let $\phi: \mathbb{R}^{d} \rightarrow \mathbb{R}^{p}$ be a feature map that maps attribute $x$ (in $\mathbb{R}^{d}$) to the features $\phi(x)$ in $\mathbb{R}^{p}$. (In the motivating example in the previous subsection, we have $d=1$ and $p=4$.) Now our goal is to fit the function $\theta^{T} \phi(x)$, with $\theta$ being a vector in $\mathbb{R}^{p}$ instead of $\mathbb{R}^{d}$. We can replace all the occurrences of $x^{(i)}$ in the algorithm above by $\phi(x^{(i)})$ to obtain the new update:
\begin{equation} \theta:=\theta+\alpha \sum_{i=1}^{n}\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right) \phi(x^{(i)}) \end{equation}
Similarly, the corresponding stochastic gradient descent update rule is
\begin{equation} \theta:=\theta+\alpha\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right) \phi(x^{(i)}) \end{equation}
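Update (3) is straightforward to implement once the features are materialized. The sketch below (our own helper names, under the assumption that `phi` maps a $d$-vector to a $p$-vector) runs the batch update by stacking the feature vectors into a design matrix; it is exactly this explicit materialization of $\phi(x^{(i)})$ that the kernel trick in the next subsection avoids.

```python
import numpy as np

def lms_with_features(X, y, phi, alpha=0.01, iters=1000):
    """Batch gradient descent for fitting theta^T phi(x), as in update (3).

    X: (n, d) array of input attributes, y: (n,) array of targets,
    phi: callable mapping a d-vector to a p-vector of features.
    """
    Phi = np.stack([phi(x) for x in X])      # (n, p) matrix of features
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        residuals = y - Phi @ theta          # y^(i) - theta^T phi(x^(i)) for all i
        theta = theta + alpha * Phi.T @ residuals
    return theta
```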

1.3. LMS with the kernel trick

The gradient descent update, or stochastic gradient update above, becomes computationally expensive when the features $\phi(x)$ are high-dimensional. For example, consider the direct extension of the feature map in equation (1) to high-dimensional input $x$: suppose $x \in \mathbb{R}^{d}$, and let $\phi(x)$ be the vector that contains all the monomials of $x$ with degree $\leq 3$
\begin{equation} \phi(x)=\left[\begin{array}{c} 1 \\ x_{1} \\ x_{2} \\ \vdots \\ x_{1}^{2} \\ x_{1} x_{2} \\ x_{1} x_{3} \\ \vdots \\ x_{2} x_{1} \\ \vdots \\ x_{1}^{3} \\ x_{1}^{2} x_{2} \\ \vdots \end{array}\right] . \end{equation}
The dimension of the features $\phi(x)$ is on the order of $d^{3}$.[1] This is a prohibitively long vector for computational purposes: when $d=1000$, each update requires computing and storing at least a $1000^{3}=10^{9}$ dimensional vector, which is $10^{6}$ times slower than the update rule for ordinary least squares updates (2).

It may appear at first that such $d^{3}$ runtime per update and memory usage are inevitable, because the vector $\theta$ itself is of dimension $p \approx d^{3}$, and we may need to update every entry of $\theta$ and store it. However, we will introduce the kernel trick, with which we will not need to store $\theta$ explicitly, and the runtime can be significantly improved.

For simplicity, we assume that we initialize the value $\theta=0$, and we focus on the iterative update (3). The main observation is that at any time, $\theta$ can be represented as a linear combination of the vectors $\phi(x^{(1)}), \ldots, \phi(x^{(n)})$. Indeed, we can show this inductively as follows. At initialization, $\theta=0=\sum_{i=1}^{n} 0 \cdot \phi(x^{(i)})$. Assume at some point, $\theta$ can be represented as
\begin{equation} \theta=\sum_{i=1}^{n} \beta_{i} \phi(x^{(i)}) \end{equation}
for some $\beta_{1}, \ldots, \beta_{n} \in \mathbb{R}$. Then we claim that in the next round, $\theta$ is still a linear combination of $\phi(x^{(1)}), \ldots, \phi(x^{(n)})$ because
\begin{align} \theta &:=\theta+\alpha \sum_{i=1}^{n}\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right) \phi(x^{(i)}) \nonumber\\ &=\sum_{i=1}^{n} \beta_{i} \phi(x^{(i)})+\alpha \sum_{i=1}^{n}\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right) \phi(x^{(i)})\nonumber\\ &=\sum_{i=1}^{n} \underbrace{\left(\beta_{i}+\alpha\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right)\right)}_{\text{new } \beta_{i}} \phi(x^{(i)}) \end{align}
You may realize that our general strategy is to implicitly represent the $p$-dimensional vector $\theta$ by a set of coefficients $\beta_{1}, \ldots, \beta_{n}$. Towards doing this, we derive the update rule of the coefficients $\beta_{1}, \ldots, \beta_{n}$. Using the equation above, we see that the new $\beta_{i}$ depends on the old one via
\begin{equation} \beta_{i}:=\beta_{i}+\alpha\left(y^{(i)}-\theta^{T} \phi(x^{(i)})\right) \end{equation}
Here we still have the old $\theta$ on the RHS of the equation. Replacing $\theta$ by $\theta=\sum_{j=1}^{n} \beta_{j} \phi(x^{(j)})$ gives
$$\forall i \in\{1, \ldots, n\}, \quad \beta_{i}:=\beta_{i}+\alpha\left(y^{(i)}-\sum_{j=1}^{n} \beta_{j} \phi(x^{(j)})^{T} \phi(x^{(i)})\right)$$
We often rewrite $\phi(x^{(j)})^{T} \phi(x^{(i)})$ as $\langle\phi(x^{(j)}), \phi(x^{(i)})\rangle$ to emphasize that it's the inner product of the two feature vectors. Viewing the $\beta_{i}$'s as the new representation of $\theta$, we have successfully translated the batch gradient descent algorithm into an algorithm that updates the values of $\beta$ iteratively. It may appear that at every iteration, we still need to compute the values of $\langle\phi(x^{(j)}), \phi(x^{(i)})\rangle$ for all pairs of $i, j$, each of which may take roughly $O(p)$ operations. However, two important properties come to the rescue:
  1. We can pre-compute the pairwise inner products $\langle\phi(x^{(j)}), \phi(x^{(i)})\rangle$ for all pairs of $i, j$ before the loop starts.
  2. For the feature map $\phi$ defined in (5) (or many other interesting feature maps), computing $\langle\phi(x^{(j)}), \phi(x^{(i)})\rangle$ can be efficient and does not necessarily require computing $\phi(x^{(i)})$ explicitly. This is because:
\begin{align} \langle\phi(x), \phi(z)\rangle &=1+\sum_{i=1}^{d} x_{i} z_{i}+\sum_{i, j \in\{1, \ldots, d\}} x_{i} x_{j} z_{i} z_{j}+\sum_{i, j, k \in\{1, \ldots, d\}} x_{i} x_{j} x_{k} z_{i} z_{j} z_{k}\nonumber \\ &=1+\sum_{i=1}^{d} x_{i} z_{i}+\left(\sum_{i=1}^{d} x_{i} z_{i}\right)^{2}+\left(\sum_{i=1}^{d} x_{i} z_{i}\right)^{3}\nonumber \\ &=1+\langle x, z\rangle+\langle x, z\rangle^{2}+\langle x, z\rangle^{3} \end{align}
Therefore, to compute $\langle\phi(x), \phi(z)\rangle$, we can first compute $\langle x, z\rangle$ in $O(d)$ time and then take a constant number of additional operations to compute $1+\langle x, z\rangle+\langle x, z\rangle^{2}+\langle x, z\rangle^{3}$.
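To illustrate property 2, here is a small sketch (our own helper names) that computes equation (9) in $O(d)$ time and checks it against an explicitly constructed $\phi$ for a small $d$; the explicit version enumerates ordered index tuples, matching the listing convention of (5).

```python
import numpy as np
from itertools import product

def cubic_kernel(x, z):
    """K(x, z) = 1 + <x,z> + <x,z>^2 + <x,z>^3 in O(d) time (equation (9))."""
    s = float(x @ z)
    return 1 + s + s**2 + s**3

def phi_explicit(x):
    """All monomials of degree <= 3 over ordered index tuples (for checking only)."""
    d = len(x)
    feats = [1.0]
    feats += [x[i] for i in range(d)]
    feats += [x[i] * x[j] for i, j in product(range(d), repeat=2)]
    feats += [x[i] * x[j] * x[k] for i, j, k in product(range(d), repeat=3)]
    return np.array(feats)

x, z = np.random.randn(4), np.random.randn(4)
assert np.isclose(cubic_kernel(x, z), phi_explicit(x) @ phi_explicit(z))
```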
As you will see, the inner products between the features $\langle\phi(x), \phi(z)\rangle$ are essential here. We define the Kernel corresponding to the feature map $\phi$ as a function mapping $\mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ satisfying:[2]
\begin{equation} K(x, z) \triangleq\langle\phi(x), \phi(z)\rangle \end{equation}
To wrap up the discussion, we write down the final algorithm as follows:

  1. Compute all the values $K(x^{(i)}, x^{(j)}) \triangleq\langle\phi(x^{(i)}), \phi(x^{(j)})\rangle$ using equation (9) for all $i, j \in\{1, \ldots, n\}$. Set $\beta:=0$.
  2. Loop: \begin{equation} \forall i \in\{1, \ldots, n\}, \quad \beta_{i}:=\beta_{i}+\alpha\left(y^{(i)}-\sum_{j=1}^{n} \beta_{j} K\left(x^{(i)}, x^{(j)}\right)\right) \end{equation} Or in vector notation, letting $K$ be the $n \times n$ matrix with $K_{ij}=K(x^{(i)}, x^{(j)})$, we have $$\beta:=\beta+\alpha(\vec{y}-K \beta)$$
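Putting the two steps together, here is a hedged sketch of the kernelized LMS procedure: step 1 precomputes the kernel matrix, step 2 runs the vectorized update $\beta := \beta + \alpha(\vec{y} - K\beta)$. The function name is our own.

```python
import numpy as np

def kernel_lms_train(X, y, kernel, alpha=0.01, iters=1000):
    """Kernelized LMS: precompute the kernel matrix, then run update (11).

    Returns the coefficients beta representing theta = sum_i beta_i phi(x^(i)).
    """
    n = len(X)
    # Step 1: precompute K_ij = K(x^(i), x^(j)) for all pairs i, j.
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)
    # Step 2: iterate beta := beta + alpha * (y - K beta).
    for _ in range(iters):
        beta = beta + alpha * (y - K @ beta)
    return beta
```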

With the algorithm above, we can update the representation $\beta$ of the vector $\theta$ efficiently with $O(n)$ time per update. Finally, we need to show that knowledge of the representation $\beta$ suffices to compute the prediction $\theta^{T} \phi(x)$. Indeed, we have
\begin{equation} \theta^{T} \phi(x)=\sum_{i=1}^{n} \beta_{i} \phi\left(x^{(i)}\right)^{T} \phi(x)=\sum_{i=1}^{n} \beta_{i} K\left(x^{(i)}, x\right) \end{equation}
You may realize that fundamentally all we need to know about the feature map $\phi(\cdot)$ is encapsulated in the corresponding kernel function $K(\cdot, \cdot)$. We will expand on this in the next section.
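Prediction then needs only $\beta$, the training inputs, and the kernel, as in equation (12); a short companion sketch to the trainer above (again with assumed function names):

```python
def kernel_lms_predict(X_train, beta, kernel, x):
    """Prediction theta^T phi(x) = sum_i beta_i K(x^(i), x), as in equation (12)."""
    return sum(b * kernel(xi, x) for b, xi in zip(beta, X_train))

# Example usage with the cubic kernel sketched earlier (assumed defined above):
# beta  = kernel_lms_train(X, y, cubic_kernel)
# y_hat = kernel_lms_predict(X, beta, cubic_kernel, x_new)
```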

1.4. Properties of kernels

In the last subsection, we started with an explicitly defined feature map $\phi$, which induces the kernel function $K(x, z) \triangleq\langle\phi(x), \phi(z)\rangle$. Then we saw that the kernel function is so intrinsic that, as long as it is defined, the whole training algorithm can be written entirely in the language of the kernel without referring to the feature map $\phi$, and so can the prediction on a test example $x$ (equation (12)).

Therefore, it is tempting to define other kernel functions $K(\cdot, \cdot)$ and run algorithm (11). Note that algorithm (11) does not need to access the feature map $\phi$ explicitly, and therefore we only need to ensure the existence of the feature map $\phi$; we do not necessarily need to be able to write $\phi$ down explicitly.

What kinds of functions $K(\cdot, \cdot)$ can correspond to some feature map $\phi$? In other words, can we tell if there is some feature mapping $\phi$ so that $K(x, z)=\phi(x)^{T} \phi(z)$ for all $x, z$?

If we can answer this question by giving a precise characterization of valid kernel functions, then we can completely change the interface of selecting feature maps $\phi$ to the interface of selecting kernel functions $K$. Concretely, we can pick a function $K$, verify that it satisfies the characterization (so that there exists a feature map $\phi$ that $K$ corresponds to), and then run update rule (11). The benefit here is that we don't have to be able to compute $\phi$ or write it down analytically; we only need to know its existence. We will answer this question at the end of this subsection after we go through several concrete examples of kernels.
Suppose $x, z \in \mathbb{R}^{d}$, and let's first consider the function $K(\cdot, \cdot)$ defined as:
$$K(x, z)=\left(x^{T} z\right)^{2}.$$
We can also write this as
$$\begin{aligned} K(x, z) &=\left(\sum_{i=1}^{d} x_{i} z_{i}\right)\left(\sum_{j=1}^{d} x_{j} z_{j}\right) \\ &=\sum_{i=1}^{d} \sum_{j=1}^{d} x_{i} x_{j} z_{i} z_{j} \\ &=\sum_{i, j=1}^{d}\left(x_{i} x_{j}\right)\left(z_{i} z_{j}\right) \end{aligned}$$
Thus, we see that $K(x, z)=\langle\phi(x), \phi(z)\rangle$ is the kernel function that corresponds to the feature mapping $\phi$ given (shown here for the case of $d=3$) by
$$\phi(x)=\left[\begin{array}{l} x_{1} x_{1} \\ x_{1} x_{2} \\ x_{1} x_{3} \\ x_{2} x_{1} \\ x_{2} x_{2} \\ x_{2} x_{3} \\ x_{3} x_{1} \\ x_{3} x_{2} \\ x_{3} x_{3} \end{array}\right].$$
Revisiting the computational efficiency perspective of kernels, note that whereas calculating the high-dimensional $\phi(x)$ requires $O(d^{2})$ time, computing $K(x, z)$ takes only $O(d)$ time, linear in the dimension of the input attributes.
For another related example, also consider $K(\cdot, \cdot)$ defined by
$$\begin{aligned} K(x, z) &=\left(x^{T} z+c\right)^{2} \\ &=\sum_{i, j=1}^{d}\left(x_{i} x_{j}\right)\left(z_{i} z_{j}\right)+\sum_{i=1}^{d}\left(\sqrt{2 c} x_{i}\right)\left(\sqrt{2 c} z_{i}\right)+c^{2}. \end{aligned}$$
(Check this yourself.) This function $K$ is a kernel function that corresponds to the feature mapping (again shown for $d=3$)
$$\phi(x)=\left[\begin{array}{c} x_{1} x_{1} \\ x_{1} x_{2} \\ x_{1} x_{3} \\ x_{2} x_{1} \\ x_{2} x_{2} \\ x_{2} x_{3} \\ x_{3} x_{1} \\ x_{3} x_{2} \\ x_{3} x_{3} \\ \sqrt{2 c} x_{1} \\ \sqrt{2 c} x_{2} \\ \sqrt{2 c} x_{3} \\ c \end{array}\right],$$
and the parameter $c$ controls the relative weighting between the $x_{i}$ (first order) and the $x_{i} x_{j}$ (second order) terms.
More broadly, the kernel $K(x, z)=\left(x^{T} z+c\right)^{k}$ corresponds to a feature mapping to a $\binom{d+k}{k}$-dimensional feature space, consisting of all monomials of the form $x_{i_{1}} x_{i_{2}} \cdots x_{i_{k}}$ that are up to order $k$. However, despite working in this $O(d^{k})$-dimensional space, computing $K(x, z)$ still takes only $O(d)$ time, and hence we never need to explicitly represent feature vectors in this very high dimensional feature space.
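For reference, a one-line sketch of this polynomial kernel; the point is that the cost stays $O(d)$ no matter how large the implicit $\binom{d+k}{k}$-dimensional feature space is.

```python
def polynomial_kernel(x, z, c=1.0, k=3):
    """K(x, z) = (x^T z + c)^k, computed in O(d) time regardless of k."""
    return (float(x @ z) + c) ** k
```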
Kernels as similarity metrics. Now, let's talk about a slightly different view of kernels. Intuitively (and there are things wrong with this intuition, but never mind), if $\phi(x)$ and $\phi(z)$ are close together, then we might expect $K(x, z)=\phi(x)^{T} \phi(z)$ to be large. Conversely, if $\phi(x)$ and $\phi(z)$ are far apart, say nearly orthogonal to each other, then $K(x, z)=\phi(x)^{T} \phi(z)$ will be small. So, we can think of $K(x, z)$ as a measure of how similar $\phi(x)$ and $\phi(z)$ are, or of how similar $x$ and $z$ are.

Given this intuition, suppose that for some learning problem that you're working on, you've come up with some function $K(x, z)$ that you think might be a reasonable measure of how similar $x$ and $z$ are. For instance, perhaps you chose
$$K(x, z)=\exp \left(-\frac{\|x-z\|^{2}}{2 \sigma^{2}}\right).$$
This is a reasonable measure of $x$ and $z$'s similarity: it is close to $1$ when $x$ and $z$ are close, and near $0$ when $x$ and $z$ are far apart. Does there exist a feature map $\phi$ such that the kernel $K$ defined above satisfies $K(x, z)=\phi(x)^{T} \phi(z)$? In this particular example, the answer is yes. This kernel is called the Gaussian kernel, and it corresponds to an infinite dimensional feature mapping $\phi$. We will give a precise characterization of what properties a function $K$ needs to satisfy so that it can be a valid kernel function that corresponds to some feature map $\phi$.
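A minimal sketch of the Gaussian kernel as written above; sigma is a free bandwidth parameter.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - z||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))
```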
Necessary conditions for valid kernels. Suppose for now that $K$ is indeed a valid kernel corresponding to some feature mapping $\phi$, and we will first see what properties it satisfies. Now, consider some finite set of $n$ points (not necessarily the training set) $\{x^{(1)}, \ldots, x^{(n)}\}$, and let a square, $n$-by-$n$ matrix $K$ be defined so that its $(i, j)$-entry is given by $K_{ij}=K(x^{(i)}, x^{(j)})$. This matrix is called the kernel matrix. Note that we've overloaded the notation and used $K$ to denote both the kernel function $K(x, z)$ and the kernel matrix $K$, due to their obvious close relationship.

Now, if $K$ is a valid kernel, then $K_{ij}=K(x^{(i)}, x^{(j)})=\phi(x^{(i)})^{T} \phi(x^{(j)})=\phi(x^{(j)})^{T} \phi(x^{(i)})=K(x^{(j)}, x^{(i)})=K_{ji}$, and hence $K$ must be symmetric. Moreover, letting $\phi_{k}(x)$ denote the $k$-th coordinate of the vector $\phi(x)$, we find that for any vector $z$, we have
$$\begin{aligned} z^{T} K z &=\sum_{i} \sum_{j} z_{i} K_{ij} z_{j} \\ &=\sum_{i} \sum_{j} z_{i} \phi(x^{(i)})^{T} \phi(x^{(j)}) z_{j} \\ &=\sum_{i} \sum_{j} z_{i} \sum_{k} \phi_{k}(x^{(i)}) \phi_{k}(x^{(j)}) z_{j} \\ &=\sum_{k} \sum_{i} \sum_{j} z_{i} \phi_{k}(x^{(i)}) \phi_{k}(x^{(j)}) z_{j} \\ &=\sum_{k}\left(\sum_{i} z_{i} \phi_{k}(x^{(i)})\right)^{2} \\ & \geq 0 . \end{aligned}$$
The second-to-last step uses the fact that $\sum_{i, j} a_{i} a_{j}=\left(\sum_{i} a_{i}\right)^{2}$ for $a_{i}=z_{i} \phi_{k}(x^{(i)})$. Since $z$ was arbitrary, this shows that $K$ is positive semi-definite ($K \geq 0$).

Hence, we've shown that if $K$ is a valid kernel (i.e., if it corresponds to some feature mapping $\phi$), then the corresponding kernel matrix $K \in \mathbb{R}^{n \times n}$ is symmetric positive semidefinite.

Sufficient conditions for valid kernels. More generally, the condition above turns out to be not only a necessary, but also a sufficient, condition for $K$ to be a valid kernel (also called a Mercer kernel). The following result is due to Mercer.[3]

Theorem (Mercer). Let $K: \mathbb{R}^{d} \times \mathbb{R}^{d} \mapsto \mathbb{R}$ be given. Then for $K$ to be a valid (Mercer) kernel, it is necessary and sufficient that for any $\{x^{(1)}, \ldots, x^{(n)}\}$ ($n<\infty$), the corresponding kernel matrix is symmetric positive semi-definite.

Given a function $K$, apart from trying to find a feature mapping $\phi$ that corresponds to it, this theorem therefore gives another way of testing if it is a valid kernel. You'll also have a chance to play with these ideas more in problem set 2.
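As a practical (necessarily partial) version of this test, one can build the kernel matrix on some finite set of points and check symmetry and positive semidefiniteness numerically; passing on one point set does not prove validity, but failing disproves it. A sketch with our own helper name:

```python
import numpy as np

def looks_like_valid_kernel(kernel, points, tol=1e-8):
    """Empirical Mercer check on a finite point set: build the kernel matrix
    and test that it is symmetric with no significantly negative eigenvalues."""
    n = len(points)
    K = np.array([[kernel(points[i], points[j]) for j in range(n)] for i in range(n)])
    symmetric = np.allclose(K, K.T)
    psd = np.all(np.linalg.eigvalsh(K) >= -tol)
    return symmetric and psd
```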
In class, we also briefly talked about a couple of other examples of kernels. For instance, consider the digit recognition problem, in which given an image (16x16 pixels) of a handwritten digit (0-9), we have to figure out which digit it was. Using either a simple polynomial kernel K ( x , z ) = ( x T z ) k K ( x , z ) = x T z k K(x,z)=(x^(T)z)^(k)K(x, z)=\left(x^{T} z\right)^{k} or the Gaussian kernel, SVMs were able to obtain extremely good performance on this problem. This was particularly surprising since the input attributes x x xx were just 256-dimensional vectors of the image pixel intensity values, and the system had no prior knowledge about vision, or even about which pixels are adjacent to which other ones. Another example that we briefly talked about in lecture was that if the objects x x xx that we are trying to classify are strings (say, x x xx is a list of amino acids, which strung together form a protein), then it seems hard to construct a reasonable, "small" set of features for most learning algorithms, especially if different strings have different lengths. However, consider letting ϕ ( x ) ϕ ( x ) phi(x)\phi(x) be a feature vector that counts the number of occurrences of each length- k k kk substring in x x xx. If we're considering strings of English letters, then there are 26 k 26 k 26^(k)26^{k} such strings. Hence, ϕ ( x ) ϕ ( x ) phi(x)\phi(x) is a 26 k 26 k 26^(k)26^{k} dimensional vector; even for moderate values of k k kk, this is probably too big for us to efficiently work with. (e.g., 26 4 460000 26 4 460000 26^(4)~~46000026^{4} \approx 460000.) However, using (dynamic programming-ish) string matching algorithms, it is possible to efficiently compute K ( x , z ) = ϕ ( x ) T ϕ ( z ) K ( x , z ) = ϕ ( x ) T ϕ ( z ) K(x,z)=phi(x)^(T)phi(z)K(x, z)=\phi(x)^{T} \phi(z), so that we can now implicitly work in this 26 k 26 k 26^(k)26^{k}-dimensional feature space, but without ever explicitly computing feature vectors in this space.
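For the substring example, the simplest variant of such a kernel (often called a spectrum kernel; the notes allude to more sophisticated dynamic-programming versions) can be computed directly from the substrings that actually occur, never touching the $26^{k}$-dimensional vector. A hedged sketch:

```python
from collections import Counter

def substring_kernel(x: str, z: str, k: int = 4) -> int:
    """Spectrum kernel: <phi(x), phi(z)> where phi counts each length-k substring.
    Computed from the substrings actually present, not the full 26^k vector."""
    counts_x = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    counts_z = Counter(z[i:i + k] for i in range(len(z) - k + 1))
    return sum(c * counts_z[s] for s, c in counts_x.items())
```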
Application of kernel methods: We've seen the application of kernels to linear regression. In the next part, we will introduce support vector machines, to which kernels can be directly applied, so we won't dwell much longer on kernels here. In fact, the idea of kernels has significantly broader applicability than linear regression and SVMs. Specifically, if you have any learning algorithm that you can write in terms of only inner products $\langle x, z\rangle$ between input attribute vectors, then by replacing these with $K(x, z)$ where $K$ is a kernel, you can "magically" allow your algorithm to work efficiently in the high dimensional feature space corresponding to $K$. For instance, this kernel trick can be applied with the perceptron to derive a kernel perceptron algorithm. Many of the algorithms that we'll see later in this class will also be amenable to this method, which has come to be known as the "kernel trick."

Support Vector Machines

This set of notes presents the Support Vector Machine (SVM) learning algorithm. SVMs are among the best (and many believe are indeed the best) "off-the-shelf" supervised learning algorithms. To tell the SVM story, we'll need to first talk about margins and the idea of separating data with a large "gap." Next, we'll talk about the optimal margin classifier, which will lead us into a digression on Lagrange duality. We'll also see kernels, which give a way to apply SVMs efficiently in very high dimensional (such as infinite-dimensional) feature spaces, and finally, we'll close off the story with the SMO algorithm, which gives an efficient implementation of SVMs.

2. Margins: Intuition

We'll start our story on SVMs by talking about margins. This section will give the intuitions about margins and about the "confidence" of our predictions; these ideas will be made formal in Section 4.
Consider logistic regression, where the probability $p(y=1 \mid x ; \theta)$ is modeled by $h_{\theta}(x)=g\left(\theta^{T} x\right)$. We then predict "$1$" on an input $x$ if and only if $h_{\theta}(x) \geq 0.5$, or equivalently, if and only if $\theta^{T} x \geq 0$. Consider a positive training example ($y=1$). The larger $\theta^{T} x$ is, the larger also is $h_{\theta}(x)=p(y=1 \mid x ; \theta)$, and thus also the higher our degree of "confidence" that the label is $1$. Thus, informally we can think of our prediction as being very confident that $y=1$ if $\theta^{T} x \gg 0$. Similarly, we think of logistic regression as confidently predicting $y=0$ if $\theta^{T} x \ll 0$. Given a training set, again informally it seems that we'd have found a good fit to the training data if we can find $\theta$ so that $\theta^{T} x^{(i)} \gg 0$ whenever $y^{(i)}=1$, and $\theta^{T} x^{(i)} \ll 0$ whenever $y^{(i)}=0$, since this would reflect a very confident (and correct) set of classifications for all the training examples. This seems to be a nice goal to aim for, and we'll soon formalize this idea using the notion of functional margins.
For a different type of intuition, consider the following figure, in which x's represent positive training examples, o's denote negative training examples, a decision boundary (this is the line given by the equation $\theta^{T} x=0$, and is also called the separating hyperplane) is also shown, and three points have also been labeled A, B and C.
Notice that the point A is very far from the decision boundary. If we are asked to make a prediction for the value of $y$ at A, it seems we should be quite confident that $y=1$ there. Conversely, the point C is very close to the decision boundary, and while it's on the side of the decision boundary on which we would predict $y=1$, it seems likely that just a small change to the decision boundary could easily have caused our prediction to be $y=0$. Hence, we're much more confident about our prediction at A than at C. The point B lies in between these two cases, and more broadly, we see that if a point is far from the separating hyperplane, then we may be significantly more confident in our predictions. Again, informally we think it would be nice if, given a training set, we manage to find a decision boundary that allows us to make all correct and confident (meaning far from the decision boundary) predictions on the training examples. We'll formalize this later using the notion of geometric margins.

3. Notation

To make our discussion of SVMs easier, we'll first need to introduce a new notation for talking about classification. We will be considering a linear classifier for a binary classification problem with labels $y$ and features $x$. From now on, we'll use $y \in\{-1,1\}$ (instead of $\{0,1\}$) to denote the class labels. Also, rather than parameterizing our linear classifier with the vector $\theta$, we will use parameters $w, b$, and write our classifier as
$$h_{w, b}(x)=g\left(w^{T} x+b\right).$$
Here, $g(z)=1$ if $z \geq 0$, and $g(z)=-1$ otherwise. This "$w, b$" notation allows us to explicitly treat the intercept term $b$ separately from the other parameters. (We also drop the convention we had previously of letting $x_{0}=1$ be an extra coordinate in the input feature vector.) Thus, $b$ takes the role of what was previously $\theta_{0}$, and $w$ takes the role of $\left[\theta_{1} \ldots \theta_{d}\right]^{T}$.
Note also that, from our definition of $g$ above, our classifier will directly predict either $1$ or $-1$ (cf. the perceptron algorithm), without first going through the intermediate step of estimating $p(y=1)$ (which is what logistic regression does).
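In code, this classifier is just a sign check on $w^{T} x + b$; a minimal sketch:

```python
import numpy as np

def h(w, b, x):
    """Linear classifier h_{w,b}(x) = g(w^T x + b), with g(z) = 1 if z >= 0 else -1."""
    return 1 if float(w @ x) + b >= 0 else -1
```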

4. Functional and geometric margins

Let's formalize the notions of the functional and geometric margins. Given a training example $(x^{(i)}, y^{(i)})$, we define the functional margin of $(w, b)$ with respect to the training example as
$$\hat{\gamma}^{(i)}=y^{(i)}(w^{T} x^{(i)}+b).$$
Note that if $y^{(i)}=1$, then for the functional margin to be large (i.e., for our prediction to be confident and correct), we need $w^{T} x^{(i)}+b$ to be a large positive number. Conversely, if $y^{(i)}=-1$, then for the functional margin to be large, we need $w^{T} x^{(i)}+b$ to be a large negative number. Moreover, if $y^{(i)}\left(w^{T} x^{(i)}+b\right)>0$, then our prediction on this example is correct. (Check this yourself.) Hence, a large functional margin represents a confident and correct prediction.
For a linear classifier with the choice of $g$ given above (taking values in $\{-1,1\}$), there's one property of the functional margin that makes it not a very good measure of confidence, however. Given our choice of $g$, we note that if we replace $w$ with $2w$ and $b$ with $2b$, then since $g\left(w^{T} x+b\right)=g\left(2 w^{T} x+2 b\right)$, this would not change $h_{w, b}(x)$ at all. I.e., $g$, and hence also $h_{w, b}(x)$, depends only on the sign, but not on the magnitude, of $w^{T} x+b$. However, replacing $(w, b)$ with $(2w, 2b)$ also results in multiplying our functional margin by a factor of 2. Thus, it seems that by exploiting our freedom to scale $w$ and $b$, we can make the functional margin arbitrarily large without really changing anything meaningful. Intuitively, it might therefore make sense to impose some sort of normalization condition such as $\|w\|_{2}=1$; i.e., we might replace $(w, b)$ with $\left(w /\|w\|_{2}, b /\|w\|_{2}\right)$, and instead consider the functional margin of $\left(w /\|w\|_{2}, b /\|w\|_{2}\right)$. We'll come back to this later.
Given a training set $S=\left\{\left(x^{(i)}, y^{(i)}\right) ; i=1, \ldots, n\right\}$, we also define the functional margin of $(w, b)$ with respect to $S$ as the smallest of the functional margins of the individual training examples. Denoted by $\hat{\gamma}$, this can therefore be written:
$$\hat{\gamma}=\min _{i=1, \ldots, n} \hat{\gamma}^{(i)}.$$
Next, let's talk about geometric margins. Consider the picture below:
The decision boundary corresponding to $(w, b)$ is shown, along with the vector $w$. Note that $w$ is orthogonal (at $90^{\circ}$) to the separating hyperplane. (You should convince yourself that this must be the case.) Consider the point at A, which represents the input $x^{(i)}$ of some training example with label $y^{(i)}=1$. Its distance to the decision boundary, $\gamma^{(i)}$, is given by the line segment AB.
How can we find the value of $\gamma^{(i)}$? Well, $w /\|w\|$ is a unit-length vector pointing in the same direction as $w$. Since A represents $x^{(i)}$, we therefore find that the point B is given by $x^{(i)}-\gamma^{(i)} \cdot w /\|w\|$. But this point lies on the decision boundary, and all points $x$ on the decision boundary satisfy the equation $w^{T} x+b=0$. Hence,
$$w^{T}\left(x^{(i)}-\gamma^{(i)} \frac{w}{\|w\|}\right)+b=0.$$
Solving for $\gamma^{(i)}$ yields
$$\gamma^{(i)}=\frac{w^{T} x^{(i)}+b}{\|w\|}=\left(\frac{w}{\|w\|}\right)^{T} x^{(i)}+\frac{b}{\|w\|}.$$
This was worked out for the case of a positive training example at A in the figure, where being on the "positive" side of the decision boundary is good. More generally, we define the geometric margin of $(w, b)$ with respect to a training example $(x^{(i)}, y^{(i)})$ to be
$$\gamma^{(i)}=y^{(i)}\left(\left(\frac{w}{\|w\|}\right)^{T} x^{(i)}+\frac{b}{\|w\|}\right).$$
Note that if $\|w\|=1$, then the functional margin equals the geometric margin; this gives us a way of relating these two different notions of margin. Also, the geometric margin is invariant to rescaling of the parameters; i.e., if we replace $w$ with $2w$ and $b$ with $2b$, then the geometric margin does not change. This will in fact come in handy later. Specifically, because of this invariance to the scaling of the parameters, when trying to fit $w$ and $b$ to training data, we can impose an arbitrary scaling constraint on $w$ without changing anything important; for instance, we can demand that $\|w\|=1$, or $\left|w_{1}\right|=5$, or $\left|w_{1}+b\right|+\left|w_{2}\right|=2$, and any of these can be satisfied simply by rescaling $w$ and $b$.
Finally, given a training set $S=\left\{\left(x^{(i)}, y^{(i)}\right) ; i=1, \ldots, n\right\}$, we also define the geometric margin of $(w, b)$ with respect to $S$ to be the smallest of the geometric margins on the individual training examples:
$$\gamma=\min _{i=1, \ldots, n} \gamma^{(i)}.$$
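Both margins are straightforward to compute for a given $(w, b)$ and training set; a short sketch (our own function names), using the fact that the geometric margin is the functional margin divided by $\|w\|$:

```python
import numpy as np

def functional_margin(w, b, X, y):
    """Smallest functional margin over the training set: min_i y^(i) (w^T x^(i) + b)."""
    return np.min(y * (X @ w + b))

def geometric_margin(w, b, X, y):
    """Smallest geometric margin: the functional margin divided by ||w||."""
    return functional_margin(w, b, X, y) / np.linalg.norm(w)
```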

5. The optimal margin classifier

Given a training set, it seems from our previous discussion that a natural desideratum is to try to find a decision boundary that maximizes the (geometric) margin, since this would reflect a very confident set of predictions on the training set and a good "fit" to the training data. Specifically, this will result in a classifier that separates the positive and the negative training examples with a "gap" (geometric margin).
For now, we will assume that we are given a training set that is linearly separable; i.e., that it is possible to separate the positive and negative examples using some separating hyperplane. How will we find the one that achieves the maximum geometric margin? We can pose the following optimization problem:
$$\begin{aligned} \max _{\gamma, w, b} \quad & \gamma \\ \text {s.t.}\quad & y^{(i)}\left(w^{T} x^{(i)}+b\right) \geq \gamma, \quad i=1, \ldots, n \\ &\|w\|=1 . \end{aligned}$$
I.e., we want to maximize $\gamma$, subject to each training example having functional margin at least $\gamma$. The $\|w\|=1$ constraint moreover ensures that the functional margin equals the geometric margin, so we are also guaranteed that all the geometric margins are at least $\gamma$. Thus, solving this problem will result in $(w, b)$ with the largest possible geometric margin with respect to the training set.
If we could solve the optimization problem above, we'd be done. But the "$\|w\|=1$" constraint is a nasty (non-convex) one, and this problem certainly isn't in any format that we can plug into standard optimization software to solve. So, let's try transforming the problem into a nicer one. Consider:
$$\begin{aligned} \max _{\hat{\gamma}, w, b}\quad & \frac{\hat{\gamma}}{\|w\|} \\ \text {s.t.}\quad & y^{(i)}(w^{T} x^{(i)}+b) \geq \hat{\gamma}, \quad i=1, \ldots, n \end{aligned}$$
Here, we're going to maximize $\hat{\gamma} /\|w\|$, subject to the functional margins all being at least $\hat{\gamma}$. Since the geometric and functional margins are related by $\gamma=\hat{\gamma} /\|w\|$, this will give us the answer we want. Moreover, we've gotten rid of the constraint $\|w\|=1$ that we didn't like. The downside is that we now have a nasty (again, non-convex) objective function $\hat{\gamma}/\|w\|$; and, we still don't have any off-the-shelf software that can solve this form of an optimization problem.
Let's keep going. Recall our earlier discussion that we can add an arbitrary scaling constraint on $w$ and $b$ without changing anything. This is the key idea we'll use now. We will introduce the scaling constraint that the functional margin of $(w, b)$ with respect to the training set must be $1$:
$$\hat{\gamma}=1.$$
Since multiplying $w$ and $b$ by some constant results in the functional margin being multiplied by that same constant, this is indeed a scaling constraint, and can be satisfied by rescaling $w, b$. Plugging this into our problem above, and noting that maximizing $\hat{\gamma} /\|w\|=1 /\|w\|$ is the same thing as minimizing $\|w\|^{2}$, we now have the following optimization problem:
$$\begin{aligned} \min _{w, b} \quad& \frac{1}{2}\|w\|^{2} \\ \text {s.t.} \quad& y^{(i)}(w^{T} x^{(i)}+b) \geq 1, \quad i=1, \ldots, n \end{aligned}$$
We've now transformed the problem into a form that can be efficiently solved. The above is an optimization problem with a convex quadratic objective and only linear constraints. Its solution gives us the optimal margin classifier. This optimization problem can be solved using commercial quadratic programming (QP) code.[4]
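For instance, here is a hedged sketch using the open-source cvxpy modeling package (one of many ways to call a QP solver; not something these notes prescribe), assuming the data is linearly separable:

```python
import cvxpy as cp
import numpy as np

def optimal_margin_classifier(X, y):
    """Solve the primal QP: minimize (1/2)||w||^2 s.t. y^(i)(w^T x^(i) + b) >= 1.

    X: (n, d) array of inputs, y: (n,) array of labels in {-1, +1},
    assumed linearly separable."""
    n, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    constraints = [cp.multiply(y, X @ w + b) >= 1]
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints).solve()
    return w.value, b.value
```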
While we could call the problem solved here, what we will instead do is make a digression to talk about Lagrange duality. This will lead us to our optimization problem's dual form, which will play a key role in allowing us to use kernels to get optimal margin classifiers to work efficiently in very high dimensional spaces. The dual form will also allow us to derive an efficient algorithm for solving the above optimization problem that will typically do much better than generic QP software.

6. Lagrange duality (optional reading)

Let's temporarily put aside SVMs and maximum margin classifiers, and talk about solving constrained optimization problems.
Consider a problem of the following form:
$$\begin{array}{rl} \min _{w} & f(w) \\ \text {s.t.} & h_{i}(w)=0, \quad i=1, \ldots, l. \end{array}$$
Some of you may recall how the method of Lagrange multipliers can be used to solve it. (Don't worry if you haven't seen it before.) In this method, we define the Lagrangian to be
$$\mathcal{L}(w, \beta)=f(w)+\sum_{i=1}^{l} \beta_{i} h_{i}(w)$$
Here, the $\beta_{i}$'s are called the Lagrange multipliers. We would then find and set $\mathcal{L}$'s partial derivatives to zero:
$$\frac{\partial \mathcal{L}}{\partial w_{i}}=0 ; \quad \frac{\partial \mathcal{L}}{\partial \beta_{i}}=0,$$
and solve for $w$ and $\beta$.
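As a tiny worked example of this recipe (the objective and constraint below are our own toy choices, not from the notes), one can let a computer algebra system set the partial derivatives of $\mathcal{L}$ to zero and solve:

```python
import sympy as sp

# Toy problem: minimize f(w) = w1^2 + w2^2 subject to h(w) = w1 + w2 - 1 = 0.
w1, w2, beta = sp.symbols("w1 w2 beta", real=True)
L = w1**2 + w2**2 + beta * (w1 + w2 - 1)          # the Lagrangian

# Set all partial derivatives of L to zero and solve for w and beta.
stationary = sp.solve([sp.diff(L, v) for v in (w1, w2, beta)], [w1, w2, beta])
print(stationary)   # -> w1 = w2 = 1/2, beta = -1
```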
In this section, we will generalize this to constrained optimization problems in which we may have inequality as well as equality constraints. Due to time constraints, we won't really be able to do the theory of Lagrange duality justice in this class[5] but we will give the main ideas and results, which we will then apply to our optimal margin classifier's optimization problem.
Consider the following, which we'll call the primal optimization problem:
$$\begin{array}{rl} \min _{w} & f(w) \\ \text {s.t.} & g_{i}(w) \leq 0, \quad i=1, \ldots, k \\ & h_{i}(w)=0, \quad i=1, \ldots, l. \end{array}$$
To solve it, we start by defining the generalized Lagrangian
$$\mathcal{L}(w, \alpha, \beta)=f(w)+\sum_{i=1}^{k} \alpha_{i} g_{i}(w)+\sum_{i=1}^{l} \beta_{i} h_{i}(w).$$
Here, the $\alpha_{i}$'s and $\beta_{i}$'s are the Lagrange multipliers. Consider the quantity
$$\theta_{\mathcal{P}}(w)=\max _{\alpha, \beta: \alpha_{i} \geq 0} \mathcal{L}(w, \alpha, \beta).$$
Here, the " P P P\mathcal{P}" violates any of the "primal." Let some w w ww be given. If w w ww violates any of the primal constraints (i.e., if either g i ( w ) > 0 g i ( w ) > 0 g_(i)(w) > 0g_{i}(w)>0 or h i ( w ) 0 h i ( w ) 0 h_(i)(w)!=0h_{i}(w) \neq 0 for some i i ii ), then you should be able to verify that
\begin{align} \theta_{\mathcal{P}}(w) &=\max _{\alpha, \beta: \alpha_{i} \geq 0} f(w)+\sum_{i=1}^{k} \alpha_{i} g_{i}(w)+\sum_{i=1}^{l} \beta_{i} h_{i}(w) \\ &=\infty . \end{align}
Conversely, if the constraints are indeed satisfied for a particular value of $w$, then $\theta_{\mathcal{P}}(w)=f(w)$. Hence,
$$\theta_{\mathcal{P}}(w)= \begin{cases}f(w) & \text {if } w \text { satisfies the primal constraints} \\ \infty & \text {otherwise.}\end{cases}$$
Thus, $\theta_{\mathcal{P}}$ takes the same value as the objective in our problem for all values of $w$ that satisfy the primal constraints, and is positive infinity if the constraints are violated. Hence, if we consider the minimization problem
$$\min _{w} \theta_{\mathcal{P}}(w)=\min _{w} \max _{\alpha, \beta: \alpha_{i} \geq 0} \mathcal{L}(w, \alpha, \beta),$$
we see that it is the same problem as (i.e., it has the same solutions as) our original, primal problem. For later use, we also define the optimal value of the objective to be $p^{*}=\min _{w} \theta_{\mathcal{P}}(w)$; we call this the value of the primal problem.
Now, let's look at a slightly different problem. We define
$$\theta_{\mathcal{D}}(\alpha, \beta)=\min _{w} \mathcal{L}(w, \alpha, \beta).$$
Here, the "$\mathcal{D}$" subscript stands for "dual." Note also that whereas in the definition of $\theta_{\mathcal{P}}$ we were optimizing (maximizing) with respect to $\alpha, \beta$, here we are minimizing with respect to $w$.
We can now pose the dual optimization problem:
$$\max _{\alpha, \beta: \alpha_{i} \geq 0} \theta_{\mathcal{D}}(\alpha, \beta)=\max _{\alpha, \beta: \alpha_{i} \geq 0} \min _{w} \mathcal{L}(w, \alpha, \beta).$$
This is exactly the same as our primal problem shown above, except that the order of the "max" and the "min" are now exchanged. We also define the optimal value of the dual problem's objective to be $d^{*}=\max _{\alpha, \beta: \alpha_{i} \geq 0} \theta_{\mathcal{D}}(\alpha, \beta)$.
How are the primal and the dual problems related? It can easily be shown that
d = max α , β : α i 0 min w L ( w , α , β ) min w max α , β : α i 0 L ( w , α , β ) = p . d = max α , β : α i 0 min w L ( w , α , β ) min w max α , β : α i 0 L ( w , α , β ) = p . d^(**)=max_(alpha,beta:alpha_(i) >= 0)min_(w)L(w,alpha,beta) <= min_(w)max_(alpha,beta:alpha_(i) >= 0)L(w,alpha,beta)=p^(**).d^{*}=\max _{\alpha, \beta: \alpha_{i} \geq 0} \min _{w} \mathcal{L}(w, \alpha, \beta) \leq \min _{w} \max _{\alpha, \beta: \alpha_{i} \geq 0} \mathcal{L}(w, \alpha, \beta)=p^{*} .
(You should convince yourself of this; it follows from the "max min" of a function always being less than or equal to the "min max.") However, under certain conditions, we will have
d = p , d = p , d^(**)=p^(**),d^{*}=p^{*},
so that we can solve the dual problem in lieu of the primal problem. Let's see what these conditions are.
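Before looking at those conditions, here is a quick numeric sanity check of the weak duality fact above. The grids and the toy function below are made up purely for illustration (it is not an SVM Lagrangian); the point is only that the "max min" computed on any grid never exceeds the "min max".

```python
import numpy as np

# Weak duality on a grid: max over a of (min over w) <= min over w of (max over a).
# L(w, a) below is an arbitrary toy function, not an SVM Lagrangian.
w_grid = np.linspace(0.0, 1.0, 101)
a_grid = np.linspace(0.0, 1.0, 101)   # plays the role of alpha >= 0
W, A = np.meshgrid(w_grid, a_grid, indexing="ij")
L = (W - A) ** 2                      # L[i, j] = L(w_i, a_j)

d_star = np.max(np.min(L, axis=0))    # "max min" (dual-style value)
p_star = np.min(np.max(L, axis=1))    # "min max" (primal-style value)
print(d_star, p_star)                 # 0.0 <= 0.25: the inequality can be strict
assert d_star <= p_star + 1e-12
```

For this toy function the gap is strict (0 versus 0.25); the conditions below are exactly what rule such a gap out.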
Suppose f f ff and the g i g i g_(i)g_{i}'s are convex,[6] and the h i h i h_(i)h_{i}'s are affine.[7] Suppose further that the constraints g i g i g_(i)g_{i} are (strictly) feasible; this means that there exists some w w ww so that g i ( w ) < 0 g i ( w ) < 0 g_(i)(w) < 0g_{i}(w)<0 for all i i ii.
Under our above assumptions, there must exist w , α , β w , α , β w^(**),alpha^(**),beta^(**)w^{*}, \alpha^{*}, \beta^{*} so that w w w^(**)w^{*} is the solution to the primal problem, α , β α , β alpha^(**),beta^(**)\alpha^{*}, \beta^{*} are the solution to the dual problem, and moreover p = d = L ( w , α , β ) p = d = L w , α , β p^(**)=d^(**)=L(w^(**),alpha^(**),beta^(**))p^{*}=d^{*}=\mathcal{L}\left(w^{*}, \alpha^{*}, \beta^{*}\right). Moreover, w , α w , α w^(**),alpha^(**)w^{*}, \alpha^{*} and β β beta^(**)\beta^{*} satisfy the Karush-Kuhn-Tucker (KKT) conditions, which are as follows:
\begin{align} \frac{\partial}{\partial w_{i}} \mathcal{L}\left(w^{*}, \alpha^{*}, \beta^{*}\right) &=0, \quad i=1, \ldots, d \\ \frac{\partial}{\partial \beta_{i}} \mathcal{L}\left(w^{*}, \alpha^{*}, \beta^{*}\right) &=0, \quad i=1, \ldots, l \\ \alpha_{i}^{*} g_{i}\left(w^{*}\right) &=0, \quad i=1, \ldots, k \\ g_{i}\left(w^{*}\right) & \leq 0, \quad i=1, \ldots, k \\ \alpha_{i}^{*} & \geq 0, \quad i=1, \ldots, k \end{align}
Moreover, if some w , α , β w , α , β w^(**),alpha^(**),beta^(**)w^{*}, \alpha^{*}, \beta^{*} satisfy the KKT conditions, then they are also a solution to the primal and dual problems.
We draw attention to Equation (17), which is called the KKT dual complementarity condition. Specifically, it implies that if α i > 0 α i > 0 alpha_(i)^(**) > 0\alpha_{i}^{*}>0, then g i ( w ) = 0 g i w = 0 g_(i)(w^(**))=0g_{i}\left(w^{*}\right)=0. (I.e., the " g i ( w ) 0 g i ( w ) 0 g_(i)(w) <= 0g_{i}(w) \leq 0 " constraint is active, meaning it holds with equality rather than with inequality.) Later on, this will be key for showing that the SVM has only a small number of "support vectors"; the KKT dual complementarity condition will also give us our convergence test when we talk about the SMO algorithm.

7. Optimal margin classifiers

Note: The equivalence of optimization problem (20) and the optimization problem (24), and the relationship between the primal and dual variables in equation (22), are the most important take-home messages of this section.
Previously, we posed the following (primal) optimization problem for finding the optimal margin classifier:
(20) min w , b 1 2 w 2 s.t. y ( i ) ( w T x ( i ) + b ) 1 , i = 1 , , n (20) min w , b 1 2 w 2 s.t. y ( i ) w T x ( i ) + b 1 , i = 1 , , n {:(20)min_(w,b)quad(1)/(2)||w||^(2),["s.t."quady^((i))(w^(T)x^((i))+b) >= 1","quad i=1","dots","n]:}\begin{align} \min _{w, b}\quad &\frac{1}{2}\|w\|^{2}\\ \text {s.t.}\quad & y^{(i)}\left(w^{T} x^{(i)}+b\right) \geq 1, \quad i=1, \ldots, n \nonumber \end{align}
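Problem (20) is a convex quadratic program, so in principle it can be handed directly to generic QP software (see the earlier footnote on QP solvers). Purely as an illustration (the solver choice, the toy data, and the variable names below are ours, not part of the notes), here is a minimal sketch using a general-purpose constrained optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# A tiny, linearly separable toy set: x in R^2, y in {-1, +1}.
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],    # positive examples
              [0.0, 0.0], [0.5, 1.0], [1.0, 0.2]])   # negative examples
y = np.array([1, 1, 1, -1, -1, -1], dtype=float)

def objective(theta):                    # theta = [w_1, w_2, b]
    w = theta[:2]
    return 0.5 * np.dot(w, w)            # (1/2)||w||^2; b is not penalized

def margin_constraints(theta):           # y_i (w^T x_i + b) - 1 >= 0 for every i
    w, b = theta[:2], theta[2]
    return y * (X @ w + b) - 1.0

res = minimize(objective, x0=np.array([1.0, 1.0, -3.0]),   # a feasible starting point
               method="SLSQP",
               constraints=[{"type": "ineq", "fun": margin_constraints}])
w_star, b_star = res.x[:2], res.x[2]
print(w_star, b_star, y * (X @ w_star + b_star))    # all functional margins >= 1
```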
We can write the constraints as
g i ( w ) = y ( i ) ( w T x ( i ) + b ) + 1 0 . g i ( w ) = y ( i ) w T x ( i ) + b + 1 0 . g_(i)(w)=-y^((i))(w^(T)x^((i))+b)+1 <= 0.g_{i}(w)=-y^{(i)}\left(w^{T} x^{(i)}+b\right)+1 \leq 0 .
We have one such constraint for each training example. Note that from the KKT dual complementarity condition, we will have α i > 0 α i > 0 alpha_(i) > 0\alpha_{i}>0 only for the training examples that have functional margin exactly equal to one (i.e., the ones corresponding to constraints that hold with equality, g i ( w ) = 0 g i ( w ) = 0 g_(i)(w)=0g_{i}(w)=0). Consider the figure below, in which a maximum margin separating hyperplane is shown by the solid line.
The points with the smallest margins are exactly the ones closest to the decision boundary; here, these are the three points (one negative and two positive examples) that lie on the dashed lines parallel to the decision boundary. Thus, only three of the α i α i alpha_(i)\alpha_{i}'s (namely, the ones corresponding to these three training examples) will be non-zero at the optimal solution to our optimization problem. These three points are called the support vectors in this problem. The fact that the number of support vectors can be much smaller than the size of the training set will be useful later.
Let's move on. Looking ahead, as we develop the dual form of the problem, one key idea to watch out for is that we'll try to write our algorithm in terms of only the inner product x ( i ) , x ( j ) x ( i ) , x ( j ) (:x^((i)),x^((j)):)\left\langle x^{(i)}, x^{(j)}\right\rangle (think of this as ( x ( i ) ) T x ( j ) x ( i ) T x ( j ) (x^((i)))^(T)x^((j))\left(x^{(i)}\right)^{T} x^{(j)} ) between points in the input feature space. The fact that we can express our algorithm in terms of these inner products will be key when we apply the kernel trick.
When we construct the Lagrangian for our optimization problem we have:
(21) L ( w , b , α ) = 1 2 w 2 i = 1 n α i [ y ( i ) ( w T x ( i ) + b ) 1 ] . (21) L ( w , b , α ) = 1 2 w 2 i = 1 n α i y ( i ) w T x ( i ) + b 1 . {:(21)L(w","b","alpha)=(1)/(2)||w||^(2)-sum_(i=1)^(n)alpha_(i)[y^((i))(w^(T)x^((i))+b)-1].:}\begin{equation} \mathcal{L}(w, b, \alpha)=\frac{1}{2}\|w\|^{2}-\sum_{i=1}^{n} \alpha_{i}\left[y^{(i)}\left(w^{T} x^{(i)}+b\right)-1\right] . \end{equation}
Note that there're only " α i α i alpha_(i)\alpha_{i}" but no " β i β i beta_(i)\beta_{i}" Lagrange multipliers, since the problem has only inequality constraints.
Let's find the dual form of the problem. To do so, we need to first minimize L ( w , b , α ) L ( w , b , α ) L(w,b,alpha)\mathcal{L}(w, b, \alpha) with respect to w w ww and b b bb (for fixed α α alpha\alpha), to get θ D θ D theta_(D)\theta_{\mathcal{D}}, which we'll do by setting the derivatives of L L L\mathcal{L} with respect to w w ww and b b bb to zero. We have:
w L ( w , b , α ) = w i = 1 n α i y ( i ) x ( i ) = 0 w L ( w , b , α ) = w i = 1 n α i y ( i ) x ( i ) = 0 grad_(w)L(w,b,alpha)=w-sum_(i=1)^(n)alpha_(i)y^((i))x^((i))=0\nabla_{w} \mathcal{L}(w, b, \alpha)=w-\sum_{i=1}^{n} \alpha_{i} y^{(i)} x^{(i)}=0
This implies that
(22) w = i = 1 n α i y ( i ) x ( i ) (22) w = i = 1 n α i y ( i ) x ( i ) {:(22)w=sum_(i=1)^(n)alpha_(i)y^((i))x^((i)):}\begin{equation} w=\sum_{i=1}^{n} \alpha_{i} y^{(i)} x^{(i)} \end{equation}
As for the derivative with respect to b b bb, we obtain
(23) b L ( w , b , α ) = i = 1 n α i y ( i ) = 0 . (23) b L ( w , b , α ) = i = 1 n α i y ( i ) = 0 . {:(23)(del)/(del b)L(w","b","alpha)=sum_(i=1)^(n)alpha_(i)y^((i))=0.:}\begin{equation} \frac{\partial}{\partial b} \mathcal{L}(w, b, \alpha)=\sum_{i=1}^{n} \alpha_{i} y^{(i)}=0 . \end{equation}
If we take the definition of w w ww in Equation (22) and plug that back into the Lagrangian (Equation 21), and simplify, we get
L ( w , b , α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j ( x ( i ) ) T x ( j ) b i = 1 n α i y ( i ) . L ( w , b , α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) T x ( j ) b i = 1 n α i y ( i ) . L(w,b,alpha)=sum_(i=1)^(n)alpha_(i)-(1)/(2)sum_(i,j=1)^(n)y^((i))y^((j))alpha_(i)alpha_(j)(x^((i)))^(T)x^((j))-bsum_(i=1)^(n)alpha_(i)y^((i)).\mathcal{L}(w, b, \alpha)=\sum_{i=1}^{n} \alpha_{i}-\frac{1}{2} \sum_{i, j=1}^{n} y^{(i)} y^{(j)} \alpha_{i} \alpha_{j}\left(x^{(i)}\right)^{T} x^{(j)}-b \sum_{i=1}^{n} \alpha_{i} y^{(i)}.
But from Equation (23), the last term must be zero, so we obtain
L ( w , b , α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j ( x ( i ) ) T x ( j ) . L ( w , b , α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) T x ( j ) . L(w,b,alpha)=sum_(i=1)^(n)alpha_(i)-(1)/(2)sum_(i,j=1)^(n)y^((i))y^((j))alpha_(i)alpha_(j)(x^((i)))^(T)x^((j)).\mathcal{L}(w, b, \alpha)=\sum_{i=1}^{n} \alpha_{i}-\frac{1}{2} \sum_{i, j=1}^{n} y^{(i)} y^{(j)} \alpha_{i} \alpha_{j}\left(x^{(i)}\right)^{T} x^{(j)}.
Recall that we got to the equation above by minimizing L L L\mathcal{L} with respect to w w ww and b b bb. Putting this together with the constraints α i 0 α i 0 alpha_(i) >= 0\alpha_{i} \geq 0 (that we always had) and the constraint (23), we obtain the following dual optimization problem:
(24) max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) s.t. α i 0 , i = 1 , , n i = 1 n α i y ( i ) = 0 (24) max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) s.t. α i 0 , i = 1 , , n i = 1 n α i y ( i ) = 0 {:(24)max_(alpha)quadW(alpha)=sum_(i=1)^(n)alpha_(i)-(1)/(2)sum_(i,j=1)^(n)y^((i))y^((j))alpha_(i)alpha_(j)(:x^((i)),x^((j)):),["s.t."quadalpha_(i) >= 0","quad i=1","dots","n],[sum_(i=1)^(n)alpha_(i)y^((i))=0]:}\begin{align} \max _{\alpha}\quad & W(\alpha)=\sum_{i=1}^{n} \alpha_{i}-\frac{1}{2} \sum_{i, j=1}^{n} y^{(i)} y^{(j)} \alpha_{i} \alpha_{j}\left\langle x^{(i)}, x^{(j)}\right\rangle \\ \text {s.t.} \quad& \alpha_{i} \geq 0, \quad i=1, \ldots, n\nonumber \\ & \sum_{i=1}^{n} \alpha_{i} y^{(i)}=0\nonumber \end{align}
You should also be able to verify that the conditions required for p = d p = d p^(**)=d^(**)p^{*}=d^{*} and the KKT conditions (Equations 15-19) to hold are indeed satisfied in our optimization problem. Hence, we can solve the dual in lieu of solving the primal problem. Specifically, in the dual problem above, we have a maximization problem in which the parameters are the α i α i alpha_(i)\alpha_{i} 's. We'll talk later about the specific algorithm that we're going to use to solve the dual problem, but if we are indeed able to solve it (i.e., find the α α alpha\alpha's that maximize W ( α ) W ( α ) W(alpha)W(\alpha) subject to the constraints), then we can use Equation (22) to go back and find the optimal w w ww's as a function of the α α alpha\alpha's. Having found w w w^(**)w^{*}, by considering the primal problem, it is also straightforward to find the optimal value for the intercept term b b bb as
(25) b = max i : y ( i ) = 1 w T x ( i ) + min i : y ( i ) = 1 w T x ( i ) 2 . (25) b = max i : y ( i ) = 1 w T x ( i ) + min i : y ( i ) = 1 w T x ( i ) 2 . {:(25)b^(**)=-(max_(i:y^((i))=-1)w^(**T)x^((i))+min_(i:y^((i))=1)w^(**T)x^((i)))/(2).:}\begin{equation} b^{*}=-\frac{\max _{i: y^{(i)}=-1} w^{* T} x^{(i)}+\min _{i: y^{(i)}=1} w^{* T} x^{(i)}}{2} . \end{equation}
(Check for yourself that this is correct.)
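In code, recovering w* and b* from a dual-optimal α is a direct transcription of Equations (22) and (25). The sketch below assumes α has already been obtained from some solver for (24) (how is not shown), and the function name is ours:

```python
import numpy as np

def recover_w_b(alpha, X, y):
    """Given dual-optimal alpha for data X (n x d) and labels y in {-1, +1}."""
    w = (alpha * y) @ X            # Equation (22): w* = sum_i alpha_i y_i x_i
    scores = X @ w
    # Equation (25): b* = -( max over negatives + min over positives ) / 2.
    b = -(np.max(scores[y == -1]) + np.min(scores[y == 1])) / 2.0
    return w, b
```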
Before moving on, let's also take a more careful look at Equation (22), which gives the optimal value of w w ww in terms of (the optimal value of) α α alpha\alpha. Suppose we've fit our model's parameters to a training set, and now wish to make a prediction at a new input point x x xx. We would then calculate w T x + b w T x + b w^(T)x+bw^{T} x+b, and predict y = 1 y = 1 y=1y=1 if and only if this quantity is bigger than zero. But using (22), this quantity can also be written:
(26) w T x + b = ( i = 1 n α i y ( i ) x ( i ) ) T x + b (27) = i = 1 n α i y ( i ) x ( i ) , x + b . (26) w T x + b = i = 1 n α i y ( i ) x ( i ) T x + b (27) = i = 1 n α i y ( i ) x ( i ) , x + b . {:(26)w^(T)x+b=(sum_(i=1)^(n)alpha_(i)y^((i))x^((i)))^(T)x+b,(27)=sum_(i=1)^(n)alpha_(i)y^((i))(:x^((i)),x:)+b.:}\begin{align} w^{T} x+b &=\left(\sum_{i=1}^{n} \alpha_{i} y^{(i)} x^{(i)}\right)^{T} x+b \\ &=\sum_{i=1}^{n} \alpha_{i} y^{(i)}\left\langle x^{(i)}, x\right\rangle+b . \end{align}
Hence, if we've found the α i α i alpha_(i)\alpha_{i}'s, in order to make a prediction, we have to calculate a quantity that depends only on the inner product between x x xx and the points in the training set. Moreover, we saw earlier that the α i α i alpha_(i)\alpha_{i}'s will all be zero except for the support vectors. Thus, many of the terms in the sum above will be zero, and we really need to find only the inner products between x x xx and the support vectors (of which there is often only a small number) in order to calculate (27) and make our prediction.
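A minimal sketch of this prediction rule, written so that it only ever evaluates inner products between x and the stored training points and skips everything that is not a support vector (the names and the threshold for "non-zero" are illustrative):

```python
import numpy as np

def predict(x, alpha, X, y, b, tol=1e-8):
    # Equation (27): w^T x + b = sum_i alpha_i y_i <x_i, x> + b.
    sv = alpha > tol                                    # only support vectors contribute
    score = np.sum(alpha[sv] * y[sv] * (X[sv] @ x)) + b
    return 1 if score > 0 else -1
```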
By examining the dual form of the optimization problem, we gained significant insight into the structure of the problem, and were also able to write the entire algorithm in terms of only inner products between input feature vectors. In the next section, we will exploit this property to apply kernels to our classification problem. The resulting algorithm, the support vector machine, will be able to learn efficiently in very high dimensional spaces.

8. Regularization and the non-separable case (optional reading)

The derivation of the SVM as presented so far assumed that the data is linearly separable. While mapping data to a high dimensional feature space via ϕ ϕ phi\phi does generally increase the likelihood that the data is separable, we can't guarantee that it always will be so. Also, in some cases it is not clear that finding a separating hyperplane is exactly what we'd want to do, since that might be susceptible to outliers. For instance, the left figure below shows an optimal margin classifier, and when a single outlier is added in the upper-left region (right figure), it causes the decision boundary to make a dramatic swing, and the resulting classifier has a much smaller margin.
To make the algorithm work for non-linearly separable datasets as well as be less sensitive to outliers, we reformulate our optimization (using 1 1 ℓ_(1)\ell_{1} regularization) as follows:
\begin{aligned} \min _{w, b, \xi} \quad& \frac{1}{2}\|w\|^{2}+C \sum_{i=1}^{n} \xi_{i} \\ \text {s.t.}\quad & y^{(i)}\left(w^{T} x^{(i)}+b\right) \geq 1-\xi_{i}, \quad i=1, \ldots, n \\ & \xi_{i} \geq 0, \quad i=1, \ldots, n . \end{aligned}
Thus, examples are now permitted to have (functional) margin less than 1, and if an example has functional margin 1 ξ i 1 ξ i 1-xi_(i)1-\xi_{i} (with ξ i > 0 ξ i > 0 xi_(i) > 0\xi_{i}>0), we would pay a cost of the objective function being increased by C ξ i C ξ i Cxi_(i)C \xi_{i}. The parameter C C CC controls the relative weighting between the twin goals of making w 2 w 2 ||w||^(2)\|w\|^{2} small (which we saw earlier makes the margin large) and of ensuring that most examples have functional margin at least 1.
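To see the role of C concretely, one can hand this formulation to an off-the-shelf solver. The snippet below uses scikit-learn's SVC (which solves this C-parameterized soft-margin problem with a linear kernel) on made-up data; it is only an illustration, not part of the notes:

```python
import numpy as np
from sklearn.svm import SVC

# Two noisy, slightly overlapping clusters (made up for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[2.0, 2.0], scale=0.8, size=(20, 2)),
               rng.normal(loc=[0.0, 0.0], scale=0.8, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    width = 2.0 / np.linalg.norm(clf.coef_[0])   # distance between the two margin hyperplanes
    print(f"C={C}: margin width {width:.3f}, {clf.support_.size} support vectors")
# Small C: slack is cheap, so the margin stays wide and many examples become support vectors.
# Large C: violations are expensive, so the solution behaves closer to the separable case.
```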
As before, we can form the Lagrangian:
\mathcal{L}(w, b, \xi, \alpha, r)=\frac{1}{2} w^{T} w+C \sum_{i=1}^{n} \xi_{i}-\sum_{i=1}^{n} \alpha_{i}\left[y^{(i)}\left(w^{T} x^{(i)}+b\right)-1+\xi_{i}\right]-\sum_{i=1}^{n} r_{i} \xi_{i} .
Here, the α i α i alpha_(i)\alpha_{i}'s and r i r i r_(i)r_{i}'s are our Lagrange multipliers (constrained to be 0 0 >= 0\geq 0). We won't go through the derivation of the dual again in detail, but after setting the derivatives with respect to w w ww and b b bb to zero as before, substituting them back in, and simplifying, we obtain the following dual form of the problem:
max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) s.t. 0 α i C , i = 1 , , n i = 1 n α i y ( i ) = 0 , max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) s.t. 0 α i C , i = 1 , , n i = 1 n α i y ( i ) = 0 , {:[max_(alpha)quadW(alpha)=sum_(i=1)^(n)alpha_(i)-(1)/(2)sum_(i,j=1)^(n)y^((i))y^((j))alpha_(i)alpha_(j)(:x^((i)),x^((j)):)],["s.t."quad0 <= alpha_(i) <= C","quad i=1","dots","n],[sum_(i=1)^(n)alpha_(i)y^((i))=0","]:}\begin{aligned} \max _{\alpha}\quad & W(\alpha)=\sum_{i=1}^{n} \alpha_{i}-\frac{1}{2} \sum_{i, j=1}^{n} y^{(i)} y^{(j)} \alpha_{i} \alpha_{j}\left\langle x^{(i)}, x^{(j)}\right\rangle \\ \text {s.t.} \quad& 0 \leq \alpha_{i} \leq C, \quad i=1, \ldots, n \\ & \sum_{i=1}^{n} \alpha_{i} y^{(i)}=0, \end{aligned}
As before, we also have that w w ww can be expressed in terms of the α i α i alpha_(i)\alpha_{i}'s as given in Equation (22), so that after solving the dual problem, we can continue to use Equation (27) to make our predictions. Note that, somewhat surprisingly, in adding 1 1 ℓ_(1)\ell_{1} regularization, the only change to the dual problem is that what was originally a constraint that 0 α i 0 α i 0 <= alpha_(i)0 \leq \alpha_{i} has now become 0 0 0 <=0 \leq α i C α i C alpha_(i) <= C\alpha_{i} \leq C. The calculation for b b b^(**)b^{*} also has to be modified (Equation 25 is no longer valid); see the comments in the next section/Platt's paper.
Also, the KKT dual-complementarity conditions (which in the next section will be useful for testing for the convergence of the SMO algorithm) are:
(28) α i = 0 y ( i ) ( w T x ( i ) + b ) 1 (29) α i = C y ( i ) ( w T x ( i ) + b ) 1 (30) 0 < α i < C y ( i ) ( w T x ( i ) + b ) = 1 . (28) α i = 0 y ( i ) w T x ( i ) + b 1 (29) α i = C y ( i ) w T x ( i ) + b 1 (30) 0 < α i < C y ( i ) w T x ( i ) + b = 1 . {:(28)alpha_(i)=0=>y^((i))(w^(T)x^((i))+b) >= 1,(29)alpha_(i)=C=>y^((i))(w^(T)x^((i))+b) <= 1,(30)0 < alpha_(i) < C=>y^((i))(w^(T)x^((i))+b)=1.:}\begin{align} \alpha_{i}=0 & \Rightarrow y^{(i)}\left(w^{T} x^{(i)}+b\right) \geq 1 \\ \alpha_{i}=C & \Rightarrow y^{(i)}\left(w^{T} x^{(i)}+b\right) \leq 1 \\ 0<\alpha_{i}<C & \Rightarrow y^{(i)}\left(w^{T} x^{(i)}+b\right)=1 . \end{align}
Now, all that remains is to give an algorithm for actually solving the dual problem, which we will do in the next section.

9. The SMO algorithm (optional reading)

The SMO (sequential minimal optimization) algorithm, due to John Platt, gives an efficient way of solving the dual problem arising from the derivation of the SVM. Partly to motivate the SMO algorithm, and partly because it's interesting in its own right, let's first take another digression to talk about the coordinate ascent algorithm.

9.1. Coordinate ascent

Consider trying to solve the unconstrained optimization problem
max α W ( α 1 , α 2 , , α n ) . max α W α 1 , α 2 , , α n . max_(alpha)W(alpha_(1),alpha_(2),dots,alpha_(n)).\max _{\alpha} W\left(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}\right) .
Here, we think of W W WW as just some function of the parameters α i α i alpha_(i)\alpha_{i}'s, and for now ignore any relationship between this problem and SVMs. We've already seen two optimization algorithms, gradient ascent and Newton's method. The new algorithm we're going to consider here is called coordinate ascent:
Loop until convergence: { { {\{
quad\quad For i = 1 , . . . , n , { i = 1 , . . . , n , { i=1,...,n,{i = 1,...,n, \{
α i := arg max α ^ i W ( α 1 , , α i 1 , α ^ i , α i + 1 , , α n ) α i := arg max α ^ i W α 1 , , α i 1 , α ^ i , α i + 1 , , α n quadquadalpha_(i):=arg max_( hat(alpha)_(i))W(alpha_(1),dots,alpha_(i-1), hat(alpha)_(i),alpha_(i+1),dots,alpha_(n))\quad\quad\alpha_{i}:=\arg \max _{\hat{\alpha}_{i}} W\left(\alpha_{1}, \ldots, \alpha_{i-1}, \hat{\alpha}_{i}, \alpha_{i+1}, \ldots, \alpha_{n}\right)
} } quad}\quad \}
} } }\}
Thus, in the innermost loop of this algorithm, we will hold all the variables except for some α i α i alpha_(i)\alpha_{i} fixed, and reoptimize W W WW with respect to just the parameter α i α i alpha_(i)\alpha_{i}. In the version of this method presented here, the inner-loop reoptimizes the variables in order α 1 , α 2 , , α n , α 1 , α 2 , α 1 , α 2 , , α n , α 1 , α 2 , alpha_(1),alpha_(2),dots,alpha_(n),alpha_(1),alpha_(2),dots\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}, \alpha_{1}, \alpha_{2}, \ldots. (A more sophisticated version might choose other orderings; for instance, we may choose the next variable to update according to which one we expect to allow us to make the largest increase in W ( α ) W ( α ) W(alpha)W(\alpha).)
When the function W W WW happens to be of such a form that the "arg max" in the inner loop can be performed efficiently, then coordinate ascent can be a fairly efficient algorithm. Here's a picture of coordinate ascent in action:
The ellipses in the figure are the contours of a quadratic function that we want to optimize. Coordinate ascent was initialized at ( 2 , 2 ) ( 2 , 2 ) (2,-2)(2,-2), and also plotted in the figure is the path that it took on its way to the global maximum. Notice that on each step, coordinate ascent takes a step that's parallel to one of the axes, since only one variable is being optimized at a time.
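As a concrete sketch, here is coordinate ascent on a made-up concave quadratic, where each inner arg max has a closed form (the quadratic and the starting point are chosen only to mimic the picture described above):

```python
import numpy as np

# Maximize W(a) = b^T a - 0.5 * a^T A a coordinate-by-coordinate (A positive definite).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

a = np.array([2.0, -2.0])              # initialization, as in the figure
for _ in range(20):                    # "loop until convergence"
    for i in range(len(a)):
        # Hold a_j (j != i) fixed and solve dW/da_i = b_i - sum_j A_ij a_j = 0 for a_i.
        a[i] = (b[i] - A[i] @ a + A[i, i] * a[i]) / A[i, i]

print(a, np.linalg.solve(A, b))        # coordinate ascent vs. the exact maximizer A^{-1} b
```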

9.2. SMO

We close off the discussion of SVMs by sketching the derivation of the SMO algorithm. Some details will be left to the homework, and for others you may refer to the paper excerpt handed out in class.
Here's the (dual) optimization problem that we want to solve:
(31) max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) . (32) s.t. 0 α i C , i = 1 , , n (33) i = 1 n α i y ( i ) = 0. (31) max α W ( α ) = i = 1 n α i 1 2 i , j = 1 n y ( i ) y ( j ) α i α j x ( i ) , x ( j ) . (32) s.t. 0 α i C , i = 1 , , n (33) i = 1 n α i y ( i ) = 0. {:(31)max_(alpha)quadW(alpha)=sum_(i=1)^(n)alpha_(i)-(1)/(2)sum_(i,j=1)^(n)y^((i))y^((j))alpha_(i)alpha_(j)(:x^((i))","x^((j)):).,(32)"s.t."quad0 <= alpha_(i) <= C","quad i=1","dots","n,(33)sum_(i=1)^(n)alpha_(i)y^((i))=0.:}\begin{align} \max _{\alpha}\quad & W(\alpha)=\sum_{i=1}^{n} \alpha_{i}-\frac{1}{2} \sum_{i, j=1}^{n} y^{(i)} y^{(j)} \alpha_{i} \alpha_{j}\langle x^{(i)}, x^{(j)}\rangle. \\ \text {s.t.} \quad& 0 \leq \alpha_{i} \leq C, \quad i=1, \ldots, n \\ & \sum_{i=1}^{n} \alpha_{i} y^{(i)}=0. \end{align}
Let's say we have a set of α i α i alpha_(i)\alpha_{i}'s that satisfy the constraints (32-33). Now, suppose we want to hold α 2 , , α n α 2 , , α n alpha_(2),dots,alpha_(n)\alpha_{2}, \ldots, \alpha_{n} fixed, and take a coordinate ascent step and reoptimize the objective with respect to α 1 α 1 alpha_(1)\alpha_{1}. Can we make any progress? The answer is no, because the constraint (33) ensures that
α 1 y ( 1 ) = i = 2 n α i y ( i ) . α 1 y ( 1 ) = i = 2 n α i y ( i ) . alpha_(1)y^((1))=-sum_(i=2)^(n)alpha_(i)y^((i)).\alpha_{1} y^{(1)}=-\sum_{i=2}^{n} \alpha_{i} y^{(i)}.
Or, by multiplying both sides by y ( 1 ) y ( 1 ) y^((1))y^{(1)}, we equivalently have
α 1 = y ( 1 ) i = 2 n α i y ( i ) . α 1 = y ( 1 ) i = 2 n α i y ( i ) . alpha_(1)=-y^((1))sum_(i=2)^(n)alpha_(i)y^((i)).\alpha_{1}=-y^{(1)} \sum_{i=2}^{n} \alpha_{i} y^{(i)}.
(This step used the fact that y ( 1 ) { 1 , 1 } y ( 1 ) { 1 , 1 } y^((1))in{-1,1}y^{(1)} \in\{-1,1\}, and hence ( y ( 1 ) ) 2 = 1 y ( 1 ) 2 = 1 (y^((1)))^(2)=1\left(y^{(1)}\right)^{2}=1.) Hence, α 1 α 1 alpha_(1)\alpha_{1} is exactly determined by the other α i α i alpha_(i)\alpha_{i}'s, and if we were to hold α 2 , , α n α 2 , , α n alpha_(2),dots,alpha_(n)\alpha_{2}, \ldots, \alpha_{n} fixed, then we can't make any change to α 1 α 1 alpha_(1)\alpha_{1} without violating the constraint (33) in the optimization problem.
Thus, if we want to update some subset of the α i α i alpha_(i)\alpha_{i}'s, we must update at least two of them simultaneously in order to keep satisfying the constraints. This motivates the SMO algorithm, which simply does the following:
Repeat till convergence { { {\{
  1. Select some pair α i α i alpha_(i)\alpha_{i} and α j α j alpha_(j)\alpha_{j} to update next (using a heuristic that tries to pick the two that will allow us to make the biggest progress towards the global maximum).
  2. Reoptimize W ( α ) W ( α ) W(alpha)W(\alpha) with respect to α i α i alpha_(i)\alpha_{i} and α j α j alpha_(j)\alpha_{j}, while holding all the other α k α k alpha_(k)\alpha_{k}'s ( k i , j ) ( k i , j ) (k!=i,j)(k \neq i, j) fixed.
} } }\}
To test for convergence of this algorithm, we can check whether the KKT conditions (Equations 28-30) are satisfied to within some tol. Here, tol is the convergence tolerance parameter, and is typically set to around 0.01 0.01 0.010.01 to 0.001 0.001 0.0010.001. (See the paper and pseudocode for details.)
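As a sketch, such a tolerance-based test might simply count how many training examples violate (28)-(30) by more than tol; the function name is ours, and w is reconstructed via Equation (22):

```python
import numpy as np

def kkt_violations(alpha, X, y, b, C, tol=1e-3):
    """Number of examples violating the KKT conditions (28)-(30) by more than tol."""
    w = (alpha * y) @ X                 # Equation (22)
    m = y * (X @ w + b)                 # functional margins y_i (w^T x_i + b)
    viol = (alpha < tol) & (m < 1 - tol)                               # violates (28)
    viol |= (alpha > C - tol) & (m > 1 + tol)                          # violates (29)
    viol |= (alpha > tol) & (alpha < C - tol) & (np.abs(m - 1) > tol)  # violates (30)
    return int(np.count_nonzero(viol))

# SMO-style stopping rule: declare convergence once kkt_violations(...) == 0.
```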
The key reason that SMO is an efficient algorithm is that the update to α i , α j α i , α j alpha_(i),alpha_(j)\alpha_{i}, \alpha_{j} can be computed very efficiently. Let's now briefly sketch the main ideas for deriving the efficient update.
Let's say we currently have some setting of the α i α i alpha_(i)\alpha_{i}'s that satisfy the constraints (32-33), and suppose we've decided to hold α 3 , , α n α 3 , , α n alpha_(3),dots,alpha_(n)\alpha_{3}, \ldots, \alpha_{n} fixed, and want to reoptimize W ( α 1 , α 2 , , α n ) W α 1 , α 2 , , α n W(alpha_(1),alpha_(2),dots,alpha_(n))W\left(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}\right) with respect to α 1 α 1 alpha_(1)\alpha_{1} and α 2 α 2 alpha_(2)\alpha_{2} (subject to the constraints). From (33), we require that
α 1 y ( 1 ) + α 2 y ( 2 ) = i = 3 n α i y ( i ) . α 1 y ( 1 ) + α 2 y ( 2 ) = i = 3 n α i y ( i ) . alpha_(1)y^((1))+alpha_(2)y^((2))=-sum_(i=3)^(n)alpha_(i)y^((i)).\alpha_{1} y^{(1)}+\alpha_{2} y^{(2)}=-\sum_{i=3}^{n} \alpha_{i} y^{(i)} .
Since the right hand side is fixed (as we've fixed α 3 , α n α 3 , α n alpha_(3),dotsalpha_(n)\alpha_{3}, \ldots \alpha_{n} ), we can just let it be denoted by some constant ζ ζ zeta\zeta:
(34) α 1 y ( 1 ) + α 2 y ( 2 ) = ζ . (34) α 1 y ( 1 ) + α 2 y ( 2 ) = ζ . {:(34)alpha_(1)y^((1))+alpha_(2)y^((2))=zeta.:}\begin{equation} \alpha_{1} y^{(1)}+\alpha_{2} y^{(2)}=\zeta . \end{equation}
We can thus picture the constraints on α 1 α 1 alpha_(1)\alpha_{1} and α 2 α 2 alpha_(2)\alpha_{2} as follows:
From the constraints (32), we know that α 1 α 1 alpha_(1)\alpha_{1} and α 2 α 2 alpha_(2)\alpha_{2} must lie within the box [ 0 , C ] × [ 0 , C ] [ 0 , C ] × [ 0 , C ] [0,C]xx[0,C][0, C] \times[0, C] shown. Also plotted is the line α 1 y ( 1 ) + α 2 y ( 2 ) = ζ α 1 y ( 1 ) + α 2 y ( 2 ) = ζ alpha_(1)y^((1))+alpha_(2)y^((2))=zeta\alpha_{1} y^{(1)}+\alpha_{2} y^{(2)}=\zeta, on which we know α 1 α 1 alpha_(1)\alpha_{1} and α 2 α 2 alpha_(2)\alpha_{2} must lie. Note also that, from these constraints, we know L α 2 H L α 2 H L <= alpha_(2) <= HL \leq \alpha_{2} \leq H; otherwise, ( α 1 , α 2 ) α 1 , α 2 (alpha_(1),alpha_(2))\left(\alpha_{1}, \alpha_{2}\right) can't simultaneously satisfy both the box and the straight line constraint. In this example, L = 0 L = 0 L=0L=0. But depending on what the line α 1 y ( 1 ) + α 2 y ( 2 ) = ζ α 1 y ( 1 ) + α 2 y ( 2 ) = ζ alpha_(1)y^((1))+alpha_(2)y^((2))=zeta\alpha_{1} y^{(1)}+\alpha_{2} y^{(2)}=\zeta looks like, this won't always necessarily be the case; but more generally, there will be some lower-bound L L LL and some upper-bound H H HH on the permissible values for α 2 α 2 alpha_(2)\alpha_{2} that will ensure that α 1 , α 2 α 1 , α 2 alpha_(1),alpha_(2)\alpha_{1}, \alpha_{2} lie within the box [ 0 , C ] × [ 0 , C ] [ 0 , C ] × [ 0 , C ] [0,C]xx[0,C][0, C] \times[0, C].
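Concretely, L and H come from a short case analysis on whether y^(1) and y^(2) are equal (this is the standard case analysis from Platt's paper; the notes above only assert that such bounds exist). A sketch:

```python
def box_bounds(alpha1, alpha2, y1, y2, C):
    # Bounds L, H on the new alpha_2 so that (alpha_1, alpha_2) stays both on the
    # line alpha_1 y1 + alpha_2 y2 = zeta and inside the box [0, C] x [0, C].
    if y1 != y2:                       # line of slope +1: alpha_1 - alpha_2 is fixed
        L = max(0.0, alpha2 - alpha1)
        H = min(C, C + alpha2 - alpha1)
    else:                              # line of slope -1: alpha_1 + alpha_2 is fixed
        L = max(0.0, alpha1 + alpha2 - C)
        H = min(C, alpha1 + alpha2)
    return L, H
```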
Using Equation (34), we can also write α 1 α 1 alpha_(1)\alpha_{1} as a function of α 2 α 2 alpha_(2)\alpha_{2}:
α 1 = ( ζ α 2 y ( 2 ) ) y ( 1 ) . α 1 = ( ζ α 2 y ( 2 ) ) y ( 1 ) . alpha_(1)=(zeta-alpha_(2)y^((2)))y^((1)).\alpha_{1}=(\zeta-\alpha_{2} y^{(2)}) y^{(1)} .
(Check this derivation yourself; we again used the fact that y ( 1 ) { 1 , 1 } y ( 1 ) { 1 , 1 } y^((1))in{-1,1}y^{(1)} \in\{-1,1\} so that ( y ( 1 ) ) 2 = 1 ( y ( 1 ) ) 2 = 1 (y^((1)))^(2)=1(y^{(1)})^{2}=1.) Hence, the objective W ( α ) W ( α ) W(alpha)W(\alpha) can be written
W ( α 1 , α 2 , , α n ) = W ( ( ζ α 2 y ( 2 ) ) y ( 1 ) , α 2 , , α n ) . W ( α 1 , α 2 , , α n ) = W ( ( ζ α 2 y ( 2 ) ) y ( 1 ) , α 2 , , α n ) . W(alpha_(1),alpha_(2),dots,alpha_(n))=W((zeta-alpha_(2)y^((2)))y^((1)),alpha_(2),dots,alpha_(n)).W(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n})=W((\zeta-\alpha_{2} y^{(2)}) y^{(1)}, \alpha_{2}, \ldots, \alpha_{n}) .
Treating α 3 , , α n α 3 , , α n alpha_(3),dots,alpha_(n)\alpha_{3}, \ldots, \alpha_{n} as constants, you should be able to verify that this is just some quadratic function in α 2 α 2 alpha_(2)\alpha_{2}. I.e., this can also be expressed in the form a α 2 2 + b α 2 + c a α 2 2 + b α 2 + c aalpha_(2)^(2)+balpha_(2)+ca \alpha_{2}^{2}+b \alpha_{2}+c for some appropriate a , b a , b a,ba, b, and c c cc. If we ignore the "box" constraints (32) (or, equivalently, that L α 2 H L α 2 H L <= alpha_(2) <= HL \leq \alpha_{2} \leq H ), then we can easily maximize this quadratic function by setting its derivative to zero and solving. We'll let α 2 new,unclipped α 2 new,unclipped alpha_(2)^("new,unclipped")\alpha_{2}^{\textit {new,unclipped}} denote the resulting value of α 2 α 2 alpha_(2)\alpha_{2}. You should also be able to convince yourself that if we had instead wanted to maximize W W WW with respect to α 2 α 2 alpha_(2)\alpha_{2} but subject to the box constraint, then we can find the resulting optimal value simply by taking α 2 new,unclipped α 2 new,unclipped alpha_(2)^("new,unclipped")\alpha_{2}^{\textit {new,unclipped}} and "clipping" it to lie in the [ L , H ] [ L , H ] [L,H][L, H] interval, to get
α 2 new = { H if α 2 new,unclipped > H α 2 new,unclipped if L α 2 new,unclipped H L if α 2 new,unclipped < L α 2 new = H      if  α 2 new,unclipped > H α 2 new,unclipped      if  L α 2 new,unclipped H L      if  α 2 new,unclipped < L alpha_(2)^("new")={[H,"if "alpha_(2)^("new,unclipped") > H],[alpha_(2)^("new,unclipped"),"if "L <= alpha_(2)^("new,unclipped") <= H],[L,"if "alpha_(2)^("new,unclipped") < L]:}\alpha_{2}^{\textit {new}}= \begin{cases}H & \text {if } \alpha_{2}^{\textit {new,unclipped}}>H \\ \alpha_{2}^{\textit {new,unclipped}} & \text {if } L \leq \alpha_{2}^{\textit{new,unclipped}} \leq H \\ L & \text {if } \alpha_{2}^{\textit {new,unclipped}}<L\end{cases}
Finally, having found the α 2 new α 2 new alpha_(2)^("new")\alpha_{2}^{\textit {new}}, we can use Equation (34) to go back and find the optimal value of α 1 new α 1 new alpha_(1)^("new")\alpha_{1}^{\textit {new}}.
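In code, these last two steps (clipping α_2 and then recovering α_1 from Equation (34)) look as follows; how α_2^{new,unclipped} itself is computed is left to Platt's paper, as in the notes, and the names are illustrative:

```python
def clip_and_recover(alpha2_unclipped, alpha1_old, alpha2_old, y1, y2, L, H):
    # Clip the unconstrained maximizer of the quadratic in alpha_2 to [L, H] ...
    alpha2_new = min(H, max(L, alpha2_unclipped))
    # ... then keep alpha_1 y1 + alpha_2 y2 = zeta (Equation (34)) to recover alpha_1.
    alpha1_new = alpha1_old + y1 * y2 * (alpha2_old - alpha2_new)
    return alpha1_new, alpha2_new
```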
There're a couple more details that are quite easy but that we'll leave you to read about yourself in Platt's paper: One is the choice of the heuristics used to select the next α i , α j α i , α j alpha_(i),alpha_(j)\alpha_{i}, \alpha_{j} to update; the other is how to update b b bb as the SMO algorithm is run.
You can read the notes from the next lecture from CS229 on Deep Learning here.

  1. Here, for simplicity, we include all the monomials with repetitions (so that, e.g., x 1 x 2 x 3 x 1 x 2 x 3 x_(1)x_(2)x_(3)x_{1} x_{2} x_{3} and x 2 x 3 x 1 x 2 x 3 x 1 x_(2)x_(3)x_(1)x_{2} x_{3} x_{1} both appear in ϕ ( x ) ϕ ( x ) phi(x)\phi(x)). Therefore, there are 1 + d + d 2 + d 3 1 + d + d 2 + d 3 1+d+d^(2)+d^(3)1+d+d^{2}+d^{3} entries in ϕ ( x ) ϕ ( x ) phi(x)\phi(x) in total. ↩︎
  2. Recall that X X X\mathcal{X} is the space of the input x x xx. In our running example, X = R d X = R d X=R^(d)\mathcal{X}=\mathbb{R}^{d} ↩︎
  3. Many texts present Mercer's theorem in a slightly more complicated form involving L 2 L 2 L^(2)L^{2} functions, but when the input attributes take values in R d R d R^(d)\mathbb{R}^{d}, the version given here is equivalent. ↩︎
  4. You may be familiar with linear programming, which solves optimization problems that have linear objectives and linear constraints. QP software is also widely available, which allows convex quadratic objectives and linear constraints. ↩︎
  5. Readers interested in learning more about this topic are encouraged to read, e.g., R. T. Rockafellar (1970), Convex Analysis, Princeton University Press. ↩︎
  6. When f f ff has a Hessian, then it is convex if and only if the Hessian is positive semidefinite. For instance, f ( w ) = w T w f ( w ) = w T w f(w)=w^(T)wf(w)=w^{T} w is convex; similarly, all linear (and affine) functions are also convex. (A function f f ff can also be convex without being differentiable, but we won't need those more general definitions of convexity here.) ↩︎
  7. I.e., there exists a i , b i a i , b i a_(i),b_(i)a_{i}, b_{i}, so that h i ( w ) = a i T w + b i h i ( w ) = a i T w + b i h_(i)(w)=a_(i)^(T)w+b_(i)h_{i}(w)=a_{i}^{T} w+b_{i}. "Affine" means the same thing as linear, except that we also allow the extra intercept term b i b i b_(i)b_{i}. ↩︎
