Introduction to Regularization and Model Selection

Suppose we are trying to select among several different models for a learning problem. For instance, we might be using a polynomial regression model $h_\theta(x) = g(\theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_k x^k)$, and wish to decide if $k$ should be $0, 1, \ldots,$ or $10$. How can we automatically select a model that represents a good tradeoff between the twin evils of bias and variance[1]? Alternatively, suppose we want to automatically choose the bandwidth parameter $\tau$ for locally weighted regression, or the parameter $C$ for our $\ell_1$-regularized SVM. How can we do that?
For the sake of concreteness, in these notes we assume we have some finite set of models $\mathcal{M} = \{M_1, \ldots, M_d\}$ that we're trying to select among. For instance, in our first example above, the model $M_i$ would be an $i$-th order polynomial regression model. (The generalization to infinite $\mathcal{M}$ is not hard.[2]) Alternatively, if we are trying to decide between using an SVM, a neural network or logistic regression, then $\mathcal{M}$ may contain these models.

1. Cross validation

Let's suppose we are, as usual, given a training set $S$. Given what we know about empirical risk minimization, here's what might initially seem like an algorithm, resulting from using empirical risk minimization for model selection:
  1. Train each model $M_i$ on $S$, to get some hypothesis $h_i$.
  2. Pick the hypothesis with the smallest training error.
This algorithm does not work. Consider choosing the order of a polynomial. The higher the order of the polynomial, the better it will fit the training set $S$, and thus the lower the training error. Hence, this method will always select a high-variance, high-degree polynomial model, which we saw previously is often a poor choice.
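To see this concretely, here is a small numpy sketch (the data is synthetic and purely illustrative, and a least-squares polynomial fit stands in for the learning algorithm): the training error can only go down as the degree $k$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20)                          # 20 training inputs
y = np.sin(2 * x) + 0.2 * rng.standard_normal(20)   # noisy targets

for k in (1, 3, 5, 10):
    coeffs = np.polyfit(x, y, deg=k)                 # fit a degree-k polynomial by least squares
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {k}: training MSE = {mse:.4f}")
# The training MSE shrinks as k increases, even though the high-degree
# fits generalize poorly; selecting by training error rewards exactly that.
```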
Here's an algorithm that works better. In hold-out cross validation (also called simple cross validation), we do the following:
  1. Randomly split $S$ into $S_{\text{train}}$ (say, 70% of the data) and $S_{\text{cv}}$ (the remaining 30%). Here, $S_{\text{cv}}$ is called the hold-out cross validation set.
  2. Train each model $M_i$ on $S_{\text{train}}$ only, to get some hypothesis $h_i$.
  3. Select and output the hypothesis $h_i$ that had the smallest error $\hat{\varepsilon}_{S_{\text{cv}}}(h_i)$ on the hold-out cross validation set. (Recall, $\hat{\varepsilon}_{S_{\text{cv}}}(h)$ denotes the empirical error of $h$ on the set of examples in $S_{\text{cv}}$.)
By testing on a set of examples $S_{\text{cv}}$ that the models were not trained on, we obtain a better estimate of each hypothesis $h_i$'s true generalization error, and can then pick the one with the smallest estimated generalization error. Usually, somewhere between $1/4$ and $1/3$ of the data is used in the hold-out cross validation set, and 30% is a typical choice.
Optionally, step 3 in the algorithm may also be replaced with selecting the model $M_i$ according to $\arg\min_i \hat{\varepsilon}_{S_{\text{cv}}}(h_i)$, and then retraining $M_i$ on the entire training set $S$. (This is often a good idea, with one exception being learning algorithms that are very sensitive to perturbations of the initial conditions and/or data. For these methods, $M_i$ doing well on $S_{\text{train}}$ does not necessarily mean it will also do well on $S_{\text{cv}}$, and it might be better to forgo this retraining step.)
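As a concrete illustration, here is a minimal sketch of hold-out cross validation in Python. The `fit`/`predict` interface and the candidate `models` list are hypothetical placeholders, not something defined in these notes:

```python
import numpy as np

def holdout_select(models, X, y, train_frac=0.7, seed=0):
    """Hold-out cross validation: pick the model with the lowest error on S_cv."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train_frac * len(X))
    train, cv = idx[:n_train], idx[n_train:]                 # S_train and S_cv

    errors = []
    for model in models:
        model.fit(X[train], y[train])                        # train on S_train only
        errors.append(np.mean(model.predict(X[cv]) != y[cv]))  # empirical error on S_cv

    best = int(np.argmin(errors))
    models[best].fit(X, y)       # optional step: retrain the winner on all of S
    return models[best], errors
```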
The disadvantage of using hold-out cross validation is that it "wastes" about 30% of the data. Even if we were to take the optional step of retraining the model on the entire training set, it's still as if we're trying to find a good model for a learning problem in which we had $0.7n$ training examples, rather than $n$ training examples, since we're testing models that were trained on only $0.7n$ examples each time. While this is fine if data is abundant and/or cheap, in learning problems in which data is scarce (consider a problem with $n = 20$, say), we'd like to do something better.
Here is a method, called $k$-fold cross validation, that holds out less data each time:
  1. Randomly split $S$ into $k$ disjoint subsets of $n/k$ training examples each. Let's call these subsets $S_1, \ldots, S_k$.
  2. For each model $M_i$, we evaluate it as follows:
    1. For $j = 1, \ldots, k$:
        Train the model $M_i$ on $S_1 \cup \cdots \cup S_{j-1} \cup S_{j+1} \cup \cdots \cup S_k$ (i.e., train on all the data except $S_j$) to get some hypothesis $h_{ij}$. Test the hypothesis $h_{ij}$ on $S_j$, to get $\hat{\varepsilon}_{S_j}(h_{ij})$.
      The estimated generalization error of model $M_i$ is then calculated as the average of the $\hat{\varepsilon}_{S_j}(h_{ij})$'s (averaged over $j$).
  3. Pick the model $M_i$ with the lowest estimated generalization error, and retrain that model on the entire training set $S$. The resulting hypothesis is then output as our final answer.
A typical choice for the number of folds to use here would be $k = 10$. While the fraction of data held out each time is now $1/k$, much smaller than before, this procedure may also be more computationally expensive than hold-out cross validation, since we now need to train each model $k$ times.
While $k = 10$ is a commonly used choice, in problems in which data is really scarce, sometimes we will use the extreme choice of $k = n$ in order to leave out as little data as possible each time. In this setting, we would repeatedly train on all but one of the training examples in $S$, and test on that held-out example. The resulting $n$ errors are then averaged together to obtain our estimate of the generalization error of a model. This method has its own name; since we're holding out one training example at a time, it is called leave-one-out cross validation.
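A sketch of the $k$-fold procedure for a single model follows (again assuming a hypothetical `fit`/`predict` interface; setting `k = len(X)` gives leave-one-out cross validation). One would call it once per candidate model, keep the model with the smallest estimate, and retrain that model on all of $S$.

```python
import numpy as np

def kfold_error(model, X, y, k=10, seed=0):
    """Estimate a model's generalization error by k-fold cross validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)   # k disjoint subsets S_1, ..., S_k

    errs = []
    for j in range(k):
        train = np.concatenate([folds[i] for i in range(k) if i != j])
        model.fit(X[train], y[train])                     # train on everything except S_j
        errs.append(np.mean(model.predict(X[folds[j]]) != y[folds[j]]))
    return float(np.mean(errs))                           # average over the k held-out errors
```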
Finally, even though we have described the different versions of cross validation as methods for selecting a model, they can also be used more simply to evaluate a single model or algorithm. For example, if you have implemented some learning algorithm and want to estimate how well it performs for your application (or if you have invented a novel learning algorithm and want to report in a technical paper how well it performs on various test sets), cross validation would give a reasonable way of doing so.

2. Feature Selection

One special and important case of model selection is called feature selection. To motivate this, imagine that you have a supervised learning problem where the number of features $d$ is very large (perhaps $d \gg n$), but you suspect that there is only a small number of features that are "relevant" to the learning task. Even if you use a simple linear classifier (such as the perceptron) over the $d$ input features, the VC dimension of your hypothesis class would still be $O(d)$, and thus overfitting would be a potential problem unless the training set is fairly large.
In such a setting, you can apply a feature selection algorithm to reduce the number of features. Given $d$ features, there are $2^d$ possible feature subsets (since each of the $d$ features can either be included or excluded from the subset), and thus feature selection can be posed as a model selection problem over $2^d$ possible models. For large values of $d$, it's usually too expensive to explicitly enumerate over and compare all $2^d$ models, and so typically some heuristic search procedure is used to find a good feature subset. The following search procedure is called forward search:
  1. Initialize $\mathcal{F} = \emptyset$.
  2. Repeat {
    • (a) For $i = 1, \ldots, d$, if $i \notin \mathcal{F}$, let $\mathcal{F}_i = \mathcal{F} \cup \{i\}$, and use some version of cross validation to evaluate features $\mathcal{F}_i$. (I.e., train your learning algorithm using only the features in $\mathcal{F}_i$, and estimate its generalization error.)
    • (b) Set $\mathcal{F}$ to be the best feature subset found in step (a).
    }
  3. Select and output the best feature subset that was evaluated during the entire search procedure.
The outer loop of the algorithm can be terminated either when $\mathcal{F} = \{1, \ldots, d\}$ is the set of all features, or when $|\mathcal{F}|$ exceeds some pre-set threshold (corresponding to the maximum number of features that you want the algorithm to consider using).
The algorithm described above is one instantiation of wrapper model feature selection, since it is a procedure that "wraps" around your learning algorithm, and repeatedly makes calls to the learning algorithm to evaluate how well it does using different feature subsets. Aside from forward search, other search procedures can also be used. For example, backward search starts off with $\mathcal{F} = \{1, \ldots, d\}$ as the set of all features, and repeatedly deletes features one at a time (evaluating single-feature deletions in a similar manner to how forward search evaluates single-feature additions) until $\mathcal{F} = \emptyset$.
Wrapper feature selection algorithms often work quite well, but can be computationally expensive given that they need to make many calls to the learning algorithm. Indeed, complete forward search (terminating when $\mathcal{F} = \{1, \ldots, d\}$) would take about $O(d^2)$ calls to the learning algorithm.
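Here is a rough sketch of forward search; `eval_subset` is a hypothetical callback that trains the learning algorithm on the given feature columns and returns a cross-validation estimate of its error.

```python
import numpy as np

def forward_search(X, y, eval_subset, max_features=None):
    """Greedy wrapper feature selection by single-feature additions."""
    d = X.shape[1]
    budget = max_features if max_features is not None else d
    selected = set()
    best_err, best_subset = np.inf, set()

    while len(selected) < budget:
        trials = []
        for i in range(d):
            if i not in selected:
                subset = sorted(selected | {i})
                trials.append((eval_subset(X[:, subset], y), i))
        err, i_best = min(trials, key=lambda t: t[0])   # step (a): try each single-feature addition
        selected.add(i_best)                            # step (b): keep the best one
        if err < best_err:
            best_err, best_subset = err, set(selected)
    return best_subset, best_err                        # best subset seen anywhere in the search
```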
Filter feature selection methods give heuristic, but computationally much cheaper, ways of choosing a feature subset. The idea here is to compute some simple score $S(i)$ that measures how informative each feature $x_i$ is about the class labels $y$. Then, we simply pick the $k$ features with the largest scores $S(i)$.
One possible choice of the score would be to define $S(i)$ as (the absolute value of) the correlation between $x_i$ and $y$, as measured on the training data. This would result in our choosing the features that are the most strongly correlated with the class labels. In practice, it is more common (particularly for discrete-valued features $x_i$) to choose $S(i)$ to be the mutual information $\operatorname{MI}(x_i, y)$ between $x_i$ and $y$:
$$\operatorname{MI}(x_i, y) = \sum_{x_i \in \{0,1\}} \sum_{y \in \{0,1\}} p(x_i, y) \log \frac{p(x_i, y)}{p(x_i)\, p(y)}.$$
(The equation above assumes that $x_i$ and $y$ are binary-valued; more generally, the summations would be over the domains of the variables.) The probabilities above, $p(x_i, y)$, $p(x_i)$ and $p(y)$, can all be estimated according to their empirical distributions on the training set.
To gain intuition about what this score does, note that the mutual information can also be expressed as a Kullback-Leibler (KL) divergence:
$$\operatorname{MI}(x_i, y) = \operatorname{KL}\left(p(x_i, y) \,\|\, p(x_i)\, p(y)\right)$$
You'll get to play more with KL-divergence in Problem Set #3, but informally, this gives a measure of how different the probability distributions $p(x_i, y)$ and $p(x_i)\,p(y)$ are. If $x_i$ and $y$ are independent random variables, then we would have $p(x_i, y) = p(x_i)\,p(y)$, and the KL-divergence between the two distributions will be zero. This is consistent with the idea that if $x_i$ and $y$ are independent, then $x_i$ is clearly very "non-informative" about $y$, and thus the score $S(i)$ should be small. Conversely, if $x_i$ is very "informative" about $y$, then their mutual information $\operatorname{MI}(x_i, y)$ would be large.
One final detail: now that you've ranked the features according to their scores $S(i)$, how do you decide how many features $k$ to choose? Well, one standard way to do so is to use cross validation to select among the possible values of $k$. For example, when applying naive Bayes to text classification, a problem where the vocabulary size $d$ is usually very large, using this method to select a feature subset often results in increased classifier accuracy.
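A sketch of the filter approach for binary-valued features, estimating the probabilities empirically as described above (the helper names here are hypothetical, not from the notes):

```python
import numpy as np

def mutual_information(xi, y):
    """Empirical MI(x_i, y) for binary xi and y, estimated from the training set."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_xy = np.mean((xi == a) & (y == b))
            p_x, p_y = np.mean(xi == a), np.mean(y == b)
            if p_xy > 0:                       # a term with p(x_i, y) = 0 contributes 0
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

def top_k_features(X, y, k):
    """Rank features by S(i) = MI(x_i, y) and keep the k highest-scoring ones."""
    scores = np.array([mutual_information(X[:, i], y) for i in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]        # indices of the k largest scores
# In practice k itself would be chosen by cross validation, as described above.
```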

3. Bayesian statistics and regularization

In this section, we will talk about one more tool in our arsenal for our battle against overfitting.
At the beginning of the quarter, we talked about parameter fitting using maximum likelihood estimation (MLE), and chose our parameters according to
$$\theta_{\text{MLE}} = \arg\max_{\theta} \prod_{i=1}^{n} p\left(y^{(i)} \mid x^{(i)}; \theta\right).$$
Throughout our subsequent discussions, we viewed $\theta$ as an unknown parameter of the world. This view of $\theta$ as being constant-valued but unknown is taken in frequentist statistics. In the frequentist view of the world, $\theta$ is not random; it just happens to be unknown, and it's our job to come up with statistical procedures (such as maximum likelihood) to try to estimate this parameter.
An alternative way to approach our parameter estimation problems is to take the Bayesian view of the world, and think of $\theta$ as a random variable whose value is unknown. In this approach, we would specify a prior distribution $p(\theta)$ on $\theta$ that expresses our "prior beliefs" about the parameters. Given a training set $S = \{(x^{(i)}, y^{(i)})\}_{i=1}^{n}$, when we are asked to make a prediction on a new value of $x$, we can then compute the posterior distribution on the parameters
$$p(\theta \mid S) = \frac{p(S \mid \theta)\, p(\theta)}{p(S)} = \frac{\left(\prod_{i=1}^{n} p\left(y^{(i)} \mid x^{(i)}, \theta\right)\right) p(\theta)}{\int_{\theta}\left(\prod_{i=1}^{n} p\left(y^{(i)} \mid x^{(i)}, \theta\right)\right) p(\theta)\, d\theta} \tag{1}$$
In the equation above, $p(y^{(i)} \mid x^{(i)}, \theta)$ comes from whatever model you're using for your learning problem. For example, if you are using Bayesian logistic regression, then you might choose $p(y^{(i)} \mid x^{(i)}, \theta) = h_\theta(x^{(i)})^{y^{(i)}} \left(1 - h_\theta(x^{(i)})\right)^{(1 - y^{(i)})}$, where $h_\theta(x^{(i)}) = 1/\left(1 + \exp(-\theta^T x^{(i)})\right)$.[3]
When we are given a new test example $x$ and asked to make a prediction on it, we can compute our posterior distribution on the class label using the posterior distribution on $\theta$:
$$p(y \mid x, S) = \int_{\theta} p(y \mid x, \theta)\, p(\theta \mid S)\, d\theta \tag{2}$$
In the equation above, $p(\theta \mid S)$ comes from Equation (1). Thus, for example, if the goal is to predict the expected value of $y$ given $x$, then we would output[4]
$$\mathrm{E}[y \mid x, S] = \int_{y} y\, p(y \mid x, S)\, dy$$
The procedure that we've outlined here can be thought of as doing "fully Bayesian" prediction, where our prediction is computed by taking an average with respect to the posterior $p(\theta \mid S)$ over $\theta$. Unfortunately, in general it is computationally very difficult to compute this posterior distribution. This is because it requires taking integrals over the (usually high-dimensional) $\theta$ as in Equation (1), and this typically cannot be done in closed form.
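For intuition only, when $\theta$ is a single scalar the integrals in Equations (1) and (2) can be approximated by brute force on a grid. The sketch below does this for one-feature Bayesian logistic regression (everything here is a hypothetical illustration; this kind of enumeration is hopeless once $\theta$ is high-dimensional, which is exactly the difficulty just described).

```python
import numpy as np

def bayes_predict_1d(x_new, x, y, tau=1.0, grid_size=2001):
    """Grid approximation of p(y = 1 | x_new, S) for 1-D Bayesian logistic regression."""
    theta = np.linspace(-10.0, 10.0, grid_size)        # candidate values of the scalar theta
    logits = np.outer(theta, x)                        # theta * x^(i) for every grid point
    # log p(S | theta) = sum_i [ y^(i) * theta x^(i) - log(1 + exp(theta x^(i))) ]
    loglik = (y * logits - np.logaddexp(0.0, logits)).sum(axis=1)
    logprior = -theta**2 / (2 * tau**2)                # Gaussian prior N(0, tau^2)
    w = np.exp(loglik + logprior - np.max(loglik + logprior))
    w /= w.sum()                                       # discretized posterior p(theta | S)
    # Equation (2): average the predictive probability over the posterior.
    return float(np.sum(w / (1.0 + np.exp(-theta * x_new))))
```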
Thus, in practice we will instead approximate the posterior distribution for $\theta$. One common approximation is to replace our posterior distribution for $\theta$ (as in Equation 2) with a single point estimate. The MAP (maximum a posteriori) estimate for $\theta$ is given by
$$\theta_{\text{MAP}} = \arg\max_{\theta} \prod_{i=1}^{n} p\left(y^{(i)} \mid x^{(i)}, \theta\right) p(\theta). \tag{3}$$
Note that this is the same formula as for the MLE (maximum likelihood) estimate of $\theta$, except for the prior $p(\theta)$ term at the end.
In practical applications, a common choice for the prior $p(\theta)$ is to assume that $\theta \sim \mathcal{N}(0, \tau^2 I)$. Using this choice of prior, the fitted parameters $\theta_{\text{MAP}}$ will have smaller norm than those selected by maximum likelihood. (See Problem Set #3.) In practice, this causes the Bayesian MAP estimate to be less susceptible to overfitting than the ML estimate of the parameters. For example, Bayesian logistic regression turns out to be an effective algorithm for text classification, even though in text classification we usually have $d \gg n$.
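Concretely, taking logs in Equation (3) with the prior $\theta \sim \mathcal{N}(0, \tau^2 I)$ adds the penalty $-\|\theta\|^2 / (2\tau^2)$ to the log-likelihood, which is exactly $\ell_2$ regularization. Here is a rough sketch for logistic regression by gradient ascent (the function names, learning rate, and iteration count are illustrative assumptions, not from the notes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def map_logistic_regression(X, y, tau=1.0, lr=0.1, iters=2000):
    """MAP estimate for logistic regression with a N(0, tau^2 I) prior on theta."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad_loglik = X.T @ (y - p)          # gradient of sum_i log p(y^(i) | x^(i), theta)
        grad_logprior = -theta / tau**2      # gradient of log p(theta) for the Gaussian prior
        theta += lr * (grad_loglik + grad_logprior) / n
    return theta                             # shrinks toward 0 as tau decreases
```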

  1. Given that we said in the previous set of notes that bias and variance are two very different beasts, some readers may be wondering if we should be calling them "twin" evils here. Perhaps it'd be better to think of them as non-identical twins. The phrase "the fraternal twin evils of bias and variance" doesn't have the same ring to it, though. ↩︎
  2. If we are trying to choose from an infinite set of models, say corresponding to the possible values of the bandwidth $\tau \in \mathbb{R}^{+}$, we may discretize $\tau$ and consider only a finite number of possible values for it. More generally, most of the algorithms described here can be viewed as performing optimization search in the space of models, and we can perform this search over infinite model classes as well. ↩︎
  3. Since we are now viewing $\theta$ as a random variable, it is okay to condition on its value, and write "$p(y \mid x, \theta)$" instead of "$p(y \mid x; \theta)$." ↩︎
  4. The integral below would be replaced by a summation if $y$ is discrete-valued. ↩︎
