

Soft margin classification

For the very high dimensional problems common in text classification, sometimes the data are linearly separable. But in the general case they are not, and even if they are, we might prefer a solution that better separates the bulk of the data while ignoring a few weird noise documents.

Figure 15.5: Large margin classification with slack variables.

If the training set $\docsetlabeled$ is not linearly separable, the standard approach is to allow the fat decision margin to make a few mistakes (some points, outliers or noisy examples, are inside or on the wrong side of the margin). We then pay a cost for each misclassified example, which depends on how far it is from meeting the margin requirement given in Equation 169. To implement this, we introduce slack variables $\xi_i$. A non-zero value for $\xi_i$ allows $\vec{x}_i$ not to meet the margin requirement, at a cost proportional to the value of $\xi_i$. See Figure 15.5.

The formulation of the SVM optimization problem with slack variables is:
Find $\vec{w}$, $b$, and $\xi_i \ge 0$ such that:
$\frac{1}{2}\vec{w}^{T}\vec{w} + C \sum_i \xi_i$ is minimized, and
for all $\{(\vec{x}_i,y_i)\}$, $y_i(\vec{w}^{T}\vec{x}_i + b) \ge 1 - \xi_i$
The optimization problem is then trading off how fat it can make the margin versus how many points have to be moved around to allow this margin. The margin can be less than 1 for a point $\vec{x}_i$ by setting $\xi_i > 0$, but then one pays a penalty of $C\xi_i$ in the minimization for doing so. The sum of the $\xi_i$ gives an upper bound on the number of training errors. Soft-margin SVMs thus minimize training error traded off against margin. The parameter $C$ is a regularization parameter, which provides a way to control overfitting: as $C$ becomes large, it becomes expensive to ignore data points, even at the price of a smaller geometric margin; when $C$ is small, it is easy to account for some data points with slack variables and to place a fat margin that models the bulk of the data.
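
As a concrete illustration of this trade-off, here is a minimal sketch, assuming scikit-learn is available; the two-class data set, the values of $C$, and all variable names are invented for the example, not taken from the text. It trains a linear soft-margin SVM with a small and a large value of $C$ on non-separable data and reports how many training points end up as support vectors.

```python
# Illustrative sketch: effect of the regularization parameter C on a
# soft-margin linear SVM (assumes scikit-learn; data are invented).
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two overlapping Gaussian clouds, so the data are not linearly separable.
X = np.vstack([rng.randn(100, 2) + [2, 2], rng.randn(100, 2) - [2, 2]])
y = np.array([1] * 100 + [-1] * 100)

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Small C: margin violations are cheap, so the optimizer prefers a fat
    # margin and tolerates many points inside it (many support vectors).
    # Large C: violations are expensive, so the margin is narrower and
    # fits the bulk of the data more tightly.
    print(f"C={C}: {clf.n_support_.sum()} support vectors, "
          f"training accuracy {clf.score(X, y):.3f}")
```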

The dual problem for soft margin classification becomes:
Find $\alpha_1, \ldots, \alpha_N$ such that
$\sum_i \alpha_i - \frac{1}{2}\sum_i\sum_j \alpha_i\alpha_j y_i y_j \vec{x}_i^{T}\vec{x}_j$ is maximized, and
$\sum_i \alpha_i y_i = 0$
$0 \le \alpha_i \le C$ for all $1 \le i \le N$
Neither the slack variables $\xi_i$ nor Lagrange multipliers for them appear in the dual problem. All we are left with is the constant $C$ bounding the possible size of the Lagrange multipliers for the support vector data points. As before, the $\vec{x}_i$ with non-zero $\alpha_i$ will be the support vectors. The solution of the dual problem is of the form:
$\vec{w} = \sum_i \alpha_i y_i \vec{x}_i$
$b = y_k(1-\xi_k) - \vec{w}^{T}\vec{x}_k$ for $k = \argmax_k \alpha_k$
Again $\vec{w}$ is not needed explicitly for classification, which can be done in terms of dot products with data points, as in Equation 170.
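
To make this point concrete, here is a minimal sketch, not taken from the text, of assembling a classifier from a dual solution: the arrays X, y, and alphas are assumed to come from a QP solver for the dual above. It recovers $b$ from a margin support vector (one with $0 < \alpha_k < C$, so that $\xi_k = 0$ and the formula above reduces to $b = y_k - \vec{w}^{T}\vec{x}_k$, a common practical choice), and it classifies purely through dot products with training points, never forming $\vec{w}$ explicitly.

```python
# Illustrative sketch: classification from a dual soft-margin solution.
import numpy as np

def make_classifier(X, y, alphas, C, tol=1e-8):
    """Build a decision function from a dual solution.

    X: (N, d) training vectors, y: (N,) labels in {+1, -1},
    alphas: (N,) Lagrange multipliers from a QP solver, C: slack penalty.
    Assumes at least one margin support vector with 0 < alpha_k < C exists.
    """
    # Recover b from a margin support vector, for which xi_k = 0 and the
    # constraint y_k (w . x_k + b) = 1 holds with equality.
    sv = np.where((alphas > tol) & (alphas < C - tol))[0]
    k = sv[0]
    b = y[k] - np.sum(alphas * y * (X @ X[k]))

    def decision(x_new):
        # f(x) = sum_i alpha_i y_i (x_i . x) + b: only dot products with
        # the training points are needed; w is never formed explicitly.
        return np.sum(alphas * y * (X @ x_new)) + b

    return decision  # classify a document with sign(decision(x_new))
```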

Typically, the support vectors will be a small proportion of the training data. However, if the problem is non-separable or the margin is small, then every data point which is misclassified or within the margin will have a non-zero $\alpha_i$. If this set of points becomes large, then, for the nonlinear case which we turn to in Section 15.2.3, this can be a major slowdown for using SVMs at test time.


Classifier | Mode | Method | Time complexity
NB | training | -- | $\Theta(\vert\docsetlabeled\vert L_{ave}+\vert\mathbb{C}\vert\vert V\vert)$
NB | testing | -- | $\Theta(\vert\mathbb{C}\vert M_{a})$
Rocchio | training | -- | $\Theta(\vert\docsetlabeled\vert L_{ave}+\vert\mathbb{C}\vert\vert V\vert)$
Rocchio | testing | -- | $\Theta(\vert\mathbb{C}\vert M_{a})$
kNN | training | preprocessing | $\Theta(\vert\docsetlabeled\vert L_{ave})$
kNN | testing | preprocessing | $\Theta(\vert\docsetlabeled\vert M_{ave} M_{a})$
kNN | training | no preprocessing | $\Theta(1)$
kNN | testing | no preprocessing | $\Theta(\vert\docsetlabeled\vert L_{ave} M_{a})$
SVM | training | conventional | $O(\vert\mathbb{C}\vert\vert\docsetlabeled\vert^3 M_{ave})$; $\approx O(\vert\mathbb{C}\vert\vert\docsetlabeled\vert^{1.7} M_{ave})$, empirically
SVM | training | cutting planes | $O(\vert\mathbb{C}\vert\vert\docsetlabeled\vert M_{ave})$
SVM | testing | -- | $O(\vert\mathbb{C}\vert M_{a})$
Table 15.1: Training and testing complexity of various classifiers including SVMs. Training is the time the learning method takes to learn a classifier over $\docsetlabeled$, while testing is the time it takes a classifier to classify one document. For SVMs, multiclass classification is assumed to be done by a set of $\vert\mathbb{C}\vert$ one-versus-rest classifiers. $L_{ave}$ is the average number of tokens per document, while $M_{ave}$ is the average vocabulary (number of non-zero features) of a document. $L_{a}$ and $M_{a}$ are the numbers of tokens and types, respectively, in the test document.

The complexity of training and testing with linear SVMs is shown in Table 15.1. The time for training an SVM is dominated by the time for solving the underlying quadratic programming (QP) problem, and so the theoretical and empirical complexity varies depending on the method used to solve it. The standard result for solving QPs is that it takes time cubic in the size of the data set (Kozlov et al., 1979). All the recent work on SVM training has worked to reduce that complexity, often by being satisfied with approximate solutions. Standardly, empirical complexity is about $O(\vert\docsetlabeled\vert^{1.7})$ (Joachims, 2006a). Nevertheless, the super-linear training time of traditional SVM algorithms makes them difficult or impossible to use on very large training data sets. Alternative traditional SVM solution algorithms which are linear in the number of training examples scale badly with a large number of features, which is another standard attribute of text problems. However, a new training algorithm based on cutting plane techniques gives a promising answer to this issue by having running time linear in the number of training examples and the number of non-zero features in examples (Joachims, 2006a). Nevertheless, the actual speed of doing quadratic optimization remains much slower than simply counting terms as is done in a Naive Bayes model. Extending SVM algorithms to nonlinear SVMs, as in the next section, standardly increases training complexity by a factor of $\vert\docsetlabeled\vert$ (since dot products between examples need to be calculated), making them impractical. In practice, it can often be cheaper to materialize the higher-order features and to train a linear SVM.
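
As a rough illustration of this last point, here is a minimal sketch, assuming scikit-learn and using invented stand-in data rather than anything from the text: it materializes degree-2 features explicitly and trains a linear SVM on them, instead of training a kernelized SVM with a quadratic kernel.

```python
# Illustrative sketch: materialize quadratic features explicitly, then
# train a *linear* soft-margin SVM on them (assumes scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(200, 5)                       # invented stand-in data
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)  # a quadratic decision rule

clf = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # explicit x_i * x_j features
    LinearSVC(C=1.0, max_iter=10000),                  # linear soft-margin SVM
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```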

