

The bias-variance tradeoff

Nonlinear classifiers are more powerful than linear classifiers. For some problems, there exists a nonlinear classifier with zero classification error, but no such linear classifier. Does that mean that we should always use nonlinear classifiers for optimal effectiveness in statistical text classification?

To answer this question, we introduce the bias-variance tradeoff in this section, one of the most important concepts in machine learning. The tradeoff helps explain why there is no universally optimal learning method. Selecting an appropriate learning method is therefore an unavoidable part of solving a text classification problem.

Throughout this section, we use linear and nonlinear classifiers as prototypical examples of ``less powerful'' and ``more powerful'' learning, respectively. This is a simplification for a number of reasons. First, many nonlinear models subsume linear models as a special case. For instance, a nonlinear learning method like kNN will in some cases produce a linear classifier. Second, there are nonlinear models that are less complex than linear models. For instance, a quadratic polynomial with two parameters is less powerful than a 10,000-dimensional linear classifier. Third, the complexity of learning is not really a property of the classifier because there are many aspects of learning (such as feature selection, regularization, and constraints such as margin maximization in Chapter 15) that make a learning method more or less powerful without affecting the type of classifier - linear or nonlinear - that is the final result of learning. We refer the reader to the publications listed in Section 14.7 for a treatment of the bias-variance tradeoff that takes these complexities into account. In this section, linear and nonlinear classifiers will simply serve as proxies for weaker and stronger learning methods in text classification.

We first need to state our objective in text classification more precisely. In Section 13.1, we said that we want to minimize classification error on the test set. The implicit assumption was that training documents and test documents are generated according to the same underlying distribution. We will denote this distribution $P(\langle d,c\rangle)$, where $d$ is the document and $c$ its label or class. The multinomial and Bernoulli Naive Bayes models of Chapter 13 were examples of generative models that decompose $P(\langle d,c\rangle)$ into the product of $P(c)$ and $P(d\vert c)$. Figures 14.10 and 14.11 depict generative models for $\langle d,c\rangle$ with $d \in \mathbb{R}^2$ and $c \in \{\mbox{square},\mbox{solid circle}\}$.
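As a concrete (and purely illustrative) picture of such a decomposition, the following Python sketch samples labeled documents $\langle d,c\rangle$ by first drawing $c$ from $P(c)$ and then $d$ from $P(d\vert c)$. The two-dimensional Gaussian class-conditional densities, the class names, and all parameter values are assumptions made here for illustration; they are not the book's model.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative generative model P(<d,c>) = P(c) * P(d|c) with d in R^2.
# The Gaussian class-conditionals below are an assumption for this sketch.
P_c = {"square": 0.5, "solid_circle": 0.5}
means = {"square": np.array([1.0, 1.0]), "solid_circle": np.array([-1.0, -1.0])}

def sample_labeled_document():
    """Draw one <d, c> pair: first c ~ P(c), then d ~ P(d|c)."""
    c = rng.choice(list(P_c), p=list(P_c.values()))
    d = rng.normal(loc=means[c], scale=1.0, size=2)
    return d, c

training_set = [sample_labeled_document() for _ in range(100)]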

In this section, instead of using the number of correctly classified test documents (or, equivalently, the error rate on test documents) as the evaluation measure, we adopt an evaluation measure that addresses the inherent uncertainty of labeling: in many text classification problems, documents from different classes are mapped to the same document representation. For example, the one-sentence documents China sues France and France sues China are mapped to the same document representation $d' = \{\mbox{China}, \mbox{France}, \mbox{sues}\}$ in a bag of words model. But only the latter document is relevant to the class $c'=$ legal actions brought by France (which might be defined, for example, as a standing query by an international trade lawyer).

To simplify the calculations in this section, we do not count the number of errors on the test set when evaluating a classifier, but instead look at how well the classifier estimates the conditional probability $P(c\vert d)$ of a document being in a class. In the above example, we might have $P(c'\vert d') = 0.5$.
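The following short Python sketch (an illustration added here, not from the book) makes the collision explicit: two distinct documents with different class memberships collapse to the same bag-of-words representation, which is why the best we can say about that representation is a probability such as $P(c'\vert d') = 0.5$.

def bag_of_words(text):
    """Map a document to the set of its terms, ignoring order and counts."""
    return frozenset(text.lower().split())

d1 = bag_of_words("China sues France")   # not in class c' = legal actions brought by France
d2 = bag_of_words("France sues China")   # in class c'

assert d1 == d2   # both map to {"china", "france", "sues"}
# If both word orders are equally likely to occur, the label of a document
# with this representation is genuinely uncertain, and P(c'|d') = 0.5.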

Our goal in text classification then is to find a classifier $\gamma $ such that, averaged over documents $d$, $\gamma(d)$ is as close as possible to the true probability $P(c\vert d)$. We measure this using mean squared error:

$\displaystyle \mbox{MSE}(\gamma) = E_d [\gamma(d) - P(c\vert d)]^2$     (148)

where $E_d$ is the expectation with respect to $P(d)$. The mean squared error term gives partial credit for decisions by $\gamma $ that are close if not completely right.

We define a classifier $\gamma $ to be optimal for a distribution $P(\langle d,c\rangle)$ if it minimizes $\mbox{MSE}(\gamma)$.
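A Monte Carlo approximation of Equation 148 can make the definition concrete. The setup below is an assumption made for illustration only: documents are points on the real line, the ``true'' $P(c\vert d)$ is a logistic curve, and $\gamma$ is a deliberately miscalibrated estimator of it.

import numpy as np

rng = np.random.default_rng(1)

# MSE(gamma) = E_d [gamma(d) - P(c|d)]^2, estimated by sampling from P(d).
def true_p(d):
    return 1.0 / (1.0 + np.exp(-d))         # assumed "true" P(c|d)

def gamma(d):
    return np.clip(true_p(d) + 0.1, 0, 1)   # a classifier that overestimates P(c|d)

docs = rng.normal(size=100_000)              # sample from an assumed P(d)
mse = np.mean((gamma(docs) - true_p(docs)) ** 2)
print(f"estimated MSE(gamma) ~ {mse:.4f}")   # close to 0.01: partial credit, not 0/1 error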

Minimizing MSE is a desideratum for classifiers. We also need a criterion for learning methods. Recall that we defined a learning method $\Gamma$ as a function that takes a labeled training set $\mathbb{D}$ as input and returns a classifier $\gamma $.

For learning methods, we adopt as our goal to find a $\Gamma$ that, averaged over training sets, learns classifiers $\gamma $ with minimal MSE. We can formalize this as minimizing learning error :

$\displaystyle \mbox{learning-error}(\Gamma) = E_{\mathbb{D}} [\mbox{MSE}(\Gamma(\mathbb{D}))]$     (149)

where $E_{\mathbb{D}}$ is the expectation over labeled training sets. To keep things simple, we can assume that training sets have a fixed size - the distribution $P(\langle d,c\rangle)$ then defines a distribution $P(\mathbb{D})$ over training sets.

We can use learning error as a criterion for selecting a learning method in statistical text classification. A learning method $\Gamma$ is optimal for a distribution $P(\mathbb{D})$ if it minimizes the learning error.


$\displaystyle E[x-\alpha]^2$
$\displaystyle \qquad = E x^2 - 2 E x \alpha + \alpha^2$     (150)
$\displaystyle \qquad = (Ex)^2 - 2 E x \alpha + \alpha^2 + E x^2 - 2(Ex)^2 + (Ex)^2$     (151-152)
$\displaystyle \qquad = [Ex - \alpha]^2 + E x^2 - E 2x(Ex) + E(Ex)^2$     (153-154)
$\displaystyle \qquad = [Ex - \alpha]^2 + E[x - Ex]^2$     (155)

$\displaystyle E_{\mathbb{D}} E_d [\Gamma_{\mathbb{D}}(d) - P(c\vert d)]^2$
$\displaystyle \qquad = E_d E_{\mathbb{D}} [\Gamma_{\mathbb{D}}(d) - P(c\vert d)]^2$     (157)
$\displaystyle \qquad = E_d \big[\, [E_{\mathbb{D}}\Gamma_{\mathbb{D}}(d) - P(c\vert d)]^2 + E_{\mathbb{D}} [\Gamma_{\mathbb{D}}(d) - E_{\mathbb{D}}\Gamma_{\mathbb{D}}(d)]^2 \,\big]$     (158-159)

Arithmetic transformations for the bias-variance decomposition. For the derivation of Equation 157, we set $\alpha = P(c\vert d)$ and $x = \Gamma_{\mathbb{D}}(d)$ in Equation 150.
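The key identity here is Equation 155: the expected squared error of a random quantity $x$ around a fixed target $\alpha$ splits exactly into a squared-bias term and a variance term. A quick numeric check in Python (added here for illustration; the distribution of $x$ and the value of $\alpha$ are arbitrary assumptions) confirms that the identity also holds exactly for empirical averages:

import numpy as np

rng = np.random.default_rng(2)

# Check of Equation 155: E[x - alpha]^2 = [Ex - alpha]^2 + E[x - Ex]^2.
x = rng.normal(loc=0.7, scale=0.2, size=1_000_000)   # random "predictions"
alpha = 0.5                                           # fixed target value

lhs = np.mean((x - alpha) ** 2)
rhs = (np.mean(x) - alpha) ** 2 + np.mean((x - np.mean(x)) ** 2)
print(lhs, rhs)              # the two values agree up to floating-point error
assert np.isclose(lhs, rhs)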

Writing $\Gamma_{\mathbb{D}}$ for $\Gamma(\mathbb{D})$ for better readability, we can transform Equation 149 as follows:

$\displaystyle \mbox{learning-error}(\Gamma)$
$\displaystyle \qquad = E_{\mathbb{D}} [\mbox{MSE}(\Gamma_{\mathbb{D}})]$     (160)
$\displaystyle \qquad = E_{\mathbb{D}} E_d [\Gamma_{\mathbb{D}}(d) - P(c\vert d)]^2$     (161)
$\displaystyle \qquad = E_d [\mbox{bias}(\Gamma,d) + \mbox{variance}(\Gamma,d)]$     (162)

$\displaystyle \mbox{bias}(\Gamma,d) = [P(c\vert d) - E_{\mathbb{D}} \Gamma_{\mathbb{D}}(d)]^2$     (163)
$\displaystyle \mbox{variance}(\Gamma,d) = E_{\mathbb{D}} [\Gamma_{\mathbb{D}}(d) - E_{\mathbb{D}} \Gamma_{\mathbb{D}}(d)]^2$     (164)

where the equivalence between Equations 161 and 162 follows from the decomposition derived above (Equations 157-159). Note that $d$ and $\mathbb{D}$ are independent of each other. In general, for a random document $d$ and a random training set $\mathbb{D}$, $\mathbb{D}$ does not contain a labeled instance of $d$.
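Both terms can be estimated empirically by drawing many training sets and averaging. The Python sketch below does this for two intentionally simple learning methods under a toy one-dimensional model assumed here purely for illustration (neither learner is claimed to be one of the book's classifiers): a ``constant'' learner that always predicts the training set's positive-label frequency, and a 1-nearest-neighbor learner that predicts the label of the closest training document.

import numpy as np

rng = np.random.default_rng(3)

def true_p(d):
    """Assumed 'true' P(c|d): a logistic curve over one-dimensional documents."""
    return 1.0 / (1.0 + np.exp(-3.0 * d))

def gamma_const(train_d, train_c, test_d):
    """High-bias, low-variance learner: always predict the training label frequency."""
    return np.full_like(test_d, train_c.mean())

def gamma_1nn(train_d, train_c, test_d):
    """Low-bias, high-variance learner: predict the label of the nearest training document."""
    nearest = np.abs(test_d[:, None] - train_d[None, :]).argmin(axis=1)
    return train_c[nearest]

def bias_variance(learner, n_train=20, n_sets=2000, n_test=200):
    test_d = rng.normal(size=n_test)                 # a fixed sample from P(d)
    preds = np.empty((n_sets, n_test))
    for i in range(n_sets):                          # draw many training sets D
        train_d = rng.normal(size=n_train)
        train_c = (rng.random(n_train) < true_p(train_d)).astype(float)
        preds[i] = learner(train_d, train_c, test_d)
    mean_pred = preds.mean(axis=0)                   # E_D Gamma_D(d)
    bias = (true_p(test_d) - mean_pred) ** 2         # Equation 163, per document
    variance = preds.var(axis=0)                     # Equation 164, per document
    return bias.mean(), variance.mean()              # averaged over documents d

for name, learner in [("constant", gamma_const), ("1-NN", gamma_1nn)]:
    b, v = bias_variance(learner)
    print(f"{name:8s} bias={b:.3f}  variance={v:.3f}  learning-error~{b + v:.3f}")

Running it shows the expected pattern: the constant learner has large bias and small variance, while 1-NN has small bias and large variance.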

Bias is the squared difference between $P(c\vert d)$, the true conditional probability of $d$ being in $c$, and $\Gamma_{\mathbb{D}}(d)$, the prediction of the learned classifier, averaged over training sets. Bias is large if the learning method produces classifiers that are consistently wrong. Bias is small if (i) the classifiers are consistently right, or (ii) different training sets cause errors on different documents, or (iii) different training sets cause positive and negative errors on the same documents that average out to close to zero. If one of these three conditions holds, then $E_{\mathbb{D}}\Gamma_{\mathbb{D}}(d)$, the expectation over all training sets, is close to $P(c\vert d)$.

Linear methods like Rocchio and Naive Bayes have a high bias for nonlinear problems because they can only model one type of class boundary, a linear hyperplane. If the generative model $P(\langle d,c\rangle)$ has a complex nonlinear class boundary, the bias term in Equation 162 will be high because a large number of points will be consistently misclassified. For example, the circular enclave in Figure 14.11 does not fit a linear model and will be misclassified consistently by linear classifiers.

We can think of bias as resulting from our domain knowledge (or lack thereof) that we build into the classifier. If we know that the true boundary between the two classes is linear, then a learning method that produces linear classifiers is more likely to succeed than a nonlinear method. But if the true class boundary is not linear and we incorrectly bias the classifier to be linear, then classification accuracy will be low on average.

Nonlinear methods like kNN have low bias. We can see in Figure 14.6 that the decision boundaries of kNN are variable - depending on the distribution of documents in the training set, learned decision boundaries can vary greatly. As a result, each document has a chance of being classified correctly for some training sets. The average prediction $E_{\mathbb{D}}\Gamma_{\mathbb{D}}(d)$ is therefore closer to $P(c\vert d)$ and bias is smaller than for a linear learning method.

Variance is the variation of the prediction of learned classifiers: the average squared difference between $\Gamma_{\mathbb{D}}(d)$ and its average $E_{\mathbb{D}} \Gamma_{\mathbb{D}}(d)$. Variance is large if different training sets $\mathbb{D}$ give rise to very different classifiers $\Gamma_{\mathbb{D}}$. It is small if the training set has a minor effect on the classification decisions $\Gamma_{\mathbb{D}}$ makes, be they correct or incorrect. Variance measures how inconsistent the decisions are, not whether they are correct or incorrect.

Linear learning methods have low variance because most randomly drawn training sets produce similar decision hyperplanes. The decision lines produced by linear learning methods in Figures 14.10 and 14.11 will deviate slightly from the main class boundaries, depending on the training set, but the class assignment for the vast majority of documents (with the exception of those close to the main boundary) will not be affected. The circular enclave in Figure 14.11 will be consistently misclassified.

Nonlinear methods like kNN have high variance. It is apparent from Figure 14.6 that kNN can model very complex boundaries between two classes. It is therefore sensitive to noise documents of the sort depicted in Figure 14.10 . As a result the variance term in Equation 162 is large for kNN: Test documents are sometimes misclassified - if they happen to be close to a noise document in the training set - and sometimes correctly classified - if there are no noise documents in the training set near them. This results in high variation from training set to training set.

High-variance learning methods are prone to overfitting the training data. The goal in classification is to fit the training data only to the extent that we capture true properties of the underlying distribution $P(\langle d,c\rangle)$. In overfitting, the learning method also learns from noise. Overfitting increases MSE and frequently is a problem for high-variance learning methods.

We can also think of variance as the model complexity or, equivalently, memory capacity of the learning method - how detailed a characterization of the training set it can remember and then apply to new data. This capacity corresponds to the number of independent parameters available to fit the training set. Each kNN neighborhood $S_k$ makes an independent classification decision. The parameter in this case is the estimate $\hat{P}(c\vert S_k)$ from Figure 14.7. Thus, kNN's capacity is only limited by the size of the training set. It can memorize arbitrarily large training sets. In contrast, the number of parameters of Rocchio is fixed - $J$ parameters per dimension, one for each centroid - and independent of the size of the training set. The Rocchio classifier (in the form of the centroids defining it) cannot ``remember'' fine-grained details of the distribution of the documents in the training set.
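The contrast in capacity is easy to quantify. In the sketch below (an illustration with made-up sizes; the vocabulary size and number of documents are arbitrary assumptions), Rocchio stores only $J$ centroids of dimensionality $M$, a number that does not change as the training set grows, whereas kNN effectively stores the entire training set.

import numpy as np

rng = np.random.default_rng(5)

J, M, N = 2, 10_000, 1_000        # classes, vocabulary size (dimensions), training documents
X = rng.random((N, M))            # training document vectors (placeholder data)
y = rng.integers(0, J, size=N)    # class labels

# Rocchio "remembers" J * M numbers, independent of the training set size N.
centroids = np.stack([X[y == j].mean(axis=0) for j in range(J)])
print("Rocchio parameters:", centroids.size)   # J * M = 20,000

# kNN "remembers" the whole training set: N * M coordinates plus N labels.
print("kNN stored numbers:", X.size + y.size)  # N * M + N = 10,001,000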

According to Equation 149, our goal in selecting a learning method is to minimize learning error. The fundamental insight captured by Equation 162, which we can succinctly state as: learning-error = bias + variance, is that the learning error has two components, bias and variance, which in general cannot be minimized simultaneously. When comparing two learning methods $\Gamma_1$ and $\Gamma_2$, in most cases the comparison comes down to one method having higher bias and lower variance and the other lower bias and higher variance. The decision for one learning method vs. another is then not simply a matter of selecting the one that reliably produces good classifiers across training sets (small variance) or the one that can learn classification problems with very difficult decision boundaries (small bias). Instead, we have to weigh the respective merits of bias and variance in our application and choose accordingly. This tradeoff is called the bias-variance tradeoff .

Figure 14.10 provides an illustration of the tradeoff. The example is somewhat contrived, but it will be useful. Some Chinese text contains English words written in the Roman alphabet like CPU, ONLINE, and GPS. Consider the task of distinguishing Chinese-only web pages from mixed Chinese-English web pages. A search engine might offer Chinese users without knowledge of English (but who understand loanwords like CPU) the option of filtering out mixed pages. We use two features for this classification task: the number of Roman alphabet characters and the number of Chinese characters on the web page. As stated earlier, the distribution $P(\langle d,c\rangle)$ of the generative model generates most mixed (respectively, Chinese) documents above (respectively, below) the short-dashed line, but there are a few noise documents.
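As a side note on the two features used here, the following Python sketch (added for illustration; the sample strings are invented) extracts them from raw text by counting characters in the basic Latin and CJK Unified Ideographs Unicode ranges.

import re

def features(text):
    """Return (number of Roman alphabet characters, number of Chinese characters)."""
    roman = len(re.findall(r"[A-Za-z]", text))
    chinese = len(re.findall(r"[\u4e00-\u9fff]", text))   # CJK Unified Ideographs block
    return roman, chinese

print(features("这台电脑的CPU很快，支持GPS定位。"))   # mixed page -> (6, 11)
print(features("这台电脑运行得很快。"))               # Chinese-only page -> (0, 9)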

In Figure 14.10 , we see three classifiers:

It is perhaps surprising that so many of the best-known text classification algorithms are linear. Some of these methods, in particular linear SVMs, regularized logistic regression and regularized linear regression, are among the most effective known methods. The bias-variance tradeoff provides insight into their success. Typical classes in text classification are complex and seem unlikely to be modeled well linearly. However, this intuition is misleading for the high-dimensional spaces that we typically encounter in text applications. With increased dimensionality, the likelihood of linear separability increases rapidly (Exercise 14.8 ). Thus, linear models in high-dimensional spaces are quite powerful despite their linearity. Even more powerful nonlinear learning methods can model decision boundaries that are more complex than a hyperplane, but they are also more sensitive to noise in the training data. Nonlinear learning methods sometimes perform better if the training set is large, but by no means in all cases.
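The claim that linear separability becomes more likely as dimensionality grows can be checked empirically. The following Python sketch (an added illustration related to the point above; it assumes SciPy is available and uses randomly generated points, not text data) tests exact linear separability of randomly labeled point sets via a linear-programming feasibility problem: $y_i(\vec{w}\cdot\vec{x}_i + b) \geq 1$ for all $i$ has a solution if and only if the labeling is linearly separable.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

def separable(X, y):
    """Return True iff the labeled points (X, y), y in {-1, +1}, are linearly separable."""
    n, m = X.shape
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])   # variables: w (m entries) and b
    res = linprog(c=np.zeros(m + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (m + 1), method="highs")
    return res.status == 0   # feasible <=> a separating hyperplane exists

n_points, trials = 40, 50
for dim in (2, 10, 20, 40):
    hits = sum(separable(rng.normal(size=(n_points, dim)),
                         rng.choice([-1.0, 1.0], size=n_points))
               for _ in range(trials))
    print(f"dim={dim:3d}: separable in {hits}/{trials} random labelings")

With the number of points held fixed, the fraction of separable random labelings rises quickly with the dimensionality, which is one way to see why linear classifiers remain powerful in the very high-dimensional spaces typical of text.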

