Nonlinear classifiers are more powerful than linear classifiers. For some problems, there exists a nonlinear classifier with zero classification error, but no such linear classifier. Does that mean that we should always use nonlinear classifiers for optimal effectiveness in statistical text classification?
To answer this question, we introduce the bias-variance tradeoff in this section, one of the most important concepts in machine learning. The tradeoff helps explain why there is no universally optimal learning method. Selecting an appropriate learning method is therefore an unavoidable part of solving a text classification problem.
Throughout this section, we use linear and nonlinear classifiers as prototypical examples of ``less powerful'' and ``more powerful'' learning, respectively. This is a simplification for a number of reasons. First, many nonlinear models subsume linear models as a special case. For instance, a nonlinear learning method like kNN will in some cases produce a linear classifier. Second, there are nonlinear models that are less complex than linear models. For instance, a quadratic polynomial with two parameters is less powerful than a 10,000-dimensional linear classifier. Third, the complexity of learning is not really a property of the classifier because there are many aspects of learning (such as feature selection (Section 13.5), regularization, and constraints such as margin maximization in Chapter 15) that make a learning method either more powerful or less powerful without affecting the type of classifier that is the final result of learning - regardless of whether that classifier is linear or nonlinear. We refer the reader to the publications listed in Section 14.7 for a treatment of the bias-variance tradeoff that takes into account these complexities. In this section, linear and nonlinear classifiers will simply serve as proxies for weaker and stronger learning methods in text classification.
We first need to state our objective in text classification more precisely. In Section 13.1, we said that we want to minimize classification error on the test set. The implicit assumption was that training documents and test documents are generated according to the same underlying distribution. We will denote this distribution $P(\langle d,c\rangle)$, where $d$ is the document and $c$ its label or class. The Naive Bayes models of Chapter 13 were examples of generative models that decompose $P(\langle d,c\rangle)$ into the product of $P(c)$ and $P(d|c)$. Figures 14.10 and 14.11 depict generative models for $\langle d,c\rangle$ with $d \in \mathbb{R}^2$ and $c \in \{\text{square}, \text{solid circle}\}$.
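To make the notion of a generative model concrete, here is a minimal sketch (with invented class names and parameters, not the actual models behind the figures) that samples labeled documents $\langle d,c\rangle$ by first drawing $c$ from $P(c)$ and then $d$ from $P(d|c)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Class prior P(c) and, per class, the parameters of P(d|c);
# the names and numbers here are invented for illustration.
prior = {"square": 0.5, "solid circle": 0.5}
means = {"square": (1.0, 1.0), "solid circle": (-1.0, -1.0)}

def sample_pair():
    """Draw one labeled document <d, c>: first c ~ P(c), then d ~ P(d|c)."""
    c = rng.choice(list(prior), p=list(prior.values()))
    d = rng.normal(loc=means[c], scale=1.0, size=2)  # d is a point in R^2
    return d, c

print(sample_pair())
```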
In this section, instead of using the number of
correctly classified test documents (or, equivalently, the
error rate on test documents) as evaluation measure, we
adopt
an evaluation measure that
addresses the inherent uncertainty of labeling. In
many text classification problems, a given document
representation can
arise from documents belonging to
different classes. This is
because documents from different classes can be mapped to
the same document representation. For example, the
one-sentence documents China sues France and France sues China are mapped to the same document representation $d' = \{\text{China}, \text{France}, \text{sues}\}$ in a bag of words model. But only the latter document is relevant to the class $c'$ = legal actions brought by France (which might be defined, for example, as a standing query by an international trade lawyer).
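The loss of word order in the bag of words model is easy to reproduce in code; a minimal sketch:

```python
from collections import Counter

# Both documents collapse to the same multiset of terms.
bow1 = Counter("China sues France".split())
bow2 = Counter("France sues China".split())
print(bow1 == bow2)  # True: word order is discarded
```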
To simplify the calculations in this section, we do not count the number of errors on the test set when evaluating a classifier, but instead look at how well the classifier estimates the conditional probability $P(c|d)$ of a document being in a class. In the above example, we might have $P(c'|d') = 0.5$. Our goal in text classification then is to find a classifier $\gamma$ such that, averaged over documents $d$, $\gamma(d)$ is as close as possible to the true probability $P(c|d)$. We measure this using mean squared error:

\[ \mathrm{MSE}(\gamma) = E_d\,[\gamma(d) - P(c|d)]^2 \tag{148} \]

where $E_d$ is the expectation with respect to $P(d)$.
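As a quick illustration of Equation 148, here is a sketch with invented numbers, assuming a uniform $P(d)$ so that the expectation becomes a plain average:

```python
import numpy as np

# True conditional probabilities P(c|d) for five hypothetical documents;
# for the shared China/France representation above this value is 0.5.
p_true = np.array([0.5, 0.9, 0.1, 0.8, 0.3])
gamma  = np.array([0.4, 0.8, 0.2, 0.9, 0.3])  # classifier estimates gamma(d)

mse = np.mean((gamma - p_true) ** 2)  # Equation 148 with uniform P(d)
print(mse)
```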
We define a classifier $\gamma$ to be optimal for a distribution $P(\langle d,c\rangle)$ if it minimizes $\mathrm{MSE}(\gamma)$.
Minimizing MSE is a desideratum for classifiers.
We also need a criterion for learning methods. Recall that we defined a learning method $\Gamma$ as a function that takes a labeled training set $D$ as input and returns a classifier $\Gamma(D)$. For learning methods, we adopt as our goal to find a $\Gamma$ that, averaged over training sets, learns classifiers $\Gamma(D)$ with minimal MSE. We can formalize this as minimizing learning error:

\[ \text{learning-error}(\Gamma) = E_D\,[\mathrm{MSE}(\Gamma(D))] \tag{149} \]

where $E_D$ is the expectation over labeled training sets. We can use learning error as a criterion for selecting a learning method in statistical text classification. A learning method $\Gamma$ is optimal for a distribution $P(\langle d,c\rangle)$ if it minimizes the learning error.

Writing $\Gamma_D$ for $\Gamma(D)$ for better readability, we can transform Equation 149 as follows:

\begin{align*}
\text{learning-error}(\Gamma) &= E_D\,[\mathrm{MSE}(\Gamma_D)] \\
&= E_D\,E_d\,[\Gamma_D(d) - P(c|d)]^2 \\
&= E_d\,E_D\,[\Gamma_D(d) - P(c|d)]^2 \\
&= E_d\,\big[\,[P(c|d) - E_D\Gamma_D(d)]^2 + E_D\,[\Gamma_D(d) - E_D\Gamma_D(d)]^2\,\big] \\
&= E_d\,[\mathrm{bias}(\Gamma,d) + \mathrm{variance}(\Gamma,d)] \tag{162}
\end{align*}

\[ \mathrm{bias}(\Gamma,d) = [P(c|d) - E_D\Gamma_D(d)]^2 \qquad
   \mathrm{variance}(\Gamma,d) = E_D\,[\Gamma_D(d) - E_D\Gamma_D(d)]^2 \]

The fourth line uses the identity $E_D[X - a]^2 = [a - E_D X]^2 + E_D[X - E_D X]^2$ for the constant $a = P(c|d)$: expanding the square makes the cross term vanish.
Bias is the squared difference between $P(c|d)$, the true conditional probability of $d$ being in $c$, and $\Gamma_D(d)$, the prediction of the learned classifier, averaged over training sets. Bias is large if the learning method produces classifiers that are consistently wrong. Bias is small if (i) the classifiers are consistently right, or (ii) different training sets cause errors on different documents, or (iii) different training sets cause positive and negative errors on the same documents, which average out to close to 0. If one of these three conditions holds, then $E_D\Gamma_D(d)$, the expectation over all training sets, is close to $P(c|d)$.
Linear methods like Rocchio and Naive Bayes have a high bias
for nonlinear problems because they can only model one type
of class boundary, a linear hyperplane. If the
generative model $P(\langle d,c\rangle)$ has a
complex nonlinear class boundary, the bias term in
Equation 162 will be high because a large number of
points will be consistently misclassified. For example, the
circular enclave in Figure 14.11 does not fit a
linear model and will be misclassified consistently by
linear classifiers.
We can think of bias as resulting from our domain knowledge (or lack thereof) that we build into the classifier. If we know that the true boundary between the two classes is linear, then a learning method that produces linear classifiers is more likely to succeed than a nonlinear method. But if the true class boundary is not linear and we incorrectly bias the classifier to be linear, then classification accuracy will be low on average.
Nonlinear methods like kNN have low bias. We can see in
Figure 14.6 that the decision boundaries of kNN
are variable - depending on the distribution of documents
in the training set, learned decision boundaries can vary
greatly. As a result, each document has a chance of being
classified correctly for some training sets. The average
prediction $E_D\Gamma_D(d)$ is therefore closer to $P(c|d)$ and bias is smaller
than for a linear learning method.
Variance is the variation of the prediction of learned classifiers: the average squared difference between $\Gamma_D(d)$ and its average $E_D\Gamma_D(d)$. Variance is large if different training sets $D$ give rise to very different classifiers $\Gamma_D$. It is small if the training set has a minor effect on the classification decisions $\Gamma_D$ makes, be they correct or incorrect. Variance measures how inconsistent the decisions are, not whether they are correct or incorrect.
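With both terms of Equation 162 now defined, the decomposition can be checked numerically. In this sketch (made-up numbers, not from the book), we fix one document $d$ with true $P(c|d) = 0.7$, simulate the predictions $\Gamma_D(d)$ of a learning method over many training sets $D$, and confirm that the expected squared error equals bias plus variance:

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.7                                 # P(c|d) for one fixed document d

# Simulated predictions Gamma_D(d): systematically 0.1 too low (bias)
# and jittered from training set to training set (variance).
preds = np.clip(rng.normal(0.6, 0.15, size=100_000), 0.0, 1.0)

mse      = np.mean((preds - p_true) ** 2)        # E_D [Gamma_D(d) - P(c|d)]^2
bias     = (p_true - preds.mean()) ** 2          # [P(c|d) - E_D Gamma_D(d)]^2
variance = np.mean((preds - preds.mean()) ** 2)  # E_D [Gamma_D(d) - E_D Gamma_D(d)]^2
print(mse, bias + variance)                      # equal, as Equation 162 predicts
```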
Linear learning methods have low variance because most randomly drawn training sets produce similar decision hyperplanes. The decision lines produced by linear learning methods in Figures 14.10 and 14.11 will deviate slightly from the main class boundaries, depending on the training set, but the class assignment for the vast majority of documents (with the exception of those close to the main boundary) will not be affected. The circular enclave in Figure 14.11 will be consistently misclassified.
Nonlinear methods like kNN have high variance. It is apparent from Figure 14.6 that kNN can model very complex boundaries between two classes. It is therefore sensitive to noise documents of the sort depicted in Figure 14.10. As a result, the variance term in Equation 162 is large for kNN: Test documents are sometimes misclassified - if they happen to be close to a noise document in the training set - and sometimes correctly classified - if there are no noise documents in the training set near them. This results in high variation from training set to training set.
High-variance learning methods are prone to
overfitting the training data.
The goal in classification is to fit the training data to
the extent that we capture true properties of the underlying
distribution $P(\langle d,c\rangle)$. In overfitting, the
learning method also learns from noise. Overfitting increases
MSE and frequently is a problem for
high-variance learning methods.
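A small simulation makes overfitting visible (hypothetical synthetic data; this sketch assumes scikit-learn and NumPy are available). A 1NN classifier reproduces a noisy training set perfectly but generalizes worse than a linear model, and increasing $k$ trades variance back for bias:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n):
    """Two classes in R^2 split by a linear boundary, with 10% label noise."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(n) < 0.10          # noise documents
    y[flip] = 1 - y[flip]
    return X, y

X_tr, y_tr = sample(100)
X_te, y_te = sample(10_000)
for name, clf in [("1NN", KNeighborsClassifier(n_neighbors=1)),
                  ("15NN", KNeighborsClassifier(n_neighbors=15)),
                  ("linear", LogisticRegression())]:
    clf.fit(X_tr, y_tr)
    print(f"{name:6s} train={clf.score(X_tr, y_tr):.2f} "
          f"test={clf.score(X_te, y_te):.2f}")
```

The 1NN classifier reaches perfect training accuracy because it has memorized the noise documents, yet its test accuracy falls below that of the low-variance linear model.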
We can also think of variance as the
model complexity
or, equivalently, memory capacity
of the learning method - how detailed a characterization of the
training set it can remember and then apply to new
data. This capacity corresponds to
the number of
independent parameters available to fit the training set.
Each kNN neighborhood makes an
independent classification decision. The parameter in this
case is the estimate $\hat{P}(c|d)$ from Figure 14.7. Thus, kNN's
capacity is only limited by the size of the training set. It can memorize arbitrarily large
training sets. In contrast,
the number of parameters of Rocchio is fixed - $J$ parameters per dimension, one for each centroid - and
independent of the size of the training
set. The Rocchio classifier (in form of the centroids
defining it) cannot ``remember'' fine-grained details of the
distribution of the documents in the training set.
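The difference in memory capacity can be seen directly in code. In this minimal sketch (our own illustration, not the book's implementations), Rocchio compresses the training set into one centroid per class, whereas 1NN stores every training document:

```python
import numpy as np

class Rocchio:
    def fit(self, X, y):
        # One centroid per class: the parameter count is
        # (number of classes) x (number of dimensions),
        # independent of the number of training documents.
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

class OneNN:
    def fit(self, X, y):
        self.X_, self.y_ = X, y   # memorizes the entire training set
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(-1)
        return self.y_[d.argmin(axis=1)]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(Rocchio().fit(X, y).predict(np.array([[0.2, 0.1]])))  # -> [0]
print(OneNN().fit(X, y).predict(np.array([[0.2, 0.1]])))    # -> [0]
```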
According to Equation 149, our goal in selecting a
learning method is to minimize learning error. The
fundamental insight captured by Equation 162, which
we can succinctly state as: learning-error = bias +
variance, is that the learning error has two components,
bias and variance, which in general cannot be minimized
simultaneously. When comparing two learning methods $\Gamma_1$ and $\Gamma_2$, in most cases the comparison
comes down to one method having higher bias and lower
variance and the other lower bias and higher variance. The
decision for one learning method vs. another is then not
simply a matter of selecting the one that reliably produces
good classifiers across training sets (small variance) or
the one that can learn classification
problems with very difficult decision boundaries (small bias).
Instead, we have to weigh the respective
merits of bias and variance in our application and choose
accordingly. This tradeoff is called the
bias-variance tradeoff .
Figure 14.10 provides an illustration, which is somewhat contrived, but will be useful as an example of the tradeoff. Some Chinese text contains English words written
in the Roman alphabet like CPU, ONLINE, and
GPS. Consider the task of distinguishing Chinese-only
web pages from mixed Chinese-English web pages. A search
engine might offer Chinese users without knowledge of
English (but who understand loanwords like CPU) the
option of filtering out mixed pages. We use two features for
this classification task: number of Roman alphabet
characters and number of Chinese characters on the web
page. As stated earlier, the distribution $P(\langle d,c\rangle)$ of the generative
model generates most mixed (respectively, Chinese) documents
above (respectively, below) the short-dashed line,
but there are a few noise documents.
In Figure 14.10, we see three classifiers that make different tradeoffs between bias and variance.
It is perhaps surprising that so many of the best-known text classification algorithms are linear. Some of these methods, in particular linear SVMs, regularized logistic regression, and regularized linear regression, are among the most effective known methods. The bias-variance tradeoff provides insight into their success. Typical classes in text classification are complex and seem unlikely to be modeled well linearly. However, this intuition is misleading for the high-dimensional spaces that we typically encounter in text applications. With increased dimensionality, the likelihood of linear separability increases rapidly (Exercise 14.8). Thus, linear models in high-dimensional spaces are quite powerful despite their linearity. Even more powerful nonlinear learning methods can model decision boundaries that are more complex than a hyperplane, but they are also more sensitive to noise in the training data. Nonlinear learning methods sometimes perform better if the training set is large, but by no means in all cases.
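The claim that linear separability becomes more likely as dimensionality grows can be checked empirically (cf. Exercise 14.8). The following Monte Carlo sketch (our own construction, assuming SciPy and NumPy are available) tests exact separability of randomly labeled points via a linear-programming feasibility problem:

```python
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """True iff some hyperplane w.x + b separates labels y in {-1, +1}."""
    n, d = X.shape
    # Variables: w (d entries) and b. Constraints: y_i (w.x_i + b) >= 1,
    # written as -y_i [x_i, 1] . [w; b] <= -1. Feasibility only (c = 0).
    A = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
    res = linprog(c=np.zeros(d + 1), A_ub=A, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0  # status 0: feasible solution found

rng = np.random.default_rng(0)
n_points, trials = 20, 200
for d in (2, 5, 10, 20):
    hits = sum(separable(rng.normal(size=(n_points, d)),
                         rng.choice([-1, 1], size=n_points))
               for _ in range(trials))
    print(f"dim={d:3d}: separable in {hits / trials:.0%} of trials")
```

For 20 points with random labels, the fraction of separable configurations climbs steeply with the dimensionality, reaching 100% once the dimensionality exceeds the number of points minus one.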