The Stanford Classifier

The Stanford Classifier is a general-purpose classifier: something that takes a set of input data and assigns each datum to one of a set of categories. It does this by generating features from each datum, which are associated with positive or negative numeric "votes" (weights) for each class. In principle, the weights could be set by hand, but the expected use is for the weights to be learned automatically from hand-classified training data items. (This is referred to as "supervised learning".) The classifier can work with (scaled) real-valued and categorical inputs, and supports several machine learning algorithms. It also supports several forms of regularization, which is generally needed when building models with very large numbers of predictive features.
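
To make the "weighted votes" idea concrete, here is a toy Java sketch (not the Stanford API) of how a linear classifier scores a datum: each active feature adds its per-class weight to that class's score, and the highest-scoring class wins. All feature names and weights below are made up for illustration.

import java.util.*;

// Toy illustration of linear classification by weighted feature votes.
public class TinyLinearClassifier {
  public static void main(String[] args) {
    // Hand-set weights per feature; each array is [cheese, disease].
    Map<String, double[]> weights = new HashMap<>();
    weights.put("endsWith(itis)", new double[]{-1.5, 2.0});  // votes for disease
    weights.put("endsWith(dam)",  new double[]{1.2, -0.8});  // votes for cheese

    // Features generated from one (hypothetical) datum.
    List<String> features = Arrays.asList("endsWith(itis)");
    double[] score = new double[2];
    for (String f : features) {
      double[] w = weights.getOrDefault(f, new double[2]);
      score[0] += w[0];  // sum the cheese votes
      score[1] += w[1];  // sum the disease votes
    }
    String answer = score[0] >= score[1] ? "cheese" : "disease";
    System.out.printf("cheese=%.2f disease=%.2f -> %s%n", score[0], score[1], answer);
  }
}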

You can use the classifier on any sort of data, including standard statistics and machine learning data sets. But for small data sets and numeric predictors, you'd generally be better off using another tool such as R or Weka. Where the Stanford Classifier shines is in working with mainly textual data, where it has powerful and flexible means of generating features from character strings. However, if you've also got a few numeric variables, you can throw them in at the same time.

Small Examples

Cheese-Disease: A small textual example

While you can specify most options on the command line, normally the easiest way to train and test models with the Stanford Classifier is through the use of properties files that record all the options used. You can find a couple of example data sets and properties files in the examples folder of the Stanford Classifier distribution.
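
For instance, a properties file for the cheese/disease example described next might contain flags along these lines. This is only a sketch: the flag names are ColumnDataClassifier options, but the particular values, and the test file name, are assumptions rather than the actual contents of examples/cheese2007.prop.

# Sketch of a cheese/disease properties file (illustrative values).
# Column 1 (the name) gets character n-gram and binned-length features;
# useClassFeature adds a class-frequency prior.
trainFile=./examples/cheeseDisease.train
testFile=./examples/cheeseDisease.test
useClassFeature=true
1.useNGrams=true
1.usePrefixSuffixNGrams=true
1.maxNGramLeng=4
1.minNGramLeng=1
1.binnedLengths=10,20,30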

The Cheese-Disease dataset is a play on the MTV game show Idiot Savants from the late 1990s, which had a trivia category of Cheese or Disease? (I guess you had to be there...). The goal is to distinguish cheese names from disease names. Look at the file examples/cheeseDisease.train to see what the data looks like. The first column is the category (1=cheese, 2=disease); the number coding is arbitrary, and the two classes could equally have been called "cheese" and "disease". The second column is the name. The columns are separated by a tab character. Here there is just one class column and one predictive column. This is the minimum for training a classifier, but you can have any number of predictive columns and specify which column has what role.
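
A couple of rows in this format look like the following (these particular entries are made up for illustration, but the real file has tab-separated rows of exactly this shape):

2	Meningitis
1	Gouda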

In the top level folder of the Stanford Classifier, the following command will build a model for this data set and test it on the test data set in the simplest possible way:

java -jar stanford-classifier.jar -prop examples/cheese2007.prop

This prints a lot of information. The first part shows a little bit about the data set. The second part shows the process of optimization (choosing feature weights in training a classifier on the training data). The next part then shows the results of testing the model on a separate test set of data, and the final 5 lines give the test results:

196 examples in test set
Cls 2: TP=123 FN=5 FP=8 TN=60; Acc 0.934 P 0.939 R 0.961 F1 0.950
Cls 1: TP=60 FN=8 FP=5 TN=123; Acc 0.934 P 0.923 R 0.882 F1 0.902
Micro-averaged accuracy/F1: 0.93367
Macro-averaged F1: 0.92603

For each class, the results show the number of true positives, false negatives, false positives, and true negatives, the class accuracy, precision, recall and F1 measure. It then gives a summary F1 over the whole data set, either micro-averaged (each test item counts equally) or macro-averaged (each class counts equally). For skewed data sets, macro-averaged F1 is a good measure of how well a classifier does on uncommon classes. Distinguishing cheeses and diseases isn't too hard for the classifier!
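
To see where these summary numbers come from: micro-averaged accuracy here is (123 + 60)/196 ≈ 0.93367 (correct items over all test items), while macro-averaged F1 averages the per-class F1 scores, (0.950 + 0.902)/2 ≈ 0.926.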

What features does the classifier use, and what is useful in making a decision? Mostly the system is using character n-grams - short subsequences of characters - though it also has a couple of other features, including a class frequency prior and a feature for the bucketed length of the name. In the example above, the -jar option runs the default class in the jar file, which is edu.stanford.nlp.classify.ColumnDataClassifier; below, we run that class explicitly. Also, it is often useful to mix a properties file with command-line flags: if running a series of experiments, you might keep the baseline classifier configuration in a properties file and put the per-experiment differences on the command line. Things specified on the command line override specifications in the properties file. We'll add a command-line flag to print features with high weights:

java -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier -prop examples/cheese2007.prop -printClassifier HighWeight

This now prints out the features with high weights. (This form of output is especially easy to interpret for categorical features.) You can see that most of the clearest, best features are particular character n-grams that indicate disease words, such as ia$, ma$, and sis (where $ indicates the end of the string). For example, the highest-weight feature is:

(1-#E-ia,2)        1.0975

which says that the feature is a string-final (#E) bigram ia from the string in column 1. For that feature, the weight for class 2 (disease) is 1.0975 - a strong positive vote for this feature indicating a disease rather than a cheese.
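
As an illustration of how such features can be produced, here is a rough Java sketch of prefix/suffix character n-gram extraction with begin/end markers. The #B/#E naming follows the feature shown above, but the real ColumnDataClassifier feature generation may differ in detail.

import java.util.*;

// Rough sketch of prefix/suffix character n-gram feature generation.
public class PrefixSuffixNGrams {
  static List<String> features(String s, int minLen, int maxLen, int column) {
    List<String> feats = new ArrayList<>();
    for (int n = minLen; n <= maxLen && n <= s.length(); n++) {
      feats.add(column + "-#B-" + s.substring(0, n));            // string-initial n-gram
      feats.add(column + "-#E-" + s.substring(s.length() - n));  // string-final n-gram
    }
    return feats;
  }
  public static void main(String[] args) {
    // Includes 1-#E-ia, the string-final bigram feature seen in the output above.
    System.out.println(features("anemia", 1, 4, 1));
  }
}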

The commonest features to use in text classification are word features, and you might think of adding them here (even though most of these names are three or fewer words, and many are just one). You can fairly easily do this by adding a couple more feature flags:

java -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier -prop examples/cheese2007.prop \
        -printClassifier HighWeight -1.useSplitWords -1.splitWordsRegexp "\s"

However, at least in this case, accuracy isn't improved beyond just using character n-grams. We get the same results as before:

196 examples in test set
Cls 2: TP=123 FN=5 FP=8 TN=60; Acc 0.934 P 0.939 R 0.961 F1 0.950
Cls 1: TP=60 FN=8 FP=5 TN=123; Acc 0.934 P 0.923 R 0.882 F1 0.902
Micro-averaged accuracy/F1: 0.93367
Macro-averaged F1: 0.92603
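
Beyond the command line, the classifier can also be embedded in your own Java code. The following is a sketch along the lines of the example in the ColumnDataClassifier javadoc; the exact method signatures have varied a little across releases, and the test file name examples/cheeseDisease.test is an assumption.

import edu.stanford.nlp.classify.Classifier;
import edu.stanford.nlp.classify.ColumnDataClassifier;
import edu.stanford.nlp.ling.Datum;
import edu.stanford.nlp.objectbank.ObjectBank;

public class CheeseDiseaseDemo {
  public static void main(String[] args) {
    // Read the column/feature options from the same properties file used above.
    ColumnDataClassifier cdc = new ColumnDataClassifier("examples/cheese2007.prop");
    // Train a classifier from the tab-separated training file.
    Classifier<String,String> cl =
        cdc.makeClassifier(cdc.readTrainingExamples("examples/cheeseDisease.train"));
    // Classify each line of a test file and print the predicted class.
    for (String line : ObjectBank.getLineIterator("examples/cheeseDisease.test", "utf-8")) {
      Datum<String,String> d = cdc.makeDatumFromLine(line);
      System.out.printf("%s  ==>  %s%n", line, cl.classOf(d));
    }
  }
}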

Iris data set

Fisher's Iris data set is one of the most famous data sets in statistics and machine learning [http://en.wikipedia.org/wiki/Iris_flower_data_set]. Three species of Iris are described by four numeric variables. We show it both as a simple example of numeric classification and as an example of using multiple columns of inputs for each data item. In the download, there is a version of the 150-item data set divided into 130 training examples and 20 test examples, and a properties file suitable for training a classifier from it.

Note that the provided properties file is set up to run from the top-level folder of the Stanford Classifier distribution. We will assume that STANFORD_CLASSIFIER_HOME points to it. You can do something like:

 STANFORD_CLASSIFIER_HOME=/Users/manning/Software/stanford-classifier-2008-04-18

Here is the provided properties file:

#
# Features
#
# Data format by column is:
# species     sepalLength	sepalWidth	petalLength	petalWidth
#
useClassFeature=true
1.realValued=true
2.realValued=true
3.realValued=true
4.realValued=true

printClassifier=AllWeights

#
# Training input
#
trainFile=./examples/iris.train
testFile=./examples/iris.test

The four predictor variables are all specified as real-valued. There are other flags that let you use numeric variables with a few simple transforms, such as logTransform or logitTransform.
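
For example, to use the log of the value in column 2 as the feature value rather than the raw number, you would replace 2.realValued=true with something like the following (a sketch based on the flag name mentioned above, using the document's column-prefix convention):

2.logTransform=true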

If you run this model:

cd $STANFORD_CLASSIFIER_HOME
java -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier -prop examples/iris2007.prop

then you'll find that the classifier gets the test set completely right!

Built this classifier: Linear classifier with the following weights
        Iris-setosa     Iris-versicolor Iris-virginica 
3-Value -2.27            0.03            2.26          
CLASS    0.34            0.65           -1.01          
4-Value -1.07           -0.91            1.99          
2-Value  1.60           -0.13           -1.43          
1-Value  0.69            0.42           -1.23          
Total:  -0.72            0.05            0.57          
Prob:    0.15            0.32            0.54          

Output format: dataColumn1 goldAnswer classifierAnswer P(classifierAnswer)
5	Iris-setosa	Iris-setosa	0.969786023975717
4.6	Iris-setosa	Iris-setosa	0.9922589089843827
5.1	Iris-setosa	Iris-setosa	0.9622434861270637
4.9	Iris-setosa	Iris-setosa	0.9515812390773056
5.4	Iris-setosa	Iris-setosa	0.9811482146433487
4.4	Iris-setosa	Iris-setosa	0.9682526103461551
5.3	Iris-setosa	Iris-setosa	0.9832118698970074
6.1	Iris-versicolor	Iris-versicolor	0.7091015073390197
6	Iris-versicolor	Iris-versicolor	0.7601066690047942
5.5	Iris-versicolor	Iris-versicolor	0.723249991884404
6.5	Iris-versicolor	Iris-versicolor	0.7913325733592043
6.8	Iris-versicolor	Iris-versicolor	0.8416723165037595
6.2	Iris-versicolor	Iris-versicolor	0.8854234492113978
6.7	Iris-virginica	Iris-virginica	0.8440929745353494
6.4	Iris-virginica	Iris-virginica	0.7816139993113614
5.7	Iris-virginica	Iris-virginica	0.9352983975779943
6.7	Iris-virginica	Iris-virginica	0.8626420107509875
6.8	Iris-virginica	Iris-virginica	0.9442955376893006
7.7	Iris-virginica	Iris-virginica	0.8866439920995643
7.3	Iris-virginica	Iris-virginica	0.8633450387282207

20 examples in test set
Cls Iris-setosa: TP=7 FN=0 FP=0 TN=13; Acc 1.000 P 1.000 R 1.000 F1 1.000
Cls Iris-versicolor: TP=6 FN=0 FP=0 TN=14; Acc 1.000 P 1.000 R 1.000 F1 1.000
Cls Iris-virginica: TP=7 FN=0 FP=0 TN=13; Acc 1.000 P 1.000 R 1.000 F1 1.000
Micro-averaged accuracy/F1: 1.00000
Macro-averaged F1: 1.00000

This is a fairly easy, well-separated classification problem. Indeed, you might think the model is overparameterized, and it is. The number of examples in each class is roughly balanced, so there is presumably little value in the useClassFeature property, which adds a feature that models the overall distribution of classes. Nor do you need all the numeric features; see the plots on the Wikipedia Iris flower data set page. You can instead delete the features for columns 2 and 4, using just the sepal and petal lengths rather than also the widths, and still get 100% accuracy on our test set. (However, it happens that if you delete both the class feature and the two width features, the resulting model gets only 19/20 of the test set examples right....)
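
For instance, a pared-down version of the properties file above that keeps only the class feature and the two length columns (sepal length in column 1, petal length in column 3) would be:

useClassFeature=true
1.realValued=true
3.realValued=true
trainFile=./examples/iris.train
testFile=./examples/iris.test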

Larger Examples

20 Newsgroups text classification

Sentiment Analysis