The Stanford Classifier

The Stanford Classifier is a general-purpose classifier: something that takes input data and assigns it to one of a number of categories. It can work with (scaled) real-valued and categorical inputs and supports several machine learning algorithms. It also supports several forms of regularization, which is generally needed when building models with very large numbers of predictive features.

You can use the classifier on any sort of data, including standard statistics and machine learning data sets. But for small data sets and numeric predictors, you'd generally be better off using another tool such as Weka or R. Where the Stanford Classifier shines is in working with mainly textual data, where it has powerful and flexible means of generating features from character strings. But if you've also got a few numeric variables, you can throw them in at the same time.

Examples

20 Newsgroups

Now let's walk through a more realistic example of using the Stanford Classifier on the well-known 20 Newsgroups dataset. There are several versions of 20 Newsgroups. We'll use Jason Rennie's "bydate" version from [1]. The precise commands shown below should work on Linux or Mac OS X systems. The Java parts should also be fine under Windows, but you'll need to do the downloading and reformatting a little differently.

First we download the corpus:

curl -O http://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz

Then we unpack it:

tar -xzf 20news-bydate.tar.gz

The 20 Newsgroups data comes in a format of one file per document, with the correct class shown by the directory name. The Stanford Classifier works with tab-delimited text files. We convert it into this latter format with a simple shell script:

curl -O http://nlp.stanford.edu/software/classifier/convert-to-stanford-classifier.csh
chmod 755 convert-to-stanford-classifier.csh
./convert-to-stanford-classifier.csh

We do this by converting line endings to spaces. This loses line break information which could easily have some value in classification. (We could have done something trickier like converting line endings to a vertical tab or form feed, but this will do for this example.) As part of the conversion, we also convert the original 8-bit newsgroup posts to utf-8. It's 2010 now.
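
Each line of the converted file then holds one whole document. Judging from the column numbers used below (the document text is column 2) and the grep patterns used for splitting, a line presumably looks something like the following, with the class label, article number, and flattened text separated by tabs (<TAB> here stands for a real tab character; the particular article number and text are invented purely for illustration):

 alt.atheism<TAB>51121<TAB>From: someone@example.com Subject: Re: ... the entire posting, with its line breaks replaced by spaces ...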

Check that everything worked and you have the right number of documents:

wc -l 20news-bydate*-stanford-classifier.txt
   7532 20news-bydate-test-stanford-classifier.txt
  11314 20news-bydate-train-stanford-classifier.txt
  18846 total

The correct number should be as shown.

Next we split the training data into initial training data and a development test set. (Methodological note: people often fail to do this. But if you are going to build a sequence of models trying to find the best classifier, and moreover if you are likely to be looking at the test results to see where your model fails and how you might fix it, then it is vital that you use a development test set that is distinct from your final test set. Otherwise you will overfit on your final test set and report unrealistically rosy results.)

grep -P '^\S+\s[0-9]*[1-8]\s' 20news-bydate-train-stanford-classifier.txt > 20news-bydate-devtrain-stanford-classifier.txt
grep -P '^\S+\s[0-9]*[90]\s' 20news-bydate-train-stanford-classifier.txt > 20news-bydate-devtest-stanford-classifier.txt

This gives a roughly random partition of 80% of the data in devtrain and 20% in devtest based on the final digit of the newsgroup item number.

We'll assume that $STANFORD_CLASSIFIER_JAR points at the Stanford Classifier jar. So, depending on your shell, do something like:

 STANFORD_CLASSIFIER_JAR=/Users/manning/Software/stanford-classifier-2008-04-18/stanford-classifier.jar

This next command builds pretty much the simplest classifier that you could build. It divides the input documents on whitespace and then trains a classifier on the resulting tokens. The command is normally entered as all one line without the trailing backslashes, but we've split it so it formats better on this page.

java -mx1800m -cp $STANFORD_CLASSIFIER_JAR edu.stanford.nlp.classify.ColumnDataClassifier \
-trainFile 20news-bydate-devtrain-stanford-classifier.txt -testFile 20news-bydate-devtest-stanford-classifier.txt \
-2.useSplitWords -2.splitWordsRegexp "\\s+"

Note that once the dataset is reasonably large, you have to give a fair amount of memory to the classifier. (There are some methods for reducing memory usage that we'll discuss later on.) [Also, if you have any problems with this command, it's probably an issue with symbol escaping in your shell, and so you should probably just skip it and go on to the next example that uses a properties file.]

This command generates a lot of output. The last part shows the accuracy of the classifier:

2241 examples in test set
Cls alt.atheism: TP=80 FN=20 FP=5 TN=2136; Acc 0.989 P 0.941 R 0.800 F1 0.865
Cls comp.graphics: TP=82 FN=33 FP=30 TN=2096; Acc 0.972 P 0.732 R 0.713 F1 0.722
Cls comp.os.ms-windows.misc: TP=91 FN=24 FP=20 TN=2106; Acc 0.980 P 0.820 R 0.791 F1 0.805
Cls comp.sys.ibm.pc.hardware: TP=87 FN=31 FP=32 TN=2091; Acc 0.972 P 0.731 R 0.737 F1 0.734
Cls comp.sys.mac.hardware: TP=90 FN=24 FP=23 TN=2104; Acc 0.979 P 0.796 R 0.789 F1 0.793
Cls comp.windows.x: TP=112 FN=8 FP=20 TN=2101; Acc 0.988 P 0.848 R 0.933 F1 0.889
Cls misc.forsale: TP=100 FN=17 FP=47 TN=2077; Acc 0.971 P 0.680 R 0.855 F1 0.758
Cls rec.autos: TP=95 FN=19 FP=21 TN=2106; Acc 0.982 P 0.819 R 0.833 F1 0.826
Cls rec.motorcycles: TP=98 FN=14 FP=14 TN=2115; Acc 0.988 P 0.875 R 0.875 F1 0.875
Cls rec.sport.baseball: TP=112 FN=7 FP=13 TN=2109; Acc 0.991 P 0.896 R 0.941 F1 0.918
Cls rec.sport.hockey: TP=113 FN=4 FP=4 TN=2120; Acc 0.996 P 0.966 R 0.966 F1 0.966
Cls sci.crypt: TP=108 FN=8 FP=5 TN=2120; Acc 0.994 P 0.956 R 0.931 F1 0.943
Cls sci.electronics: TP=93 FN=24 FP=24 TN=2100; Acc 0.979 P 0.795 R 0.795 F1 0.795
Cls sci.med: TP=104 FN=20 FP=8 TN=2109; Acc 0.988 P 0.929 R 0.839 F1 0.881
Cls sci.space: TP=113 FN=8 FP=9 TN=2111; Acc 0.992 P 0.926 R 0.934 F1 0.930
Cls soc.religion.christian: TP=107 FN=12 FP=22 TN=2100; Acc 0.985 P 0.829 R 0.899 F1 0.863
Cls talk.politics.guns: TP=96 FN=10 FP=5 TN=2130; Acc 0.993 P 0.950 R 0.906 F1 0.928
Cls talk.politics.mideast: TP=104 FN=7 FP=3 TN=2127; Acc 0.996 P 0.972 R 0.937 F1 0.954
Cls talk.politics.misc: TP=87 FN=12 FP=5 TN=2137; Acc 0.992 P 0.946 R 0.879 F1 0.911
Cls talk.religion.misc: TP=49 FN=18 FP=10 TN=2164; Acc 0.988 P 0.831 R 0.731 F1 0.778
Micro-averaged accuracy/F1: 0.85721
Macro-averaged F1: 0.85670

We see the statistics for each class and then averaged over all the data. For each class, we see the four cells of counts in a contingency table, and then the accuracy, precision, recall, and F-measure calculated from them. This model already seems to perform quite well. But we can do a little better.
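
To see where the scores come from, take the alt.atheism row: precision = TP/(TP+FP) = 80/85 = 0.941, recall = TP/(TP+FN) = 80/100 = 0.800, F1 = 2PR/(P+R) = 0.865, and accuracy = (TP+TN)/2241 = 2216/2241 = 0.989, matching the printed line. The micro-averaged score pools these counts over all 20 classes before computing the score (for one-label-per-document classification it equals plain accuracy), while the macro-averaged F1 is the unweighted mean of the 20 per-class F1 values.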

As soon as you want to start specifying a lot of options, you'll probably want a properties file to specify everything. Indeed, some options you can only successfully set with a properties file. One of the first things to address seems to be better tokenization. Tokenizing on whitespace is fairly naive. One can usually write a rough-and-ready but usable tokenizer inside ColumnDataClassifier by using the splitWordsTokenizerRegexp property. Another alternative would be to use a tool like the Stanford tokenizer to pre-tokenize the data. In general, this will probably work a bit better for English-language text, but is beyond what we consider here. Here's a simple properties file which you can download (http://nlp.stanford.edu/software/classifier/20news1.prop):

trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
2.useSplitWords=true
2.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
2.splitWordsIgnoreRegexp=\\s+

This tokenizer recognizes tokens consisting of a letter followed by letters and ASCII digits; certain number, money, and percent expressions; runs of whitespace; high (non-ASCII) characters; or, failing all of those, any single character. The whitespace tokens are then ignored.
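
To see concretely what this pattern does, here is a small, hypothetical Java sketch (ours, not part of the classifier distribution) that applies the same regular expression with java.util.regex and drops the whitespace matches, mirroring splitWordsIgnoreRegexp. The only change is that the high-character range is written \x{80}-\x{FFFD} instead of \x80-\uFFFD, simply to avoid a literal \u escape in Java source; it matches the same characters.

 import java.util.regex.Matcher;
 import java.util.regex.Pattern;

 public class TokenizerDemo {
   public static void main(String[] args) {
     String regex = "[\\p{L}][\\p{L}0-9]*"          // a letter, then letters/ASCII digits
         + "|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?"    // number, money, and percent expressions
         + "|\\s+"                                  // whitespace (ignored below)
         + "|[\\x{80}-\\x{FFFD}]"                   // high characters
         + "|.";                                    // any other single character
     Pattern token = Pattern.compile(regex);
     String line = "For sale: X-windows book, $20.00 or best offer!";
     Matcher m = token.matcher(line);
     while (m.find()) {
       String tok = m.group();
       if (tok.trim().isEmpty()) continue;          // mimic splitWordsIgnoreRegexp=\s+
       System.out.print("[" + tok + "] ");
     }
     System.out.println();
   }
 }

This prints [For] [sale] [:] [X] [-] [windows] [book] [,] [$20.00] [or] [best] [offer] [!] so, for example, "$20.00" stays a single token and punctuation becomes its own token.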

Just this bit of work on tokenization gains us about 2% in accuracy!

java -mx1800m -cp $STANFORD_CLASSIFIER_JAR edu.stanford.nlp.classify.ColumnDataClassifier -prop 20news1.prop
...
Micro-averaged accuracy/F1: 0.87773
Macro-averaged F1: 0.87619

You can look at the output of the tokenizer by examining the features the classifier generates. We can do this with this command:

java -mx1800m -cp $STANFORD_CLASSIFIER_JAR edu.stanford.nlp.classify.ColumnDataClassifier -prop 20news1.prop -printFeatures prop1

Look at the resulting (very large) file prop1.train. You might be able to get a bit better performance by fine-tuning the tokenization, though often, for text categorization, a fairly simple tokenization is sufficient, provided that (1) it's enough to recognize most semantically contentful word units, and (2) it doesn't produce a huge number of rarely observed features. (E.g., for this data set, there are a few uuencoded files in newsgroup postings. Under whitespace tokenization, each line of such a file became a token that almost certainly occurred only once. Now they'll get split up on characters that aren't letters and digits. That not only reduces the token space, but some of the letter strings that do result will probably recur and become slightly useful features. It seems like it might be better to remove the uuencoded content altogether, while leaving just enough to know that there was a uuencoded file in the news posting. Some much more developed processors, such as the bow tokenizer, handle recognizing and stripping uuencoded files; the Stanford Classifier doesn't. But uuencoded text isn't so common in 2010, so we don't worry about this; it probably doesn't make much difference.)

There are many other kinds of features that you could consider putting into the classifier which might improve performance. The length of a newsgroup posting might be informative, but it probably isn't linearly related to its class, so we bin lengths into 4 categories, which become categorical features. You have to choose those cut-offs manually, but ColumnDataClassifier can print simple statistics of how many documents of each class fall in each bin, which can help you see if you've chosen very bad cut-offs. Here's the properties file: [2].

trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
2.useSplitWords=true
2.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
2.splitWordsIgnoreRegexp=\\s+
2.binnedLengths=500,1500,4500,13500
2.binnedLengthsStatistics=true

In this case, that doesn't help:

Micro-averaged accuracy/F1: 0.87773
Macro-averaged F1: 0.87619

Other feature types that often work well with text documents are token prefixes and suffixes, and the "shape" of a token (equivalence classes over whether it contains uppercase or lowercase letters, digits, or certain kinds of symbols). We've also changed the output to show the highest weight features in the model. That's often informative to look at. This gives our next properties file: [3].

trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
2.useSplitWords=true
2.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
2.splitWordsIgnoreRegexp=\\s+
2.useSplitPrefixSuffixNGrams=true
2.maxNGramLeng=4
2.minNGramLeng=1
2.splitWordShape=chris4
printClassifier=HighWeight
printClassifierParam=400

This pushes performance up another percent, roughly:

Micro-averaged accuracy/F1: 0.88755
Macro-averaged F1: 0.88588

As well as fiddling with features, we can also fiddle with the machine learning and optimization. By default you get a maximum entropy (roughly, multiclass logistic regression) model with L2 regularization (a.k.a. a Gaussian prior) optimized by the L-BFGS quasi-Newton method. You might be able to get a bit of improvement by adjusting the amount of regularization, which you can do by altering the sigma parameter (the standard deviation of the Gaussian prior, so larger values mean less regularization):

sigma=3

You can also change the type of regularization altogether. Lately, L1 regularization has been popular for producing well-performing compact models. You can select it by specifying the L1 regularization parameter:

l1reg=0.1

We tried a couple of settings of both of these, but nothing really seemed to beat out the defaults.

And so we return to fiddling with features. Academic papers don't spend much time discussing fiddling with features, but in practice it's usually where most of the gains come from (once you've got a basically competent machine learning method). In the last properties file, we had it print out the highest weight features. That's often useful to look at. Here are the top few:

(1-S#B-X,comp.windows.x)                   1.3971
(1-S#B-Win,comp.os.ms-windows.misc)        0.9708
(1-SW-car,rec.autos)                       0.9538
(1-S#E-car,rec.autos)                      0.9178
(1-S#B-Mac,comp.sys.mac.hardware)          0.9054
(1-S#E-dows,comp.os.ms-windows.misc)       0.9020
(1-S#B-x,comp.windows.x)                   0.8157
(1-S#B-car,rec.autos)                      0.7689
(1-S#E-ows,comp.os.ms-windows.misc)        0.7382
(1-S#E-ale,misc.forsale)                   0.7361

They basically make sense. Note that all but one of them is a beginning or end split words n-gram feature (S#B or S#E). This partly makes sense: these features generalize over multiple actual words, so starting with "X" will match "X", "Xwindows", or "X-windows". It's part of what makes these features useful. But it also suggests that we might really be missing out by not collapsing case distinctions: S#E-ale is a good feature for misc.forsale precisely because it matches both "Sale" and "sale". So let's try tackling that. There are several possible variants. One thing to try would be to just lowercase everything. Another would be to put in lowercased splitWords features instead of, or in addition to, the regular-case ones. We tried several things. Relevant properties are lowercase, useLowercaseSplitWords, and lowercaseNGrams. The best thing seemed to be to put in both regular-case and lowercased splitWords features, but to keep the character n-grams cased.

More on that technical point: the top features list also shows that many of the features are highly collinear: you get pairs like SW-car and S#E-car or S#E-dows and S#E-ows which mainly match in the same documents. This is common with textual features, and we don't try to solve this problem. The best we can do is to observe that maximum entropy models are reasonably tolerant of this sort of feature overlap: The fact that the model with both lowercased and regular case features seems to work best is indicative of this.

We'll use that. You might then also want to save your built classifier so you can run it on new data later. (You can do this directly with ColumnDataClassifier; in your own program, you would then load the classifier using a method like LinearClassifier.readClassifier(filename), as in the short sketch after the properties file below.) This gives us our final properties file: http://nlp.stanford.edu/software/classifier/20news4.prop

trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
2.useLowercaseSplitWords=true
2.useSplitWords=true
2.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
2.splitWordsIgnoreRegexp=\\s+
2.useSplitPrefixSuffixNGrams=true
2.maxNGramLeng=4
2.minNGramLeng=1
2.splitWordShape=chris4
printClassifier=HighWeight
printClassifierParam=400
serializeTo=20newsgroups4.ser.gz
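
As an aside, once serializeTo has written 20newsgroups4.ser.gz, loading the model back inside your own Java program might look roughly like the following sketch. This is a hypothetical illustration, not code from the distribution: the tutorial only names LinearClassifier.readClassifier(filename), exact signatures may differ across classifier versions (check the Javadoc), and the feature strings below are invented; real test examples must be featurized with exactly the same ColumnDataClassifier configuration used at training time (compare the 1-SW-car style names in the weight listing above).

 import java.util.Arrays;
 import edu.stanford.nlp.classify.LinearClassifier;
 import edu.stanford.nlp.ling.BasicDatum;
 import edu.stanford.nlp.ling.Datum;

 public class LoadClassifierDemo {
   public static void main(String[] args) throws Exception {
     // Load the classifier serialized by the serializeTo property above.
     LinearClassifier<String, String> classifier =
         LinearClassifier.readClassifier("20newsgroups4.ser.gz");
     // Purely illustrative datum: the feature names must match what the trained
     // model actually uses; in practice, featurize new text the same way as in training.
     Datum<String, String> example =
         new BasicDatum<String, String>(Arrays.asList("1-SW-car", "1-S#E-car"));
     System.out.println(classifier.classOf(example));  // with luck, rec.autos
   }
 }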

Here are the devtest set results:

2241 examples in test set
Cls alt.atheism: TP=90 FN=10 FP=5 TN=2136; Acc 0.993 P 0.947 R 0.900 F1 0.923
Cls comp.graphics: TP=91 FN=24 FP=26 TN=2100; Acc 0.978 P 0.778 R 0.791 F1 0.784
Cls comp.os.ms-windows.misc: TP=98 FN=17 FP=20 TN=2106; Acc 0.983 P 0.831 R 0.852 F1 0.841
Cls comp.sys.ibm.pc.hardware: TP=90 FN=28 FP=35 TN=2088; Acc 0.972 P 0.720 R 0.763 F1 0.741
Cls comp.sys.mac.hardware: TP=92 FN=22 FP=15 TN=2112; Acc 0.983 P 0.860 R 0.807 F1 0.833
Cls comp.windows.x: TP=112 FN=8 FP=16 TN=2105; Acc 0.989 P 0.875 R 0.933 F1 0.903
Cls misc.forsale: TP=99 FN=18 FP=29 TN=2095; Acc 0.979 P 0.773 R 0.846 F1 0.808
Cls rec.autos: TP=98 FN=16 FP=15 TN=2112; Acc 0.986 P 0.867 R 0.860 F1 0.863
Cls rec.motorcycles: TP=106 FN=6 FP=6 TN=2123; Acc 0.995 P 0.946 R 0.946 F1 0.946
Cls rec.sport.baseball: TP=114 FN=5 FP=11 TN=2111; Acc 0.993 P 0.912 R 0.958 F1 0.934
Cls rec.sport.hockey: TP=113 FN=4 FP=5 TN=2119; Acc 0.996 P 0.958 R 0.966 F1 0.962
Cls sci.crypt: TP=110 FN=6 FP=2 TN=2123; Acc 0.996 P 0.982 R 0.948 F1 0.965
Cls sci.electronics: TP=97 FN=20 FP=16 TN=2108; Acc 0.984 P 0.858 R 0.829 F1 0.843
Cls sci.med: TP=113 FN=11 FP=3 TN=2114; Acc 0.994 P 0.974 R 0.911 F1 0.942
Cls sci.space: TP=117 FN=4 FP=4 TN=2116; Acc 0.996 P 0.967 R 0.967 F1 0.967
Cls soc.religion.christian: TP=113 FN=6 FP=18 TN=2104; Acc 0.989 P 0.863 R 0.950 F1 0.904
Cls talk.politics.guns: TP=99 FN=7 FP=4 TN=2131; Acc 0.995 P 0.961 R 0.934 F1 0.947
Cls talk.politics.mideast: TP=107 FN=4 FP=1 TN=2129; Acc 0.998 P 0.991 R 0.964 F1 0.977
Cls talk.politics.misc: TP=91 FN=8 FP=4 TN=2138; Acc 0.995 P 0.958 R 0.919 F1 0.938
Cls talk.religion.misc: TP=49 FN=18 FP=7 TN=2167; Acc 0.989 P 0.875 R 0.731 F1 0.797
Micro-averaged accuracy/F1: 0.89201
Macro-averaged F1: 0.89099

This then leaves the final test where we train on the full training set and then test on the test set:

java -mx1800m -cp $STANFORD_CLASSIFIER_JAR edu.stanford.nlp.classify.ColumnDataClassifier -prop 20news4.prop \
  -trainFile 20news-bydate-train-stanford-classifier.txt -testFile 20news-bydate-test-stanford-classifier.txt

Here are the final test set results:

You'll notice that they are quite a bit lower. Results being somewhat lower is to be expected (after all, we were overfitting to the devtest set by doing multiple runs), but the results here are a lot lower. I think this is because in the bydate version of 20 Newsgroups, the test set is all from a later time period than the training set, whereas when we subdivided the training set, we took a roughly uniform sample across it as the devtest set. In most time-series text data sets, documents that are close in time are extra similar, and what gets posted drifts over time, which is enough to produce a substantial difference like this.

Nevertheless, these results are pretty good! Here are results we could find in the academic literature for the same data set:


Other results on 20 Newsgroups

Rennie ICML 2003 (accuracy, averaged over 80/20 splits): Multinomial NB 0.848, TWCNB 0.861, SVM 0.862
Lan, Tan, and Low AAAI 2006 (accuracy): SVM 0.81, kNN 0.69
Gu and Zhou SIAM Data Mining 2009 (accuracy):