edu.stanford.nlp.classify
Class ColumnDataClassifier

java.lang.Object
  extended by edu.stanford.nlp.classify.ColumnDataClassifier

public class ColumnDataClassifier
extends java.lang.Object

ColumnDataClassifier provides a command-line interface for doing context-free (independent) classification of a series of data items, where each data item is represented by a line of a file, as a list of String variables, in tab-separated columns. Some features will interpret these variables as numbers, but the code is mainly oriented towards generating features for string classification. To designate a real-valued feature, use the realValued option described below. The classifier can be either a Bernoulli Naive Bayes model or a loglinear discriminative (i.e., maxent) model.

Input files are expected to be one data item per line with two or more columns indicating the class of the item and one or more predictive features. Columns are separated by tab characters. Tab and newline characters cannot occur inside field values (there is no escaping mechanism); any other characters are legal in field values.

Typical usage:

java edu.stanford.nlp.classify.ColumnDataClassifier -prop propFile

or

java -mx300m edu.stanford.nlp.classify.ColumnDataClassifier -trainFile trainFile -testFile testFile -useNGrams|... > output

(Note that for large data sets, you may wish to specify the amount of memory available to Java, such as in the second example above.)

In the simplest case, there are just two tab-separated columns in the training input: the first for the class, and the second for the String datum which has that class. In more complex uses, each datum can be multidimensional, and there are many columns of data attributes.
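
For example, a minimal two-column training file might look like the following (hypothetical data; the column separator must be a single tab character):

```
SPORTS	The team won the final game in overtime
MUSIC	The band released a new album this week
SPORTS	He scored twice in the second half
```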

To illustrate simple uses and the behavior of Naive Bayes and maximum entropy classifiers, example files corresponding to the examples from the Manning and Klein maxent classifier tutorial (slides 46-49, available at http://nlp.stanford.edu/downloads/classifier.shtml) are included in the classify package source directory (files starting with "easy"). Other examples appear in the examples directory of the distributed classifier.

In many instances, parameters can either be given on the command line or provided using a Properties file (specified on the command-line with -prop propFile). Option names are the same as property names with a preceding dash. Boolean properties can simply be turned on via the command line. Parameters of types int, String, and double take a following argument after the option. Command-line parameters can only define features for the first column describing the datum. If you have multidimensional data, you need to use a properties file. Property names, as below, are either global (things like the testFile name) or are seen as properties that define features for the first data column (we count columns from 0 - unlike the Unix cut command!). To specify features for a particular data column, precede a feature by a column number and then a period (for example, 3.wordShape=chris4). If no number is specified, then the default interpretation is column 0. Note that in properties files you must give a value to boolean properties (e.g., 2.useString=true); just giving the property name (as 2.useString) isn't sufficient.
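
For example, a properties file for four-column data (the class in column 0 and three predictive String columns) might contain the following; all property names are taken from the list of recognized properties, but the data file names are invented for illustration:

```properties
trainFile=animals-train.tsv
testFile=animals-test.tsv
goldAnswerColumn=0
displayedColumn=1
# Features for column 1: letter n-grams up to length 4
1.useNGrams=true
1.maxNGramLeng=4
# Features for column 2: words split on whitespace
2.useSplitWords=true
2.splitWordsRegexp=\\s+
# Features for column 3: word shapes
3.wordShape=chris4
```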

The following properties are recognized:

Each property is listed as name (type, default value), followed by its description; where a property generates features, the feature name pattern is given in brackets.

loadClassifier (String, default n/a): Path of serialized classifier file to load.
serializeTo (String, default n/a): Path to serialize classifier to.
printTo (String, default n/a): Path to print a text representation of the linear classifier to.
trainFile (String, default n/a): Path of file to use as training data.
testFile (String, default n/a): Path of file to use as test data.
displayedColumn (int, default 1): Column number that will be printed out to stdout in the output next to the gold class and the chosen class. This is just an aide-memoire. If the value is negative, nothing is printed.
goldAnswerColumn (int, default 0): Column number that contains the correct class for each data item (again, columns are numbered from 0 up).
groupingColumn (int, default -1): Column for grouping multiple data items for the purpose of computing ranking accuracy. This is appropriate when only one datum in a group can be correct, and the intention is to choose the highest probability one, rather than accepting all above a threshold. Multiple items in the same group must be contiguous in the test file (otherwise it would be necessary to cache probabilities and groups for the entire test file to check matches). If it is negative, no grouping column is used, and no ranking accuracy is reported.
rankingScoreColumn (int, default -1): If this parameter is non-negative and a groupingColumn is defined, then an average ranking score will be calculated by scoring the chosen candidate from a group according to its value in this column. (For instance, the values of this column can be set to a mean reciprocal rank of 1.0 for the best answer, 0.5 for the second best, and so on; or the value of this column can be a similarity score reflecting the similarity of the answer to the true answer.)
rankingAccuracyClass (String, default null): If this and groupingColumn are defined (positive), then the system will compute a ranking accuracy under the assumption that there is (at most) one assignment of this class for each group, and ranking accuracy counts the classifier as right if that datum is the one with highest probability according to the model.
useString (boolean, default false): Gives you a feature for the whole string s. [Feature name: S-str]
useClassFeature (boolean, default false): Include a feature for the class (as a class marginal). [Feature name: CLASS]
binnedLengths (String, default null): If non-null, treat as a sequence of comma-separated integer bounds, where items above the previous bound (if any) up to the next bound (inclusive) are binned (e.g., "1,5,15,30,60"). The feature represents the length of the String in this column. [Feature name: Len-range]
binnedLengthsStatistics (boolean, default false): If true, print to stderr a contingency table of statistics for binnedLengths.
binnedValues (String, default null): If non-null, treat as a sequence of comma-separated double bounds, where data items above the previous bound up to the next bound (inclusive) are binned. If a value in this column isn't a legal double, then the value is treated as binnedValuesNaN. [Feature name: Val-range]
binnedValuesNaN (double, default -1.0): If the value of a numeric binnedValues field is not a number, it will be given this value.
binnedValuesStatistics (boolean, default false): If true, print to stderr a contingency table of statistics for binnedValues.
countChars (String, default null): If non-null, count the number of occurrences of each character in the String, and make a feature for each character, binned according to countCharsBins. [Feature name: Char-ch-range]
countCharsBins (String, default "0,1"): Treat as a sequence of comma-separated integer bounds, where character counts above the previous bound up to and including the next bound are binned. For instance, a value of "0,2" will give 3 bins, dividing a character count into bins of 0, 1-or-2, and 3-or-more occurrences.
splitWordsRegexp (String, default null): If defined, use this as a regular expression on which to split the whole string (as in the String.split() function, which will return the things between delimiters, and discard the delimiters). The resulting split-up "words" will be used in classifier features iff one of the other "useSplit" options is turned on.
splitWordsTokenizerRegexp (String, default null): If defined, use this as a regular expression to cut initial pieces off a String. This regular expression should always match the String, and the size of the token is the number of characters matched. So, for example, one can group letter and number characters but do nothing else with a regular expression like ([A-Za-z]+|[0-9]+|.). (If the regular expression doesn't match, the first character of the string is treated as a one-character word, matching is then tried again, and a warning message is printed.) Note that, for Java regular expressions with disjunctions like this, the match is the first matching disjunction, not the longest matching disjunction, so patterns with common prefixes need to be ordered from most specific (longest) to least specific (shortest). The resulting split-up "words" will be used in classifier features iff one of the other "useSplit" options is turned on. Note that as usual for Java String processing, backslashes must be doubled in the regular expressions that you write.
splitWordsIgnoreRegexp (String, default null): If defined, this regexp is used to determine character sequences which should not be returned as tokens when using splitWordsTokenizerRegexp. Typically, these might be whitespace tokens (i.e., \\s+).
useSplitWords (boolean, default false): Make features from the "words" that are returned by dividing the string on splitWordsRegexp or splitWordsTokenizerRegexp. Requires splitWordsRegexp or splitWordsTokenizerRegexp. [Feature name: SW-str]
useLowercaseSplitWords (boolean, default false): Make features from the "words" that are returned by dividing the string on splitWordsRegexp or splitWordsTokenizerRegexp and then lowercasing the result. Requires splitWordsRegexp or splitWordsTokenizerRegexp. Note that this can be specified independently of useSplitWords. You can put either or both original-cased and lowercased words in as features. [Feature name: SW-str]
useSplitWordPairs (boolean, default false): Make features from the pairs of adjacent "words" that are returned by dividing the string into splitWords. Requires splitWordsRegexp or splitWordsTokenizerRegexp. [Feature name: SWP-str1-str2]
maxWordNGramLeng (int, default -1): If this number is positive, word n-grams above this size will not be used in the model.
minWordNGramLeng (int, default 1): Must be positive. Word n-grams below this size will not be used in the model.
wordNGramBoundaryRegexp (String, default null): If this is defined and the regexp matches, then the n-gram stops.
useSplitFirstLastWords (boolean, default false): Make features from the first and last "words" that are returned as splitWords. Requires splitWordsRegexp or splitWordsTokenizerRegexp. [Feature names: SFW-str, SLW-str]
useSplitNGrams (boolean, default false): Make features from letter n-grams (internal as well as edge, all treated the same) after the data string has been split into tokens. Requires splitWordsRegexp or splitWordsTokenizerRegexp. [Feature name: S#-str]
useSplitPrefixSuffixNGrams (boolean, default false): Make features from prefixes and suffixes after splitting with splitWordsRegexp. Requires splitWordsRegexp or splitWordsTokenizerRegexp. [Feature names: S#B-str, S#E-str]
useNGrams (boolean, default false): Make features from letter n-grams (internal as well as edge, all treated the same). [Feature name: #-str]
usePrefixSuffixNGrams (boolean, default false): Make features from prefix and suffix strings. [Feature names: #B-str, #E-str]
lowercase (boolean, default false): Make the input string lowercase so all features work unicase.
lowercaseNGrams (boolean, default false): Make features from letter n-grams all lowercase (for both useNGrams and usePrefixSuffixNGrams).
maxNGramLeng (int, default -1): If this number is positive, n-grams above this size will not be used in the model.
minNGramLeng (int, default 2): Must be positive. n-grams below this size will not be used in the model.
partialNGramRegexp (String, default null): If this is defined and the regexp matches, then n-grams are made only from the matching text (if no capturing groups are defined) or from the first capturing group of the regexp, if there is one. This substring is used for both useNGrams and usePrefixSuffixNGrams.
realValued (boolean, default false): Treat this column as real-valued and do not perform any transforms on the feature value. [Feature name: Value]
logTransform (boolean, default false): Treat this column as real-valued and use the log of the value as the feature value. [Feature name: Log]
logitTransform (boolean, default false): Treat this column as real-valued and use the logit of the value as the feature value. [Feature name: Logit]
sqrtTransform (boolean, default false): Treat this column as real-valued and use the square root of the value as the feature value. [Feature name: Sqrt]
wordShape (String, default none): Either "none" for no wordShape use, or the name of a word shape function recognized by WordShapeClassifier.lookupShaper(String), such as "dan1" or "chris4". WordShape functions equivalence-class strings based on the pattern of letter, digit, and symbol characters that they contain. The details depend on the particular function chosen. [Feature name: SHAPE-str]
splitWordShape (String, default none): Either "none" for no wordShape or the name of a word shape function recognized by WordShapeClassifier.lookupShaper(String). This is applied to each "word" found by splitWordsRegexp or splitWordsTokenizerRegexp. [Feature name: SSHAPE-str]
featureMinimumSupport (int, default 0): A feature, that is, an (observed, class) pair, will only be included in the model provided it is seen at least this number of times in the training data.
biasedHyperplane (String, default null): If non-null, a sequence of comma-separated pairs of className prob. An item will only be classified to a certain class className if its probability of class membership exceeds the given conditional probability prob; otherwise it will be assigned to a different class. If this list of classes is exhaustive, and no condition is satisfied, then the most probable class is chosen.
printFeatures (String, default null): Print out the features of the classifier to a file based on this name.
printClassifier (String, default null): Style in which to print the classifier. One of: HighWeight, HighMagnitude, AllWeights, WeightHistogram, WeightDistribution. See the LinearClassifier class for details.
printClassifierParam (int, default 100): A parameter to the printing style, which may give, for example, the number of parameters to print (for HighWeight or HighMagnitude).
justify (boolean, default false): For each test data item, print justification (weights) for active features used in classification.
exitAfterTrainingFeaturization (boolean, default false): If true, the program exits after reading the training data (trainFile) and before building a classifier. This is useful in conjunction with printFeatures, if one only wants to convert data to features for use with another classifier.
intern (boolean, default false): If true, (String) intern all of the (final) feature names. Recommended (this saves memory, but slows down feature generation in training).
cacheNGrams (boolean, default false): If true, record the n-gram features that correspond to a String (under the current option settings) and reuse them rather than recalculating if the String is seen again. Disrecommended (speeds training but can require enormous amounts of memory).
useNB (boolean, default false): Use a Naive Bayes generative classifier (over all features) rather than a discriminative logistic regression classifier. (Set useClassFeature to true to get a prior term.)
useBinary (boolean, default false): Use the binary classifier (i.e., use LogisticClassifierFactory rather than LinearClassifierFactory) to get the classifier.
l1reg (double, default 0.0): If set to be larger than 0, uses L1 regularization.
useAdaptL1 (boolean, default false): If true, uses adaptive L1 regularization to find the value of l1reg that gives the desired number of features set by limitFeatures.
l1regmin (double, default 0.0): Minimum L1 in search.
l1regmax (double, default 500.0): Maximum L1 in search.
featureWeightThreshold (double, default 0.0): Threshold of model weight at which a feature is kept. "Unimportant" low-weight features are discarded. (Currently only implemented for adaptL1.)
limitFeaturesLabels (String, default null): If set, only include features for these labels in the desired number of features.
limitFeatures (int, default 0): If set to be larger than 0, uses adaptive L1 regularization to find the value of l1reg that gives the desired number of features.
prior (String/int, default quadratic): Type of prior (regularization penalty on weights). Possible values are null, "no", "quadratic", "huber", "quartic", "cosh", or "adapt". See LogPrior for more information.
useSum (boolean, default false): Do optimization via summed conditional likelihood, rather than the product. (This is expensive, non-standard, and somewhat unstable, but can be quite effective: see the Klein and Manning 2002 EMNLP paper.)
tolerance (double, default 1e-4): Convergence tolerance in parameter optimization.
sigma (double, default 1.0): A parameter to several of the smoothing (i.e., regularization) methods, usually giving a degree of smoothing as a standard deviation (with small positive values being stronger smoothing, and bigger values weaker smoothing).
epsilon (double, default 0.01): Used only as a parameter in the Huber loss: this is the distance from 0 at which the loss changes from quadratic to linear.
useQN (boolean, default true): Use Quasi-Newton optimization if true, otherwise use Conjugate Gradient optimization. Recommended.
QNsize (int, default 15): Number of previous iterations of Quasi-Newton to store (this increases memory use, but speeds convergence by letting the Quasi-Newton optimization more effectively approximate the second derivative).
featureFormat (boolean, default false): Assumes the input file isn't text strings but already featurized. One column is treated as the class column (as defined by goldAnswerColumn), and all other columns are treated as features of the instance. (If answers are not present, set goldAnswerColumn to a negative number.)
trainFromSVMLight (boolean, default false): Assumes the trainFile is in SVMLight format (see the SVMLight webpage for more information).
testFromSVMLight (boolean, default false): Assumes the testFile is in SVMLight format.
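
Two of the string-processing mechanisms described above can be made concrete with a small self-contained Java sketch: the bound-based binning used by binnedLengths and countCharsBins, and the repeated prefix-matching used by splitWordsTokenizerRegexp. This is only an illustration of the documented behavior, not the actual Stanford NLP implementation; the class and helper names (ColumnDataClassifierSketch, lengthBin, tokenize) are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ColumnDataClassifierSketch {

    // Bound-based binning as described for binnedLengths: each bin covers
    // values above the previous bound up to and including the next bound,
    // plus an open-ended final bin. Returns a label like "Len-2-5".
    static String lengthBin(int length, int[] bounds) {
        int prev = -1; // lengths start at 0, so the first bin is 0..bounds[0]
        for (int bound : bounds) {
            if (length <= bound) {
                return "Len-" + (prev + 1) + "-" + bound;
            }
            prev = bound;
        }
        return "Len-" + (prev + 1) + "-Inf"; // beyond the last bound
    }

    // Repeatedly cut initial pieces off the string, as described for
    // splitWordsTokenizerRegexp: each token is whatever the regexp matches
    // at the current position; on a failed match, fall back to a
    // one-character token (the real code also prints a warning).
    static List<String> tokenize(String s, String regexp) {
        Pattern p = Pattern.compile(regexp);
        List<String> tokens = new ArrayList<>();
        int pos = 0;
        while (pos < s.length()) {
            Matcher m = p.matcher(s).region(pos, s.length());
            if (m.lookingAt() && m.end() > pos) {
                tokens.add(s.substring(pos, m.end()));
                pos = m.end();
            } else {
                tokens.add(s.substring(pos, pos + 1));
                pos++;
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        int[] bounds = {1, 5, 15, 30, 60}; // the "1,5,15,30,60" example above
        System.out.println(lengthBin(3, bounds));  // Len-2-5
        System.out.println(lengthBin(0, bounds));  // Len-0-1
        // The example tokenizer regexp from splitWordsTokenizerRegexp:
        System.out.println(tokenize("abc123-x", "([A-Za-z]+|[0-9]+|.)"));
        // [abc, 123, -, x]
    }
}
```

Under these assumptions, a datum of length 3 with bounds "1,5,15,30,60" falls into the bin labeled Len-2-5, matching the Len-range feature naming shown above.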

Author:
Christopher Manning, Anna Rafferty, Angel Chang (added options for using l1reg)

Constructor Summary
ColumnDataClassifier(java.util.Properties props)
          Construct a ColumnDataClassifier.
ColumnDataClassifier(java.lang.String filename)
          Construct a ColumnDataClassifier.
 
Method Summary
static void main(java.lang.String[] args)
          Runs the ColumnDataClassifier from the command-line.
 Classifier<java.lang.String,java.lang.String> makeClassifier(GeneralDataset<java.lang.String,java.lang.String> train)
          Creates a classifier from training data.
 Classifier<java.lang.String,java.lang.String> makeClassifierAdaptL1(GeneralDataset<java.lang.String,java.lang.String> train)
          Creates a classifier from training data.
 Datum<java.lang.String,java.lang.String> makeDatumFromLine(java.lang.String line, int lineNo)
          Entry point for taking a String (formatted as a line of a TSV file) and translating it into a Datum of features.
static java.lang.String[] makeSimpleLineInfo(java.lang.String line, int lineNo)
           
 Pair<GeneralDataset<java.lang.String,java.lang.String>,java.util.List<java.lang.String[]>> readTestExamples(java.lang.String filename)
           
 GeneralDataset<java.lang.String,java.lang.String> readTrainingExamples(java.lang.String fileName)
           
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

ColumnDataClassifier

public ColumnDataClassifier(java.lang.String filename)
Construct a ColumnDataClassifier.

Parameters:
filename - The file with properties which specifies all aspects of behavior. See the class documentation for details of the properties.

ColumnDataClassifier

public ColumnDataClassifier(java.util.Properties props)
Construct a ColumnDataClassifier.

Parameters:
props - The properties object specifies all aspects of its behavior. See the class documentation for details of the properties.
Method Detail

makeDatumFromLine

public Datum<java.lang.String,java.lang.String> makeDatumFromLine(java.lang.String line,
                                                                  int lineNo)
Entry point for taking a String (formatted as a line of a TSV file) and translating it into a Datum of features. If real-valued features are used, this method calls makeRVFDatumFromLine and returns an RVFDatum; otherwise, categorical features are used.

Parameters:
line - Line of file
lineNo - The line number. This is just used in error messages if there is an input format problem. You can make it 0.
Returns:
A Datum (may be an RVFDatum)

makeSimpleLineInfo

public static java.lang.String[] makeSimpleLineInfo(java.lang.String line,
                                                    int lineNo)

readTrainingExamples

public GeneralDataset<java.lang.String,java.lang.String> readTrainingExamples(java.lang.String fileName)

readTestExamples

public Pair<GeneralDataset<java.lang.String,java.lang.String>,java.util.List<java.lang.String[]>> readTestExamples(java.lang.String filename)

makeClassifierAdaptL1

public Classifier<java.lang.String,java.lang.String> makeClassifierAdaptL1(GeneralDataset<java.lang.String,java.lang.String> train)
Creates a classifier from training data. It searches for an appropriate l1reg parameter to use to get specified number of features.

Parameters:
train - training data
Returns:
trained classifier

makeClassifier

public Classifier<java.lang.String,java.lang.String> makeClassifier(GeneralDataset<java.lang.String,java.lang.String> train)
Creates a classifier from training data.

Parameters:
train - training data
Returns:
trained classifier

main

public static void main(java.lang.String[] args)
                 throws java.io.IOException
Runs the ColumnDataClassifier from the command-line. Usage:

java edu.stanford.nlp.classify.ColumnDataClassifier -trainFile trainFile -testFile testFile [-useNGrams|-useString|-sigma sigma|...]

or

java ColumnDataClassifier -prop propFile

Parameters:
args - Command line arguments, as described in the class documentation
Throws:
java.io.IOException


Stanford NLP Group