edu.stanford.nlp.classify
Class LinearClassifierFactory<L,F>

java.lang.Object
  extended by edu.stanford.nlp.classify.AbstractLinearClassifierFactory<L,F>
      extended by edu.stanford.nlp.classify.LinearClassifierFactory<L,F>
All Implemented Interfaces:
ClassifierFactory<L,F,Classifier<L,F>>, java.io.Serializable

public class LinearClassifierFactory<L,F>
extends AbstractLinearClassifierFactory<L,F>

Builds various types of linear classifiers, with functionality for setting the objective function, optimization method, and other parameters. Classifiers can be configured via constructor arguments or via setter methods. Defaults to quasi-Newton optimization of a LogConditionalObjectiveFunction. (Merges the old classes CGLinearClassifierFactory, QNLinearClassifierFactory, and MaxEntClassifierFactory.)
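A minimal usage sketch: build a small dataset, train with the defaults, and classify a new datum. This assumes the Stanford CoreNLP classes are on the classpath; the feature and label strings are illustrative only.

```java
import edu.stanford.nlp.classify.Dataset;
import edu.stanford.nlp.classify.LinearClassifier;
import edu.stanford.nlp.classify.LinearClassifierFactory;
import edu.stanford.nlp.ling.BasicDatum;
import edu.stanford.nlp.ling.Datum;
import java.util.Arrays;

public class TrainExample {
  public static void main(String[] args) {
    // Toy training data: each datum is a bag of string features plus a label.
    Dataset<String, String> train = new Dataset<>();
    train.add(new BasicDatum<>(Arrays.asList("fuzzy", "small"), "cat"));
    train.add(new BasicDatum<>(Arrays.asList("barks", "loyal"), "dog"));

    // Default factory: quasi-Newton optimization of a
    // LogConditionalObjectiveFunction with a quadratic prior.
    LinearClassifierFactory<String, String> factory = new LinearClassifierFactory<>();
    factory.setSigma(1.0);  // Gaussian prior strength (default 1.0)
    factory.setTol(1e-4);   // convergence tolerance (default 1e-4)

    LinearClassifier<String, String> classifier = factory.trainClassifier(train);
    Datum<String, String> test = new BasicDatum<>(Arrays.asList("fuzzy"));
    System.out.println(classifier.classOf(test));
  }
}
```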

Author:
Jenny Finkel, Chris Cox (merged factories, 8/11/04), Dan Klein (CGLinearClassifierFactory, MaxEntClassifierFactory), Galen Andrew (tuneSigma), Marie-Catherine de Marneffe (CV in tuneSigma), Sarah Spikes (templatization, though I don't know what to do with the Minimizer), Ramesh Nallapati (nmramesh@cs.stanford.edu) (trainSemiSupGE(GeneralDataset, List, List, double) methods)
See Also:
Serialized Form

Nested Class Summary
static class LinearClassifierFactory.LinearClassifierCreator<L,F>
           
 
Field Summary
protected static double[] sigmasToTry
           
 
Constructor Summary
LinearClassifierFactory()
           
LinearClassifierFactory(boolean useSum)
           
LinearClassifierFactory(double tol)
           
LinearClassifierFactory(double tol, boolean useSum, double sigma)
           
LinearClassifierFactory(double tol, boolean useSum, int prior, double sigma, double epsilon)
           
LinearClassifierFactory(double tol, boolean useSum, int prior, double sigma, double epsilon, int mem)
           
LinearClassifierFactory(Minimizer<DiffFunction> min)
           
LinearClassifierFactory(Minimizer<DiffFunction> min, boolean useSum)
           
LinearClassifierFactory(Minimizer<DiffFunction> min, double tol, boolean useSum)
           
LinearClassifierFactory(Minimizer<DiffFunction> min, double tol, boolean useSum, double sigma)
           
LinearClassifierFactory(Minimizer<DiffFunction> min, double tol, boolean useSum, int prior, double sigma)
           
LinearClassifierFactory(Minimizer<DiffFunction> min, double tol, boolean useSum, int prior, double sigma, double epsilon)
          Create a factory that builds linear classifiers from training data.
LinearClassifierFactory(Minimizer<DiffFunction> min, double tol, boolean useSum, LogPrior logPrior)
           
 
Method Summary
 double[][] adaptWeights(double[][] origWeights, GeneralDataset<L,F> adaptDataset)
          Adapts a classifier by adjusting the mean of its Gaussian prior. Under construction. -pichuan
 void crossValidateSetSigma(GeneralDataset<L,F> dataset)
          Calls the method crossValidateSetSigma(GeneralDataset, int) with 5-fold cross-validation.
 void crossValidateSetSigma(GeneralDataset<L,F> dataset, int kfold)
          Calls the method crossValidateSetSigma(GeneralDataset, int, Scorer, LineSearcher) with multi-class log-likelihood scoring (see MultiClassAccuracyStats) and golden-section line search (see GoldenSectionLineSearch).
 void crossValidateSetSigma(GeneralDataset<L,F> dataset, int kfold, LineSearcher minimizer)
           
 void crossValidateSetSigma(GeneralDataset<L,F> dataset, int kfold, Scorer<L> scorer)
           
 void crossValidateSetSigma(GeneralDataset<L,F> dataset, int kfold, Scorer<L> scorer, LineSearcher minimizer)
          Sets the sigma parameter to a value that optimizes the cross-validation score given by scorer.
 LinearClassifierFactory.LinearClassifierCreator<L,F> getClassifierCreator(GeneralDataset<L,F> dataset)
           
 double getSigma()
           
 double[] heldOutSetSigma(GeneralDataset<L,F> train)
           
 double[] heldOutSetSigma(GeneralDataset<L,F> train, GeneralDataset<L,F> dev)
           
 double[] heldOutSetSigma(GeneralDataset<L,F> train, GeneralDataset<L,F> dev, LineSearcher minimizer)
           
 double[] heldOutSetSigma(GeneralDataset<L,F> train, GeneralDataset<L,F> dev, Scorer<L> scorer)
           
 double[] heldOutSetSigma(GeneralDataset<L,F> trainSet, GeneralDataset<L,F> devSet, Scorer<L> scorer, LineSearcher minimizer)
          Sets the sigma parameter to a value that optimizes the held-out score given by scorer.
 double[] heldOutSetSigma(GeneralDataset<L,F> train, Scorer<L> scorer)
           
 Classifier<java.lang.String,java.lang.String> loadFromFilename(java.lang.String file)
          Given the path to a file containing a text-based serialization of a LinearClassifier, reconstitutes and returns that LinearClassifier.
 void resetWeight()
          NOTE: Nothing is actually done with this value.
 void setEpsilon(double eps)
          Sets the epsilon value for LogConditionalObjectiveFunction.
 boolean setEvaluators(int iters, Evaluator[] evaluators)
           
 void setHeldOutSearcher(LineSearcher heldOutSearcher)
          Set the LineSearcher to be used in heldOutSetSigma(GeneralDataset, GeneralDataset).
 void setMem(int mem)
          Set the mem value for QNMinimizer.
 void setMinimizer(Minimizer<DiffFunction> min)
          Sets the minimizer.
 void setPrior(LogPrior logPrior)
          Set the prior.
 void setRetrainFromScratchAfterSigmaTuning(boolean retrainFromScratchAfterSigmaTuning)
          If set to true, then when training a classifier, after an optimal sigma is chosen a model is relearned from scratch.
 void setSigma(double sigma)
           
 void setTol(double tol)
          Set the tolerance.
 void setTuneSigmaCV(int folds)
          setTuneSigmaCV sets the tuneSigmaCV flag: when turned on, the sigma is tuned by cross-validation.
 void setTuneSigmaHeldOut()
          setTuneSigmaHeldOut sets the tuneSigmaHeldOut flag: when turned on, sigma is tuned on a held-out set (a 70%-30% split of the training data).
 void setUseSum(boolean useSum)
          NOTE: nothing is actually done with this value! setUseSum sets the useSum flag: when turned on, the summed conditional objective function is used.
 void setVerbose(boolean verbose)
          Set the verbose flag for CGMinimizer.
 LinearClassifier<L,F> trainClassifier(GeneralDataset<L,F> dataset)
          Trains a Classifier on a Dataset.
 LinearClassifier<L,F> trainClassifier(GeneralDataset<L,F> dataset, double[] initial)
           
 Classifier<L,F> trainClassifier(GeneralDataset<L,F> dataset, float[] dataWeights, LogPrior prior)
           
 Classifier<L,F> trainClassifier(java.lang.Iterable<Datum<L,F>> dataIterable)
           
 LinearClassifier<L,F> trainClassifier(java.util.List<RVFDatum<L,F>> examples)
          Deprecated. 
 Classifier<L,F> trainClassifierSemiSup(GeneralDataset<L,F> data, GeneralDataset<L,F> biasedData, double[][] confusionMatrix, double[] initial)
          IMPORTANT: dataset and biasedDataset must have the same featureIndex and labelIndex
 LinearClassifier<L,F> trainClassifierV(GeneralDataset<L,F> train, double min, double max, boolean accuracy)
          Train a classifier with a sigma tuned on a validation set.
 LinearClassifier<L,F> trainClassifierV(GeneralDataset<L,F> train, GeneralDataset<L,F> validation, double min, double max, boolean accuracy)
          Train a classifier with a sigma tuned on a validation set.
 LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset, java.util.List<? extends Datum<L,F>> unlabeledDataList)
          Trains the linear classifier using Generalized Expectation criteria as described in Generalized Expectation Criteria for Semi Supervised Learning of Conditional Random Fields, Mann and McCallum, ACL 2008.
 LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset, java.util.List<? extends Datum<L,F>> unlabeledDataList, double convexComboCoeff)
           
 LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset, java.util.List<? extends Datum<L,F>> unlabeledDataList, java.util.List<F> GEFeatures, double convexComboCoeff)
          Trains the linear classifier using Generalized Expectation criteria as described in Generalized Expectation Criteria for Semi Supervised Learning of Conditional Random Fields, Mann and McCallum, ACL 2008.
 double[][] trainWeights(GeneralDataset<L,F> dataset)
           
 double[][] trainWeights(GeneralDataset<L,F> dataset, double[] initial)
           
 double[][] trainWeights(GeneralDataset<L,F> dataset, double[] initial, boolean bypassTuneSigma)
           
 double[][] trainWeightsSemiSup(GeneralDataset<L,F> data, GeneralDataset<L,F> biasedData, double[][] confusionMatrix, double[] initial)
           
 void useConjugateGradientAscent()
          Sets the minimizer to CGMinimizer.
 void useConjugateGradientAscent(boolean verbose)
          Sets the minimizer to CGMinimizer, with the passed verbose flag.
 void useHybridMinimizer()
           
 void useHybridMinimizer(double initialSMDGain, int stochasticBatchSize, StochasticCalculateMethods stochasticMethod, int cutoffIteration)
           
 void useHybridMinimizerWithInPlaceSGD(int SGDPasses, int tuneSampleSize, double sigma)
           
 void useInPlaceStochasticGradientDescent()
           
 void useInPlaceStochasticGradientDescent(int SGDPasses, int tuneSampleSize, double sigma)
           
 void useQuasiNewton()
          Sets the minimizer to QuasiNewton.
 void useQuasiNewton(boolean useRobust)
           
 void useStochasticGradientDescent()
           
 void useStochasticGradientDescent(double gainSGD, int stochasticBatchSize)
           
 void useStochasticGradientDescentToQuasiNewton(SeqClassifierFlags p)
           
 void useStochasticMetaDescent()
           
 void useStochasticMetaDescent(double initialSMDGain, int stochasticBatchSize, StochasticCalculateMethods stochasticMethod, int passes)
           
 void useStochasticQN(double initialSMDGain, int stochasticBatchSize)
           
 
Methods inherited from class edu.stanford.nlp.classify.AbstractLinearClassifierFactory
trainClassifier, trainClassifier
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

sigmasToTry

protected static double[] sigmasToTry
Constructor Detail

LinearClassifierFactory

public LinearClassifierFactory()

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min)

LinearClassifierFactory

public LinearClassifierFactory(boolean useSum)

LinearClassifierFactory

public LinearClassifierFactory(double tol)

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               boolean useSum)

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               double tol,
                               boolean useSum)

LinearClassifierFactory

public LinearClassifierFactory(double tol,
                               boolean useSum,
                               double sigma)

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               double tol,
                               boolean useSum,
                               double sigma)

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               double tol,
                               boolean useSum,
                               int prior,
                               double sigma)

LinearClassifierFactory

public LinearClassifierFactory(double tol,
                               boolean useSum,
                               int prior,
                               double sigma,
                               double epsilon)

LinearClassifierFactory

public LinearClassifierFactory(double tol,
                               boolean useSum,
                               int prior,
                               double sigma,
                               double epsilon,
                               int mem)

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               double tol,
                               boolean useSum,
                               int prior,
                               double sigma,
                               double epsilon)
Create a factory that builds linear classifiers from training data.

Parameters:
min - The method to be used for optimization (minimization) (default: QNMinimizer)
tol - The convergence threshold for the minimization (default: 1e-4)
useSum - Asks the optimizer to minimize the sum of the likelihoods of individual data items rather than their product (default: false). NOTE: this is currently ignored!
prior - What kind of prior to use, as an enum constant from class LogPrior
sigma - The strength of the prior (smaller is stronger for most standard priors) (default: 1.0)
epsilon - A second parameter to the prior (currently only used by the Huber prior)
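The same configuration can also be reached after construction via the setter methods documented below. A sketch, using the documented defaults where they are stated (the epsilon value here is illustrative, since only the Huber prior uses it):

```java
import edu.stanford.nlp.classify.LinearClassifierFactory;

public class ConfigureExample {
  public static void main(String[] args) {
    LinearClassifierFactory<String, String> factory = new LinearClassifierFactory<>();
    factory.useQuasiNewton(); // QNMinimizer, the default optimizer
    factory.setTol(1e-4);     // convergence threshold (default 1e-4)
    factory.setSigma(1.0);    // prior strength; smaller is stronger for most priors
    factory.setEpsilon(0.01); // second prior parameter; only used by the Huber prior
    factory.setMem(15);       // QNMinimizer history size (default 15)
  }
}
```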

LinearClassifierFactory

public LinearClassifierFactory(Minimizer<DiffFunction> min,
                               double tol,
                               boolean useSum,
                               LogPrior logPrior)
Method Detail

adaptWeights

public double[][] adaptWeights(double[][] origWeights,
                               GeneralDataset<L,F> adaptDataset)
Adapts a classifier by adjusting the mean of its Gaussian prior. Under construction. -pichuan

Parameters:
origWeights - the original weights trained from the training data
adaptDataset - the Dataset used to adapt the trained weights
Returns:
adapted weights

trainWeights

public double[][] trainWeights(GeneralDataset<L,F> dataset)
Specified by:
trainWeights in class AbstractLinearClassifierFactory<L,F>

trainWeights

public double[][] trainWeights(GeneralDataset<L,F> dataset,
                               double[] initial)

trainWeights

public double[][] trainWeights(GeneralDataset<L,F> dataset,
                               double[] initial,
                               boolean bypassTuneSigma)

trainClassifierSemiSup

public Classifier<L,F> trainClassifierSemiSup(GeneralDataset<L,F> data,
                                              GeneralDataset<L,F> biasedData,
                                              double[][] confusionMatrix,
                                              double[] initial)
IMPORTANT: dataset and biasedDataset must have the same featureIndex and labelIndex


trainWeightsSemiSup

public double[][] trainWeightsSemiSup(GeneralDataset<L,F> data,
                                      GeneralDataset<L,F> biasedData,
                                      double[][] confusionMatrix,
                                      double[] initial)

trainSemiSupGE

public LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset,
                                            java.util.List<? extends Datum<L,F>> unlabeledDataList,
                                            java.util.List<F> GEFeatures,
                                            double convexComboCoeff)
Trains the linear classifier using Generalized Expectation criteria as described in Generalized Expectation Criteria for Semi Supervised Learning of Conditional Random Fields, Mann and McCallum, ACL 2008. The original algorithm is proposed for CRFs but has been adapted to LinearClassifier (which is a simpler special case of a CRF). IMPORTANT: the labeled features passed as an argument are assumed to be binary-valued, although other features may be real-valued.


trainSemiSupGE

public LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset,
                                            java.util.List<? extends Datum<L,F>> unlabeledDataList)
Trains the linear classifier using Generalized Expectation criteria as described in Generalized Expectation Criteria for Semi Supervised Learning of Conditional Random Fields, Mann and McCallum, ACL 2008. The original algorithm is proposed for CRFs but has been adapted to LinearClassifier (which is a simpler, special case of a CRF). Automatically discovers high-precision, high-frequency labeled features to be used as GE constraints. IMPORTANT: the current feature selector assumes the features are binary. The GE constraints assume the constraining features are binary in any case, although no such assumption is made about other features.


trainSemiSupGE

public LinearClassifier<L,F> trainSemiSupGE(GeneralDataset<L,F> labeledDataset,
                                            java.util.List<? extends Datum<L,F>> unlabeledDataList,
                                            double convexComboCoeff)

trainClassifierV

public LinearClassifier<L,F> trainClassifierV(GeneralDataset<L,F> train,
                                              GeneralDataset<L,F> validation,
                                              double min,
                                              double max,
                                              boolean accuracy)
Train a classifier with a sigma tuned on a validation set.

Returns:
The constructed classifier

trainClassifierV

public LinearClassifier<L,F> trainClassifierV(GeneralDataset<L,F> train,
                                              double min,
                                              double max,
                                              boolean accuracy)
Train a classifier with a sigma tuned on a validation set. In this case, the last 30% of the training data is used as the validation set.

Parameters:
train - The data to train (and validate) on.
Returns:
The constructed classifier

setTol

public void setTol(double tol)
Set the tolerance. 1e-4 is the default.


setPrior

public void setPrior(LogPrior logPrior)
Set the prior.

Parameters:
logPrior - One of the priors defined in LogConditionalObjectiveFunction. LogPrior.QUADRATIC is the default.

setVerbose

public void setVerbose(boolean verbose)
Set the verbose flag for CGMinimizer. Only used with conjugate-gradient minimization. false is the default.


setMinimizer

public void setMinimizer(Minimizer<DiffFunction> min)
Sets the minimizer. QNMinimizer is the default.


setEpsilon

public void setEpsilon(double eps)
Sets the epsilon value for LogConditionalObjectiveFunction.


setSigma

public void setSigma(double sigma)

getSigma

public double getSigma()

useQuasiNewton

public void useQuasiNewton()
Sets the minimizer to QuasiNewton. QNMinimizer is the default.


useQuasiNewton

public void useQuasiNewton(boolean useRobust)

useStochasticQN

public void useStochasticQN(double initialSMDGain,
                            int stochasticBatchSize)

useStochasticMetaDescent

public void useStochasticMetaDescent()

useStochasticMetaDescent

public void useStochasticMetaDescent(double initialSMDGain,
                                     int stochasticBatchSize,
                                     StochasticCalculateMethods stochasticMethod,
                                     int passes)

useStochasticGradientDescent

public void useStochasticGradientDescent()

useStochasticGradientDescent

public void useStochasticGradientDescent(double gainSGD,
                                         int stochasticBatchSize)

useInPlaceStochasticGradientDescent

public void useInPlaceStochasticGradientDescent()

useInPlaceStochasticGradientDescent

public void useInPlaceStochasticGradientDescent(int SGDPasses,
                                                int tuneSampleSize,
                                                double sigma)

useHybridMinimizerWithInPlaceSGD

public void useHybridMinimizerWithInPlaceSGD(int SGDPasses,
                                             int tuneSampleSize,
                                             double sigma)

useStochasticGradientDescentToQuasiNewton

public void useStochasticGradientDescentToQuasiNewton(SeqClassifierFlags p)

useHybridMinimizer

public void useHybridMinimizer()

useHybridMinimizer

public void useHybridMinimizer(double initialSMDGain,
                               int stochasticBatchSize,
                               StochasticCalculateMethods stochasticMethod,
                               int cutoffIteration)

setMem

public void setMem(int mem)
Set the mem value for QNMinimizer. Only used with quasi-newton minimization. 15 is the default.

Parameters:
mem - Number of previous function/derivative evaluations to store to estimate second derivative. Storing more previous evaluations improves training convergence speed. This number can be very small, if memory conservation is the priority. For large optimization systems (of 100,000-1,000,000 dimensions), setting this to 15 produces quite good results, but setting it to 50 can decrease the iteration count by about 20% over a value of 15.

useConjugateGradientAscent

public void useConjugateGradientAscent(boolean verbose)
Sets the minimizer to CGMinimizer, with the passed verbose flag.


useConjugateGradientAscent

public void useConjugateGradientAscent()
Sets the minimizer to CGMinimizer.


setUseSum

public void setUseSum(boolean useSum)
NOTE: nothing is actually done with this value! setUseSum sets the useSum flag: when turned on, the summed conditional objective function is used; otherwise, the LogConditionalObjectiveFunction is used. The default is false.


setTuneSigmaHeldOut

public void setTuneSigmaHeldOut()
setTuneSigmaHeldOut sets the tuneSigmaHeldOut flag: when turned on, sigma is tuned on a held-out set (a 70%-30% split of the training data). Otherwise no tuning of sigma is done. The default is false.


setTuneSigmaCV

public void setTuneSigmaCV(int folds)
setTuneSigmaCV sets the tuneSigmaCV flag: when turned on, sigma is tuned by cross-validation, with the number of folds given by the parameter. If there are fewer data points than folds, leave-one-out is used. The default is false.


resetWeight

public void resetWeight()
NOTE: Nothing is actually done with this value. resetWeight sets the resetWeight flag. This flag only makes sense if sigma is tuned: when turned on, the weights output by the tuneSigma method are reset to zero when training the classifier. The default is false.


crossValidateSetSigma

public void crossValidateSetSigma(GeneralDataset<L,F> dataset)
Calls the method crossValidateSetSigma(GeneralDataset, int) with 5-fold cross-validation.

Parameters:
dataset - the data set to optimize sigma on.

crossValidateSetSigma

public void crossValidateSetSigma(GeneralDataset<L,F> dataset,
                                  int kfold)
Calls the method crossValidateSetSigma(GeneralDataset, int, Scorer, LineSearcher) with multi-class log-likelihood scoring (see MultiClassAccuracyStats) and golden-section line search (see GoldenSectionLineSearch).

Parameters:
dataset - the data set to optimize sigma on.

crossValidateSetSigma

public void crossValidateSetSigma(GeneralDataset<L,F> dataset,
                                  int kfold,
                                  Scorer<L> scorer)

crossValidateSetSigma

public void crossValidateSetSigma(GeneralDataset<L,F> dataset,
                                  int kfold,
                                  LineSearcher minimizer)

crossValidateSetSigma

public void crossValidateSetSigma(GeneralDataset<L,F> dataset,
                                  int kfold,
                                  Scorer<L> scorer,
                                  LineSearcher minimizer)
Sets the sigma parameter to a value that optimizes the cross-validation score given by scorer. The search for an optimal value is carried out by minimizer.

Parameters:
dataset - the data set to optimize sigma on.
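For example, to tune sigma by 10-fold cross-validation and then train with the tuned value (a sketch; `train` is assumed to be an existing GeneralDataset built elsewhere):

```java
import edu.stanford.nlp.classify.GeneralDataset;
import edu.stanford.nlp.classify.LinearClassifier;
import edu.stanford.nlp.classify.LinearClassifierFactory;

public class TuneByCVExample {
  static LinearClassifier<String, String> tuneAndTrain(GeneralDataset<String, String> train) {
    LinearClassifierFactory<String, String> factory = new LinearClassifierFactory<>();
    // Tunes sigma in place on the factory using 10-fold cross-validation.
    factory.crossValidateSetSigma(train, 10);
    // Subsequent training uses the tuned sigma.
    return factory.trainClassifier(train);
  }
}
```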

setHeldOutSearcher

public void setHeldOutSearcher(LineSearcher heldOutSearcher)
Set the LineSearcher to be used in heldOutSetSigma(GeneralDataset, GeneralDataset).


heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> train)

heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> train,
                                Scorer<L> scorer)

heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> train,
                                GeneralDataset<L,F> dev)

heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> train,
                                GeneralDataset<L,F> dev,
                                Scorer<L> scorer)

heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> train,
                                GeneralDataset<L,F> dev,
                                LineSearcher minimizer)

heldOutSetSigma

public double[] heldOutSetSigma(GeneralDataset<L,F> trainSet,
                                GeneralDataset<L,F> devSet,
                                Scorer<L> scorer,
                                LineSearcher minimizer)
Sets the sigma parameter to a value that optimizes the held-out score given by scorer. The search for an optimal value is carried out by minimizer.

Returns:
an interim set of optimal weights (the weights learned during sigma tuning)
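A sketch of held-out tuning with an explicit dev set (`train` and `dev` are assumed to be existing GeneralDatasets built elsewhere):

```java
import edu.stanford.nlp.classify.GeneralDataset;
import edu.stanford.nlp.classify.LinearClassifier;
import edu.stanford.nlp.classify.LinearClassifierFactory;

public class TuneHeldOutExample {
  static LinearClassifier<String, String> tuneAndTrain(
      GeneralDataset<String, String> train, GeneralDataset<String, String> dev) {
    LinearClassifierFactory<String, String> factory = new LinearClassifierFactory<>();
    // Relearn the model from scratch once the best sigma is found,
    // rather than continuing from the tuning run's weights.
    factory.setRetrainFromScratchAfterSigmaTuning(true);
    // Tunes sigma on the dev set; returns the interim tuning weights.
    double[] interimWeights = factory.heldOutSetSigma(train, dev);
    return factory.trainClassifier(train);
  }
}
```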

setRetrainFromScratchAfterSigmaTuning

public void setRetrainFromScratchAfterSigmaTuning(boolean retrainFromScratchAfterSigmaTuning)
If set to true, then when training a classifier, after an optimal sigma is chosen, a model is relearned from scratch. If set to false (the default), then the model is updated from wherever it wound up in the sigma-tuning process. The latter is likely to be faster, but it's not clear which model will wind up better.


trainClassifier

public Classifier<L,F> trainClassifier(java.lang.Iterable<Datum<L,F>> dataIterable)

trainClassifier

public Classifier<L,F> trainClassifier(GeneralDataset<L,F> dataset,
                                       float[] dataWeights,
                                       LogPrior prior)

trainClassifier

public LinearClassifier<L,F> trainClassifier(GeneralDataset<L,F> dataset)
Description copied from class: AbstractLinearClassifierFactory
Trains a Classifier on a Dataset.

Specified by:
trainClassifier in interface ClassifierFactory<L,F,Classifier<L,F>>
Overrides:
trainClassifier in class AbstractLinearClassifierFactory<L,F>
Returns:
A Classifier trained on the data.

trainClassifier

public LinearClassifier<L,F> trainClassifier(GeneralDataset<L,F> dataset,
                                             double[] initial)

loadFromFilename

public Classifier<java.lang.String,java.lang.String> loadFromFilename(java.lang.String file)
Given the path to a file containing a text-based serialization of a LinearClassifier, reconstitutes and returns that LinearClassifier. TODO: Leverage Index


trainClassifier

@Deprecated
public LinearClassifier<L,F> trainClassifier(java.util.List<RVFDatum<L,F>> examples)
Deprecated. 

Specified by:
trainClassifier in interface ClassifierFactory<L,F,Classifier<L,F>>
Overrides:
trainClassifier in class AbstractLinearClassifierFactory<L,F>

setEvaluators

public boolean setEvaluators(int iters,
                             Evaluator[] evaluators)

getClassifierCreator

public LinearClassifierFactory.LinearClassifierCreator<L,F> getClassifierCreator(GeneralDataset<L,F> dataset)


Stanford NLP Group