Package edu.stanford.nlp.parser.metrics

Class Summary
AbstractEval A framework for set-based precision/recall/F1 evaluation.
AbstractEval.CatErrorEval Counts which categories are over- and under-proposed in trees.
AbstractEval.RuleErrorEval  
AbstractEval.ScoreEval Not strictly a set-based AbstractEval; it is grouped into this hierarchy for convenience.
DependencyEval Evaluates the dependency accuracy of a tree (based on HeadFinder dependency judgments).
Evalb A Java re-implementation of the evalb bracket scoring metric (Collins, 1997) that accepts Unicode input.
Evalb.CBEval  
EvalbByCat Computes labeled precision and recall (evalb) at the constituent category level.
TaggingEval Computes POS tagging P/R/F1 from guess/gold trees.
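The classes above all score a guess tree against a gold tree using labeled-bracket precision, recall, and F1 in the evalb style. As a minimal, self-contained sketch of that underlying computation (the class and record names here are illustrative, not part of the Stanford API), each constituent is reduced to a (category, start, end) bracket and the guess and gold bracket multisets are matched:

```java
import java.util.*;

// Minimal sketch of labeled-bracket P/R/F1, the computation that
// evalb-style metrics are based on. Illustrative names, not the
// Stanford NLP API.
public class BracketScore {
    // A labeled constituent: category plus start/end token indices.
    public record Bracket(String label, int start, int end) {}

    // Returns {precision, recall, f1} over the multiset intersection
    // of guess and gold brackets.
    public static double[] score(List<Bracket> guess, List<Bracket> gold) {
        Map<Bracket, Integer> goldCounts = new HashMap<>();
        for (Bracket b : gold) goldCounts.merge(b, 1, Integer::sum);
        int correct = 0;
        for (Bracket b : guess) {
            Integer c = goldCounts.get(b);
            if (c != null && c > 0) {
                correct++;
                goldCounts.put(b, c - 1);  // consume one gold match
            }
        }
        double p = guess.isEmpty() ? 0.0 : (double) correct / guess.size();
        double r = gold.isEmpty() ? 0.0 : (double) correct / gold.size();
        double f1 = (p + r == 0.0) ? 0.0 : 2 * p * r / (p + r);
        return new double[]{p, r, f1};
    }

    public static void main(String[] args) {
        // Guess gets the VP span wrong, so 2 of 3 brackets match.
        List<Bracket> gold = List.of(
            new Bracket("NP", 0, 2), new Bracket("VP", 2, 5), new Bracket("S", 0, 5));
        List<Bracket> guess = List.of(
            new Bracket("NP", 0, 2), new Bracket("VP", 3, 5), new Bracket("S", 0, 5));
        double[] prf = score(guess, gold);
        System.out.printf("P=%.3f R=%.3f F1=%.3f%n", prf[0], prf[1], prf[2]);
        // prints P=0.667 R=0.667 F1=0.667
    }
}
```

EvalbByCat applies the same matching per category label, and TaggingEval applies it to (tag, position) pairs instead of brackets.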
Stanford NLP Group