Package edu.stanford.nlp.parser.metrics

Class Summary
AbstractEval A framework for Set-based precision/recall/F1 evaluation.
AbstractEval.CatErrorEval Counts which categories are over- and under-proposed in trees.
AbstractEval.RuleErrorEval Counts which rules are over- and under-proposed in trees.
AbstractEval.ScoreEval Accumulates and averages a single numeric score per tree; not strictly a Set-based AbstractEval.
DependencyEval Evaluates the dependency accuracy of a tree, based on HeadFinder dependency judgments.
Evalb A Java re-implementation of the evalb bracket-scoring metric (Collins, 1997) that accepts Unicode input.
Evalb.CBEval Computes crossing-brackets statistics in the style of evalb.
EvalbByCat Computes labeled precision and recall (evalb) per constituent category.
LeafAncestorEval Implementation of the leaf-ancestor metric, first described by Sampson and Babarczy (2003) and analyzed more completely by Clegg and Shepherd (2005).
TaggingEval Computes POS tagging precision/recall/F1 from guess/gold trees.
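The evalb-style metric these classes compute can be sketched without the library: a guess bracket counts as correct if an identical labeled span (category, start, end) appears in the gold tree, and labeled precision, recall, and F1 follow from the match count. The minimal sketch below is an illustration of that scoring rule, not the package's actual API; the `Span` record and `prf1` method are hypothetical names, and it treats brackets as a set (so duplicate brackets from unary chains, which evalb counts as a multiset, are collapsed).

```java
import java.util.*;

public class BracketScore {
    // A labeled constituent span: category label over tokens [start, end).
    // Hypothetical helper type, not part of edu.stanford.nlp.parser.metrics.
    record Span(String label, int start, int end) {}

    // Labeled bracket P/R/F1 in the style of evalb: a guess bracket is
    // correct iff the same (label, start, end) triple occurs in the gold set.
    static double[] prf1(List<Span> guess, List<Span> gold) {
        Set<Span> goldSet = new HashSet<>(gold);
        long correct = guess.stream().filter(goldSet::contains).count();
        double p = guess.isEmpty() ? 0.0 : (double) correct / guess.size();
        double r = gold.isEmpty() ? 0.0 : (double) correct / gold.size();
        double f1 = (p + r == 0.0) ? 0.0 : 2 * p * r / (p + r);
        return new double[] {p, r, f1};
    }

    public static void main(String[] args) {
        // Gold parse has three brackets; the guess gets two of them right
        // but misplaces the VP span.
        List<Span> gold = List.of(
                new Span("NP", 0, 2), new Span("VP", 2, 5), new Span("S", 0, 5));
        List<Span> guess = List.of(
                new Span("NP", 0, 2), new Span("VP", 3, 5), new Span("S", 0, 5));
        double[] s = prf1(guess, gold);
        System.out.printf("P=%.3f R=%.3f F1=%.3f%n", s[0], s[1], s[2]);
    }
}
```

EvalbByCat applies the same computation restricted to brackets of one category at a time, which is why its results decompose the overall evalb score.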
 
Stanford NLP Group