Package edu.stanford.nlp.parser.metrics

Class Summary
AbstractEval - A framework for set-based precision/recall/F1 evaluation.
AbstractEval.CatErrorEval - Counts which categories are over- and under-proposed in trees.
AbstractEval.RuleErrorEval
AbstractEval.ScoreEval - This isn't really a kind of AbstractEval: we're sort of cheating here.
BestOfTopKEval - Applies an AbstractEval to a list of trees to pick the best tree by the F1 measure.
Evalb - A Java re-implementation of the evalb bracket-scoring metric (Collins, 1997) that accepts Unicode input.
Evalb.CBEval
EvalbByCat - Computes labeled precision and recall (evalb) at the level of individual constituent categories.
FilteredEval - An AbstractEval that, rather than evaluating all constituents, accepts a filter so that only constituents formed from certain subtrees are scored.
LeafAncestorEval - An implementation of the Leaf Ancestor metric first described by Sampson and Babarczy (2003) and later analyzed more completely by Clegg and Shepherd (2005).
TaggingEval - Computes POS tagging precision/recall/F1 from guess/gold trees.
UnlabeledAttachmentEval - Dependency unlabeled attachment score (UAS).
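To make the shared framework concrete, here is a minimal, self-contained sketch of the set-based precision/recall/F1 computation that AbstractEval formalizes. The class name `SetF1Sketch`, the plain-string bracket labels, and the `f1` method are illustrative assumptions; the real evaluators operate on sets of Constituent objects extracted from guess and gold Tree instances.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: set-based precision/recall/F1 over string-labeled
// brackets, mirroring the computation AbstractEval performs on Constituent sets.
public class SetF1Sketch {

  public static double f1(Set<String> guess, Set<String> gold) {
    if (guess.isEmpty() || gold.isEmpty()) {
      return 0.0;
    }
    // True positives: brackets proposed by the guess that also occur in the gold set.
    Set<String> overlap = new HashSet<>(guess);
    overlap.retainAll(gold);
    double precision = (double) overlap.size() / guess.size();
    double recall = (double) overlap.size() / gold.size();
    if (precision + recall == 0.0) {
      return 0.0;
    }
    // Harmonic mean of precision and recall.
    return 2.0 * precision * recall / (precision + recall);
  }

  public static void main(String[] args) {
    // Hypothetical labeled spans "CATEGORY(start,end)"; two of three match.
    Set<String> guess = new HashSet<>(Set.of("NP(0,2)", "VP(2,5)", "PP(3,5)"));
    Set<String> gold = new HashSet<>(Set.of("NP(0,2)", "VP(2,5)", "NP(3,5)"));
    System.out.println(f1(guess, gold)); // precision = recall = 2/3, so F1 is about 0.667
  }
}
```

The same skeleton underlies the concrete evaluators above: Evalb fills the sets with labeled bracket spans, TaggingEval with (position, tag) pairs, and UnlabeledAttachmentEval with (dependent, head) arcs.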
 
Stanford NLP Group