Package edu.stanford.nlp.parser.metrics

Class Summary
AbstractEval                A framework for Set-based precision/recall/F1 evaluation.
AbstractEval.CatErrorEval   This class counts which categories are over- and under-proposed in trees.
AbstractEval.RuleErrorEval
AbstractEval.ScoreEval      This isn't really a kind of AbstractEval: we're sort of cheating here.
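
The AbstractEval framework is built around precision, recall, and F1 computed over sets of guessed versus gold items. The following is a minimal sketch of that set-based computation in plain Java; the class name SetPRF1Demo and the string-encoded constituents are illustrative assumptions and do not reflect the actual AbstractEval API.

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch of set-based precision/recall/F1, the kind of
    // computation the AbstractEval framework performs. Not the real API.
    public class SetPRF1Demo {
        public static void main(String[] args) {
            // Guessed and gold constituents, encoded here as simple
            // "label:span" strings purely for illustration.
            Set<String> guessed = new HashSet<>(Set.of("NP:0-2", "VP:2-5", "PP:3-5"));
            Set<String> gold    = new HashSet<>(Set.of("NP:0-2", "VP:2-5", "NP:3-5"));

            // Correct items are those proposed by the guess and present in gold.
            Set<String> correct = new HashSet<>(guessed);
            correct.retainAll(gold);

            double precision = guessed.isEmpty() ? 0.0 : (double) correct.size() / guessed.size();
            double recall    = gold.isEmpty()    ? 0.0 : (double) correct.size() / gold.size();
            double f1 = (precision + recall == 0.0) ? 0.0
                    : 2.0 * precision * recall / (precision + recall);

            System.out.printf("P = %.3f, R = %.3f, F1 = %.3f%n", precision, recall, f1);
        }
    }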