Creates a document from the given document parameters.
Runs inference on the given document until convergence.
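The iterate-until-convergence behavior described here can be sketched as a generic fixed-point loop. Everything below (the class name, the `step` update rule, the tolerance, and the iteration cap) is an illustrative assumption, not the model's actual inference procedure:

```java
// Sketch of an inference loop that runs until convergence. The step()
// rule here is a stand-in (Newton's method for sqrt(2)); a real model
// would update its variational or sampling state instead.
public class InferenceLoopSketch {
    static final double TOLERANCE = 1e-6;
    static final int MAX_ITERATIONS = 1000;

    /** Repeats step() until successive values differ by less than TOLERANCE. */
    static double inferUntilConvergence(double initial) {
        double value = initial;
        for (int i = 0; i < MAX_ITERATIONS; i++) {
            double next = step(value);
            if (Math.abs(next - value) < TOLERANCE) {
                return next;  // converged
            }
            value = next;
        }
        return value;  // hit the iteration cap without converging
    }

    // Stand-in update rule: fixed-point iteration toward sqrt(2).
    static double step(double x) {
        return 0.5 * (x + 2.0 / x);
    }
}
```

The iteration cap guards against non-converging updates; a real implementation would typically log a warning when it is hit.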
Returns the probability of the given term in the given topic.
The parameters used to create this model.
Resets to the default state.
Gets the current state of this object.
Sets the current state of this object.
Asserts invariants.
Returns the distribution over terms for the given topic. The returned distribution is assumed to have already incorporated the corresponding getTermSmoothing value.
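The statement that the returned distribution already incorporates term smoothing can be illustrated with add-eta smoothing over raw per-topic term counts. This is a hypothetical sketch, not the model's actual implementation:

```java
// Sketch of a smoothed per-topic term distribution: each term's raw
// count is incremented by the prior count eta before normalizing, so
// the returned probabilities already incorporate the smoothing.
public class TermDistributionSketch {
    /** Returns p(term | topic) with add-eta smoothing baked in. */
    static double[] termDistribution(int[] termCounts, double eta) {
        double total = 0.0;
        for (int c : termCounts) {
            total += c + eta;  // smoothed mass for normalization
        }
        double[] dist = new double[termCounts.length];
        for (int t = 0; t < termCounts.length; t++) {
            dist[t] = (termCounts[t] + eta) / total;
        }
        return dist;
    }
}
```

Note that smoothing gives unseen terms (count zero) a small nonzero probability, which is why callers must not smooth a second time.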
Runs inference on the given document until convergence.
Returns the global topic number for the given topic within the given label.
Starting offset within the topic array for topics with the given label.
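The offset-based addressing described above can be sketched as follows: each label owns a contiguous block of the topic array, so a topic's global number is its label's starting offset plus a local index. The names and the array-based bookkeeping are illustrative assumptions:

```java
// Sketch of label-to-topic-block bookkeeping, assuming each label l
// owns topicsPerLabel[l] consecutive slots in the global topic array.
public class LabelTopicSketch {
    /** Starting offset within the topic array for the given label. */
    static int topicOffset(int[] topicsPerLabel, int label) {
        int offset = 0;
        for (int l = 0; l < label; l++) {
            offset += topicsPerLabel[l];
        }
        return offset;
    }

    /** Global topic number for the k-th topic of the given label. */
    static int topicNumber(int[] topicsPerLabel, int label, int k) {
        return topicOffset(topicsPerLabel, label) + k;
    }
}
```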
Where log messages go. Defaults to System.err.println.
The number of terms in the model.
The number of topics in the model.
Returns the probability of the given term in the given topic.
Registers a function as a checker of invariants.
The term index describing which terms are in the model.
Add-k prior counts for each term (eta in the model formulation).
Tokenizes the given input string using the stored tokenizer and term index, if available; otherwise, throws an IllegalArgumentException.
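The tokenize-or-throw behavior can be sketched against a simple term index. The Map-based index, the whitespace splitting, and the silent dropping of out-of-vocabulary tokens are all illustrative assumptions about how such a method might work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of tokenization against a term index: split the input into
// tokens, keep only terms the index knows, and throw
// IllegalArgumentException when no term index is available.
public class TokenizeSketch {
    static List<Integer> tokenize(String input, Map<String, Integer> termIndex) {
        if (termIndex == null) {
            throw new IllegalArgumentException("No term index available");
        }
        List<Integer> ids = new ArrayList<>();
        for (String token : input.toLowerCase().split("\\s+")) {
            Integer id = termIndex.get(token);
            if (id != null) {
                ids.add(id);  // silently drop out-of-vocabulary terms
            }
        }
        return ids;
    }
}
```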
The tokenizer used to break input documents into terms.
The term index describing which terms are in the model.
Gets the name for this topic.
Prior counts for each topic (alpha in the model formulation).
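The role of these per-topic prior counts can be sketched by smoothing a document's topic proportions: each topic's assignment count is incremented by its alpha before normalizing. This is a hypothetical sketch with an asymmetric per-topic alpha array, not the model's actual code:

```java
// Sketch of per-document topic proportions smoothed by per-topic prior
// counts alpha[k]: theta[k] = (count[k] + alpha[k]) / total.
public class TopicPriorSketch {
    static double[] topicProportions(int[] topicCounts, double[] alpha) {
        double total = 0.0;
        for (int k = 0; k < topicCounts.length; k++) {
            total += topicCounts[k] + alpha[k];  // smoothed mass
        }
        double[] theta = new double[topicCounts.length];
        for (int k = 0; k < topicCounts.length; k++) {
            theta[k] = (topicCounts[k] + alpha[k]) / total;
        }
        return theta;
    }
}
```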
PLDA models.