Describes the top-k terms in a topic by grouping.
Describes the usage of a topic within a grouping.
Describes the top-k terms in a topic.
Estimates the per-word topic distributions using the given model counts and the per-document topic distributions.
Estimates the per-word topic distributions using the given model counts and the per-document topic distributions. This is not as exact as inference, but is nearly so, and is much faster.
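As an illustration of the estimation step described above, here is a minimal sketch (not the library's actual API) of recovering per-word topic responsibilities from smoothed topic-word counts and a per-document topic distribution. The array names, the `beta` smoothing parameter, and the normalization scheme are assumptions for the example.

```python
import numpy as np

def estimate_word_topics(topic_word_counts, doc_topics, beta=0.01):
    """Estimate per-word topic responsibilities for one document.

    topic_word_counts: (K, V) array of model counts n[k][w]
    doc_topics:        (K,) per-document topic distribution theta[d]
    Returns a (V, K) array where row w is p(z=k | w, d).
    """
    K, V = topic_word_counts.shape
    # Smoothed per-topic word probabilities phi[k][w]
    phi = (topic_word_counts + beta) / (
        topic_word_counts.sum(axis=1, keepdims=True) + V * beta)
    # Responsibility of topic k for word w, weighted by the document's topics
    resp = phi.T * doc_topics          # (V, K)
    return resp / resp.sum(axis=1, keepdims=True)
```

This point estimate avoids re-running full inference over the document, which is why it is faster but slightly less exact.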
Returns an array of per-topic probabilities.
Loads a CVB0LDA model from the given path.
Loads a CVB0LabeledLDA model from the given path.
Loads a CVB0PLDA model from the given path.
Loads a GibbsLDA model from the given path.
Loads a GibbsLabeledLDA model from the given path.
Loads the document-topic assignments from the given path.
Gets the top terms in each topic, counting term instances separately for members of each group.
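A minimal sketch of the grouped top-terms computation described above, assuming token-level topic assignments tagged with a group label (the triple format and function name are hypothetical, not the library's API):

```python
from collections import Counter, defaultdict

def top_terms_by_group(assignments, k=3):
    """assignments: iterable of (group, topic, term) triples, one per token.

    Returns {(group, topic): [(term, count), ...]} listing the k most
    frequent terms for that topic, counted separately within each group.
    """
    counts = defaultdict(Counter)
    for group, topic, term in assignments:
        counts[(group, topic)][term] += 1
    return {key: c.most_common(k) for key, c in counts.items()}
```

Keeping a separate counter per (group, topic) pair is what distinguishes this from the ungrouped top-terms query, which would pool all groups together.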
Returns the top terms associated with the model.
Gets the usage of each topic by sub-group.
Gets the usage of each topic overall within a corpus.
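Corpus-wide topic usage can be computed by averaging the per-document topic distributions, optionally weighting each document by its length. The following is an illustrative sketch under those assumptions, not the library's implementation:

```python
import numpy as np

def topic_usage(doc_topics, doc_lengths=None):
    """doc_topics: (D, K) per-document topic distributions.
    doc_lengths: optional (D,) token counts used to weight documents.

    Returns a (K,) vector giving each topic's share of the corpus.
    """
    doc_topics = np.asarray(doc_topics, dtype=float)
    if doc_lengths is None:
        usage = doc_topics.mean(axis=0)          # unweighted average
    else:
        w = np.asarray(doc_lengths, dtype=float)
        usage = (doc_topics * w[:, None]).sum(axis=0) / w.sum()
    return usage / usage.sum()                   # normalize to a distribution
```

Restricting the input rows to the documents of one sub-group yields the per-group usage variant described above.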
Trains a CVB0LDA model using the given model parameters.
Trains a CVB0LabeledLDA model using the given model parameters.
Trains a CVB0PLDA model using the given model parameters.
Trains a GibbsLDA model using the given model parameters.
Trains a GibbsLabeledLDA model using the given model parameters.
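To illustrate the Gibbs-based training mentioned above, here is a self-contained sketch of a collapsed Gibbs sampler for plain LDA. It is a generic textbook implementation, not the library's code; the function name, hyperparameter defaults, and integer-token document format are all assumptions for the example.

```python
import random

def train_gibbs_lda(docs, num_topics, alpha=0.1, beta=0.01, iters=50, seed=0):
    """docs: list of token lists (tokens are ints in [0, vocab_size)).

    Returns (doc_topic_counts, topic_word_counts) after `iters` sweeps
    of collapsed Gibbs sampling.
    """
    rng = random.Random(seed)
    vocab = 1 + max(w for doc in docs for w in doc)
    ndk = [[0] * num_topics for _ in docs]           # doc-topic counts
    nkw = [[0] * vocab for _ in range(num_topics)]   # topic-word counts
    nk = [0] * num_topics                            # per-topic totals
    z = []                                           # current assignments
    # Random initialization of topic assignments
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(num_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    # Gibbs sweeps: resample each token's topic from its collapsed conditional
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + vocab * beta)
                           for t in range(num_topics)]
                k = rng.choices(range(num_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw
```

The labeled variants differ mainly in constraining each document's admissible topics to its label set; the sampling step itself is the same.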