Bayesian smoothing through text classification

Tom Griffiths
gruffydd@psych.stanford.edu

Estimation of sparse multinomial distributions is an important component of many statistical learning tasks, and is particularly relevant to natural language processing. From their origins in [#!shannon51!#], statistical approaches to natural language typically require knowledge of the probability of a symbol or word conditioned upon some context. These probabilities are usually estimated from limited data, and require some form of smoothing before they can be used in a particular task. Consequently, a number of approaches to smoothing multinomial distributions have been suggested, typically combining statistical notions with heuristic strategies [#!cheng98!#]. In this paper, I present a generalization of a probabilistically correct approach to parameter estimation for sparse multinomial distributions. This new approach takes advantage of specific knowledge that we might have about a particular domain, such as natural language.

Bayesian parameter estimation

Perhaps the simplest and most elegant method of estimating a multinomial distribution is a generalization of an approach originally taken by Laplace. In the following discussion, I follow the presentation in [#!friedmans98!#]. For a language $\Sigma$ containing $L$ distinct symbols, a multinomial distribution is specified by a parameter vector ${\bf\theta} = \langle \theta_1, \ldots , \theta_L\rangle$, where $\theta_i$ is the probability of an observation being symbol $i$. Consequently, we have the constraints that $\sum_{i = 1}^L \theta_i = 1$ and $\theta_i \geq 0, i = 1, \ldots, L$. The task of multinomial estimation is to take a data set $D$ and produce a vector ${\bf\theta}$ that results in a good approximation to the distribution that produced $D$. In this case, a data set $D$ consists of $N$ independent observations $x^1, \ldots, x^N$ drawn from the distribution to be estimated, which can be summarised by the statistics $N_i$ specifying the number of times the $i$th symbol occurs in the data. $D$ also specifies the set $\Sigma^0$ of symbols that occur at least once in the training data.

Stated in this way, the task of multinomial estimation can be framed as one of predicting the next observation based on the data. Specifically, we wish to calculate $P(x^{N+1}\vert D)$. The Bayesian estimate for this probability is given by

\begin{displaymath}
P_L(x^{N+1}\vert D) = \int P(x^{N+1}\vert{\bf\theta}) P({\bf\theta}\vert D) \, d{\bf\theta}
\end{displaymath}

where $P(x^{N+1}\vert{\bf\theta})$ is given by the multinomial distribution corresponding to ${\bf\theta}$. The posterior probability $P({\bf \theta}\vert D)$ can be obtained via Bayes rule

\begin{displaymath}
P({\bf\theta}\vert D) \propto P(D\vert{\bf\theta}) P({\bf\theta}) = P({\bf \theta}) \prod_{i=1}^L \theta_i^{N_i}
\end{displaymath}

where $P({\bf\theta})$ is the prior probability of a given ${\bf\theta}$.

The above approach to estimating the parameters of a multinomial distribution was first exploited by Laplace [#!laplace95!#], who took a uniform prior over ${\bf\theta}$ to give the famous ``law of succession'': $P(X^{N+1} = i\vert D) = \frac{N_i+1}{N+L}$. A more general approach is to assume a Dirichlet prior over ${\bf\theta}$, which is conjugate to the multinomial distribution and gives

\begin{displaymath}
P(X^{N+1} = i \vert D) = \frac{N_i + \alpha_i}{N + \sum_{j = 1}^L \alpha_j}
\end{displaymath} (1)

where the $\alpha_i$ are the hyperparameters of the Dirichlet distribution. Different estimates are obtained for different choices of the $\alpha_i$, with most approaches making the simplifying assumption that $\alpha_i = \alpha$ for all $i$. Laplace's law results from $\alpha = 1$. In the natural language processing community, the case with $\alpha = 0.5$ is known as the Jeffreys-Perks law or Expected Likelihood Estimation [#!boxt73!#] [#!jeffreys46!#] [#!perks47!#], and the expression with arbitrary $\alpha$ is called Lidstone's law [#!lidstone20!#]. This simple Bayesian approach can be further elaborated to provide justification for some of the heuristic strategies that have been used in smoothing natural language data [#!mackayp95!#].
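To make the notation concrete, the following sketch (in Python, with illustrative function names) computes the Dirichlet-smoothed estimate of Equation 1 from raw counts; setting $\alpha$ to 1, 0.5, or an arbitrary value gives Laplace's law, the Jeffreys-Perks law, or Lidstone's law respectively.

\begin{verbatim}
from collections import Counter

def lidstone_estimate(counts, L, alpha=1.0):
    """Equation 1: P(X^{N+1} = i | D) = (N_i + alpha) / (N + L * alpha).

    counts : dict mapping symbol -> N_i (symbols absent from the dict have N_i = 0)
    L      : alphabet size
    alpha  : Dirichlet hyperparameter (1 gives Laplace's law, 0.5 the Jeffreys-Perks law)"""
    N = sum(counts.values())
    denom = N + L * alpha
    return lambda i: (counts.get(i, 0) + alpha) / denom

# toy example: alphabet of 4 symbols, data "aaab"
p = lidstone_estimate(Counter("aaab"), L=4, alpha=1.0)
print(p("a"), p("c"))   # 0.5 0.125
\end{verbatim}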

Sparse multinomial distributions

The simple Bayesian method outlined above is appropriate for the general task of multinomial estimation, but generally provides poor results when used for smoothing of sparse multinomial distributions. This is primarily a consequence of the erroneous assumption that all $L$ categories should be considered as possible values for $X^{N+1}$. In actuality, sparse multinomial distributions are characterized by the fact that only a few symbols actually occur. In such cases, applying the above method will give too much probability to symbols that never occur and consequently give a poor estimate of the true distribution.

In order to extend the Bayesian approach to sparse multinomial distributions, several authors have used the notion of maintaining uncertainty over the vocabulary from which observations are produced as well as their probabilities. In [#!ristad95!#], Ristad uses assumptions about the probability of strings based upon different vocabularies to give the estimate

\begin{displaymath}
P_C(X^{N+1}=i\vert D) = \left \{
\begin{array}{ll}
(N_i+1)/(N+L) & \mbox{if $k^0 = L$} \\
(N_i+1)(N+1-k^0)/(N^2+N+2k^0) & \mbox{if $k^0 < L$ and $N_i > 0$} \\
k^0(k^0+1)/(L-k^0)(N^2+N+2k^0) & \mbox{otherwise}
\end{array}\right.
\end{displaymath}

where $k^0$ is the size of the smallest vocabulary that can accommodate the data, corresponding to the cardinality of $\Sigma^0$. Ristad shows that this method performs well in a number of different contexts, consistently making better predictions than Laplace's law and its cognates.
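The estimator is simple to state in code. The sketch below (illustrative function names) returns the predictive probability for any symbol; a useful sanity check is that the three branches sum to one over the alphabet.

\begin{verbatim}
def ristad_natural_law(counts, L):
    """Ristad's natural law of succession, as given above.

    counts : dict of observed symbol counts N_i
    L      : alphabet size
    Returns a function mapping a symbol to its predictive probability."""
    N = sum(counts.values())
    k0 = len(counts)                    # number of distinct symbols observed
    def prob(i):
        if N == 0:                      # the expressions above are indeterminate for N = 0,
            return 1.0 / L              # so fall back to the uniform distribution
        Ni = counts.get(i, 0)
        if k0 == L:                     # every symbol has already been seen
            return (Ni + 1) / (N + L)
        if Ni > 0:                      # observed symbol, incomplete alphabet
            return (Ni + 1) * (N + 1 - k0) / (N * N + N + 2 * k0)
        return k0 * (k0 + 1) / ((L - k0) * (N * N + N + 2 * k0))   # unseen symbol
    return prob
\end{verbatim}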

A more explicitly Bayesian approach is taken by Friedman and Singer in [#!friedmans98!#], who also point out that Ristad's method can be considered a special case of the framework that they present. Friedman and Singer's approach considers the vocabulary $V \subseteq \Sigma$ to be a random variable, allowing them to write

\begin{displaymath}
P(X^{N+1}=i\vert D) = \sum_V P(X^{N+1}=i\vert V,D) P(V\vert D)
\end{displaymath}

where $P(X^{N+1}=i\vert V,D)$ is given by assuming a Dirichlet ($\alpha$) prior over the symbols in $V$. Thus we have
\begin{displaymath}
P(X^{N+1}=i\vert V,D) = \left \{ \begin{array}{ll}
\frac{N_i+\alpha}{N+\vert V\vert\alpha} & \mbox{if $i \in V$} \\
0 & \mbox{otherwise}
\end{array} \right.
\end{displaymath} (2)

Friedman and Singer go on to define a hierarchical prior over $V$, such that all vocabularies of cardinality $k$ are given the same probability, namely $P(S=k)/\binom{L}{k}$, where $P(S=k)$ is the probability that the size of the vocabulary ($\vert V\vert$) is $k$. It follows that if $i \in \Sigma^0$, $P(X^{N+1}=i \vert D) = \sum_k \frac{N_i+\alpha}{N+k\alpha} P(S=k\vert D)$. If $i \not\in \Sigma^0$, it is necessary to account for the proportion of vocabularies of cardinality $k$ that contain $i$. The simplified result is

\begin{displaymath}
P(X^{N+1}=i\vert D) = \left \{ \begin{array}{ll}
\frac{N_i + \alpha}{N + k^0\alpha}C(D,L) & \mbox{if $i \in \Sigma^0$} \\
\frac{1}{L-k^0}(1-C(D,L)) & \mbox{otherwise}
\end{array} \right.
\end{displaymath}

where

\begin{displaymath}
C(D,L) = \frac{\sum_{k=k^0}^L \frac{N +
k^0\alpha}{N+k\alpha}m_k}{\sum_{k'=k^0}^{L} m_{k'}}
\end{displaymath}

with $m_k = P(S=k)
\frac{k!}{(k-k^0)!}\cdot\frac{\Gamma({k\alpha})}{\Gamma(N+k\alpha)}$. Friedman and Singer also point out that this distribution remains well-defined in the case where the alphabet is unbounded, and show that their method gives good performance in a simple character-prediction task.
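Because the $\Gamma$ ratios in $m_k$ overflow for realistic $N$ and $L$, any implementation has to work in log space. The sketch below is a minimal illustration of the expressions above, assuming a uniform prior $P(S=k)$ (in line with the uniform priors used in the empirical work reported later) and at least one observation; function names are illustrative.

\begin{verbatim}
import math

def _logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    xs = list(xs)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def friedman_singer(counts, L, alpha=0.25):
    """Predictive distribution of Friedman and Singer, following the expressions above.

    counts : dict of observed symbol counts (assumed non-empty)
    L      : alphabet size
    A uniform prior P(S=k) over k = k0..L is assumed."""
    N = sum(counts.values())
    k0 = len(counts)
    # log m_k = log P(S=k) + log[k!/(k-k0)!] + log Gamma(k*alpha) - log Gamma(N + k*alpha)
    log_mk = {k: (-math.log(L - k0 + 1)
                  + math.lgamma(k + 1) - math.lgamma(k - k0 + 1)
                  + math.lgamma(k * alpha) - math.lgamma(N + k * alpha))
              for k in range(k0, L + 1)}
    log_norm = _logsumexp(log_mk.values())
    # C(D, L) = sum_k [(N + k0*alpha) / (N + k*alpha)] P(S=k | D)
    C = sum(math.exp(lm - log_norm) * (N + k0 * alpha) / (N + k * alpha)
            for k, lm in log_mk.items())
    def prob(i):
        if i in counts:                       # i in Sigma^0
            return (counts[i] + alpha) / (N + k0 * alpha) * C
        return (1.0 - C) / (L - k0)           # mass left over for unseen symbols
    return prob
\end{verbatim}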

Making use of knowledge

Friedman and Singer's approach assumes a prior that gives equal probability to all vocabularies of a given cardinality. This assumption aids in obtaining an efficient specification for $P(X^{N+1}=i\vert V,D)$. In many real-world tasks, we have at least some knowledge about the structure of the task that we might like to build into our methods for parameter estimation. One example of such knowledge might be the expectation that the symbols used by a sparse multinomial distribution will come from one of a few restricted vocabularies which we can pre-specify. For example, in predicting the next character in a file, our predictions might be facilitated by considering the fact that most files either use a vocabulary consisting of just the ASCII printing characters (such as text files), or all possible characters (such as object files). In such a case, giving equal prior probability to all vocabularies of a given cardinality may have negative consequences.

In the case where our knowledge about a domain leads us to specify some known set of vocabularies ${\bf V}_{\omega}$, we can write

\begin{displaymath}
P(V\vert D) = P(V\vert\omega,D)P(\omega\vert D) +
P(V\vert\overline{\omega},D)P(\overline{\omega}\vert D)
\end{displaymath}

where $\omega$ specifies whether we are considering the distribution over $V$ implied by our knowledge of the domain, which will be restricted to ${\bf V}_{\omega}$, or the knowledge-free distribution over $V$ used by Friedman and Singer. Consequently,
\begin{eqnarray*}
P_B(X^{N+1}=i\vert D) & = & \sum_V P(X^{N+1}=i\vert V,D) P(V\vert\overline{\omega},D)P(\overline{\omega}\vert D) \\
& & + \sum_{V\in{\bf V}_{\omega}} P(X^{N+1}=i\vert V,D) P(V\vert\omega,D)P(\omega\vert D) \\
& \propto & P(X^{N+1} = i\vert D, \overline{\omega})P(D\vert\overline{\omega}) P(\overline{\omega}) \\
& & + \sum_{V\in{\bf V}_{\omega}} P(X^{N+1}=i\vert V,D)P(D\vert V, \omega) P(V\vert\omega) P(\omega)
\end{eqnarray*} (3)

which is a mixture of two distributions: the first being that produced by Friedman and Singer's approach, and the second being produced by the same approach applied to a restricted set of hypotheses. From [#!friedmans98!#] (Equation 10) it follows that

\begin{displaymath}
P(D\vert\overline{\omega}) =
\frac{(L-k^0)!}{L!} \prod_{i \in \Sigma^0} \frac{\Gamma(N_i+\alpha)}{\Gamma(\alpha)}
\sum_{k={k^0}}^L m_k
\end{displaymath}

and, from the properties of Dirichlet priors,

\begin{displaymath}
P(D\vert V, \omega) = \left \{ \begin{array}{ll}
\frac{\Gamma(\vert V\vert\alpha)}{\Gamma(N+\vert V\vert\alpha)} \prod_{i \in \Sigma^0} \frac{\Gamma(N_i+\alpha)}{\Gamma(\alpha)} & \mbox{if $\Sigma^0 \subseteq V$} \\
0 & \mbox{otherwise}
\end{array}\right.
\end{displaymath}

Finally, $P(X^{N+1}\vert V,D)$ is given in Equation 2. Thus we need only define the priors $P(S=k\vert\overline{\omega})$, $P(V\vert\omega)$, and $P(\omega)$, the parameter $\alpha$, and ${\bf V}_{\omega}$ to fully specify the distribution for a given $D$. While $\alpha$ and ${\bf V}_{\omega}$ should be set differently for specific domains, I use uniform priors in each of the empirical investigations discussed below.
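As a concrete illustration, the sketch below computes the restricted-vocabulary component $P_\omega$ alone, before the noise process introduced later: it evaluates the Dirichlet evidence $P(D\vert V,\omega)$ for each candidate vocabulary, normalizes to obtain the posterior over ${\bf V}_\omega$ (uniform $P(V\vert\omega)$ assumed), and mixes the within-vocabulary predictions of Equation 2. Function names are illustrative, and the candidate set is assumed to include the full alphabet so that at least one vocabulary contains every observed symbol.

\begin{verbatim}
import math

def log_evidence(counts, V, alpha):
    """log P(D | V, omega) for one candidate vocabulary V (a set of symbols),
    using the Dirichlet expression above; -inf if an observed symbol falls outside V."""
    if any(i not in V for i in counts):
        return float("-inf")
    N = sum(counts.values())
    lp = math.lgamma(len(V) * alpha) - math.lgamma(N + len(V) * alpha)
    for n in counts.values():
        lp += math.lgamma(n + alpha) - math.lgamma(alpha)
    return lp

def p_omega(counts, vocabularies, alpha):
    """P_omega(X^{N+1} = i | D): Dirichlet predictions within each candidate vocabulary,
    mixed according to the posterior over V (uniform P(V | omega) assumed).
    Assumes the candidate set includes the full alphabet, so at least one
    vocabulary contains every observed symbol."""
    N = sum(counts.values())
    log_post = [log_evidence(counts, V, alpha) for V in vocabularies]
    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    Z = sum(weights)
    def prob(i):
        total = 0.0
        for V, w in zip(vocabularies, weights):
            if w > 0.0 and i in V:
                total += (w / Z) * (counts.get(i, 0) + alpha) / (N + len(V) * alpha)
        return total
    return prob
\end{verbatim}

Mixing this component with the Friedman and Singer term according to Equation 3 requires only the additional weights $P(D\vert\overline{\omega})P(\overline{\omega})$ and $P(\omega)$.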

The intuition behind this approach is that it adds to Friedman and Singer's method a second process that attempts to classify the target distribution as one of a number of known distributions, and uses the posterior probability of these distributions for full Bayesian smoothing. However, rather than using the potentially fragile approach of classifying the data based upon the distribution over symbols, the model attempts to classify the data in terms of a set of known vocabularies. Applying standard Bayesian multinomial estimation within each of these vocabularies gives sufficient flexibility for the method to capture a range of distributions, while allowing prior knowledge to play an important role in informing the results.

An illustration: Text compression

Text compression provides an effective test of methods for multinomial estimation. One approach to adaptive coding involves specifying a method for calculating a distribution over the probability of the next byte in a file based upon the preceding bytes [#!clearywb90!#]. The extent to which the file can be compressed will depend upon the quality of the resulting predictions. This method was explicitly used by Ristad to assess his approach to multinomial estimation, and implicitly used by Friedman and Singer.

To illustrate the utility of including prior knowledge in multinomial estimation, I will follow Ristad in examining the performance of these various methods on the Calgary text compression corpus [#!clearywb90!#]. This corpus consists of the 18 files listed in Table 1, each containing some subset of the 256 possible characters in some order ($\Sigma = \{ \mbox{char}(0), \ldots, \mbox{char}(255) \}$, $L = 256$). The files include BibTeX source (bib), formatted English text (book1, book2, paper1, paper2, paper3, paper4, paper5, paper6), geological data (geo), newsgroup postings (news), object files (obj1, obj2), a bit-mapped monochrome picture (pic), programs in three different languages (progc, progl, progp) and a terminal transcript (trans). The task was to repeatedly estimate the multinomial distribution from which characters in the file were drawn based upon the first $N$ characters, and use this distribution to predict the $N+1$st character. Performance was measured in terms of the length of the resulting file, where the contribution the $N+1$st character makes to the length is $-\log_2 P(x^{N+1}\vert D)$ bits. The final file length is thus the sum of these per-character code lengths, corresponding to the cross entropy between the file and the successive predictive distributions, and provides a good measure of the quality of the estimator across a range of values of $N$.
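The evaluation loop itself is straightforward; the sketch below (illustrative names) accumulates the adaptive code length for a sequence given any of the predictive estimators discussed above.

\begin{verbatim}
import math
from collections import Counter

def code_length_bits(data, estimator, L):
    """Accumulate the adaptive code length of a sequence: predict each symbol from the
    preceding ones and charge -log2 P(x^{N+1} | D) bits.  `estimator(counts, L)` should
    return a predictive function, e.g. lidstone_estimate or ristad_natural_law from the
    sketches above."""
    counts = Counter()
    bits = 0.0
    for x in data:
        predict = estimator(counts, L)
        bits += -math.log2(predict(x))
        counts[x] += 1            # the observation now becomes part of D
    return bits

# e.g. code_length_bits(open("bib", "rb").read(), ristad_natural_law, L=256)
\end{verbatim}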

Text compression is a domain in which files are likely to fall into one of a small number of categories, so giving extra weight to specific vocabularies can be of great utility. For the predictions of the extended Bayesian approach outlined above, the ``known'' vocabularies corresponded to the characters that occurred in a separate set of files each containing between 0.5 and 2 megabytes of BibTeX source, English text, C code, LISP code, and newsgroup postings. The resulting vocabularies identified 100, 92, 100, 157, and 102 specific characters as belonging to documents of a particular type. Finally, a vocabulary was added to cover files that use all 256 characters. Together, these six vocabularies specified ${\bf V}_{\omega}$. $\alpha$ was set to $0.25$, as was done by Friedman and Singer in their experiments with characters. The text compression results are shown in Table 1.

\begin{tabular}{lrrrrrrrrr}
file & size & $k^0$ & $N H(N_i/N)$ & $P_B$ & $P_{\omega}$ & $P_{\overline{\omega}}$ & $P_C$ & $P_L$ & $P_J$ \\
\hline
bib & 111261 & 81 & 72330 & 79 & 70 & 122 & 92 & 269 & 174 \\
book1 & 768771 & 82 & 435043 & 160 & 151 & 137 & 116 & 352 & 219 \\
book2 & 610856 & 96 & 365952 & 96 & 87 & 167 & 124 & 329 & 212 \\
geo & 102400 & 256 & 72274 & 173 & 172 & 279 & 165 & 165 & 161 \\
news & 377109 & 98 & 244633 & 96 & 86 & 159 & 116 & 304 & 201 \\
obj1 & 21504 & 256 & 15989 & 132 & 136 & 284 & 129 & 129 & 126 \\
obj2 & 246814 & 256 & 193144 & 197 & 190 & 333 & 190 & 189 & 182 \\
paper1 & 53161 & 95 & 33113 & 75 & 66 & 137 & 100 & 236 & 156 \\
paper2 & 82199 & 91 & 47280 & 74 & 65 & 133 & 105 & 259 & 167 \\
paper3 & 46526 & 84 & 27132 & 69 & 61 & 118 & 92 & 238 & 154 \\
paper4 & 13286 & 80 & 7806 & 60 & 51 & 104 & 79 & 190 & 126 \\
paper5 & 11954 & 91 & 7376 & 61 & 53 & 119 & 83 & 181 & 122 \\
paper6 & 38105 & 93 & 23861 & 72 & 63 & 131 & 95 & 223 & 149 \\
pic & 513216 & 159 & 77636 & 138 & 149 & 325 & 216 & 323 & 205 \\
progc & 39611 & 92 & 25743 & 73 & 65 & 131 & 91 & 222 & 150 \\
progl & 71646 & 87 & 42720 & 59 & 65 & 150 & 97 & 253 & 164 \\
progp & 49379 & 89 & 30052 & 72 & 64 & 131 & 94 & 236 & 155 \\
trans & 93695 & 99 & 64800 & 135 & 127 & 145 & 105 & 252 & 169 \\
\end{tabular}

Table 1: Prediction results on the Calgary corpus for six parameter estimation methods. Results are given in whole bytes relative to the empirical entropy $N H(N_i/N)$, which is taken from [#!ristad95!#]. $P_B$ is the predictions of the full Bayesian model outlined above. $P_\omega$ uses only the vocabularies in ${\bf V}_\omega$, while $P_{\overline{\omega}}$ corresponds to the raw predictions of Friedman and Singer's method. $P_C$ is Ristad's law of natural succession, while $P_L$ and $P_J$ are Laplace's law and the Jeffreys-Perks law, following from Equation 1 with $\alpha = 1$ and $0.5$ respectively. $P_C$, $P_L$ and $P_J$ were taken from [#!ristad95!#], where Ristad also tested several other estimation methods. The results in italics are the best of those considered by Ristad in his paper, while the results in bold are the best overall.


It is instructive to examine the kinds of files on which the Bayesian method outlined above ($P_B$) outperformed the other methods for multinomial estimation. These files were bib, book2, news, paper1, paper2, paper3, paper4, paper5, paper6, progc, progl, and progp. In these cases, only $P_\omega$ outperformed $P_B$. These files all use a restricted vocabulary of characters corresponding to those used in English text, together with a small number of formatting characters. The high performance in these cases was a result of the fact that three of the vocabularies in ${\bf V}_\omega$ contain such characters: the BibTeX source, C code, and newsgroup postings. The vocabulary corresponding to plain English text was too restrictive for most of these files, and was hardly used, while the vocabulary for Lisp programs was too large to be of utility. Without this additional structure, Friedman and Singer's method ($P_{\overline{\omega}}$) tended to perform worse than Ristad's simpler method ($P_C$), although better than the basic Dirichlet smoothing methods ($P_L$ and $P_J$) when the files used restricted vocabularies.

The results for book1 illustrate an important weakness of the approach outlined above. Here the file lengths for $P_B$ and $P_\omega$ are higher than those for $P_{\overline{\omega}}$ and $P_C$, despite the fact that the file in question features a restricted English-based vocabulary. The reason for this is that the file also contains two unusual characters that were not encountered in the data used to construct any of the specific hypotheses. Upon encountering these characters, $P_\omega$ defaulted to the only $V \in {\bf V}_\omega$ that contained them: the unrestricted vocabulary of all 256 characters. From that point, the distribution for $P_\omega$ corresponded to Equation 1 with $\alpha=0.25$, and the resulting smoothing was worse than for $P_{\overline{\omega}}$ and $P_C$.

Introducing noise

The kind of behaviour that was demonstrated in the compression of book1 is undesirable: we don't want to reject one of our candidate vocabularies on the basis of one or two symbols that are inconsistent with that vocabulary. We can improve the robustness of our model by adding a noise process such that, regardless of vocabulary, any character in $\Sigma$ can occur with some small probability. A direct mixture of $P(X^{N+1}\vert V,D)$ with a uniform noise process results in an intractable sum over all of the $2^N$ ways of assigning observations to the smoothed vocabulary and the noise process, but we can obtain a simple closed form solution if we assume that the noise process and vocabulary are mutually exclusive. Assuming that any symbol in $\Sigma-V$ occurs with probability $\epsilon$, we have

\begin{displaymath}
P(X^{N+1}=i\vert V,D) = \left \{ \begin{array}{ll}
(1-(L-\vert V\vert)\epsilon)\frac{N_i+\alpha}{N_V+\vert V\vert\alpha} & \mbox{if $i \in V$} \\
\epsilon & \mbox{otherwise}
\end{array} \right.
\end{displaymath}

where $N_V = \sum_{i \in V} N_i$. It follows that

\begin{displaymath}
P(D\vert V, \omega) =
(1-(L-\vert V\vert)\epsilon)^{N_V}\epsilon^{N-N_V}
\frac{\Gamma(\vert V\vert\alpha)}{\Gamma(N_V+\vert V\vert\alpha)}
\prod_{i \in \Sigma^0 \cap V} \frac{\Gamma(N_i + \alpha)}{\Gamma(\alpha)}
\end{displaymath}

which allows us to specify $P_\omega$ and $P_B$ as above.

The definition of $\epsilon$ will determine the upper bound on the probability mass assigned to the noise distribution. Specifically, this bound will be given by $P = L\epsilon$. This gives us a simple heuristic for setting $\epsilon$: if the probability mass that we want to assign to noise is $P$, then we take $\epsilon = \frac{P}{L}$.
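A sketch of the noise-augmented quantities follows, using the expressions above and the $\epsilon = P/L$ heuristic; function names are illustrative and $\epsilon$ is assumed to be strictly positive.

\begin{verbatim}
import math

def p_given_V_noisy(counts, V, L, alpha, eps):
    """Noise-augmented P(X^{N+1} = i | V, D): symbols outside V each receive probability
    eps, and the remaining mass is smoothed over V as above."""
    N_V = sum(n for i, n in counts.items() if i in V)
    in_V_mass = 1.0 - (L - len(V)) * eps
    def prob(i):
        if i in V:
            return in_V_mass * (counts.get(i, 0) + alpha) / (N_V + len(V) * alpha)
        return eps
    return prob

def log_evidence_noisy(counts, V, L, alpha, eps):
    """log P(D | V, omega) under the noise model: out-of-vocabulary observations are
    charged eps each rather than eliminating the vocabulary outright."""
    N = sum(counts.values())
    N_V = sum(n for i, n in counts.items() if i in V)
    lp = N_V * math.log(1.0 - (L - len(V)) * eps)
    if N > N_V:
        lp += (N - N_V) * math.log(eps)
    lp += math.lgamma(len(V) * alpha) - math.lgamma(N_V + len(V) * alpha)
    for i, n in counts.items():
        if i in V:
            lp += math.lgamma(n + alpha) - math.lgamma(alpha)
    return lp

# heuristic from the text: to give the noise process total mass at most P, set eps = P / L
# e.g. eps = 0.01 / L
\end{verbatim}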

Smoothing through classification

While text compression serves as an illustrative domain for the comparison of different parameter estimation techniques, there are a number of other contexts in which we might wish to estimate multinomial distributions about which we have good prior knowledge. One such context is statistical natural language processing, in which accurate multinomial estimation is the topic of much research [#!cheng98!#]. Typically, such multinomial distributions are over large vocabularies of words. Here, the notion of smoothing a multinomial estimate based upon classification of the vocabulary involved has a direct connection to the ideas driving the text classification literature (e.g. [#!yang99!#]): in different contexts, words will occur with different probabilities. In particular, different vocabularies will be used, and having a good set of candidate vocabularies may facilitate smoothing. If it is possible to classify a document as using a particular vocabulary, then we can smooth the results we obtain appropriately.

A dataset containing a total of approximately 20,000 articles drawn from 20 different Usenet newsgroups was used to examine this idea. This dataset was first used for text classification in [#!lang95!#], and has since been a benchmark for text classification algorithms. Ten of these newsgroups (rec.autos, rec.sport.baseball, sci.crypt, sci.med, talk.politics.misc, talk.religion.misc, misc.forsale, comp.sys.mac.hardware, comp.os.ms-windows.misc, comp.graphics) were used to estimate a set of vocabularies ${\bf V}_\omega$. These vocabularies were then applied in forming multinomial estimates for further data drawn from these ten newsgroups and ten others (alt.atheism, sci.space, rec.motorcycles, talk.politics.guns, comp.sys.ibm.pc.hardware, rec.sport.hockey, talk.politics.mideast, sci.electronics, comp.windows.x, soc.religion.christian).

The actual dataset used was 20news-18827, which consists of the 20 newsgroup data with headers and duplicates removed. The dataset was preprocessed to remove all punctuation and capitalization, as well as converting every number to a single symbol. The articles in each of the 20 newsgroups were then divided into three sets. The first set contained the first 500 articles, and this was used to build the candidate vocabularies ${\bf V}_\omega$ for the ten newsgroups described above. The second set contained articles 501-700, and was used as training data for multinomial estimation for all 20 newsgroups. The third set contained articles 701-900, and was used as testing data for all 20 newsgroups. A dictionary was built up by running over the 13,000 articles resulting from this division, and all words that occurred only once in this entire reduced dataset were mapped to an ``unknown'' word. The resulting dictionary contained $L=54309$ words.
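The following sketch indicates the kind of preprocessing involved; the regular expressions and the <num> and <unk> token names are illustrative choices rather than the exact pipeline used.

\begin{verbatim}
import re
from collections import Counter

def tokenize(text):
    """Drop case and punctuation and collapse every number to a single <num> symbol."""
    text = text.lower()
    text = re.sub(r"\d+", " <num> ", text)
    text = re.sub(r"[^a-z<>\s]", " ", text)
    return text.split()

def build_dictionary(articles):
    """Map words occurring only once in the whole reduced dataset to an <unk> symbol."""
    freq = Counter(w for article in articles for w in tokenize(article))
    vocab = {w for w, n in freq.items() if n > 1} | {"<unk>"}
    encode = lambda article: [w if w in vocab else "<unk>" for w in tokenize(article)]
    return vocab, encode
\end{verbatim}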

For the generalized Bayesian model discussed above, ${\bf V}_\omega$ featured one vocabulary that contained all words in the dictionary, and 10 vocabularies estimated from each of the 10 newsgroups mentioned above. These 10 vocabularies were produced by thresholding word frequency in the 500 articles considered, with the threshold ranging from 1 to 10 instances. Each newsgroup thus provided a hierarchy of vocabularies representing a range of degrees of specificity.
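A minimal sketch of this construction, with illustrative names and assuming the articles have already been tokenized:

\begin{verbatim}
from collections import Counter

def vocabulary_hierarchy(tokenized_articles, thresholds=range(1, 11)):
    """One candidate vocabulary per frequency threshold for a single newsgroup, from the
    most inclusive (every word seen at least once) to the most specific (seen at least
    ten times)."""
    freq = Counter(w for article in tokenized_articles for w in article)
    return [frozenset(w for w, n in freq.items() if n >= t) for t in thresholds]
\end{verbatim}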

Five methods of multinomial estimation were considered. Since the candidate vocabularies are simultaneously too general and too specific to give high probability to any set of observations, $P_\omega$ tends to make no contribution to $P_B$. For this reason I evaluated $P_\omega$ and $P_{\overline{\omega}}$ separately. I also considered Ristad's law ($P_C$), Laplace's law ($P_L$) and the Jeffreys-Perks law ($P_J$). For $P_\omega$, the maximum probability mass assigned to noise was $P=0.01$, while both $P_\omega$ and $P_{\overline{\omega}}$ used $\alpha = 0.5$, to facilitate comparison with the Jeffreys-Perks law.

Testing for each newsgroup consisted of taking words from the 200 articles assigned for training purposes, estimating a distribution using each method, and then computing the cross-entropy between that distribution and an empirical estimate of the true distribution. The cross-entropy is given by $H(Q;\hat{P}) = -\sum_i Q_i \log_2 \hat{P}_i$, where $Q$ is the true distribution and $\hat{P}$ is the distribution produced by the estimation method. The estimate of $Q$ corresponded to the maximum likelihood estimate formed from the word frequencies in all 200 articles assigned for testing purposes. The testing procedure was first conducted after just 100 words, and then after every additional 450 words until 10000 words had been seen in total. The results are shown in Figure 1.
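The evaluation metric can be computed directly from the held-out word frequencies; the sketch below (illustrative names) assumes the predictive distribution assigns nonzero probability to every word, which holds for all of the smoothing methods considered here.

\begin{verbatim}
import math

def cross_entropy_bits(test_counts, predictive):
    """H(Q; P-hat) = -sum_i Q_i log2 P-hat(i), with Q the maximum-likelihood estimate from
    the held-out word frequencies and P-hat the smoothed predictive distribution."""
    total = sum(test_counts.values())
    return -sum((n / total) * math.log2(predictive(w)) for w, n in test_counts.items())
\end{verbatim}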

As can be seen in Figure 1, $P_\omega$ consistently outperforms the other methods, even on newsgroups that did not contribute to ${\bf V}_\omega$. Performance was worst for sci.space, rec.sport.hockey, and talk.politics.mideast. These newsgroups are those that showed the least correspondence to those constituting ${\bf V}_\omega$. Figure 2 shows the vocabularies from ${\bf V}_\omega$ that had highest posterior probability at each point in the training process. sci.space moves between comp.graphics and talk.politics.misc, although neither of these seems appropriate. rec.sport.hockey defaults to the vocabulary containing all words once its unknown words are more easily accounted for by that vocabulary than by the noise process combined with the most general vocabulary from rec.sport.baseball.

This defaulting behavior is an important aspect of $P_\omega$: at the point where the data are best accounted for by smoothing on the whole dictionary, the model will use an unrestricted vocabulary. The resulting multinomial estimates will be the same as applying Equation 1 with the appropriate setting of $\alpha$. In the present experiment, the $\alpha$ parameter was set so that the default estimate would correspond to $P_J$. This allows a direct evaluation of how much is being gained by considering restricted vocabularies.

The importance of defaulting is illustrated in Figure 3. After 10,000 words, $P_\omega$ is beginning to perform worse than $P_J$ on sci.space. However, if we continue to examine the predictions made by $P_\omega$ as more words are added, we see that the model swiftly defaults to using $P_J$ as soon as the penalty for the unknown words exceeds the gains of the restricted vocabulary. This is valuable, since at this point sufficient data have accumulated that $P_J$ gives a good estimate of the target distribution. The intelligent choice of ${\bf V}_\omega$ is important to these results: experiments conducted with 100 vocabularies generated at random showed very rapid defaulting, since the random vocabularies tended to be strongly inconsistent with the actual data.

Efficient implementation

The experiment presented above involved estimating only a single multinomial distribution over words. For tasks like the estimation of transition probabilities, it becomes necessary to maintain multiple such estimates. In these cases, the memory demands of an estimation method become an important concern. Implementing $P_\omega$ requires more memory than simple Bayesian smoothing; however, the amount of memory required scales linearly with $\vert{\bf V}_\omega\vert$. Since standard Bayesian smoothing is equivalent to the case where $\vert{\bf V}_\omega\vert = 1$, the resulting cost is not too extreme in most situations.

Efficient implementation of $P_{\omega}$ requires storing a list of the words that belong in each of the $\vert{\bf V}_\omega\vert$ vocabularies, and a vector of the posterior probabilities of each $V \in {\bf V}_\omega$. $P_\omega$ can then be evaluated for any given word by taking a weighted average of the probabilities assigned to that word by applying standard Bayesian smoothing (Lidstone's law) within that vocabulary. The time required to compute a probability will thus also increase linearly with $\vert{\bf V}_\omega\vert$, but when the number of candidate vocabularies is small the algorithm will remain efficient.
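One possible realization of this bookkeeping is sketched below, without the noise process (an out-of-vocabulary word simply eliminates a candidate) and with illustrative names; per vocabulary it stores only the word set, an in-vocabulary count, and an unnormalized log posterior, so memory grows linearly with $\vert{\bf V}_\omega\vert$ as described.

\begin{verbatim}
import math

class VocabularySmoother:
    """Store one word set per candidate vocabulary plus a vector of unnormalized log
    posteriors; answer queries with a weighted average of Lidstone-smoothed probabilities.
    The noise process is omitted here, so an out-of-vocabulary word eliminates a candidate;
    the full dictionary is assumed to be among the candidates."""

    def __init__(self, vocabularies, alpha=0.5):
        self.vocabularies = vocabularies        # list of frozensets of words
        self.alpha = alpha
        self.counts = {}                        # N_i
        self.N_V = [0] * len(vocabularies)      # in-vocabulary count for each candidate
        self.log_post = [0.0] * len(vocabularies)   # unnormalized log P(V | D)

    def observe(self, word):
        """Update the counts and the posterior over vocabularies after one observation."""
        n_old = self.counts.get(word, 0)
        for j, V in enumerate(self.vocabularies):
            if word in V:
                # incremental evidence update: multiply in P(word | V, D so far)
                p = (n_old + self.alpha) / (self.N_V[j] + len(V) * self.alpha)
                self.log_post[j] += math.log(p)
                self.N_V[j] += 1
            else:
                self.log_post[j] = float("-inf")
        self.counts[word] = n_old + 1

    def prob(self, word):
        """Weighted average of the within-vocabulary smoothed probabilities."""
        m = max(self.log_post)
        weights = [math.exp(lp - m) for lp in self.log_post]
        Z = sum(weights)
        total = 0.0
        for j, V in enumerate(self.vocabularies):
            if weights[j] > 0.0 and word in V:
                p = (self.counts.get(word, 0) + self.alpha) / (self.N_V[j] + len(V) * self.alpha)
                total += (weights[j] / Z) * p
        return total
\end{verbatim}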

Conclusion

In this paper, I have presented a novel approach to Bayesian smoothing of sparse multinomial distributions. This approach extends the idea of maintaining uncertainty over restricted vocabularies by allowing the vocabularies themselves to be specified on the basis of domain-specific knowledge. I have argued that this approach has its most valuable applications in statistical natural language processing, where data are sparse but domain knowledge is extensive. The main utility of this approach is that if a set of basis vocabularies spanning a wide range of contexts can be found, it may be possible to achieve rapid and accurate smoothing of multinomial distributions over words by classifying documents according to their vocabularies.


\begin{figure}\centerline{\psfig{figure=n7.eps,width=5in}}\end{figure}

Figure 1: Smoothing results for newsgroup data. The top ten panels (alt.atheism and those to its right) are for the newsgroups with unknown vocabularies. The bottom ten are for those that contributed vocabularies to ${\bf V}_\omega$, although they were trained and tested on novel data. The plots show the cross-entropy as a function of the number of words presented, with the abscissa at the empirical entropy of the test distribution. $P_L$ and $P_J$ are both indicated with dotted lines, but can be identified by the fact that, on these data, it is always the case that $P_J$ performs better than $P_L$. The unabbreviated titles for all 20 newsgroups can be found in the text.

\begin{figure}\centerline{\psfig{figure=groupdisp.eps,width=5in}}\end{figure}

Figure 2: Classification of vocabularies. Newsgroups contributing vocabularies to ${\bf V}_\omega$ are listed on the left, with movement up the vertical axis within each newsgroup corresponding to decreasing specificity (i.e. a lower threshold on word frequency). The lines connect the vocabularies with highest posterior probability for each of the newsgroups that did not contribute to ${\bf V}_\omega$, with labels on the right. The horizontal axis displays the number of words observed.

\begin{figure}\centerline{\psfig{figure=longrun7.eps,width=5in}}\end{figure}

Figure 3: Smoothing for poorly matched data as a function of number of words observed. The vocabularies in ${\bf V}_\omega$ provided a poor match to the words in sci.space. However, $P_\omega$ defaults to $P_J$ when the probability of the data becomes higher under that distribution. Again, $P_J$ and $P_L$ can be identified by the fact that $P_J$ consistently outperforms $P_L$ on these data.