
Latent semantic indexing

We now discuss the approximation of a term-document matrix $\lsimatrix$ by one of lower rank using the SVD. The low-rank approximation to $\lsimatrix$ yields a new representation for each document in the collection. We will cast queries into this low-rank representation as well, enabling us to compute query-document similarity scores in this low-rank representation. This process is known as latent semantic indexing (generally abbreviated LSI).

But first, we motivate such an approximation. Recall the vector space representation of documents and queries introduced in Section 6.3. This vector space representation enjoys a number of advantages, including the uniform treatment of queries and documents as vectors, the induced score computation based on cosine similarity, the ability to weight different terms differently, and its extension beyond document retrieval to such applications as clustering and classification. The vector space representation suffers, however, from its inability to cope with two classic problems arising in natural languages: synonymy and polysemy. Synonymy refers to a case where two different words (say car and automobile) have the same meaning. The vector space representation fails to capture the relationship between synonymous terms such as car and automobile, according each a separate dimension in the vector space. Consequently the computed similarity $\vec{q}\cdot \vec{d}$ between a query $\vec{q}$ (say, car) and a document $\vec{d}$ containing both car and automobile underestimates the true similarity that a user would perceive. Polysemy, on the other hand, refers to the case where a term such as charge has multiple meanings, so that the computed similarity $\vec{q}\cdot \vec{d}$ overestimates the similarity that a user would perceive. Could we use the co-occurrences of terms (whether, for instance, charge occurs in a document containing steed versus in a document containing electron) to capture the latent semantic associations of terms and alleviate these problems?

Even for a collection of modest size, the term-document matrix $\lsimatrix$ is likely to have several tens of thousands of rows and columns, and a rank in the tens of thousands as well. In latent semantic indexing (sometimes referred to as latent semantic analysis (LSA)), we use the SVD to construct a low-rank approximation $\lsimatrix_k$ to the term-document matrix, for a value of $k$ that is far smaller than the original rank of $\lsimatrix$. In the experimental work cited later in this section, $k$ is generally chosen to be in the low hundreds. We thus map each row/column (respectively corresponding to a term/document) to a $k$-dimensional space; this space is defined by the $k$ principal eigenvectors (corresponding to the largest eigenvalues) of $\lsimatrix\lsimatrix^T$ and $\lsimatrix^T\lsimatrix$. Note that the matrix $\lsimatrix_k$ is itself still an $\lsinoterms\times \lsinodocs$ matrix, irrespective of $k$.
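To make this construction concrete, here is a minimal numpy sketch of building $\lsimatrix_k$ from the SVD. The small matrix C below is the one used in the worked example later in this section, and the choice $k=2$ and all variable names are purely illustrative.

import numpy as np

# Toy term-document matrix (terms x documents); real collections are far larger.
C = np.array([
    [1, 0, 1, 0, 0, 0],   # ship
    [0, 1, 0, 0, 0, 0],   # boat
    [1, 1, 0, 0, 0, 0],   # ocean
    [1, 0, 0, 1, 1, 0],   # voyage
    [0, 0, 0, 1, 0, 1],   # trip
], dtype=float)

k = 2                                   # target rank, far smaller than rank(C)

# Thin SVD: C = U diag(s) Vt, with singular values s in decreasing order.
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Keep only the k largest singular values and the matching singular vectors.
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# C_k has the same shape as C (terms x documents) but rank at most k.
C_k = U_k @ np.diag(s_k) @ Vt_k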

Next, we use the new $k$-dimensional LSI representation as we did the original representation: to compute similarities between vectors. A query vector $\vec{q}$ is mapped into its representation in the LSI space by the transformation

\begin{displaymath}
\vec{q}_k=\Sigma_k^{-1}U_k^T\vec{q}. \qquad (244)
\end{displaymath}

Now, we may use cosine similarities as in Section 6.3.1 to compute the similarity between a query and a document, between two documents, or between two terms. Note especially that Equation 244 does not in any way depend on $\vec{q}$ being a query; it is simply a vector in the space of terms. This means that if we have an LSI representation of a collection of documents, a new document not in the collection can be ``folded in'' to this representation using Equation 244. This allows us to incrementally add documents to an LSI representation. Of course, such incremental addition fails to capture the co-occurrences of the newly added documents (and even ignores any new terms they contain). As such, the quality of the LSI representation will degrade as more documents are added and will eventually require a recomputation of the LSI representation.
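As a self-contained numpy sketch of Equation 244 and the ensuing similarity computation (the matrix is again the one from the worked example below; the query and its term weights are purely illustrative), a query over the terms can be folded into the LSI space and scored against every document as follows.

import numpy as np

# Term-document matrix and its rank-k SVD factors, as in the earlier sketch.
C = np.array([[1, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0], [0, 0, 0, 1, 0, 1]], dtype=float)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# A query over the terms (ship, boat, ocean, voyage, trip); weights are illustrative.
q = np.array([0, 0, 0, 1, 1], dtype=float)

# Equation 244: map the query into the k-dimensional LSI space.
q_k = np.diag(1.0 / s_k) @ U_k.T @ q

# Document j lives in the same space as column j of Vt_k
# (Sigma_k^{-1} U_k^T C equals the first k rows of V^T).
def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(q_k, Vt_k[:, j]) for j in range(Vt_k.shape[1])]
# Rank documents by decreasing cosine similarity to the folded-in query.
ranking = sorted(range(len(scores)), key=lambda j: -scores[j])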

The fidelity of the approximation of $\lsimatrix_k$ to $\lsimatrix$ leads us to hope that the relative values of cosine similarities are preserved: if a query is close to a document in the original space, it remains relatively close in the $k$-dimensional space. But this in itself is not sufficiently interesting, especially given that the sparse query vector $\vec{q}$ turns into a dense query vector $\vec{q}_k$ in the low-dimensional space. This incurs a significant computational cost compared with the cost of processing $\vec{q}$ in its native form.

Worked example (Example 18.4). Consider the term-document matrix $\lsimatrix=$

          $d_1$   $d_2$   $d_3$   $d_4$   $d_5$   $d_6$
 ship       1       0       1       0       0       0
 boat       0       1       0       0       0       0
 ocean      1       1       0       0       0       0
 voyage     1       0       0       1       1       0
 trip       0       0       0       1       0       1

Its singular value decomposition is the product of three matrices as below. First we have $U$ which in this example is:

              1       2       3       4       5
 ship      -0.44   -0.30    0.57    0.58    0.25
 boat      -0.13   -0.33   -0.59    0.00    0.73
 ocean     -0.48   -0.51   -0.37    0.00   -0.61
 voyage    -0.70    0.35    0.15   -0.58    0.16
 trip      -0.26    0.65   -0.41    0.58   -0.09

When applying the SVD to a term-document matrix, $U$ is known as the SVD term matrix. The singular values are $\Sigma=$

2.16 0.00 0.00 0.00 0.00
0.00 1.59 0.00 0.00 0.00
0.00 0.00 1.28 0.00 0.00
0.00 0.00 0.00 1.00 0.00
0.00 0.00 0.00 0.00 0.39

Finally we have $V^T$, which in the context of a term-document matrix is known as the SVD document matrix:

       $d_1$   $d_2$   $d_3$   $d_4$   $d_5$   $d_6$
 1     -0.75   -0.28   -0.20   -0.45   -0.33   -0.12
 2     -0.29   -0.53   -0.19    0.63    0.22    0.41
 3      0.28   -0.75    0.45   -0.20    0.12   -0.33
 4      0.00    0.00    0.58    0.00   -0.58    0.58
 5     -0.53    0.29    0.63    0.19    0.41   -0.22

By ``zeroing out'' all but the two largest singular values of $\Sigma$, we obtain $\Sigma_2=$

2.16 0.00 0.00 0.00 0.00
0.00 1.59 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00

From this, we compute $\lsimatrix_2=$

       $d_1$   $d_2$   $d_3$   $d_4$   $d_5$   $d_6$
 1     -1.62   -0.60   -0.44   -0.97   -0.70   -0.26
 2     -0.46   -0.84   -0.30    1.00    0.35    0.65
 3      0.00    0.00    0.00    0.00    0.00    0.00
 4      0.00    0.00    0.00    0.00    0.00    0.00
 5      0.00    0.00    0.00    0.00    0.00    0.00

Notice that the low-rank approximation, unlike the original matrix $\lsimatrix$, can have negative entries. End worked example.
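The numbers in this example can be checked with a few lines of numpy. This is only a sketch: the sign of each singular-vector pair in an SVD is arbitrary, so individual rows of $U$ and $V^T$ may come out with flipped signs relative to the tables above.

import numpy as np

C = np.array([
    [1, 0, 1, 0, 0, 0],   # ship
    [0, 1, 0, 0, 0, 0],   # boat
    [1, 1, 0, 0, 0, 0],   # ocean
    [1, 0, 0, 1, 1, 0],   # voyage
    [0, 0, 0, 1, 0, 1],   # trip
], dtype=float)

U, s, Vt = np.linalg.svd(C, full_matrices=False)
print(np.round(s, 2))          # approximately [2.16 1.59 1.28 1.00 0.39]

# "Zero out" all but the two largest singular values.
s2 = s.copy()
s2[2:] = 0.0

# Two-dimensional document representation: the two nonzero rows of Sigma_2 V^T,
# which (up to sign) match the two nonzero rows of the table above and the
# truncated document matrix used in the rest of this section.
docs_2d = np.diag(s2[:2]) @ Vt[:2, :]
print(np.round(docs_2d, 2))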

Examination of $\lsimatrix_2$ and $\Sigma_2$ in Example 18.4 shows that the last 3 rows of each of these matrices are populated entirely by zeros. This suggests that the SVD product $U\Sigma V^T$ in Equation 241 can be carried out with only two rows in the representations of $\Sigma_2$ and $V^T$; we may then replace these matrices by their truncated versions $\Sigma_2^\prime$ and $(V^\prime )^T$. For instance, the truncated SVD document matrix $(V^\prime )^T$ in this example is:

       $d_1$   $d_2$   $d_3$   $d_4$   $d_5$   $d_6$
 1     -1.62   -0.60   -0.44   -0.97   -0.70   -0.26
 2     -0.46   -0.84   -0.30    1.00    0.35    0.65

Figure 18.3 illustrates the documents in $(V^\prime )^T$ in two dimensions. Note also that $\lsimatrix_2$ is dense relative to $\lsimatrix$.

Figure 18.3: The documents of Example 18.4 reduced to two dimensions in $(V^\prime )^T$.
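To see concretely what the reduction buys us, consider documents $d_2$ and $d_3$: they share no terms, so their cosine similarity in the original term space is 0, yet their two-dimensional representations point in similar directions. The small check below reads the rounded two-dimensional coordinates straight off the truncated matrix above (pairwise cosines are unaffected by the SVD's sign ambiguity).

import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Original 5-term representations of d2 and d3: no terms in common.
d2_orig = np.array([0, 1, 1, 0, 0], dtype=float)   # boat, ocean
d3_orig = np.array([1, 0, 0, 0, 0], dtype=float)   # ship

# Two-dimensional coordinates from the truncated document matrix above.
d2_lsi = np.array([-0.60, -0.84])
d3_lsi = np.array([-0.44, -0.30])

print(cosine(d2_orig, d3_orig))   # 0.0: no overlap in the term space
print(cosine(d2_lsi, d3_lsi))     # roughly 0.94 with these rounded values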

We may in general view the low-rank approximation of $\lsimatrix$ by $\lsimatrix_k$ as a constrained optimization problem: subject to the constraint that $\lsimatrix_k$ have rank at most $k$, we seek a representation of the terms and documents comprising $\lsimatrix$ with low Frobenius norm for the error $\lsimatrix-\lsimatrix_k$. When forced to squeeze the terms/documents down to a $k$-dimensional space, the SVD should bring together terms with similar co-occurrences. This intuition suggests, then, that not only should retrieval quality not suffer too much from the dimension reduction, but that it may in fact improve.
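As a small numerical illustration of this view (a sketch using the matrix of the worked example above), we can compute the Frobenius norm of the error $\lsimatrix-\lsimatrix_k$ for each $k$. For the SVD-based $\lsimatrix_k$ this error equals the square root of the sum of the squares of the discarded singular values, and it shrinks as $k$ grows.

import numpy as np

C = np.array([[1, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0], [0, 0, 0, 1, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(C, full_matrices=False)

for k in range(1, len(s) + 1):
    C_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = np.linalg.norm(C - C_k, 'fro')
    # The error equals sqrt(sum of the squared discarded singular values).
    print(k, round(err, 2), round(float(np.sqrt(np.sum(s[k:] ** 2))), 2))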

Dumais (1993) and Dumais (1995) conducted experiments with LSI on TREC documents and tasks, using the commonly used Lanczos algorithm to compute the SVD. At the time of their work in the early 1990s, the LSI computation on tens of thousands of documents took approximately a day on one machine. In these experiments, they achieved precision at or above that of the median TREC participant. On about 20% of TREC topics their system was the top scorer, and on average it was reportedly slightly better than standard vector space retrieval, with LSI using about 350 dimensions. Their work first suggested several conclusions about LSI that were subsequently verified by many other experiments.

The experiments also documented some modes where LSI failed to match the effectiveness of more traditional indexes and score computations. Most notably (and perhaps obviously), LSI shares two basic drawbacks of vector space retrieval: there is no good way of expressing negations (find documents that contain german but not shepherd), and no way of enforcing Boolean conditions.

LSI can be viewed as soft clustering by interpreting each dimension of the reduced space as a cluster and the value that a document has on that dimension as its fractional membership in that cluster.

