Another popular feature selection method is $\chi^2$.
In statistics, the $\chi^2$ test is applied to test the independence of two events, where two events A and B are defined to be independent if $P(AB) = P(A)P(B)$ or, equivalently, $P(A|B) = P(A)$ and $P(B|A) = P(B)$. In feature selection, the two events are occurrence of the term and occurrence of the class.
We then rank terms with respect to the following quantity:

$$\chi^2(\mathbb{D}, t, c) = \sum_{e_t \in \{0,1\}} \sum_{e_c \in \{0,1\}} \frac{(N_{e_t e_c} - E_{e_t e_c})^2}{E_{e_t e_c}} \qquad (133)$$

where $e_t = 1$ if the document contains term $t$ and $e_t = 0$ otherwise, and $e_c = 1$ if the document is in class $c$ and $e_c = 0$ otherwise. $N_{e_t e_c}$ is the observed frequency in $\mathbb{D}$ and $E_{e_t e_c}$ is the expected frequency under the assumption that term and class are independent.
Worked example. We first compute $E_{11}$ for the data in Example 13.5.1:

$$E_{11} = N \cdot P(t) \cdot P(c) = N \cdot \frac{N_{11}+N_{10}}{N} \cdot \frac{N_{11}+N_{01}}{N}$$

where $N$ is the total number of documents.
We compute the other expected counts $E_{e_t e_c}$ in the same way.
Plugging these values into Equation 133, we get a $\chi^2$ value of approximately 284.
$\chi^2$ is a measure of how much expected counts $E$ and observed counts $N$ deviate from each other. A high value of $\chi^2$ indicates that the hypothesis of independence, which implies that expected and observed counts are similar, is incorrect. In our example, $\chi^2 \approx 284 > 10.83$. Based on Table 13.6, we can reject the hypothesis that poultry and export are independent with only a 0.001 chance of being wrong. Equivalently, we say that the outcome $\chi^2 \approx 284$ is statistically significant at the 0.001 level.

If the two events are dependent, then the occurrence of the term makes the occurrence of the class more likely (or less likely), so it should be helpful as a feature. This is the rationale of $\chi^2$ feature selection.
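The computation described above can be sketched in code. The function below builds the expected count for each cell of the 2×2 term/class contingency table from the marginals and sums the per-cell squared deviations. The specific counts passed in at the bottom are assumptions for illustration (the excerpt references example data that is not reproduced here); they are chosen to yield a $\chi^2$ value of roughly 284, matching the value reported above.

```python
def chi2(n11, n10, n01, n00):
    """Chi-square statistic for a 2x2 term/class contingency table.

    n11 = docs containing the term and in the class,
    n10 = containing the term, not in the class,
    n01 = not containing the term, in the class,
    n00 = neither.
    """
    n = n11 + n10 + n01 + n00
    chi = 0.0
    # For each cell (e_t, e_c): observed count N, plus the two marginals
    # whose product (divided by n) gives the expected count E under
    # the independence assumption E = N * P(t) * P(c).
    for observed, term_marginal, class_marginal in [
        (n11, n11 + n10, n11 + n01),  # e_t=1, e_c=1
        (n10, n11 + n10, n10 + n00),  # e_t=1, e_c=0
        (n01, n01 + n00, n11 + n01),  # e_t=0, e_c=1
        (n00, n01 + n00, n10 + n00),  # e_t=0, e_c=0
    ]:
        expected = term_marginal * class_marginal / n
        chi += (observed - expected) ** 2 / expected
    return chi


# Hypothetical counts, assumed for illustration.
score = chi2(49, 27652, 141, 774106)
print(score)          # roughly 284
print(score > 10.83)  # significant at the 0.001 level (1 degree of freedom)
```

Ranking terms for a class then amounts to computing this score for every term's contingency table and keeping the highest-scoring terms.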
An arithmetically simpler way of computing $\chi^2$ is the following:

$$\chi^2(\mathbb{D}, t, c) = \frac{(N_{11}+N_{10}+N_{01}+N_{00}) \cdot (N_{11}N_{00} - N_{10}N_{01})^2}{(N_{11}+N_{01}) \cdot (N_{11}+N_{10}) \cdot (N_{01}+N_{00}) \cdot (N_{10}+N_{00})}$$