## Tuesday, June 20, 2006

### Dick Cheney and the Neyman-Pearson Lemma

This video over at Crooks and Liars reminds me of something that anyone who knows the least bit about statistics knows. Cheney, according to Ron Suskind, wanted to treat 1% probability events as "certainties" when deciding how to respond.

Now the Neyman-Pearson lemma, as outlined by Wikipedia, says:

> When performing a hypothesis test between two point hypotheses H0: θ = θ0 and H1: θ = θ1, the likelihood-ratio test which rejects H0 in favour of H1 when
>
> $\Lambda(x)=\frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)} \leq k \mbox{, where } \Pr(\Lambda(X)\leq k \mid H_0)=\alpha,$
>
> is the most powerful test of size α.
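To make that concrete, here's a toy sketch of the likelihood-ratio test (my own illustrative setup, not anything from Suskind or Wikipedia): a single observation drawn from a normal distribution with known variance 1, testing θ0 = 0 against θ1 = 1.

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(x, theta0=0.0, theta1=1.0):
    """Lambda(x) = L(theta0 | x) / L(theta1 | x) for one observation."""
    return normal_pdf(x, theta0) / normal_pdf(x, theta1)

def np_test(x, k):
    """Neyman-Pearson rule: reject H0 when Lambda(x) <= k."""
    return likelihood_ratio(x) <= k
```

Small Λ(x) means the data look much more like θ1 than θ0, so the test rejects H0; the lemma says no other test with the same α does better.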

Now for those of you who need further explanation without clicking over to Wikipedia:

> The power of a statistical test is the probability that the test will reject a false null hypothesis, or in other words that it will not make a Type II error. As power increases, the chances of a Type II error decrease, and vice versa. The probability of a Type II error is referred to as β. Therefore power is equal to 1 − β.
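Here's a quick numerical illustration of α, β, and power (again a made-up example, assuming one N(μ, 1) draw, testing μ = 0 against μ = 1, rejecting H0 when x exceeds a cutoff):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) at x, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

c = 1.645                           # cutoff giving alpha ~ 0.05 under H0
alpha = 1 - normal_cdf(c, mu=0.0)   # Type I error rate: reject when H0 true
beta = normal_cdf(c, mu=1.0)        # Type II error rate: accept when H1 true
power = 1 - beta                    # probability of catching a real effect
```

With the conventional α = 0.05 here, power comes out around 0.26: a strict standard for crying wolf means many real wolves slip by, which is exactly the tradeoff the one percent doctrine inverts.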

What this means is that Dick Cheney guaranteed that the probability of a Type I error (rejecting a true null hypothesis, i.e., treating a non-threat as a threat) would skyrocket. By demanding a response at a mere 1% likelihood, he effectively set α near 1.
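You can see how extreme this gets in the same toy Gaussian setup (my own sketch, assuming equal prior odds of "threat" vs. "no threat"): acting whenever the threat probability reaches 1% means reacting to almost everything.

```python
import math

def normal_pdf(x, mu):
    return math.exp(-((x - mu) ** 2) / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x, mu=0.0):
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2)))

def posterior_threat(x, prior=0.5):
    """P(threat | x), with N(0,1) under 'no threat' and N(1,1) under 'threat'."""
    p0 = (1 - prior) * normal_pdf(x, 0.0)
    p1 = prior * normal_pdf(x, 1.0)
    return p1 / (p0 + p1)

# One percent doctrine: respond whenever the threat probability hits 1%.
# posterior >= 0.01  <=>  pdf1/pdf0 >= 1/99  <=>  x >= 0.5 - log(99)
cutoff = 0.5 - math.log(99)
alpha = 1 - normal_cdf(cutoff, mu=0.0)  # Type I error rate under 'no threat'
# alpha is about 0.99998: virtually every innocuous observation triggers a response.
```

In other words, the false-alarm rate isn't just elevated, it's essentially 1.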

Hence all the bullshit we've been seeing...