Suppose that we have a random sample $X_1, \dots, X_n$ of $X$, whose distribution depends on a parameter $\theta \in \Theta$, and we would like to test $$H_0:\theta \in \Theta_0 \qquad H_1:\theta \in \Theta_1$$where $\Theta_0, \Theta_1 \subseteq \Theta$ and where $\Theta_0$ and $\Theta_1$ are disjoint. Usually $\Theta = \Theta_0 \cup \Theta_1$.
We would like to generalise the ideas of the Neyman–Pearson Lemma and obtain a similar ratio of likelihoods: a quantity that becomes small when $H_0$ is false.
Test of the Generalised Likelihood Ratio
Def: Let $X_1, \dots, X_n$ be a random sample of $X$ and let $L(\theta \mid \underline x)$ be the likelihood function, where $\theta \in \Theta$. The generalised likelihood ratio is defined as $$\lambda = \dfrac{\max\limits_{\theta \in \Theta_0} L(\theta \mid \underline x)}{\max\limits_{\theta \in \Theta}L(\theta \mid \underline x)}$$
We see that the denominator of the generalised likelihood ratio is just the likelihood function evaluated at the maximum likelihood estimator; that is, $\max\limits_{\theta \in \Theta} L(\theta \mid \underline x) = L(\hat\theta \mid \underline x)$, where $\hat\theta$ is the maximum likelihood estimator.
Since the numerator and denominator are nonnegative, and the numerator maximises over the smaller set $\Theta_0 \subseteq \Theta$, we have $0 \leq \lambda \leq 1$.
Note that $\lambda$ is a function of the observations $\underline x$; if we substitute the observations with the random sample $X_1, \dots, X_n$, it becomes a statistic, which is denoted $\Lambda$.
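As a concrete illustration of the definition, here is a minimal numerical sketch. The model choice (a $N(\theta, 1)$ sample with $H_0: \theta = 0$, so $\Theta_0$ is a single point and the unrestricted MLE is the sample mean) is my own example, not from the notes:

```python
import math

# Hypothetical example: X ~ N(theta, 1), H0: theta = 0 (so Theta_0 = {0}),
# unrestricted MLE theta_hat = sample mean.

def log_likelihood(theta, xs):
    # Log-likelihood of a N(theta, 1) sample (constants kept for clarity).
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2 for x in xs)

def glr(theta0, xs):
    # Numerator: likelihood maximised over Theta_0 = {theta0} (a single point).
    # Denominator: likelihood at the unrestricted MLE, here the sample mean.
    theta_hat = sum(xs) / len(xs)
    return math.exp(log_likelihood(theta0, xs) - log_likelihood(theta_hat, xs))

xs = [0.3, -0.1, 0.8, 0.2, -0.4]
lam = glr(0.0, xs)

# For this model, lambda = exp(-n * (xbar - theta0)^2 / 2) in closed form,
# which the numerical value should match.
n, xbar = len(xs), sum(xs) / len(xs)
closed_form = math.exp(-n * xbar ** 2 / 2)
```

Note how $\lambda$ stays in $[0, 1]$: the numerator can never exceed the denominator, since it maximises over a subset.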
Test of the Generalised Likelihood Ratio (or the Principle of the Generalised Likelihood Ratio)
This test establishes the following decision rule: reject $H_0$ when $\lambda \leq \lambda_0$.
The constant $\lambda_0$ is specified by the size $\alpha$ of the test and the distribution of the test statistic $\Lambda$.
In general, this method yields good tests. The difficulty is computing $\lambda_0$, or the distribution of $\Lambda$, which is also necessary for the evaluation of the power of the test.
The asymptotic distribution of the likelihood ratio
Prop: Let $X_1, \dots, X_n$ be a random sample of $X$, where $\theta = (\theta_1, \dots, \theta_k)$. For the hypothesis $$H_0: \theta_1 =\theta_1', \dots, \theta_r= \theta_r', \quad \theta_{r+1}, \dots, \theta_k \ \text{not specified}$$where $\theta_1', \dots, \theta_r'$ are fixed values, it satisfies that $-2\ln \Lambda \xrightarrow{d} \chi^2_r$ (converges in distribution) as $n \to \infty$ when $H_0$ is true.
Th: For testing the hypotheses $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$, where $\theta$ is a scalar parameter, suppose that $X_1, \dots, X_n$ is a random sample of a population $X$ with density function $f(x; \theta)$, which satisfies the regularity conditions, and let $\hat\theta$ be the maximum likelihood estimator of $\theta$. Then under $H_0$, as $n \to \infty$, it satisfies that $-2\ln \Lambda \xrightarrow{d} \chi^2_1$ (converges in distribution).
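A simulation sketch of this $\chi^2_1$ asymptotic, using an Exponential($\theta$) model of my own choosing (where the convergence is genuinely asymptotic, not exact): under $H_0$ the empirical 95th percentile of $-2\ln\Lambda$ should sit near the $\chi^2_1$ critical value $3.841$.

```python
import math
import random

random.seed(1)

def neg2_log_lambda(theta0, xs):
    # -2 ln(Lambda) for an Exponential(theta) sample, H0: theta = theta0.
    # Log-likelihood: l(theta) = n*ln(theta) - theta*sum(x); MLE: n / sum(x).
    n, s = len(xs), sum(xs)
    theta_hat = n / s
    return 2 * ((n * math.log(theta_hat) - theta_hat * s)
                - (n * math.log(theta0) - theta0 * s))

# Simulate -2 ln(Lambda) under H0 and compare its empirical 95th percentile
# with the chi-square(1) critical value 3.841 predicted by the theorem.
theta0, n, reps = 2.0, 50, 4000
stats = sorted(
    neg2_log_lambda(theta0, [random.expovariate(theta0) for _ in range(n)])
    for _ in range(reps)
)
q95 = stats[int(0.95 * reps)]
```

The statistic is nonnegative by construction, since $\hat\theta$ maximises the log-likelihood; the quality of the $\chi^2_1$ approximation improves as $n$ grows.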