Generalised Likelihood Ratio

Subjects: Statistics
Links: Statistical Hypothesis Test

Suppose that we have a random sample from $f(x;\theta)$ with $\theta \in \Theta$, and we would like to test $$H_0:\theta \in \Theta_0 \qquad H_1:\theta \in \Theta_1$$where $\Theta_0, \Theta_1 \subseteq \Theta$ are disjoint. Usually $\Theta_1 = \Theta \setminus \Theta_0$.

We would like to generalise the ideas of the Neyman-Pearson Lemma and obtain a similar ratio of likelihoods: a quantity that becomes small when $H_0$ is false.

Test of the Generalised Likelihood Ratio

Def: Let $X_1,\dots,X_n$ be a random sample from $f(x;\theta)$, where $\theta \in \Theta$, and let $L(\theta \mid \underline x)$ be the likelihood function. The generalised likelihood ratio is defined as $$\lambda = \dfrac{\max\limits_{\theta \in \Theta_0} L(\theta \mid \underline x)}{\max\limits_{\theta \in \Theta}L(\theta \mid \underline x)}$$
We see that the denominator of the generalised likelihood ratio is just the likelihood function evaluated at the maximum likelihood estimator: $\max\limits_{\theta \in \Theta} L(\theta \mid \underline x) = L(\hat\theta \mid \underline x)$, where $\hat\theta$ is the maximum likelihood estimator.

Since both likelihoods are nonnegative, and the numerator maximises over the subset $\Theta_0 \subseteq \Theta$ while the denominator maximises over all of $\Theta$, we have $0 \le \lambda \le 1$.

Note that $\lambda$ is a function of the observed sample $\underline x$; if we substitute the observations with the random sample $\underline X$, we obtain a statistic, denoted $\Lambda$.
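To make the definition concrete, here is a minimal sketch that evaluates $\lambda$ for testing $H_0:\theta=\theta_0$ in an Exponential(rate $\theta$) model, where $\Theta_0=\{\theta_0\}$ is a single point (so the numerator needs no maximisation) and the denominator is the likelihood at the MLE $\hat\theta = n/\sum x_i$. The data and function names are illustrative, not from the source:

```python
import math

def exp_loglik(theta, xs):
    # Log-likelihood of an Exponential(rate=theta) sample:
    # sum over observations of log(theta) - theta * x.
    return len(xs) * math.log(theta) - theta * sum(xs)

def glr_exponential(xs, theta0):
    # Numerator: max over Theta_0 = {theta0}, i.e. simply L(theta0 | x).
    # Denominator: L evaluated at the MLE theta_hat = n / sum(xs).
    theta_hat = len(xs) / sum(xs)
    log_lambda = exp_loglik(theta0, xs) - exp_loglik(theta_hat, xs)
    return math.exp(log_lambda)

xs = [0.5, 1.2, 0.7, 2.1, 0.9]
lam = glr_exponential(xs, theta0=1.0)
print(lam)
```

Because the denominator maximises over the larger set, `log_lambda` is never positive, so $0 < \lambda \le 1$; $\lambda = 1$ exactly when $\theta_0$ coincides with the MLE.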

Test of the generalised likelihood ratio (principle of the generalised likelihood ratio)

This test establishes the following decision rule:

Reject $H_0:\theta \in \Theta_0$ when $\lambda \le k$, for some constant $k \in [0,1]$.

The constant $k$ is determined by the size of the test and the distribution of the test statistic $\Lambda$.

In general, this method produces good tests. The difficulty lies in computing $\max L(\theta)$ and the distribution of $\Lambda$, which is needed to evaluate the power of the test.
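When the distribution of $\Lambda$ is not available in closed form, one practical workaround is to approximate the constant $k$ by Monte Carlo simulation under $H_0$: simulate many samples, compute $\lambda$ for each, and take the $\alpha$-quantile. A minimal sketch, again assuming an Exponential(rate $\theta$) model with illustrative function names:

```python
import math
import random

random.seed(0)

def glr_exponential(xs, theta0):
    # lambda for H0: theta = theta0; Theta_0 is a single point, so the
    # numerator is the likelihood at theta0 and the MLE is n / sum(xs).
    n, s = len(xs), sum(xs)
    theta_hat = n / s
    log_lambda = (n * math.log(theta0) - theta0 * s) \
               - (n * math.log(theta_hat) - theta_hat * s)
    return math.exp(log_lambda)

def critical_k(theta0, n, alpha, reps=5000):
    # Simulate Lambda under H0; the alpha-quantile of its distribution is a
    # constant k such that rejecting when lambda <= k has size about alpha.
    lambdas = sorted(
        glr_exponential([random.expovariate(theta0) for _ in range(n)], theta0)
        for _ in range(reps)
    )
    return lambdas[int(alpha * reps)]

k = critical_k(theta0=1.0, n=20, alpha=0.05)
print(k)
```

This sidesteps deriving the exact distribution of $\Lambda$, at the cost of simulation error that shrinks as `reps` grows.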

The asymptotic distribution of the likelihood ratio

Unbiased Point Estimation
Convergence in distribution

Prop: Let $X_1,\dots,X_n$ be a random sample from $f(x;\theta)$, where $\theta=(\theta_1,\dots,\theta_k)$. For the hypothesis $$H_0: \theta_1 =\theta_1', \dots, \theta_r= \theta_r', \quad \theta_{r+1}, \dots, \theta_k \text{ unspecified}$$where $\theta_1',\theta_2',\dots,\theta_r'$ are fixed values and $\theta_{r+1},\dots,\theta_k$ are not specified, it holds that $-2\ln\lambda \xrightarrow{d} \chi^2(r)$ (converges in distribution) when $H_0$ is true.
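A concrete instance of this setting, with $r=1$ restricted parameter and one unspecified (nuisance) parameter, is testing $H_0:\mu=\mu_0$ in a Normal$(\mu,\sigma^2)$ model with $\sigma^2$ unspecified. Maximising $\sigma^2$ out of both numerator and denominator yields the closed form $\lambda = (\hat\sigma^2/\hat\sigma_0^2)^{n/2}$, a monotone function of the usual $t$ statistic. A sketch under these assumptions, with illustrative data:

```python
def normal_glr(xs, mu0):
    # GLR for H0: mu = mu0 in a Normal(mu, sigma^2) model, sigma^2 unspecified.
    # The nuisance parameter sigma^2 is maximised out in both numerator
    # (MLE under H0: sigma0^2 = mean((x - mu0)^2)) and denominator
    # (unrestricted MLEs: xbar and sigma^2 = mean((x - xbar)^2)).
    n = len(xs)
    xbar = sum(xs) / n
    s2_hat = sum((x - xbar) ** 2 for x in xs) / n  # unrestricted MLE of sigma^2
    s2_0 = sum((x - mu0) ** 2 for x in xs) / n     # MLE of sigma^2 under H0
    # Plugging each profiled sigma^2 back in, lambda reduces to this ratio:
    return (s2_hat / s2_0) ** (n / 2)

xs = [1.0, 2.0, 0.5, 1.5, 2.5]
lam = normal_glr(xs, mu0=1.0)
print(lam)  # small lambda signals evidence against H0
```

Since $\hat\sigma_0^2 = \hat\sigma^2 + (\bar x - \mu_0)^2$, this equals $\bigl(1 + t^2/(n-1)\bigr)^{-n/2}$, so rejecting for small $\lambda$ is equivalent to rejecting for large $|t|$.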

Th: For testing the hypotheses $H_0:\theta=\theta_0$ against $H_1:\theta \ne \theta_0$, where $\theta$ is a scalar parameter, suppose that $X_1,\dots,X_n$ is a random sample from a population with density function $f(x;\theta)$ satisfying the regularity conditions, and let $\hat\theta$ be the maximum likelihood estimator of $\theta$. Then under $H_0$, as $n \to \infty$, it holds that $-2\ln\lambda \xrightarrow{d} \chi^2(1)$ (converges in distribution).
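The theorem can be checked by simulation: under $H_0$, the empirical distribution of $-2\ln\lambda$ should be close to $\chi^2(1)$, which has mean $1$ and $95\%$ quantile $\approx 3.841$. A minimal sketch, assuming an Exponential(rate $\theta$) model with $\theta_0 = 1$ (function names are illustrative):

```python
import math
import random

random.seed(1)

def neg2_log_lambda(xs, theta0):
    # -2 ln(lambda) for H0: theta = theta0 in an Exponential(rate theta) model:
    # log-likelihood is n*log(theta) - theta*sum(xs); the MLE is n/sum(xs).
    n, s = len(xs), sum(xs)
    theta_hat = n / s
    log_lik = lambda th: n * math.log(th) - th * s
    return -2.0 * (log_lik(theta0) - log_lik(theta_hat))

# Simulate the statistic under H0 (theta = 1) and compare with chi^2(1):
stats = [neg2_log_lambda([random.expovariate(1.0) for _ in range(100)], 1.0)
         for _ in range(10000)]

mean = sum(stats) / len(stats)                      # chi^2(1) has mean 1
tail = sum(t > 3.841 for t in stats) / len(stats)   # should be close to 0.05
print(mean, tail)
```

Rejecting when $-2\ln\lambda > \chi^2_{1,1-\alpha}$ therefore gives an approximate size-$\alpha$ test without deriving the exact distribution of $\Lambda$.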