TU-L0022 - Statistical Research Methods D, Lecture, 2.11.2021-6.4.2022
Nested model comparison with likelihood ratio test (3:13)
This video describes comparing maximum likelihood estimates of two
different models that are nested using the likelihood ratio test.
Transcript
Maximum likelihood estimates of two different models that are nested can be compared using the likelihood ratio test. The idea is the same as in regression analysis and the F test. So in an F test, in regression analysis, we have two models: one is the constrained model and another one is the unconstrained model. Then we do some math: we calculate the sums of squares or R-squared of these models, we compare that to the degrees of freedom, and we get a statistic that follows the F distribution.

In maximum likelihood estimation, we don't have the R-squared and we don't have the sums of squares; instead, we use the deviance statistic. So here in Kraimer's paper, we have two models. Model 1 is the constrained model and Model 2 is the unconstrained model, because the coefficient that is estimated here in Model 2 is constrained to be zero in Model 1. So we have a one degree of freedom difference between these two models. Model 2 is the more general, unconstrained model, and Model 1 is a special case of Model 2, because we get Model 1 from Model 2 by saying that the regression coefficients that are estimated here are actually zeros in Model 1, because we don't include these variables.

Then we can calculate the likelihood ratio test of whether adding this one more parameter increases the model fit more than what can be expected by chance only, by comparing the deviances, or -2 times the log-likelihood. So, Model 2 is the unrestricted model and Model 1 is the restricted model. We calculate the difference between the deviances, which is 3.79, and that difference follows the chi-square distribution with one degree of freedom, because there is only a one-parameter difference between the models. The p-value for that is about 0.05, which says that there is no statistically significant difference between the models, although it is right on the border of the 0.05 level. And it's also shown here that this is not very, very significant.
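As a concrete illustration of the computation just described, the following short Python sketch (not part of the lecture materials) reproduces the chi-square p-value for a deviance difference of 3.79 with one degree of freedom. The individual deviance values below are hypothetical placeholders, chosen only so that their difference matches the example:

from scipy.stats import chi2

# Hypothetical deviances (-2 * log-likelihood) for the two nested models;
# only their difference of 3.79 is taken from the lecture example.
deviance_restricted = 503.79    # Model 1: coefficient constrained to zero
deviance_unrestricted = 500.00  # Model 2: coefficient freely estimated

# Likelihood ratio statistic: the difference in deviances,
# equivalently 2 * (logL_unrestricted - logL_restricted).
lr_statistic = deviance_restricted - deviance_unrestricted

# One parameter difference between the models -> one degree of freedom.
df = 1

# Upper-tail chi-square probability gives the p-value.
p_value = chi2.sf(lr_statistic, df)
print(f"LR statistic = {lr_statistic:.2f}, p = {p_value:.3f}")  # ~0.052, borderline

The result, approximately p = 0.052, matches the borderline value referred to in the lecture.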
In contrast to the F test, which, if you have one parameter, always gives you exactly the same p-value as the t test, the significance test statistic here from the z test and the likelihood ratio test p-value don't necessarily have exactly the same value, because these are based on large-sample approximations that may not work as exactly as intended in small samples.
So there can be scenarios like this one, where the z test gives us a significant coefficient, p less than 0.05, but the likelihood ratio test gives us a p-value that is more than 0.05. Because it's on the boundary, we could just say that there is weak evidence for the existence of this relationship.
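As a rough illustration of how the z test (Wald) p-value for a single coefficient and the likelihood ratio test p-value can differ slightly, and occasionally fall on opposite sides of a cutoff, here is a self-contained Python sketch using statsmodels on simulated logistic regression data. The data, variable names, and sample size are my own assumptions for illustration, not the example from Kraimer's paper:

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 120  # a modest sample, where the two tests are more likely to diverge

# Simulated predictors and a binary outcome that depends weakly on x2.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
eta = -0.2 + 0.5 * x1 + 0.4 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X_full = sm.add_constant(np.column_stack([x1, x2]))
X_restricted = sm.add_constant(x1)  # x2's coefficient constrained to zero

full = sm.Logit(y, X_full).fit(disp=0)
restricted = sm.Logit(y, X_restricted).fit(disp=0)

# Wald z-test p-value for the coefficient of x2 in the full model.
wald_p = full.pvalues[2]

# Likelihood ratio test: deviance difference, chi-square with 1 df.
lr_stat = 2 * (full.llf - restricted.llf)
lr_p = chi2.sf(lr_stat, df=1)

print(f"Wald z-test p = {wald_p:.3f}")
print(f"Likelihood ratio test p = {lr_p:.3f}")  # close, but generally not identical

In large samples the two p-values converge; the point here is only that with modest samples they can differ, so a borderline result like the one in the lecture should be interpreted with caution.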