Harry W. • 3 cards
Reason For F-tests

We want to conduct tests for two parameter values (or more) at the same time.

Consider two coefficients, b_{1} and b_{2}, with corresponding regressors X_{1} and X_{2}. If a t-test fails to reject b_{1}=0 and a second t-test fails to reject b_{2}=0, it could still be that X_{1} and X_{2} *jointly* have a significant effect on the dependent variable Y.

Testing a joint hypothesis on several parameters is not the same as conducting tests on each parameter separately, because joint hypotheses pertain to the joint distribution of the estimators.

The relevant test is called an F-test. As before, we define: the hypothesis, the test statistic, its distribution, the rejection region and the p-value.

F-test Examples

An F-test allows us to test a number q≥1 of linear restrictions on the parameters.

- One example could be:

  H_{0}: b_{1}=0 & b_{2}=0 vs. H_{1}: b_{1}≠0 or b_{2}≠0.

- Restrictions can be more elaborate (but still linear):

  H_{0}: b_{1}=b_{2} & b_{3}+2b_{4}=1 vs. H_{1}: b_{1}≠b_{2} or b_{3}+2b_{4}≠1.

- The following case is useful, as it checks whether the whole set of regressors has a jointly significant effect or not:

  H_{0}: b_{k}=0 for all k vs. H_{1}: b_{k}≠0 for at least one k.

F-test

The test statistic is the F-statistic. In the homoskedastic case:

F = [(R^{2} - R_{0}^{2}) / q] / [(1 - R^{2}) / (n - k - 1)]

where

- q is the number of restrictions,
- k is the number of regressors,
- n is the sample size,
- R_{0}^{2} is the R^{2} of the constrained model (imposing H_{0}) and R^{2} is the R^{2} of the unconstrained model (not imposing H_{0}).

The F-test essentially compares the fit of the constrained and unconstrained models.
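As a minimal numerical sketch of this comparison (data and seed are made up), consider the null H_{0}: b_{1}=0 & b_{2}=0, so that q = k = 2 and the constrained model is an intercept-only regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

def r_squared(y, X):
    """R^2 of an OLS fit of y on X (X must include the intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

X_u = np.column_stack([np.ones(n), x1, x2])  # unconstrained: intercept, x1, x2
X_c = np.ones((n, 1))                        # constrained by H0: intercept only
R2, R2_0 = r_squared(y, X_u), r_squared(y, X_c)

q, k = 2, 2  # number of restrictions; number of regressors
F = ((R2 - R2_0) / q) / ((1 - R2) / (n - k - 1))
print(F)
```

Since the constrained model is nested in the unconstrained one, R_{0}^{2} ≤ R^{2}, so the numerator (and hence F) is always non-negative; a large F means the restrictions cost a lot of fit.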

Under H_{0}, the F-statistic follows an F-distribution with q and n-k-1 degrees of freedom. Its cdf is denoted F_{q, n-k-1}. In large samples we use F_{q, ∞}.

For a given significance level α, we look for the 1 - α quantile of the relevant F-distribution and:

Reject H_{0} if F > c_{α}, where c_{α} = F^{-1}_{q,n-k-1}(1 - α).

Note that this is slightly different from what we did with t-tests: we look for the 1 - α quantile, not the 1 - α/2 quantile, because the F-statistic is non-negative and the test rejects only for large values.

The p-value is computed as:

p-value = 1 - F_{q,n-k-1}(F)

where F is the F-statistic computed from the data and F_{q,n-k-1} the cdf of the relevant distribution.
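The critical value and p-value steps above can be sketched as follows (assuming scipy is available; the F-statistic and sample sizes are hypothetical numbers for illustration):

```python
from scipy import stats

# Hypothetical inputs.
F = 4.75           # F-statistic computed from the data
q, n, k = 2, 100, 3
alpha = 0.05

dof2 = n - k - 1   # denominator degrees of freedom
# Critical value: the (1 - alpha) quantile of F_{q, n-k-1}.
c_alpha = stats.f.ppf(1 - alpha, q, dof2)
# p-value: 1 - F_{q, n-k-1}(F), i.e. the upper-tail probability.
p_value = stats.f.sf(F, q, dof2)

print(f"reject H0: {F > c_alpha}, p-value = {p_value:.4f}")
```

Rejecting when F > c_α and rejecting when the p-value is below α are equivalent decisions, since both compare F against the same upper tail of the distribution.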
