We want to conduct tests for two parameter values (or more) at the same time.
Consider two coefficients, b1 and b2, with regressors X1 and X2. If a t-test fails to reject b1 = 0 and a second t-test fails to reject b2 = 0, it could still be that X1 and X2 jointly have a significant effect on the dependent variable Y.
Testing joint hypotheses on several parameters is not the same as testing each parameter separately, because such hypotheses pertain to the joint distribution of the estimators.
The relevant test is called an F-test. As before, we define: the hypothesis, the test statistic, its distribution, the rejection region and the p-value.
An F-test allows us to test a number q≥1 of linear restrictions on the parameters.
One example could be:
H0: b1=0 & b2=0 vs. H1: b1 or b2 ≠ 0.
Restrictions can be more elaborate (but still linear):
H0: b1=b2 & b3+2b4=1 vs. H1: b1≠b2 or b3+2b4≠1.
The following case is useful, as it checks whether the whole set of regressors has a jointly significant effect or not:
H0: bk = 0, for all k vs. H1: bk ≠ 0 for at least one k.
The test statistic is the F-statistic. In the homoskedastic case:
F = [ (R2 - R02)/q ] / [ (1 - R2)/(n - k - 1) ]
q is the number of restrictions,
k is the number of regressors,
n is the sample size,
R02 is the R2 of the constrained model (imposing H0) and R2 is the R2 of the unconstrained model (not imposing H0).
The F-test essentially compares the fit of the constrained and unconstrained models.
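This comparison of fits can be sketched directly from the formula above. The function below is a minimal illustration; the numerical values in the example (R2 values, q, n, k) are made up for demonstration:

```python
def f_statistic(r2_unconstrained, r2_constrained, q, n, k):
    """Homoskedastic F-statistic: F = [(R2 - R02)/q] / [(1 - R2)/(n - k - 1)].

    q: number of restrictions, n: sample size, k: number of regressors
    in the unconstrained model.
    """
    numerator = (r2_unconstrained - r2_constrained) / q
    denominator = (1.0 - r2_unconstrained) / (n - k - 1)
    return numerator / denominator

# Hypothetical example: unconstrained R2 = 0.40, constrained R2 = 0.30,
# q = 2 restrictions, n = 100 observations, k = 4 regressors.
F = f_statistic(0.40, 0.30, q=2, n=100, k=4)
```

Note that for the overall-significance test (H0: bk = 0 for all k), the constrained model contains only the intercept, so R02 = 0.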
The statistic follows an F-distribution with q and n-k-1 degrees of freedom. Its cdf is denoted Fq,n-k-1. In large samples we use Fq,∞.
For a given significance level α, we look for the 1 - α quantile of the relevant F-distribution and:
Reject H0 if F > cα, where cα = Fq,n-k-1^(-1)(1 - α), i.e. the 1 - α quantile of the Fq,n-k-1 distribution.
Note that this is slightly different from what we did with t-tests, where we used the 1 - α/2 quantile: the F-statistic is nonnegative, so the rejection region is one-sided and we look for the 1 - α quantile.
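The rejection rule can be sketched with scipy's F-distribution (scipy.stats.f); the degrees of freedom and the F-statistic below are illustrative values, not from real data:

```python
from scipy.stats import f

alpha = 0.05
q, n, k = 2, 100, 4                        # illustrative: 2 restrictions, n=100, k=4
c_alpha = f.ppf(1 - alpha, q, n - k - 1)   # 1 - alpha quantile of F(q, n-k-1)

F_stat = 7.92                              # hypothetical F-statistic from the data
reject = F_stat > c_alpha                  # reject H0 if F exceeds the critical value
```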
The p-value is computed as:
p-value = 1 - Fq,n-k-1(F)
where F is the F-statistic computed from the data and Fq,n-k-1 the cdf of the relevant distribution.
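A minimal sketch of the p-value computation, again with illustrative degrees of freedom and a hypothetical F-statistic:

```python
from scipy.stats import f

q, n, k = 2, 100, 4   # illustrative values
F_stat = 7.92         # hypothetical F-statistic computed from the data

# p-value = 1 - F_{q, n-k-1}(F); f.sf is the numerically stable survival function
p_value = 1 - f.cdf(F_stat, q, n - k - 1)
```

Reject H0 at level α whenever p_value < α, which is equivalent to the critical-value rule above.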