In order to conduct a hypothesis test on one parameter of the linear model, we need the following ingredients:
A null hypothesis H0 and an alternative H1
A test statistic T and its distribution
A significance level α and a rejection region
Alternatively, we can compute the p-value.
We want to test whether the parameter b is equal to a specific numerical value, denoted b(0):
H0: b = b(0) and H1: b ≠ b(0).
Applied econometricians often test whether the effect of one variable on another is significant, in which case the relevant null hypothesis is H0: b = 0.
The test statistic for this test is called the t-statistic:
t = [ b̂ - b(0) ] / σb̂
where b̂ is the OLS estimate of b and σb̂ is the standard error of b̂.
In a large sample under the null hypothesis, the t-statistic has approximately a standard normal distribution.
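As a quick illustration with hypothetical numbers (an assumed estimate b̂ = 0.5 with standard error 0.2, testing H0: b = 0), the t-statistic is computed directly from the formula above:

```python
# Hypothetical values for illustration only
b_hat = 0.5   # assumed OLS estimate of b
se = 0.2      # assumed standard error of b_hat
b0 = 0.0      # value of b under the null hypothesis

t = (b_hat - b0) / se  # t-statistic
print(t)  # 2.5
```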
For a given significance level α, and given the symmetry of the normal distribution, we can define the rejection region as follows:
Reject H0 if |t| > Φ⁻¹(1 − α/2)
Then, defining the rejection region as above means that:
P(t ∈ rejection region | H0) = α.
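The critical value Φ⁻¹(1 − α/2) can be obtained from the standard normal quantile function; a minimal sketch using Python's standard library:

```python
from statistics import NormalDist

alpha = 0.05
# Two-sided critical value: Phi^{-1}(1 - alpha/2)
crit = NormalDist().inv_cdf(1 - alpha / 2)
print(round(crit, 3))  # 1.96

# Reject H0 whenever |t| exceeds the critical value
t = 2.5  # hypothetical t-statistic
reject = abs(t) > crit
print(reject)  # True
```

For α = 0.05 this recovers the familiar 1.96 threshold.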
The p-value of a test is the lowest significance level at which we can reject H0. Since the t-statistic is approximately standard normal under H0, by symmetry the p-value equals:
p-value = 2Φ(-|t|).
We can then reject H0 at a significance level α iff p-value ≤ α.
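Continuing the hypothetical example (t = 2.5), the p-value formula 2Φ(−|t|) translates directly into code:

```python
from statistics import NormalDist

t = 2.5  # hypothetical t-statistic
# p-value = 2 * Phi(-|t|), where Phi is the standard normal CDF
p_value = 2 * NormalDist().cdf(-abs(t))
print(round(p_value, 4))  # 0.0124
```

Since 0.0124 ≤ 0.05, we would reject H0 at the 5% level, consistent with |t| = 2.5 > 1.96.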
A 95% confidence interval for a coefficient b is an interval that contains the true value of b with probability 0.95. Equivalently, a value b* lies in the 95% confidence interval if and only if we cannot reject H0: b = b* at the 5% significance level.
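Under the normal approximation, the 95% confidence interval is b̂ ± Φ⁻¹(0.975)·σb̂. A sketch using the same hypothetical estimate (b̂ = 0.5, σb̂ = 0.2):

```python
from statistics import NormalDist

b_hat = 0.5  # hypothetical OLS estimate
se = 0.2     # hypothetical standard error

z = NormalDist().inv_cdf(0.975)  # ~1.96
ci_low = b_hat - z * se
ci_high = b_hat + z * se
print(round(ci_low, 3), round(ci_high, 3))  # 0.108 0.892
```

Note that 0 lies outside this interval, matching the earlier rejection of H0: b = 0 at the 5% level.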