Page 22
General Linear Regression Model in Matrix Terms
Suppose we have one response variable Y and (p-1) predictor (explanatory) variables X1, X2, . . . , Xp-1, and n observations, so that the dataset looks like the following:

Y      X1     X2     . . .   Xp-1       ε (random error)
Y1     X11    X12    . . .   X1(p-1)    ε1
Y2     X21    X22    . . .   X2(p-1)    ε2
.      .      .              .          .
.      .      .              .          .
.      .      .              .          .
Yn     Xn1    Xn2    . . .   Xn(p-1)    εn
The general linear regression model is given by

Yi = β0 + β1Xi1 + β2Xi2 + . . . + βp-1Xi(p-1) + εi,   i = 1, 2, . . . , n.

In matrix terms this becomes

Y = Xβ + ε

where
Y is the vector of n responses Y1, Y2, . . . , Yn,
X is the n x p matrix with first column all 1’s and the values of X1, X2, . . . , Xp-1 (assumed to be of rank p),
β is the p x 1 vector of parameters β0, β1, . . . , βp-1,
ε is an n x 1 vector of uncorrelated errors ε1, ε2, . . . , εn.
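For illustration, here is a minimal Python/numpy sketch of this matrix form: it builds a design matrix X with a leading column of 1’s, fixes a parameter vector β, and generates responses Y = Xβ + ε. The sample size, parameter values, and error distribution below are arbitrary choices made only for the sketch.

import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 3                           # n observations, p parameters (intercept + 2 predictors)
beta = np.array([1.0, 2.0, -0.5])      # illustrative values of beta_0, beta_1, beta_2

# Design matrix: first column all 1's, remaining columns hold the predictor values
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=(n, p - 1))])

# Random errors: independent, mean 0, common variance sigma^2 (normal for illustration)
sigma = 1.0
eps = rng.normal(0.0, sigma, size=n)

# The general linear regression model in matrix terms: Y = X beta + eps
Y = X @ beta + eps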
The random errors ε1, ε2, . . . , εn are assumed to be independent with mean 0 and common variance σ². For the purpose of making statistical inferences, it is further assumed that the errors are normally distributed.

Page 23
Estimation of Parameters. The most commonly used criterion to estimate the parameters in the model is the principle of least squares, which involves minimizing

Q = Σεi² = Σ[Yi - β0 - β1Xi1 - β2Xi2 - . . . - βp-1Xi(p-1)]² = (Y - Xβ)′(Y - Xβ)
It is easily shown that the value b of β which minimizes Q is the solution of the least squares normal equations X′Xb = X′Y
which has the solution

b = (X′X)-1X′Y

Note: If we assume that the errors are normally distributed, then the least squares estimator b is also the maximum likelihood estimator of β (to be discussed later).
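Continuing the simulated X and Y from the sketch above, b can be computed directly from the normal equations; in practice a least squares solver such as numpy’s lstsq is numerically preferable to forming (X′X)-1 explicitly.

# Solve the normal equations X'Xb = X'Y
b = np.linalg.solve(X.T @ X, X.T @ Y)

# Equivalent, numerically more stable least squares solution
b_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(b)   # estimates of beta_0, beta_1, beta_2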
Residuals ei are the difference between observed and fitted values and are given by

ei = Yi - Ŷi,

or in vector form by

e = Y - Ŷ = Y - Xb = Y - X(X′X)-1X′Y = [I - X(X′X)-1X′]Y = (I - H)Y
where H = X(X′X)-1X′. H is called the ‘hat matrix’ and plays an important role in regression diagnostics (to be discussed below). The resulting minimum value of Q, called the sum of squared errors SSE, is given by
SSE = Σei² = (Y - Xb)′(Y - Xb) = Σ(Yi - Ŷi)²
Page 24
The fitted values are given by

Ŷ = Xb = X(X′X)-1X′Y = HY

This representation of the vector Ŷ of predicted (fitted) values displays directly the relationship between them and the observations. Letting hij denote the (i,j)th element of H, we have

Ŷi = Σj hijYj,  where the sum is over j = 1, . . . , n.
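Continuing the same simulated data, the hat matrix, fitted values, residuals, and SSE follow directly from these formulas. Forming H explicitly, as in this illustrative sketch, is fine for small n but wasteful for large datasets.

# Hat matrix H = X(X'X)^(-1)X'
H = X @ np.linalg.solve(X.T @ X, X.T)

Y_hat = H @ Y            # fitted values  Y_hat = HY
e = Y - Y_hat            # residuals      e = (I - H)Y
SSE = e @ e              # sum of squared errors

leverages = np.diag(H)   # diagonal elements h_ii (leverage of each observation)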
High values of the diagonal elements hii indicate that the observation Yi has a high influence on its fitted value.

Example. An intercept column and one predictor X, n = 4 observations:

      1   2          3            0.445   0.374   0.303  -0.123
      1   3          3            0.374   0.329   0.284   0.013
X =   1   4 ,   Y =  3 ,    H =   0.303   0.284   0.265   0.148
      1  10         40           -0.123   0.013   0.148   0.961

Diag(H) = (0.445, 0.329, 0.265, 0.961)
The diagonal elements hii measure the influence (leverage) of the individual observations. Here h44 = 0.961 indicates very high leverage for the fourth observation. For example,

Ŷ4 = -0.123 Y1 + 0.013 Y2 + 0.148 Y3 + 0.961 Y4.
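These numbers can be checked with a few lines of numpy. The example matrices are entered by hand, and the names X_ex and Y_ex are used only to keep this sketch separate from the simulated data above.

import numpy as np

X_ex = np.array([[1.0,  2.0],
                 [1.0,  3.0],
                 [1.0,  4.0],
                 [1.0, 10.0]])
Y_ex = np.array([3.0, 3.0, 3.0, 40.0])

H_ex = X_ex @ np.linalg.solve(X_ex.T @ X_ex, X_ex.T)

print(np.round(np.diag(H_ex), 3))   # leverages: 0.445, 0.329, 0.265, 0.961
print(np.round(H_ex[3], 3))         # weights in Y_hat_4: -0.123, 0.013, 0.148, 0.961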
Here is a plot of the fitted and observed values:

[Fitted Line Plot: y = -11.56 + 5.013 x, with S = 5.1475, R-Sq = 94.8%, R-Sq(adj) = 92.3%; x runs from 1 to 10 and y from 0 to 40.]
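The plot can be reproduced from the example data with matplotlib (an illustrative sketch using the X_ex and Y_ex defined above):

import matplotlib.pyplot as plt

b_ex, *_ = np.linalg.lstsq(X_ex, Y_ex, rcond=None)   # intercept and slope, approx. -11.56 and 5.013

xs = np.linspace(1, 10, 100)
plt.scatter(X_ex[:, 1], Y_ex, label="observed")
plt.plot(xs, b_ex[0] + b_ex[1] * xs, label="fitted line")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()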
Page 25
From the graph it is easy to see why the observation Y4 has a large influence (leverage) on its fitted value (and on the fitted regression line as well).

Coefficient of Multiple Determination R².
We ask: ‘How much improvement is obtained by using a predictor to obtain fitted (average) values of the response, versus just using the mean Ȳ?’ One answer is the following. Compare two ways of getting fitted values:

1. Use the average value (sample mean) of the observations, so Ŷi = Ȳ, and compute SSTO = Σ(Yi - Ȳ)² = the sum of squared errors of these predictions.

2. Use the fitted regression line, getting fitted values Ŷi = b0 + b1xi in the simple linear regression model. Then compute SSE = Σ(Yi - Ŷi)².
Compare the sums of squared errors of fitted and observed values for the two methods. Then

R² = (SSTO - SSE)/SSTO

equals the proportionate reduction in the sum of squared errors using the fitted regression line vs. using the sample mean Ȳ. R² is usually expressed as a percentage reduction. It is also interpreted as the amount of variability in the observations that can be explained (or accounted for) by the predictors. Note that the sample variance of the observations is

Sy² = SSTO/(n-1)

and the variance of the residuals using the regression equation is given by MSE = SSE/(n-p). s = √MSE is the estimated standard deviation of the random errors εi.
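For the simulated data in the earlier sketches, these quantities can be computed as follows (illustrative only):

SSTO = np.sum((Y - Y.mean()) ** 2)   # total sum of squares
SSE = np.sum((Y - Y_hat) ** 2)       # sum of squared errors for the fitted model
R2 = (SSTO - SSE) / SSTO             # coefficient of multiple determination

n, p = X.shape
MSE = SSE / (n - p)                  # estimate of sigma^2
s = np.sqrt(MSE)                     # estimated standard deviation of the errors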
It is easily shown that the ‘Total Sum of Squares’ SSTO can be decomposed as

Page 26

SSTO = Σ(Yi - Ȳ)² = Σ(Ŷi - Ȳ)² + Σ(Yi - Ŷi)² = SSR + SSE.

This breakdown of the sum of squares can be summarized in an ‘Analysis of Variance’ table:

Source of        Sum of                  df      MS                 F-Test
Variation        Squares
------------     -------------------     -----   ---------------    -------
Regression       SSR = Σ(Ŷi - Ȳ)²        p - 1   MSR = SSR/(p-1)    MSR/MSE
Error            SSE = Σ(Yi - Ŷi)²       n - p   MSE = SSE/(n-p)
------------     -------------------     -----   ---------------
Total            SSTO = Σ(Yi - Ȳ)²       n - 1
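Continuing the same sketch, the ANOVA quantities and the F statistic follow directly from the fitted values; scipy.stats is assumed to be available for the p-value.

from scipy import stats

SSR = np.sum((Y_hat - Y.mean()) ** 2)   # regression sum of squares
MSR = SSR / (p - 1)
F = MSR / MSE                           # F statistic with (p-1, n-p) degrees of freedom
p_value = stats.f.sf(F, p - 1, n - p)   # upper-tail p-value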
The F-test is used to test the hypothesis that all of the parameters β1, β2, …, βp-1 are simultaneously zero. Use the p-value of the test to make a decision on this (this is probably practically not an issue!).

Confidence Intervals. Recall (from page 23) that b = (X′X)-1X′Y is the estimated vector of parameters β. It can be shown that the variance-covariance matrix of b is given by

Var-Cov(b) = (X′X)-1σ²

which is estimated by

Est. Var-Cov(b) = (X′X)-1MSE
The square roots of the diagonal elements of this matrix are the standard errors s(bi) of the estimated regression parameters b0, b1, …, bp-1. Confidence intervals for the βi’s are then given by bi ± t* s(bi), where t* is a critical value of the t-distribution with n-p df.
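A sketch of the coefficient standard errors and confidence intervals for the simulated data (scipy.stats is again assumed for the t critical value):

from scipy import stats

cov_b = MSE * np.linalg.inv(X.T @ X)    # estimated Var-Cov(b) = (X'X)^(-1) MSE
se_b = np.sqrt(np.diag(cov_b))          # standard errors s(b_i)

t_star = stats.t.ppf(0.975, df=n - p)   # critical value for 95% intervals
ci_lower = b - t_star * se_b
ci_upper = b + t_star * se_b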
Tests of hypotheses about individual parameters are conducted using the t-distribution as well; refer to the p-values of these tests in regression output.

Page 27
Similarly, one can construct confidence intervals for the mean response µnew = E(Ynew) corresponding to given values of x1, x2, …, xp-1. The mean response is estimated by Ŷnew = X′new b, where X′new is the (row) vector of values of x1, x2, …, xp-1. It can be shown that the standard error of the estimated response is given by

s.e.(Ŷnew) = √[ X′new(X′X)-1Xnew MSE ]
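A sketch of the estimated mean response and its standard error for the simulated data; the new predictor vector x_new below is an arbitrary illustrative point, with a leading 1 for the intercept.

x_new = np.array([1.0, 5.0, 2.0])   # leading 1 for the intercept, then x_1, x_2

y_hat_new = x_new @ b                                            # estimated mean response
se_new = np.sqrt(x_new @ np.linalg.inv(X.T @ X) @ x_new * MSE)   # its standard error

ci_mean = (y_hat_new - t_star * se_new, y_hat_new + t_star * se_new)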
Model Selection Criteria. If there are (P-1) predictors x1, x2, …, xP-1, one can conceivably fit 2^(P-1) different models to the data. For example, there are P-1 models with one predictor, (P-1)(P-2)/2 models with two predictors, etc. Some criteria used for comparing models include the following (p as a subscript below refers to the number of parameters in a model):

SSEp, R²p, R²a,p, Cp, AICp, BICp, and PRESSp.

These can be described as follows:

SSEp or R²p. Note first that SSEp and R²p are equivalent measures, in that

R²p = 1 - SSEp/SSTO

The goal in using either of these statistics is to choose a model where SSEp is ‘small’ (equivalently, R²p is large). One can plot, e.g., R²p against p and choose a model, or models, where it is asymptoting (not changing).
R²a,p is the same measure as R²p but with an adjustment for sample size. It is given by

R²a,p = 1 - [(n-1)/(n-p)](SSEp/SSTO) = 1 - MSEp/Sy²

where Sy² = SSTO/(n-1) is the sample variance of the observations. Thus R²a,p looks at how the ratio of sample variances for the model with p parameters changes in comparison with the model with no predictors (a ‘baseline’ model).

Page 28

Cp. This criterion is concerned with the total mean squared error of the n fitted values for each subset selection model. It is a bit complicated to describe here. Suffice it to say, most statisticians now prefer to use the BIC criterion.

BICp. Schwarz’s Bayesian Information Criterion is given by

BICp = n ln SSEp - n ln n + [ln n] p
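As an illustration of how these criteria might be computed, here is a sketch that fits every subset of predictors by least squares and records SSEp, R²a,p, and BICp for each candidate model; the helper function and its name are illustrative assumptions, not part of the notes.

from itertools import combinations

def subset_criteria(X_full, Y):
    """SSE, adjusted R^2, and BIC for every subset of predictor columns.

    X_full: n x (P-1) array of predictors (no intercept column); Y: response vector.
    """
    n, n_pred = X_full.shape
    SSTO = np.sum((Y - Y.mean()) ** 2)
    results = []
    for k in range(n_pred + 1):
        for cols in combinations(range(n_pred), k):
            # Intercept column plus the chosen predictors
            X_sub = np.column_stack([np.ones(n)] + [X_full[:, j] for j in cols])
            p = X_sub.shape[1]                        # number of parameters in this model
            b_sub, *_ = np.linalg.lstsq(X_sub, Y, rcond=None)
            SSE = np.sum((Y - X_sub @ b_sub) ** 2)
            R2_adj = 1 - (n - 1) / (n - p) * SSE / SSTO
            BIC = n * np.log(SSE) - n * np.log(n) + np.log(n) * p
            results.append((cols, SSE, R2_adj, BIC))
    return results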
We will look at an example using the SDSS quasar dataset from the Sloan Digital Sky Survey team (CASt dataset SDSS_quasar.dat). Here are 8 of the first 10 observations in the dataset (which contains 46420 observations in all). The variables are as follows: Dec., z, u_mag, g_mag, r_mag, i_mag, z_mag, Radio, X-ray, J_mag, H_mag, K_mag, M_i.

 Dec.     z  u_mag  g_mag  r_mag  i_mag  z_mag  Radio  X-ray  J_mag  H_mag  K_mag     M_i
15.30  1.20  19.92  19.81  19.39  19.16  19.32  -1.00  -9.00   0.00   0.00   0.00  -25.08
13.94  2.24  19.22  18.89  18.45  18.33  18.11  -1.00  -9.00   0.00   0.00   0.00  -27.42
14.93  0.46  19.64  19.47  19.36  19.19  19.00  -1.00  -9.00   0.00   0.00   0.00  -22.73
 0.04  0.48  18.24  17.97  18.03  17.96  17.91   0.00  -1.66  16.65  15.82  14.82  -24.05
14.18  0.95  19.52  19.28  19.11  19.16  19.07  -1.00  -9.00   0.00   0.00   0.00  -24.57
-8.86  1.25  19.15  18.72  18.26  18.28  18.26  13.97  -9.00   0.00   0.00   0.00  -26.06
15.33  0.99  19.41  19.18  18.99  19.08  19.13  -1.00  -1.88   0.00   0.00   0.00  -24.71
13.77  0.77  19.35  19.00  18.92  19.01  18.84  -1.00  -9.00   0.00   0.00   0.00  -24.19