In this post, I explicitly derive the mathematical formulas of the linear model and show how to compute them in practice with R. The first post of this series is available here (sorry, it's in French): « le modèle linéaire (1) – présentation ».

### Aim

We want to analyze the relationship between the rate of DDT in fishes (the variable to explain, $Y$) and their age (the explanatory variable, $x$). And for this, without any hesitation, we're gonna use R.

### Data

We have a sample of fishes. For each fish, we have its age and its rate of DDT. For each age, we have three different fishes. Here is the whole dataset:

Assuming this dataset is stored in a flat file named « data.csv », we can load it into R and start analyzing it:

```r
d <- read.csv( "data.csv", header=TRUE )
summary( d )
n <- length( d$obs )
tapply( d$rate, d$age, mean )
tapply( d$rate, d$age, sd )
library( ggplot2 )
qplot( age, rate, data=d )
ggsave( "data_fishes.png" )
```

From this, we see that the older the fish, the higher its rate of DDT. Moreover, the older the fish, the more variable the rate. We can propose hypotheses about the relationship between age and rate, but we cannot conclude for sure which kind it is. That's why we need a statistical model, i.e. a formal way to quantify the uncertainty in the measurements and our confidence in the various hypotheses describing the mechanisms behind the phenomenon under study.

### Building the model

We assume that the natural phenomenon under study (the rate of DDT in fish) can be modeled by a continuous random variable $Y$ linearly depending on a continuous factor (the age of fish, $x$). That is, we should be able to represent the link between $Y$ and $x$ by a straight line. The observations $y_i$ are measured for several given values $x_i$:

$$Y_i = a + b x_i + E_i, \qquad E_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$$

where:

- $i$ is the index representing the identifier of the fish ($i = 1, \dots, n$);
- $Y_i$ is a random variable corresponding to the rate of the $i$th fish;
- $x_i$ is the age of the $i$th fish (not random);
- $E_i$ is a random variable for the errors, following a Normal distribution with mean $0$ and variance $\sigma^2$;
- $\sigma^2$ is the variance of the $E_i$, i.e. an unknown parameter we want to estimate ($\sigma$ is called the standard deviation);
- $a$ is a constant corresponding to the intercept of the regression line, i.e. an unknown parameter we want to estimate;
- $b$ is a constant corresponding to the regression coefficient (slope of the regression line), i.e. an unknown parameter we want to estimate;
- « i.i.d. » stands for « independent and identically distributed ».

This model means that we decompose the value of the rate in two parts:

- one explained by $x_i$, namely $a + b x_i$, the fixed part;
- one left unexplained, $E_i$, the random part.

As a result, $Y_i$ follows a normal distribution (also called Gaussian), and we can write, for a given $i$, $Y_i \sim \mathcal{N}(a + b x_i, \sigma^2)$, the $Y_i$ being independent:

- expectation: $\mathbb{E}(Y_i) = a + b x_i$
- variance: $\mathbb{V}(Y_i) = \mathbb{V}(E_i) = \sigma^2$
- covariance: $\mathrm{Cov}(Y_i, Y_j) = 0$ for $i \neq j$

The term « linear » in the expression « linear model » means that « linear » applies to the parameters:

- $Y_i = a + b x_i + c x_i^2 + E_i$ is a linear model although the relation between $Y$ and $x$ is polynomial;
- $Y_i = a + b \log(x_i) + E_i$ is a linear model;
- $Y_i = a e^{b x_i} + E_i$ is not a linear model as $\mathbb{E}(Y_i)$ is not linear in $b$;
- $Y_i = a + x_i^{b} + E_i$ is not a linear model.

### Deriving estimators of the parameters

We will estimate the 3 unknown parameters, $a$, $b$ and $\sigma^2$, by finding the values that maximize the likelihood of the parameters given the data. For a given set of parameters, the likelihood is the probability of obtaining the data given these parameters, and we want to maximize this probability:

$$L(a, b, \sigma^2) = \mathbb{P}(y_1, \dots, y_n \mid a, b, \sigma^2)$$

As $Y_i \sim \mathcal{N}(a + b x_i, \sigma^2)$, the $Y_i$ have a probability density $f$:

$$f(y_i) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(y_i - a - b x_i)^2}{2 \sigma^2} \right)$$

And as the $Y_i$ are independent, we can write the likelihood as a product:

$$L(a, b, \sigma^2) = \prod_{i=1}^{n} f(y_i) = \left( \frac{1}{\sigma \sqrt{2\pi}} \right)^{n} \exp\left( -\frac{1}{2 \sigma^2} \sum_{i=1}^{n} (y_i - a - b x_i)^2 \right)$$

We note $l$ the logarithm of the likelihood:

$$l(a, b, \sigma^2) = \log L = -\frac{n}{2} \log(2\pi) - \frac{n}{2} \log(\sigma^2) - \frac{1}{2 \sigma^2} \sum_{i=1}^{n} (y_i - a - b x_i)^2$$

To find the values of $a$, $b$ and $\sigma^2$ that maximize $l$, we only need to set to zero the partial derivatives of $l$ with respect to (w.r.t.) each parameter:

$$\frac{\partial l}{\partial a} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (y_i - a - b x_i) = 0$$

We can thus deduce the maximum-likelihood estimator (MLE) of parameter $a$:

$$\hat{a} = \bar{y} - \hat{b} \bar{x} \qquad \text{with} \quad \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \quad \text{and} \quad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$

Similarly for $b$:

$$\frac{\partial l}{\partial b} = \frac{1}{\sigma^2} \sum_{i=1}^{n} x_i (y_i - a - b x_i) = 0$$

When replacing $a$ by its value obtained previously, we obtain:

$$\sum_{i=1}^{n} x_i \left( y_i - \bar{y} - \hat{b} (x_i - \bar{x}) \right) = 0$$

…<to do>…

We can thus deduce the maximum-likelihood estimator of parameter $b$:

$$\hat{b} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$$

And finally for $\sigma^2$:

$$\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2 \sigma^2} + \frac{1}{2 \sigma^4} \sum_{i=1}^{n} (y_i - a - b x_i)^2 = 0$$

We can thus deduce the maximum-likelihood estimator of parameter $\sigma^2$:

$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{a} - \hat{b} x_i)^2$$
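As a quick numerical sanity check of these closed-form estimators, here is a short Python sketch (made-up numbers, not the fish data) that applies the formulas for $\hat{b}$, $\hat{a}$ and $\hat{\sigma}^2$ directly:

```python
# Apply the closed-form MLE formulas of simple linear regression
# to a tiny, made-up dataset (not the fish data).
xs = [2.0, 4.0, 6.0, 8.0, 10.0]
ys = [1.1, 2.0, 2.8, 4.1, 5.0]
n = len(xs)

x_bar = sum(xs) / n
y_bar = sum(ys) / n

# b_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
b_hat = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
    / sum((x - x_bar) ** 2 for x in xs)
# a_hat = y_bar - b_hat * x_bar
a_hat = y_bar - b_hat * x_bar
# MLE of sigma^2: mean of the squared residuals (division by n, not n - 2)
sigma2_hat = sum((y - a_hat - b_hat * x) ** 2 for x, y in zip(xs, ys)) / n

print(round(a_hat, 4), round(b_hat, 4))  # 0.03 0.495
```

On this toy dataset the fitted line is $\hat{y} = 0.03 + 0.495\,x$; the same $\hat{a}$ and $\hat{b}$ come out of `lm( y ~ x )` in R.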

Now, for each parameter again, we should compute the second derivatives of the log-likelihood and check that they are negative (the Hessian should be negative definite). This is to be sure that the formulas we obtained above with the first derivatives truly correspond to maxima and not minima. But I'm feeling a bit lazy here… Whatsoever, it works.

### Properties of the estimators

The estimators $\hat{a}$ and $\hat{b}$ are linear combinations of independent Gaussian variables, the $Y_i$, thus both follow a Gaussian distribution: $\hat{a} \sim \mathcal{N}(\mathbb{E}(\hat{a}), \mathbb{V}(\hat{a}))$ and $\hat{b} \sim \mathcal{N}(\mathbb{E}(\hat{b}), \mathbb{V}(\hat{b}))$. We now need to derive the expectations and variances of these distributions.

First, let’s define a new quantity:

This new term is useful because it is not random, which facilitates the derivations of expectations and variances, as you will see below.

First formula:

$$\sum_{i=1}^{n} (x_i - \bar{x}) = \sum_{i=1}^{n} x_i - n \bar{x} = 0$$

Second formula:

$$\hat{b} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \bar{Y})}{s_{xx}}$$

However:

$$\sum_{i=1}^{n} (x_i - \bar{x}) \bar{Y} = \bar{Y} \sum_{i=1}^{n} (x_i - \bar{x}) = 0$$

Thus:

$$\sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \bar{Y}) = \sum_{i=1}^{n} (x_i - \bar{x}) Y_i$$

Finally:

$$\hat{b} = \sum_{i=1}^{n} \frac{x_i - \bar{x}}{s_{xx}} Y_i$$

Third formula:

$$\hat{a} = \bar{Y} - \hat{b} \bar{x} = \sum_{i=1}^{n} \left( \frac{1}{n} - \frac{(x_i - \bar{x}) \bar{x}}{s_{xx}} \right) Y_i$$

Now, let’s calculate :

The estimator has no bias: .

Now, let’s calculate :

The estimator has no bias: .

Now let’s calculate :

Now let’s calculate :

Now, let’s calculate :

…<to do>…

The estimator of $\sigma^2$ is biased: $\mathbb{E}(\hat{\sigma}^2) = \frac{n-2}{n} \sigma^2 \neq \sigma^2$.

Thus, we rather use the following, unbiased estimator for $\sigma^2$:

$$\hat{\sigma}^2 = \frac{1}{n-2} \sum_{i=1}^{n} (y_i - \hat{a} - \hat{b} x_i)^2$$
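The bias factor $(n-2)/n$ can also be seen by simulation; below is a Monte-Carlo sketch in Python (arbitrary true parameters, synthetic data): averaged over many replicates, the estimator that divides by $n$ settles around $\frac{n-2}{n} \sigma^2$ instead of $\sigma^2$.

```python
# Monte-Carlo illustration: the MLE of sigma^2 (dividing the residual
# sum of squares by n) is biased downward by the factor (n-2)/n.
import random

random.seed(42)
a_true, b_true, sigma = 1.0, 0.5, 2.0   # arbitrary true parameters
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(xs)
x_bar = sum(xs) / n
sxx = sum((x - x_bar) ** 2 for x in xs)

reps = 20000
mle_sum = 0.0
for _ in range(reps):
    ys = [a_true + b_true * x + random.gauss(0.0, sigma) for x in xs]
    y_bar = sum(ys) / n
    b_hat = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
    a_hat = y_bar - b_hat * x_bar
    mle_sum += sum((y - a_hat - b_hat * x) ** 2 for x, y in zip(xs, ys)) / n

mean_mle = mle_sum / reps
# sigma^2 = 4.0 while (n-2)/n * sigma^2 = 2.4: the average MLE is near 2.4
print(round(mean_mle, 2))
```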

All these estimators are called « least-squares estimators » because they minimize the sum of squared prediction errors:

$$\sum_{i=1}^{n} (y_i - a - b x_i)^2$$

The square root $\hat{\sigma}$ of $\hat{\sigma}^2$ is also called the standard error of the regression.

Thanks to the formulas above for $\hat{a}$, $\hat{b}$ and $\hat{\sigma}^2$, we can compute the estimations of the parameters $a$, $b$ and $\sigma^2$. These estimations are thus also estimations of $\mathbb{E}(\hat{a})$ and $\mathbb{E}(\hat{b})$. But how to estimate $\mathbb{V}(\hat{a})$ and $\mathbb{V}(\hat{b})$?

In fact, we just replace $\sigma^2$ by $\hat{\sigma}^2$ in the formulas of $\mathbb{V}(\hat{a})$ and $\mathbb{V}(\hat{b})$:

$$\hat{\sigma}_a^2 = \hat{\sigma}^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{s_{xx}} \right), \qquad \hat{\sigma}_b^2 = \frac{\hat{\sigma}^2}{s_{xx}}$$

Moreover, for each estimation, it is often very useful to compute a confidence interval. Such an interval indicates that, if we were to re-do the whole experiment 100 times, the computed interval would contain the true value of the parameter about 95 times (corresponding to a level $\alpha = 5\%$).

We know that both $\frac{\hat{a} - a}{\hat{\sigma}_a}$ and $\frac{\hat{b} - b}{\hat{\sigma}_b}$ follow a Student distribution with $n-2$ degrees of freedom. This allows to compute confidence intervals:

$$\left[ \hat{a} - t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_a \; ; \; \hat{a} + t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_a \right], \qquad \left[ \hat{b} - t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_b \; ; \; \hat{b} + t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_b \right]$$

where $\hat{a}$, $\hat{b}$, $\hat{\sigma}_a$ and $\hat{\sigma}_b$ are realizations of the estimators, and the $t$'s are quantiles taken from the Student cumulative distribution function.

### Predicting values

It is possible with this model, for a given $x_i$, to predict a value for $Y_i$. We name such a prediction $\hat{y}_i$; it corresponds to an estimator of the expectation of $Y_i$ given $x_i$:

$\hat{Y}_i = \hat{a} + \hat{b} x_i$, estimator of $\mathbb{E}(Y_i) = a + b x_i$

We also note $\hat{e}_i$ the calculated residual:

$$\hat{e}_i = y_i - \hat{y}_i$$

The difference between $E_i$ and $\hat{e}_i$ is that $E_i$ is an unobserved random variable whereas $\hat{e}_i$ is a residual computed thanks to the estimators $\hat{a}$ and $\hat{b}$.

For a given $x_i$, $\hat{Y}_i$ follows a normal law (as a linear combination of a Gaussian vector), and:

$$\mathbb{E}(\hat{Y}_i) = a + b x_i, \qquad \mathbb{V}(\hat{Y}_i) = \sigma^2 \left( \frac{1}{n} + \frac{(x_i - \bar{x})^2}{s_{xx}} \right)$$

As an estimation of the expectation, we have: $\hat{y}_i = \hat{a} + \hat{b} x_i$.

And to estimate the variance, we do as above:

$$\hat{\sigma}_{\hat{y}_i}^2 = \hat{\sigma}^2 \left( \frac{1}{n} + \frac{(x_i - \bar{x})^2}{s_{xx}} \right)$$

Similarly, to compute a confidence interval for $\mathbb{E}(Y_i)$:

$$\left[ \hat{y}_i - t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_{\hat{y}_i} \; ; \; \hat{y}_i + t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_{\hat{y}_i} \right]$$

Moreover, we can also compute a prediction interval. For a random variable $Y_0$ whose values correspond to the results we can observe for a new measurement given that $x = x_0$, we have:

$$\mathbb{V}(Y_0 - \hat{Y}_0) = \mathbb{V}(Y_0) + \mathbb{V}(\hat{Y}_0) = \sigma^2 \left( 1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{s_{xx}} \right)$$

Therefore:

$$\frac{Y_0 - \hat{Y}_0}{\hat{\sigma}_p} \sim t_{n-2}$$

And the prediction interval is:

$$\left[ \hat{y}_0 - t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_p \; ; \; \hat{y}_0 + t_{1-\alpha/2;\,n-2} \, \hat{\sigma}_p \right]$$

where:

$$\hat{\sigma}_p^2 = \hat{\sigma}^2 \left( 1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{s_{xx}} \right)$$

The prediction interval is always wider than the confidence interval but, as the estimation of $\sigma^2$ by $\hat{\sigma}^2$ gets more precise (e.g. with more sampled data), the predictions will also be more precise.
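The two kinds of intervals can be compared numerically; here is a Python sketch on made-up data (the Student quantile $t_{0.975,3} \approx 3.182$ is hard-coded, since the Python standard library has no Student distribution):

```python
import math

# Made-up data (not the fish data); fit the regression line first.
xs = [2.0, 4.0, 6.0, 8.0, 10.0]
ys = [1.1, 2.0, 2.8, 4.1, 5.0]
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
b_hat = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
a_hat = y_bar - b_hat * x_bar
# unbiased estimate of sigma^2 (division by n - 2)
s2 = sum((y - a_hat - b_hat * x) ** 2 for x, y in zip(xs, ys)) / (n - 2)

t = 3.182  # approximately t_{0.975, n-2} for n = 5
x0 = 7.0
fit = a_hat + b_hat * x0
var_mean = s2 * (1 / n + (x0 - x_bar) ** 2 / sxx)      # variance of the estimated mean
var_pred = s2 * (1 + 1 / n + (x0 - x_bar) ** 2 / sxx)  # adds the new error's variance
ci = (fit - t * math.sqrt(var_mean), fit + t * math.sqrt(var_mean))
pi = (fit - t * math.sqrt(var_pred), fit + t * math.sqrt(var_pred))
print(ci, pi)  # the prediction interval is strictly wider
```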

### Checking the hypotheses of the model

However, even if we spent hours writing equations, we first need to be sure that all the hypotheses of our model are verified, otherwise, no way of being confident in the results (i.e. in the statistical inference). Here are the hypotheses:

- the relationship between the outcome and the explanatory variable(s) is linear;
- the error terms have the same variance $\sigma^2$ (homoscedasticity);
- the error terms are independent;
- the error terms follow a Gaussian distribution.

Basically, we need to look at the estimations of the error terms (the residuals $\hat{e}_i$) as a function of the predicted values ($\hat{y}_i$).

Therefore, first we record the results of the linear regression, and then we plot the residuals versus the fitted values:

```r
mod1 <- lm( rate ~ age, data=d )
ggplot( mod1, aes(.fitted, .resid) ) +
  geom_hline( yintercept=0 ) +
  geom_point() +
  geom_smooth( se=FALSE )
ggsave( "data_fishes_mod1_residuals-fitted.png" )
```

Clearly, the residuals are « structured », i.e. there is a tendency, which indicates that a relevant term was not considered in the modeling of $Y$. Moreover, the residuals don't have the same variance (heteroscedasticity). Therefore, we can't carry on with this model; let's modify it:

$$\log_{10}(Y_i) = a + b x_i + E_i$$

again with $E_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$, but these parameters are different from those of model 1.

```r
mod2 <- lm( log10(rate) ~ age, data=d )
ggplot( mod2, aes(.fitted, .resid) ) +
  geom_hline( yintercept=0 ) +
  geom_point() +
  geom_smooth( se=FALSE )
ggsave( "data_fishes_mod2_residuals-fitted.png" )
```

The variance was stabilized but the residuals are still slightly structured. Let's keep this model anyway: although the blue line is similarly convex as in the previous model, the y-axis has a much more zoomed-in scale. Therefore, our final model is: $\log_{10}(Y_i) = a + b x_i + E_i$ with $E_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$. To ease the equations, we will use $Z_i = \log_{10}(Y_i)$. Now, the model is $Z_i = a + b x_i + E_i$.

### Testing the model

The first test aims at deciding if the relationship between the outcome and its explanatory variable is truly linear. The null hypothesis is:

$H_0$: $b = 0$ (no relation between $\log_{10}(Y)$ and $x$)

and the alternative hypothesis is:

$H_1$: $b \neq 0$ (there is a relation between $\log_{10}(Y)$ and $x$, which is linear)

To test this, we decompose the variance of the data into a part explained by the model and the rest being residual (writing $z_i = \log_{10}(y_i)$ and $\hat{z}_i$ the fitted values):

$$\sum_{i=1}^{n} (z_i - \bar{z})^2 = \sum_{i=1}^{n} (\hat{z}_i - \bar{z})^2 + \sum_{i=1}^{n} (z_i - \hat{z}_i)^2$$

The equation above corresponds to decomposing the total sum of squares (TSS) into a model sum of squares (MSS) and a residual sum of squares (RSS):

$$TSS = MSS + RSS$$

Each sum of squares, properly scaled by $\sigma^2$, follows a chi-squared distribution ($\chi^2$). Such a distribution is characterized by a parameter called « degrees of freedom » (df): $n-1$ for the TSS, $1$ for the MSS and $n-2$ for the RSS.
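The decomposition can be verified numerically; a minimal Python sketch on made-up data (any least-squares fit satisfies it exactly, up to floating-point error):

```python
# Check TSS = MSS + RSS on a small, made-up dataset.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.3, 2.9, 4.2, 4.6]
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
b_hat = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
    / sum((x - x_bar) ** 2 for x in xs)
a_hat = y_bar - b_hat * x_bar
fitted = [a_hat + b_hat * x for x in xs]

tss = sum((y - y_bar) ** 2 for y in ys)              # total sum of squares
mss = sum((f - y_bar) ** 2 for f in fitted)          # model sum of squares
rss = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # residual sum of squares
print(round(tss, 10) == round(mss + rss, 10))  # True
```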

To test the hypothesis, we use Fisher's statistic:

$$F = \frac{MSS / 1}{RSS / (n-2)}$$

This statistic can be interpreted as a variance ratio: the variability explained by the model divided by the residual variability.

Under hypothesis $H_0$, this statistic follows a Fisher distribution with degrees of freedom $1$ and $n-2$. We reject the null hypothesis, at the level $\alpha$ (typically 5%), if:

$$F > f_{1-\alpha}(1, n-2)$$

i.e. if $F$ is higher than the quantile of order $1-\alpha$ from a Fisher law $\mathcal{F}(1, n-2)$.

We can also compute a P-value that measures the agreement between the tested hypothesis and the obtained result, i.e. the probability to draw a value from a $\mathcal{F}(1, n-2)$ distribution that is higher than $F$:

$$P = \mathbb{P}\left( \mathcal{F}(1, n-2) > F \right)$$

The smaller the P-value, the stronger the disagreement between the null hypothesis and the results of the experiment. Usually we reject $H_0$ when the P-value is smaller than $\alpha$.

In our case, we can compute all the sum of squares as well as the Fisher statistics:

```r
MSS <- sum( ( mod2$fitted.values - mean(log10(d$rate)) )^2 )
RSS <- sum( ( log10(d$rate) - mod2$fitted.values )^2 )
TSS <- sum( ( log10(d$rate) - mean(log10(d$rate)) )^2 )
F <- (MSS / 1) / (RSS / 13)
f <- qf( p=0.95, df1=1, df2=13 )
Pval <- pf( q=F, df1=1, df2=13, lower.tail=FALSE )
```

We obtain , , , and .

It is also possible to have all these results in a simple way (although it is always good to know how to compute these quantities by oneself):

```r
summary( mod2 )
anova( mod2 )
```

These results show that the variability explained by the model is far greater than the residual variability ( and ). Thus, we can reject the null hypothesis and consider that there exists a linear relationship between the log of the DDT rate in fishes and their age.

### Assessing the quality of the model

It is important to assess the adjustment of the model to the data as we may use it to predict the value of $Y$ knowing the value of $x$. For this purpose, we compute the R-squared, $R^2 = 1 - \frac{RSS}{TSS}$, that corresponds to the proportion of the variability in the data explained by the model.

However, it is more relevant to calculate the adjusted R-squared, which penalizes the fit for the number of explanatory variables. Indeed, with more explanatory variables, the adjustment will always look better, but we may end up with a model that is over-fitted:

```r
R2 <- 1 - RSS/TSS
R2.adj <- 1 - (RSS/13)/(TSS/14)
```

We obtain and . It means that 85% of the variability of the log of DDT rate in fishes is explained by their age.

### Estimating the parameters

According to the formulas above for $\hat{a}$, $\hat{b}$ and $\hat{\sigma}^2$, we can compute estimations of the parameters:

```r
b <- sum( (d$age - mean(d$age)) * (log10(d$rate) - mean(log10(d$rate))) ) /
  sum( (d$age - mean(d$age))^2 )
a <- mean(log10(d$rate)) - b * mean(d$age)
sn2 <- 1/(n-2) * sum( (log10(d$rate) - (a + b*d$age))^2 )
```

From this, we get the estimations of $a$, $b$ and $\sigma^2$.

Here is the equation of the regression line: $\hat{z} = \hat{a} + \hat{b} x$. It means that, in a year, the log of the DDT rate increases by 0.16. We can also plot this line with the data:

```r
qplot( age, log10(rate), data=d ) +
  geom_abline( intercept=a, slope=b, colour="red" )
ggsave( "data_fishes_mod2_regline.png" )
```
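Since the final model explains $\log_{10}$ of the rate, the slope has a multiplicative interpretation on the original scale. A quick Python check, using the slope value 0.16 quoted above:

```python
# The slope is on the log10 scale: an increase of b_hat in log10(rate)
# per year means the rate itself is multiplied by 10**b_hat each year
# (0.16 is the slope value quoted in the text).
b_hat = 0.16
factor = 10 ** b_hat
print(round(factor, 2))  # 1.45: the DDT rate is multiplied by ~1.45 per year
```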

Are our estimates precise? Let’s compute the variances and confidence intervals for and :

```r
s.a <- sn2 * ( 1/n + mean(d$age)^2 / sum( ( d$age - mean(d$age) )^2 ) )
s.b <- sn2 / sum( ( d$age - mean(d$age) )^2 )
alpha <- 0.05
# s.a and s.b are variances, so the confidence intervals use their square roots
ci.a <- c( a - qt( p=1-alpha/2, df=n-2 ) * sqrt( s.a ),
           a + qt( p=1-alpha/2, df=n-2 ) * sqrt( s.a ) )
ci.b <- c( b - qt( p=1-alpha/2, df=n-2 ) * sqrt( s.b ),
           b + qt( p=1-alpha/2, df=n-2 ) * sqrt( s.b ) )
```

We get , , and . These values show that our estimates are quite precise.

Although it seems almost certain when looking at the confidence intervals, we can still wonder if we can reject the null hypotheses according to which the $a$ and $b$ parameters equal zero: $H_0$: $a = 0$ and $H_0$: $b = 0$. Note that the test for $b$ is equivalent to the test made previously with the Fisher statistic $F$.

```r
test.a <- a / sqrt( s.a )
test.b <- b / sqrt( s.b )
qt( 0.975, df=13 )^2 == qf( 0.95, df1=1, df2=13 )
# two-sided P-values
Pval.a <- 2 * pt( q=abs(test.a), df=n-2, lower.tail=FALSE )
Pval.b <- 2 * pt( q=abs(test.b), df=n-2, lower.tail=FALSE )
```

We obtain , , , and . The P-values are far below 5%. Thus we reject both null hypotheses and conclude that:

- the log of the DDT rate for a fish that is just born is significantly different from zero ($a \neq 0$);
- the log of the DDT rate for a fish is linearly linked to its age ($b \neq 0$). And as $\hat{b} > 0$, this is an increasing relationship.

### Confidence and prediction intervals for the variable to explain

We can compute such intervals for the values of the DDT rate for each fish:

```r
s2.t <- sn2 * ( 1/n + (d$age - mean(d$age))^2 / sum( (d$age - mean(d$age))^2 ) )
ci.y <- cbind( a + b*d$age - qt( p=1-alpha/2, df=n-2 ) * sqrt( s2.t ),
               a + b*d$age + qt( p=1-alpha/2, df=n-2 ) * sqrt( s2.t ) )
s2.y <- sn2 + s2.t
pi.y <- cbind( a + b*d$age - qt( p=1-alpha/2, df=n-2 ) * sqrt( s2.y ),
               a + b*d$age + qt( p=1-alpha/2, df=n-2 ) * sqrt( s2.y ) )
pl <- qplot( age, log10(rate), data=d ) +
  geom_abline( intercept=a, slope=b, colour="red" )
pl +
  geom_line( aes(x=d$age, y=ci.y[,1], colour="low CI") ) +
  geom_line( aes(x=d$age, y=ci.y[,2], colour="high CI") ) +
  geom_line( aes(x=d$age, y=pi.y[,1], colour="low PI") ) +
  geom_line( aes(x=d$age, y=pi.y[,2], colour="high PI") ) +
  scale_colour_manual( values=c("blue", "blue", "green", "green") ) +
  theme( legend.position="none" )
ggsave( "data_fishes_mod2_regline-intervals-nolegend.png" )
```

Here is the final plot with the regression line in red, the lines of the confidence interval in blue and the lines of the prediction interval in green:

Sources: Statistique inférentielle by Daudin, Robin and Vuillet (on Amazon.fr), Exemples d'applications du modèle linéaire by Lebarbier and Robin (here)