# Fixed Effects, Random Effects, and First Differencing

I came across a Stack Overflow post the other day touching on first differencing and decided to write a quick review of the topic, as well as the related random effects and fixed effects methods.

In the end we’ll see that random effects, fixed effects, and first differencing are primarily used to handle unobserved heterogeneity in a repeated measures context. One annoying thing about these topics is that there are many synonyms, and they vary across (and sometimes within) disciplines. Throughout the post I’ll be using a lot of “i.e.”, “aka”, and “/”s to indicate synonymous concepts/ideas. Redundant terminology can be a huge barrier to seeing the underlying concepts.

Before getting into first differencing, I’ll lay out the context by example. Suppose we want to compare the effect of two blood pressure medications (Drug A and Drug B) on reducing systolic blood pressure (BP), and that we’ve enrolled 100 patients in the study. You randomize each patient to receive Drug A or Drug B as their first treatment. After a sufficient washout period, you give each subject the remaining drug. After each treatment, you record BP. Because of the washout period, the second treatment should be independent of the first. In case it is not, the randomization ensures that any carryover error is randomized across Drug A and Drug B – so we’re still comparing apples to apples.

This experiment yields two repeated measurements of BP per person, one after having Drug A and another after having Drug B. Repeated measures data is also referred to as hierarchical data/multilevel data.

$BP_{s,d}; \ s \in \{1, 2, \dots, 100\}, \ d \in \{0,1\}$

Above, $d=1$ marks the measurement with Drug A and $d=0$ marks the measurement with Drug B.
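To make things concrete, here’s a minimal simulation of this crossover design in Python. The parameter values (a baseline BP of 130 mmHg, a true treatment effect of −5 mmHg, and the noise scales) are hypothetical choices for illustration, not estimates from any real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of subjects

# Unobserved subject-specific, time-invariant heterogeneity a_s
a = rng.normal(0, 10, size=n)

beta0 = 130.0  # hypothetical baseline BP (Drug B), in mmHg
beta1 = -5.0   # hypothetical true treatment effect of Drug A vs Drug B

bp_a = beta0 + beta1 + a + rng.normal(0, 5, size=n)  # BP after Drug A (d=1)
bp_b = beta0 + a + rng.normal(0, 5, size=n)          # BP after Drug B (d=0)
```

The later code sketches reuse this same simulated data (same seed) so the estimates can be compared across methods.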

# Ordinary Least Squares

Now we are interested in the mean difference in BP between Drug A and Drug B and write the following linear model for subject $s$ and drug $d$:

$BP_{s,d} = \beta_0 + \beta_1 \times d_{s,d} + a_s + \epsilon_{s,d}$

In the above, $\beta_1$ represents the difference in average BP between Drug A and Drug B – the average treatment effect. Using this coefficient we can test the null hypothesis that Drug A and Drug B have the same effect on BP:

$H_0: \beta_1=0 \\ H_a: \beta_1 \neq 0$

So we could estimate $\beta_1$ using OLS and do the corresponding hypothesis test (which, by the way, yields an identical p-value to an unpaired two-sample t-test). However, we would be ignoring a crucial piece of the model, namely $a_s$. This is the unobserved, subject-specific, time-invariant heterogeneity (e.g. race, sex, baseline BMI, etc.) and it must be dealt with.

We have two options: we can either assume these unobserved characteristics are uncorrelated with treatment, or assume they are correlated with it.

# Random Effects

If we believe these unobserved factors are uncorrelated with treatment, then leaving them out of the model won’t bias our estimate of the treatment effect.

The catch is that $a_s$ will be absorbed into the error term. This induces correlation among the error terms within each patient.

$BP_{s,d} = \beta_0 + \beta_1 \times d_{s,d} + (a_s + \epsilon_{s,d})$

For a given subject $s$ we can define a composite error term, $c_{s,1} = a_s + \epsilon_{s,1}$ and $c_{s,0} = a_s + \epsilon_{s,0}$. The errors are correlated insofar as both errors contain $a_s$.

In order for OLS estimates to be efficient, the error terms must be independent and identically distributed. We have just shown that they are not independent. The consequence is that our estimate of $\beta_1$, while still unbiased, is inefficient, and the OLS standard errors are compromised.

Random effects (aka hierarchical modeling) is built to handle the fact that our measurements are clustered/nested within patients. Hierarchical models can be used to handle additional clustering/nesting. For example, consider the same experiment but in this case we know that some patients come from the same family. So now we have BP measurements nested within patients and patients nested within families.
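As a sketch, a random-intercept model for this design can be fit with statsmodels’ `MixedLM` on the same hypothetical simulated data; the per-subject random intercept plays the role of $a_s$, pulling it out of the error term:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical simulation as before (beta0=130, beta1=-5)
rng = np.random.default_rng(0)
n = 100
a = rng.normal(0, 10, size=n)
bp_a = 130 - 5 + a + rng.normal(0, 5, size=n)
bp_b = 130 + a + rng.normal(0, 5, size=n)

df = pd.DataFrame({
    "bp": np.concatenate([bp_a, bp_b]),
    "drug": np.concatenate([np.ones(n), np.zeros(n)]),
    "subject": np.tile(np.arange(n), 2),
})

# Random intercept per subject absorbs a_s instead of leaving it in the error
re = smf.mixedlm("bp ~ drug", df, groups=df["subject"]).fit()
print(re.params["drug"])  # treatment-effect estimate
```

With a balanced design like this one (every subject gets both drugs), the random effects point estimate of the treatment effect coincides with the simple difference in group means; the gain is in how the standard errors account for the within-patient clustering.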

# Fixed Effects

If instead we assume that the unobserved factors are correlated with treatment, then the OLS estimate of the treatment effect will be biased. One way to solve this issue is through the fixed effects estimator, which is derived from the original model:

$BP_{s,d} = \beta_0 + \beta_1 \times d_{s,d} + a_s + \epsilon_{s,d}$

Taking the average of each variable across each subject’s measurements, we obtain the average model:

$\bar{BP_{s}} = \beta_0 + \beta_1 \times \bar{d_{s}} + \bar{a_s} + \bar{\epsilon_{s}}$

Subtracting the latter from the former – and noting that $\bar{a_s} = a_s$, since $a_s$ is time-invariant – we get what is typically called the “fixed effects estimator”:

$BP_{s,d}- \bar{BP_s} = \beta_1 \times (d_{s,d} - \bar{d_s} ) + (a_s-\bar{a_s}) + (\epsilon_{s,d}-\bar{\epsilon_{s}})$

$BP_{s,d}- \bar{BP_s} = \beta_1 \times (d_{s,d} - \bar{d_s} ) + (\epsilon_{s,d}-\bar{\epsilon_{s}})$

This process of de-meaning the variables is called the “within” transformation – an appropriate name because it calculates BP variability within each patient. Most importantly, this process eliminates the unobserved heterogeneity from the model. The math aside, the within transformation exploits the repeated measures structure of the experiment…essentially using each patient as their own control. The first difference approach uses the same strategy.

This estimate of the treatment effect is identical to an OLS estimate of the first model with subject dummies (aka the least squares dummy variable – LSDV – model). The coefficient and standard error of the treatment effect will be identical to LSDV. From what I understand, the fixed effects estimator was shown to be equivalent to LSDV in the late 1970s and gained popularity because it’s much less computationally intensive to perform the within transformation than it is to estimate $N-1$ subject dummies.
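Here’s a sketch of both routes on the hypothetical simulated data: the within transformation done by hand, and the LSDV regression with one dummy per subject. The two treatment-effect estimates agree to machine precision:

```python
import numpy as np

# Same hypothetical simulation as before (beta0=130, beta1=-5)
rng = np.random.default_rng(0)
n = 100
a = rng.normal(0, 10, size=n)
bp_a = 130 - 5 + a + rng.normal(0, 5, size=n)
bp_b = 130 + a + rng.normal(0, 5, size=n)

y = np.concatenate([bp_a, bp_b])
d = np.concatenate([np.ones(n), np.zeros(n)])
subj = np.tile(np.arange(n), 2)

# Within transformation: subtract each subject's mean from y and d
y_bar = np.array([y[subj == s].mean() for s in range(n)])
d_bar = np.array([d[subj == s].mean() for s in range(n)])  # 0.5 for everyone
y_w = y - y_bar[subj]
d_w = d - d_bar[subj]

# Fixed effects estimator: no-intercept OLS slope of y_w on d_w
beta1_fe = (d_w @ y_w) / (d_w @ d_w)

# LSDV: regress y on the drug indicator plus one dummy per subject
X = np.column_stack([d, np.eye(n)[subj]])
beta_lsdv = np.linalg.lstsq(X, y, rcond=None)[0]

print(beta1_fe, beta_lsdv[0])  # identical treatment-effect estimates
```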

# First Difference

First differencing is an alternative to fixed effects that achieves the same goal: eliminating $a_s$ from the model. Even if $a_s$ is correlated with treatment, first differencing will still yield an unbiased estimate of the treatment effect.

The original equation admits a separate equation for each drug.

$BP_{s,d} = \beta_0 + \beta_1 \times d_{s,d} + a_s + \epsilon_{s,d}$

Drug A: $BP_{s,1} = \beta_0 + \beta_1 + a_s + \epsilon_{s,1}$

Drug B: $BP_{s,0} = \beta_0 + a_s + \epsilon_{s,0}$

Subtracting the latter from the former yields the first difference estimate of the treatment effect:

$BP_{s,1}-BP_{s,0} = \beta_1 + (\epsilon_{s,1}-\epsilon_{s,0})$

In this example, all we need to do is regress the difference in BP on a constant. This is equivalent to a one-sample t-test of the null hypothesis that the mean difference in BP is zero – which is exactly the paired t-test. Most importantly, it’s also equivalent to the fixed effects estimator in the case where we have two measurements per subject. With more than two measurements, there are efficiency differences between the two methods – but that’s a subject for a different post.
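On the hypothetical simulated data, the first-difference estimate and its equivalence to the paired (i.e. one-sample on differences) t-test can be checked directly:

```python
import numpy as np
from scipy import stats

# Same hypothetical simulation as before (beta0=130, beta1=-5)
rng = np.random.default_rng(0)
n = 100
a = rng.normal(0, 10, size=n)
bp_a = 130 - 5 + a + rng.normal(0, 5, size=n)
bp_b = 130 + a + rng.normal(0, 5, size=n)

diff = bp_a - bp_b  # first differencing eliminates a_s

# Regressing diff on a constant just estimates its mean,
# and the t-test on that intercept is the one-sample t-test:
t1, p1 = stats.ttest_1samp(diff, 0.0)  # one-sample t-test on the differences
t2, p2 = stats.ttest_rel(bp_a, bp_b)   # paired t-test

print(diff.mean())  # first-difference estimate of beta_1
print(p1, p2)       # identical p-values
```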