Linear Mixed Model

All models are wrong, but some are useful. -- George E.P. Box


Estimate $\psi$ via Generalized Estimating Equations (GEE)

By Theorem 4.3 of Jiang (2007): suppose that $V$ is known and that $\dot{\mu}^{\prime} V^{-1} \dot{\mu}$ is non-singular. Then the optimal estimating function within $\mathcal{H}$ is given by $G^{\ast} = \dot{\mu}^{\prime} V^{-1}(y-\mu)$, that is, the one with

$$ A^{\ast} = \dot{\mu}^{\prime} V^{-1}. $$

Here the optimality is in a similar sense to the univariate case. Define the partial order of nonnegative definite matrices as $A \geq B$ if $A - B$ is nonnegative definite. Then the optimality in Theorem 4.3 is in the sense that the estimating function $G^{\ast}$ maximizes, in this partial order, the generalized information criterion

$$ \mathcal{I}(G) = \left(\mathrm{E}\,\dot{G}\right)^{\prime}\left(\mathrm{E}\,GG^{\prime}\right)^{-1}\left(\mathrm{E}\,\dot{G}\right), $$

where $\dot{G} = \partial G / \partial \psi^{\prime}$.

For the longitudinal GLMM, the optimal estimating function according to Theorem 4.3 can be expressed as

$$ G^{\ast} = \sum_{i=1}^{59} \dot{\mu}_{i}^{\prime} V_{i}^{-1}(y_{i} - \mu_{i}), \tag{1} $$

where $\mu_{i} = E(y_{i})$, $V_{i} = \mathrm{Var}(y_{i})$, and

$$ \dot{\mu}_{i} = \left\{\frac{\partial \mu_{ij}}{\partial \psi_{k}}\right\}_{1 \leq j \leq 4,\, 1 \leq k \leq 11} = \begin{pmatrix} \frac{\partial \mu_{i1}}{\partial \beta_{0}} & \frac{\partial \mu_{i1}}{\partial \beta_{1}} & \cdots & \frac{\partial \mu_{i1}}{\partial \beta_{6}} & \frac{\partial \mu_{i1}}{\partial \sigma_{1}} & \cdots & \frac{\partial \mu_{i1}}{\partial \tau} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ \frac{\partial \mu_{i4}}{\partial \beta_{0}} & \frac{\partial \mu_{i4}}{\partial \beta_{1}} & \cdots & \frac{\partial \mu_{i4}}{\partial \beta_{6}} & \frac{\partial \mu_{i4}}{\partial \sigma_{1}} & \cdots & \frac{\partial \mu_{i4}}{\partial \tau} \end{pmatrix}. $$

However, $V_{i}$, $1 \leq i \leq 59$, are unknown in practice. Liang and Zeger (1986) proposed replacing the $V_{i}$'s with "working" covariance matrices in order to solve the equation. They showed that, under regularity conditions, the resulting GEE estimator is consistent even when the working covariance matrices misspecify the true $V_{i}$'s.

For simplicity, replace $V_{i}$ with $I_{4}$ and solve (1). That is, solve

$$ G_{I}^{\ast} = \sum_{i=1}^{59} \dot{\mu}_{i}^{\prime}(y_{i} - \mu_{i}) = 0. $$

We can derive $\mu_{i}$ and $\dot{\mu}_{i}$ analytically. Conditional on the random effects, the mean of $y_{ij}$ is

$$ E(y_{ij} \mid u_{i}, e_{ij}) = \exp\left(x_{ij}^{\prime}\beta + u_{1i} + v_{j}u_{2i} + e_{ij}\right), $$

where $(u_{1i}, u_{2i})^{\prime}$ is bivariate normal with standard deviations $\sigma_{1}$ and $\sigma_{2}$ and correlation $\rho$, and $e_{ij} \sim N(0, \tau^{2})$ is independent of $(u_{1i}, u_{2i})^{\prime}$.

Therefore, using the law of total expectation and the moment generating function of a normal random variable,

$$ \mu_{ij} = E(y_{ij}) = \exp\left(x_{ij}^{\prime}\beta + \frac{\sigma_{1}^{2} + 2v_{j}\rho\sigma_{1}\sigma_{2} + v_{j}^{2}\sigma_{2}^{2} + \tau^{2}}{2}\right). $$
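This marginal mean is straightforward to code up. A minimal Python sketch (the function name and argument layout are my own, not from any package):

```python
import numpy as np

def marginal_mean(x_ij, v_j, beta, sigma1, sigma2, rho, tau):
    """mu_ij = E(y_ij): exp of the linear predictor plus half the
    variance of u_{1i} + v_j * u_{2i} + e_{ij}."""
    var = sigma1**2 + 2 * v_j * rho * sigma1 * sigma2 + (v_j * sigma2)**2 + tau**2
    return np.exp(x_ij @ beta + 0.5 * var)
```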

Let $x_{ijk}$ be the $k^{th}$ component of $x_{ij}$, $1 \leq k \leq 7$. Then,

$$ \begin{align} \frac{\partial \mu_{ij}}{\partial \beta_{k-1}} &= \mu_{ij}x_{ijk}, 1 \leq k \leq 7 \\ \frac{\partial \mu_{ij}}{\partial \sigma_{1}} &= \mu_{ij}(\sigma_{1} + v_{j}\rho \sigma_{2}) \\ \frac{\partial \mu_{ij}}{\partial \sigma_{2}} &= \mu_{ij}(v_{j}^{2}\sigma_{2} + v_{j}\rho \sigma_{1}) \\ \frac{\partial \mu_{ij}}{\partial \rho} &= \mu_{ij}v_{j}\sigma_{1}\sigma_{2} \\ \frac{\partial \mu_{ij}}{\partial \tau } &= \mu_{ij}\tau \end{align} $$
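These derivatives translate line for line into code. A sketch of the $j^{th}$ row of $\dot{\mu}_{i}$, reusing the hypothetical `marginal_mean` above:

```python
def jacobian_row(x_ij, v_j, beta, sigma1, sigma2, rho, tau):
    """Row (d mu_ij / d psi_k), k = 1, ..., 11, for psi = (beta, sigma1, sigma2, rho, tau)."""
    mu = marginal_mean(x_ij, v_j, beta, sigma1, sigma2, rho, tau)
    d_beta = mu * x_ij                                  # d mu_ij / d beta_{k-1} = mu_ij * x_ijk
    d_sigma1 = mu * (sigma1 + v_j * rho * sigma2)
    d_sigma2 = mu * (v_j**2 * sigma2 + v_j * rho * sigma1)
    d_rho = mu * v_j * sigma1 * sigma2
    d_tau = mu * tau
    return np.concatenate([d_beta, [d_sigma1, d_sigma2, d_rho, d_tau]])
```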

Now, solve $G_{I}^{\ast} = 0$ subject to the constraints $\sigma_{1} > 0$, $\sigma_{2} > 0$, $\rho \in [-1,1]$, $\beta \in \textbf{R}^{7}$. This is a system of 11 nonlinear equations in 11 unknowns.
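Since no closed form is available, one option is to treat the 11 stacked estimating equations as residuals and drive them to zero with a bounded nonlinear least-squares solver, which also enforces the constraints. A sketch with `scipy.optimize.least_squares`; the arrays `X`, `Y`, and `v` below are simulated stand-ins for the real $59 \times 4$ study data:

```python
from scipy.optimize import least_squares

def gee_equations(psi, X, Y, v):
    """Stacked estimating function G*_I(psi) = sum_i mu_dot_i' (y_i - mu_i)."""
    beta, (sigma1, sigma2, rho, tau) = psi[:7], psi[7:]
    G = np.zeros(11)
    for i in range(X.shape[0]):        # subjects (59)
        for j in range(X.shape[1]):    # visits (4)
            mu = marginal_mean(X[i, j], v[j], beta, sigma1, sigma2, rho, tau)
            G += jacobian_row(X[i, j], v[j], beta, sigma1, sigma2, rho, tau) * (Y[i, j] - mu)
    return G

# Simulated placeholder data in place of the real 59 x 4 study layout.
rng = np.random.default_rng(0)
X = rng.normal(size=(59, 4, 7))
Y = rng.poisson(2.0, size=(59, 4)).astype(float)
v = np.array([1.0, 2.0, 3.0, 4.0])

# Bounds encode sigma_1 > 0, sigma_2 > 0, rho in [-1, 1]; beta and tau are free.
psi0 = np.concatenate([np.zeros(7), [0.5, 0.5, 0.0, 0.5]])
lower = np.r_[np.full(7, -np.inf), 1e-6, 1e-6, -1.0, -np.inf]
upper = np.r_[np.full(7, np.inf), np.inf, np.inf, 1.0, np.inf]
fit = least_squares(gee_equations, psi0, bounds=(lower, upper), args=(X, Y, v))
print(fit.x)
```

Since `least_squares` minimizes $\lVert G \rVert^{2}$ rather than solving $G = 0$ directly, the final residual norm should be checked to be effectively zero before treating `fit.x` as a GEE root.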

Why is GEE better for this model?

Ying Zhang

A statistician who gets lost in analysis.