glmmML {glmmML}    R Documentation

Generalized Linear Models with random intercept

Description

Fits GLMs with random intercept by Maximum Likelihood and numerical integration via Gauss-Hermite quadrature.

Usage

glmmML(formula, family = binomial, data, cluster, weights,
cluster.weights, subset, na.action, 
offset, prior = c("gaussian", "logistic", "cauchy"),
start.coef = NULL, start.sigma = NULL, fix.sigma = FALSE, 
control = list(epsilon = 1e-08, maxit = 200, trace = FALSE),
method = c("Laplace", "ghq"), n.points = 8, boot = 0) 

Arguments

formula a symbolic description of the model to be fit. The details of model specification are given below.
family Currently, the only valid values are binomial and poisson. The binomial family allows for the logit and cloglog links.
data an optional data frame containing the variables in the model. By default the variables are taken from `environment(formula)', typically the environment from which `glmmML' is called.
cluster Factor indicating which items are correlated.
weights Case weights. Defaults to one.
cluster.weights Cluster weights. Defaults to one.
subset an optional vector specifying a subset of observations to be used in the fitting process.
na.action See glm.
start.coef starting values for the parameters in the linear predictor. Defaults to zero.
start.sigma starting value for the mixing standard deviation. Defaults to 0.5.
fix.sigma Should sigma be fixed at start.sigma?
offset this can be used to specify an a priori known component to be included in the linear predictor during fitting.
prior Which "prior" distribution (for the random effects)? Possible choices are "gaussian" (default), "logistic", and "cauchy".
control Controls the convergence criteria. See glm.control for details.
method There are two choices: "Laplace" (default) and "ghq" (Gauss-Hermite quadrature).
n.points Number of points in the Gauss-Hermite quadrature. If n.points == 1, Gauss-Hermite quadrature reduces to the Laplace approximation. If method is set to "Laplace", this parameter is ignored.
boot Number of bootstrap replicates used to estimate the cluster effect and to test the null hypothesis of no random effect. The default, boot = 0, means no bootstrap; a positive integer gives the number of bootstrap samples to draw. A recommended absolute minimum is boot = 2000. See the sketch below.
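A minimal sketch of the bootstrap test (the simulated data mirror the Examples section; the seed and sample size are illustrative):

set.seed(1)  # illustrative seed
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
## Test the null hypothesis of no random effect (sigma = 0)
fit <- glmmML(y ~ x, data = dat, cluster = id, boot = 2000)
fit$bootP  # bootstrap p value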

Details

The integrals in the log likelihood function are evaluated by the Laplace approximation (default) or Gauss-Hermite quadrature. The latter is now fully adaptive; however, only approximate estimates of variances are available for the Gauss-Hermite (n.points > 1) method.
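As a hedged illustration of the two methods (simulated data; the estimates should agree closely, while the "ghq" variances are only approximate):

set.seed(2)  # illustrative seed
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
## Laplace approximation (the default)
fit.lap <- glmmML(y ~ x, data = dat, cluster = id, method = "Laplace")
## Adaptive Gauss-Hermite quadrature with 16 points
fit.ghq <- glmmML(y ~ x, data = dat, cluster = id, method = "ghq", n.points = 16)
fit.lap$coefficients
fit.ghq$coefficients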

For the binomial families, the response can be a two-column matrix; see the help page for glm for details.
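A sketch of the two-column response form, with simulated binomial counts (all object names are illustrative):

set.seed(3)  # illustrative seed
id <- factor(rep(1:10, each = 3))  # 10 clusters, 3 binomial observations each
n <- rep(8, 30)                    # 8 trials per observation
succ <- rbinom(30, size = n, prob = rep(runif(10), each = 3))
x <- rnorm(30)
dat2 <- data.frame(succ = succ, fail = n - succ, x = x, id = id)
## Response as cbind(successes, failures)
glmmML(cbind(succ, fail) ~ x, data = dat2, cluster = id)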

Value

The return value is a list, an object of class 'glmmML'. The components are:

boot Number of bootstrap replicates
converged Logical: TRUE if the optimization converged
coefficients Estimated regression coefficients
coef.sd Their standard errors
sigma The estimated random effects' standard deviation
sigma.sd Its standard error
variance The estimated variance-covariance matrix. The last row/column corresponds to the standard deviation of the random effects (sigma). See the access sketch after this list.
aic AIC
bootP Bootstrap p value from testing the null hypothesis of no random effect (sigma = 0)
deviance Deviance
mixed Logical: TRUE if a mixed (random intercept) model was fitted
df.residual Degrees of freedom
cluster.null.deviance Deviance from a glm with no clustering
cluster.null.df Its degrees of freedom
posterior.modes Estimated posterior modes of the random effects
terms The terms object
info Return code from the Hessian inversion. Should be 0; if not, no variances could be estimated. You could try fixing sigma at the estimated value and rerunning.
prior Which prior was used?
call The function call
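A minimal access sketch, assuming a fitted object fit produced as in the Examples section (component names as documented above):

set.seed(4)  # illustrative seed
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
fit <- glmmML(y ~ x, data = dat, cluster = id)
fit$coefficients  # fixed-effect estimates
fit$coef.sd       # their standard errors
fit$sigma         # random-intercept standard deviation
fit$variance      # variance-covariance matrix; last row/column is sigma
fit$info          # 0 if the Hessian inversion succeeded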

Note

The optimization may not converge with the default value of start.sigma. In that case, try different starting values for sigma. If there is still no convergence, consider fixing sigma at several values and studying the profile likelihood, as sketched below.
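A sketch of such a profile, fixing sigma on a grid via fix.sigma (the grid and data are illustrative; the profile log likelihood is recovered, up to a constant, as minus half the deviance):

set.seed(5)  # illustrative seed
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
sigmas <- seq(0.1, 2, by = 0.1)
dev <- sapply(sigmas, function(s)
    glmmML(y ~ x, data = dat, cluster = id,
           start.sigma = s, fix.sigma = TRUE)$deviance)
plot(sigmas, -dev / 2, type = "l",
     xlab = "sigma", ylab = "profile log likelihood (up to a constant)")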

Author(s)

Göran Broström

References

Broström, G. (2003). Generalized linear models with random intercepts. http://www.stat.umu.se/forskning/reports/glmmML.pdf

See Also

glmmboot, glm, optim, glmm in Lindsey's repeated package, lmer in Matrix, and glmmPQL in MASS.

Examples

## Simulate binary responses for 20 clusters of 5 observations each
set.seed(101)  # illustrative seed, for reproducibility
id <- factor(rep(1:20, rep(5, 20)))
y <- rbinom(100, prob = rep(runif(20), rep(5, 20)), size = 1)
x <- rnorm(100)
dat <- data.frame(y = y, x = x, id = id)
## Fit a random-intercept logistic regression
glmmML(y ~ x, data = dat, cluster = id)

[Package glmmML version 0.81-4]