mle2 {bbmle}                                                R Documentation

Maximum Likelihood Estimation

Description

Estimate parameters by the method of maximum likelihood.

Usage

mle2(minuslogl, start, method, optimizer,
     fixed = NULL, data = NULL, subset = NULL,
     default.start = TRUE, eval.only = FALSE, vecpar = FALSE,
     parameters = NULL, skip.hessian = FALSE, trace = FALSE,
     gr, ...)
calc_mle2_function(formula, parameters, start, data = NULL, trace = FALSE)

Arguments

minuslogl Function to calculate negative log-likelihood; can also be specified as a formula (see Details).
start Named list. Initial values for optimizer.
method Optimization method to use. See optim.
optimizer Optimization function to use; "optim" is the default, and the Examples below also use "nlminb" and "constrOptim".
fixed Named list. Parameter values to keep fixed during optimization.
data List of data to pass to minuslogl.
subset Logical vector for subsetting the data (not yet implemented).
default.start Logical: allow default values of minuslogl as starting values?
eval.only Logical: return the value of minuslogl(start) rather than optimizing?
vecpar Logical: is first argument a vector of all parameters? (For compatibility with optim.)
parameters List of linear models for the parameters (see Details).
gr Gradient function.
... Further arguments to pass to the optimizer.
formula A formula for the likelihood (see Details).
trace Logical: print parameter values tested?
skip.hessian Bypass Hessian calculation?

Details

The optim optimizer is used to find the minimum of the negative log-likelihood. An approximate covariance matrix for the parameters is obtained by inverting the Hessian matrix at the optimum.
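
As a rough sketch (not part of the package), the same computation can be done directly with optim for the Poisson example in the Examples section; the results should approximately match coef(fit) and vcov(fit):

    x <- 0:10
    y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
    ## vector-argument version of the negative log-likelihood
    nll <- function(p) -sum(stats::dpois(y, lambda = p[1]/(1 + x/p[2]), log = TRUE))
    opt <- optim(c(ymax = 15, xhalf = 6), nll, hessian = TRUE)
    opt$par             ## roughly coef(fit)
    solve(opt$hessian)  ## roughly vcov(fit): inverse Hessian at the optimum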

The minuslogl argument can also specify a formula, rather than an objective function, of the form x ~ ddistn(param1, ..., paramn). In this case ddistn is taken to be a probability or density function, which must have (literally) x as its first argument (although this argument may be interpreted as a matrix of multivariate responses) and which must have a log argument that can be used to specify that the log-probability or log-probability-density is required. If a formula is specified, then parameters can contain a list of linear models for the parameters.
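
For example, dpois meets both requirements, which is why the formula fit y ~ dpois(lambda = ymean) in the Examples works:

    args(stats::dpois)  ## function (x, lambda, log = FALSE):
                        ## first argument is literally x, and a log argument is available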

If a formula is given and non-trivial linear models are given in parameters for some of the variables, then model matrices will be generated using model.matrix. In that case start can either be an exhaustive list of starting values (in the order given by model.matrix), or values can be given just for the higher-level parameters; all of the additional parameters generated by model.matrix will then be given starting values of zero.
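
For instance, in the last example below, parameters = list(lymax ~ g) expands lymax over the columns of model.matrix(~ g); a small sketch of those columns for a two-level factor:

    g <- factor(c("a", "b"))
    colnames(model.matrix(~ g))  ## "(Intercept)" "gb"

so with start = list(lymax = 0, ...) the additional coefficient corresponding to "gb" is started at zero.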

The trace argument applies only when a formula is specified. If you specify a function, you can build in your own print() or cat() statement to trace its progress. (You can also specify a value for trace as part of a control list for optim(): see optim.)
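
For example (a sketch only, reusing x and y from the Examples), tracing can be added by hand:

    LLtr <- function(ymax = 15, xhalf = 6) {
        cat("ymax =", ymax, " xhalf =", xhalf, "\n")  ## report each trial value
        -sum(stats::dpois(y, lambda = ymax/(1 + x/xhalf), log = TRUE))
    }
    ## mle2(LLtr)  ## prints the parameter values at every evaluation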

The skip.hessian argument is useful if the function is crashing with a "non-finite finite difference value" error when trying to evaluate the Hessian, but will preclude many subsequent confidence interval calculations. (You will know the Hessian is failing if you use method="Nelder-Mead" and still get a finite-difference error.)
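
For example (a hedged sketch; point estimates are unaffected, but Hessian-based output is not computed):

    ## fitNH <- mle2(LL, skip.hessian = TRUE)
    ## coef(fitNH)  ## still available without the Hessian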

If convergence fails, see optim for the meanings of the error codes.
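
Assuming a fit such as fit from the Examples, the raw optimizer output (including the convergence code) is stored in the details slot of the returned object (see mle2-class):

    ## fit@details$convergence  ## 0 indicates successful convergence for optim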

Value

An object of class "mle2".

Note

Note that the minuslogl function should return the negative log-likelihood, -log L (not the log-likelihood, log L, nor the deviance, -2 log L). It is the user's responsibility to ensure that the likelihood is correct, and that asymptotic likelihood inference is valid (e.g. that there are "enough" data and that the estimated parameter values do not lie on the boundary of the feasible parameter space).
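
Schematically, for a density function ddistn and response y:

    ## correct:  -sum(ddistn(y, ..., log = TRUE))      negative log-likelihood
    ## wrong:     sum(ddistn(y, ..., log = TRUE))      log-likelihood
    ## wrong:    -2 * sum(ddistn(y, ..., log = TRUE))  deviance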

If lower, upper, control$parscale, or control$ndeps are specified for optim fits, they must be named vectors.
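
For example (the parscale values here are purely illustrative):

    ## named bounds, as in the Examples below:
    ## mle2(LL, method = "L-BFGS-B", lower = c(ymax = 0, xhalf = 0))
    ## a named parscale vector, passed through to optim:
    ## mle2(LL, control = list(parscale = c(ymax = 25, xhalf = 3)))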

See Also

mle2-class

Examples

x <- 0:10
y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
LL <- function(ymax=15, xhalf=6)
    -sum(stats::dpois(y, lambda=ymax/(1+x/xhalf), log=TRUE))
## uses default parameters of LL
(fit <- mle2(LL))
mle2(LL, fixed=list(xhalf=6))

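## null (constant-mean) Poisson model via the formula interface; compare with fit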
(fit0 <- mle2(y~dpois(lambda=ymean),start=list(ymean=mean(y))))
anova(fit0,fit)
summary(fit)
logLik(fit)
vcov(fit)
p1 <- profile(fit)
plot(p1, absVal=FALSE)
confint(fit)

## use bounded optimization
## the lower bounds are really > 0, but we use >=0 to stress-test
## profiling; note lower must be named
(fit1 <- mle2(LL, method="L-BFGS-B", lower=c(ymax=0, xhalf=0)))
p1 <- profile(fit1)

plot(p1, absVal=FALSE)
## a better parameterization:
LL2 <- function(lymax=log(15), lxhalf=log(6))
    -sum(stats::dpois(y, lambda=exp(lymax)/(1+x/exp(lxhalf)), log=TRUE))
(fit2 <- mle2(LL2))
plot(profile(fit2), absVal=FALSE)
exp(confint(fit2))
vcov(fit2)
cov2cor(vcov(fit2))

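## the same log-scale model via the formula interface,
## with trivial linear models for the parameters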
mle2(y~dpois(lambda=exp(lymax)/(1+x/exp(lhalf))),
   start=list(lymax=0,lhalf=0),
   parameters=list(lymax~1,lhalf~1))

## try bounded optimization with nlminb and constrOptim
(fit1B <- mle2(LL, optimizer="nlminb", lower=c(ymax=1e-7, xhalf=1e-7)))
p1B <- profile(fit1B)
confint(p1B)
(fit1C <- mle2(LL, optimizer="constrOptim", ui = c(ymax=1, xhalf=1), ci=2,
   method="Nelder-Mead"))

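## simulate data in which lymax differs between two groups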
set.seed(1001)
lymax <- c(0,2)
lhalf <- 0
x <- sort(runif(200))
g <- factor(sample(c("a","b"),200,replace=TRUE))
y <- rnbinom(200,mu=exp(lymax[g])/(1+x/exp(lhalf)),size=2)

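## negative binomial fit with a group-specific lymax (parameters = list(lymax ~ g))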
fit3 <- mle2(y~dnbinom(mu=exp(lymax)/(1+x/exp(lhalf)),size=exp(logk)),
    parameters=list(lymax~g),
    start=list(lymax=0,lhalf=0,logk=0))


[Package bbmle version 0.9.0 Index]