maxNR {micEcon}    R Documentation

Description

Unconstrained maximisation algorithm based on the Newton-Raphson method.

Usage

maxNR(fn, grad = NULL, hess = NULL, theta, print.level = 0,
      tol = 1e-06, gradtol = 1e-06, steptol = 1e-06,
      lambdatol = 1e-06, qrtol = 1e-10, iterlim = 15,
      constPar = NULL, activePar = rep(TRUE, NParam), ...)

Arguments
fn: function to be maximised. In order to use a numeric gradient and the BHHH method, fn must return a vector of observation-specific likelihood values; those are summed by maxNR if necessary. If the parameters are out of range, fn should return NA. See Details for constant parameters. (A sketch of the observation-wise conventions follows this list.)
grad: gradient of the function. If NULL, a numeric gradient is used. It must return a gradient vector, or a matrix whose columns correspond to individual parameters. Note that this corresponds to t(numericGradient(fn)), not numericGradient(fn). For the BHHH method, the rows must correspond to the likelihood gradients of the individual observations; these are summed over observations in order to get a single gradient vector. A similar observation-wise matrix is allowed for maxNR too; it is simply summed over observations.
hess: Hessian matrix of the function. If missing, a numeric Hessian, based on the gradient, is used.
theta: initial value for the parameter vector.
print.level: this argument determines the level of printing which is done during the maximisation process. The default value of 0 means that no printing occurs, a value of 1 means that initial and final details are printed, and a value of 2 means that full tracing information is printed for every iteration.
tol: stopping condition. Stop if the absolute difference between successive iterations is less than tol; return code=2.

gradtol: stopping condition. Stop if the norm of the gradient is less than gradtol; return code=1.
steptol: stopping/error condition. If no acceptable (higher) value of the function is found with step=1, the step is divided by 2 and tried again. This is repeated until step < steptol, at which point code=3 is returned.
lambdatol: controls whether the Hessian is treated as negative definite. If the largest of the eigenvalues of the Hessian is larger than -lambdatol, a suitable diagonal matrix is subtracted from the Hessian (quadratic hill-climbing).
qrtol: ?
iterlim: stopping condition. Stop after more than iterlim iterations; return code=4.
constPar: index vector; which of the parameters must be treated as constants.

activePar: logical vector; which parameters are treated as free (resp. constant).
...: further arguments to fn, grad and hess.
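To make the fn and grad conventions concrete, here is a minimal sketch for a two-parameter normal model (the data vector x and both functions are illustrative assumptions of this sketch, not part of the package):

## Illustrative data (an assumption for this sketch):
x <- rnorm(100, mean = 1, sd = 2)

## Observation-wise log-likelihood: returns one value per observation,
## as the BHHH method requires; maxNR sums the values itself if needed.
loglikObs <- function(theta) {
    mu <- theta[1]
    sigma <- theta[2]
    if (sigma <= 0) return(NA)    # out-of-range parameters: return NA
    dnorm(x, mean = mu, sd = sigma, log = TRUE)
}

## Observation-wise gradient: an n x 2 matrix whose columns correspond
## to the parameters (mu, sigma), i.e. t(numericGradient(fn));
## maxNR sums the rows over observations.
gradlikObs <- function(theta) {
    mu <- theta[1]
    sigma <- theta[2]
    cbind((x - mu)/sigma^2,
          (x - mu)^2/sigma^3 - 1/sigma)
}

a <- maxNR(loglikObs, gradlikObs, theta = c(0, 1))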
Details

The algorithm can treat constant parameters and related changes of parameter values. Constant parameters are useful if a parameter value converges toward the edge of its support, or for testing.

One way is to set constPar to a non-NULL value. A second possibility is for fn to signal which parameters are constant and to change the values of the parameter vector: the value of fn may carry the attributes constVal (parameters that are to be set to constants) and newVal (new values for parameters). The difference between constVal and newVal is that the latter parameters are not set to constants. If the attribute newVal is present, the new function value is allowed to be below the previous one. A minimal sketch of the first route follows.
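This sketch reuses loglikObs and x from the sketch above; the starting values are illustrative assumptions:

## Hold sigma (parameter 2) constant at its starting value;
## only mu is estimated.
a <- maxNR(loglikObs, theta = c(0, 1.5), constPar = 2)
a$estimate    # the second component should stay at 1.5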
Value

A list of class "maximisation" with the following components:
maximum: fn value at the maximum (the last calculated value if not converged).

estimate: estimated parameter value.

gradient: the last gradient value calculated. Should be close to 0 in case of normal convergence.
hessian: Hessian at the maximum (the last calculated value if not converged). May be used as the basis for a variance-covariance matrix (see the sketch after this list).
code: return code:
1: the norm of the gradient is less than gradtol (normal convergence);
2: successive function values differ by less than tol (normal convergence);
3: no higher function value could be found with step > steptol;
4: the iteration limit iterlim was exceeded.
message: a short message describing the code.
last.step: list describing the last unsuccessful step if code=3, with components including the fn value at theta0 (the parameter value at which the step failed).
activePar: logical vector; which parameters are not constants.

iterations: number of iterations.

type: character string, the type of maximisation.
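As a sketch of typical downstream use (standard maximum-likelihood practice, not a helper documented here), the components can be combined into standard errors after checking the return code:

## a <- maxNR(...) as above
if (a$code <= 2) {                 # codes 1 and 2 signal normal convergence
    vc <- solve(-a$hessian)        # variance-covariance estimate
    se <- sqrt(diag(vc))           # standard errors
    print(cbind(estimate = a$estimate, std.error = se))
} else {
    cat("No convergence:", a$message, "\n")
}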
Note

The Newton-Raphson algorithm, with analytic gradient and Hessian supplied, should converge in no more than 20 iterations on a well-specified model.
Author(s)

Ott Toomet <siim@obs.ee>
References

W. Greene: "Advanced Econometrics".

S. M. Goldfeld and R. E. Quandt: "Nonlinear Methods in Econometrics". Amsterdam: North-Holland, 1972.
See Also

nlm for Newton-Raphson optimisation, optim for different gradient-based optimisation methods.
Examples

## ML estimation of exponential duration model:
t <- rexp(100, 2)
loglik <- function(theta) sum(log(theta) - theta*t)
## Note the log-likelihood and gradient are summed over observations
gradlik <- function(theta) sum(1/theta - t)
hesslik <- function(theta) -100/theta^2
## Estimate with numeric gradient and Hessian
a <- maxNR(loglik, theta=1, print.level=2)
summary(a)
## You would probably prefer 1/mean(t) instead ;-)
## Estimate with analytic gradient and Hessian
a <- maxNR(loglik, gradlik, hesslik, theta=1)
summary(a)