ucminf {ucminf}
An algorithm for general-purpose unconstrained non-linear optimization. The algorithm is of quasi-Newton type with BFGS updating of the inverse Hessian and soft line search with a trust region type monitoring of the input to the line search algorithm. The interface of ‘ucminf’ is designed for easy interchange with ‘optim’.
ucminf(par, fn, gr = NULL, ..., control = list(), hessian=0)
par: Initial estimate of minimum for fn.

fn: Objective function to be minimized.

gr: Gradient of objective function. If NULL a finite difference approximation is used.

...: Optional arguments passed to the objective and gradient functions.

control: A list of control parameters. See ‘Details’.
hessian: Integer value controlling which estimate of the (inverse) Hessian is returned with the result; see the ‘hessian’ and ‘invhessian’ components under the returned value below. If a TRUE or FALSE value is given it will switch between option 1 or 0.
The algorithm is documented in (Nielsen, 2000) (see References below) together with a comparison to the Fortran subroutine ‘MINF’ and the Matlab function ‘fminunc’. The implementation of ‘ucminf’ in R uses the original Fortran version of the algorithm.
The interface in R is designed so that it is very easy to switch between using ‘ucminf’ and ‘optim’. The arguments par, fn, gr, and hessian are all the same (with a few extra options for hessian in ‘ucminf’). The difference is that there is no method argument in ‘ucminf’ and that some of the components in the control argument are different due to differences in the algorithms.
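For example (using a made-up quadratic objective purely for illustration), the same call can be passed to either optimizer, with only the method argument added for ‘optim’:

# A simple quadratic objective, made up for illustration
f <- function(x) sum((x - 3)^2)
ucminf(par = c(0, 0), fn = f)                   # quasi-Newton with BFGS updating
optim(par = c(0, 0), fn = f, method = "BFGS")   # same arguments, plus 'method'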
The algorithm can be given an initial estimate of the Hessian for the optimization and it is possible to get the final approximation of the Hessian based on the series of BFGS updates. This extra functionality may be useful for optimization in a series of related problems.
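As a sketch of this (the two objectives below are made up and stand for a pair of related problems), the Hessian estimate returned from one fit can be turned into the initial inverse Hessian for the next fit through the invhessian.lt control component described below:

# Two closely related objectives, made up for illustration
f1 <- function(x) sum((x - 1)^2) + 0.1 * sum(x^4)
f2 <- function(x) sum((x - 1.1)^2) + 0.1 * sum(x^4)

fit1 <- ucminf(par = c(0, 0), fn = f1, hessian = 3)   # hessian = 3 also returns $hessian
H0   <- fit1$hessian
# Warm-start the related problem with the lower triangle of solve(H0)
fit2 <- ucminf(par = fit1$par, fn = f2,
               control = list(invhessian.lt = solve(H0)[lower.tri(H0, diag = TRUE)]))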
The functions fn and gr can return Inf or NaN if the functions cannot be evaluated at the supplied value, but the functions must be computable at the initial value. The functions are not allowed to return NA. Any names given to par will be copied to the vectors passed to fn and gr. No other attributes of par are copied over.
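A small sketch of these rules (the objective below is made up for illustration): the function may return Inf outside its domain, must be finite at the starting point, and can rely on the names given to par.

# Objective defined only for positive parameters; returns Inf (never NA) elsewhere
fpos <- function(x) {
  if (any(x <= 0)) return(Inf)
  as.numeric(x["a"] * log(x["a"]) + (x["b"] - 1)^2)   # names of 'par' are available
}
ucminf(par = c(a = 2, b = 0.5), fn = fpos)            # starting point is computable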
The control argument is a list that can supply any of the following components (an example call follows the list):

trace: If positive, tracing information on the progress of the optimization is printed.

grtol: Stopping tolerance on the maximum absolute gradient; default grtol = 1e-6.

xtol: Stopping tolerance on the step length; default xtol = 1e-12.

stepmax: Initial maximal allowed step length; default stepmax = 1.

maxeval: Maximum number of function evaluations; default maxeval = 500.

grad: Finite difference approximation used for the gradient; default grad = 'forward'.

gradstep: Step lengths used in the finite difference gradient approximations; default gradstep = c(1e-6, 1e-8).

invhessian.lt: Lower triangle of an initial approximation to the inverse Hessian. If H0 is the initial hessian matrix then the lower triangle of the inverse of H0 can be found as invhessian.lt = solve(H0)[lower.tri(H0,diag=TRUE)].
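For example (the settings and objective below are chosen only for illustration, and assume 'central' is accepted as the alternative to the default 'forward'):

# Hypothetical control settings, for illustration only
H0   <- diag(2)                                  # initial Hessian guess (identity)
ctrl <- list(grtol = 1e-8, maxeval = 1000, grad = 'central',
             invhessian.lt = solve(H0)[lower.tri(H0, diag = TRUE)])
ucminf(par = c(1, 1), fn = function(x) sum((x - 2)^2), control = ctrl)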
The value returned is a list with the following components:

par: Computed minimizer.

value: Objective function value at computed minimizer.

convergence: Integer flag for the reason of termination (see message for a textual description).

message: String with reason of termination.

hessian, invhessian: Estimate of the (inverse) Hessian at the computed minimizer. The type of estimate is given by the input argument ‘hessian’.

invhessian.lt: The lower triangle of the final approximation to the inverse Hessian based on the series of BFGS updates during optimization.

info: Information about the search.
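These components are extracted from the returned list in the usual way; for example (with a made-up objective):

fit <- ucminf(par = c(1, 1), fn = function(x) sum((x - 5)^2), hessian = 1)
fit$par          # computed minimizer
fit$value        # objective value at the minimizer
fit$convergence  # termination flag
fit$message      # textual reason for termination
fit$info         # information about the search
fit$hessian      # Hessian estimate, available because hessian = 1 was requested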
‘UCMINF’ algorithm design and Fortran code by Hans Bruun Nielsen.
Implementation in R by Stig B. Mortensen, sbm@imm.dtu.dk.
Nielsen, H. B. (2000) ‘UCMINF - An Algorithm For Unconstrained, Nonlinear Optimization’, Report IMM-REP-2000-18, Department of Mathematical Modelling, Technical University of Denmark. http://www2.imm.dtu.dk/~hbn/publ/TR0019.ps or http://orbit.dtu.dk/recid/200975.
The original Fortran source code can be found at http://www2.imm.dtu.dk/~hbn/Software/ucminf.f. The code has been slightly modified in this package to be suitable for use with R.
The general structure of the implementation in R is based on the package ‘FortranCallsR’ by Diethelm Wuertz.
## Rosenbrock Banana function
fR = function(x) { (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2 }
gR = function(x) {
  c(-400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
     200 * (x[2] - x[1] * x[1]))
}

# Find minimum
ucminf(par = c(2, .5), fn = fR, gr = gR)

# Compare hessian approximations
ucminf(par = c(2, .5), fn = fR, gr = gR, hessian = 1)$hessian
ucminf(par = c(2, .5), fn = fR, gr = gR, hessian = 3)$hessian

# Compare run times with optim's BFGS method
# (chosen convergence criteria result in similar accuracy)
system.time(for (i in 1:500) ucminf(par = c(2, 0.5), fn = fR, gr = gR))
system.time(for (i in 1:500) optim(par = c(2, 0.5), fn = fR, gr = gR, method = 'BFGS'))

## Quadratic function
fQ = function(x) { sum((4 * x - 1)^2) }
gQ = function(x) { 32 * x - 8 }

# Find minimum with too small stepmax and print trace
ucminf(par = c(20.5, 20.0), fn = fQ, gr = gQ,
       control = list(stepmax = 1, trace = TRUE))

# The same again with a larger stepmax
ucminf(par = c(20.5, 20.0), fn = fQ, gr = gQ,
       control = list(stepmax = 100, trace = TRUE))