numericGradient {micEcon} | R Documentation |
Calculate (central) numeric gradient and Hessian of a function.
numericGradient accepts vector-valued functions.
numericGradient(f, t0, eps=1e-06, ...)
numericHessian(f, grad=NULL, t0, eps=1e-06, ...)
numericNHessian(f, t0, eps=1e-6, ...)
f: function to be differentiated. The first argument must be the parameter vector with respect to which it is differentiated.
grad: function, analytic gradient of f.
t0: vector, the parameter values at which the derivatives are calculated.
eps: numeric, the step size for numeric differentiation.
...: further arguments for f.
numericGradient numerically differentiates a (vector-valued) function with respect to its (vector-valued) argument. If the function value is an Nval * 1 vector and the parameter is an Npar * 1 vector, the resulting gradient is an Nval * Npar matrix.
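For instance, a function returning two values, differentiated with respect to three parameters, should give a 2 * 3 gradient matrix. A minimal sketch (the function fv is purely illustrative, and micEcon is assumed to be attached):

fv <- function(t0) c(sum(t0^2), prod(t0))  # 2 function values, 3 parameters
numericGradient(fv, t0=c(1, 2, 3))         # 2 x 3 matrix: rows = function values,
                                           # columns = parameters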
numericHessian checks whether a gradient function is supplied and, if so, calculates the Hessian as the gradient of the gradient; otherwise it computes a full numeric Hessian (numericNHessian).
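Both routes should give approximately the same result, although the fully numeric second differences are less accurate. A minimal sketch (assuming micEcon is attached):

f0 <- function(t0) exp(-t0[1]^2 - t0[2]^2)
gradf0 <- function(t0) -2*t0*f0(t0)
h1 <- numericHessian(f0, grad=gradf0, t0=c(1, 2))  # differentiates the gradient
h2 <- numericNHessian(f0, t0=c(1, 2))              # pure numeric second differences
max(abs(h1 - h2))                                  # should be a small number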
Matrix. For numericGradient, the number of rows equals the length of the function value vector and the number of columns equals the length of the parameter vector. For numericHessian, both the number of rows and the number of columns equal the length of the parameter vector.
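These dimensions can be checked directly; a short sketch (assuming micEcon is attached):

f0 <- function(t0) exp(-t0[1]^2 - t0[2]^2)
dim(numericGradient(f0, c(1, 2)))    # 1 x 2: one function value, two parameters
dim(numericHessian(f0, t0=c(1, 2)))  # 2 x 2: square in the parameters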
You should not use numerical differentiation in optimisation routines. Although quite precise in simple cases, it may perform very poorly in more difficult situations.
Ott Toomet, siim@obs.ee
# A simple example with a Gaussian bell
f0 <- function(t0) exp(-t0[1]^2 - t0[2]^2)
numericGradient(f0, c(1, 2))
numericHessian(f0, t0=c(1, 2))

# An example with the analytic gradient
gradf0 <- function(t0) -2*t0*f0(t0)
numericHessian(f0, gradf0, t0=c(1, 2))
# The results should be similar to those in the previous case

# The central numeric derivatives usually have quite a high precision
compareDerivatives(f0, gradf0, t0=1:2)
# The difference is around 1e-10