nleqslv {nleqslv}    R Documentation
The function solves a system of nonlinear equations with either a Broyden or a full Newton method. It provides linesearch and trust region global strategies for difficult systems.
Usage

nleqslv(x, fn, jac=NULL, ...,
        method = c("Broyden", "Newton"),
        global = c("dbldog", "pwldog", "qline", "gline"),
        xscalm = c("fixed", "auto"),
        control = list())
Arguments

x        A list or numeric vector of starting estimates.

fn       A function of x returning the function values.

jac      A function to return the Jacobian for the fn function.
         If not supplied, numerical derivatives will be used.

...      Further arguments to be passed to fn and jac.

method   The method used for finding a solution. See 'Details'.

global   The global strategy to apply. See 'Details'.

xscalm   The type of x scaling to use. See 'Details'.

control  A list of control parameters. See 'Details'.
Details

The algorithms implemented in nleqslv are based on Dennis and Schnabel (1996).

Method Broyden starts with a computed Jacobian of the function and then updates this Jacobian after each successful iteration using the so-called Broyden update. This method often shows superlinear convergence towards a solution. When nleqslv determines that it cannot continue with the current Broyden matrix, it will compute a new Jacobian.
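As an illustrative sketch only (not the package's internal, QR-based implementation), the Broyden rank-1 update replaces the current Jacobian approximation B by B + ((df - B dx) dx') / (dx' dx), where dx is the last step and df the corresponding change in function values:

```r
## Sketch of one Broyden rank-1 update step -- for intuition only;
## nleqslv updates a QR decomposition rather than forming B explicitly.
broyden_update <- function(B, dx, df) {
    # B  : current Jacobian approximation (n x n matrix)
    # dx : step taken, x_new - x_old (numeric vector)
    # df : change in function values, fn(x_new) - fn(x_old)
    B + ((df - B %*% dx) %*% t(dx)) / sum(dx * dx)
}
```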
Method Newton calculates a Jacobian of the function fn at each iteration. Close to a solution this method will usually show quadratic convergence.
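Stripped of the global strategy, scaling and safeguards that nleqslv adds, the bare Newton iteration for a system F(x) = 0 looks like this (a sketch for intuition, not the package's code):

```r
## Plain Newton iteration for F(x) = 0 -- illustration only.
newton_sketch <- function(x, fn, jac, maxit = 20, tol = 1e-8) {
    for (i in seq_len(maxit)) {
        fx <- fn(x)
        if (max(abs(fx)) < tol) break          # converged on function values
        x <- x - solve(jac(x), fx)             # Newton step: J(x) s = -F(x)
    }
    x
}
```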
Both methods apply a so-called (backtracking) global strategy to find a better (more acceptable) iterate. The function criterion used by the algorithm is the sum of squares of the function values; "acceptable" means a sufficient decrease of the current function criterion value compared to that of the previous iteration. A comprehensive discussion of these issues can be found in Dennis and Schnabel (1996).

Both methods apply a QR decomposition to the Jacobian as implemented in LAPACK. The Broyden method applies a rank-1 update to the Jacobian at the end of each iteration and uses the method implemented in Reichel and Gragg (1990).
The four global strategies that may be specified in the global argument are:

qline    a quadratic linesearch

gline    a geometric linesearch

dbldog   a trust region method using the double dogleg method

pwldog   a trust region method using the (single) dogleg method of Powell (1970)

The double dogleg method is the default global strategy employed by this package.
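For instance, the test function from the 'Examples' section can be solved with each strategy in turn:

```r
library(nleqslv)

## Dennis & Schnabel example 6.5.1 (see the Examples section)
dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}
xstart <- c(2, 0.5)

## same problem, different global strategies
nleqslv(xstart, dslnex, global="dbldog")  # default: double dogleg
nleqslv(xstart, dslnex, global="pwldog")  # Powell (single) dogleg
nleqslv(xstart, dslnex, global="qline")   # quadratic linesearch
nleqslv(xstart, dslnex, global="gline")   # geometric linesearch
```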
The parameters x may be scaled during the search for a zero of fn. The xscalm argument provides two possibilities for scaling.

fixed    The scaling factors are set to the values supplied in the scalex component of the control argument and remain unchanged during the iterations. The scaling factor of any element of x may be set to the inverse of the typical value of that element of x, ensuring that all elements of x are approximately equal in size.

auto     The scaling factors are calculated by the algorithm and may be adjusted during the iterations.
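For example, both scaling modes can be requested as follows (the scaling factors of 1 are illustrative; for this test function all elements of x are already of order 1):

```r
library(nleqslv)

dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}

## fixed scaling: scalex holds the inverse typical size of each element of x
nleqslv(c(2, 0.5), dslnex, xscalm="fixed", control=list(scalex=c(1, 1)))

## automatic scaling: the factors are computed by the algorithm
nleqslv(c(2, 0.5), dslnex, xscalm="auto")
```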
When an ill-conditioned or singular Jacobian is detected, a Levenberg-Marquardt correction is applied to the Jacobian to ensure that the algorithm can proceed in a sensible manner (see Dennis and Schnabel, 1996, page 149, equation 6.5.5; see Nocedal and Wright (2006) and Moré (1978) for an efficient procedure for performing the correction). In this implementation a Jacobian is considered to be ill-conditioned when the inverse condition number of the Jacobian is less than .Machine$double.eps^(2/3).
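The test described above can be reproduced directly with base R's rcond(), which estimates the inverse (reciprocal) condition number of a matrix:

```r
## Sketch of the ill-conditioning test: a nearly singular Jacobian
## has an inverse condition number below .Machine$double.eps^(2/3).
J <- matrix(c(1, 1, 1, 1 + 1e-12), 2, 2)   # nearly singular 2x2 matrix
rcond(J) < .Machine$double.eps^(2/3)        # TRUE: treated as ill-conditioned
```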
When the function to be solved returns non-finite values for a parameter vector x and the algorithm is not evaluating a numerical Jacobian, the non-finite value will be replaced by a large number, forcing the algorithm to backtrack, i.e. to decrease the linesearch factor or the trust region size. When the algorithm is evaluating a numerical Jacobian, it will stop with an error message on detecting non-finite function values.
The control argument is a list that can supply any of the following components:

xtol     The relative steplength tolerance. The default value is 1e-8.

ftol     The function value tolerance: convergence is declared when the largest absolute function value is smaller than ftol. The default value is 1e-8.

btol     The backtracking tolerance. For the Broyden method a new Jacobian will be calculated if the Jacobian is outdated. The default value is 1e-3.

sigma    The reduction factor used in the geometric linesearch.

scalex   A vector of scaling values for the parameters x (used when xscalm="fixed").

maxit    The maximum number of major iterations.

trace    A non-negative integer; a value greater than 0 gives a detailed report of the progress of the iterations.

chkjac   A logical indicating whether the analytical Jacobian should be checked against a numerical approximation. The default value is FALSE. The first 10 errors are printed. The code for this check is derived from the code in Bouaricha and Schnabel (1997).

delta    The initial (scaled) trust region radius; a value of -1.0 uses the length of the Cauchy step, and -2.0 the length of the Newton step.

stepmax  The maximum (scaled) steplength allowed.
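A control list combining several of these settings might look like this (fn stands for any suitable function; the chosen values are illustrative, not recommendations):

```r
## Illustrative control list combining several of the settings above
ctrl <- list(xtol  = 1e-10,  # tighter relative steplength tolerance
             ftol  = 1e-10,  # tighter function value tolerance
             maxit = 500,    # allow more major iterations
             trace = 1)      # report progress of the iterations
## nleqslv(xstart, fn, control=ctrl)
```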
Value

A list containing the following components:

x        final values for x

fvec     function values

termcd   termination code as integer

message  a string describing the termination code

scalex   a vector containing the scaling factors, which will be the final values when automatic scaling was selected

njcnt    number of Jacobian evaluations

nfcnt    number of function evaluations, excluding those required for calculating a Jacobian
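The components can be extracted from the returned list in the usual way, for example:

```r
library(nleqslv)

dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}
res <- nleqslv(c(2, 0.5), dslnex)

res$x        # final values for x
res$fvec     # function values at the final x (near 0 on success)
res$termcd   # integer termination code
res$message  # textual description of the termination code
res$nfcnt    # number of function evaluations
```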
Note

You cannot use this function recursively: the function fn should not in its turn call nleqslv.
References

Bouaricha, A. and Schnabel, R.B. (1997), Algorithm 768: TENSOLVE: A Software Package for Solving Systems of Nonlinear Equations and Nonlinear Least-squares Problems Using Tensor Methods, Transactions on Mathematical Software, 23, 2, pp. 174–195.
Dennis, J.E. Jr and Schnabel, R.B. (1996), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Siam.
Moré, J.J. (1978), The Levenberg-Marquardt Algorithm, Implementation and Theory, In Numerical Analysis, G.A. Watson (Ed.), Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105–116.
Nocedal, J. and Wright, S.J. (2006), Numerical Optimization, Springer.
Powell, M.J.D. (1970), A hybrid method for nonlinear algebraic equations, In Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz (Ed.), Gordon & Breach.
Powell, M.J.D. (1970), A Fortran subroutine for solving systems of nonlinear equations, In Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz (Ed.), Gordon & Breach.
Reichel, L. and W.B. Gragg (1990), Algorithm 686: FORTRAN subroutines for updating the QR decomposition, ACM Trans. Math. Softw., 16, 4, pp. 369–377.
Examples

# Dennis Schnabel example 6.5.1 page 149
dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}

jacdsln <- function(x) {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- 2*x[1]
    Df[1,2] <- 2*x[2]
    Df[2,1] <- exp(x[1]-1)
    Df[2,2] <- 3*x[2]^2
    Df
}

BADjacdsln <- function(x) {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- 4*x[1]
    Df[1,2] <- 2*x[2]
    Df[2,1] <- exp(x[1]-1)
    Df[2,2] <- 5*x[2]^2
    Df
}

xstart <- c(2,0.5)
fstart <- dslnex(xstart)
xstart
fstart

# a solution is c(1,1)
nleqslv(xstart, dslnex, control=list(btol=.01))

# Cauchy start
nleqslv(xstart, dslnex, control=list(trace=1,btol=.01))

# Newton start
nleqslv(xstart, dslnex, control=list(trace=1,btol=.01,delta=-2.0))

# example for error message in case of an error in analytical jacobian
nleqslv(xstart, dslnex, BADjacdsln, method="Newton",
        control=list(btol=.01,chkjac=TRUE))

# singular function with a singular start
sngsrt <- function(x) {
    y <- numeric(length(x))
    y[1] <- x[1]^2/2 + x[2]
    y[2] <- x[1] + x[2]^2/2
    y
}
sngsrtjac <- function(x) {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- x[1]
    Df[1,2] <- 1
    Df[2,1] <- 1
    Df[2,2] <- x[2]
    Df
}
xstart <- c(1,1)
nleqslv(xstart, sngsrt, sngsrtjac, global="dbldog",
        control=list(trace=1,btol=.01))