nleqslv {nleqslv}          R Documentation

Solving systems of nonlinear equations with Broyden or Newton

Description

The function solves a system of nonlinear equations with either a Broyden or a full Newton method. It provides linesearch and trust region global strategies for difficult systems.

Usage

nleqslv(x, fn, jac=NULL, ...,
               method = c("Broyden", "Newton"),
               global = c("dbldog", "pwldog", "qline", "gline"),
               xscalm = c("fixed","auto"),
               control = list()
        )

Arguments

x A list or numeric vector of starting estimates.
fn A function of x returning the function values.
jac A function to return the Jacobian for the fn function. If not supplied, numerical derivatives will be used.
... Further arguments to be passed to fn and jac.
method The method used for finding a solution. See ‘Details’.
global The global strategy to apply. See ‘Details’.
xscalm The type of x scaling to use. See ‘Details’.
control A list of control parameters. See ‘Details’.

Details

The algorithms implemented in nleqslv are based on Dennis and Schnabel (1996).

Method Broyden starts with a computed Jacobian of the function and then updates this Jacobian after each successful iteration using the so-called Broyden update. This method often shows superlinear convergence towards a solution. When nleqslv determines that it cannot continue with the current Broyden matrix it will compute a new Jacobian.

Method Newton calculates a Jacobian of the function fn at each iteration. Close to a solution this method will usually show quadratic convergence.

Both methods apply a so-called (backtracking) global strategy to find a better (more acceptable) iterate. The function criterion used by the algorithm is the sum of squares of the function values and “acceptable” means sufficient decrease of the current function criterion value compared to that of the previous iteration. A comprehensive discussion of these issues can be found in Dennis and Schnabel (1996). Both methods apply a QR-decomposition to the Jacobian as implemented in Lapack. The Broyden method applies a rank 1 update to the Jacobian at the end of each iteration and uses the methods implemented in Reichel and Gragg (1990).
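
As an illustration, a brief sketch comparing the two methods on the dslnex function from the ‘Examples’ section below (repeated here so that the fragment is self-contained):

dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}
xstart <- c(2,0.5)
# Broyden: the initial Jacobian is updated with the Broyden formula
resB <- nleqslv(xstart, dslnex, method="Broyden")
# Newton: the Jacobian is recomputed at every iteration
resN <- nleqslv(xstart, dslnex, method="Newton")
c(resB$njcnt, resN$njcnt)   # Newton typically needs more Jacobian evaluations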

The four global strategies that may be specified in the global argument are

qline
a quadratic linesearch
gline
a geometric linesearch
dbldog
a trust region method using the double dogleg method as described in Dennis and Schnabel (1996)
pwldog
a trust region method using the Powell dogleg method as developed by Powell (1970).

The double dogleg method is the default global strategy employed by this package.
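
For example, a different strategy is selected through the global argument; a brief sketch, using dslnex and xstart as defined in the sketch above:

for( g in c("dbldog","pwldog","qline","gline") ) {
    res <- nleqslv(xstart, dslnex, global=g)
    cat(g, res$termcd, res$nfcnt, "\n")   # strategy, termination code, function evaluations
}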

The parameters x may be scaled during the search for a zero of fn. The xscalm argument provides two possibilities for scaling; a brief sketch of both follows the list.

fixed
the scaling factors are set to the values supplied in the control argument and remain unchanged during the iterations. The scaling factor of any element of x may be set to the inverse of the typical value of that element of x, ensuring that all elements of x are approximately equal in size.
auto
the scaling factors are calculated from the Euclidean norms of the columns of the Jacobian matrix. When a new Jacobian is computed, the scaling values will be set to the Euclidean norm of the corresponding column if that is larger than the current scaling value. Thus the scaling values will not decrease during the iteration. This is the method described in Moré (1978). Usually manual scaling is preferable.
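
A brief sketch of both possibilities, using dslnex and xstart as defined above (the scalex values are purely illustrative):

# manual (fixed) scaling: scale factors supplied via control$scalex
nleqslv(xstart, dslnex, xscalm="fixed", control=list(scalex=c(1,1)))
# automatic scaling from the column norms of the Jacobian
nleqslv(xstart, dslnex, xscalm="auto")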

When an ill-conditioned or singular Jacobian is detected, a Levenberg-Marquardt correction is applied to the Jacobian to ensure that the algorithm can proceed in a sensible manner (see Dennis and Schnabel, 1996, page 149, equation 6.5.5; see Nocedal and Wright (2006) and Moré (1978) for an efficient procedure for performing the correction). In this implementation a Jacobian is considered to be ill-conditioned when the inverse condition of the Jacobian is less than (.Machine$double.eps)^(2/3).
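
The threshold mentioned above can be evaluated directly:

.Machine$double.eps^(2/3)   # approximately 3.7e-11; Jacobians with a smaller
                            # inverse condition number receive the correction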

When the function to be solved returns non-finite values for a parameter vector x and the algorithm is not evaluating a numerical Jacobian, the non-finite value will be replaced by a large number, forcing the algorithm to backtrack, i.e. decrease the linesearch factor or decrease the trust region size. When the algorithm is evaluating a numerical Jacobian, it will stop with an error message on detecting non-finite function values.
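
As a hypothetical sketch (the functions fbad and fbadjac below are not part of the package), consider a system whose first component is undefined for x[1] <= 0. Because an analytical Jacobian is supplied, a non-finite function value encountered during the iteration would be replaced by a large number and the step retracted, rather than stopping with an error:

fbad <- function(x) {
    y <- numeric(2)
    y[1] <- log(x[1]) - 1        # NaN or -Inf whenever x[1] <= 0
    y[2] <- x[1] + x[2]^2 - 4
    y
}
fbadjac <- function(x) {
    matrix(c(1/x[1], 0,
             1,      2*x[2]), 2, 2, byrow=TRUE)
}
nleqslv(c(2,1), fbad, fbadjac, method="Newton")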

The control argument is a list that can supply any of the following components:

xtol
The relative steplength tolerance. When the relative steplength of all x values is smaller than this value convergence is declared. The default value is 1e-8.
ftol
The function value tolerance. Convergence is declared when the largest absolute function value is smaller than ftol. The default value is 1e-8.
btol
The backtracking tolerance. When the relative steplength in a backtracking step to find an acceptable point is smaller than the backtracking tolerance, the backtracking is terminated. In the Broyden method a new Jacobian will be calculated if the Jacobian is outdated. The default value is 1e-3.
sigma
Reduction factor for the geometric linesearch. The default value is 0.5.
scalex
A vector of scaling values for the parameters. The inverse of a scale value is an indication of the size of a parameter. The default value is 1.0 for all scale values.
maxit
The maximum number of major iterations. The default value is 150.
trace
Non-negative integer. A value of 1 will give a detailed report of the progress of the iteration.
chkjac
A logical value indicating whether to check an analytical Jacobian, if supplied. The default value is FALSE. The first 10 errors are printed. The code for this check is derived from the code in Bouaricha and Schnabel (1997).
delta
Initial trust region size. A value of -1.0 is replaced by the length of the Cauchy step in the initial point. A value of -2.0 is replaced by the length of the Newton step in the initial point. Any value less than or equal to 0 and not equal to -2.0, will be replaced by -1.0; the algorithm will thus start with the length of the Cauchy step in the initial point. If it is positive it will be set to the smaller of the value supplied or the maximum stepsize. The default is -2.0.
stepmax
Maximum scaled stepsize. The default is 1000.
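
A sketch of a call supplying several of these components, using dslnex and xstart as defined above (the values are purely illustrative):

nleqslv(xstart, dslnex,
        control=list(ftol=1e-10, xtol=1e-10, maxit=50, trace=1))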

Value

A list containing components

x final values for x
fvec function values
termcd termination code as integer. The values returned are
1
Function criterion is near zero. Convergence of function values has been achieved.
2
x-values within tolerance. This means that the relative distance between two consecutive x-values is smaller than xtol.
3
No better point found. This means that the algorithm has stalled and cannot find an acceptable new point.
This may or may not indicate acceptably small function values.
4
Iteration limit maxit exceeded.
message a string describing the termination code
scalex a vector containing the scaling factors, which will be the final values when automatic scaling was selected
njcnt number of Jacobian evaluations
nfcnt number of function evaluations, excluding those required for calculating a Jacobian
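
A sketch of inspecting the returned components, using dslnex and xstart as defined in ‘Details’:

res <- nleqslv(xstart, dslnex, control=list(btol=.01))
res$termcd    # 1 means the function criterion is near zero
res$message   # text corresponding to the termination code
res$x         # final x values
res$fvec      # function values at the final x
res$njcnt     # number of Jacobian evaluations
res$nfcnt     # number of function evaluations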

Warning

You cannot use this function recursively. Thus the function fn should not in turn call nleqslv.

References

Bouaricha, A. and Schnabel, R.B. (1997), Algorithm 768: TENSOLVE: A Software Package for Solving Systems of Nonlinear Equations and Nonlinear Least-squares Problems Using Tensor Methods, Transactions on Mathematical Software, 23, 2, pp. 174–195.

Dennis, J.E. Jr and Schnabel, R.B. (1996), Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Siam.

Moré, J.J. (1978), The Levenberg-Marquardt Algorithm, Implementation and Theory, In Numerical Analysis, G.A. Watson (Ed.), Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105–116.

Nocedal, J. and Wright, S.J. (2006), Numerical Optimization, Springer.

Powell, M.J.D. (1970), A hybrid method for nonlinear algebraic equations, In Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz (Ed.), Gordon & Breach.

Powell, M.J.D. (1970), A Fortran subroutine for solving systems of nonlinear equations, In Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz (Ed.), Gordon & Breach.

Reichel, L. and W.B. Gragg (1990), Algorithm 686: FORTRAN subroutines for updating the QR decomposition, ACM Trans. Math. Softw., 16, 4, pp. 369–377.

Examples

# Dennis Schnabel example 6.5.1 page 149
dslnex <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2
    y[2] <- exp(x[1]-1) + x[2]^3 - 2
    y
}

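# analytical Jacobian of dslnex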
jacdsln <- function(x) {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- 2*x[1]
    Df[1,2] <- 2*x[2]
    Df[2,1] <- exp(x[1]-1)
    Df[2,2] <- 3*x[2]^2

    Df
}

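# deliberately wrong Jacobian (errors in Df[1,1] and Df[2,2]), used below to demonstrate chkjac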
BADjacdsln <- function(x) {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- 4*x[1]
    Df[1,2] <- 2*x[2]
    Df[2,1] <- exp(x[1]-1)
    Df[2,2] <- 5*x[2]^2

    Df
}

xstart <- c(2,0.5)
fstart <- dslnex(xstart)
xstart
fstart

# a solution is c(1,1)

nleqslv(xstart, dslnex, control=list(btol=.01))

# Cauchy start
nleqslv(xstart, dslnex, control=list(trace=1,btol=.01,delta=-1.0))

# Newton start
nleqslv(xstart, dslnex, control=list(trace=1,btol=.01,delta=-2.0))

# example of the error messages printed when the analytical Jacobian contains errors

nleqslv(xstart, dslnex, BADjacdsln, method="Newton", 
                    control=list(btol=.01,chkjac=TRUE))

# singular function with a singular start

sngsrt <- function(x) {
    y <- numeric(length(x))
    y[1] <- x[1]^2/2 + x[2]
    y[2] <- x[1] + x[2]^2/2

    y
}

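# analytical Jacobian of sngsrt; singular at the starting point c(1,1)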
sngsrtjac <- function(x)  {
    n <- length(x)
    Df <- matrix(numeric(n*n),n,n)
    Df[1,1] <- x[1]
    Df[1,2] <- 1
    Df[2,1] <- 1
    Df[2,2] <- x[2]

    Df
}
xstart <- c(1,1)
nleqslv(xstart, sngsrt, sngsrtjac, global="dbldog",
        control=list(trace=1,btol=.01))

[Package nleqslv version 1.3]