optimal.scales {emulator} | R Documentation
Description

Uses optimization techniques (either Nelder-Mead or simulated annealing) to find the optimal scales, or roughness lengths. Function optimal.scale() (that is, the singular form) finds the single optimal scale, on the assumption that the roughness is isotropic so that all scales are identical.
Usage

optimal.scales(val, scales.start, d, use.like = TRUE, give.answers = FALSE,
               func = regressor.basis, ...)
optimal.scale(val, d, use.like = TRUE, give.answers = FALSE,
              func = regressor.basis, ...)
Arguments

val: Matrix whose rows correspond to the points at which the function is known

scales.start: Initial guess for the scales (plural). See the Details section for an explanation

d: Vector of observations, one for each row of val

use.like: Boolean, with default TRUE meaning to use the likelihood for the objective function, and FALSE meaning to use a leave-one-out bootstrap estimator

give.answers: Boolean, with default FALSE meaning to return just the roughness lengths, and TRUE meaning to return the extra information returned by optim()

func: Function used to determine the basis vectors, defaulting to regressor.basis if not given

...: Extra arguments passed to optim() or optimize(). See the Examples section for usage of this argument
Details

Internally, these functions work with the logarithms of the roughness lengths, because the lengths are inherently positive. Note, however, that the lengths themselves, not their logarithms, must be supplied to argument scales.start.
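As a sketch of this point (hypothetical call; val and d stand for a design matrix and observation vector such as those built in the Examples section):

```r
## Supply the lengths themselves to scales.start, not their logs;
## the log transformation is handled internally.
optimal.scales(val = val, scales.start = c(1, 1, 1, 1, 1, 4), d = d)

## Wrong: do not pre-log the initial guess.
## optimal.scales(val = val, scales.start = log(c(1, 1, 1, 1, 1, 4)), d = d)
```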
The reason that there are two separate functions is that optim() and optimize() are very different.
Value

If give.answers takes the default value of FALSE, a vector of roughness lengths is returned. If TRUE, the output from optim() is returned directly; note that element par is then the logarithm of the desired roughness length, because the optimization routine operates with the logs of the lengths, as detailed above.
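For instance (a sketch, assuming val and d.noisy as constructed in the Examples section), the lengths can be recovered from element par by exponentiating:

```r
## Run with give.answers = TRUE to get the full optim() output, then
## back-transform par from log space to obtain the roughness lengths.
out <- optimal.scales(val = val, scales.start = rep(1, 6), d = d.noisy,
                      give.answers = TRUE)
exp(out$par)   # the roughness lengths themselves
```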
Note

This function is slow to evaluate because it must calculate and invert the correlation matrix A each time it is called, as the scales change from call to call.

In this package, “scales” means the diagonal elements of the B matrix. See the help page for corr for more discussion of this topic.
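A sketch of this convention (assuming, per the corr help page, that the correlation between points x and x' is exp{-(x - x')^T B (x - x')} with B = diag(scales)):

```r
## "Scales" are the diagonal elements of B in the correlation function.
library(emulator)
x1 <- c(1.0, 2.0, 3.0)
x2 <- c(1.1, 1.9, 3.2)
sc <- c(2, 3, 4)          # the scales, i.e. diag(B)
corr(x1, x2, scales = sc)             # via the package
exp(-sum(sc * (x1 - x2)^2))           # the same quantity written out
```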
Note the warning about partial matching under the “dot-dot-dot” argument in both optim.Rd [used in optimal.scales()] and optimize.Rd [used in optimal.scale()]: any unmatched arguments are passed to the objective function. Thus, passing named but unmatched arguments to optimal.scale[s]() will cause an error, because those arguments are passed, by optim() or optimize(), to the internal objective.fun().

In particular, note that passing control=list(maxit=4) to optimal.scale() will cause an error for this reason [optimize() does not take a control argument].
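A sketch of the distinction (hypothetical calls; val and d.noisy as in the Examples section):

```r
## Fine: optimal.scales() forwards 'control' to optim(), which accepts it.
optimal.scales(val = val, scales.start = rep(1, 6), d = d.noisy,
               control = list(maxit = 4))

## Error: optimize() has no 'control' argument, so optimal.scale() passes
## it on to the internal objective function, which does not accept it either.
## optimal.scale(val = val, d = d.noisy, control = list(maxit = 4))
```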
Author(s)

Robin K. S. Hankin
Examples

## First, define some scales:
fish <- c(1, 1, 1, 1, 1, 4)

## and a sigmasquared value:
REAL.SIGMASQ <- 0.3

## and a real relation:
real.relation <- function(x){ sum((1:6) * x) }

## Now a design matrix:
val <- latin.hypercube(100, 6)

## apply the real relation:
d <- apply(val, 1, real.relation)

## and add some suitably correlated Gaussian noise:
A <- corr.matrix(val, scales = fish)
d.noisy <- as.vector(rmvnorm(n = 1, mean = apply(val, 1, real.relation),
                             REAL.SIGMASQ * A))

## Now see if we can estimate the roughness lengths well.  Remember that
## the true values are those held in vector "fish":
optimal.scales(val = val, scales.start = rep(1, 6), d = d.noisy,
               method = "SANN", control = list(trace = 1000, maxit = 3),
               give = FALSE)

## Now a test of optimal.scale(), where there is only a single roughness
## scale to estimate.  This should be more straightforward:
df <- latin.hypercube(40, 6)
fish2 <- rep(2, 6)
A2 <- corr.matrix(df, scales = fish2)
d.noisy <- as.vector(rmvnorm(n = 1, mean = apply(df, 1, real.relation),
                             sigma = A2))
jj.T <- optimal.scale(val = df, d = d.noisy, use.like = TRUE)
jj.F <- optimal.scale(val = df, d = d.noisy, use.like = FALSE)