kernelpls {pls.pcr}                                        R Documentation

Kernel PLS (Dayal and MacGregor)

Description

This function should not be called directly, but through the generic pls function with the argument method="kernel" (the default). Kernel PLS is particularly efficient when the number of objects is (much) larger than the number of variables. The results are equal to those of the NIPALS algorithm. Several different forms of kernel PLS have been described in the literature, e.g. by De Jong and Ter Braak, and two algorithms by Dayal and MacGregor. This function implements the faster of the latter two, which does not calculate the crossproduct matrix of X; in the Dayal and MacGregor paper this is 'algorithm 1'.
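
For orientation, the core update of 'algorithm 1' can be sketched in a few lines of R. This is an illustration only, for a single, mean-centred response (the name kernelpls_sketch is made up for this sketch; the package's own code also handles multivariate Y and returns the components listed under Value below):

kernelpls_sketch <- function(X, y, ncomp) {
  X <- as.matrix(X); y <- as.matrix(y)     ## y: single, mean-centred response
  nvar <- ncol(X)
  R <- P <- matrix(0, nvar, ncomp)         ## orthogonalised weights, X-loadings
  q <- numeric(ncomp)                      ## y-loadings
  XtY <- crossprod(X, y)                   ## only X'Y is formed, never X'X
  for (a in seq_len(ncomp)) {
    w <- XtY / sqrt(sum(XtY^2))            ## weight vector (single-y case)
    r <- w
    if (a > 1) {                           ## r = w - sum_j (p_j' w) r_j
      r <- r - R[, 1:(a-1), drop = FALSE] %*%
               crossprod(P[, 1:(a-1), drop = FALSE], w)
    }
    t.a <- X %*% r                         ## scores
    tt  <- drop(crossprod(t.a))
    P[, a] <- crossprod(X, t.a) / tt       ## X-loadings
    q[a]   <- drop(crossprod(XtY, r)) / tt ## y-loading
    XtY    <- XtY - tt * q[a] * P[, a]     ## deflate X'Y only
    R[, a] <- r
  }
  ## column a holds the regression coefficients of the a-component model
  B <- sapply(seq_len(ncomp), function(a) R[, 1:a, drop = FALSE] %*% q[1:a])
  list(B = B, R = R, P = P, q = q)
}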

Usage

kernelpls(X, Y, ncomp, newX)

Arguments

X a matrix of observations. NAs and Infs are not allowed.
Y a vector or matrix of responses. NAs and Infs are not allowed.
ncomp the number of latent variables to be used in the modelling. If not specified, it defaults to the smaller of the number of objects and the number of variables in X. ncomp may also be a vector, in which case results are returned for each element (see B below).
newX optional new measurements: if present, predictions will be made for them.

Value

A list containing the following components is returned:

B an array of regression coefficients, one coefficient matrix for each element of ncomp. The dimensions of B are c(nvar, npred, length(ncomp)), where nvar is the number of X variables and npred the number of responses in Y (see the example below this list).
XvarExpl Fraction of X-variance explained.
YvarExpl Fraction of Y-variance explained (one column, even for multiple Y).
Ypred predictions for newX (only if newX was supplied).
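
For example (a sketch only; the object and variable names are illustrative, and normally you would reach kernelpls through pls or mvr as described above), with a 40 x 200 matrix X, a single-column Y and ncomp = 1:6:

fit <- kernelpls(X, Y, ncomp = 1:6, newX = Xnew)
dim(fit$B)       ## c(200, 1, 6): one coefficient matrix per element of ncomp
fit$YvarExpl     ## fraction of Y-variance explained (one column, see above)
fit$Ypred        ## predictions for Xnew, because newX was supplied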

References

S. de Jong and C.J.F. ter Braak, J. Chemometrics, 8 (1994) 169-174.

B.S. Dayal and J. MacGregor, J. Chemometrics, 11 (1997) 73-85.

See Also

pls, simpls, mvr

Examples

data(NIR)
attach(NIR)
## kernel PLS with 1 to 6 latent variables, assessed by cross-validation
NIR.kernelpls <- mvr(Xtrain, Ytrain, 1:6, validation="CV", method="kernelPLS")

[Package pls.pcr version 0.2.7]