ksvm {kernlab}                                          R Documentation

Support Vector Machines

Description:
Support Vector Machines are an excellent tool for classification,
novelty detection, and regression. ksvm supports the well known
C-svc, nu-svc (classification), one-class-svc (novelty detection),
eps-svr and nu-svr (regression) formulations, along with the
Crammer-Singer multi-class classification formulation spoc-svc and
the bound-constraint SVM formulations C-bsvc and eps-bsvr.
The implementation also supports class-probability output and
confidence intervals for regression.
Usage:

## S4 method for signature 'formula':
ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

## S4 method for signature 'vector':
ksvm(x, ...)

## S4 method for signature 'matrix':
ksvm(x, y = NULL, scaled = TRUE, type = NULL, kernel = "rbfdot",
     kpar = list(sigma = 0.1), C = 1, nu = 0.2, epsilon = 0.1,
     prob.model = FALSE, class.weights = NULL, cachesize = 40,
     tol = 0.001, shrinking = TRUE, cross = 0, fit = TRUE, ...,
     subset, na.action = na.omit)
Arguments:

x: a symbolic description of the model to be fitted. Note that the
   intercept is always excluded, whether given in the formula or not.
   When not using a formula, x is a matrix or vector containing the
   variables in the model.

data: an optional data frame containing the variables in the model.
   By default the variables are taken from the environment which
   `ksvm' is called from.

y: a response vector with one label for each row/component of x. Can
   be either a factor (for classification tasks) or a numeric vector
   (for regression).
scaled: a logical vector indicating the variables to be scaled. If
   scaled is of length 1, the value is recycled as many times as
   needed and all non-binary variables are scaled. By default, data
   are scaled internally (both x and y variables) to zero mean and
   unit variance. The center and scale values are returned and used
   for later predictions.
type: ksvm can be used for classification, for regression, or for
   novelty detection. Depending on whether y is a factor or not, the
   default setting for type is C-svc or eps-svr, respectively, but
   this can be overridden by setting an explicit value. Valid options
   are:

     C-svc     C classification
     nu-svc    nu classification
     C-bsvc    bound-constraint SVM classification
     spoc-svc  Crammer-Singer native multi-class classification
     one-svc   novelty detection
     eps-svr   epsilon regression
     nu-svr    nu regression
     eps-bsvr  bound-constraint SVM regression
kernel: the kernel function used in training and predicting. This
   parameter can be set to any function, of class kernel, which
   computes a dot product between two vector arguments. kernlab
   provides the most popular kernel functions, which can be used by
   setting the kernel parameter to the following strings:

     rbfdot      Radial Basis ("Gaussian") kernel function
     polydot     Polynomial kernel function
     vanilladot  Linear kernel function
     tanhdot     Hyperbolic tangent kernel function
     laplacedot  Laplacian kernel function
     besseldot   Bessel kernel function
     anovadot    ANOVA RBF kernel function
     splinedot   Spline kernel function

   The kernel parameter can also be set to a user-defined function of
   class kernel (see the examples).
kpar: the list of hyper-parameters (kernel parameters). This is a
   list containing the parameters to be used with the kernel
   function. Valid parameters for the existing kernels are:

     sigma                  inverse kernel width for "rbfdot" and
                            "laplacedot"
     degree, scale, offset  for "polydot"
     scale, offset          for "tanhdot"
     sigma, order, degree   for "besseldot"
     sigma, degree          for "anovadot"

   The function sigest can be used to calculate a good sigma value
   for the data (see the sketch following this argument list).
C: cost of constraints violation (default: 1). This is the
   `C'-constant of the regularization term in the Lagrange
   formulation.

nu: parameter needed for nu-svc, one-svc and nu-svr. The nu parameter
   sets the upper bound on the training error and the lower bound on
   the fraction of data points to become Support Vectors
   (default: 0.2).

epsilon: epsilon in the insensitive-loss function used for eps-svr,
   nu-svr and eps-bsvr (default: 0.1).
prob.model: if set to TRUE, builds a model for calculating class
   probabilities or, in the case of regression, calculates the
   scaling parameter of the Laplacian distribution fitted on the
   residuals. Fitting is done on output data created by performing a
   3-fold cross-validation on the training data. For details see the
   references (default: FALSE).

class.weights: a named vector of weights for the different classes,
   used for asymmetric class sizes. Not all factor levels have to be
   supplied (default weight: 1). All components have to be named.
cachesize: cache memory in MB (default: 40).

tol: tolerance of termination criterion (default: 0.001).

shrinking: option whether to use the shrinking heuristics
   (default: TRUE).

cross: if an integer value k > 0 is specified, a k-fold cross
   validation on the training data is performed to assess the quality
   of the model: the accuracy rate for classification and the Mean
   Squared Error for regression.
fit: indicates whether the fitted values should be computed and
   included in the model or not (default: TRUE).

...: additional parameters for the low level fitting function.

subset: an index vector specifying the cases to be used in the
   training sample. (NOTE: If given, this argument must be named.)

na.action: a function to specify the action to be taken if NAs are
   found. The default action is na.omit, which leads to rejection of
   cases with missing values on any required variable. An alternative
   is na.fail, which causes an error if NA cases are found. (NOTE: If
   given, this argument must be named.)
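A minimal sketch of how these arguments combine in a call; the
simulated data and the particular weight and parameter values below
are illustrative assumptions, not recommended settings:

library(kernlab)

## simulated two-class data with imbalanced class sizes
set.seed(1)
x <- rbind(matrix(rnorm(200), ncol = 2),
           matrix(rnorm(60, mean = 2), ncol = 2))
y <- factor(c(rep("neg", 100), rep("pos", 30)))

## C-svc with an RBF kernel, an explicit kpar list, asymmetric
## class weights and a 5-fold cross validation error estimate
model <- ksvm(x, y, type = "C-svc",
              kernel = "rbfdot", kpar = list(sigma = 0.2),
              C = 10, class.weights = c(neg = 1, pos = 3),
              cross = 5)
cross(model)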
Details:

For multiclass-classification with k levels, k > 2, ksvm uses the
`one-against-one' approach, in which k(k-1)/2 binary classifiers are
trained; the appropriate class is found by a voting scheme.

If the predictor variables include factors, the formula interface
must be used to get a correct model matrix.
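A minimal sketch of both points, using the iris data shipped with R:
Species is a 3-level factor, so 3*(3-1)/2 = 3 pairwise classifiers
are trained internally and combined by voting.

library(kernlab)
data(iris)

## multiclass classification via the formula interface
m <- ksvm(Species ~ ., data = iris, type = "C-svc", C = 1)
table(predict(m, iris[, -5]), iris$Species)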
In classification, when prob.model is TRUE, a 3-fold cross validation
is performed on the data and a sigmoid function is fitted on the
resulting decision values f.
The plot function for binary classification ksvm objects displays a
contour plot of the decision values with the corresponding support
vectors highlighted.
The predict function can return probabilistic output (a probability
matrix) in the case of classification by setting the type parameter
to "probabilities".
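A minimal sketch of both mechanisms on a binary problem, assuming the
spam data from kernlab; the subset size and sigma value are arbitrary
choices for illustration:

library(kernlab)
data(spam)

## prob.model = TRUE fits the sigmoid on decision values obtained
## from an internal 3-fold cross validation
set.seed(1)
idx <- sample(nrow(spam), 300)
m <- ksvm(type ~ ., data = spam[idx, ],
          kpar = list(sigma = 0.05), prob.model = TRUE)
newmail <- spam[-idx, ][1:5, -58]

## raw decision values f
predict(m, newmail, type = "decision")

## class probabilities derived from f through the fitted sigmoid
predict(m, newmail, type = "probabilities")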
Value:

An S4 object of class "ksvm" containing the fitted model. Accessor
functions can be used to access the slots of the object (see the
examples), which include:
alpha: the resulting support vectors (alpha vector) (possibly
   scaled).

alphaindex: the index of the resulting support vectors in the data
   matrix. Note that this index refers to the pre-processed data
   (after the possible effect of na.omit and subset).

coefs: the corresponding coefficients times the training labels.

b: the negative intercept.

nSV: the number of Support Vectors.

error: training error.

cross: cross validation error (when cross > 0).

prob.model: contains the width of the Laplacian fitted on the
   residuals in the case of regression, or the parameters of the
   sigmoid fitted on the decision values in the case of
   classification.
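A minimal sketch of the accessor functions on a fitted object (the
model settings are arbitrary choices for illustration):

library(kernlab)
data(iris)
m <- ksvm(Species ~ ., data = iris, cross = 3)

alpha(m)       ## support vector coefficients, one set per binary problem
alphaindex(m)  ## indices of the support vectors in the pre-processed data
b(m)           ## the negative intercept(s)
nSV(m)         ## number of Support Vectors
error(m)       ## training error
cross(m)       ## 3-fold cross validation error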
Note:

Data is scaled internally, usually yielding better results.
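If the data are already standardized, the internal scaling can be
turned off; a short sketch on made-up data:

library(kernlab)
set.seed(2)
x <- scale(matrix(rnorm(100), ncol = 2))  ## already centred and scaled
y <- factor(rep(c("a", "b"), each = 25))

## skip the internal scaling of x (and of y for regression)
m <- ksvm(x, y, scaled = FALSE)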
Author(s):

Alexandros Karatzoglou (SMO optimizers in C/C++ by Chih-Chung Chang
and Chih-Jen Lin)
alexandros.karatzoglou@ci.tuwien.ac.at
Examples:

## simple example using the spam data set
data(spam)

## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(2 * dim(spam)[1]/3)], ]
spamtest <- spam[index[(floor(2 * dim(spam)[1]/3) + 1):dim(spam)[1]], ]

## train a support vector machine
filter <- ksvm(type ~ ., data = spamtrain, kernel = "rbfdot",
               kpar = list(sigma = 0.05), C = 5, cross = 3)
filter

## predict mail type on the test set
mailtype <- predict(filter, spamtest[, -58])

## check results
table(mailtype, spamtest[, 58])

## another example with the famous iris data
data(iris)

## create a kernel function using the built-in rbfdot function
rbf <- rbfdot(sigma = 0.1)
rbf

## train a bound-constraint support vector machine
irismodel <- ksvm(Species ~ ., data = iris, type = "C-bsvc",
                  kernel = rbf, C = 10, prob.model = TRUE)
irismodel

## get fitted values
fit(irismodel)

## test on the training set with probabilities as output
predict(irismodel, iris[, -5], type = "probabilities")

## demo of the plot function
x <- rbind(matrix(rnorm(120), , 2), matrix(rnorm(120, mean = 3), , 2))
y <- matrix(c(rep(1, 60), rep(-1, 60)))
svp <- ksvm(x, y, type = "C-svc")
plot(svp)

#### use a custom kernel
k <- function(x, y) { (sum(x * y) + 1) * exp(-0.001 * sum((x - y)^2)) }
class(k) <- "kernel"
data(promotergene)

## train an svm using the custom kernel
gene <- ksvm(Class ~ ., data = promotergene, kernel = k, C = 10,
             cross = 5)
gene

## regression
## create data
x <- seq(-20, 20, 0.1)
y <- sin(x)/x + rnorm(401, sd = 0.03)

## train support vector machine
regm <- ksvm(x, y, epsilon = 0.01, kpar = list(sigma = 16), cross = 3)
plot(x, y, type = "l")
lines(x, predict(regm, x), col = "red")