ksvm {kernlab}    R Documentation

Support Vector Machines

Description

Support Vector Machines are an excellent tool for classification as well as regression. ksvm supports the classical C-classification and nu-classification, one-class classification (novelty detection), eps-regression and nu-regression, along with the Crammer-Singer method for multi-class classification (spoc-classification).
A probabilistic prediction function for classification is also included.

Usage

## S4 method for signature 'formula':
ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

## S4 method for signature 'vector':
ksvm(x, ...)

## S4 method for signature 'matrix':
ksvm(x, y = NULL, scaled = TRUE, type = NULL, kernel ="rbfdot", kpar = list(sigma = 0.1),
C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL, cachesize = 40, tol = 0.001,
shrinking = TRUE, cross = 0, fit = TRUE, ..., subset, na.action = na.omit)

Arguments

x a symbolic description of the model to be fit. Note that an intercept is always included, whether given in the formula or not. When not using a formula, x is a matrix or vector containing the variables in the model.
data an optional data frame containing the variables in the model. By default the variables are taken from the environment from which `ksvm' is called.
y a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).
scaled A logical vector indicating the variables to be scaled. If scaled is of length 1, the value is recycled as many times as needed and all non-binary variables are scaled. By default, data are scaled internally (both x and y variables) to zero mean and unit variance. The center and scale values are returned and used for later predictions.
type ksvm can be used for classification, for regression, or for novelty detection. Depending on whether y is a factor or not, the default setting for type is C-classification or eps-regression, respectively, but it can be overridden by setting an explicit value; a short sketch follows the list below.
Valid options are:
  • C-classification
  • nu-classification
  • spoc-classification (Crammer Singer multi-class)
  • one-classification (novelty detection)
  • eps-regression
  • nu-regression
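For instance, novelty detection can be requested explicitly. A minimal sketch, assuming the kernlab package is attached and using an illustrative nu of 0.1:

library(kernlab)
data(iris)
## one-class novelty detection on the iris measurements only
oc <- ksvm(as.matrix(iris[, -5]), type = "one-classification", nu = 0.1)
## predict() returns TRUE for points accepted as 'normal'
table(predict(oc, as.matrix(iris[, -5])))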
kernel the kernel function used in training and predicting. This parameter can be set to any function of class kernel which computes a dot product between two vector arguments. kernlab provides the most popular kernel functions, which can be used by setting the kernel parameter to the following strings:
  • rbfdot Radial Basis kernel function "Gaussian"
  • polydot Polynomial kernel function
  • vanilladot Linear kernel function
  • tanhdot Hyperbolic tangent kernel function
  • laplacedot Laplacian kernel function
  • besseldot Bessel kernel function
  • anovadot ANOVA RBF kernel function
The kernel parameter can also be set to a user-defined function of class kernel by passing the function name as an argument; a short sketch follows.
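A minimal sketch of such a user-defined kernel, assuming kernlab is attached (the name lin is arbitrary):

library(kernlab)
## any function of two vector arguments returning a scalar dot product
## can serve as a kernel once it carries the class "kernel"
lin <- function(x, y) sum(x * y)
class(lin) <- "kernel"
data(iris)
linmodel <- ksvm(Species ~ ., data = iris, kernel = lin)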
kpar the list of hyper-parameters (kernel parameters). This is a list which contains the parameters to be used with the kernel function. Valid parameters for the existing kernels are:
  • sigma inverse kernel width for the Radial Basis kernel function "rbfdot" and the Laplacian kernel "laplacedot".
  • degree, scale, offset for the Polynomial kernel "polydot".
  • scale, offset for the Hyperbolic tangent kernel function "tanhdot".
  • sigma, order, degree for the Bessel kernel "besseldot".
  • sigma, degree for the ANOVA kernel "anovadot".
Hyper-parameters for user-defined kernels can be passed through the kpar parameter as well. In the case of a Radial Basis kernel function (Gaussian), kpar can also be set to the string "automatic", which uses the heuristics in sigest to calculate a good sigma value for the data; see the sketch below.
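A minimal sketch of the automatic choice, assuming kernlab is attached:

library(kernlab)
data(iris)
## let sigest pick a sensible sigma from the data
autom <- ksvm(Species ~ ., data = iris, kernel = "rbfdot", kpar = "automatic")
autom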
C cost of constraints violation (default: 1). This is the `C'-constant of the regularization term in the Lagrange formulation.
nu parameter needed for nu-classification, one-classification, and nu-regression. The nu parameter sets an upper bound on the fraction of training errors and a lower bound on the fraction of data points that become Support Vectors (default: 0.2).
epsilon epsilon in the insensitive-loss function used for eps-regression and nu-regression (default: 0.1)
prob.model if set to TRUE a model for calculating class probabilities is fitted on output data created by performing a 3-fold cross-validation on the training data. For details see references. (default: FALSE)
class.weights a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named; see the sketch below.
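A minimal sketch of asymmetric class weights, assuming kernlab is attached (the weights are illustrative):

library(kernlab)
data(iris)
## make errors on virginica five times as costly as on the other classes
wmodel <- ksvm(Species ~ ., data = iris,
               class.weights = c(setosa = 1, versicolor = 1, virginica = 5))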
cachesize cache memory in MB (default 40)
tol tolerance of termination criterion (default: 0.001)
shrinking option whether to use the shrinking-heuristics (default: TRUE)
cross if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression.
fit indicates whether the fitted values should be computed and included in the model or not (default: TRUE)
... additional parameters for the low level fitting function
subset An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)
na.action A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.)

Details

For multiclass-classification with k levels, k>2, ksvm uses the `one-against-one'-approach, in which k(k-1)/2 binary classifiers are trained; the appropriate class is found by a voting scheme.
If the predictor variables include factors, the formula interface must be used to get a correct model matrix. The predict function can return probabilistic output (a probability matrix) in the case of classification by setting its type parameter to "probabilities" (see the Examples section).

Value

An S4 object of class "ksvm" containing the fitted model. Accessor functions can be used to access the slots of the object (see the examples and the sketch below), which include:

alpha The resulting support vectors (alpha vector), possibly scaled.
alphaindex The index of the resulting support vectors in the data matrix. Note that this index refers to the pre-processed data (after the possible effect of na.omit and subset).
coefs The corresponding coefficients times the training labels.
b The negative intercept.
nSV The number of Support Vectors.
error Training error.
cross Cross validation error (when cross > 0).
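A minimal sketch of the accessors, assuming kernlab is attached:

library(kernlab)
data(iris)
m <- ksvm(Species ~ ., data = iris, cross = 3)
nSV(m)    ## number of support vectors
b(m)      ## negative intercept
error(m)  ## training error
cross(m)  ## cross validation error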

Note

Data is scaled internally, usually yielding better results.

Author(s)

Alexandros Karatzoglou (SMO optimizers in C/C++ by Chih-Chung Chang & Chih-Jen Lin)
alexandros.karatzoglou@ci.tuwien.ac.at

References

Chih-Chung Chang, Chih-Jen Lin: LIBSVM: a library for Support Vector Machines, https://www.csie.ntu.edu.tw/~cjlin/libsvm/

See Also

predict.ksvm, couple

Examples


## simple example using the spam data set
data(spam)

## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(2 * dim(spam)[1]/3)], ]
spamtest <- spam[index[(floor(2 * dim(spam)[1]/3) + 1):dim(spam)[1]], ]

## train a support vector machine
filter <- ksvm(type~.,data=spamtrain,kernel="rbfdot",kpar=list(sigma=0.05),C=5,cross=3)
filter

## predict mail type on the test set
mailtype <- predict(filter,spamtest[,-58])

## Check results
table(mailtype,spamtest[,58])

## Another example with the famous iris data
data(iris)

## Create a kernel function using the built-in rbfdot function
rbf <- rbfdot(sigma=0.1)
rbf

## train a support vector machine
irismodel <- ksvm(Species~.,data=iris,kernel=rbf,C=10,prob.model=TRUE)

irismodel

## get fitted values
fit(irismodel)

## Test on the training set with probabilities as output
predict(irismodel, iris[,-5], type="probabilities")

## regression
# create data
x <- seq(-20,20,0.1)
y <- sin(x)/x + rnorm(401,sd=0.03)

# train support vector machine
regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
plot(x,y,type="l")
lines(x,predict(regm,x),col="red")
