kda, pda {ks}    R Documentation
Kernel and parametric discriminant analysis.
kda(x, x.group, Hs, y, prior.prob=NULL)
pda(x, x.group, y, prior.prob=NULL, type="quad")
x           matrix of training data values
x.group     vector of group labels for training data
y           matrix of test data values
Hs          (stacked) matrix of bandwidth matrices
prior.prob  vector of prior probabilities
type        "line" = linear discriminant, "quad" = quadratic discriminant
If the prior probabilities are known, set prior.prob to these values. Otherwise the default is to use the sample proportions as estimates of the prior probabilities.
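For example, to override the sample-proportion default with equal priors for the three iris groups used in the Examples below (a minimal sketch; the objects ir, ir.gr and H are assumed to have been created as in the Examples):

kda.eq <- kda(ir, ir.gr, H, ir, prior.prob=rep(1/3, 3))   # equal prior probability for each group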
The parametric discriminant analysers use lda and qda from the MASS package for linear and quadratic discriminants respectively.
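As a rough consistency check (a sketch only, not part of the package), the labels returned by pda with type="line" should agree with those from a direct call to lda in MASS; ir and ir.gr are assumed to come from the Examples below:

library(MASS)
ir.lda <- lda(ir, grouping=ir.gr)                    # fit the linear discriminant directly
lda.direct <- predict(ir.lda, ir)$class              # classify the training data
table(lda.direct, pda(ir, ir.gr, ir, type="line"))   # cross-tabulate the two label vectors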
The discriminant analysers kda and pda return a vector of group labels assigned via discriminant analysis. If the test data y are supplied then these are classified; otherwise the training data x are classified.
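For instance, a training-set misclassification rate can be computed by comparing the returned labels with the true group labels (a minimal sketch using ir, ir.gr and H from the Examples; the compare functions listed under See Also give fuller summaries):

kda.gr <- kda(ir, ir.gr, H, ir)     # classify the training data
table(ir.gr, kda.gr)                # true vs. assigned labels
mean(ir.gr != kda.gr)               # proportion misclassified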
Silverman, B. W. (1986) Density Estimation for Statistics and Data Analysis. Chapman & Hall, London.
Simonoff, J. S. (1996) Smoothing Methods in Statistics. Springer-Verlag, New York.
Venables, W. N. & Ripley, B. D. (1997) Modern Applied Statistics with S-PLUS. Springer-Verlag, New York.
kda.kde, pda.pde, compare, compare.kda.diag.cv, compare.kda.cv, compare.pda.cv
### bivariate example - restricted iris dataset
library(MASS)
data(iris)
ir <- iris[,c(1,2)]            # sepal length and sepal width only
ir.gr <- iris[,5]              # species labels
H <- Hkda(ir, ir.gr, bw="plugin", pre="scale")
kda.gr <- kda(ir, ir.gr, H, ir)
lda.gr <- pda(ir, ir.gr, ir, type="line")
qda.gr <- pda(ir, ir.gr, ir, type="quad")

### multivariate example - full iris dataset
ir <- iris[,1:4]               # all four measurements
ir.gr <- iris[,5]
H <- Hkda(ir, ir.gr, bw="plugin", pre="scale")
kda.gr <- kda(ir, ir.gr, H, ir)
lda.gr <- pda(ir, ir.gr, ir, type="line")
qda.gr <- pda(ir, ir.gr, ir, type="quad")