methods {mboost}    R Documentation
Methods for models fitted by boosting algorithms.
## S3 method for class 'glmboost':
print(x, ...)
## S3 method for class 'gamboost':
print(x, ...)
## S3 method for class 'glmboost':
coef(object, ...)
## S3 method for class 'gb':
AIC(object, method = c("corrected", "classical"), ...)
## S3 method for class 'gbAIC':
mstop(object, ...)
## S3 method for class 'gb':
mstop(object, ...)
## S3 method for class 'cvrisk':
mstop(object, ...)
## S3 method for class 'blackboost':
mstop(object, ...)
## S3 method for class 'gb':
predict(object, newdata = NULL, type = c("lp", "response"), ...)
## S3 method for class 'gb':
fitted(object, type = c("lp", "response"), ...)
## S3 method for class 'gb':
logLik(object, ...)
object
    objects of class glmboost, gamboost or gbAIC.
x
    objects of class glmboost or gamboost.
newdata
    optionally, a data frame in which to look for variables with which to predict.
type
    a character string indicating whether the linear predictor ("lp") or the response (the predicted classes in case of classification problems) should be returned.
method
    a character string specifying whether the corrected AIC criterion or the classical AIC (-2 logLik + 2 * df) should be computed.
...
    additional arguments passed to the callees.
These functions can be used to extract details from fitted models. print shows a dense representation of the model fit and coef extracts the regression coefficients of a linear model fitted using the glmboost function. The predict function can be used to predict the response for new observations, whereas fitted extracts the regression fit for the observations in the learning sample.
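For illustration, a minimal sketch (not part of the original examples; it assumes the cars data shipped with R, the default Gaussian family, and made-up speed values for the new observations):

### fit a linear model to the cars data by boosting
cars.fit <- glmboost(dist ~ speed, data = cars,
                     control = boost_control(mstop = 100))
### linear predictor for new observations
predict(cars.fit, newdata = data.frame(speed = c(10, 15, 20)))
### fit for the observations in the learning sample
fitted(cars.fit)[1:5]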
For (generalized) linear and additive models, the AIC function can be used to compute both the classical and the corrected AIC (Hurvich et al., 1998; the corrected version is only available when family = GaussReg() was used). The AIC is useful for determining the optimal number of boosting iterations to be applied, which can be extracted via mstop.
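As a sketch (assuming a Gaussian glmboost fit such as cars.gb from the examples below), the iteration minimising the corrected AIC can be extracted and used to index the model:

aic <- AIC(cars.gb, method = "corrected")
### number of boosting iterations minimising the corrected AIC
mstop(aic)
### model restricted to that number of iterations
cars.gb[mstop(aic)]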
Note that logLik and AIC only make sense when the corresponding Family implements the appropriate loss function.
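For example, the log-likelihood is available for a Gaussian fit because GaussReg() implements the corresponding loss (again a sketch, assuming the cars.gb model from the examples below):

### log-likelihood of the boosted Gaussian model
logLik(cars.gb)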
Clifford M. Hurvich, Jeffrey S. Simonoff and Chih-Ling Tsai (1998), Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. Journal of the Royal Statistical Society, Series B, 60(2), 271–293.
Peter Bühlmann and Torsten Hothorn (2007), Boosting algorithms: regularization, prediction and model fitting. Statistical Science, accepted. ftp://ftp.stat.math.ethz.ch/Research-Reports/Other-Manuscripts/buhlmann/BuehlmannHothorn_Boosting-rev.pdf
### a simple two-dimensional example: cars data
cars.gb <- glmboost(dist ~ speed, data = cars,
                    control = boost_control(mstop = 2000))
cars.gb

### initial number of boosting iterations
mstop(cars.gb)

### AIC criterion
aic <- AIC(cars.gb, method = "corrected")
aic

### coefficients for optimal number of boosting iterations
coef(cars.gb[mstop(aic)])

plot(cars$dist, predict(cars.gb[mstop(aic)]),
     ylim = range(cars$dist))
abline(a = 0, b = 1)
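A possible continuation of this example (not from the original help page; the new speed values are made up for illustration) is to predict from the AIC-optimal model for unseen observations:

### predictions from the AIC-optimal model for new speed values
predict(cars.gb[mstop(aic)], newdata = data.frame(speed = c(5, 15, 25)))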