mylars {parcor}    R Documentation
Description:

This function computes the cross-validation-optimal regression coefficients for the lasso.
Usage:

mylars(X, y, k = 10, fraction = seq(from = 0, to = 1, length = 1000),
       use.Gram = TRUE, normalize = TRUE)
Arguments:

X: matrix of observations. The rows of X contain the samples, the columns of X contain the observed variables.

y: vector of responses. The length of y must equal the number of rows of X.

k: the number of splits in k-fold cross-validation. Default is k=10.

fraction: vector of possible regularization parameters, in the range from 0 to 1.

use.Gram: When the number of variables is very large, you may not want LARS to precompute the Gram matrix. Default is use.Gram=TRUE.

normalize: Should the columns of X be scaled? Default is normalize=TRUE.
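For illustration, a call that uses a coarser fraction grid and skips precomputation of the Gram matrix could look as follows; the simulated data and the specific argument values below are chosen only for the sake of the example and are not recommendations.

library(parcor)

## simulated data: 40 samples, 200 variables (more variables than samples)
set.seed(1)
n <- 40
p <- 200
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)

## coarser grid of 100 fraction values; use.Gram = FALSE skips the Gram matrix
fit <- mylars(X, y, k = 5,
              fraction = seq(from = 0, to = 1, length = 100),
              use.Gram = FALSE, normalize = TRUE)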
Details:

This is a variation of the cv.lars function from the lars package. Here, we adjust the grid of regularization parameters fraction in order to avoid the peaking behavior of the lasso in the n=p case. See Kraemer (2009) for more details.
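For comparison, the unadjusted cross-validation can be run directly with cv.lars from the lars package. The sketch below assumes lars is installed and simply uses the default fraction grid of cv.lars; the exact components of its return value may differ between lars versions.

library(lars)

## an n = p setting, in which the peaking behavior discussed in Kraemer (2009) can occur
set.seed(1)
n <- 50
p <- 50
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)

## plain K-fold cross-validation over the default fraction grid of cv.lars
cv.fit <- cv.lars(X, y, K = 10, plot.it = FALSE)
str(cv.fit)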
Value:

coefficients: cross-validation-optimal regression coefficients, without intercept.

cv.lasso: cross-validation error of the optimal model.
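Assuming the result is returned as a named list (as the components above suggest), these values can be accessed by name; the small simulated data set below is included only to make the snippet self-contained.

library(parcor)

## small simulated data set, for illustration only
set.seed(1)
X <- matrix(rnorm(50 * 10), ncol = 10)
y <- rnorm(50)
fit <- mylars(X, y, k = 5)

## cross-validation-optimal coefficients (no intercept) and the corresponding CV error
beta.hat <- fit$coefficients
fit$cv.lasso

## number of variables with non-zero coefficients in the selected model
sum(beta.hat != 0)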
Author(s):

Nicole Kraemer
References:

R. Tibshirani (1996) "Regression Shrinkage and Selection via the Lasso", Journal of the Royal Statistical Society B, 58 (1).

N. Kraemer (2009) "On the Peaking Phenomenon of the Lasso in Model Selection", preprint, http://ml.cs.tu-berlin.de/~nkraemer/publications.html

N. Kraemer, J. Schaefer, A.-L. Boulesteix (2009) "Regularized Estimation of Large-Scale Gene Regulatory Networks with Gaussian Graphical Models", preprint, http://ml.cs.tu-berlin.de/~nkraemer/publications.html
Examples:

## simulated data: 50 samples, 10 variables
n <- 50
p <- 10
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)

## 5-fold cross-validation for the lasso
dummy <- mylars(X, y, k = 5)