R : Copyright 2005, The R Foundation for Statistical Computing
Version 2.1.1  (2005-06-20), ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for a HTML browser interface to help.
Type 'q()' to quit R.

> ### *
> ### 
> attach(NULL, name = "CheckExEnv")
> assign(".CheckExEnv", as.environment(2), pos = length(search())) # base
> ## add some hooks to label plot pages for base and grid graphics
> setHook("plot.new", ".newplot.hook")
> setHook("persp", ".newplot.hook")
> setHook("grid.newpage", ".gridplot.hook")
> 
> assign("cleanEx",
+        function(env = .GlobalEnv) {
+            rm(list = ls(envir = env, all.names = TRUE), envir = env)
+            RNGkind("default", "default")
+            set.seed(1)
+            options(warn = 1)
+            delayedAssign("T", stop("T used instead of TRUE"),
+                          assign.env = .CheckExEnv)
+            delayedAssign("F", stop("F used instead of FALSE"),
+                          assign.env = .CheckExEnv)
+            sch <- search()
+            newitems <- sch[! sch %in% .oldSearch]
+            for(item in rev(newitems))
+                eval(substitute(detach(item), list(item=item)))
+            missitems <- .oldSearch[! .oldSearch %in% sch]
+            if(length(missitems))
+                warning("items ", paste(missitems, collapse=", "),
+                        " have been removed from the search path")
+        },
+        env = .CheckExEnv)
> assign("..nameEx", "__{must remake R-ex/*.R}__", env = .CheckExEnv) # for now
> assign("ptime", proc.time(), env = .CheckExEnv)
> grDevices::postscript("mvpart-Examples.ps")
> assign("par.postscript", graphics::par(no.readonly = TRUE), env = .CheckExEnv)
> options(contrasts = c(unordered = "contr.treatment", ordered = "contr.poly"))
> options(warn = 1)
> library('mvpart')
> 
> assign(".oldSearch", search(), env = .CheckExEnv)
> assign(".oldNS", loadedNamespaces(), env = .CheckExEnv)
> cleanEx(); ..nameEx <- "car.test.frame"
> 
> ### * car.test.frame
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: car.test.frame
> ### Title: Automobile Data from `Consumer Reports' 1990
> ### Aliases: car.test.frame
> ### Keywords: datasets
> 
> ### ** Examples
> 
> data(car.test.frame)
> z.auto <- rpart(Mileage ~ Weight, car.test.frame)
> summary(z.auto)
Call:
rpart(formula = Mileage ~ Weight, data = car.test.frame)
  n= 60 

          CP nsplit rel error    xerror       xstd
1 0.59534912      0 1.0000000 1.0319924 0.17931937
2 0.13452819      1
0.4046509 0.4875114 0.07613003
3 0.06942684      2 0.2701227 0.3795295 0.05610012
4 0.03449195      3 0.2006958 0.3367415 0.05225040
5 0.01849081      4 0.1662039 0.3295272 0.05582591
6 0.01571904      5 0.1477131 0.2919391 0.04956914
7 0.01000000      7 0.1162750 0.2834672 0.04974102

Node number 1: 60 observations,    complexity param=0.5953491
  mean=24.58333, MSE=22.57639 
  left son=2 (45 obs) right son=3 (15 obs)
  Primary splits:
      Weight < 2567.5 to the right, improve=0.5953491, (0 missing)

Node number 2: 45 observations,    complexity param=0.1345282
  mean=22.46667, MSE=8.026667 
  left son=4 (22 obs) right son=5 (23 obs)
  Primary splits:
      Weight < 3087.5 to the right, improve=0.5045118, (0 missing)

Node number 3: 15 observations,    complexity param=0.06942684
  mean=30.93333, MSE=12.46222 
  left son=6 (9 obs) right son=7 (6 obs)
  Primary splits:
      Weight < 2280 to the right, improve=0.5030908, (0 missing)

Node number 4: 22 observations,    complexity param=0.01849081
  mean=20.40909, MSE=2.78719 
  left son=8 (6 obs) right son=9 (16 obs)
  Primary splits:
      Weight < 3637.5 to the right, improve=0.4084816, (0 missing)

Node number 5: 23 observations,    complexity param=0.01571904
  mean=24.43478, MSE=5.115312 
  left son=10 (15 obs) right son=11 (8 obs)
  Primary splits:
      Weight < 2747.5 to the right, improve=0.1476996, (0 missing)

Node number 6: 9 observations,    complexity param=0.03449195
  mean=28.88889, MSE=8.54321 
  left son=12 (3 obs) right son=13 (6 obs)
  Primary splits:
      Weight < 2337.5 to the left, improve=0.607659, (0 missing)

Node number 7: 6 observations
  mean=34, MSE=2.666667 

Node number 8: 6 observations
  mean=18.66667, MSE=0.5555556 

Node number 9: 16 observations
  mean=21.0625, MSE=2.058594 

Node number 10: 15 observations
  mean=23.8, MSE=4.026667 

Node number 11: 8 observations,    complexity param=0.01571904
  mean=25.625, MSE=4.984375 
  left son=22 (3 obs) right son=23 (5 obs)
  Primary splits:
      Weight < 2650 to the left, improve=0.6321839, (0 missing)

Node number 12: 3 observations
  mean=25.66667, MSE=0.2222222 

Node number 13: 6
observations
  mean=30.5, MSE=4.916667 

Node number 22: 3 observations
  mean=23.33333, MSE=0.2222222 

Node number 23: 5 observations
  mean=27, MSE=2.8 

> 
> 
> 
> cleanEx(); ..nameEx <- "cmds.diss"
> 
> ### * cmds.diss
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: cmds.diss
> ### Title: Classical Scaling of Dissimilarity Measures
> ### Aliases: cmds.diss
> ### Keywords: multivariate
> 
> ### ** Examples
> 
> data(spider)
> dist.vecs <- cmds.diss(spider)
Warning in cmdscale(xdists, k = k) : some of the first 18 eigenvalues are < 0
Warning in sqrt(ev) : NaNs produced
1 columns with NAs or all zeros dropped
> 
> # comparing splitting using "dist" and "mrt" methods
> # for Euclidean distance the answers are identical:
> # first using "mrt" on the data directly
> mvpart(data.matrix(spider[,1:12])~water+twigs+reft+herbs+moss+sand,spider,method="mrt",size=5)
> 
> # now using "dist" -- note we need the full distance matrix squared
> mvpart(gdist(spider[,1:12],meth="euc",full=TRUE,sq=TRUE)~water+twigs+reft+herbs+moss+sand,spider,method="dist",size=5)
> 
> # finally using "mrt" from the scaled dissimilarities.
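The comment above refers to a third, equivalent route: classically scale the dissimilarities with `cmds.diss()` and fit an ordinary multivariate regression tree to the resulting coordinates. A minimal sketch of that idea, written out with intermediate steps (this assumes the mvpart package is installed and attached; `cmds.diss()` and `mvpart()` are used exactly as elsewhere in this transcript):

```r
library(mvpart)   # assumed installed; provides cmds.diss(), gdist(), mvpart()
data(spider)

# Classical (metric) scaling of the Euclidean dissimilarities among sites.
# The returned coordinates reproduce the squared Euclidean distances, so an
# "mrt" tree grown on them partitions the sites the same way as a tree fit
# to the raw abundance data directly.
coords <- cmds.diss(spider[, 1:12], meth = "euc")

fit <- mvpart(coords ~ water + twigs + reft + herbs + moss + sand,
              data = spider, method = "mrt", size = 5)
```

This is the same computation as the one-liner in the transcript below, merely broken into steps so the scaled-coordinate matrix can be inspected.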
> mvpart(cmds.diss(spider[,1:12],meth="euc")~water+twigs+reft+herbs+moss+sand,spider,method="mrt",size=5)
> 
> # try with some other measure of dissimilarity, e.g. extended Bray-Curtis --
> # the result will differ between methods
> 
> 
> 
> 
> cleanEx(); ..nameEx <- "gdist"
> 
> ### * gdist
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: gdist
> ### Title: Dissimilarity Measures
> ### Aliases: gdist
> ### Keywords: multivariate
> 
> ### ** Examples
> 
> data(spider)
> spider.dist <- gdist(spider[1:12,])
> 
> 
> 
> cleanEx(); ..nameEx <- "kyphosis"
> 
> ### * kyphosis
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: kyphosis
> ### Title: Data on Children who have had Corrective Spinal Surgery
> ### Aliases: kyphosis
> ### Keywords: datasets
> 
> ### ** Examples
> 
> data(kyphosis)
> fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis)
> fit2 <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis,
+               parms=list(prior=c(.65,.35), split='information'))
> fit3 <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis,
+               control=rpart.control(cp=.05))
> par(mfrow=c(1,2))
> plot(fit)
> text(fit, use.n=TRUE)
> plot(fit2)
> text(fit2, use.n=TRUE)
> 
> 
> 
> graphics::par(get("par.postscript", env = .CheckExEnv))
> cleanEx(); ..nameEx <- "meanvar.rpart"
> 
> ### * meanvar.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: meanvar.rpart
> ### Title: Mean-Variance Plot for an Rpart Object
> ### Aliases: meanvar meanvar.rpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(car.test.frame)
> z.auto <- rpart(Mileage ~ Weight, car.test.frame)
> meanvar(z.auto, log='xy')
> 
> 
> 
> cleanEx(); ..nameEx <- "mvpart"
> 
> ### * mvpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: mvpart
> ### Title: Recursive Partitioning and Regression Trees
> ### Aliases: mvpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(spider)
> mvpart(data.matrix(spider[,1:12])~herbs+reft+moss+sand+twigs+water,spider) # defaults
> 
mvpart(data.matrix(spider[,1:12])~herbs+reft+moss+sand+twigs+water,spider,xv="p") # pick the tree size > # pick cv size and do PCA > fit <- mvpart(data.matrix(spider[,1:12])~herbs+reft+moss+sand+twigs+water,spider,xv="1se",pca=TRUE) > rpart.pca(fit,interact=TRUE,wgt.ave=TRUE) # interactive PCA plot of saved multivariate tree > > > > cleanEx(); ..nameEx <- "path.rpart" > > ### * path.rpart > > flush(stderr()); flush(stdout()) > > ### Name: path.rpart > ### Title: Follow Paths to Selected Nodes of an Rpart Object > ### Aliases: path.rpart > ### Keywords: tree > > ### ** Examples > > data(kyphosis) > fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis) > summary(fit) Call: rpart(formula = Kyphosis ~ Age + Number + Start, data = kyphosis) n= 81 CP nsplit rel error xerror xstd 1 0.17647059 0 1.0000000 1.000000 0.2155872 2 0.11764706 1 0.8235294 1.294118 0.2354756 3 0.07843137 2 0.7058824 1.117647 0.2243268 4 0.05882353 5 0.4705882 1.058824 0.2200975 5 0.01000000 9 0.2352941 1.058824 0.2200975 Node number 1: 81 observations, complexity param=0.1764706 predicted class=absent expected loss=0.2098765 class counts: 64 17 probabilities: 0.790 0.210 left son=2 (62 obs) right son=3 (19 obs) Primary splits: Start < 8.5 to the right, improve=6.762330, (0 missing) Number < 5.5 to the left, improve=2.866795, (0 missing) Age < 39.5 to the left, improve=2.250212, (0 missing) Surrogate splits: Number < 6.5 to the left, agree=0.802, adj=0.158, (0 split) Node number 2: 62 observations, complexity param=0.05882353 predicted class=absent expected loss=0.09677419 class counts: 56 6 probabilities: 0.903 0.097 left son=4 (29 obs) right son=5 (33 obs) Primary splits: Start < 14.5 to the right, improve=1.0205280, (0 missing) Age < 55 to the left, improve=0.6848635, (0 missing) Number < 4.5 to the left, improve=0.2975332, (0 missing) Surrogate splits: Number < 3.5 to the left, agree=0.645, adj=0.241, (0 split) Age < 16 to the left, agree=0.597, adj=0.138, (0 split) Node number 3: 19 
observations, complexity param=0.1176471 predicted class=present expected loss=0.4210526 class counts: 8 11 probabilities: 0.421 0.579 left son=6 (2 obs) right son=7 (17 obs) Primary splits: Age < 11.5 to the left, improve=1.498452, (0 missing) Start < 4 to the left, improve=1.352047, (0 missing) Number < 4.5 to the left, improve=1.149522, (0 missing) Node number 4: 29 observations predicted class=absent expected loss=0 class counts: 29 0 probabilities: 1.000 0.000 Node number 5: 33 observations, complexity param=0.05882353 predicted class=absent expected loss=0.1818182 class counts: 27 6 probabilities: 0.818 0.182 left son=10 (12 obs) right son=11 (21 obs) Primary splits: Age < 55 to the left, improve=1.2467530, (0 missing) Start < 9.5 to the left, improve=0.3009404, (0 missing) Number < 2.5 to the left, improve=0.2181818, (0 missing) Surrogate splits: Start < 9.5 to the left, agree=0.758, adj=0.333, (0 split) Number < 5.5 to the right, agree=0.697, adj=0.167, (0 split) Node number 6: 2 observations predicted class=absent expected loss=0 class counts: 2 0 probabilities: 1.000 0.000 Node number 7: 17 observations, complexity param=0.07843137 predicted class=present expected loss=0.3529412 class counts: 6 11 probabilities: 0.353 0.647 left son=14 (12 obs) right son=15 (5 obs) Primary splits: Start < 5.5 to the left, improve=1.7647060, (0 missing) Number < 4.5 to the left, improve=1.1361340, (0 missing) Age < 130.5 to the right, improve=0.7170868, (0 missing) Surrogate splits: Number < 6.5 to the left, agree=0.765, adj=0.2, (0 split) Node number 10: 12 observations predicted class=absent expected loss=0 class counts: 12 0 probabilities: 1.000 0.000 Node number 11: 21 observations, complexity param=0.05882353 predicted class=absent expected loss=0.2857143 class counts: 15 6 probabilities: 0.714 0.286 left son=22 (16 obs) right son=23 (5 obs) Primary splits: Age < 98 to the right, improve=3.4714290, (0 missing) Start < 12.5 to the right, improve=0.7936508, (0 missing) 
Number < 2.5 to the left, improve=0.5714286, (0 missing) Node number 14: 12 observations, complexity param=0.07843137 predicted class=absent expected loss=0.5 class counts: 6 6 probabilities: 0.500 0.500 left son=28 (2 obs) right son=29 (10 obs) Primary splits: Age < 130.5 to the right, improve=1.2000000, (0 missing) Number < 3.5 to the left, improve=0.2222222, (0 missing) Start < 4 to the left, improve=0.2222222, (0 missing) Node number 15: 5 observations predicted class=present expected loss=0 class counts: 0 5 probabilities: 0.000 1.000 Node number 22: 16 observations predicted class=absent expected loss=0.125 class counts: 14 2 probabilities: 0.875 0.125 Node number 23: 5 observations predicted class=present expected loss=0.2 class counts: 1 4 probabilities: 0.200 0.800 Node number 28: 2 observations predicted class=absent expected loss=0 class counts: 2 0 probabilities: 1.000 0.000 Node number 29: 10 observations, complexity param=0.07843137 predicted class=present expected loss=0.4 class counts: 4 6 probabilities: 0.400 0.600 left son=58 (6 obs) right son=59 (4 obs) Primary splits: Age < 93 to the left, improve=2.133333, (0 missing) Number < 5.5 to the left, improve=0.800000, (0 missing) Start < 2.5 to the left, improve=0.300000, (0 missing) Surrogate splits: Start < 2.5 to the left, agree=0.8, adj=0.5, (0 split) Node number 58: 6 observations, complexity param=0.05882353 predicted class=absent expected loss=0.3333333 class counts: 4 2 probabilities: 0.667 0.333 left son=116 (3 obs) right son=117 (3 obs) Primary splits: Number < 4.5 to the left, improve=1.3333330, (0 missing) Age < 39.5 to the right, improve=0.1666667, (0 missing) Surrogate splits: Age < 39.5 to the right, agree=0.833, adj=0.667, (0 split) Start < 1.5 to the left, agree=0.667, adj=0.333, (0 split) Node number 59: 4 observations predicted class=present expected loss=0 class counts: 0 4 probabilities: 0.000 1.000 Node number 116: 3 observations predicted class=absent expected loss=0 class 
counts: 3 0 probabilities: 1.000 0.000 Node number 117: 3 observations predicted class=present expected loss=0.3333333 class counts: 1 2 probabilities: 0.333 0.667 > path.rpart(fit, node=c(11, 22)) node number: 11 root Start>=8.5 Start< 14.5 Age>=55 node number: 22 root Start>=8.5 Start< 14.5 Age>=55 Age>=98 > > > > cleanEx(); ..nameEx <- "plot.rpart" > > ### * plot.rpart > > flush(stderr()); flush(stdout()) > > ### Name: plot.rpart > ### Title: Plot an Rpart Object > ### Aliases: plot.rpart > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > fit <- rpart(Price ~ Mileage + Type + Country, car.test.frame) > plot(fit, compress=TRUE) > text(fit, use.n=TRUE) > > > > cleanEx(); ..nameEx <- "post.rpart" > > ### * post.rpart > > flush(stderr()); flush(stdout()) > > ### Name: post.rpart > ### Title: PostScript Presentation Plot of an Rpart Object > ### Aliases: post.rpart post > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > z.auto <- rpart(Mileage ~ Weight, car.test.frame) > post(z.auto, file = "") # display tree on active device > # now construct postscript version on file "pretty.ps" > # with no title > post(z.auto, file = "pretty.ps", title = " ") > z.hp <- rpart(Mileage ~ Weight + HP, car.test.frame) > post(z.hp) > > > > cleanEx(); ..nameEx <- "predict.rpart" > > ### * predict.rpart > > flush(stderr()); flush(stdout()) > > ### Name: predict.rpart > ### Title: Predictions from a Fitted Rpart Object > ### Aliases: predict.rpart pred.rpart > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > z.auto <- rpart(Mileage ~ Weight, car.test.frame) > predict(z.auto) Eagle Summit 4 Ford Escort 4 30.50000 30.50000 Ford Festiva 4 Honda Civic 4 34.00000 34.00000 Mazda Protege 4 Mercury Tracer 4 30.50000 25.66667 Nissan Sentra 4 Pontiac LeMans 4 34.00000 30.50000 Subaru Loyale 4 Subaru Justy 3 25.66667 34.00000 Toyota Corolla 4 Toyota Tercel 4 30.50000 34.00000 Volkswagen Jetta 4 Chevrolet Camaro V8 25.66667 21.06250 Dodge Daytona 
Ford Mustang V8 23.80000 21.06250 Ford Probe Honda Civic CRX Si 4 27.00000 34.00000 Honda Prelude Si 4WS 4 Nissan 240SX 4 27.00000 23.80000 Plymouth Laser Subaru XT 4 23.80000 30.50000 Audi 80 4 Buick Skylark 4 27.00000 23.33333 Chevrolet Beretta 4 Chrysler Le Baron V6 27.00000 23.80000 Ford Tempo 4 Honda Accord 4 23.80000 23.80000 Mazda 626 4 Mitsubishi Galant 4 23.80000 27.00000 Mitsubishi Sigma V6 Nissan Stanza 4 21.06250 23.80000 Oldsmobile Calais 4 Peugeot 405 4 23.33333 23.33333 Subaru Legacy 4 Toyota Camry 4 23.80000 23.80000 Volvo 240 4 Acura Legend V6 23.80000 21.06250 Buick Century 4 Chrysler Le Baron Coupe 23.80000 23.80000 Chrysler New Yorker V6 Eagle Premier V6 21.06250 21.06250 Ford Taurus V6 Ford Thunderbird V6 21.06250 21.06250 Hyundai Sonata 4 Mazda 929 V6 23.80000 21.06250 Nissan Maxima V6 Oldsmobile Cutlass Ciera 4 21.06250 23.80000 Oldsmobile Cutlass Supreme V6 Toyota Cressida 6 21.06250 21.06250 Buick Le Sabre V6 Chevrolet Caprice V8 21.06250 18.66667 Ford LTD Crown Victoria V8 Chevrolet Lumina APV V6 18.66667 21.06250 Dodge Grand Caravan V6 Ford Aerostar V6 18.66667 18.66667 Mazda MPV V6 Mitsubishi Wagon 4 18.66667 21.06250 Nissan Axxess 4 Nissan Van 4 21.06250 18.66667 > > data(kyphosis) > fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis) > predict(fit, type="prob") # class probabilities (default) absent present 1 1.0000000 0.0000000 2 0.8750000 0.1250000 3 0.0000000 1.0000000 4 1.0000000 0.0000000 5 1.0000000 0.0000000 6 1.0000000 0.0000000 7 1.0000000 0.0000000 8 1.0000000 0.0000000 9 1.0000000 0.0000000 10 0.2000000 0.8000000 11 0.2000000 0.8000000 12 1.0000000 0.0000000 13 0.3333333 0.6666667 14 1.0000000 0.0000000 15 1.0000000 0.0000000 16 1.0000000 0.0000000 17 1.0000000 0.0000000 18 0.8750000 0.1250000 19 1.0000000 0.0000000 20 1.0000000 0.0000000 21 1.0000000 0.0000000 22 0.0000000 1.0000000 23 0.2000000 0.8000000 24 1.0000000 0.0000000 25 0.3333333 0.6666667 26 1.0000000 0.0000000 27 1.0000000 0.0000000 28 0.8750000 
0.1250000 29 1.0000000 0.0000000 30 1.0000000 0.0000000 31 1.0000000 0.0000000 32 0.8750000 0.1250000 33 0.8750000 0.1250000 34 1.0000000 0.0000000 35 0.8750000 0.1250000 36 1.0000000 0.0000000 37 1.0000000 0.0000000 38 0.0000000 1.0000000 39 1.0000000 0.0000000 40 0.2000000 0.8000000 41 0.3333333 0.6666667 42 1.0000000 0.0000000 43 1.0000000 0.0000000 44 1.0000000 0.0000000 45 1.0000000 0.0000000 46 0.8750000 0.1250000 47 1.0000000 0.0000000 48 0.8750000 0.1250000 49 0.0000000 1.0000000 50 0.8750000 0.1250000 51 0.2000000 0.8000000 52 1.0000000 0.0000000 53 0.0000000 1.0000000 54 1.0000000 0.0000000 55 1.0000000 0.0000000 56 1.0000000 0.0000000 57 1.0000000 0.0000000 58 0.0000000 1.0000000 59 1.0000000 0.0000000 60 0.8750000 0.1250000 61 0.0000000 1.0000000 62 0.0000000 1.0000000 63 1.0000000 0.0000000 64 1.0000000 0.0000000 65 1.0000000 0.0000000 66 1.0000000 0.0000000 67 1.0000000 0.0000000 68 0.8750000 0.1250000 69 1.0000000 0.0000000 70 1.0000000 0.0000000 71 0.8750000 0.1250000 72 0.8750000 0.1250000 73 1.0000000 0.0000000 74 0.8750000 0.1250000 75 1.0000000 0.0000000 76 1.0000000 0.0000000 77 0.8750000 0.1250000 78 1.0000000 0.0000000 79 0.8750000 0.1250000 80 0.0000000 1.0000000 81 1.0000000 0.0000000 > predict(fit, type="vector") # level numbers 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 1 1 2 1 1 1 1 1 1 2 2 1 2 1 1 1 1 1 1 1 1 2 2 1 2 1 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 1 1 1 1 1 1 1 1 1 1 1 2 1 2 2 1 1 1 1 1 1 1 2 1 2 1 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 2 1 1 1 1 2 1 1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 79 80 81 1 2 1 > predict(fit, type="class") # factor [1] absent absent present absent absent absent absent absent absent [10] present present absent present absent absent absent absent absent [19] absent absent absent present present absent present absent absent [28] absent absent absent absent absent absent absent absent absent [37] absent 
present absent present present absent absent absent absent [46] absent absent absent present absent present absent present absent [55] absent absent absent present absent absent present present absent [64] absent absent absent absent absent absent absent absent absent [73] absent absent absent absent absent absent absent present absent Levels: absent present > predict(fit, type="matrix") # level number, class frequencies, probabilities [,1] [,2] [,3] [,4] [,5] 1 1 3 0 1.0000000 0.0000000 2 1 14 2 0.8750000 0.1250000 3 2 0 4 0.0000000 1.0000000 4 1 2 0 1.0000000 0.0000000 5 1 29 0 1.0000000 0.0000000 6 1 29 0 1.0000000 0.0000000 7 1 29 0 1.0000000 0.0000000 8 1 29 0 1.0000000 0.0000000 9 1 29 0 1.0000000 0.0000000 10 2 1 4 0.2000000 0.8000000 11 2 1 4 0.2000000 0.8000000 12 1 29 0 1.0000000 0.0000000 13 2 1 2 0.3333333 0.6666667 14 1 12 0 1.0000000 0.0000000 15 1 29 0 1.0000000 0.0000000 16 1 29 0 1.0000000 0.0000000 17 1 29 0 1.0000000 0.0000000 18 1 14 2 0.8750000 0.1250000 19 1 29 0 1.0000000 0.0000000 20 1 12 0 1.0000000 0.0000000 21 1 29 0 1.0000000 0.0000000 22 2 0 4 0.0000000 1.0000000 23 2 1 4 0.2000000 0.8000000 24 1 2 0 1.0000000 0.0000000 25 2 1 2 0.3333333 0.6666667 26 1 12 0 1.0000000 0.0000000 27 1 2 0 1.0000000 0.0000000 28 1 14 2 0.8750000 0.1250000 29 1 29 0 1.0000000 0.0000000 30 1 29 0 1.0000000 0.0000000 31 1 29 0 1.0000000 0.0000000 32 1 14 2 0.8750000 0.1250000 33 1 14 2 0.8750000 0.1250000 34 1 29 0 1.0000000 0.0000000 35 1 14 2 0.8750000 0.1250000 36 1 29 0 1.0000000 0.0000000 37 1 12 0 1.0000000 0.0000000 38 2 0 5 0.0000000 1.0000000 39 1 12 0 1.0000000 0.0000000 40 2 1 4 0.2000000 0.8000000 41 2 1 2 0.3333333 0.6666667 42 1 12 0 1.0000000 0.0000000 43 1 2 0 1.0000000 0.0000000 44 1 3 0 1.0000000 0.0000000 45 1 29 0 1.0000000 0.0000000 46 1 14 2 0.8750000 0.1250000 47 1 29 0 1.0000000 0.0000000 48 1 14 2 0.8750000 0.1250000 49 2 0 4 0.0000000 1.0000000 50 1 14 2 0.8750000 0.1250000 51 2 1 4 0.2000000 0.8000000 52 1 29 0 1.0000000 0.0000000 
53 2 0 5 0.0000000 1.0000000 54 1 29 0 1.0000000 0.0000000 55 1 29 0 1.0000000 0.0000000 56 1 29 0 1.0000000 0.0000000 57 1 12 0 1.0000000 0.0000000 58 2 0 5 0.0000000 1.0000000 59 1 12 0 1.0000000 0.0000000 60 1 14 2 0.8750000 0.1250000 61 2 0 4 0.0000000 1.0000000 62 2 0 5 0.0000000 1.0000000 63 1 3 0 1.0000000 0.0000000 64 1 29 0 1.0000000 0.0000000 65 1 29 0 1.0000000 0.0000000 66 1 12 0 1.0000000 0.0000000 67 1 29 0 1.0000000 0.0000000 68 1 14 2 0.8750000 0.1250000 69 1 12 0 1.0000000 0.0000000 70 1 29 0 1.0000000 0.0000000 71 1 14 2 0.8750000 0.1250000 72 1 14 2 0.8750000 0.1250000 73 1 29 0 1.0000000 0.0000000 74 1 14 2 0.8750000 0.1250000 75 1 29 0 1.0000000 0.0000000 76 1 29 0 1.0000000 0.0000000 77 1 14 2 0.8750000 0.1250000 78 1 12 0 1.0000000 0.0000000 79 1 14 2 0.8750000 0.1250000 80 2 0 5 0.0000000 1.0000000 81 1 12 0 1.0000000 0.0000000 > > data(iris) > sub <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25)) > fit <- rpart(Species ~ ., data=iris, subset=sub) > fit n= 75 node), split, n, loss, yval, (yprob) * denotes terminal node 1) root 75 50 setosa (0.33333333 0.33333333 0.33333333) 2) Petal.Length< 2.35 25 0 setosa (1.00000000 0.00000000 0.00000000) * 3) Petal.Length>=2.35 50 25 versicolor (0.00000000 0.50000000 0.50000000) 6) Petal.Width< 1.65 27 2 versicolor (0.00000000 0.92592593 0.07407407) 12) Petal.Length< 4.95 24 0 versicolor (0.00000000 1.00000000 0.00000000) * 13) Petal.Length>=4.95 3 1 virginica (0.00000000 0.33333333 0.66666667) * 7) Petal.Width>=1.65 23 0 virginica (0.00000000 0.00000000 1.00000000) * > table(predict(fit, iris[-sub,], type="class"), iris[-sub, "Species"]) setosa versicolor virginica setosa 25 0 0 versicolor 0 23 0 virginica 0 2 25 > > > > cleanEx(); ..nameEx <- "print.rpart" > > ### * print.rpart > > flush(stderr()); flush(stdout()) > > ### Name: print.rpart > ### Title: Print an Rpart Object > ### Aliases: print.rpart > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > z.auto <- 
rpart(Mileage ~ Weight, car.test.frame)
> z.auto
n= 60 

node), split, n, deviance, yval
      * denotes terminal node

 1) root 60 1354.5830000 24.58333  
   2) Weight>=2567.5 45  361.2000000 22.46667  
     4) Weight>=3087.5 22   61.3181800 20.40909  
       8) Weight>=3637.5 6    3.3333330 18.66667 *
       9) Weight< 3637.5 16   32.9375000 21.06250 *
     5) Weight< 3087.5 23  117.6522000 24.43478  
      10) Weight>=2747.5 15   60.4000000 23.80000 *
      11) Weight< 2747.5 8   39.8750000 25.62500  
        22) Weight< 2650 3    0.6666667 23.33333 *
        23) Weight>=2650 5   14.0000000 27.00000 *
   3) Weight< 2567.5 15  186.9333000 30.93333  
     6) Weight>=2280 9   76.8888900 28.88889  
      12) Weight< 2337.5 3    0.6666667 25.66667 *
      13) Weight>=2337.5 6   29.5000000 30.50000 *
     7) Weight< 2280 6   16.0000000 34.00000 *
> ## Not run: node), split, n, deviance, yval
> ##D       * denotes terminal node
> ##D 
> ##D 1) root 60 1354.58300 24.58333  
> ##D   2) Weight>=2567.5 45 361.20000 22.46667  
> ##D     4) Weight>=3087.5 22 61.31818 20.40909 *
> ##D     5) Weight<3087.5 23 117.65220 24.43478  
> ##D       10) Weight>=2747.5 15 60.40000 23.80000 *
> ##D       11) Weight<2747.5 8 39.87500 25.62500 *
> ##D   3) Weight<2567.5 15 186.93330 30.93333 *
> ## End(Not run)
> 
> 
> cleanEx(); ..nameEx <- "printcp"
> 
> ### * printcp
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: printcp
> ### Title: Displays CP table for Fitted Rpart Object
> ### Aliases: printcp
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(car.test.frame)
> z.auto <- rpart(Mileage ~ Weight, car.test.frame)
> printcp(z.auto)

Regression tree:
rpart(formula = Mileage ~ Weight, data = car.test.frame)

Variables actually used in tree construction:
[1] Weight

Root node error: 1354.6/60 = 22.576

n= 60 

        CP nsplit rel error  xerror     xstd
1 0.595349      0   1.00000 1.03199 0.179319
2 0.134528      1   0.40465 0.48751 0.076130
3 0.069427      2   0.27012 0.37953 0.056100
4 0.034492      3   0.20070 0.33674 0.052250
5 0.018491      4   0.16620 0.32953 0.055826
6 0.015719      5   0.14771 0.29194 0.049569
7 0.010000      7   0.11627 0.28347 0.049741
> ## Not run: 
> ##D Regression tree:
> ##D rpart(formula =
Mileage ~ Weight, data = car.test.frame)
> ##D 
> ##D Variables actually used in tree construction:
> ##D [1] Weight
> ##D 
> ##D Root node error: 1354.6/60 = 22.576
> ##D 
> ##D         CP nsplit rel error  xerror     xstd
> ##D 1 0.595349      0   1.00000 1.03436 0.178526
> ##D 2 0.134528      1   0.40465 0.60508 0.105217
> ##D 3 0.012828      2   0.27012 0.45153 0.083330
> ##D 4 0.010000      3   0.25729 0.44826 0.076998
> ## End(Not run)
> 
> 
> cleanEx(); ..nameEx <- "prune.rpart"
> 
> ### * prune.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: prune.rpart
> ### Title: Cost-complexity Pruning of an Rpart Object
> ### Aliases: prune.rpart prune
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(car.test.frame)
> z.auto <- rpart(Mileage ~ Weight, car.test.frame)
> zp <- prune(z.auto, cp=0.1)
> plot(zp)   # plot smaller rpart object
> 
> 
> 
> cleanEx(); ..nameEx <- "residuals.rpart"
> 
> ### * residuals.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: residuals.rpart
> ### Title: Residuals From a Fitted Rpart Object
> ### Aliases: residuals.rpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(solder)
> fit <- rpart(skips ~ Opening + Solder + Mask + PadType + Panel,
+              data=solder, method='anova')
> summary(residuals(fit))
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max. 
-1.380e+01 -1.036e+00 -5.583e-01 4.589e-16 9.639e-01 1.620e+01 > plot(predict(fit),residuals(fit)) > > > > cleanEx(); ..nameEx <- "rpart" > > ### * rpart > > flush(stderr()); flush(stdout()) > > ### Name: rpart > ### Title: Recursive Partitioning and Regression Trees > ### Aliases: rpart rpartcallback > ### Keywords: tree > > ### ** Examples > > > data(car.test.frame) > z.auto <- rpart(Mileage ~ Weight, car.test.frame) > summary(z.auto) Call: rpart(formula = Mileage ~ Weight, data = car.test.frame) n= 60 CP nsplit rel error xerror xstd 1 0.59534912 0 1.0000000 1.0319924 0.17931937 2 0.13452819 1 0.4046509 0.4875114 0.07613003 3 0.06942684 2 0.2701227 0.3795295 0.05610012 4 0.03449195 3 0.2006958 0.3367415 0.05225040 5 0.01849081 4 0.1662039 0.3295272 0.05582591 6 0.01571904 5 0.1477131 0.2919391 0.04956914 7 0.01000000 7 0.1162750 0.2834672 0.04974102 Node number 1: 60 observations, complexity param=0.5953491 mean=24.58333, MSE=22.57639 left son=2 (45 obs) right son=3 (15 obs) Primary splits: Weight < 2567.5 to the right, improve=0.5953491, (0 missing) Node number 2: 45 observations, complexity param=0.1345282 mean=22.46667, MSE=8.026667 left son=4 (22 obs) right son=5 (23 obs) Primary splits: Weight < 3087.5 to the right, improve=0.5045118, (0 missing) Node number 3: 15 observations, complexity param=0.06942684 mean=30.93333, MSE=12.46222 left son=6 (9 obs) right son=7 (6 obs) Primary splits: Weight < 2280 to the right, improve=0.5030908, (0 missing) Node number 4: 22 observations, complexity param=0.01849081 mean=20.40909, MSE=2.78719 left son=8 (6 obs) right son=9 (16 obs) Primary splits: Weight < 3637.5 to the right, improve=0.4084816, (0 missing) Node number 5: 23 observations, complexity param=0.01571904 mean=24.43478, MSE=5.115312 left son=10 (15 obs) right son=11 (8 obs) Primary splits: Weight < 2747.5 to the right, improve=0.1476996, (0 missing) Node number 6: 9 observations, complexity param=0.03449195 mean=28.88889, MSE=8.54321 left son=12 (3 obs) right 
son=13 (6 obs)
  Primary splits:
      Weight < 2337.5 to the left, improve=0.607659, (0 missing)

Node number 7: 6 observations
  mean=34, MSE=2.666667 

Node number 8: 6 observations
  mean=18.66667, MSE=0.5555556 

Node number 9: 16 observations
  mean=21.0625, MSE=2.058594 

Node number 10: 15 observations
  mean=23.8, MSE=4.026667 

Node number 11: 8 observations,    complexity param=0.01571904
  mean=25.625, MSE=4.984375 
  left son=22 (3 obs) right son=23 (5 obs)
  Primary splits:
      Weight < 2650 to the left, improve=0.6321839, (0 missing)

Node number 12: 3 observations
  mean=25.66667, MSE=0.2222222 

Node number 13: 6 observations
  mean=30.5, MSE=4.916667 

Node number 22: 3 observations
  mean=23.33333, MSE=0.2222222 

Node number 23: 5 observations
  mean=27, MSE=2.8 

> plot(z.auto); text(z.auto)
> 
> data(spider)
> fit1 <- rpart(data.matrix(spider[,1:12])~water+twigs+reft+herbs+moss+sand,spider,method="mrt")
> fit2 <- rpart(gdist(spider[,1:12],meth="bray",full=TRUE,sq=TRUE)~water+twigs+reft+herbs+moss+sand,spider,method="dist")
> par(mfrow=c(1,2))
> plot(fit1); text(fit1)
> plot(fit2); text(fit2)
> 
> 
> 
> 
> graphics::par(get("par.postscript", env = .CheckExEnv))
> cleanEx(); ..nameEx <- "rpart.pca"
> 
> ### * rpart.pca
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: rpart.pca
> ### Title: Principal Components Plot of a Multivariate Rpart Object
> ### Aliases: rpart.pca
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(spider)
> fit <- mvpart(data.matrix(spider[,1:12])~herbs+reft+moss+sand+twigs+water,spider)
> rpart.pca(fit)
> rpart.pca(fit,wgt.ave=TRUE,interact=TRUE)
> 
> 
> 
> cleanEx(); ..nameEx <- "rsq.rpart"
> 
> ### * rsq.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: rsq.rpart
> ### Title: Plots the Approximate R-Square for the Different Splits
> ### Aliases: rsq.rpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(car.test.frame)
> z.auto <- rpart(Mileage ~ Weight, car.test.frame)
> rsq.rpart(z.auto)

Regression tree:
rpart(formula = Mileage ~ Weight, data = car.test.frame)

Variables
actually used in tree construction:
[1] Weight

Root node error: 1354.6/60 = 22.576

n= 60 

        CP nsplit rel error  xerror     xstd
1 0.595349      0   1.00000 1.03199 0.179319
2 0.134528      1   0.40465 0.48751 0.076130
3 0.069427      2   0.27012 0.37953 0.056100
4 0.034492      3   0.20070 0.33674 0.052250
5 0.018491      4   0.16620 0.32953 0.055826
6 0.015719      5   0.14771 0.29194 0.049569
7 0.010000      7   0.11627 0.28347 0.049741
> 
> 
> 
> cleanEx(); ..nameEx <- "scaler"
> 
> ### * scaler
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: scaler
> ### Title: Row and Column Scaling of a Data Matrix
> ### Aliases: scaler
> ### Keywords: multivariate manip
> 
> ### ** Examples
> 
> data(spider)
> spid.data <- scaler(spider, col = "max", row="mean1")
> 
> 
> 
> cleanEx(); ..nameEx <- "snip.rpart"
> 
> ### * snip.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: snip.rpart
> ### Title: Snip Subtrees of an Rpart Object
> ### Aliases: snip.rpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> ## dataset not in R
> ## Not run: 
> ##D z.survey <- rpart(market.survey)   # grow the rpart object
> ##D plot(z.survey)                     # plot the tree
> ##D z.survey2 <- snip.rpart(z.survey, toss=2)   # trim subtree at node 2
> ##D plot(z.survey2)                    # plot new tree
> ##D 
> ##D # can also interactively select the node using the mouse in the
> ##D # graphics window
> ## End(Not run)
> 
> 
> cleanEx(); ..nameEx <- "solder"
> 
> ### * solder
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: solder
> ### Title: Soldering of Components on Printed-Circuit Boards
> ### Aliases: solder
> ### Keywords: datasets
> 
> ### ** Examples
> 
> data(solder)
> fit <- rpart(skips ~ Opening + Solder + Mask + PadType + Panel,
+              data=solder, method='anova')
> summary(residuals(fit))
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max. 
-1.380e+01 -1.036e+00 -5.583e-01 4.589e-16 9.639e-01 1.620e+01 > plot(predict(fit),residuals(fit)) > > > > cleanEx(); ..nameEx <- "spider" > > ### * spider > > flush(stderr()); flush(stdout()) > > ### Name: spider > ### Title: Spider Data > ### Aliases: spider > ### Keywords: datasets > > ### ** Examples > > data(spider) > fit<-mvpart(as.matrix(spider[,1:12])~water+twigs+reft+herbs+moss+sand,spider) > summary(fit) Call: mvpart(form = as.matrix(spider[, 1:12]) ~ water + twigs + reft + herbs + moss + sand, data = spider) n= 28 CP nsplit rel error xerror xstd 1 0.51864091 0 1.0000000 1.0742120 0.12655458 2 0.14489010 1 0.4813591 0.5536583 0.07280688 3 0.07537481 2 0.3364690 0.4253094 0.07226453 Node number 1: 28 observations, complexity param=0.5186409 Means=0.3571,1.179,1.536,1.964,2.5,1.179,4.5,1.393,2.5,1.5,0.9286,0.4286, Summed MSE=50.64158 left son=2 (20 obs) right son=3 (8 obs) Primary splits: herbs < 8.5 to the left, improve=0.5186409, (0 missing) water < 5.5 to the left, improve=0.3015809, (0 missing) moss < 6 to the right, improve=0.2483042, (0 missing) reft < 7.5 to the right, improve=0.2123679, (0 missing) sand < 5.5 to the right, improve=0.2008664, (0 missing) Node number 2: 20 observations, complexity param=0.1448901 Means=0.1,1.3,0.75,0.6,0.5,0.3,2.9,0.8,2.1,1.5,1.2,0.6, Summed MSE=25.7775 left son=4 (11 obs) right son=5 (9 obs) Primary splits: water < 5.5 to the left, improve=0.3985045, (0 missing) twigs < 3.5 to the left, improve=0.3985045, (0 missing) reft < 3.5 to the right, improve=0.3985045, (0 missing) moss < 6 to the right, improve=0.3518347, (0 missing) herbs < 6.5 to the left, improve=0.2054174, (0 missing) Surrogate splits: twigs < 3.5 to the left, agree=1.0, adj=1.000, (0 split) reft < 3.5 to the right, agree=1.0, adj=1.000, (0 split) moss < 3.5 to the right, agree=0.9, adj=0.778, (0 split) sand < 2.5 to the right, agree=0.8, adj=0.556, (0 split) herbs < 5.5 to the right, agree=0.7, adj=0.333, (0 split) Node number 3: 8 observations 
Means=1,0.875,3.5,5.375,7.5,3.375,8.5,2.875,3.5,1.5,0.25,0, Summed MSE=20.875 Node number 4: 11 observations Means=0,0.1818,0.1818,0.3636,0.3636,0.1818,1.364,0.5455,3.364,2.727,2.182,1.091, Summed MSE=17.68595 Node number 5: 9 observations Means=0.2222,2.667,1.444,0.8889,0.6667,0.4444,4.778,1.111,0.5556,0,0,0, Summed MSE=12.83951 > > > > cleanEx(); ..nameEx <- "summary.rpart" > > ### * summary.rpart > > flush(stderr()); flush(stdout()) > > ### Name: summary.rpart > ### Title: Summarize a Fitted Rpart Object > ### Aliases: summary.rpart > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > z.auto <- rpart(Mileage ~ Weight, car.test.frame) > summary(z.auto) Call: rpart(formula = Mileage ~ Weight, data = car.test.frame) n= 60 CP nsplit rel error xerror xstd 1 0.59534912 0 1.0000000 1.0319924 0.17931937 2 0.13452819 1 0.4046509 0.4875114 0.07613003 3 0.06942684 2 0.2701227 0.3795295 0.05610012 4 0.03449195 3 0.2006958 0.3367415 0.05225040 5 0.01849081 4 0.1662039 0.3295272 0.05582591 6 0.01571904 5 0.1477131 0.2919391 0.04956914 7 0.01000000 7 0.1162750 0.2834672 0.04974102 Node number 1: 60 observations, complexity param=0.5953491 mean=24.58333, MSE=22.57639 left son=2 (45 obs) right son=3 (15 obs) Primary splits: Weight < 2567.5 to the right, improve=0.5953491, (0 missing) Node number 2: 45 observations, complexity param=0.1345282 mean=22.46667, MSE=8.026667 left son=4 (22 obs) right son=5 (23 obs) Primary splits: Weight < 3087.5 to the right, improve=0.5045118, (0 missing) Node number 3: 15 observations, complexity param=0.06942684 mean=30.93333, MSE=12.46222 left son=6 (9 obs) right son=7 (6 obs) Primary splits: Weight < 2280 to the right, improve=0.5030908, (0 missing) Node number 4: 22 observations, complexity param=0.01849081 mean=20.40909, MSE=2.78719 left son=8 (6 obs) right son=9 (16 obs) Primary splits: Weight < 3637.5 to the right, improve=0.4084816, (0 missing) Node number 5: 23 observations, complexity param=0.01571904 mean=24.43478, 
MSE=5.115312 left son=10 (15 obs) right son=11 (8 obs) Primary splits: Weight < 2747.5 to the right, improve=0.1476996, (0 missing) Node number 6: 9 observations, complexity param=0.03449195 mean=28.88889, MSE=8.54321 left son=12 (3 obs) right son=13 (6 obs) Primary splits: Weight < 2337.5 to the left, improve=0.607659, (0 missing) Node number 7: 6 observations mean=34, MSE=2.666667 Node number 8: 6 observations mean=18.66667, MSE=0.5555556 Node number 9: 16 observations mean=21.0625, MSE=2.058594 Node number 10: 15 observations mean=23.8, MSE=4.026667 Node number 11: 8 observations, complexity param=0.01571904 mean=25.625, MSE=4.984375 left son=22 (3 obs) right son=23 (5 obs) Primary splits: Weight < 2650 to the left, improve=0.6321839, (0 missing) Node number 12: 3 observations mean=25.66667, MSE=0.2222222 Node number 13: 6 observations mean=30.5, MSE=4.916667 Node number 22: 3 observations mean=23.33333, MSE=0.2222222 Node number 23: 5 observations mean=27, MSE=2.8 > > > > cleanEx(); ..nameEx <- "text.rpart" > > ### * text.rpart > > flush(stderr()); flush(stdout()) > > ### Name: text.rpart > ### Title: Place Text on a Dendrogram > ### Aliases: text.rpart > ### Keywords: tree > > ### ** Examples > > data(car.test.frame) > z.auto <- rpart(Mileage ~ Weight, car.test.frame) > plot(z.auto) > text(z.auto, use.n=TRUE, all=TRUE) > > > > cleanEx(); ..nameEx <- "trclcomp" > > ### * trclcomp > > flush(stderr()); flush(stdout()) > > ### Name: trclcomp > ### Title: Tree-Clustering Comparison > ### Aliases: trclcomp > ### Keywords: multivariate > > ### ** Examples > > data(spider) > fit <- mvpart(data.matrix(spider[,1:12])~herbs+reft+moss+sand+twigs+water,spider) > trclcomp(fit) Tree error : 1 0.481 0.336 Cluster error : 1 0.436 0.344 > > > > cleanEx(); ..nameEx <- "xdiss" > > ### * xdiss > > flush(stderr()); flush(stdout()) > > ### Name: xdiss > ### Title: Extendend Dissimilarity Measures > ### Aliases: xdiss > ### Keywords: multivariate > > ### ** Examples > > data(spider) 
> spider.dist <- xdiss(spider)
Using Extended Dissimilarity : Manhattan (Site Standardised by Mean)
Maximum distance =  0.9655 
Critical distance =  0.6474 
% Distances > Crit Dist =  29.89 
Summary of Extended Dissimilarities
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
 0.05587  0.33480  0.51950  0.54690  0.74300  1.27800 
> 
> 
> 
> cleanEx(); ..nameEx <- "xpred.rpart"
> 
> ### * xpred.rpart
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: xpred.rpart
> ### Title: Return Cross-Validated Predictions
> ### Aliases: xpred.rpart
> ### Keywords: tree
> 
> ### ** Examples
> 
> data(car.test.frame)
> fit <- rpart(Mileage ~ Weight, car.test.frame)
> xmat <- xpred.rpart(fit)
> xerr <- (xmat - car.test.frame$Mileage)^2
> apply(xerr, 2, sum)   # cross-validated error estimate
0.79767456 0.28300396 0.09664299 0.04893534 0.02525439 0.01704869 0.01253756 
 1423.9568   740.4845   544.3925   477.2728   646.3198   690.3484   644.4607 
> 
> # approx same result as rel. error from printcp(fit)
> apply(xerr, 2, sum)/var(car.test.frame$Mileage)
0.79767456 0.28300396 0.09664299 0.04893534 0.02525439 0.01704869 0.01253756 
  62.02162   32.25242   23.71147   20.78801   28.15099   30.06870   28.07002 
> printcp(fit)

Regression tree:
rpart(formula = Mileage ~ Weight, data = car.test.frame)

Variables actually used in tree construction:
[1] Weight

Root node error: 1354.6/60 = 22.576

n= 60 

        CP nsplit rel error  xerror     xstd
1 0.595349      0   1.00000 1.03199 0.179319
2 0.134528      1   0.40465 0.48751 0.076130
3 0.069427      2   0.27012 0.37953 0.056100
4 0.034492      3   0.20070 0.33674 0.052250
5 0.018491      4   0.16620 0.32953 0.055826
6 0.015719      5   0.14771 0.29194 0.049569
7 0.010000      7   0.11627 0.28347 0.049741
> 
> 
> 
> ### *
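A natural follow-up to the cross-validation output above is to prune the tree at the complexity parameter with the smallest cross-validated error (the `xerror` column of the CP table). The sketch below is a minimal illustration, with one assumption flagged: it uses the stock `rpart` package (from which mvpart was derived, and which ships the same `car.test.frame` data), since mvpart itself may not be installed.

```r
## Assumption: using the 'rpart' package rather than mvpart; both expose
## the same $cptable structure shown by printcp() above.
library(rpart)
data(car.test.frame)

fit <- rpart(Mileage ~ Weight, data = car.test.frame)

## cptable columns mirror the printcp() display: CP, nsplit, rel error,
## xerror, xstd.  xerror values vary slightly run to run because the
## cross-validation folds are randomized.
best.cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]

## Prune back to the subtree selected by cross-validation.
pruned <- prune(fit, cp = best.cp)
print(pruned)
```

The same idea applies to multivariate trees from `mvpart()`, whose fitted objects carry an identically structured `cptable`.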