Last updated on 2020-01-11 09:49:13 CET.
Flavor | Version | Tinstall (s) | Tcheck (s) | Ttotal (s) | Status | Flags |
---|---|---|---|---|---|---|
r-devel-linux-x86_64-debian-clang | 0.1.4 | 6.65 | 60.78 | 67.43 | ERROR | |
r-devel-linux-x86_64-debian-gcc | 0.1.4 | 5.23 | 47.19 | 52.42 | ERROR | |
r-devel-linux-x86_64-fedora-clang | 0.1.4 | | | 79.32 | OK | |
r-devel-linux-x86_64-fedora-gcc | 0.1.4 | | | 78.10 | OK | |
r-devel-windows-ix86+x86_64 | 0.1.4 | 21.00 | 102.00 | 123.00 | OK | |
r-devel-windows-ix86+x86_64-gcc8 | 0.1.4 | 14.00 | 111.00 | 125.00 | OK | |
r-patched-linux-x86_64 | 0.1.4 | 5.18 | 52.40 | 57.58 | OK | |
r-patched-solaris-x86 | 0.1.4 | | | 110.00 | OK | |
r-release-linux-x86_64 | 0.1.4 | 5.37 | 52.44 | 57.81 | OK | |
r-release-windows-ix86+x86_64 | 0.1.4 | 16.00 | 103.00 | 119.00 | OK | |
r-release-osx-x86_64 | 0.1.4 | | | | OK | |
r-oldrel-windows-ix86+x86_64 | 0.1.4 | 7.00 | 73.00 | 80.00 | OK | |
r-oldrel-osx-x86_64 | 0.1.4 | | | | OK | |
Version: 0.1.4
Check: tests
Result: ERROR
Running 'testthat.R' [11s/13s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> library(testthat)
> library(OptimClassifier)
>
> test_check("OptimClassifier")
6 successful models have been tested
CP rmse success_rate ti_error tii_error Nnodes
1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
6 successful models have been tested
CP rmse success_rate ti_error tii_error Nnodes
1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
Call:
rpart::rpart(formula = formula, data = training, na.action = rpart::na.rpart,
model = FALSE, x = FALSE, y = FALSE, cp = 0)
n= 482
CP nsplit rel error xerror xstd
1 0.66515837 0 1.0000000 1.0000000 0.04949948
2 0.02262443 1 0.3348416 0.3348416 0.03581213
3 0.01357466 4 0.2669683 0.3438914 0.03620379
4 0.01131222 5 0.2533937 0.3438914 0.03620379
Variable importance
X8 X10 X9 X7 X5 X14 X13 X6 X12 X3
36 16 16 13 10 6 2 1 1 1
Node number 1: 482 observations, complexity param=0.6651584
predicted class=0 expected loss=0.4585062 P(node) =1
class counts: 261 221
probabilities: 0.541 0.459
left son=2 (219 obs) right son=3 (263 obs)
Primary splits:
X8 splits as LR, improve=119.25990, (0 missing)
X10 < 2.5 to the left, improve= 52.61262, (0 missing)
X9 splits as LR, improve= 47.76803, (0 missing)
X14 < 396 to the left, improve= 32.46584, (0 missing)
X7 < 1.0425 to the left, improve= 31.53528, (0 missing)
Surrogate splits:
X9 splits as LR, agree=0.701, adj=0.342, (0 split)
X10 < 0.5 to the left, agree=0.701, adj=0.342, (0 split)
X7 < 0.435 to the left, agree=0.699, adj=0.338, (0 split)
X5 splits as LLLLRRRRRRRRRR, agree=0.641, adj=0.210, (0 split)
X14 < 127 to the left, agree=0.606, adj=0.132, (0 split)
Node number 2: 219 observations
predicted class=0 expected loss=0.07305936 P(node) =0.4543568
class counts: 203 16
probabilities: 0.927 0.073
Node number 3: 263 observations, complexity param=0.02262443
predicted class=1 expected loss=0.2205323 P(node) =0.5456432
class counts: 58 205
probabilities: 0.221 0.779
left son=6 (99 obs) right son=7 (164 obs)
Primary splits:
X9 splits as LR, improve=11.902240, (0 missing)
X10 < 0.5 to the left, improve=11.902240, (0 missing)
X14 < 216.5 to the left, improve=10.195680, (0 missing)
X5 splits as LLLLLLRRRRRRRR, improve= 7.627675, (0 missing)
X13 < 72.5 to the right, improve= 6.568284, (0 missing)
Surrogate splits:
X10 < 0.5 to the left, agree=1.000, adj=1.000, (0 split)
X14 < 3 to the left, agree=0.722, adj=0.263, (0 split)
X12 splits as LR-, agree=0.688, adj=0.172, (0 split)
X5 splits as RLRLRLRRRRRRRR, agree=0.665, adj=0.111, (0 split)
X7 < 0.27 to the left, agree=0.665, adj=0.111, (0 split)
Node number 6: 99 observations, complexity param=0.02262443
predicted class=1 expected loss=0.4141414 P(node) =0.2053942
class counts: 41 58
probabilities: 0.414 0.586
left son=12 (61 obs) right son=13 (38 obs)
Primary splits:
X13 < 111 to the right, improve=6.520991, (0 missing)
X5 splits as LLLLLLLLRRRRRR, improve=5.770954, (0 missing)
X6 splits as LRLRL-RR, improve=4.176207, (0 missing)
X14 < 388.5 to the left, improve=3.403553, (0 missing)
X3 < 2.52 to the left, improve=2.599301, (0 missing)
Surrogate splits:
X3 < 4.5625 to the left, agree=0.697, adj=0.211, (0 split)
X2 < 22.835 to the right, agree=0.677, adj=0.158, (0 split)
X5 splits as RRLLRLLLLRRLLL, agree=0.667, adj=0.132, (0 split)
X7 < 0.02 to the right, agree=0.667, adj=0.132, (0 split)
X6 splits as RRRLL-LR, agree=0.657, adj=0.105, (0 split)
Node number 7: 164 observations
predicted class=1 expected loss=0.1036585 P(node) =0.340249
class counts: 17 147
probabilities: 0.104 0.896
Node number 12: 61 observations, complexity param=0.02262443
predicted class=0 expected loss=0.442623 P(node) =0.126556
class counts: 34 27
probabilities: 0.557 0.443
left son=24 (49 obs) right son=25 (12 obs)
Primary splits:
X5 splits as LLLL-LLLRLRRLL, improve=4.5609460, (0 missing)
X14 < 126 to the left, improve=3.7856330, (0 missing)
X6 splits as L--LL-R-, improve=3.2211680, (0 missing)
X3 < 9.625 to the right, improve=1.0257110, (0 missing)
X2 < 24.5 to the right, improve=0.9861812, (0 missing)
Surrogate splits:
X14 < 2202.5 to the left, agree=0.836, adj=0.167, (0 split)
X3 < 11.3125 to the left, agree=0.820, adj=0.083, (0 split)
Node number 13: 38 observations
predicted class=1 expected loss=0.1842105 P(node) =0.07883817
class counts: 7 31
probabilities: 0.184 0.816
Node number 24: 49 observations, complexity param=0.01357466
predicted class=0 expected loss=0.3469388 P(node) =0.1016598
class counts: 32 17
probabilities: 0.653 0.347
left son=48 (34 obs) right son=49 (15 obs)
Primary splits:
X6 splits as L--LL-R-, improve=2.7687880, (0 missing)
X13 < 150 to the left, improve=2.3016430, (0 missing)
X14 < 126 to the left, improve=2.2040820, (0 missing)
X3 < 4.4575 to the right, improve=1.5322870, (0 missing)
X5 splits as LLRL-LRR-R--RR, improve=0.8850852, (0 missing)
Surrogate splits:
X2 < 50.415 to the left, agree=0.735, adj=0.133, (0 split)
X5 splits as LLLL-LLL-L--LR, agree=0.735, adj=0.133, (0 split)
X7 < 2.625 to the left, agree=0.735, adj=0.133, (0 split)
Node number 25: 12 observations
predicted class=1 expected loss=0.1666667 P(node) =0.02489627
class counts: 2 10
probabilities: 0.167 0.833
Node number 48: 34 observations
predicted class=0 expected loss=0.2352941 P(node) =0.07053942
class counts: 26 8
probabilities: 0.765 0.235
Node number 49: 15 observations
predicted class=1 expected loss=0.4 P(node) =0.03112033
class counts: 6 9
probabilities: 0.400 0.600
1 successful models have been tested
Model rmse success_rate ti_error tii_error
1 lda 0.3509821 0.8768116 0.03623188 0.08695652 0 1
0.5507246 0.4492754
7 successful models have been tested and 21 thresholds evaluated
Model rmse Threshold success_rate ti_error tii_error
1 binomial(logit) 0.3011696 1.00 0.5865385 0.4134615 0
2 binomial(probit) 0.3016317 1.00 0.5865385 0.4134615 0
3 binomial(cloglog) 0.3020186 1.00 0.5865385 0.4134615 0
4 poisson(log) 0.3032150 0.95 0.6634615 0.3365385 0
5 poisson(sqrt) 0.3063370 0.95 0.6490385 0.3509615 0
6 gaussian 0.3109044 0.95 0.6442308 0.3557692 0
7 poisson 0.3111360 1.00 0.6153846 0.3846154 0
-- 1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10) ----------
class(summary(modelFit)$coef) not equal to "matrix".
Lengths differ: 2 is not 1
3 successful models have been tested
Model rmse threshold success_rate ti_error tii_error
1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
3 successful models have been tested
Model rmse threshold success_rate ti_error tii_error
1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
-- 2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12) ------------
class(summary(modelFit)$coef) not equal to "matrix".
Lengths differ: 2 is not 1
8 random variables have been tested
Random_Variable aic bic rmse threshold success_rate ti_error
1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
tii_error
1 0.01923077
2 0.05769231
3 0.08653846
4 0.07692308
5 0.09615385
6 0.07692308
7 0.07692308
8 0.07692308
8 random variables have been tested
Random_Variable aic bic rmse threshold success_rate ti_error
1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
tii_error
1 0.01923077
2 0.05769231
3 0.08653846
4 0.07692308
5 0.09615385
6 0.07692308
7 0.07692308
8 0.07692308
Warning: Thresholds' criteria not selected. The success rate is defined as the default.
# weights: 37
initial value 314.113022
iter 10 value 305.860086
iter 20 value 305.236595
iter 30 value 305.199531
final value 305.199440
converged
----------- FAILURE REPORT --------------
--- failure: the condition has length > 1 ---
--- srcref ---
:
--- package (from environment) ---
OptimClassifier
--- call from context ---
MC(y = y, yhat = CutR)
--- call from argument ---
if (class(yhat) != class(y)) {
yhat <- as.numeric(yhat)
y <- as.numeric(y)
}
--- R stacktrace ---
where 1: MC(y = y, yhat = CutR)
where 2: FUN(X[[i]], ...)
where 3: lapply(thresholdsused, threshold, y = testing[, response_variable],
yhat = predicts[[k]], categories = Names)
where 4 at testthat/test-OptimNN.R#4: Optim.NN(Y ~ ., AustralianCredit, p = 0.65, seed = 2018)
where 5: eval(code, test_env)
where 6: eval(code, test_env)
where 7: withCallingHandlers({
eval(code, test_env)
if (!handled && !is.null(test)) {
skip_empty()
}
}, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
message = handle_message, error = handle_error)
where 8: doTryCatch(return(expr), name, parentenv, handler)
where 9: tryCatchOne(expr, names, parentenv, handlers[[1L]])
where 10: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
where 11: doTryCatch(return(expr), name, parentenv, handler)
where 12: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
names[nh], parentenv, handlers[[nh]])
where 13: tryCatchList(expr, classes, parentenv, handlers)
where 14: tryCatch(withCallingHandlers({
eval(code, test_env)
if (!handled && !is.null(test)) {
skip_empty()
}
}, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
message = handle_message, error = handle_error), error = handle_fatal,
skip = function(e) {
})
where 15: test_code(desc, code, env = parent.frame())
where 16 at testthat/test-OptimNN.R#3: test_that("Test example with Australian Credit Dataset for NN",
{
modelFit <- Optim.NN(Y ~ ., AustralianCredit, p = 0.65,
seed = 2018)
expect_equal(class(modelFit), "Optim")
print(modelFit)
print(modelFit, plain = TRUE)
expect_equal(class(summary(modelFit)$value), "numeric")
})
where 17: eval(code, test_env)
where 18: eval(code, test_env)
where 19: withCallingHandlers({
eval(code, test_env)
if (!handled && !is.null(test)) {
skip_empty()
}
}, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
message = handle_message, error = handle_error)
where 20: doTryCatch(return(expr), name, parentenv, handler)
where 21: tryCatchOne(expr, names, parentenv, handlers[[1L]])
where 22: tryCatchList(expr, names[-nh], parentenv, handlers[-nh])
where 23: doTryCatch(return(expr), name, parentenv, handler)
where 24: tryCatchOne(tryCatchList(expr, names[-nh], parentenv, handlers[-nh]),
names[nh], parentenv, handlers[[nh]])
where 25: tryCatchList(expr, classes, parentenv, handlers)
where 26: tryCatch(withCallingHandlers({
eval(code, test_env)
if (!handled && !is.null(test)) {
skip_empty()
}
}, expectation = handle_expectation, skip = handle_skip, warning = handle_warning,
message = handle_message, error = handle_error), error = handle_fatal,
skip = function(e) {
})
where 27: test_code(NULL, exprs, env)
where 28: source_file(path, new.env(parent = env), chdir = TRUE, wrap = wrap)
where 29: force(code)
where 30: doWithOneRestart(return(expr), restart)
where 31: withOneRestart(expr, restarts[[1L]])
where 32: withRestarts(testthat_abort_reporter = function() NULL, force(code))
where 33: with_reporter(reporter = reporter, start_end_reporter = start_end_reporter,
{
reporter$start_file(basename(path))
lister$start_file(basename(path))
source_file(path, new.env(parent = env), chdir = TRUE,
wrap = wrap)
reporter$.end_context()
reporter$end_file()
})
where 34: FUN(X[[i]], ...)
where 35: lapply(paths, test_file, env = env, reporter = current_reporter,
start_end_reporter = FALSE, load_helpers = FALSE, wrap = wrap)
where 36: force(code)
where 37: doWithOneRestart(return(expr), restart)
where 38: withOneRestart(expr, restarts[[1L]])
where 39: withRestarts(testthat_abort_reporter = function() NULL, force(code))
where 40: with_reporter(reporter = current_reporter, results <- lapply(paths,
test_file, env = env, reporter = current_reporter, start_end_reporter = FALSE,
load_helpers = FALSE, wrap = wrap))
where 41: test_files(paths, reporter = reporter, env = env, stop_on_failure = stop_on_failure,
stop_on_warning = stop_on_warning, wrap = wrap)
where 42: test_dir(path = test_path, reporter = reporter, env = env, filter = filter,
..., stop_on_failure = stop_on_failure, stop_on_warning = stop_on_warning,
wrap = wrap)
where 43: test_package_dir(package = package, test_path = test_path, filter = filter,
reporter = reporter, ..., stop_on_failure = stop_on_failure,
stop_on_warning = stop_on_warning, wrap = wrap)
where 44: test_check("OptimClassifier")
--- value of length: 2 type: logical ---
[1] TRUE TRUE
--- function from context ---
function (yhat, y, metrics = FALSE)
{
if (class(yhat) != class(y)) {
yhat <- as.numeric(yhat)
y <- as.numeric(y)
}
Real <- y
Estimated <- yhat
MC <- table(Estimated, Real)
Success_rate <- (sum(diag(MC)))/sum(MC)
tI_error <- sum(MC[upper.tri(MC, diag = FALSE)])/sum(MC)
tII_error <- sum(MC[lower.tri(MC, diag = FALSE)])/sum(MC)
General_metrics <- data.frame(Success_rate = Success_rate,
tI_error = tI_error, tII_error = tII_error)
if (metrics == TRUE) {
Real_cases <- colSums(MC)
Sensitivity <- diag(MC)/colSums(MC)
Prevalence <- Real_cases/sum(MC)
Specificity_F <- function(N, Matrix) {
sum(diag(Matrix)[-N])/sum(colSums(Matrix)[-N])
}
Precision_F <- function(N, Matrix) {
diag(Matrix)[N]/sum(diag(Matrix))
}
Specificity <- unlist(lapply(X = 1:nrow(MC), FUN = Specificity_F,
Matrix = MC))
Precision <- unlist(lapply(X = 1:nrow(MC), FUN = Precision_F,
Matrix = MC))
Categories <- names(Precision)
Categorical_Metrics <- data.frame(Categories, Sensitivity,
Prevalence, Specificity, Precision)
output <- list(MC, General_metrics, Categorical_Metrics)
}
else {
output <- MC
}
return(output)
}
<bytecode: 0x4a13d50>
<environment: namespace:OptimClassifier>
--- function search by body ---
Function MC in namespace OptimClassifier has this body.
----------- END OF FAILURE REPORT --------------
-- 3. Error: Test example with Australian Credit Dataset for NN (@test-OptimNN.R
the condition has length > 1
Backtrace:
1. OptimClassifier::Optim.NN(...)
2. base::lapply(...)
3. OptimClassifier:::FUN(X[[i]], ...)
4. OptimClassifier::MC(y = y, yhat = CutR)
4 successful kernels have been tested
Kernels rmse threshold success_rate ti_error tii_error
1 radial 0.3351234 1.75 0.8745981 0.05787781 0.06752412
2 linear 0.3517729 1.20 0.8617363 0.03536977 0.10289389
3 sigmoid 0.4044390 1.30 0.8617363 0.04180064 0.09646302
4 polynomial 0.5285653 1.15 0.8360129 0.10932476 0.05466238
4 successful models have been tested
Kernels rmse threshold success_rate ti_error tii_error
1 radial 0.3351234 1.75 0.8745981 0.05787781 0.06752412
2 linear 0.3517729 1.20 0.8617363 0.03536977 0.10289389
3 sigmoid 0.4044390 1.30 0.8617363 0.04180064 0.09646302
4 polynomial 0.5285653 1.15 0.8360129 0.10932476 0.05466238
== testthat results ===========================================================
[ OK: 13 | SKIPPED: 0 | WARNINGS: 9 | FAILED: 3 ]
1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10)
2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12)
3. Error: Test example with Australian Credit Dataset for NN (@test-OptimNN.R#4)
Error: testthat unit tests failed
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
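
The error in test 3 above ("the condition has length > 1") reflects the stricter behaviour of current r-devel, where if() stops with an error rather than a warning when its condition is not of length one. In the MC() source quoted in the failure report, class(yhat) != class(y) compares the full class vectors element-wise and here produces the length-two logical TRUE TRUE. Below is a minimal sketch of a length-one comparison; classes_differ() is an illustrative helper, not part of OptimClassifier.

    ## Sketch only: assumes the guard's intent is "coerce to numeric when the
    ## classes of yhat and y differ". identical() always returns one TRUE/FALSE.
    classes_differ <- function(yhat, y) {
      !identical(class(yhat), class(y))
    }

    # The guard in MC() could then read, for example:
    # if (classes_differ(yhat, y)) {
    #   yhat <- as.numeric(yhat)
    #   y <- as.numeric(y)
    # }
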
Version: 0.1.4
Check: tests
Result: ERROR
Running ‘testthat.R’ [9s/14s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> library(testthat)
> library(OptimClassifier)
>
> test_check("OptimClassifier")
6 successful models have been tested
CP rmse success_rate ti_error tii_error Nnodes
1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
6 successful models have been tested
CP rmse success_rate ti_error tii_error Nnodes
1 0.002262443 0.3602883 0.8701923 0.04807692 0.08173077 17
2 0.009049774 0.3466876 0.8798077 0.03846154 0.08173077 15
3 0.011312217 0.3325311 0.8894231 0.04807692 0.06250000 11
4 0.013574661 0.3325311 0.8894231 0.05769231 0.05288462 9
5 0.022624434 0.3535534 0.8750000 0.03365385 0.09134615 3
6 0.665158371 0.6430097 0.5865385 0.41346154 0.00000000 1
Call:
rpart::rpart(formula = formula, data = training, na.action = rpart::na.rpart,
model = FALSE, x = FALSE, y = FALSE, cp = 0)
n= 482
CP nsplit rel error xerror xstd
1 0.66515837 0 1.0000000 1.0000000 0.04949948
2 0.02262443 1 0.3348416 0.3348416 0.03581213
3 0.01357466 4 0.2669683 0.3438914 0.03620379
4 0.01131222 5 0.2533937 0.3438914 0.03620379
Variable importance
X8 X10 X9 X7 X5 X14 X13 X6 X12 X3
36 16 16 13 10 6 2 1 1 1
Node number 1: 482 observations, complexity param=0.6651584
predicted class=0 expected loss=0.4585062 P(node) =1
class counts: 261 221
probabilities: 0.541 0.459
left son=2 (219 obs) right son=3 (263 obs)
Primary splits:
X8 splits as LR, improve=119.25990, (0 missing)
X10 < 2.5 to the left, improve= 52.61262, (0 missing)
X9 splits as LR, improve= 47.76803, (0 missing)
X14 < 396 to the left, improve= 32.46584, (0 missing)
X7 < 1.0425 to the left, improve= 31.53528, (0 missing)
Surrogate splits:
X9 splits as LR, agree=0.701, adj=0.342, (0 split)
X10 < 0.5 to the left, agree=0.701, adj=0.342, (0 split)
X7 < 0.435 to the left, agree=0.699, adj=0.338, (0 split)
X5 splits as LLLLRRRRRRRRRR, agree=0.641, adj=0.210, (0 split)
X14 < 127 to the left, agree=0.606, adj=0.132, (0 split)
Node number 2: 219 observations
predicted class=0 expected loss=0.07305936 P(node) =0.4543568
class counts: 203 16
probabilities: 0.927 0.073
Node number 3: 263 observations, complexity param=0.02262443
predicted class=1 expected loss=0.2205323 P(node) =0.5456432
class counts: 58 205
probabilities: 0.221 0.779
left son=6 (99 obs) right son=7 (164 obs)
Primary splits:
X9 splits as LR, improve=11.902240, (0 missing)
X10 < 0.5 to the left, improve=11.902240, (0 missing)
X14 < 216.5 to the left, improve=10.195680, (0 missing)
X5 splits as LLLLLLRRRRRRRR, improve= 7.627675, (0 missing)
X13 < 72.5 to the right, improve= 6.568284, (0 missing)
Surrogate splits:
X10 < 0.5 to the left, agree=1.000, adj=1.000, (0 split)
X14 < 3 to the left, agree=0.722, adj=0.263, (0 split)
X12 splits as LR-, agree=0.688, adj=0.172, (0 split)
X5 splits as RLRLRLRRRRRRRR, agree=0.665, adj=0.111, (0 split)
X7 < 0.27 to the left, agree=0.665, adj=0.111, (0 split)
Node number 6: 99 observations, complexity param=0.02262443
predicted class=1 expected loss=0.4141414 P(node) =0.2053942
class counts: 41 58
probabilities: 0.414 0.586
left son=12 (61 obs) right son=13 (38 obs)
Primary splits:
X13 < 111 to the right, improve=6.520991, (0 missing)
X5 splits as LLLLLLLLRRRRRR, improve=5.770954, (0 missing)
X6 splits as LRLRL-RR, improve=4.176207, (0 missing)
X14 < 388.5 to the left, improve=3.403553, (0 missing)
X3 < 2.52 to the left, improve=2.599301, (0 missing)
Surrogate splits:
X3 < 4.5625 to the left, agree=0.697, adj=0.211, (0 split)
X2 < 22.835 to the right, agree=0.677, adj=0.158, (0 split)
X5 splits as RRLLRLLLLRRLLL, agree=0.667, adj=0.132, (0 split)
X7 < 0.02 to the right, agree=0.667, adj=0.132, (0 split)
X6 splits as RRRLL-LR, agree=0.657, adj=0.105, (0 split)
Node number 7: 164 observations
predicted class=1 expected loss=0.1036585 P(node) =0.340249
class counts: 17 147
probabilities: 0.104 0.896
Node number 12: 61 observations, complexity param=0.02262443
predicted class=0 expected loss=0.442623 P(node) =0.126556
class counts: 34 27
probabilities: 0.557 0.443
left son=24 (49 obs) right son=25 (12 obs)
Primary splits:
X5 splits as LLLL-LLLRLRRLL, improve=4.5609460, (0 missing)
X14 < 126 to the left, improve=3.7856330, (0 missing)
X6 splits as L--LL-R-, improve=3.2211680, (0 missing)
X3 < 9.625 to the right, improve=1.0257110, (0 missing)
X2 < 24.5 to the right, improve=0.9861812, (0 missing)
Surrogate splits:
X14 < 2202.5 to the left, agree=0.836, adj=0.167, (0 split)
X3 < 11.3125 to the left, agree=0.820, adj=0.083, (0 split)
Node number 13: 38 observations
predicted class=1 expected loss=0.1842105 P(node) =0.07883817
class counts: 7 31
probabilities: 0.184 0.816
Node number 24: 49 observations, complexity param=0.01357466
predicted class=0 expected loss=0.3469388 P(node) =0.1016598
class counts: 32 17
probabilities: 0.653 0.347
left son=48 (34 obs) right son=49 (15 obs)
Primary splits:
X6 splits as L--LL-R-, improve=2.7687880, (0 missing)
X13 < 150 to the left, improve=2.3016430, (0 missing)
X14 < 126 to the left, improve=2.2040820, (0 missing)
X3 < 4.4575 to the right, improve=1.5322870, (0 missing)
X5 splits as LLRL-LRR-R--RR, improve=0.8850852, (0 missing)
Surrogate splits:
X2 < 50.415 to the left, agree=0.735, adj=0.133, (0 split)
X5 splits as LLLL-LLL-L--LR, agree=0.735, adj=0.133, (0 split)
X7 < 2.625 to the left, agree=0.735, adj=0.133, (0 split)
Node number 25: 12 observations
predicted class=1 expected loss=0.1666667 P(node) =0.02489627
class counts: 2 10
probabilities: 0.167 0.833
Node number 48: 34 observations
predicted class=0 expected loss=0.2352941 P(node) =0.07053942
class counts: 26 8
probabilities: 0.765 0.235
Node number 49: 15 observations
predicted class=1 expected loss=0.4 P(node) =0.03112033
class counts: 6 9
probabilities: 0.400 0.600
1 successful models have been tested
Model rmse success_rate ti_error tii_error
1 lda 0.3509821 0.8768116 0.03623188 0.08695652 0 1
0.5507246 0.4492754
7 successful models have been tested and 21 thresholds evaluated
Model rmse Threshold success_rate ti_error tii_error
1 binomial(logit) 0.3011696 1.00 0.5865385 0.4134615 0
2 binomial(probit) 0.3016317 1.00 0.5865385 0.4134615 0
3 binomial(cloglog) 0.3020186 1.00 0.5865385 0.4134615 0
4 poisson(log) 0.3032150 0.95 0.6634615 0.3365385 0
5 poisson(sqrt) 0.3063370 0.95 0.6490385 0.3509615 0
6 gaussian 0.3109044 0.95 0.6442308 0.3557692 0
7 poisson 0.3111360 1.00 0.6153846 0.3846154 0
── 1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10) ──────────
class(summary(modelFit)$coef) not equal to "matrix".
Lengths differ: 2 is not 1
3 successful models have been tested
Model rmse threshold success_rate ti_error tii_error
1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
3 successful models have been tested
Model rmse threshold success_rate ti_error tii_error
1 LM 0.3109044 1 0.5625000 0.009615385 0.4278846
2 SQRT.LM 0.4516999 1 0.5625000 0.009615385 0.4278846
3 LOG.LM 1.1762341 1 0.5865385 0.413461538 0.0000000
── 2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12) ────────────
class(summary(modelFit)$coef) not equal to "matrix".
Lengths differ: 2 is not 1
8 random variables have been tested
Random_Variable aic bic rmse threshold success_rate ti_error
1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
tii_error
1 0.01923077
2 0.05769231
3 0.08653846
4 0.07692308
5 0.09615385
6 0.07692308
7 0.07692308
8 0.07692308
8 random variables have been tested
Random_Variable aic bic rmse threshold success_rate ti_error
1 X5 495.8364 600.7961 1.023786 1.70 0.8942308 0.08653846
2 X1 497.6737 628.8733 1.035826 1.60 0.8942308 0.04807692
3 X6 514.7091 645.9087 1.019398 1.50 0.8653846 0.04807692
4 X11 524.0760 677.1422 1.016578 1.55 0.8750000 0.04807692
5 X4 531.7380 684.8042 1.017809 1.30 0.8653846 0.03846154
6 X9 534.2266 691.6661 1.016536 1.55 0.8750000 0.04807692
7 X12 536.3424 689.4086 1.016180 1.55 0.8750000 0.04807692
8 X8 537.4437 694.8833 1.016513 1.55 0.8750000 0.04807692
tii_error
1 0.01923077
2 0.05769231
3 0.08653846
4 0.07692308
5 0.09615385
6 0.07692308
7 0.07692308
8 0.07692308
Warning: Thresholds' criteria not selected. The success rate is defined as the default.
# weights: 37
initial value 314.113022
iter 10 value 305.860086
iter 20 value 305.236595
iter 30 value 305.199531
final value 305.199440
converged
# weights: 73
initial value 331.204920
iter 10 value 277.703814
iter 20 value 264.464583
iter 30 value 235.346147
iter 40 value 213.675019
iter 50 value 204.513413
iter 60 value 188.579524
iter 70 value 151.606445
iter 80 value 135.045373
iter 90 value 127.862716
iter 100 value 127.540380
final value 127.539685
converged
# weights: 109
initial value 313.022965
iter 10 value 293.213546
iter 20 value 276.666037
iter 30 value 272.763270
iter 40 value 271.860014
iter 50 value 271.852731
final value 271.852717
converged
# weights: 145
initial value 342.699156
iter 10 value 266.493159
iter 20 value 248.027580
iter 30 value 202.137254
iter 40 value 182.841633
iter 50 value 163.406077
iter 60 value 153.712411
iter 70 value 143.761097
iter 80 value 136.791530
iter 90 value 131.883986
iter 100 value 128.256325
iter 110 value 123.054770
iter 120 value 119.373904
iter 130 value 117.935121
iter 140 value 115.543916
iter 150 value 113.870705
iter 160 value 112.397158
iter 170 value 110.659059
iter 180 value 110.265167
iter 190 value 109.914964
iter 200 value 109.465033
iter 210 value 109.365831
iter 220 value 109.309421
iter 230 value 109.221952
iter 240 value 109.183033
iter 250 value 109.077007
iter 260 value 109.024442
iter 270 value 108.979083
iter 280 value 108.966573
iter 290 value 108.946679
iter 300 value 108.928539
iter 310 value 108.906541
iter 320 value 108.903732
iter 330 value 108.881879
iter 340 value 108.860613
iter 350 value 108.852071
iter 360 value 108.843541
iter 370 value 108.833519
iter 380 value 108.826552
iter 390 value 108.704253
iter 400 value 108.679474
iter 410 value 108.667142
iter 420 value 108.661706
iter 430 value 108.659019
iter 440 value 108.653729
iter 450 value 108.633311
iter 460 value 108.613541
iter 470 value 108.584588
iter 480 value 108.566776
iter 490 value 108.527453
iter 500 value 108.484387
final value 108.484387
stopped after 500 iterations
# weights: 181
initial value 431.964891
iter 10 value 268.804053
iter 20 value 257.332978
iter 30 value 249.853001
iter 40 value 185.335385
iter 50 value 153.782864
iter 60 value 139.784881
iter 70 value 126.922476
iter 80 value 123.252215
iter 90 value 120.355265
iter 100 value 116.841582
iter 110 value 112.118113
iter 120 value 105.917923
iter 130 value 105.028943
iter 140 value 105.024535
final value 105.024530
converged
# weights: 217
initial value 364.112551
iter 10 value 274.040840
iter 20 value 260.171164
iter 30 value 252.654051
iter 40 value 245.185007
iter 50 value 239.105792
iter 60 value 228.177441
iter 70 value 209.068307
iter 80 value 189.216192
iter 90 value 179.285289
iter 100 value 177.101089
iter 110 value 177.096487
final value 177.096448
converged
# weights: 253
initial value 325.866420
iter 10 value 269.878024
iter 20 value 260.328850
iter 30 value 255.407336
iter 40 value 248.367432
iter 50 value 201.019882
iter 60 value 154.795327
iter 70 value 142.061451
iter 80 value 129.177931
iter 90 value 113.635713
iter 100 value 109.226495
iter 110 value 102.067007
iter 120 value 100.120314
iter 130 value 99.785251
iter 140 value 99.503240
iter 150 value 99.238235
iter 160 value 99.194327
iter 170 value 99.004752
iter 180 value 98.488715
iter 190 value 97.866264
iter 200 value 97.553015
iter 210 value 97.406696
iter 220 value 97.262658
iter 230 value 97.069177
iter 240 value 96.926948
iter 250 value 96.846568
iter 260 value 96.575699
iter 270 value 96.307219
iter 280 value 96.246913
iter 290 value 96.193544
iter 300 value 96.112864
iter 310 value 96.061954
iter 320 value 95.737456
iter 330 value 95.016696
iter 340 value 94.608414
iter 350 value 94.331090
iter 360 value 94.229239
iter 370 value 94.123306
iter 380 value 94.070268
iter 390 value 93.966009
iter 400 value 93.756340
iter 410 value 93.566368
iter 420 value 93.510538
iter 430 value 93.426889
iter 440 value 93.342668
iter 450 value 93.293375
iter 460 value 93.232193
iter 470 value 93.158243
iter 480 value 93.026257
iter 490 value 92.808916
iter 500 value 92.697079
final value 92.697079
stopped after 500 iterations
# weights: 289
initial value 308.589551
iter 10 value 276.114138
iter 20 value 271.523670
iter 30 value 263.359821
iter 40 value 252.965028
iter 50 value 215.194350
iter 60 value 197.931441
iter 70 value 189.556282
iter 80 value 162.533108
iter 90 value 148.780324
iter 100 value 127.237270
iter 110 value 113.271915
iter 120 value 103.681517
iter 130 value 100.506900
iter 140 value 99.473416
iter 150 value 96.419897
iter 160 value 95.875123
iter 170 value 95.582754
iter 180 value 94.802815
iter 190 value 93.800813
iter 200 value 93.415047
iter 210 value 93.113811
iter 220 value 92.848904
iter 230 value 92.216786
iter 240 value 91.199472
iter 250 value 91.064264
iter 260 value 90.940703
iter 270 value 90.613176
iter 280 value 90.580067
iter 290 value 90.552349
iter 300 value 90.469581
iter 310 value 90.125040
iter 320 value 89.834299
iter 330 value 89.634271
iter 340 value 89.477507
iter 350 value 89.278901
iter 360 value 89.110192
iter 370 value 89.034938
iter 380 value 88.514948
iter 390 value 88.368879
iter 400 value 88.232517
iter 410 value 88.189377
iter 420 value 88.178188
iter 430 value 88.169094
iter 440 value 88.156563
iter 450 value 88.136171
iter 460 value 88.106613
iter 470 value 87.790970
iter 480 value 87.676820
iter 490 value 87.599647
iter 500 value 87.568524
final value 87.568524
stopped after 500 iterations
# weights: 325
initial value 387.634927
iter 10 value 268.437459
iter 20 value 261.669527
iter 30 value 238.226127
iter 40 value 218.985628
iter 50 value 203.340002
iter 60 value 197.855735
iter 70 value 192.165234
iter 80 value 191.161086
iter 90 value 191.147630
iter 100 value 191.146459
iter 110 value 191.146013
final value 191.145998
converged
9 models have been tested with differents levels of hidden layers
hiddenlayers rmse threshold success_rate ti_error tii_error
1 5 0.3276663 1 0.5950413 0.4049587 0.00000000
2 7 0.3331272 1 0.5950413 0.4049587 0.00000000
3 4 0.3552631 1 0.5950413 0.4049587 0.00000000
4 8 0.3645871 1 0.7851240 0.1983471 0.01652893
5 2 0.3706125 1 0.5950413 0.4049587 0.00000000
6 9 0.4330809 1 0.5950413 0.4049587 0.00000000
7 6 0.4491394 1 0.5950413 0.4049587 0.00000000
8 3 0.4793713 1 0.5950413 0.4049587 0.00000000
9 1 0.5048849 1 0.5950413 0.4049587 0.00000000
9 successful models have been tested
hiddenlayers rmse threshold success_rate ti_error tii_error
1 5 0.3276663 1 0.5950413 0.4049587 0.00000000
2 7 0.3331272 1 0.5950413 0.4049587 0.00000000
3 4 0.3552631 1 0.5950413 0.4049587 0.00000000
4 8 0.3645871 1 0.7851240 0.1983471 0.01652893
5 2 0.3706125 1 0.5950413 0.4049587 0.00000000
6 9 0.4330809 1 0.5950413 0.4049587 0.00000000
7 6 0.4491394 1 0.5950413 0.4049587 0.00000000
8 3 0.4793713 1 0.5950413 0.4049587 0.00000000
9 1 0.5048849 1 0.5950413 0.4049587 0.00000000
4 successful kernels have been tested
Kernels rmse threshold success_rate ti_error tii_error
1 radial 0.3351234 1.75 0.8745981 0.05787781 0.06752412
2 linear 0.3517729 1.20 0.8617363 0.03536977 0.10289389
3 sigmoid 0.4044390 1.30 0.8617363 0.04180064 0.09646302
4 polynomial 0.5285653 1.15 0.8360129 0.10932476 0.05466238
4 successful models have been tested
Kernels rmse threshold success_rate ti_error tii_error
1 radial 0.3351234 1.75 0.8745981 0.05787781 0.06752412
2 linear 0.3517729 1.20 0.8617363 0.03536977 0.10289389
3 sigmoid 0.4044390 1.30 0.8617363 0.04180064 0.09646302
4 polynomial 0.5285653 1.15 0.8360129 0.10932476 0.05466238
══ testthat results ═══════════════════════════════════════════════════════════
[ OK: 15 | SKIPPED: 0 | WARNINGS: 198 | FAILED: 2 ]
1. Failure: Test GLM with Australian Credit (@test-OptimGLM.R#10)
2. Failure: Test LM with Australian Credit (@test-OptimLM.R#12)
Error: testthat unit tests failed
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
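
The two failures on this flavor ('class(summary(modelFit)$coef) not equal to "matrix"', "Lengths differ: 2 is not 1") come from the related r-devel change in which matrices inherit from both "matrix" and "array", so class() returns a vector of length two and no longer equals the single string "matrix". A hedged sketch of an expectation that is robust to this change follows; it uses testthat, and the plain lm() fit is only a stand-in for the models fitted in the quoted tests.

    library(testthat)

    ## In r-devel / R >= 4.0.0, class(matrix(1)) is c("matrix", "array").
    ## Checking matrix-ness directly keeps the test valid on old and new R alike.
    coefs <- summary(lm(mpg ~ wt, data = mtcars))$coefficients  # a coefficient matrix
    expect_true(is.matrix(coefs))
    expect_true("matrix" %in% class(coefs))  # alternative if the class vector matters
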