Last updated on 2022-07-13 06:55:34 CEST.
Flavor | Version | Tinstall | Tcheck | Ttotal | Status | Flags |
---|---|---|---|---|---|---|
r-devel-linux-x86_64-debian-clang | 1.1.4 | 211.40 | 136.21 | 347.61 | ERROR | |
r-devel-linux-x86_64-debian-gcc | 1.1.4 | 180.27 | 104.96 | 285.23 | ERROR | |
r-devel-linux-x86_64-fedora-clang | 1.1.4 | | | 499.78 | ERROR | |
r-devel-linux-x86_64-fedora-gcc | 1.1.4 | | | 465.11 | ERROR | |
r-devel-windows-x86_64 | 1.1.4 | 163.00 | 191.00 | 354.00 | ERROR | |
r-patched-linux-x86_64 | 1.1.4 | 188.91 | 209.54 | 398.45 | OK | |
r-release-linux-x86_64 | 1.1.4 | 186.77 | 210.57 | 397.34 | OK | |
r-release-macos-arm64 | 1.1.4 | | | 142.00 | NOTE | |
r-release-macos-x86_64 | 1.1.4 | | | 225.00 | NOTE | |
r-release-windows-x86_64 | 1.1.4 | 151.00 | 267.00 | 418.00 | OK | |
r-oldrel-macos-arm64 | 1.1.4 | | | 138.00 | NOTE | |
r-oldrel-macos-x86_64 | 1.1.4 | | | 186.00 | NOTE | |
r-oldrel-windows-ix86+x86_64 | 1.1.4 | 438.00 | 422.00 | 860.00 | OK | |
Version: 1.1.4
Check: examples
Result: ERROR
Running examples in 'skpr-Ex.R' failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: eval_design
> ### Title: Calculate Power of an Experimental Design
> ### Aliases: eval_design
>
> ### ** Examples
>
> #Generating a simple 2^3 factorial to feed into our optimal design generation
> #of an 11-run design.
> factorial = expand.grid(A = c(1, -1), B = c(1, -1), C = c(1, -1))
>
> optdesign = gen_design(candidateset = factorial,
+ model= ~A + B + C, trials = 11, optimality = "D", repeats = 100)
>
> #Now evaluating that design (with default anticipated coefficients and an effectsize of 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9622638
2 A effect.power 0.9622638
3 B effect.power 0.9622638
4 C effect.power 0.9622638
5 (Intercept) parameter.power 0.9622638
6 A parameter.power 0.9622638
7 B parameter.power 0.9622638
8 C parameter.power 0.9622638
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Evaluating a subset of the design (which changes the power due to a different number of
> #degrees of freedom)
> eval_design(design = optdesign, model= ~A + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9659328
2 A effect.power 0.9659328
3 C effect.power 0.9659328
4 (Intercept) parameter.power 0.9659328
5 A parameter.power 0.9659328
6 C parameter.power 0.9659328
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + C
• Anticipated Coefficients = c(1, 1, 1)
>
> #We do not have to input the model if it's the same as the model used
> #during design generation. Here, we also use the default value for alpha (`0.05`)
> eval_design(optdesign)
parameter type power
1 (Intercept) effect.power 0.7991116
2 A effect.power 0.7991116
3 B effect.power 0.7991116
4 C effect.power 0.7991116
5 (Intercept) parameter.power 0.7991116
6 A parameter.power 0.7991116
7 B parameter.power 0.7991116
8 C parameter.power 0.7991116
============Evaluation Info============
• Alpha = 0.05 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Halving the signal-to-noise ratio by setting a different effectsize (default is 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2, effectsize = 1)
parameter type power
1 (Intercept) effect.power 0.6021367
2 A effect.power 0.6021367
3 B effect.power 0.6021367
4 C effect.power 0.6021367
5 (Intercept) parameter.power 0.6021367
6 A parameter.power 0.6021367
7 B parameter.power 0.6021367
8 C parameter.power 0.6021367
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(0.500, 0.500, 0.500, 0.500)
>
> #With 3+ level categorical factors, the choice of anticipated coefficients directly changes the
> #final power calculation. The most conservative power calculation sets all anticipated
> #coefficients in a factor to zero except for one. We can specify this
> #option with the "conservative" argument.
>
> factorialcoffee = expand.grid(cost = c(1, 2),
+ type = as.factor(c("Kona", "Colombian", "Ethiopian", "Sumatra")),
+ size = as.factor(c("Short", "Grande", "Venti")))
>
> designcoffee = gen_design(factorialcoffee,
+ ~cost + size + type, trials = 29, optimality = "D", repeats = 100)
Error in n - 1 : non-numeric argument to binary operator
Calls: gen_design
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
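A note on the effectsize behavior visible in the example output above: eval_design treats effectsize as the signal-to-noise ratio and, when no coefficients are supplied, sets every anticipated coefficient to effectsize/2. That is why the default effectsize of 2 reports c(1, 1, 1, 1) and effectsize = 1 reports c(0.500, 0.500, 0.500, 0.500). A minimal sketch of the equivalence, assuming the anticoef argument that eval_design documents for supplying coefficients directly:

    library(skpr)
    # Continuing the example above: both calls should report identical power,
    # since effectsize = 1 corresponds to anticipated coefficients of
    # effectsize/2 = 0.5 on every model term (including the intercept).
    # (anticoef is assumed here from the eval_design documentation.)
    eval_design(optdesign, model = ~A + B + C, alpha = 0.2, effectsize = 1)
    eval_design(optdesign, model = ~A + B + C, alpha = 0.2, anticoef = rep(0.5, 4))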
Version: 1.1.4
Check: tests
Result: ERROR
Running 'testthat.R' [14s/30s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> Sys.setenv("R_TESTS" = "")
>
> library(testthat)
>
> test_check("skpr")
Loading required package: skpr
Loading required package: shiny
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
== Skipped tests ===============================================================
* On CRAN (1)
== Failed tests ================================================================
-- Error (testExampleCode.R:197:3): eval_design example code runs without errors --
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
x
1. +-testthat::expect_silent(...) at testExampleCode.R:197:2
2. | \-testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. | +-testthat (local) .capture(...)
4. | | +-withr::with_output_sink(...)
5. | | | \-base::force(code)
6. | | +-base::withCallingHandlers(...)
7. | | \-base::withVisible(code)
8. | \-rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. \-skpr::gen_design(...) at testExampleCode.R:197:17
10. +-base::suppressWarnings(...)
11. | \-base::withCallingHandlers(...)
12. +-stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. \-stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. \-stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. \-skpr (local) value(levels(x))
16. \-base::matrix(nrow = n - 1, ncol = n)
-- Error (testExampleCode.R:241:3): eval_design_mc example code runs without errors --
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
x
1. +-testthat::expect_silent(...) at testExampleCode.R:241:2
2. | \-testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. | +-testthat (local) .capture(...)
4. | | +-withr::with_output_sink(...)
5. | | | \-base::force(code)
6. | | +-base::withCallingHandlers(...)
7. | | \-base::withVisible(code)
8. | \-rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. \-skpr::gen_design(...) at testExampleCode.R:242:4
10. +-base::suppressWarnings(...)
11. | \-base::withCallingHandlers(...)
12. +-stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. \-stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. \-stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. \-skpr (local) value(levels(x))
16. \-base::matrix(nrow = n - 1, ncol = n)
-- Error (testPlottingFunctions.R:12:3): plot_correlations works as intended ---
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
x
1. \-skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:12:2
2. +-base::suppressWarnings(...)
3. | \-base::withCallingHandlers(...)
4. +-stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. \-stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. \-stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. \-skpr (local) value(levels(x))
8. \-base::matrix(nrow = n - 1, ncol = n)
-- Error (testPlottingFunctions.R:33:3): plot_fds works as intended ------------
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
x
1. \-skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:33:2
2. +-base::suppressWarnings(...)
3. | \-base::withCallingHandlers(...)
4. +-stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. \-stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. \-stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. \-skpr (local) value(levels(x))
8. \-base::matrix(nrow = n - 1, ncol = n)
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
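The backtraces above localize the failure: gen_design builds its model matrix with a custom contrasts list, base R's `contrasts<-` invokes a function-valued contrast as value(levels(x)), and the local function then evaluates matrix(nrow = n - 1, ncol = n) with n bound to the character vector of level names, so n - 1 is non-numeric. A minimal sketch reproducing the failure mode, with a defensive variant (broken_contr and safe_contr are hypothetical names, not skpr's actual source):

    # `contrasts<-` calls a function value as value(levels(x)), so n arrives
    # as the character vector of level names, not as the level count.
    broken_contr = function(n) {
      matrix(nrow = n - 1, ncol = n)  # mirrors the failing call in the backtrace
    }
    x = factor(c("Short", "Grande", "Venti"))
    try(contrasts(x) <- broken_contr)
    #> Error in n - 1 : non-numeric argument to binary operator

    # Base contrast functions such as contr.sum accept either a level count
    # or a vector of level names; coercing restores that behavior:
    safe_contr = function(n) {
      if (!is.numeric(n)) n = length(n)
      contr.sum(n)  # a valid nlevels x (nlevels - 1) contrast matrix
    }
    contrasts(x) <- safe_contr  # succeeds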
Version: 1.1.4
Check: examples
Result: ERROR
Running examples in ‘skpr-Ex.R’ failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: eval_design
> ### Title: Calculate Power of an Experimental Design
> ### Aliases: eval_design
>
> ### ** Examples
>
> #Generating a simple 2^3 factorial to feed into our optimal design generation
> #of an 11-run design.
> factorial = expand.grid(A = c(1, -1), B = c(1, -1), C = c(1, -1))
>
> optdesign = gen_design(candidateset = factorial,
+ model= ~A + B + C, trials = 11, optimality = "D", repeats = 100)
>
> #Now evaluating that design (with default anticipated coefficients and an effectsize of 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9622638
2 A effect.power 0.9622638
3 B effect.power 0.9622638
4 C effect.power 0.9622638
5 (Intercept) parameter.power 0.9622638
6 A parameter.power 0.9622638
7 B parameter.power 0.9622638
8 C parameter.power 0.9622638
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Evaluating a subset of the design (which changes the power due to a different number of
> #degrees of freedom)
> eval_design(design = optdesign, model= ~A + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9659328
2 A effect.power 0.9659328
3 C effect.power 0.9659328
4 (Intercept) parameter.power 0.9659328
5 A parameter.power 0.9659328
6 C parameter.power 0.9659328
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + C
• Anticipated Coefficients = c(1, 1, 1)
>
> #We do not have to input the model if it's the same as the model used
> #during design generation. Here, we also use the default value for alpha (`0.05`)
> eval_design(optdesign)
parameter type power
1 (Intercept) effect.power 0.7991116
2 A effect.power 0.7991116
3 B effect.power 0.7991116
4 C effect.power 0.7991116
5 (Intercept) parameter.power 0.7991116
6 A parameter.power 0.7991116
7 B parameter.power 0.7991116
8 C parameter.power 0.7991116
============Evaluation Info============
• Alpha = 0.05 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Halving the signal-to-noise ratio by setting a different effectsize (default is 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2, effectsize = 1)
parameter type power
1 (Intercept) effect.power 0.6021367
2 A effect.power 0.6021367
3 B effect.power 0.6021367
4 C effect.power 0.6021367
5 (Intercept) parameter.power 0.6021367
6 A parameter.power 0.6021367
7 B parameter.power 0.6021367
8 C parameter.power 0.6021367
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(0.500, 0.500, 0.500, 0.500)
>
> #With 3+ level categorical factors, the choice of anticipated coefficients directly changes the
> #final power calculation. The most conservative power calculation sets all anticipated
> #coefficients in a factor to zero except for one. We can specify this
> #option with the "conservative" argument.
>
> factorialcoffee = expand.grid(cost = c(1, 2),
+ type = as.factor(c("Kona", "Colombian", "Ethiopian", "Sumatra")),
+ size = as.factor(c("Short", "Grande", "Venti")))
>
> designcoffee = gen_design(factorialcoffee,
+ ~cost + size + type, trials = 29, optimality = "D", repeats = 100)
Error in n - 1 : non-numeric argument to binary operator
Calls: gen_design
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
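For reference, the interrupted coffee example was building toward the conservative power calculation described in its comments. A sketch of the intended continuation, assuming a working installation and the "conservative" argument referenced in those comments (the final call is inferred, not taken from the log):

    # Hypothetical continuation of the truncated example:
    factorialcoffee = expand.grid(cost = c(1, 2),
      type = as.factor(c("Kona", "Colombian", "Ethiopian", "Sumatra")),
      size = as.factor(c("Short", "Grande", "Venti")))
    designcoffee = gen_design(factorialcoffee, ~cost + size + type,
      trials = 29, optimality = "D", repeats = 100)
    # Per the comments above, conservative = TRUE zeroes all but one
    # anticipated coefficient within each categorical factor, giving the
    # most pessimistic power estimate:
    eval_design(designcoffee, ~cost + size + type, alpha = 0.05,
      conservative = TRUE)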
Version: 1.1.4
Check: tests
Result: ERROR
Running ‘testthat.R’ [10s/30s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> Sys.setenv("R_TESTS" = "")
>
> library(testthat)
>
> test_check("skpr")
Loading required package: skpr
Loading required package: shiny
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
══ Skipped tests ═══════════════════════════════════════════════════════════════
• On CRAN (1)
══ Failed tests ════════════════════════════════════════════════════════════════
── Error (testExampleCode.R:197:3): eval_design example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:197:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:197:17
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testExampleCode.R:241:3): eval_design_mc example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:241:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:242:4
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:12:3): plot_correlations works as intended ───
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:12:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:33:3): plot_fds works as intended ────────────
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:33:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
Version: 1.1.4
Check: installed package size
Result: NOTE
installed size is 33.7Mb
sub-directories of 1Mb or more:
libs 32.1Mb
Flavors: r-devel-linux-x86_64-fedora-clang, r-release-macos-arm64, r-release-macos-x86_64, r-oldrel-macos-arm64, r-oldrel-macos-x86_64
Version: 1.1.4
Check: examples
Result: ERROR
Running examples in ‘skpr-Ex.R’ failed
The error most likely occurred in:
> ### Name: eval_design
> ### Title: Calculate Power of an Experimental Design
> ### Aliases: eval_design
>
> ### ** Examples
>
> #Generating a simple 2^3 factorial to feed into our optimal design generation
> #of an 11-run design.
> factorial = expand.grid(A = c(1, -1), B = c(1, -1), C = c(1, -1))
>
> optdesign = gen_design(candidateset = factorial,
+ model= ~A + B + C, trials = 11, optimality = "D", repeats = 100)
>
> #Now evaluating that design (with default anticipated coefficients and an effectsize of 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9622638
2 A effect.power 0.9622638
3 B effect.power 0.9622638
4 C effect.power 0.9622638
5 (Intercept) parameter.power 0.9622638
6 A parameter.power 0.9622638
7 B parameter.power 0.9622638
8 C parameter.power 0.9622638
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Evaluating a subset of the design (which changes the power due to a different number of
> #degrees of freedom)
> eval_design(design = optdesign, model= ~A + C, alpha = 0.2)
parameter type power
1 (Intercept) effect.power 0.9659328
2 A effect.power 0.9659328
3 C effect.power 0.9659328
4 (Intercept) parameter.power 0.9659328
5 A parameter.power 0.9659328
6 C parameter.power 0.9659328
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + C
• Anticipated Coefficients = c(1, 1, 1)
>
> #We do not have to input the model if it's the same as the model used
> #during design generation. Here, we also use the default value for alpha (`0.05`)
> eval_design(optdesign)
parameter type power
1 (Intercept) effect.power 0.7991116
2 A effect.power 0.7991116
3 B effect.power 0.7991116
4 C effect.power 0.7991116
5 (Intercept) parameter.power 0.7991116
6 A parameter.power 0.7991116
7 B parameter.power 0.7991116
8 C parameter.power 0.7991116
============Evaluation Info============
• Alpha = 0.05 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(1, 1, 1, 1)
>
> #Halving the signal-to-noise ratio by setting a different effectsize (default is 2):
> eval_design(design = optdesign, model= ~A + B + C, alpha = 0.2, effectsize = 1)
parameter type power
1 (Intercept) effect.power 0.6021367
2 A effect.power 0.6021367
3 B effect.power 0.6021367
4 C effect.power 0.6021367
5 (Intercept) parameter.power 0.6021367
6 A parameter.power 0.6021367
7 B parameter.power 0.6021367
8 C parameter.power 0.6021367
============Evaluation Info============
• Alpha = 0.2 • Trials = 11 • Blocked = FALSE
• Evaluating Model = ~A + B + C
• Anticipated Coefficients = c(0.500, 0.500, 0.500, 0.500)
>
> #With 3+ level categorical factors, the choice of anticipated coefficients directly changes the
> #final power calculation. The most conservative power calculation sets all anticipated
> #coefficients in a factor to zero except for one. We can specify this
> #option with the "conservative" argument.
>
> factorialcoffee = expand.grid(cost = c(1, 2),
+ type = as.factor(c("Kona", "Colombian", "Ethiopian", "Sumatra")),
+ size = as.factor(c("Short", "Grande", "Venti")))
>
> designcoffee = gen_design(factorialcoffee,
+ ~cost + size + type, trials = 29, optimality = "D", repeats = 100)
Error in n - 1 : non-numeric argument to binary operator
Calls: gen_design
Execution halted
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc, r-devel-windows-x86_64
Version: 1.1.4
Check: tests
Result: ERROR
Running ‘testthat.R’ [16s/37s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> Sys.setenv("R_TESTS" = "")
>
> library(testthat)
>
> test_check("skpr")
Loading required package: skpr
Loading required package: shiny
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
══ Skipped tests ═══════════════════════════════════════════════════════════════
• On CRAN (1)
══ Failed tests ════════════════════════════════════════════════════════════════
── Error (testExampleCode.R:197:3): eval_design example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:197:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:197:17
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testExampleCode.R:241:3): eval_design_mc example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:241:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:242:4
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:12:3): plot_correlations works as intended ───
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:12:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:33:3): plot_fds works as intended ────────────
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:33:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
Version: 1.1.4
Check: tests
Result: ERROR
Running ‘testthat.R’ [16s/42s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> Sys.setenv("R_TESTS" = "")
>
> library(testthat)
>
> test_check("skpr")
Loading required package: skpr
Loading required package: shiny
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
══ Skipped tests ═══════════════════════════════════════════════════════════════
• On CRAN (1)
══ Failed tests ════════════════════════════════════════════════════════════════
── Error (testExampleCode.R:197:3): eval_design example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:197:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:197:17
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testExampleCode.R:241:3): eval_design_mc example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:241:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:242:4
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:12:3): plot_correlations works as intended ───
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:12:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:33:3): plot_fds works as intended ────────────
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:33:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
Error: Test failures
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
Version: 1.1.4
Check: tests
Result: ERROR
Running 'testthat.R' [25s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> Sys.setenv("R_TESTS" = "")
>
> library(testthat)
>
> test_check("skpr")
Loading required package: skpr
Loading required package: shiny
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
══ Skipped tests ═══════════════════════════════════════════════════════════════
• On CRAN (1)
══ Failed tests ════════════════════════════════════════════════════════════════
── Error (testExampleCode.R:197:3): eval_design example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:197:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:197:17
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testExampleCode.R:241:3): eval_design_mc example code runs without errors ──
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. ├─testthat::expect_silent(...) at testExampleCode.R:241:2
2. │ └─testthat:::quasi_capture(enquo(object), NULL, evaluate_promise)
3. │ ├─testthat (local) .capture(...)
4. │ │ ├─withr::with_output_sink(...)
5. │ │ │ └─base::force(code)
6. │ │ ├─base::withCallingHandlers(...)
7. │ │ └─base::withVisible(code)
8. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
9. └─skpr::gen_design(...) at testExampleCode.R:242:4
10. ├─base::suppressWarnings(...)
11. │ └─base::withCallingHandlers(...)
12. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
13. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
14. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
15. └─skpr (local) value(levels(x))
16. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:12:3): plot_correlations works as intended ───
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:12:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
── Error (testPlottingFunctions.R:33:3): plot_fds works as intended ────────────
Error in `n - 1`: non-numeric argument to binary operator
Backtrace:
▆
1. └─skpr::gen_design(candlist, ~., 23) at testPlottingFunctions.R:33:2
2. ├─base::suppressWarnings(...)
3. │ └─base::withCallingHandlers(...)
4. ├─stats::model.matrix(model, candidatesetnormalized, contrasts.arg = contrastslist)
5. └─stats::model.matrix.default(model, candidatesetnormalized, contrasts.arg = contrastslist)
6. └─stats::`contrasts<-`(`*tmp*`, value = contrasts.arg[[nn]])
7. └─skpr (local) value(levels(x))
8. └─base::matrix(nrow = n - 1, ncol = n)
[ FAIL 4 | WARN 0 | SKIP 1 | PASS 338 ]
Error: Test failures
Execution halted
Flavor: r-devel-windows-x86_64