CRAN Package Check Results for Package ck37r

Last updated on 2020-02-20 05:48:17 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  1.0.3       15.46  220.83  236.29  ERROR
r-devel-linux-x86_64-debian-gcc    1.0.3       14.44  168.14  182.58  ERROR
r-devel-linux-x86_64-fedora-clang  1.0.3                      283.21  ERROR
r-devel-linux-x86_64-fedora-gcc    1.0.3                      284.17  ERROR
r-devel-windows-ix86+x86_64        1.0.3       46.00  208.00  254.00  OK
r-patched-linux-x86_64             1.0.3       12.22  194.14  206.36  ERROR
r-patched-solaris-x86              1.0.3                      344.60  ERROR
r-release-linux-x86_64             1.0.3       12.30  196.99  209.29  ERROR
r-release-windows-ix86+x86_64      1.0.3       28.00  194.00  222.00  OK
r-release-osx-x86_64               1.0.3                              ERROR
r-oldrel-windows-ix86+x86_64       1.0.3       21.00  189.00  210.00  OK
r-oldrel-osx-x86_64                1.0.3                              ERROR
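
The summary above can also be pulled programmatically from within R via the CRAN check results interface in the base 'tools' package. A minimal sketch (it assumes only that the returned data frame has a Package column; further column selection is left to the reader):

     # Fetch the live CRAN check summary and keep the rows for ck37r.
     results <- tools::CRAN_check_results()
     results[results$Package == "ck37r", ]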

Check Details

Version: 1.0.3
Check: tests
Result: ERROR
     Running 'testthat.R' [70s/85s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     -- 1. Error: (unknown) (@test-gen_superlearner.R#49) --------------------------
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1)
     OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1, mixed mode, sharing)
     [1] "/home/hornik/tmp/R.check/r-devel-clang/Work/PKGS/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in install.packages(pkgs[!result], ...) :
     unable to install packages
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.171 0.004 0.253
    
     $train
     user system elapsed
     0.137 0.004 0.217
    
     $predict
     user system elapsed
     0.029 0.000 0.029
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.164 0.000 0.165
    
     $train
     user system elapsed
     0.129 0.000 0.130
    
     $predict
     user system elapsed
     0.029 0.000 0.028
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.186 0.000 0.282
    
     $train
     user system elapsed
     0.138 0.000 0.218
    
     $predict
     user system elapsed
     0.042 0.000 0.042
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.158 0.000 0.159
    
     $train
     user system elapsed
     0.125 0.000 0.127
    
     $predict
     user system elapsed
     0.027 0.000 0.027
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     == testthat results ===========================================================
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
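
The single failure on this flavor is raised from ck37r::parallelize() when called with type = "multicore": the "object 'cl' not found" message suggests the function touches a cluster object cl that is only created on the snow/cluster code path, though that reading is an inference from the backtrace, not from the package source. A minimal sketch for reproducing the failure interactively, assuming ck37r 1.0.3 and the exact arguments shown in the backtrace:

     library(ck37r)

     # Same call as test-gen_superlearner.R line 49, wrapped in tryCatch so
     # the error message can be inspected instead of halting the session.
     err <- tryCatch(
       parallelize(type = "multicore", max_cores = 2, verbose = 2),
       error = function(e) conditionMessage(e))
     err
     # On the CRAN check machines this returns "object 'cl' not found".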

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [55s/86s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     ── 1. Error: (unknown) (@test-gen_superlearner.R#49) ──────────────────────────
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1)
     OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1, mixed mode, sharing)
     [1] "/home/hornik/tmp/R.check/r-devel-gcc/Work/PKGS/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in install.packages(pkgs[!result], ...) :
     unable to install packages
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.160 0.004 0.171
    
     $train
     user system elapsed
     0.109 0.004 0.118
    
     $predict
     user system elapsed
     0.032 0.000 0.033
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.112 0.003 0.119
    
     $train
     user system elapsed
     0.088 0.003 0.094
    
     $predict
     user system elapsed
     0.019 0.000 0.020
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.126 0.000 0.253
    
     $train
     user system elapsed
     0.101 0.000 0.213
    
     $predict
     user system elapsed
     0.020 0.000 0.036
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.112 0.000 0.169
    
     $train
     user system elapsed
     0.088 0.000 0.145
    
     $predict
     user system elapsed
     0.019 0.000 0.019
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 1.0.3
Check: dependencies in R code
Result: NOTE
    Namespace in Imports field not imported from: ‘checkmate’
     All declared Imports should be used.
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc, r-patched-solaris-x86, r-release-osx-x86_64, r-oldrel-osx-x86_64
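
This NOTE means DESCRIPTION lists checkmate in Imports while the package's R code never calls it. The usual remedies are to drop checkmate from Imports or to actually use it for argument validation; a purely illustrative sketch of the latter (the helper function and its specific checks are assumptions, not ck37r's actual code):

     # Calling any checkmate function from code under R/ is enough to satisfy
     # "All declared Imports should be used."
     check_task_args <- function(data, max_cores = 1L) {
       checkmate::assert_data_frame(data, min.rows = 1)
       checkmate::assert_int(max_cores, lower = 1)
       invisible(TRUE)
     }

     check_task_args(mtcars, max_cores = 2L)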

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [84s/372s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 12
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     ── 1. Error: (unknown) (@test-gen_superlearner.R#49) ──────────────────────────
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment 18.9 (build 11.0.6+10)
     OpenJDK 64-Bit Server VM 18.9 (build 11.0.6+10, mixed mode, sharing)
     [1] "/data/gannet/ripley/R/packages/tests-clang/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in contrib.url(repos, type) :
     trying to use CRAN without setting a mirror
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 12
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 12
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 12
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.247 0.021 1.232
    
     $train
     user system elapsed
     0.188 0.017 0.923
    
     $predict
     user system elapsed
     0.052 0.004 0.279
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.196 0.004 0.849
    
     $train
     user system elapsed
     0.141 0.003 0.626
    
     $predict
     user system elapsed
     0.047 0.001 0.192
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.217 0.000 0.866
    
     $train
     user system elapsed
     0.162 0.000 0.635
    
     $predict
     user system elapsed
     0.036 0.000 0.164
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.195 0.000 0.686
    
     $train
     user system elapsed
     0.156 0.000 0.560
    
     $predict
     user system elapsed
     0.033 0.000 0.103
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [84s/352s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 12
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     ── 1. Error: (unknown) (@test-gen_superlearner.R#49) ──────────────────────────
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment 18.9 (build 11.0.6+10)
     OpenJDK 64-Bit Server VM 18.9 (build 11.0.6+10, mixed mode, sharing)
     [1] "/data/gannet/ripley/R/packages/tests-devel/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in contrib.url(repos, type) :
     trying to use CRAN without setting a mirror
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 12
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 12
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 12
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.232 0.001 0.913
    
     $train
     user system elapsed
     0.192 0.001 0.759
    
     $predict
     user system elapsed
     0.033 0.000 0.114
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.185 0.001 0.671
    
     $train
     user system elapsed
     0.132 0.000 0.530
    
     $predict
     user system elapsed
     0.045 0.001 0.129
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.193 0.000 0.722
    
     $train
     user system elapsed
     0.153 0.000 0.562
    
     $predict
     user system elapsed
     0.034 0.000 0.129
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.179 0.000 0.676
    
     $train
     user system elapsed
     0.141 0.000 0.552
    
     $predict
     user system elapsed
     0.031 0.000 0.105
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
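Editor's note: this flavor shows the full backtrace for the failure, namely `ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)` stopping with `object 'cl' not found` right after "doPar backend registered:" prints an empty backend name. That symptom typically means a cluster handle is assigned in only one branch of the setup code but referenced unconditionally afterwards. A self-contained sketch of the failure mode and one conventional fix (this is an illustration, not ck37r's source):

    # Illustrative reproduction of the failure mode (not ck37r's source):
    # 'cl' is only assigned in the "snow" branch, so the "multicore" branch
    # later fails with "object 'cl' not found".
    setup_backend_buggy <- function(type = c("multicore", "snow"), workers = 2) {
      type <- match.arg(type)
      if (type == "snow") {
        cl <- parallel::makeCluster(workers)
        doParallel::registerDoParallel(cl)
      } else {
        doParallel::registerDoParallel(cores = workers)
      }
      invisible(cl)  # errors when type == "multicore"
    }

    # One conventional fix: initialize 'cl' before branching and return NULL
    # when no cluster object exists.
    setup_backend_fixed <- function(type = c("multicore", "snow"), workers = 2) {
      type <- match.arg(type)
      cl <- NULL
      if (type == "snow") {
        cl <- parallel::makeCluster(workers)
        doParallel::registerDoParallel(cl)
      } else {
        doParallel::registerDoParallel(cores = workers)
      }
      invisible(cl)
    }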

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [67s/84s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     ── 1. Error: (unknown) (@test-gen_superlearner.R#49) ──────────────────────────
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1)
     OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1, mixed mode, sharing)
     [1] "/home/hornik/tmp/R.check/r-patched-gcc/Work/PKGS/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in install.packages(pkgs[!result], ...) :
     unable to install packages
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.188 0.000 0.261
    
     $train
     user system elapsed
     0.156 0.000 0.230
    
     $predict
     user system elapsed
     0.027 0.000 0.026
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.144 0.000 0.145
    
     $train
     user system elapsed
     0.114 0.000 0.114
    
     $predict
     user system elapsed
     0.024 0.000 0.025
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.15 0.00 0.15
    
     $train
     user system elapsed
     0.118 0.000 0.119
    
     $predict
     user system elapsed
     0.027 0.000 0.026
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.152 0.004 0.156
    
     $train
     user system elapsed
     0.120 0.004 0.124
    
     $predict
     user system elapsed
     0.026 0.000 0.025
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-linux-x86_64
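Editor's note: apart from the failing test, this block also logs `install.packages(c("ck37_blah123"))` aborting with "unable to install packages" (other flavors report "trying to use CRAN without setting a mirror"). The deliberately nonexistent package name suggests the test is exercising the package's missing-dependency reporting, and the error arises because `install.packages()` is actually invoked on a check machine with no configured mirror or writable library. A hedged sketch of the usual defensive pattern, which only reports the needed install command instead of running it (not ck37r's implementation):

    # Illustrative sketch (not ck37r's implementation): report missing
    # suggested packages without calling install.packages() during checks.
    check_suggested <- function(pkgs) {
      not_installed <- pkgs[!vapply(pkgs, requireNamespace, logical(1),
                                    quietly = TRUE)]
      if (length(not_installed) > 0) {
        message("These packages need to be installed: ",
                paste(not_installed, collapse = ", "))
        message('install.packages(c("',
                paste(not_installed, collapse = '", "'), '"))')
      }
      invisible(not_installed)
    }

    check_suggested(c("ck37_blah123"))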

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [102s/120s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 1
     starting worker pid=27631 on localhost:11789 at 07:54:02.085
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doParallelSNOW
     Workers enabled: 1
     Running SL via multicore
    
     Call:
     SuperLearner::mcSuperLearner(Y = ..1, X = ..2, family = ..3, SL.library = ..5,
     cvControl = ..4)
    
    
     Risk Coef
     SL.mean_All 0.09000000 1
     SL.glm_All 0.09460273 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl), parallel = parallel)
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.091309 0.017239 0.012346 0.21528
     Discrete SL 0.091173 0.017227 0.012346 0.21528
     SL.mean_All 0.091173 0.017227 0.012346 0.21528
     SL.glm_All 0.092951 0.017417 0.013176 0.22050
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     ── 1. Error: (unknown) (@test-impute_missing_values.R#79) ─────────────────────
     Your java is not supported: java version "1.7.0_65"
     Please download the latest Java SE JDK from the following URL:
     https://www.oracle.com/technetwork/java/javase/downloads/index.html
     Backtrace:
     1. ck37r::impute_missing_values(...)
     7. h2o::h2o.init(nthreads = -1)
     8. h2o:::.h2o.startJar(...)
    
     [1] "/home/ripley/R/packages/tests32/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in contrib.url(repos, type) :
     trying to use CRAN without setting a mirror
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 1
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 1
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 1
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.419 0.003 0.465
    
     $train
     user system elapsed
     0.306 0.003 0.345
    
     $predict
     user system elapsed
     0.099 0.000 0.104
    
    
     Q object size: 3.1 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.372 0.011 0.531
    
     $train
     user system elapsed
     0.265 0.009 0.367
    
     $predict
     user system elapsed
     0.091 0.001 0.115
    
    
     g object size: 3 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.382 0.003 0.466
    
     $train
     user system elapsed
     0.295 0.001 0.347
    
     $predict
     user system elapsed
     0.072 0.001 0.104
    
    
     Q object size: 3.1 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.360 0.003 0.489
    
     $train
     user system elapsed
     0.279 0.002 0.374
    
     $predict
     user system elapsed
     0.067 0.001 0.067
    
    
     g object size: 3 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.2 Mb
     0.5 Mb
     6.6 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-impute_missing_values.R#79)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-solaris-x86
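Editor's note: the Solaris failure is different from the Linux ones. Here the error comes from test-impute_missing_values.R line 79, whose backtrace ends in `h2o::h2o.init(nthreads = -1)` refusing to start because the system Java (1.7.0_65) is too old. A hedged testthat sketch of guarding an h2o-dependent code path so that an unusable Java/h2o installation yields a skip rather than an ERROR (illustration only, not the package's actual test file):

    # Skip h2o-based imputation tests when h2o or a supported Java is
    # unavailable (illustrative sketch only).
    library(testthat)

    test_that("h2o-based imputation runs when h2o is usable", {
      skip_on_cran()
      skip_if_not_installed("h2o")

      started <- tryCatch({
        h2o::h2o.init(nthreads = -1)  # fails on unsupported Java versions
        TRUE
      }, error = function(e) FALSE)
      skip_if_not(started, "h2o could not be started (e.g., unsupported Java)")

      # ... run the h2o-dependent imputation under test here ...
      expect_true(started)
    })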

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [67s/84s]
    Running the tests in ‘tests/testthat.R’ failed.
    Complete output:
     > library(ck37r)
     >
     > # Only run tests if testthat package is installed.
     > # This is in compliance with "Writing R Extensions" §1.1.3.1.
     > if (requireNamespace("testthat", quietly = TRUE)) {
     + testthat::test_check("ck37r", reporter = "check")
     + }
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     'data.frame': 366 obs. of 3 variables:
     $ month : num 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: num 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : num 4 5 6 7 1 2 3 4 5 6 ...
     Skipping month - already a factor.
     Converting day_of_month from numeric to factor. Unique vals: 31
     Converting day_of_week from numeric to factor. Unique vals: 7
     Skipping blah123blah - was not in the data frame.
     'data.frame': 366 obs. of 3 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month: Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     'data.frame': 366 obs. of 13 variables:
     $ month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
     $ day_of_month : Factor w/ 31 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
     $ day_of_week : Factor w/ 7 levels "1","2","3","4",..: 4 5 6 7 1 2 3 4 5 6 ...
     $ ozone_reading : num 3 3 3 5 5 6 4 4 6 7 ...
     $ pressure_height : num 5480 5660 5710 5700 5760 5720 5790 5790 5700 5700 ...
     $ wind_speed : num 8 6 4 3 3 4 6 3 3 3 ...
     $ humidity : num 20 NA 28 37 51 69 19 25 73 59 ...
     $ temperature_sandburg : num NA 38 40 45 54 35 45 55 41 44 ...
     $ temperature_elmonte : num NA NA NA NA 45.3 ...
     $ inversion_base_height: num 5000 NA 2693 590 1450 ...
     $ pressure_gradient : num -15 -14 -25 -24 25 15 -33 -28 23 -2 ...
     $ inversion_temperature: num 30.6 NA 47.7 55 57 ...
     $ visibility : num 200 300 250 100 60 60 100 250 120 120 ...
     Converting factors (4): month, day_of_month, day_of_week, single_level
     Converting month from a factor to a matrix (12 levels).
     : month_2 month_3 month_4 month_5 month_6 month_7 month_8 month_9 month_10 month_11 month_12
     Converting day_of_month from a factor to a matrix (31 levels).
     : day_of_month_2 day_of_month_3 day_of_month_4 day_of_month_5 day_of_month_6 day_of_month_7 day_of_month_8 day_of_month_9 day_of_month_10 day_of_month_11 day_of_month_12 day_of_month_13 day_of_month_14 day_of_month_15 day_of_month_16 day_of_month_17 day_of_month_18 day_of_month_19 day_of_month_20 day_of_month_21 day_of_month_22 day_of_month_23 day_of_month_24 day_of_month_25 day_of_month_26 day_of_month_27 day_of_month_28 day_of_month_29 day_of_month_30 day_of_month_31
     Converting day_of_week from a factor to a matrix (7 levels).
     : day_of_week_2 day_of_week_3 day_of_week_4 day_of_week_5 day_of_week_6 day_of_week_7
     Converting single_level from a factor to a matrix (1 levels).
     Skipping single_level because it has only 1 level.
     Combining factor matrices into a data frame.
    
     Call:
     sl$sl_fn(Y = Boston$chas, X = X[, 1:3], family = binomial(), SL.library = c("SL.mean",
     "SL.glm"), cvControl = list(V = 2, stratifyCV = T))
    
    
     Risk Coef
     SL.mean_All 0.0900000 1
     SL.glm_All 0.0936156 0
    
     Call:
     SuperLearner::CV.SuperLearner(Y = ..1, X = ..2, V = outer_cv_folds, family = ..3,
     SL.library = ..4, innerCvControl = list(cvControl))
    
     Risk is based on: Mean Squared Error
    
     All risk estimates are based on V = 10
    
     Algorithm Ave se Min Max
     Super Learner 0.090706 0.017150 0.050586 0.17235
     Discrete SL 0.090658 0.017141 0.050586 0.17235
     SL.mean_All 0.090469 0.017099 0.050586 0.17235
     SL.glm_All 0.092702 0.017418 0.050838 0.17775
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered:
     Workers enabled: 1
     ── 1. Error: (unknown) (@test-gen_superlearner.R#49) ──────────────────────────
     object 'cl' not found
     Backtrace:
     1. ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2)
    
     Found 0 text files in "inst/extdata" to import.
     Found 0 text files in "inst/extdata/../" to import.
     'data.frame': 768 obs. of 10 variables:
     $ pregnant: Factor w/ 17 levels "0","1","2","3",..: NA NA NA 2 1 6 4 11 3 9 ...
     $ glucose : num 148 85 183 89 137 116 78 115 197 125 ...
     $ pressure: num 72 66 64 66 40 74 50 NA 70 96 ...
     $ triceps : num 35 29 NA 23 35 NA 32 NA 45 NA ...
     $ insulin : num NA NA NA 94 168 NA 88 NA 543 NA ...
     $ mass : chr "33.6" "26.6" "23.3" "28.1" ...
     $ pedigree: num 0.627 0.351 0.672 0.167 2.288 ...
     $ age : num 50 31 32 21 33 30 26 29 53 54 ...
     $ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 1 2 2 ...
     $ all_nas : logi NA NA NA NA NA NA ...
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing diabetes (9 factor) with 0 NAs. Impute value: neg
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     Found 7 variables with NAs.
     Running standard imputation.
     Imputing pregnant (1 factor) with 3 NAs. Pre-filled. Impute value: 1
     Imputing glucose (2 numeric) with 5 NAs. Pre-filled. Impute value: 117
     Imputing pressure (3 numeric) with 35 NAs. Pre-filled. Impute value: 72
     Imputing triceps (4 numeric) with 227 NAs. Pre-filled. Impute value: 29
     Imputing insulin (5 numeric) with 374 NAs. Pre-filled. Impute value: 125
     Imputing mass (6 character) with 11 NAs. Pre-filled. Impute value: 125
     Imputing pedigree (7 numeric) with 0 NAs. Pre-filled. Impute value: 0.3725
     Imputing age (8 numeric) with 0 NAs. Pre-filled. Impute value: 29
     Imputing all_nas (10 logical) with 768 NAs. Pre-filled. Impute value: NA
     Note: cannot impute all_nas because all values are NA.
     Generating missingness indicators.
     Generating 7 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Indicators added (6): miss_pregnant, miss_glucose, miss_pressure, miss_triceps, miss_insulin, miss_mass
     openjdk version "11.0.6" 2020-01-14
     OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1)
     OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1, mixed mode, sharing)
     [1] "/home/hornik/tmp/R.check/r-release-gcc/Work/PKGS/ck37r.Rcheck/tests/testthat"
    
    
     These packages need to be installed: ck37_blah123
     install.packages(c("ck37_blah123"))
     Error in install.packages(pkgs[!result], ...) :
     unable to install packages
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     3.7553 0.0194 0.4232 6.0470 4.4837 0.0002 1.8495
     2.9075 1.9082 1.5314 5.3340 total = 31.26
    
     REML score: 791.4097
    
     Family: gaussian
     Link function: identity
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error t value Pr(>|t|)
     (Intercept) 18.69909 0.79189 23.613 < 2e-16 ***
     chas 2.73680 0.68917 3.971 9.19e-05 ***
     rad 0.36896 0.08078 4.567 7.53e-06 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df F p-value
     s(crim) 3.755334 9 4.296 8.82e-09 ***
     s(zn) 0.019389 9 0.002 0.32075
     s(indus) 0.423183 9 0.081 0.17097
     s(nox) 6.046951 9 5.730 2.08e-10 ***
     s(rm) 4.483747 9 18.510 < 2e-16 ***
     s(age) 0.000192 9 0.000 0.88205
     s(dis) 1.849519 9 2.225 4.00e-06 ***
     s(tax) 2.907545 9 2.591 7.78e-06 ***
     s(ptratio) 1.908213 9 2.988 1.19e-07 ***
     s(black) 1.531418 9 1.070 0.00146 **
     s(lstat) 5.333977 9 24.668 < 2e-16 ***
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.897 Deviance explained = 90.7%
     -REML = 791.41 Scale est. = 8.4364 n = 300
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Estimated degrees of freedom:
     0.897 0.000 0.000 0.000 3.416 0.921 2.031
     0.830 1.945 0.000 0.899 total = 13.94
    
     REML score: 82.1204
    
     Family: binomial
     Link function: logit
    
     Formula:
     Y ~ s(crim, k = -1) + s(zn, k = -1) + s(indus, k = -1) + s(nox,
     k = -1) + s(rm, k = -1) + s(age, k = -1) + s(dis, k = -1) +
     s(tax, k = -1) + s(ptratio, k = -1) + s(black, k = -1) +
     s(lstat, k = -1) + chas + rad
    
     Parametric coefficients:
     Estimate Std. Error z value Pr(>|z|)
     (Intercept) -4.0883 1.1807 -3.463 0.000535 ***
     chas 0.2294 1.0390 0.221 0.825224
     rad 0.2856 0.1036 2.757 0.005832 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     Approximate significance of smooth terms:
     edf Ref.df Chi.sq p-value
     s(crim) 8.975e-01 7 5.743 0.006316 **
     s(zn) 5.680e-06 9 0.000 0.461433
     s(indus) 2.029e-06 9 0.000 1.000000
     s(nox) 3.457e-06 9 0.000 0.653398
     s(rm) 3.416e+00 9 28.592 1.97e-07 ***
     s(age) 9.213e-01 9 10.269 0.000385 ***
     s(dis) 2.031e+00 9 11.704 0.000628 ***
     s(tax) 8.297e-01 9 4.368 0.016663 *
     s(ptratio) 1.945e+00 9 9.539 0.002868 **
     s(black) 3.185e-06 9 0.000 0.759387
     s(lstat) 8.988e-01 9 7.222 0.003569 **
     ---
     Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    
     R-sq.(adj) = 0.721 Deviance explained = 69%
     -REML = 82.12 Scale est. = 1 n = 300
    
     Call:
     SuperLearner(Y = Y_bin, X = X, family = binomial(), SL.library = sl_lib,
     cvControl = list(V = 2L))
    
    
     Risk Coef
     SL.mgcv_1_All 0.09920816 0.93231264
     SL.mean_All 0.25393333 0.06768736
     Generating 5 missingness indicators.
     Checking for collinearity of indicators.
     Generating 0 missingness indicators.
     Checking for collinearity of indicators.
     Generating 1 missingness indicators.
     Checking for collinearity of indicators.
     Generating 6 missingness indicators.
     Checking for collinearity of indicators.
     Removing 1 indicators due to collinearity:
     miss_glucose2
     Generating 6 missingness indicators.
     Removing 1 indicators that are constant.
     Checking for collinearity of indicators.
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     Local physical cores detected: 16
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     'data.frame': 5000 obs. of 6 variables:
     $ W1: int 1 1 0 1 1 0 0 1 0 0 ...
     $ W2: int 1 1 1 1 1 1 1 0 0 1 ...
     $ W3: num 0.7887 0.9758 0.951 0.6037 0.0303 ...
     $ W4: int 3 2 1 3 2 1 2 1 2 0 ...
     $ A : int 0 0 0 1 0 0 0 1 1 1 ...
     $ Y : int 1 1 1 1 1 0 1 1 0 0 ...
     Local physical cores detected: 16
     Restricting usage to 2 cores.
     Our BLAS is setup for 1 threads and OMP is 1 threads.
     doPar backend registered: doSEQ
     Workers enabled: 1
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.214 0.000 0.299
    
     $train
     user system elapsed
     0.169 0.000 0.217
    
     $predict
     user system elapsed
     0.040 0.000 0.076
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.146 0.000 0.146
    
     $train
     user system elapsed
     0.116 0.000 0.116
    
     $predict
     user system elapsed
     0.024 0.000 0.024
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     X dataframe object size: 0.1 MB
     Stacked df dimensions: 10,000 5
     Stacked dataframe object size: 0.3 MB
     Estimating Q using custom SuperLearner.
     Q init fit:
    
    
     Call:
     sl_fn(Y = Y, X = X, family = family, SL.library = Q.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2386825 0.008542372
     SL.glm_All 0.1761028 0.991457628
    
     Q init times:
     $everything
     user system elapsed
     0.174 0.000 0.175
    
     $train
     user system elapsed
     0.141 0.000 0.141
    
     $predict
     user system elapsed
     0.027 0.000 0.028
    
    
     Q object size: 4.3 Mb
     Estimating g using custom SuperLearner.
     g fit:
    
    
     Call:
     sl_fn(Y = A, X = W, family = "binomial", SL.library = g.SL.library, id = id,
     verbose = verbose, cvControl = list(V = V))
    
    
     Risk Coef
     SL.mean_All 0.2336885 0.0004017322
     SL.glm_All 0.1268937 0.9995982678
    
     g times:
     $everything
     user system elapsed
     0.150 0.000 0.152
    
     $train
     user system elapsed
     0.118 0.000 0.119
    
     $predict
     user system elapsed
     0.027 0.000 0.026
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-linux-x86_64
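The SuperLearner and CV.SuperLearner calls echoed earlier in this block can be reproduced directly against the SuperLearner package's documented interface; the ck37r::gen_superlearner() wrapper named in the failing test is not reproduced here. In the sketch below, the X[, 1:3] from the logged call is replaced with the first three columns of MASS::Boston so the example is self-contained.

    # Sketch of the logged SuperLearner / CV.SuperLearner calls, using only the
    # SuperLearner package API; the ck37r wrapper around these calls is omitted.
    library(SuperLearner)
    data(Boston, package = "MASS")

    X <- Boston[, 1:3]   # stand-in for the X[, 1:3] seen in the log

    # Binomial outcome, mean + glm library, 2 stratified folds, as logged.
    sl <- SuperLearner(Y = Boston$chas, X = X, family = binomial(),
                       SL.library = c("SL.mean", "SL.glm"),
                       cvControl = list(V = 2, stratifyCV = TRUE))

    # External cross-validation of the ensemble, as in the logged CV.SuperLearner call.
    cv_sl <- CV.SuperLearner(Y = Boston$chas, X = X, V = 10, family = binomial(),
                             SL.library = c("SL.mean", "SL.glm"))
    summary(cv_sl)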

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [64s/65s]
    Running the tests in ‘tests/testthat.R’ failed.
    Last 13 lines of output:
     0.023 0.004 0.026
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-osx-x86_64
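The failure reported here and on the Linux flavors is at test-gen_superlearner.R#49, where the backtrace shows "object 'cl' not found" raised inside ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2). One way a test file could keep running when that call fails on a given platform is to wrap it, as sketched below; this is an illustration of a possible workaround, not the package's own error handling or a confirmed diagnosis.

    # Sketch only: the call from the logged backtrace, wrapped so a
    # platform-specific failure is reported instead of aborting the test run.
    library(ck37r)

    parallel_config <- tryCatch(
      ck37r::parallelize(type = "multicore", max_cores = 2, verbose = 2),
      error = function(e) {
        message("parallelize() failed on this platform: ", conditionMessage(e))
        NULL   # fall back to sequential execution (doSEQ, as in the log)
      }
    )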

Version: 1.0.3
Check: tests
Result: ERROR
     Running ‘testthat.R’ [66s/67s]
    Running the tests in ‘tests/testthat.R’ failed.
    Last 13 lines of output:
     0.023 0.005 0.027
    
    
     g object size: 4.2 Mb
     Passing results to tmle.
     Estimating missingness mechanism
     1.4 Mb
     0.5 Mb
     9.1 Mb
     ══ testthat results ═══════════════════════════════════════════════════════════
     [ OK: 3 | SKIPPED: 0 | WARNINGS: 2 | FAILED: 1 ]
     1. Error: (unknown) (@test-gen_superlearner.R#49)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-osx-x86_64
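The repeated "Estimating Q using custom SuperLearner ... Passing results to tmle" blocks describe a targeted maximum likelihood estimation (TMLE) run in which SuperLearner fits for the outcome regression (Q) and treatment mechanism (g) are handed to the tmle package. Below is a minimal sketch of that kind of analysis written against the tmle package's documented interface, on simulated data shaped like the logged W1-W4/A/Y data frame; the ck37r wrapper that produced the log output is not reproduced.

    # Sketch only: a plain tmle::tmle() call with SuperLearner libraries for the
    # Q and g steps, on simulated data; variable names and data are illustrative.
    library(tmle)
    set.seed(1)

    n <- 500   # the logged data had 5,000 rows; fewer rows keep the sketch quick
    W <- data.frame(W1 = rbinom(n, 1, 0.5),
                    W2 = rbinom(n, 1, 0.7),
                    W3 = runif(n),
                    W4 = rpois(n, 2))
    A <- rbinom(n, 1, plogis(-0.5 + 0.4 * W$W1 + 0.3 * W$W3))
    Y <- rbinom(n, 1, plogis(0.2 + 0.6 * A + 0.5 * W$W2 - 0.4 * W$W3))

    fit <- tmle(Y = Y, A = A, W = W, family = "binomial",
                Q.SL.library = c("SL.mean", "SL.glm"),
                g.SL.library = c("SL.mean", "SL.glm"))
    print(fit)   # ATE and relative-risk estimates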