CRAN Package Check Results for Package SparkR

Last updated on 2018-05-01 07:48:51 CEST.

Flavor                               Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang    2.3.0       20.00  102.39  122.39  ERROR
r-devel-linux-x86_64-debian-gcc      2.3.0       18.22   87.25  105.47  ERROR
r-devel-linux-x86_64-fedora-clang    2.3.0                       85.02  ERROR
r-devel-linux-x86_64-fedora-gcc      2.3.0                      147.94  ERROR
r-devel-windows-ix86+x86_64          2.3.0       47.00  304.00  351.00  OK
r-patched-linux-x86_64               2.3.0       18.65  101.52  120.17  ERROR
r-patched-solaris-x86                2.3.0                      197.80  ERROR
r-release-linux-x86_64               2.3.0       18.29  102.87  121.16  ERROR
r-release-windows-ix86+x86_64        2.3.0       47.00  304.00  351.00  OK
r-release-osx-x86_64                 2.3.0                              ERROR
r-oldrel-windows-ix86+x86_64         2.3.0       19.00  337.00  356.00  OK
r-oldrel-osx-x86_64                  2.3.0                              OK

Check Details

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [11s/357s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /home/hornik/.cache/spark
     DONE.
     SPARK_HOME set to /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpKHoEFN" sparkr-shell /tmp/RtmpKHoEFN/backend_port7d7531b4a750
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     WARNING: An illegal reflective access operation has occurred
     WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
     WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
     WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
     WARNING: All illegal access operations will be denied in a future release
     2018-04-29 18:56:42 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
     2018-04-29 18:56:49 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: expect_equal(count(df), i) at /home/hornik/tmp/R.check/r-devel-clang/Work/build/Packages/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
     2018-04-29 18:56:50 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /home/hornik/tmp/R.check/r-devel-clang/Work/build/Packages/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-clang

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    
    Attaching package: 'SparkR'
    
    The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
    The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith,
     intersect, rank, rbind, sample, startsWith, subset, summary,
     transform, union
    
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
    WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    2018-04-29 18:56:56 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
    [Stage 1:> (0 + 1) / 1]
    
    
    [Stage 10:===========================> (50 + 1) / 100]
    [Stage 10:==========================================> (77 + 1) / 100]
    
    2018-04-29 18:57:09 ERROR RBackendHandler:91 - dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
    java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
     at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:173)
     at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
     ... 36 more
    Quitting from lines 102-104 (sparkr-vignettes.Rmd)
    Error: processing vignette ‘sparkr-vignettes.Rmd’ failed with diagnostics:
    java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.fore
    Execution halted
Flavor: r-devel-linux-x86_64-debian-clang

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [9s/254s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /home/hornik/.cache/spark
     DONE.
     SPARK_HOME set to /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/home/hornik/tmp/scratch/RtmpFMMZSh" sparkr-shell /home/hornik/tmp/scratch/RtmpFMMZSh/backend_portaf71bd18d19
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     WARNING: An illegal reflective access operation has occurred
     WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
     WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
     WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
     WARNING: All illegal access operations will be denied in a future release
     2018-04-30 17:44:42 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 0:> (0 + 0) / 1]
     [Stage 0:> (0 + 1) / 1]
    
     2018-04-30 17:44:56 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: expect_equal(count(df), i) at /home/hornik/tmp/R.check/r-devel-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
     2018-04-30 17:44:58 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /home/hornik/tmp/R.check/r-devel-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
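Both test failures above end in the same JVM-side java.lang.IllegalArgumentException, thrown from org.apache.xbean.asm5.ClassReader inside Spark's ClosureCleaner once the first job runs. The R tracebacks name the calls that reach it; the sketch below reconstructs those two calls (an approximation inferred from the tracebacks, not the verbatim contents of test_basic.R; the use of iris and the variable names are assumptions, while the formula and count(df) are taken verbatim from the tracebacks):

    library(SparkR)
    sparkR.session(master = "local[1]")   # run-all.R uses master "local[1]" on CRAN

    # Failure 1 (test_basic.R#26): count() on a DataFrame built from a local
    # data.frame; count() goes through callJMethod(x@sdf, "count") into the JVM.
    df <- createDataFrame(iris)
    count(df)

    # Failure 2 (test_basic.R#58): spark.glm() goes through
    # callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", ...).
    training <- suppressWarnings(createDataFrame(iris))   # dots in column names become underscores
    model <- spark.glm(training, Sepal_Width ~ Sepal_Length + Species)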

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    
    Attaching package: 'SparkR'
    
    The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
    The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith,
     intersect, rank, rbind, sample, startsWith, subset, summary,
     transform, union
    
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
    WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    2018-04-30 17:45:09 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
    [Stage 0:> (0 + 0) / 1]
    [Stage 0:> (0 + 1) / 1]
    
    
    [Stage 1:> (0 + 1) / 1]
    
    
    [Stage 8:======================================================> (19 + 1) / 20]
    
    
    [Stage 10:==============> (26 + 1) / 100]
    [Stage 10:====================> (38 + 1) / 100]
    [Stage 10:=============================> (54 + 1) / 100]
    [Stage 10:=====================================> (69 + 1) / 100]
    [Stage 10:==============================================> (85 + 1) / 100]
    [Stage 10:======================================================>(99 + 1) / 100]
    
    
    [Stage 12:==================================> (46 + 1) / 75]
    [Stage 12:============================================> (59 + 1) / 75]
    [Stage 12:=====================================================> (71 + 1) / 75]
    
    2018-04-30 17:45:34 ERROR RBackendHandler:91 - dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
    java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
     at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:173)
     at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
     ... 36 more
    Quitting from lines 102-104 (sparkr-vignettes.Rmd)
    Error: processing vignette ‘sparkr-vignettes.Rmd’ failed with diagnostics:
    java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.fore
    Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
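The vignette re-build stops on the same JVM-side IllegalArgumentException, reached here through org.apache.spark.sql.api.r.SQLUtils.dfToCols, which SparkR's collect()/head() use to pull a SparkDataFrame back into R. Below is a minimal sketch of the kind of chunk that takes this path (an illustration only; the exact code at lines 102-104 of sparkr-vignettes.Rmd is not reproduced here, and the use of faithful is an assumption):

    library(SparkR)
    sparkR.session(master = "local[1]")

    df <- createDataFrame(faithful)   # any local data.frame would do
    head(df)                          # head() -> collect() -> SQLUtils.dfToCols in the JVM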

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [4s/40s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
      Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://mirror.ox.ac.uk/sites/rsync.apache.org/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://mirror.ox.ac.uk/sites/rsync.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://mirror.ox.ac.uk/sites/rsync.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-tar' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /data/gannet/ripley/.cache/spark
     DONE.
     SPARK_HOME set to /data/gannet/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /data/gannet/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpMDTRki" sparkr-shell /tmp/RtmpMDTRki/backend_porta183b2d2506
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     2018-04-30 12:23:31 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 1:> (0 + 1) / 1]2018-04-30 12:23:47 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
     java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
     2018-04-30 12:23:47 WARN TaskSetManager:66 - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     2018-04-30 12:23:47 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job
     2018-04-30 12:23:47 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     Caused by: java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     ... 1 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: java.net.SocketTimeoutExce
     1: expect_equal(count(df), i) at /data/gannet/ripley/R/packages/tests-clang/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
    
    
    
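     The R traceback above shows how the first failure reaches the test: count() on the
     SparkDataFrame dispatches through callJMethod()/invokeJava() to the JVM backend, and the
     Java-side SparkException (the R worker accept timeout) is read back from the connection
     and re-raised with stop(). A minimal sketch of the public-API calls this test exercises,
     assuming Spark is installed (e.g. via install.spark()) and a local session can start; the
     column name and row count here are illustrative, the exact code is in
     tests/testthat/test_basic.R:

         library(SparkR)
         sparkR.session(master = "local[1]", enableHiveSupport = FALSE)
         df <- createDataFrame(data.frame(dummy = 1:4))  # SparkDataFrame from a local data.frame
         count(df)                                       # the action that hits the accept timeout on CRAN
         sparkR.session.stop()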
     [Stage 4:> (0 + 1) / 1]2018-04-30 12:23:58 ERROR Executor:91 - Exception in task 0.0 in stage 4.0 (TID 3)
     java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
     2018-04-30 12:23:58 WARN TaskSetManager:66 - Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     2018-04-30 12:23:58 ERROR TaskSetManager:70 - Task 0 in stage 4.0 failed 1 times; aborting job
     2018-04-30 12:23:58 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     Caused by: java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     ... 1 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /data/gannet/ripley/R/packages/tests-clang/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang
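Both failures on this flavor come from the same java.net.SocketTimeoutException: per the stack
traces, RRunner.createRWorker() times out in ServerSocket.accept() while waiting for the R worker
process to connect, so any job that needs an R worker aborts: the count() in the first test and
the RFormula/StringIndexer pipeline that spark.glm() fits through GeneralizedLinearRegressionWrapper
in the second. A minimal sketch of the second test's call path, again assuming a working local
session; the formula matches the traceback, the rest is illustrative:

    library(SparkR)
    sparkR.session(master = "local[1]", enableHiveSupport = FALSE)
    training <- suppressWarnings(createDataFrame(iris))   # '.' in iris column names becomes '_'
    model <- spark.glm(training, Sepal_Width ~ Sepal_Length + Species)  # the fit that aborts on CRAN
    head(predict(model, training))                         # the test also exercises predict()
    sparkR.session.stop()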

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [7s/63s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://www.mirrorservice.org/sites/ftp.apache.org/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://www.mirrorservice.org/sites/ftp.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://www.mirrorservice.org/sites/ftp.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /data/gannet/ripley/.cache/spark
     DONE.
     SPARK_HOME set to /data/gannet/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /data/gannet/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpqyGksa" sparkr-shell /tmp/RtmpqyGksa/backend_port16f87c7de0a4
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     2018-04-30 12:02:08 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 0:> (0 + 1) / 1]
    
    
     [Stage 1:> (0 + 1) / 1]2018-04-30 12:02:36 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
     java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
     2018-04-30 12:02:36 WARN TaskSetManager:66 - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     2018-04-30 12:02:36 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job
     2018-04-30 12:02:36 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     Caused by: java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     ... 1 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: java.net.SocketTimeoutExce
     1: expect_equal(count(df), i) at /data/gannet/ripley/R/packages/tests-devel/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
    
    
    
     [Stage 4:> (0 + 1) / 1]2018-04-30 12:02:48 ERROR Executor:91 - Exception in task 0.0 in stage 4.0 (TID 3)
     java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
     2018-04-30 12:02:48 WARN TaskSetManager:66 - Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     2018-04-30 12:02:48 ERROR TaskSetManager:70 - Task 0 in stage 4.0 failed 1 times; aborting job
     2018-04-30 12:02:48 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
     Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     Caused by: java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     ... 1 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 3, localhost, executor driver): java.net.SocketTimeoutException: Accept timed out
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
     at java.net.ServerSocket.implAccept(ServerSocket.java:545)
     at java.net.ServerSocket.accept(ServerSocket.java:513)
     at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:372)
     at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
     at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:51)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
     at org.apache.spark.scheduler.Task.run(Task.scala:109)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
    
     Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /data/gannet/ripley/R/packages/tests-devel/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc
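
Note: the tracebacks above show that both failures on this flavor hit the same root cause: the JVM side times out in RRunner.createRWorker (java.net.SocketTimeoutException: Accept timed out) while waiting for the R worker process, first when counting a DataFrame created from an R data.frame and then when fitting a generalized linear model. A minimal SparkR sketch of the two call paths involved (an illustration only, not the actual contents of test_basic.R; the formula, column names and the local[1] master are taken from the log above):

    library(SparkR)
    sparkR.session(master = "local[1]")                   # same master run-all.R uses on CRAN
    training <- suppressWarnings(createDataFrame(iris))   # '.' in iris column names is replaced by '_'
    count(training)                                       # the DataFrame count that timed out in test 1
    model <- spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
    head(predict(model, training))                        # the "spark.glm and predict" path in test 2
    sparkR.session.stop()

Both calls reach the JVM through invokeJava() and handleErrors() on the R side, which is why each R-level traceback above ends in stop(readString(conn)) carrying the Java exception text.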

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [7s/54s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar fileand Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /home/hornik/.cache/spark
     DONE.
     SPARK_HOME set to /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpRgbvOO" sparkr-shell /tmp/RtmpRgbvOO/backend_port443a2cbc7fdc
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     WARNING: An illegal reflective access operation has occurred
     WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
     WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
     WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
     WARNING: All illegal access operations will be denied in a future release
     2018-04-30 06:30:19 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 0:> (0 + 0) / 1]
     [Stage 0:> (0 + 1) / 1]
    
     2018-04-30 06:30:35 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: expect_equal(count(df), i) at /home/hornik/tmp/R.check/r-patched-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
     2018-04-30 06:30:37 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /home/hornik/tmp/R.check/r-patched-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-linux-x86_64
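
The IllegalArgumentException thrown from org.apache.xbean.asm5.ClassReader.<init> inside ClosureCleaner, together with the java.base/jdk.internal frames in the stack, is consistent with Spark 2.3.0 being launched on a Java 9+ JVM whose class files the bundled ASM 5 cannot read. For reference, the two operations the traceback reports as failing can be reproduced outside the test harness; the following is a minimal R sketch assembled from the calls shown in the traceback (sparkR.session, count, spark.glm), assuming a local Spark 2.3.0 installation from install.spark(). It is not part of the CRAN output.

    library(SparkR)

    # Local session matching the CRAN test setup (master = "local[1]", no Hive).
    sparkR.session(master = "local[1]", enableHiveSupport = FALSE)

    # 1. "create DataFrame from list or data.frame": count() drives the
    #    Dataset.count() path seen at the bottom of the stack trace.
    df <- createDataFrame(data.frame(a = 1:3))
    count(df)

    # 2. "spark.glm and predict": fit a GLM through the R wrapper that fails
    #    in RBackendHandler (GeneralizedLinearRegressionWrapper.fit).
    training <- suppressWarnings(createDataFrame(iris))
    model <- spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
    head(predict(model, training))

    sparkR.session.stop()

On a Java 8 JVM both calls are expected to complete; the traces above suggest the check machine was running a newer JVM.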

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    
    Attaching package: 'SparkR'
    
    The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
    The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith,
     intersect, rank, rbind, sample, startsWith, subset, summary,
     transform, union
    
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
    WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    2018-04-30 06:30:50 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
    [Stage 0:> (0 + 0) / 1]
    [Stage 0:===========================================================(1 + 0) / 1]
    
    
    [Stage 1:> (0 + 1) / 1]
    
    
    [Stage 3:> (0 + 1) / 1]
    
    
    [Stage 8:==========================================> (15 + 1) / 20]
    
    
    [Stage 10:============> (23 + 1) / 100]
    [Stage 10:=================> (32 + 1) / 100]
    [Stage 10:=======================> (42 + 1) / 100]
    [Stage 10:=============================> (54 + 1) / 100]
    [Stage 10:===================================> (65 + 1) / 100]
    [Stage 10:=========================================> (75 + 1) / 100]
    [Stage 10:================================================> (89 + 1) / 100]
    
    
    [Stage 12:=============================> (39 + 1) / 75]
    [Stage 12:=====================================> (50 + 1) / 75]
    [Stage 12:===============================================> (63 + 1) / 75]
    
    2018-04-30 06:31:22 ERROR RBackendHandler:91 - dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
    java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
     at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:173)
     at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
     ... 36 more
    Quitting from lines 102-104 (sparkr-vignettes.Rmd)
    Error: processing vignette ‘sparkr-vignettes.Rmd’ failed with diagnostics:
    java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.fore
    Execution halted
Flavor: r-patched-linux-x86_64
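
The vignette rebuild stops at the same IllegalArgumentException, this time raised while the vignette pulls a SparkDataFrame back into R (the SQLUtils.dfToCols call in the trace). Below is a minimal sketch of the kind of operation that exercises that path, under the same assumptions as above; faithful is used only as example data and is not taken from the vignette.

    library(SparkR)
    sparkR.session(master = "local[1]", enableHiveSupport = FALSE)

    # collect() (and head(), which collects a few rows) transfers a
    # SparkDataFrame to an R data.frame via SQLUtils.dfToCols on the JVM
    # side, the entry point that fails in the trace above.
    df <- createDataFrame(faithful)
    head(df)
    localDF <- collect(df)

    sparkR.session.stop()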

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [17s/264s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar fileand Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://www.mirrorservice.org/sites/ftp.apache.org/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://www.mirrorservice.org/sites/ftp.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://www.mirrorservice.org/sites/ftp.apache.org/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /home/ripley/.cache/spark
     DONE.
     SPARK_HOME set to /home/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /home/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpUQaqmo" sparkr-shell /tmp/RtmpUQaqmo/backend_port1c6238234986
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/Main : Unsupported major.minor version 52.0
     at java.lang.ClassLoader.defineClass1(Native Method)
     at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
     at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
     at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
     at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#21) ──────
     JVM is not ready after 10 seconds
     1: sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE, sparkConfig = sparkRTestConfig) at /home/ripley/R/Lib32/SparkR/tests/testthat/test_basic.R:21
     2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, sparkExecutorEnvMap,
     sparkJars, sparkPackages)
     3: stop("JVM is not ready after 10 seconds")
    
     Launching java with spark-submit command /home/ripley/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpUQaqmo" sparkr-shell /tmp/RtmpUQaqmo/backend_port1c62599a450d
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/Main : Unsupported major.minor version 52.0
     at java.lang.ClassLoader.defineClass1(Native Method)
     at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
     at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
     at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
     at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
     ── 2. Error: spark.glm and predict (@test_basic.R#53) ─────────────────────────
     JVM is not ready after 10 seconds
     1: sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE, sparkConfig = sparkRTestConfig) at /home/ripley/R/Lib32/SparkR/tests/testthat/test_basic.R:53
     2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, sparkExecutorEnvMap,
     sparkJars, sparkPackages)
     3: stop("JVM is not ready after 10 seconds")
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#21)
     2. Error: spark.glm and predict (@test_basic.R#53)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-solaris-x86

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    Warning in engine$weave(file, quiet = quiet, encoding = enc) :
     Pandoc (>= 1.12.3) and/or pandoc-citeproc not available. Falling back to R Markdown v1.
    
    Attaching package: 'SparkR'
    
    The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
    The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith,
     intersect, rank, rbind, sample, startsWith, subset, summary,
     transform, union
    
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/Main : Unsupported major.minor version 52.0
     at java.lang.ClassLoader.defineClass1(Native Method)
     at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
     at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
     at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
     at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
    Quitting from lines 65-67 (sparkr-vignettes.Rmd)
    Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
    JVM is not ready after 10 seconds
    Execution halted
Flavor: r-patched-solaris-x86
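
The Solaris failures are different: java.lang.UnsupportedClassVersionError with "Unsupported major.minor version 52.0" means the JVM that spark-submit launched predates Java 8 (class-file major version 52 corresponds to Java 8, the minimum Spark 2.3.0 supports), so the backend never starts and sparkR.session() gives up with "JVM is not ready after 10 seconds". A quick way to check which Java the SparkR backend would pick up, sketched in R under the assumption that it is run in the same environment as the check:

    # spark-submit prefers $JAVA_HOME/bin/java when JAVA_HOME is set;
    # otherwise it falls back to the java found on PATH.
    Sys.getenv("JAVA_HOME")

    # Print the JVM version string; 1.8 or later is required by Spark 2.3.0.
    system2("java", "-version", stdout = TRUE, stderr = TRUE)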

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [9s/106s]
    Running the tests in ‘tests/run-all.R’ failed.
    Complete output:
     > #
     > # Licensed to the Apache Software Foundation (ASF) under one or more
     > # contributor license agreements. See the NOTICE file distributed with
     > # this work for additional information regarding copyright ownership.
     > # The ASF licenses this file to You under the Apache License, Version 2.0
     > # (the "License"); you may not use this file except in compliance with
     > # the License. You may obtain a copy of the License at
     > #
     > # http://www.apache.org/licenses/LICENSE-2.0
     > #
     > # Unless required by applicable law or agreed to in writing, software
     > # distributed under the License is distributed on an "AS IS" BASIS,
     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     > # See the License for the specific language governing permissions and
     > # limitations under the License.
     > #
     >
     > library(testthat)
     > library(SparkR)
    
     Attaching package: 'SparkR'
    
     The following objects are masked from 'package:testthat':
    
     describe, not
    
     The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
     The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
     rank, rbind, sample, startsWith, subset, summary, transform, union
    
     >
     > # Turn all warnings into errors
     > options("warn" = 2)
     >
     > if (.Platform$OS.type == "windows") {
     + Sys.setenv(TZ = "GMT")
     + }
     >
     > # Setup global test environment
     > # Install Spark first to set SPARK_HOME
     >
     > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
     > install.spark(overwrite = TRUE)
     Overwrite = TRUE: download and overwrite the tar fileand Spark package directory if they exist.
     Spark not found in the cache directory. Installation will start.
     MirrorUrl not provided.
     Looking for preferred site from apache website...
     Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
     Downloading spark-2.3.0 for Hadoop 2.7 from:
     - http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz'
     Content type 'application/x-gzip' length 226128401 bytes (215.7 MB)
     ==================================================
     downloaded 215.7 MB
    
     Installing to /home/hornik/.cache/spark
     DONE.
     SPARK_HOME set to /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7
     >
     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
     > invisible(lapply(sparkRWhitelistSQLDirs,
     + function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
     >
     > sparkRTestMaster <- "local[1]"
     > sparkRTestConfig <- list()
     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
     + sparkRTestMaster <- ""
     + } else {
     + # Disable hsperfdata on CRAN
     + old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
     + Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
     + tmpDir <- tempdir()
     + tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
     + sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
     + spark.executor.extraJavaOptions = tmpArg)
     + }
     >
     > test_package("SparkR")
     Launching java with spark-submit command /home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/RtmpaYEcUT" sparkr-shell /tmp/RtmpaYEcUT/backend_portc3915a213f5
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
     WARNING: An illegal reflective access operation has occurred
     WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
     WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
     WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
     WARNING: All illegal access operations will be denied in a future release
     2018-04-30 18:32:36 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     Setting default log level to "WARN".
     To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 0:> (0 + 0) / 1]
     [Stage 0:> (0 + 1) / 1]
    
     2018-04-30 18:32:52 ERROR RBackendHandler:91 - count on 13 failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     ... 36 more
     ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
     at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: expect_equal(count(df), i) at /home/hornik/tmp/R.check/r-release-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:26
     2: quasi_label(enquo(object), label)
     3: eval_bare(get_expr(quo), get_env(quo))
     4: count(df)
     5: count(df)
     6: callJMethod(x@sdf, "count")
     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
     8: handleErrors(returnStatus, conn)
     9: stop(readString(conn))
    
     2018-04-30 18:32:54 ERROR RBackendHandler:91 - fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
     java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     ... 36 more
     ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
     java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
     at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
     at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
     at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
     at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
     at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:292)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:86)
     at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /home/hornik/tmp/R.check/r-release-gcc/Work/build/Packages/SparkR/tests/testthat/test_basic.R:58
     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
     3: .local(data, formula, ...)
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-linux-x86_64
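
The two failing tests reduce to ordinary SparkR calls, visible in the tracebacks above: expect_equal(count(df), i) at test_basic.R#26 and spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at test_basic.R#58. A minimal sketch for reproducing them interactively, assuming a local Spark 2.3.0 installation and a Java 8 runtime; the data frames built by test_basic.R are not shown in the log, so mtcars and iris stand in for df and training:

    library(SparkR)

    # Assumes SPARK_HOME points at the spark-2.3.0-bin-hadoop2.7 tree that
    # install.spark() downloads; otherwise run install.spark() first.
    sparkR.session(master = "local[1]", sparkConfig = list(spark.driver.memory = "1g"))

    # test_basic.R#26 exercises count() on a DataFrame built from an R data.frame.
    df <- createDataFrame(mtcars)
    count(df)        # the call that raised java.lang.IllegalArgumentException above

    # test_basic.R#58 fits a GLM; 'training' is assumed to be iris, whose column
    # names SparkR rewrites from '.' to '_' (hence Sepal_Length, Sepal_Width).
    training <- createDataFrame(iris)
    model <- spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
    summary(model)

    sparkR.session.stop()

On a Java 8 JVM these calls are expected to succeed; the errors above appear to come from the newer JVMs on the check machines (see the note after the last flavor below).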

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    
    Attaching package: 'SparkR'
    
    The following objects are masked from 'package:stats':
    
     cov, filter, lag, na.omit, predict, sd, var, window
    
    The following objects are masked from 'package:base':
    
     as.data.frame, colnames, colnames<-, drop, endsWith,
     intersect, rank, rbind, sample, startsWith, subset, summary,
     transform, union
    
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/hornik/.cache/spark/spark-2.3.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
    WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    2018-04-30 18:33:06 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    
     [Stage 0:> (0 + 0) / 1]
     [Stage 0:> (0 + 1) / 1]
     [Stage 1:> (0 + 1) / 1]
     [Stage 3:> (0 + 1) / 1]
     [Stage 10:=============> (25 + 1) / 100]
     [Stage 10:====================> (38 + 1) / 100]
     [Stage 10:============================> (51 + 1) / 100]
     [Stage 10:==================================> (63 + 2) / 100]
     [Stage 10:=========================================> (76 + 1) / 100]
     [Stage 10:================================================> (88 + 1) / 100]
     [Stage 10:======================================================>(99 + 1) / 100]
     [Stage 12:============================> (38 + 1) / 75]
     [Stage 12:======================================> (52 + 1) / 75]
     [Stage 12:==================================================> (67 + 1) / 75]
    
    2018-04-30 18:33:35 ERROR RBackendHandler:91 - dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
    java.lang.reflect.InvocationTargetException
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.base/java.lang.reflect.Method.invoke(Method.java:564)
     at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
     at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
     at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
     at scala.collection.immutable.List.foreach(List.scala:381)
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
     at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
     at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:173)
     at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
     ... 36 more
    Quitting from lines 102-104 (sparkr-vignettes.Rmd)
    Error: processing vignette ‘sparkr-vignettes.Rmd’ failed with diagnostics:
    java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.fore
    Execution halted
Flavor: r-release-linux-x86_64
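
The vignette warning is the same underlying JVM error surfacing through dfToCols, which the stack trace shows is reached from Dataset.collect, i.e. whenever a SparkDataFrame is materialised back into R. A hedged sketch of the kind of call involved; the actual chunk at lines 102-104 of sparkr-vignettes.Rmd is not reproduced in the log, so faithful is used here only as an example dataset:

    library(SparkR)
    sparkR.session(master = "local[1]")

    # Any action that pulls a SparkDataFrame into R -- collect(), head(), take() --
    # goes through org.apache.spark.sql.api.r.SQLUtils.dfToCols on the JVM side.
    df <- createDataFrame(faithful)
    collect(df)      # the dfToCols path that failed during the vignette re-build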

Version: 2.3.0
Check: tests
Result: ERROR
     Running ‘run-all.R’ [8s/35s]
    Running the tests in ‘tests/run-all.R’ failed.
    Last 13 lines of output:
     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
     data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
     regParam, as.double(var.power), as.double(link.power), stringIndexerOrderType,
     offsetCol)
     5: invokeJava(isStatic = TRUE, className, methodName, ...)
     6: handleErrors(returnStatus, conn)
     7: stop(readString(conn))
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 0 SKIPPED: 0 FAILED: 2
     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
     2. Error: spark.glm and predict (@test_basic.R#58)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-osx-x86_64
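
Entries 4 and 5 of the R traceback above (callJStatic(...) and invokeJava(...)) are SparkR's bridge to the JVM; every ML wrapper such as spark.glm goes through it. As a sketch only, the same unexported helper named in the traceback can be used to ask the running backend which Java version it is on, which helps when triaging failures like these:

    library(SparkR)
    sparkR.session(master = "local[1]")

    # Query the JVM behind the session through the internal bridge used by spark.glm().
    SparkR:::callJStatic("java.lang.System", "getProperty", "java.version")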

Version: 2.3.0
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3272)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
     at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
     at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:173)
     at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
     ... 36 more
    Quitting from lines 102-104 (sparkr-vignettes.Rmd)
    Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
    java.lang.IllegalArgumentException
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
     at scala.collection.mutable.HashMap.foreachE
    Execution halted
Flavor: r-release-osx-x86_64
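
Across the failing flavors the root error is java.lang.IllegalArgumentException thrown from org.apache.xbean.asm5.ClassReader.<init>, and the surrounding frames carry the java.base/ module prefix that only appears in Java 9+ stack traces; the "illegal reflective access" warnings in the vignette log point the same way. This is consistent with Spark 2.3.0's bundled ASM 5 being unable to read class files from a Java 9+ runtime. A sketch, assuming a Java 8 JDK is available at the illustrative path below, of steering SparkR onto it before a session is created:

    # Check which JVM the R process currently resolves.
    system("java -version")

    # Point the Spark launcher at a Java 8 install before creating the session
    # (the path is illustrative; substitute the machine's real JDK 8 location).
    Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64")

    library(SparkR)
    sparkR.session(master = "local[1]")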