Apache Spark java.io.IOException: Frame size (67108864) larger than max length (16777216)!

Tags: apache-spark, alluxio

I am running Spark + Alluxio in standalone mode for data access. More specifically, I have one Spark master and one Spark worker.
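
For context, here is a minimal sketch of the kind of job I am running. The host, port, and path are placeholders (19998 is Alluxio's default master RPC port); my real job reads image files through a custom InputFormat, as the stack trace below shows:

    import org.apache.spark.{SparkConf, SparkContext}

    object AlluxioReadJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("AlluxioReadJob"))
        // "alluxio-master" and the path are hypothetical placeholders.
        // The real job uses a custom InputFormat (net.atos.hadoop.ImageRecordReader)
        // via newAPIHadoopFile, but since the exception is thrown while the Alluxio
        // client connects to the master, a plain read on an alluxio:// URI
        // presumably fails the same way.
        val lines = sc.textFile("alluxio://alluxio-master:19998/images/part-00000")
        println(s"Read ${lines.count()} lines")
        sc.stop()
      }
    }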

When running a job, I get the following error:

17/03/22 14:35:43 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 10.254.22.6): java.io.IOException: Frame size (67108864) larger than max length (16777216)!
        at alluxio.AbstractClient.checkVersion(AbstractClient.java:112)
        at alluxio.AbstractClient.connect(AbstractClient.java:175)
        at alluxio.AbstractClient.retryRPC(AbstractClient.java:322)
        at alluxio.client.file.FileSystemMasterClient.getStatus(FileSystemMasterClient.java:183)
        at alluxio.client.file.BaseFileSystem.getStatus(BaseFileSystem.java:175)
        at alluxio.client.file.BaseFileSystem.getStatus(BaseFileSystem.java:167)
        at alluxio.hadoop.HdfsFileInputStream.<init>(HdfsFileInputStream.java:86)
        at alluxio.hadoop.AbstractFileSystem.open(AbstractFileSystem.java:514)
        at alluxio.hadoop.FileSystem.open(FileSystem.java:25)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
        at net.atos.hadoop.ImageRecordReader.initialize(ImageRecordReader.java:47)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:153)
        at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:124)
        at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Environment:

  • Spark 1.5.2
  • Alluxio 1.3.0

I had set SPARK_WORKER_MEMORY to 2G on both the master and the worker (passed as an environment variable). I tried increasing it to 4G, but I only changed the setting on the worker. I suspect this created a mismatch between the master and the worker.

Setting it to the same value (4G) on both nodes solved the problem.
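
For completeness, this is the relevant line as it now stands on both nodes, assuming the standard conf/spark-env.sh location; 4G is simply the value that worked for my workload, not a general recommendation:

    # conf/spark-env.sh -- keep this identical on the master and on every worker
    export SPARK_WORKER_MEMORY=4G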