Apache Spark error: 'java.lang.UnsupportedOperationException' for pandas_udf documentation code

Tags: apache-spark, pyspark, apache-spark-sql, pyspark-dataframes

I am having trouble running Spark code copied from the available PySpark documentation.

For example, when I try the following code related to grouped map:

import numpy as np
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql import SparkSession

spark.stop()  # stop the session the pyspark shell auto-created, before re-creating it below

spark = SparkSession.builder.appName("New_App_grouped_map").getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")  # on Spark 3.x this is a deprecated alias of spark.sql.execution.arrow.pyspark.enabled

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))


@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").apply(subtract_mean).show()
I get the following error log.

Main error:

ERROR ArrowPythonRunner: Python worker exited unexpectedly (crashed)
I downloaded Spark into a separate C:\spark\ folder, so I am not sure whether I have to move the globally installed pyarrow package into the Spark folder. Is that the problem?
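A quick way to check which pyarrow the Spark workers actually import, without moving any packages, is to ask a worker directly. A minimal sketch using a plain (non-Arrow) zero-argument UDF, so it works even while the Arrow path is broken:

from pyspark.sql.functions import udf

@udf("string")
def worker_pyarrow_version():
    # runs on a Python worker process, not on the driver
    import pyarrow
    return pyarrow.__version__

spark.range(1).select(worker_pyarrow_version()).show()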

Full error log:

>>> df.groupby("id").apply(subtract_mean).show()
[Stage 16:======================================================>(99 + 1) / 100]20/05/
30 16:57:17 ERROR ArrowPythonRunner: Python worker exited unexpectedly (crashed)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 577, in main
  File "C:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 837, in read_int

    raise EOFError
EOFError

        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonExc
eption(PythonRunner.scala:484)
        at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(Python
ArrowOutput.scala:99)
        at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(Python
ArrowOutput.scala:49)
        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonR
unner.scala:437)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:
37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorF
orCodegenStage3.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowItera
tor.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeS
tageCodegenExec.scala:726)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPl
an.scala:321)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala
:872)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala
:441)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecu
tor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExec
utor.java:630)
        at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.Direct
ByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.jav
a:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(Arro
wRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSeri
alizer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:1
32)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$wr
iteIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIterat
orToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(Py
thonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.
scala:213)
20/05/30 16:57:17 ERROR ArrowPythonRunner: This may have been caused by a prior except
ion:
java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.
<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.jav
a:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(Arro
wRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSeri
alizer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:1
32)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$wr
iteIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIterat
orToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(Py
thonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.
scala:213)
20/05/30 16:57:17 ERROR Executor: Exception in task 44.0 in stage 16.0 (TID 159)
java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.
<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.jav
a:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(Arro
wRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSeri
alizer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:1
32)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$wr
iteIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIterat
orToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(Py
thonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.
scala:213)
20/05/30 16:57:17 ERROR TaskSetManager: Task 44 in stage 16.0 failed 1 times; aborting
 job
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\spark\python\pyspark\sql\dataframe.py", line 407, in show
    print(self._jdf.showString(n, 20, vertical))
  File "C:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\java_gateway.py", line 1286, in
 __call__
  File "C:\spark\python\pyspark\sql\utils.py", line 98, in deco
    return f(*a, **kw)
  File "C:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\protocol.py", line 328, in get_
return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o170.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage
16.0 failed 1 times, most recent failure: Lost task 44.0 in stage 16.0 (TID 159, DESKT
OP-ASG768U, executor driver): java.lang.UnsupportedOperationException: sun.misc.Unsafe
 or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.jav
a:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(Arro
wRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSeri
alizer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:1
32)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$wr
iteIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIterat
orToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(Py
thonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.
scala:213)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGSche
duler.scala:1989)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.
scala:1977)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGSc
heduler.scala:1976)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1976)

        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGS
cheduler.scala:956)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adap
ted(DAGScheduler.scala:956)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.sc
ala:956)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGSche
duler.scala:2206)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGSchedu
ler.scala:2155)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGSchedu
ler.scala:2144)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:758)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2116)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2137)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2156)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:431)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:
47)
        at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3482)
        at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2581)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3472)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(
SQLExecution.scala:100)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecu
tion.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecutio
n.scala:87)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3468)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:2581)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2788)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:297)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:334)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Meth
od)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethod
AccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Delegati
ngMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.Direct
ByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.jav
a:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(Arro
wRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSeri
alizer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:1
32)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$wr
iteIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIterat
orToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(Py
thonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.
scala:213)

My package versions:

pyarrow==0.17.1
pandas==1.0.4
numpy==1.18.4

No, this is not a pyarrow installation problem. The relevant part of the log is the root cause:

Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available

Netty (which Arrow uses for off-heap buffers) raises this on Java 9+ because those JDK internals are no longer accessible by default; the java.base/... frames in your trace show you are on a modular JDK. There are two fixes.

Option 1: run Spark on Java 8, where those internals are still accessible. On Linux, for example:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/

(On Windows, point the JAVA_HOME environment variable at a JDK 8 installation instead.)

Option 2: keep your current JDK and let Netty use reflection. Copy $SPARK_HOME/conf/spark-defaults.conf.template to $SPARK_HOME/conf/spark-defaults.conf and add:

spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true"
spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true"

Then restart pyspark so the options are passed to the JVM at launch.
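After relaunching, a quick way to confirm the option actually reached the session (the "NOT SET" default is just a guard for when the key is absent):

# sanity check: the option should be visible on the driver's SparkConf
print(spark.sparkContext.getConf().get("spark.driver.extraJavaOptions", "NOT SET"))
# expected: -Dio.netty.tryReflectionSetAccessible=true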
conf = {"spark.driver.extraJavaOptions":
"-Dio.netty.tryReflectionSetAccessible=true",
        "spark.executor.extraJavaOptions":
"-Dio.netty.tryReflectionSetAccessible=true"
}

SparkSession.builder.config(conf=conf).getOrCreate()
or build a SparkConf directly:

from pyspark import SparkConf
from pyspark.sql import SparkSession


def _build_spark_session(app_name: str) -> SparkSession:
    conf = SparkConf()
    conf.set("spark.driver.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")
    conf.set("spark.executor.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")

    return SparkSession \
        .builder \
        .config(conf=conf) \
        .appName(app_name) \
        .getOrCreate()
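These extraJavaOptions are read when the JVM starts, so the programmatic variants only help when the script is launched as a fresh python process; inside an already-running pyspark shell the driver JVM is already up and the driver option can no longer change, which is why the spark-defaults.conf route (or relaunching the shell) is the safer bet. A minimal end-to-end sketch under that assumption, reusing _build_spark_session from above (the script name is just illustrative):

# run as: python repro.py  (fresh process, so the driver JVM picks up the flag)
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = _build_spark_session("New_App_grouped_map")

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))


@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # subtract the per-group mean from each value
    return pdf.assign(v=pdf.v - pdf.v.mean())


df.groupby("id").apply(subtract_mean).show()  # should now run without the Netty error

(On Spark 3.x the same grouped-map logic can also be written as df.groupby("id").applyInPandas(subtract_mean, schema="id long, v double") with an undecorated function, which avoids the deprecated PandasUDFType.)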