PySpark checkpoint fails on local machine

I've just started learning PySpark, running standalone on my local machine, and I can't get checkpointing to work. I've boiled the script down to:

spark = SparkSession.builder.appName("PyTest").master("local[*]").getOrCreate()

spark.sparkContext.setCheckpointDir("/RddCheckPoint")
df = spark.createDataFrame(["10","11","13"], "string").toDF("age")
df.checkpoint()
and I get this output:

>>> spark = SparkSession.builder.appName("PyTest").master("local[*]").getOrCreate()
>>>
>>> spark.sparkContext.setCheckpointDir("/RddCheckPoint")
>>> df = spark.createDataFrame(["10","11","13"], "string").toDF("age")
>>> df.checkpoint()
20/01/24 15:26:45 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "N:\spark\python\pyspark\sql\dataframe.py", line 463, in checkpoint
    jdf = self._jdf.checkpoint(eager)
  File "N:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\java_gateway.py", line 1286, in __call__
  File "N:\spark\python\pyspark\sql\utils.py", line 98, in deco
    return f(*a, **kw)
  File "N:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o71.checkpoint.
: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645)
        at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230)
        at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435)
        at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910)
        at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678)
        at org.apache.spark.rdd.ReliableCheckpointRDD.getPartitions(ReliableCheckpointRDD.scala:74)
        at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:272)
        at org.apache.spark.rdd.ReliableCheckpointRDD$.writeRDDToCheckpointDirectory(ReliableCheckpointRDD.scala:179)
        at org.apache.spark.rdd.ReliableRDDCheckpointData.doCheckpoint(ReliableRDDCheckpointData.scala:59)
        at org.apache.spark.rdd.RDDCheckpointData.checkpoint(RDDCheckpointData.scala:75)
        at org.apache.spark.rdd.RDD.$anonfun$doCheckpoint$1(RDD.scala:1801)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDD.doCheckpoint(RDD.scala:1791)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2118)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2137)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2156)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2181)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1227)
        at org.apache.spark.sql.Dataset.$anonfun$checkpoint$1(Dataset.scala:689)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3472)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3468)
        at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:680)
        at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:643)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Unknown Source)
The error doesn't give any detail about why it failed. I suspect I'm missing some Spark configuration, but I'm not sure what...

You get this error because the checkpoint directory was never created, or because you don't have permission to write to it (here the checkpoint directory sits directly under the root, "/").
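As a quick sanity check before calling setCheckpointDir, you can confirm from Python that the directory exists and is writable (a minimal sketch; the path below is only an example, swap in the location you actually use):

import os

# Example location only; use whatever path you pass to setCheckpointDir.
checkpoint_dir = "C:/tmp/RddCheckPoint"

# Create the directory if it is missing, then check that this process can write to it.
os.makedirs(checkpoint_dir, exist_ok=True)
print("exists:", os.path.isdir(checkpoint_dir))
print("writable:", os.access(checkpoint_dir, os.W_OK))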


Thanks. I tried that, but it still fails. Interestingly, the folder does get created and it does have content: Directory of N:\tmp\RddCheckPoint\0769c686-9b14-415d-92b6-1da98dc9a4dd\rdd-9
25/01/2020 17:12    .
25/01/2020 17:12    ..
25/01/2020 17:12    .part-00000.crc
etc.

As I can see, it does have content, the .crc files. What content were you expecting? This is a checkpoint, not a .write(); the data is for Spark's internal use only and can't be read back as CSV. I tried exactly the code from my answer and it works perfectly.

I'm not expecting any particular content. I only mean that the content shows write access isn't the (remaining) problem, yet the exception and stack trace are still thrown... So if it can write to the directory, what is causing the error?

See below: create a new checkpoint directory (preferably not under the root) and run my code snippet. Maybe you have a problem with the metadata, because I run this snippet without any error. @ any news?
import os

from pyspark.sql import SparkSession

# Create the checkpoint directory first (a relative path, under the current working directory).
os.mkdir("RddCheckPoint")

spark = SparkSession.builder.appName("PyTest").master("local[*]").getOrCreate()

# Point Spark at the directory we just created.
spark.sparkContext.setCheckpointDir("RddCheckPoint")

df = spark.createDataFrame(["10", "11", "13"], "string").toDF("age")
df.checkpoint()
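Note that Dataset.checkpoint() returns a new DataFrame with the truncated lineage, so in a real job you would keep the return value (df = df.checkpoint()). If the snippet above runs cleanly, one quick way to see what Spark actually wrote (a small sketch, reusing the RddCheckPoint directory name from the snippet) is to walk the checkpoint directory:

import os

# Walk the checkpoint directory created above and print everything Spark wrote.
# Expect Spark-internal files (a UUID subfolder containing rdd-*/part-* files and
# .crc checksums), not something readable as CSV.
for root, _dirs, files in os.walk("RddCheckPoint"):
    for name in files:
        print(os.path.join(root, name))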