
Apache Spark - unable to save a DataFrame to disk


I am running Spark in standalone mode with a Hive catalog. I am trying to load data from an external text file and then save it back to disk in Parquet format:

rdd = sc \
    .textFile('/data/source.txt', NUM_SLICES) \
    .map(lambda x: (x[:5], x[6:12], gensim.utils.simple_preprocess(x[13:])))

schema = StructType([
    StructField('c1', StringType(), False),
    StructField('c2', StringType(), False),
    StructField('c3', ArrayType(StringType(), True), False),
])

data = sql_context.createDataFrame(rdd, schema)
data.write.mode('overwrite').parquet('/data/some_dir')
When I try to read the file back, it fails with:

AnalysisException: 'Unable to infer schema for Parquet. It must be specified manually.;'
It looks like it simply cannot resolve the location or the file's attributes.
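Some background on where that message comes from: Parquet stores its schema in the footer of each data file, so on read Spark scans the target directory for part files and inspects their footers. The sketch below is a simplified illustration of that check (an assumption, not Spark's actual implementation): when the directory the driver sees contains only the `_SUCCESS` marker and no `part-*.parquet` files, there is nothing to infer a schema from, which produces exactly this error.

```python
import os
import tempfile

def infer_parquet_schema(path):
    """Simplified stand-in for Spark's schema inference: find Parquet data files.

    Real code would parse the footer of one of these files; here we only
    reproduce the failure mode when no data files are visible.
    """
    data_files = [f for f in os.listdir(path)
                  if f.endswith('.parquet') and not f.startswith(('.', '_'))]
    if not data_files:
        raise ValueError(
            'Unable to infer schema for Parquet. It must be specified manually.')
    return data_files

# The driver's view of the output directory: only the success marker exists.
out = tempfile.mkdtemp()
open(os.path.join(out, '_SUCCESS'), 'w').close()
try:
    infer_parquet_schema(out)
except ValueError as e:
    print(e)  # Unable to infer schema for Parquet. It must be specified manually.
```

Passing a schema explicitly (e.g. `spark.read.schema(schema).parquet(path)`) silences the inference step, but in this situation it would only lead to an empty or partial result, since the data files themselves are still not visible to the driver.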

Now, if I look at that location across all 3 worker nodes, it looks like this:

clush -ab 'locate some_file'
---------------
master
---------------
/data/some_file
/data/some_file/._SUCCESS.crc
/data/some_file/_SUCCESS
---------------
worker1
---------------
/data/some_file
/data/some_file/_temporary
/data/some_file/_temporary/0
/data/some_file/_temporary/0/_temporary
/data/some_file/_temporary/0/task_20180511211832_0010_m_000000
/data/some_file/_temporary/0/task_20180511211832_0010_m_000039
/data/some_file/_temporary/0/task_20180511211832_0010_m_000000/.part-00000-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000000/part-00000-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000039/.part-00039-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000039/part-00039-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
---------------
worker2
---------------
/data/some_file
/data/some_file/_temporary
/data/some_file/_temporary/0
/data/some_file/_temporary/0/_temporary
/data/some_file/_temporary/0/task_20180511211832_0010_m_000011
/data/some_file/_temporary/0/task_20180511211832_0010_m_000017
/data/some_file/_temporary/0/task_20180511211832_0010_m_000029
/data/some_file/_temporary/0/task_20180511211832_0010_m_000038
/data/some_file/_temporary/0/task_20180511211832_0010_m_000011/.part-00011-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000011/part-00011-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000017/.part-00017-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000017/part-00017-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000029/.part-00029-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000029/part-00029-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000038/.part-00038-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000038/part-00038-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
---------------
worker3
---------------
/data/some_file
/data/some_file/_temporary
/data/some_file/_temporary/0
/data/some_file/_temporary/0/_temporary
/data/some_file/_temporary/0/task_20180511211832_0010_m_000040
/data/some_file/_temporary/0/task_20180511211832_0010_m_000043
/data/some_file/_temporary/0/task_20180511211832_0010_m_000046
/data/some_file/_temporary/0/task_20180511211832_0010_m_000040/.part-00040-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000040/part-00040-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000043/.part-00043-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000043/part-00043-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
/data/some_file/_temporary/0/task_20180511211832_0010_m_000046/.part-00046-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet.crc
/data/some_file/_temporary/0/task_20180511211832_0010_m_000046/part-00046-1b2764a6-28a3-4ba2-9493-766074eef4d5-c000.snappy.parquet
I don't understand why it saves this to `_temporary` instead of the permanent folder.

Let me know if you need any additional context.


Thanks

TL;DR To save and load data in distributed mode you need a distributed file system. Local storage is not enough.

"I don't understand why it saves this to `_temporary` instead of the permanent folder."

That's because you don't have a distributed file system. In that case each executor can complete its own part of the work, but Spark cannot finalize the job correctly.
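The finalization step is what strands the files. Spark's Parquet writer uses Hadoop's FileOutputCommitter protocol: each task writes its part file under `<out>/_temporary/0/task_*/`, and when all tasks succeed the driver promotes those files into `<out>` and writes the `_SUCCESS` marker. The sketch below is a plain-Python simplification of that commit step (an assumption, not Spark's source): when the driver's local filesystem does not contain the task directories the workers wrote (because there is no shared storage), the promotion finds nothing, and only `_SUCCESS` appears, exactly as in the master's listing above.

```python
import os
import shutil
import tempfile

def commit_job(out_dir):
    """Simplified job commit: promote every task's files from _temporary
    into out_dir, then write the _SUCCESS marker."""
    tmp = os.path.join(out_dir, '_temporary', '0')
    if os.path.isdir(tmp):
        for task in os.listdir(tmp):
            task_dir = os.path.join(tmp, task)
            if not os.path.isdir(task_dir) or task == '_temporary':
                continue
            for f in os.listdir(task_dir):
                shutil.move(os.path.join(task_dir, f), os.path.join(out_dir, f))
        shutil.rmtree(os.path.join(out_dir, '_temporary'))
    # The success marker is written once the commit "finishes", even if it
    # found no task output to promote.
    open(os.path.join(out_dir, '_SUCCESS'), 'w').close()

# The driver's view when workers wrote to their own local disks: the task
# directories live on other machines, so _temporary is missing here and the
# commit produces only the marker.
out = tempfile.mkdtemp()
commit_job(out)
print(sorted(os.listdir(out)))  # ['_SUCCESS']
```

On a shared filesystem the same commit sees every task directory and the part files all land next to `_SUCCESS`, which is why adding the NAS mount fixes the job.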

Moreover, since each executor can access only a part of the result, the data cannot be loaded back with Spark either.

Actually, I see why that is. Still, I was hoping that simulating distributed storage by making the locally attached volumes identical across nodes would be enough, but apparently it is not. After adding a NAS mount to every node (which is now my distributed file system), everything works. It still feels somewhat unintuitive to me and I'd like to understand it better, so any useful reading you can point me to (beyond what is easily googleable) would help. I'm not sure there is any explicit documentation on this. The write is just odd behavior (I've seen it explained before, but I can't find that question). For reads, every node must be able to access all the blocks, and that condition is not met when the data sits piecewise on local storage.