Converting a nested list to a DataFrame: PySpark


I am trying to convert a nested list to a DataFrame by following the answer in the link below,

but I get this error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-147-780a8d7196df> in <module>()
----> 5 spark.createDataFrame([R(i, x) for i, x in enumerate(my_data)]).show()

F:\spark\spark\python\pyspark\sql\session.py in createDataFrame(self, data, schema, samplingRatio, verifySchema)

--> 689             rdd, schema = self._createFromLocal(map(prepare, data), schema)

F:\spark\spark\python\pyspark\sql\session.py in _createFromLocal(self, data, schema)

--> 424         return self._sc.parallelize(data), schema

F:\spark\spark\python\pyspark\context.py in parallelize(self, c, numSlices)

--> 484         jrdd = self._serialize_to_jvm(c, numSlices, serializer)

F:\spark\spark\python\pyspark\context.py in _serialize_to_jvm(self, data, parallelism, serializer)

--> 493         tempFile = NamedTemporaryFile(delete=False, dir=self._temp_dir)


~\Anaconda3\lib\tempfile.py in NamedTemporaryFile(mode, buffering, encoding, newline, suffix, prefix, dir, delete)
    547         flags |= _os.O_TEMPORARY
    548 
--> 549     (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
    550     try:
    551         file = _io.open(fd, mode, buffering=buffering,

~\Anaconda3\lib\tempfile.py in _mkstemp_inner(dir, pre, suf, flags, output_type)
    258         file = _os.path.join(dir, pre + name + suf)
    259         try:
--> 260             fd = _os.open(file, flags, 0o600)
    261         except FileExistsError:
    262             continue    # try again

FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\*****\\AppData\\Local\\Temp\\spark-e340269d-a29e-4b95-90d3-c424a04fcb0a\\pyspark-f7fce557-e11b-47c9-b7a5-81e72a360b36\\tmp7n0s97t2'
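For reference, the attempt boils down to something like this minimal sketch (the my_data values and the Row factory R are placeholders I am assuming; only the createDataFrame line is taken verbatim from the traceback):

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical nested list standing in for my_data from the traceback
my_data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Row factory named R, matching the R(i, x) call shown in the traceback
R = Row("id", "values")

# Each inner list becomes one row, keyed by its enumeration index
spark.createDataFrame([R(i, x) for i, x in enumerate(my_data)]).show()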

I got the same error from a Jupyter notebook running PySpark.
It worked after restarting the notebook kernel.

The error does not seem related to what you are trying to do. Try creating a simple DataFrame first:
spark.createDataFrame([(1,), (2,)], ["col1"]).show()
First of all, thanks for your reply. I tried creating the simple DataFrame but got the same error. When I tried running pyspark again from a new folder, I created the DataFrame from the nested list without any problem, but it does not work in the first folder. What is the problem? @Sidhom please give details about how you are running this, and what do you mean by the first folder? In the Jupyter notebook, click Kernel > Restart & Clear Output.
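A self-contained version of the sanity check suggested above might look like the sketch below (just a session plus a two-row DataFrame; if even this raises the FileNotFoundError, the problem is Spark's missing temp directory rather than the nested-list logic):

from pyspark.sql import SparkSession

# Reuse or create a session; the traceback fails while Spark serializes
# local data to a temporary file, before any real computation starts.
spark = SparkSession.builder.getOrCreate()

# A trivial one-column DataFrame. If this also raises FileNotFoundError,
# the nested-list code is not the cause, and restarting the notebook
# kernel (which recreates the Spark temp directory) is worth trying.
spark.createDataFrame([(1,), (2,)], ["col1"]).show()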