Python: Unable to set up PySpark in a Jupyter notebook


I have successfully installed Python and Anaconda, but I am running into some problems while configuring PySpark:

Python 3.7.6 (default, Jan 8 2020, 20:23:39) 
[MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 
Type "help", "copyright", "credits" or "license" for more information.
20/05/24 13:33:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.0.0-preview2
      /_/
Using Python version 3.7.6 (default, Jan 8 2020 20:23:39) 
SparkSession available as 'spark'. 

>>> 20/05/24 13:34:05 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped.


Can someone help me with this, or point me to a blog on configuring PySpark for Jupyter notebooks?

You can try setting the following environment variables on Windows:

PYSPARK_DRIVER_PYTHON_OPTS as notebook

PYSPARK_DRIVER_PYTHON as jupyter


Once these environment variables are set, typing pyspark in cmd will directly open a Jupyter notebook with PySpark already configured.
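If you prefer, the same two variables can also be set from inside a Python session (a minimal sketch; on Windows you would normally set them once, system-wide, via System Properties or `setx` in cmd):

```python
import os

# Equivalent of the cmd commands:
#   setx PYSPARK_DRIVER_PYTHON jupyter
#   setx PYSPARK_DRIVER_PYTHON_OPTS notebook
# Note: os.environ only affects the current process and its children,
# so this must run before the pyspark launcher is invoked.
os.environ["PYSPARK_DRIVER_PYTHON"] = "jupyter"
os.environ["PYSPARK_DRIVER_PYTHON_OPTS"] = "notebook"

print(os.environ["PYSPARK_DRIVER_PYTHON"])       # jupyter
print(os.environ["PYSPARK_DRIVER_PYTHON_OPTS"])  # notebook
```

With these in place, the `pyspark` script launches Jupyter as the driver instead of the plain Python REPL.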

If I may offer an alternative, you can use Google Colab to run PySpark jobs. It is free and also gives you access to GPUs. You can check out my guide on how to use it and set up PySpark.
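A minimal sketch of the Colab route (assumes a fresh runtime where you first run `!pip install pyspark` in a notebook cell; the app name is just an illustrative placeholder):

```python
# After `!pip install pyspark` in a Colab cell, a local Spark session
# can be started directly; the try/except keeps this runnable even
# where pyspark is not installed yet.
try:
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("local[*]")        # Colab is a single machine: use all its cores
             .appName("colab-demo")     # hypothetical app name for illustration
             .getOrCreate())
    result = spark.range(100).count()   # quick smoke test on a tiny DataFrame
    spark.stop()
except ImportError:
    result = None  # pyspark not installed: run the pip install cell first
```

Because Colab is a single managed machine, `local[*]` is the natural master setting; there is no cluster to connect to.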