Python: PySpark module not found

I am trying to run a simple PySpark job on YARN. The code is as follows:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
         .setMaster("yarn-client")
         .setAppName("HDFS Filter")
         .set("spark.executor.memory", "1g"))
sc = SparkContext(conf = conf)

inputFile = sc.textFile("hdfs://myserver:9000/1436304078054.json.gz").cache()
matchTerm = "spark"
numMatches = inputFile.filter(lambda line: matchTerm in line).count()
print(numMatches, "lines contain", matchTerm)
I don't know whether the code even works, and that's not really the point here. The problem is that when I run it from inside the Spark directory with the command

./bin/pyspark ../job.py

I get the following error (just a small part of the whole output):


Any idea what I'm doing wrong?

I think you need to set the PYSPARK_PYTHON environment variable to point to the Python installation you are actually using. It looks like the job is not being launched with /usr/bin/python2.7.
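If you just want that one variable set, a minimal sketch looks like this (the interpreter path is an assumption about your machines, not something taken from the question):

import os

# Must run before the SparkContext is created; replace the path with the
# Python interpreter that is actually installed on the driver and the workers.
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python2.7'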

I usually call this function before importing and running pyspark, to make sure everything is set up correctly:

import os
import sys

def configure_spark(spark_home=None, pyspark_python=None):
    spark_home = spark_home or "/path/to/default/spark/home"
    os.environ['SPARK_HOME'] = spark_home

    # Add the PySpark directories to the Python path:
    sys.path.insert(1, os.path.join(spark_home, 'python'))
    sys.path.insert(1, os.path.join(spark_home, 'python', 'pyspark'))
    sys.path.insert(1, os.path.join(spark_home, 'python', 'build'))

    # If PySpark isn't specified, use currently running Python binary:
    pyspark_python = pyspark_python or sys.executable
    os.environ['PYSPARK_PYTHON'] = pyspark_python
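One possible way to use it (a sketch; the spark_home value is a placeholder, point it at your own Spark installation):

# Hypothetical usage: fix up the environment first, then import pyspark.
configure_spark(spark_home="/path/to/spark",
                pyspark_python="/usr/bin/python2.7")

from pyspark import SparkConf, SparkContext
sc = SparkContext(conf=SparkConf()
                  .setMaster("yarn-client")
                  .setAppName("HDFS Filter"))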

For me, the fix was to add a few extra settings to the SparkConf; they seem to ensure that the workers can find the PySpark and Py4J modules:

conf = (SparkConf()
     .setMaster("yarn-client")
     .setAppName("HDFS Filter")
     .set("spark.executor.memory", "1g")
     .set('spark.yarn.dist.files','file:/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip,file:/usr/hdp/2.3.2.0-2950/spark/python/lib/py4j-0.8.2.1-src.zip')
     .setExecutorEnv('PYTHONPATH','pyspark.zip:py4j-0.8.2.1-src.zip'))
You will need to edit the paths to match your system.
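With that conf in place, the rest of the job from the question stays the same (note that the HDP 2.3.2 paths in the conf above are specific to that distribution and Spark build, so adjust them before copying this):

sc = SparkContext(conf=conf)

# Same filter job as in the question.
inputFile = sc.textFile("hdfs://myserver:9000/1436304078054.json.gz").cache()
matchTerm = "spark"
numMatches = inputFile.filter(lambda line: matchTerm in line).count()
print(numMatches, "lines contain", matchTerm)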
