Setting a condition on an S3 file loaded into PySpark


I'm new to PySpark and AWS EMR.

My Pyspark.py script is as simple as the following:

I want to check that the content loaded from the S3 file starts with 123xxxx:

from __future__ import print_function
from pyspark import SparkContext
import sys
if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: wordcount  ", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="WordCount")
    text_file = sc.textFile(sys.argv[1])
    if text_file.startswith('123'):
        counts = text_file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
        counts.saveAsTextFile(sys.argv[2])
        sc.stop()
    else:
        exit(-1)
When I run the step in AWS EMR with these arguments:

s3a://sparkpy/output/a/a.txt s3a://sparkpy/output/a

it fails with an error.
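The error is expected here: sc.textFile returns an RDD of lines, not a string, so the RDD itself has no startswith method. The check has to either pull data back to the driver or be expressed as an RDD operation. A minimal sketch of the driver-side variant, assuming it is enough to inspect just the first line of the file:

# first() brings a single element back to the driver as a plain Python string,
# so startswith can be called on it (note: first() raises on an empty RDD)
first_line = text_file.first()
if first_line.startswith('123'):
    print("input starts with the expected marker")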

Basically, I count the matching lines and compare:

rdd = text_file.filter(lambda x: "gfg" in x)
if rdd.count() > 0:
    counts = text_file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
    counts.saveAsTextFile(sys.argv[2])
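
Putting it together, a complete version of the script with the gate expressed as a filter plus count might look like the sketch below. The x.startswith('123') predicate replaces the "gfg" containment test above, since the question asks for a prefix check rather than containment:

from __future__ import print_function
from pyspark import SparkContext
import sys

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: wordcount <input> <output>", file=sys.stderr)
        sys.exit(-1)
    sc = SparkContext(appName="WordCount")
    text_file = sc.textFile(sys.argv[1])
    # keep only the lines that begin with the expected marker
    matches = text_file.filter(lambda x: x.startswith('123'))
    # count() forces evaluation; a non-zero result means the marker was found
    if matches.count() > 0:
        counts = text_file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
        counts.saveAsTextFile(sys.argv[2])
        sc.stop()
    else:
        sc.stop()
        sys.exit(-1)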