Python: how do I save a Spark stream to the local PC and to HDFS?
I am trying to stream this data, but I cannot save it as tuples either on the local disk or in HDFS.

from pyspark import SparkConf, SparkContext
from operator import add
import sys
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
## Constants
APP_NAME = "PythonStreamingDirectKafkaWordCount"
##OTHER FUNCTIONS/CLASSES
def main():
    sc = SparkContext(appName="PythonStreamingDirectKafkaWordCount")
    ssc = StreamingContext(sc, 2)

    # Broker list and topic are taken from the command line.
    brokers, topic = sys.argv[1:]
    kvs = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers})
    lines = kvs.map(lambda x: x[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)

    # Note: this helper is never called; the foreachRDD line below is commented out.
    def process(RDD):
        #RDD.pprint()
        kvs2 = RDD.map()
        kvs2.saveAsTextFiles('path')

    #kvs.foreachRDD(lambda x: process(x))
    #kvs1 = kvs.map(lambda x: x)
    kvs.pprint()
    kvs.saveAsTextFiles('path', 'txt')

    ssc.start()
    ssc.awaitTermination()

if __name__ == "__main__":
    main()
On this line:

kvs.saveAsTextFiles('path','txt')

you are storing the raw stream, not the stream with the tuples. Save from counts instead:

counts.saveAsTextFiles('path','txt')

Note that the files are written on the worker nodes, under the directory given as 'path'.

The pySpark API does not support saving to HDFS. For the newest versions, other languages do have saveAsHadoopFiles. Link to
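If the output really must land in HDFS, one commonly used workaround (not part of the original answer) is to write each micro-batch yourself with foreachRDD and the RDD-level saveAsTextFile, which accepts a full hdfs:// (or file://) URI. A minimal sketch, assuming the counts DStream from the question and a hypothetical HDFS path:

def save_batch(time, rdd):
    # Skip empty micro-batches so no empty output directories are created.
    if not rdd.isEmpty():
        # RDD.saveAsTextFile (unlike the DStream method) takes a full URI;
        # the namenode host, port, and directory below are placeholders.
        path = "hdfs://namenode:8020/user/spark/wordcounts/" + time.strftime("%Y%m%d-%H%M%S")
        rdd.saveAsTextFile(path)

counts.foreachRDD(save_batch)

With the plain counts.saveAsTextFiles('path', 'txt') call from the answer, each batch instead produces a directory named path-&lt;timestamp in ms&gt;.txt containing one part file per partition.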