Optimizing big data processing in Python/PySpark


Not really a problem -> looking for advice.

I am working on 20 GB + 6 GB = 26 GB of csv files with a 1+3 cluster (1 master, 3 workers), each node with 16 GB of RAM.

This is how I am doing it:

from pyspark import StorageLevel

df = spark.read.csv()   #20gb
df1 = spark.read.csv()  #6gb
df_merged= df.join(df1,'name','left') ###merging 
df_merged.persist(StorageLevel.MEMORY_AND_DISK) ##if I do MEMORY_ONLY will I gain more performance?
print('No. of records found: ',df_merged.count())  ##just ensure persist by calling an action
df_merged.registerTempTable('table_satya')
query_list= [query1,query2,query3]  ###sql query string to be fired
city_list = [city1, city2,city3...total 8 cities]
file_index=0 ###will create files based on increasing index
for query_str in query_list:
   result = spark.sql(query_str) #ex: select * from table_satya where date >= '2016-01-01'
   #result.persist()  ###will it increase performance?
   for city in city_list:
        df_city = result.where(result.city_name==city)
        #store as csv file(pandas style single file)
        df_city.toPandas().to_csv('file_'+str(file_index)+'.csv',index=False)
        file_index += 1

df_merged.unpersist()  ###do I even need this, or can Spark handle it internally?
Currently this takes a very long time:

#persist (on count()) - 34 min
#each result (on firing each sql query) - around 2*8 = 16 min of toPandas() ops
#    (each toPandas().to_csv() takes around 2 min)
#for 3 queries: 16*3 = 48 min
#total: 34 + 48 = 82 min  ###need optimization seriously
So, can anyone suggest how to optimize the above process for better performance (in both time and memory)?

The reason I am worried: I used to do the above with Python pandas (on a single 64 GB machine, with serialized pickle data) and I could finish it in 8-12 minutes. Since my data volume seems to keep growing, I need to adopt a technology like Spark.


Thanks in advance. :)

I think your best bet is to cut your source data down to size. You mention that your source data has about 90 cities, but you are only interested in 8 of them. Filter out the cities you don't want and keep the ones you do want in separate csv files:

import itertools
import csv

city_list = [city1, city2,city3...total 8 cities]

with open('f1.csv', 'rb') as f1, open('f2.csv', 'rb') as f2:
    r1, r2 = csv.reader(f1), csv.reader(f2)
    header = next(r1)
    next(r2) # discard headers in second file
    city_col = header.index('city_name')
    city_files = []
    city_writers = {}
    try:
        for city in city_list:
            f = open(city+'.csv', 'wb')
            city_files.append(f)
            writer = csv.writer(f)
            writer.writerow(header)
            city_writers[city] = writer
        for row in itertools.chain(r1, r2):
            city_name = row[city_col]
            if city_name in city_writers:
                city_writers[city_name].writerow(row)
    finally:
        for f in city_files:
            f.close()

After that, iterate over each city, create a dataframe for that city, and run your three queries over it in a nested loop. Each dataframe should have no problem fitting in memory, and the queries should run quickly because they operate on a much smaller set of data.
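
A minimal sketch of that second step, assuming the per-city csv files produced above share one schema and that query1/query2/query3 are the question's query strings with any city filter removed (both are assumptions, not code from the answer):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

city_list = [city1, city2, city3]      # placeholder: the same 8 cities as above
query_list = [query1, query2, query3]  # placeholder: the sql query strings, minus any city filter

file_index = 0
for city in city_list:
    # each per-city file is a small slice of the 26 GB, so it fits in memory easily
    df_city = spark.read.csv(city + '.csv', header=True, inferSchema=True)
    df_city.registerTempTable('table_satya')
    for query_str in query_list:
        result = spark.sql(query_str)  # e.g. select * from table_satya where date >= '2016-01-01'
        result.toPandas().to_csv('file_' + str(file_index) + '.csv', index=False)
        file_index += 1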

Any time you serialize to disk you incur a huge I/O penalty. The power of Spark is in using only memory to keep things fast. You need to persist in memory and make sure you have enough of it for your data. You also need to make sure your environment is configured correctly... transparent huge pages should be turned off and swappiness set to 0 or 1. Does your raw data contain only the eight cities you are looking for, or does it have more?

Is result.where('city_name'==city) correct? That seems to amount to result.where(False). Did you mean something like result.where("city_name='%s'" % city)?

@StevenRumbalski No, I have many more cities (90 approx.), and it is result.where(result.city_name==city) - I have corrected it. @tadamhicks how do I set "transparent huge pages off, swappiness 0 or 1", and which settings control that? Please help. Also, does your comment mean that if I use df_merged.persist(StorageLevel.MEMORY_ONLY) I will gain performance???
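
For what it is worth, here is a small sketch of the corrected filter and the MEMORY_ONLY persist discussed above, reusing the question's result/city/df_merged variables; whether MEMORY_ONLY actually helps depends on the merged data fitting into executor RAM:

from pyspark import StorageLevel
from pyspark.sql import functions as F

# A Column expression is evaluated row by row on the executors; a plain Python
# comparison like 'city_name' == city is evaluated to a bool on the driver
# before Spark ever sees the column, which is why it cannot work as a filter.
df_city = result.where(F.col('city_name') == city)

# MEMORY_ONLY keeps partitions purely in RAM and recomputes any partition that
# does not fit instead of spilling it to disk, so it only beats MEMORY_AND_DISK
# when the persisted data actually fits in memory.
df_merged.persist(StorageLevel.MEMORY_ONLY)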