
What is the most efficient way to reduce a dataframe in pyspark?


I have the following dataframe; the header row and first data row look like this:

['station_id', 'country', 'temperature', 'time']
['12', 'usa', '22', '12:04:14']
I want to display, in descending order, the average temperature of the top 100 stations in 'france'.


What is the best (most efficient) way to do this in pyspark?

Your query translates into Spark SQL as follows:

from pyspark.sql.functions import mean, desc

(df.filter(df["country"] == "france")               # only French stations
   .groupBy("station_id")                           # group by station
   .agg(mean("temperature").alias("average_temp"))  # compute the average
   .orderBy(desc("average_temp"))                   # order by average, descending
   .take(100))                                      # return the first 100 rows
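The same query can also be written as a literal SQL string. A minimal sketch, assuming spark is the active SparkSession; the view name "stations" is chosen here only for illustration:

df.createOrReplaceTempView("stations")
spark.sql("""
    SELECT station_id, AVG(temperature) AS average_temp
    FROM stations
    WHERE country = 'france'
    GROUP BY station_id
    ORDER BY average_temp DESC
    LIMIT 100
""").show(100)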
Using the RDD API and anonymous functions:

(df.rdd
   .filter(lambda x: x[1] == "france")                    # only French stations
   .map(lambda x: (x[0], float(x[2])))                    # select station & temp (cast, since the sample row stores it as a string)
   .mapValues(lambda x: (x, 1))                           # pair each temperature with a count of 1
   .reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))  # sum temperatures and counts per station
   .mapValues(lambda x: x[0] / x[1])                      # sum / count = average
   .sortBy(lambda x: x[1], ascending=False)               # sort by average, descending
   .take(100))
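Both snippets assume an existing DataFrame df with the question's schema. A minimal, self-contained setup for trying them out (every value except the question's sample row is made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame(
    [("12", "usa", "22", "12:04:14"),       # the row from the question
     ("31", "france", "18.5", "12:04:20"),  # made-up French rows for testing
     ("31", "france", "19.5", "12:05:20")],
    ["station_id", "country", "temperature", "time"],
)

As a rule of thumb, prefer the DataFrame version: it runs through Spark's Catalyst optimizer, whereas the RDD version evaluates Python lambdas row by row on the executors.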

What you have tried, like filter, map, reduceByKey, and sortBy, should all work. Do you mean the column headers and the first row? Yes, the first row is the column headers. Without spark sql? You mean using the rdd API? Why, if you have a DataFrame? Actually, that is what my instructions require. My code is very messy, I defined my own functions, and I figured there must be a simpler way, which is why I am asking here.