
Python: create a new column with the max value based on filtered rows with groupby in PySpark


I have a Spark DataFrame:

import pandas as pd
foo = pd.DataFrame({'id': [1,1,2,2,2], 'col': ['a','b','a','a','b'], 'value': [1,5,2,3,4],
                    'col_b': ['a','c','a','a','c']})
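
Strictly speaking, the snippet above builds a pandas DataFrame; to get the Spark DataFrame I convert it first. A minimal sketch, assuming a local SparkSession is available via getOrCreate():

from pyspark.sql import SparkSession

# Assumption: obtain a local SparkSession (reusing one if it already exists)
spark = SparkSession.builder.getOrCreate()

# Convert the pandas DataFrame into a Spark DataFrame
foo = spark.createDataFrame(foo)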
I want to create a new column with the max of the value column, grouped by id. But I only want the max taken over the rows where col == col_b.

My resulting Spark DataFrame should look like this:

foo = pd.DataFrame({'id': [1,1,2,2,2], 'col': ['a','b','a','a','b'], 'value': [1,5,2,3,4],
                    'max_value': [1,1,3,3,3], 'col_b': ['a','c','a','a','c']})
I tried:

from pyspark.sql import functions as f
from pyspark.sql.window import Window
w = Window.partitionBy('id')
foo = foo.withColumn('max_value', f.max('value').over(w))\
    .where(f.col('col') == f.col('col_b'))
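
For what it's worth, this is roughly what foo.show() prints after that attempt with the data above (a sketch; row order may vary). The max is computed over every row in each id group, and the where then drops the non-matching rows:

foo.show()
# +---+---+-----+-----+---------+
# | id|col|value|col_b|max_value|
# +---+---+-----+-----+---------+
# |  1|  a|    1|    a|        5|
# |  2|  a|    2|    a|        4|
# |  2|  a|    3|    a|        4|
# +---+---+-----+-----+---------+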
But I still end up losing rows this way.

Any ideas?

Use conditional aggregation with the max function:

from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy('id')

# when() without otherwise() yields null for rows where col != col_b,
# and max() ignores nulls, so the window max is computed only over the
# matching rows while every row in the partition is kept.
foo = foo.withColumn('max_value', F.max(F.when(F.col('col') == F.col('col_b'), F.col('value'))).over(w))
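
With the sample data above, show() should then print something like this (a sketch; row order within each partition may vary):

foo.show()
# +---+---+-----+-----+---------+
# | id|col|value|col_b|max_value|
# +---+---+-----+-----+---------+
# |  1|  a|    1|    a|        1|
# |  1|  b|    5|    c|        1|
# |  2|  a|    2|    a|        3|
# |  2|  a|    3|    a|        3|
# |  2|  b|    4|    c|        3|
# +---+---+-----+-----+---------+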