Apache Spark: splitting the rows of a Dataset based on a column value


I am using Spark 3.1.1 with Java 8, and I am trying to split a Dataset according to the value of one of its numeric columns (above or below a threshold), but only when certain string columns of the rows hold the same values. I am trying something like this:

// pl and pr are assumed to be buffers for the two halves of the split
List<Row> pl = new ArrayList<>();
List<Row> pr = new ArrayList<>();

Iterator<Row> iter2 = partition.toLocalIterator();
while (iter2.hasNext()) {
    Row item = iter2.next();
    // getColVal is a function that gets the value given a column
    String numValue = getColVal(item, dim);
    if (Integer.parseInt(numValue) < threshold)
        pl.add(item);
    else
        pr.add(item);
}
If the threshold is 30, the first and the last rows will form two datasets, since their first and fourth columns are the same; otherwise, the split is not possible.

EDIT: the resulting output would be

    abc,9,40,A
    abc,7,50,A


    cde,4,20,B
    cde,3,25,B

I mostly use pyspark, but you can adapt this to your environment.

## assumes an active SparkSession available as `spark` (e.g. the pyspark shell)
import pandas as pd
from pyspark.sql import functions as F

## could add some conditional logic or just always output 2 data frames where
##   one would be empty

print("pdf - two dataframe")
## create pandas dataframe
pdf = pd.DataFrame({'col1':['abc','abc','cde','cde'],'col2':[9,7,4,3],'col3':[40,50,20,25],'col4':['A','A','B','B']})
print( pdf )

## move it to spark
print("sdf")
sdf = spark.createDataFrame(pdf) 

sdf.show()
# +----+----+----+----+
# |col1|col2|col3|col4|
# +----+----+----+----+
# | abc|   9|  40|   A|
# | abc|   7|  50|   A|
# | cde|   4|  20|   B|
# | cde|   3|  25|   B|
# +----+----+----+----+




## filter
pl = sdf.filter('col3 <= 30')\
        .groupBy("col1","col4").agg(F.sum('col2').alias('sumC2'))
pr = sdf.filter('col3 > 30')\
        .groupBy("col1","col4").agg(F.sum('col2').alias('sumC2'))
print("pl")
pl.show()
# +----+----+-----+
# |col1|col4|sumC2|
# +----+----+-----+
# | cde|   B|    7|
# +----+----+-----+


print("pr")
pr.show()
# +----+----+-----+
# |col1|col4|sumC2|
# +----+----+-----+
# | abc|   A|   16|
# +----+----+-----+


print("pdf - one dataframe")
## create pandas dataframe
pdf = pd.DataFrame({'col1':['abc','abc','cde','cde'],'col2':[9,7,4,3],'col3':[11,29,20,25],'col4':['A','A','B','B']})
print( pdf )

## move it to spark
print("sdf")
sdf = spark.createDataFrame(pdf) 
sdf.show()
# +----+----+----+----+
# |col1|col2|col3|col4|
# +----+----+----+----+
# | abc|   9|  11|   A|
# | abc|   7|  29|   A|
# | cde|   4|  20|   B|
# | cde|   3|  25|   B|
# +----+----+----+----+



pl = sdf.filter('col3 <= 30')\
        .groupBy("col1","col4").agg( F.sum('col2').alias('sumC2') )
pr = sdf.filter('col3 > 30')\
        .groupBy("col1","col4").agg(F.sum('col2').alias('sumC2'))

print("pl")
pl.show()
# +----+----+-----+
# |col1|col4|sumC2|
# +----+----+-----+
# | abc|   A|   16|
# | cde|   B|    7|
# +----+----+-----+

print("pr")
pr.show()
# +----+----+-----+
# |col1|col4|sumC2|
# +----+----+-----+
# +----+----+-----+

If I understand correctly, you want to obtain 2 datasets when col3 is above or below the threshold of 30? Could you also add what the resulting datasets should look like? And could you provide a sample output for the case where the threshold is not met?

Yes, exactly 2 datasets if col3 crosses the threshold of 30; if the threshold is not met, the original data is returned. Check my edit for the sample output.

Could you explain these lines, if possible in Java?
pl = sdf.filter('col3 <= 30').groupBy("col1","col4").agg(F.sum('col2').alias('sumC2'))
pr = sdf.filter('col3 > 30').groupBy("col1","col4").agg(F.sum('col2').alias('sumC2'))

Found this in `JavaDataFrameSuite.testExecution()`:
@Test
public void testExecution() {
    Dataset<Row> df = spark.table("testData").filter("key = 1");
    Assert.assertEquals(1, df.select("key").collectAsList().get(0).get(0));
}

OK, so select will act as a filter, something like .select("col3 > threshold"), but what does collectAsList().get(0).get(0) do? I am trying something like this:
Dataset<Row> filteredDF = dF.filter("col2 > 60").groupBy("col0", "col1", "col3").agg() but I don't know what to put in agg().
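A note on those comments: collectAsList() simply brings the filtered rows back to the driver as a java.util.List, and .get(0).get(0) reads the first field of the first returned row, so it is not needed for the split itself. As a minimal, hedged Java 8 sketch of the same filter-plus-aggregate idea from the answer above (the class name, the SparkSession setup and the data.csv path are placeholders, not part of the original thread):

import static org.apache.spark.sql.functions.sum;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SplitByThreshold {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("split-by-threshold").getOrCreate();

        // hypothetical input: a Dataset<Row> with the columns col1, col2, col3, col4 from the example
        Dataset<Row> sdf = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("data.csv");   // placeholder path

        // rows at or below the threshold, aggregated per (col1, col4) group
        Dataset<Row> pl = sdf.filter("col3 <= 30")
                .groupBy("col1", "col4")
                .agg(sum("col2").alias("sumC2"));   // any aggregate expression(s) go into agg()

        // rows above the threshold
        Dataset<Row> pr = sdf.filter("col3 > 30")
                .groupBy("col1", "col4")
                .agg(sum("col2").alias("sumC2"));

        pl.show();
        pr.show();

        spark.stop();
    }
}

If the goal is only to split the rows, the two filter() calls alone already produce the two datasets; the groupBy/agg part just reproduces the sumC2 column shown in the answer's output.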
print("pdf - filter by mean")
## create pandas dataframe
pdf = pd.DataFrame({'col1':['abc','abc','cde','cde'],'col2':[9,7,4,3],'col3':[40,50,20,25],'col4':['A','A','B','B']})
print( pdf )

## move it to spark
print("sdf")
sdf = spark.createDataFrame(pdf) 
sdf.show()
# +----+----+----+----+
# |col1|col2|col3|col4|
# +----+----+----+----+
# | abc|   9|  40|   A|
# | abc|   7|  50|   A|
# | cde|   4|  20|   B|
# | cde|   3|  25|   B|
# +----+----+----+----+

w = Window.partitionBy("col1").orderBy("col2")
## add another column: the mean of col2 within each col1 partition
## (a running mean, because the window spec has an orderBy)
sdf = sdf.withColumn('mean_c2', F.mean('col2').over(w))

## filter by the dynamic mean
pr = sdf.filter('col2 > mean_c2')
pr.show()

# +----+----+----+----+-------+
# |col1|col2|col3|col4|mean_c2|
# +----+----+----+----+-------+
# | cde|   4|  20|   B|    3.5|
# | abc|   9|  40|   A|    8.0|
# +----+----+----+----+-------+
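
For completeness, a hedged Java 8 sketch of the same window logic, continuing inside the main method of the hypothetical SplitByThreshold class above (pr2 is just a placeholder name):

// extra imports needed for the window version:
//   import static org.apache.spark.sql.functions.col;
//   import static org.apache.spark.sql.functions.mean;
//   import org.apache.spark.sql.expressions.Window;
//   import org.apache.spark.sql.expressions.WindowSpec;

// same hypothetical sdf as above, columns col1..col4
WindowSpec w = Window.partitionBy("col1").orderBy("col2");

// add the per-partition mean of col2 (a running mean, because of the orderBy),
// then keep only the rows whose col2 lies above it
Dataset<Row> pr2 = sdf
        .withColumn("mean_c2", mean("col2").over(w))
        .filter(col("col2").gt(col("mean_c2")));

pr2.show();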