Spark/Scala: how to get the rows in the top X%?
I have a dataframe:
val df = Seq(
("q1", "a1", 0.31, "food"), ("q1", "a2", 0.01, "food"), ("q1", "a3", 0.51, "food"),
("q2", "b1", 0.01, "tools"), ("q2", "b2", 0.03, "tools"), ("q2", "b3", 0.01, "tools")
).toDF("id","part", "ratio", "category")
df.show(false)
+---+----+-----+--------+
|id |part|ratio|category|
+---+----+-----+--------+
|q1 |a1  |0.31 |food    |
|q1 |a2  |0.01 |food    |
|q1 |a3  |0.51 |food    |
|q2 |b1  |0.01 |tools   |
|q2 |b2  |0.03 |tools   |
|q2 |b3  |0.01 |tools   |
+---+----+-----+--------+
I am trying to find a per-category threshold based on the outliers within each category. For example: in food, 66% of the ratios are greater than 0.30, while in tools almost all of them are greater than 0.0. How can I find a threshold such that most of the ids end up in the larger bucket?
Any suggestions would be helpful.
Attempt:
spark.sql("select category, percentile_approx(ratio, 0.2) as threshold from df group by category order by category").show(1000, false)
+--------+---------+
|category|threshold|
+--------+---------+
|food |0.31 |
|tools |0.01 |
+--------+---------+
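For intuition, the 0.2 quantile can also be computed exactly on this small sample in plain Scala. Below is a minimal sketch using the nearest-rank definition (the helper name is mine); note that `percentile_approx` builds an approximate sketch of the distribution, so its result above need not match an exact computation:

```scala
// Exact nearest-rank percentile: the smallest value such that
// at least p * n of the values are <= it.
def nearestRankPercentile(xs: Seq[Double], p: Double): Double = {
  val sorted = xs.sorted
  val rank = math.ceil(p * sorted.size).toInt.max(1)
  sorted(rank - 1)
}

val ratios = Map(
  "food"  -> Seq(0.31, 0.01, 0.51),
  "tools" -> Seq(0.01, 0.03, 0.01)
)

ratios.foreach { case (cat, xs) =>
  println(s"$cat -> ${nearestRankPercentile(xs, 0.2)}")
}
```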
But the problem here is that I need to specify X to get the threshold, whereas what I am looking for is outlier detection.

You can do this by defining an acceptable range for the data via the mean and standard deviation, and then finding the rows that fall outside that range:
// needed imports
import org.apache.spark.sql.functions.{mean, stddev, col}

// define the acceptable range limits by looking at the per-category mean and standard deviation
val statsDF = df
  .groupBy("category")
  .agg(mean("ratio").as("mean"), stddev("ratio").as("stddev"))
  .withColumn("UpperLimit", col("mean") + col("stddev") * 3)
  .withColumn("LowerLimit", col("mean") - col("stddev") * 3)
  .drop("mean", "stddev")

// join statsDF back to the original df and keep the rows outside the acceptable range
val outliersDF = df.join(statsDF, Seq("category"))
  .filter($"ratio" < $"LowerLimit" || $"ratio" > $"UpperLimit")
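Worked through on the sample data, the food limits come out as follows. This is a plain-Scala sketch of the same mean ± 3·stddev arithmetic (no Spark needed; Spark's `stddev` is the sample standard deviation, i.e. `stddev_samp`):

```scala
// Sample standard deviation, matching Spark's stddev / stddev_samp (divides by n - 1).
def sampleStddev(xs: Seq[Double]): Double = {
  val m = xs.sum / xs.size
  math.sqrt(xs.map(x => math.pow(x - m, 2)).sum / (xs.size - 1))
}

val food  = Seq(0.31, 0.01, 0.51)
val mu    = food.sum / food.size   // ≈ 0.2767
val sd    = sampleStddev(food)     // ≈ 0.2517
val upper = mu + 3 * sd            // ≈ 1.0317
val lower = mu - 3 * sd            // ≈ -0.4783
```

With only three points per category, no ratio falls outside mean ± 3σ, so a 3σ rule flags nothing on this sample; a smaller multiplier (e.g. 1σ) or more data per category may be needed before this detects anything.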
I have provided a reference for this solution.