
Scala Spark SQL - check a value across multiple columns

Tags: scala, apache-spark, apache-spark-sql

My status dataset looks like this:

ID    Status1   Status2   Status3   Status4   Status5
1     SUCCESS   SUCCESS   SUCCESS   FAILURE   SUCCESS
2     SUCCESS   FAILURE   SUCCESS   FAILURE   SUCCESS
3     SUCCESS   SUCCESS   SUCCESS   SUCCESS   SUCCESS
4     SUCCESS   FAILURE   SUCCESS   FAILURE   SUCCESS
5     SUCCESS   SUCCESS   SUCCESS   SUCCESS   SUCCESS

I want to select all the rows from this dataset that have "FAILURE" in any of these 5 status columns.

So I would want the result to contain only IDs 1, 2 and 4, because they have FAILURE in one of the status columns.

I guess in SQL we could do something like this:

SELECT * FROM status WHERE "FAILURE" IN (Status1, Status2, Status3, Status4, Status5);
In Spark I know I can do the filter by comparing each status column against "FAILURE" one by one.
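For example, something along these lines (just a rough sketch of what I mean, assuming the five columns are literally named Status1 through Status5; statusDf is a placeholder for the dataset shown above):

import org.apache.spark.sql.functions.col

// Build one boolean expression: Status1 === "FAILURE" OR Status2 === "FAILURE" OR ...
val statusCols = Seq("Status1", "Status2", "Status3", "Status4", "Status5")
val anyFailure = statusCols.map(c => col(c) === "FAILURE").reduce(_ || _)

statusDf.filter(anyFailure).show()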

But I would like to know whether there is a smarter way of doing this in Spark SQL.


Thanks in advance.

Here is a similar example that you can modify and then filter on the new column that gets added. I leave that part to you; here I check for zeros in every column except the first:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = sc.parallelize(Seq(
    ("r1", 0.0, 0.0, 0.0, 0.0),
    ("r2", 6.4, 4.9, 6.3, 7.1),
    ("r3", 4.2, 0.0, 7.2, 8.4),
    ("r4", 1.0, 2.0, 0.0, 0.0)
)).toDF("ID", "a", "b", "c", "d")

// For each column except the first, emit 1 when the value is 0.0 and 0 otherwise,
// then sum the flags into a single per-row count expression.
val count_some_val = df.columns.tail.map(x => when(col(x) === 0.0, 1).otherwise(0)).reduce(_ + _)

val df2 = df.withColumn("some_val_count", count_some_val)
df2.filter(col("some_val_count") > 0).show(false)
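Adapted to the FAILURE case from the question, the same counting idea would look roughly like this (untested sketch; statusDf stands for the asker's dataset with the ID column first, followed by the five Status columns):

// Count, per row, how many status columns equal "FAILURE" and keep rows where at least one does.
val failureCount = statusDf.columns.tail
  .map(c => when(col(c) === "FAILURE", 1).otherwise(0))
  .reduce(_ + _)

statusDf.withColumn("failure_count", failureCount)
  .filter(col("failure_count") > 0)
  .show(false)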
I could not find an easy way to stop as soon as the first match is found, but I remember someone smarter than me showing me this approach with a lazy exists, which I think does stop at the first match it encounters. Something like this, then, which is a different approach that I like:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = sc.parallelize(Seq(
    ("r1", 0.0, 0.0, 0.0, 0.0),
    ("r2", 6.0, 4.9, 6.3, 7.1),
    ("r3", 4.2, 0.0, 7.2, 8.4),
    ("r4", 1.0, 2.0, 0.0, 0.0)
)).toDF("ID", "a", "b", "c", "d")

// For each row, keep the ID and lazily check whether any of the remaining
// columns is 0.0; exists stops at the first match it finds.
df.map{ r => (r.getString(0), r.toSeq.tail.exists(c =>
             c.asInstanceOf[Double] == 0)) }
  .toDF("ID", "ones")
  .show()
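The same lazy-exists idea carries over to the string case from the question, roughly like this (sketch; statusDf again stands for the asker's dataset with the ID column first):

statusDf.map { r =>
    // exists is lazy, so once a FAILURE is found the remaining columns are not inspected
    (r.getString(0), r.toSeq.tail.exists(_ == "FAILURE"))
  }
  .toDF("ID", "hasFailure")
  .filter($"hasFailure")
  .show()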

If you have many columns to check, consider a recursive function that short-circuits on the first match, along these lines:

val df = Seq(
  (1, "T", "F", "T", "F"),
  (2, "T", "T", "T", "T"),
  (3, "T", "T", "F", "T")
).toDF("id", "c1", "c2", "c3", "c4")

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lit, when}

// Returns a Column that is true only when none of the given columns equals elem;
// the generated WHEN chain stops evaluating at the first matching column.
def checkFor(elem: Column, cols: List[Column]): Column = cols match {
  case Nil =>
    lit(true)
  case h :: tail =>
    when(h === elem, lit(false)).otherwise(checkFor(elem, tail))
}

val cols = df.columns.filter(_.startsWith("c")).map(col).toList

df.where(checkFor(lit("F"), cols)).show

// +---+---+---+---+---+
// | id| c1| c2| c3| c4|
// +---+---+---+---+---+
// |  2|  T|  T|  T|  T|
// +---+---+---+---+---+
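Note that checkFor as written keeps the rows in which none of the columns contain the element. For the original question (rows that do contain the value in at least one column) you would simply negate it, roughly:

// Keeps id 1 and id 3 here, i.e. the rows with at least one "F".
df.where(!checkFor(lit("F"), cols)).show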

What is the difference between sql and spark sql? I think they are pretty much the same in this case.

Or with a DataFrame you can look at the lazy-exists answer; isn't short-circuiting missing there as well?

Pretty sure Scala's exists implementation will take advantage of short-circuiting while iterating over the collection (although I am somewhat reluctant to use asInstanceOf[A]).
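A quick plain-Scala check of that claim, outside Spark:

val xs = Seq(1, 0, 2, 3)
var visited = 0
val found = xs.exists { x => visited += 1; x == 0 }
// found == true and visited == 2: exists returned as soon as the 0 was found,
// so the remaining elements were never inspected.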
        scala> import org.apache.spark.sql.functions._
        import org.apache.spark.sql.functions._

        scala> import spark.implicits._
        import spark.implicits._

        scala> val df = Seq(
             |     ("Prop1", "SUCCESS", "SUCCESS", "SUCCESS", "FAILURE" ,"SUCCESS"),
             |     ("Prop2", "SUCCESS", "FAILURE", "SUCCESS", "FAILURE", "SUCCESS"),
             |     ("Prop3", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS" ),
             |     ("Prop4", "SUCCESS", "FAILURE", "SUCCESS", "FAILURE", "SUCCESS"),
             |     ("Prop5", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS","SUCCESS")
             |    ).toDF("Name", "Status1", "Status2", "Status3", "Status4","Status5")
        df: org.apache.spark.sql.DataFrame = [Name: string, Status1: string ... 4 more fields]


        scala> df.show
        +-----+-------+-------+-------+-------+-------+
        | Name|Status1|Status2|Status3|Status4|Status5|
        +-----+-------+-------+-------+-------+-------+
        |Prop1|SUCCESS|SUCCESS|SUCCESS|FAILURE|SUCCESS|
        |Prop2|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
        |Prop3|SUCCESS|SUCCESS|SUCCESS|SUCCESS|SUCCESS|
        |Prop4|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
        |Prop5|SUCCESS|SUCCESS|SUCCESS|SUCCESS|SUCCESS|
        +-----+-------+-------+-------+-------+-------+


        scala> df.where($"Name".isin("Prop1","Prop4") and $"Status1".isin("SUCCESS","FAILURE")).show
        +-----+-------+-------+-------+-------+-------+
        | Name|Status1|Status2|Status3|Status4|Status5|
        +-----+-------+-------+-------+-------+-------+
        |Prop1|SUCCESS|SUCCESS|SUCCESS|FAILURE|SUCCESS|
        |Prop4|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
        +-----+-------+-------+-------+-------+-------+
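For the original requirement (rows with FAILURE in any of the five status columns), the SQL from the question can also be run as-is against a temporary view; a sketch using the same df as above:

df.createOrReplaceTempView("status")

spark.sql(
  """SELECT *
    |FROM status
    |WHERE 'FAILURE' IN (Status1, Status2, Status3, Status4, Status5)""".stripMargin
).show()

// The DataFrame API should accept the equivalent check as well: isin wraps its
// arguments with lit, and a Column passes through lit unchanged, so this builds
// the same IN expression.
df.where(lit("FAILURE").isin($"Status1", $"Status2", $"Status3", $"Status4", $"Status5")).show()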