How to filter Spark DataFrame entries based on a column value that is a map, in Scala

I have a DataFrame like this:

+-------+------------------------+
|key    |                    data|
+-------+------------------------+
|     61|[a -> b, c -> d, e -> f]|
|     71|[a -> 1, c -> d, e -> f]|
|     81|[c -> d, e -> f]        |
|     91|[x -> b, y -> d, e -> f]|
|     11|[a -> a, c -> b, e -> f]|
|     21|[a -> a, c -> x, e -> f]|
+-------+------------------------+
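For context, here is a minimal sketch that rebuilds this sample DataFrame (the local SparkSession setup is an assumption for illustration; the question only shows the data):

import org.apache.spark.sql.SparkSession

// Illustrative setup; any existing SparkSession works the same way
val spark = SparkSession.builder().master("local[*]").appName("map-filter").getOrCreate()
import spark.implicits._

val df = Seq(
  (61, Map("a" -> "b", "c" -> "d", "e" -> "f")),
  (71, Map("a" -> "1", "c" -> "d", "e" -> "f")),
  (81, Map("c" -> "d", "e" -> "f")),
  (91, Map("x" -> "b", "y" -> "d", "e" -> "f")),
  (11, Map("a" -> "a", "c" -> "b", "e" -> "f")),
  (21, Map("a" -> "a", "c" -> "x", "e" -> "f"))
).toDF("key", "data")  // key: int, data: map<string,string>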
I want to filter the rows whose data map contains the key 'a' and whose value for 'a' is 'a'. So the DataFrame below is the desired output:

+-------+------------------------+
|key    |                    data|
+-------+------------------------+
|     11|[a -> a, c -> b, e -> f]|
|     21|[a -> a, c -> x, e -> f]|
+-------+------------------------+
I tried casting the column to a map, but got this error:

== SQL ==
Map
^^^

  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1673)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1651)
  at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:1651)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:49)
  at org.apache.spark.sql.catalyst.parser.SqlBaseParser$PrimitiveDataTypeContext.accept(SqlBaseParser.java:13779)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.typedVisit(AstBuilder.scala:55)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.org$apache$spark$sql$catalyst$parser$AstBuilder$$visitSparkDataType(AstBuilder.scala:1645)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
  at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleDataType(AstBuilder.scala:89)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:40)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:39)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:98)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseDataType(ParseDriver.scala:39)
  at org.apache.spark.sql.Column.cast(Column.scala:1017)
  ... 49 elided
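The trace ends in Column.cast, which suggests the cast was given the string "Map" and the SQL type parser rejected it. A sketch of what that failing call probably looked like (an assumption based on the trace, not code from the question):

import org.apache.spark.sql.functions.col

// cast(String) parses its argument as a SQL type name; the bare word "Map"
// is not one, so this throws the ParseException shown above. A map type
// has to be spelled out in full, e.g. "map<string,string>".
val attempted = col("data").cast("Map")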
If I only wanted to filter on the key column, I could do df.filter(col("key") === 61). But the problem is that the value here is a map.


Is there something like df.filter(col("data").toMap.contains("a") && col("data").toMap.get("a") == "a")?
You can filter like this: df.filter(col("data.x") === "a"), where x is a nested key inside data.
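Applied to this question the key is a, so concretely (a sketch assuming df is the DataFrame shown above; getItem is the equivalent explicit map lookup):

import org.apache.spark.sql.functions.col

// data.a resolves to null for rows whose map lacks the key "a",
// and null === "a" evaluates to null, so those rows are filtered out too;
// a single predicate covers both "contains key" and "value equals".
val result = df.filter(col("data.a") === "a")

// equivalent spelling with an explicit map lookup
val result2 = df.filter(col("data").getItem("a") === "a")

result.show(false)
// +---+------------------------+
// |key|data                    |
// +---+------------------------+
// |11 |[a -> a, c -> b, e -> f]|
// |21 |[a -> a, c -> x, e -> f]|
// +---+------------------------+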