Dataset encoder for a Scala Set in Apache Spark


I get an exception when trying to read a Dataset from S3. The Company case class contains a Set of Employee case classes:

Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for Set[com.model.company.common.Employee]
- field (class: "scala.collection.immutable.Set", name: "employees")
- field (class: "com.model.company.Company", name: "company")
I tried using Kryo:

but that doesn't work either. Do you know how to encode a Scala Set in a Dataset?

Code:


Change the collection type from `Set` to `Seq`. Spark can derive encoders for case classes whose fields are `Seq` (or `List`/`Array`), but older Spark versions have no built-in encoder for `Set`, which is exactly what the "No Encoder found for Set[...]" message is saying.
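A minimal sketch of that fix, assuming hypothetical field layouts for the `Company` and `Employee` case classes from the question and a placeholder S3 path:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical case classes; the key change is Seq[Employee] instead of
// Set[Employee], which Spark's built-in product encoders can handle.
case class Employee(name: String)
case class Company(name: String, employees: Seq[Employee])

object ReadCompanies {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("read-companies").getOrCreate()
    import spark.implicits._ // derives the Encoder[Company] automatically

    val companies = spark.read
      .json("s3a://some-bucket/companies") // placeholder path
      .as[Company]

    companies.show()
    spark.stop()
  }
}
```

If the domain model must expose a `Set`, you can still read into a `Seq`-based class and call `.toSet` on the field afterwards.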

For more information on the data types Datasets support, see:

Can you add the code you tried? Is the implicit in scope? — See my code above:
implicit def myDataEncoder[T]: Encoder[Set[Employee]] = Encoders.kryo[scala.collection.immutable.Set[Employee]]
val sqlContext = sparkSession.sqlContext
import sqlContext.implicits._

val records = sparkSession.read.json(s"s3a://${config.input.fullPath}").as[Company]
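The snippet above (with the missing closing bracket restored) still fails because a Kryo encoder declared for a field type is not consulted when Spark derives the product encoder for the enclosing class. To use Kryo at all, the whole `Company` has to be Kryo-encoded. A hedged sketch, again assuming hypothetical case-class fields:

```scala
import org.apache.spark.sql.{Dataset, Encoder, Encoders, SparkSession}

case class Employee(name: String) // hypothetical fields
case class Company(name: String, employees: Set[Employee])

object KryoCompanyExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kryo-example").getOrCreate()

    // Kryo-encode the entire Company; an encoder only for the nested
    // Set[Employee] field is ignored during encoder derivation, which is
    // why the question's attempt still throws "No Encoder found".
    implicit val companyEncoder: Encoder[Company] = Encoders.kryo[Company]

    val ds: Dataset[Company] = spark.createDataset(Seq(
      Company("Acme", Set(Employee("Alice"), Employee("Bob")))
    ))

    // Kryo stores each Company as a single opaque binary column, so the
    // fields are no longer queryable with Spark SQL. For reading JSON,
    // switching the field to Seq[Employee] remains the simpler fix.
    ds.foreach(c => println(c.name))
    spark.stop()
  }
}
```

Note also that `spark.read.json(...).as[Company]` cannot be combined with a Kryo encoder, since the JSON columns will not match the single binary column Kryo expects.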