Can DataFrame groupByKey be used to optimize and reduce computation work?


I have a DataFrame like this:

empId | firstName | lastName | DOB        | effStartDate | effEndDate |
121   | Rahul     | Jaiswal  | 27-10-1194 | 03-05-2019   | 03-05-2020 |
147   | Dev       | Kumar    | 12-03-1995 | 04-08-2019   | 03-05-2020 |
121   | Rahul     | Jaiswal  | 27-10-1194 | 03-05-2019   | 03-05-2020 |

... (continues)

Now, I am extracting the values from the DataFrame like this:

    import org.apache.spark.sql.Encoders
    import org.apache.spark.sql.functions.asc

    // Kryo encoder for the target case class
    implicit val encoder = Encoders.kryo[EmployeeJobDataFields]

    // Sort by employee id and effective start date, filling nulls with ""
    val sortedDF = df.orderBy(asc(EMP_ID_COLUMN), asc(EFF_START_DATE_COLUMN)).na.fill(EMPTY_STRING)

    // Map each Row into the case class and collect everything to the driver
    val recordList: List[EmployeeJobDataFields] = sortedDF
      .map(row => EmployeeJobDataFields(
        row.getString(0), row.getString(1), row.getString(2), row.getString(3),
        row.getString(4), row.getString(5), row.getString(6), row.getString(7),
        row.getString(8), row.getString(9)))(encoder)
      .collect()
      .toList
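As a side note, if EmployeeJobDataFields is a plain case class whose field names line up with the DataFrame columns (an assumption here, not something stated above), the built-in product encoder from spark.implicits._ may be simpler than Kryo plus positional getString calls. A minimal sketch, assuming a SparkSession named spark is in scope:

    import spark.implicits._   // product encoder for case classes

    // Assumes df's column names match the case class fields
    val recordList: List[EmployeeJobDataFields] = df
      .na.fill(EMPTY_STRING)
      .orderBy(asc(EMP_ID_COLUMN), asc(EFF_START_DATE_COLUMN))
      .as[EmployeeJobDataFields]
      .collect()
      .toList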
Here, the same empId is repeated for some employees.

Is this the best approach, or can the code be improved in some other way? I am not sure whether groupByKey() is a good fit here and whether it would reduce any computation work.
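For context, groupByKey() does not reduce work by itself: it triggers a shuffle, so it pays off only when all records of one empId must be processed together on the executors. A minimal sketch of what that could look like, assuming a typed Dataset[EmployeeJobDataFields] named ds (e.g. from the .as[...] variant above), an empId field on the case class, and spark.implicits._ in scope (all assumptions for illustration):

    import org.apache.spark.sql.Dataset

    // Group all effective-date records of one empId together.
    // Note: this shuffles the data across the cluster.
    val byEmployee: Dataset[(String, Seq[EmployeeJobDataFields])] =
      ds.groupByKey(_.empId)
        .mapGroups((id, rows) => (id, rows.toList: Seq[EmployeeJobDataFields]))

Since the code above collects every row to the driver anyway, the same grouping could also be done locally with recordList.groupBy(_.empId), at no extra shuffle cost.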

Experts, please guide me.