Scala error: recursive value needs type
I have a piece of code that is supposed to create a DataFrame for each Hive table:
for (e <- df_tables) {
val v(df_tables.indexOf(e)) = hiveObj.sql("select * from database."+ e +" order by event_date")
}
When I run this code, I get two errors:
<console>:145: error: recursive value e needs type
val v(df_tables.indexOf(e)) = hiveObj.sql("select * from database."+ e +" order by event_date")
^
<console>:145: error: value v is not a case class constructor, nor does it have an unapply/unapplySeq method
val v(df_tables.indexOf(e)) = hiveObj.sql("select * from database."+ e +" order by event_date")
I suspect all of these problems come from trying to create a value with

val v(df_tables.indexOf(e))

on each iteration. If you remove the val, your code should work. Can you share the contents of df_tables?

What should I replace it with? Now I get: error: missing parameter type for expanded function ((x$1) => "select * from dataset.".$plus(x$1).$plus(" order by event_date")). What is wrong? Maybe I should use: val v = df_tables.map(x => hiveObj.sql("select * from dataset."+ x +" order by event_date"))?
Yes, map over the tables instead of assigning to an indexed val inside the loop:

val v = df_tables.map((r: Row) => hiveObj.sql("select * from database."+ r +" order by event_date"))
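To make the fix concrete, here is a minimal sketch of the whole flow. It assumes `hiveObj` is a `SparkSession` with Hive support and that `df_tables` holds plain table-name strings (neither is confirmed in the thread; if `df_tables` is a DataFrame of `Row`s, collect the name column into strings first). The table names used here are made up for illustration:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Assumption: hiveObj is a SparkSession with Hive support enabled.
val hiveObj: SparkSession = SparkSession.builder()
  .appName("per-table-dataframes")
  .enableHiveSupport()
  .getOrCreate()

// Assumption: df_tables is a collection of table-name strings.
// If it is a DataFrame of Rows, extract the names first, e.g.:
//   df.select("tab_name").collect().map(_.getString(0)).toSeq
val df_tables: Seq[String] = Seq("events_a", "events_b")

// Build one DataFrame per table, keyed by table name. This replaces the
// invalid `val v(df_tables.indexOf(e)) = ...` inside the for loop, which
// Scala parses as a pattern-match definition -- hence the "not a case
// class constructor, nor does it have an unapply/unapplySeq" error.
val framesByTable: Map[String, DataFrame] =
  df_tables.map { t =>
    t -> hiveObj.sql(s"select * from database.$t order by event_date")
  }.toMap
```

`framesByTable("events_a")` then yields the DataFrame for that table, with no per-iteration `val` assignment at all.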