Apache Spark: add a column to a Spark DataFrame whose value is the hash of the existing DataFrame row, modulo a bucket count

I want to add a column to a Spark DataFrame whose value is the hash of the existing DataFrame row, modulo a bucket count. In the example below I can achieve this for the hash of one specific column, data. How can I do the same for the entire DataFrame row, i.e. all columns?

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.{col, udf}

object Container {
  case class IntContainer(data: Int)
}

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Bucket a single column: hash its value and take the result modulo 10
val getBucket = udf((data: Int) => data.hashCode % 10)

val userList = List(23, 24, 25, 57)
val containers: RDD[Container.IntContainer] = sc.parallelize(userList).map(Container.IntContainer(_))
val df = containers.toDF()
df.registerTempTable("dfcount")
val countdf = sqlContext.sql("select data, data + 1 as count, current_timestamp() as time from dfcount")
val bucketed = countdf.withColumn("bucket_id", getBucket(col("data")))

The snippet below uses a UDF that takes an array of the row's columns; their hash codes are summed to produce the bucket value. The columns are cast to string so the array has a uniform element type, which means this works for any number of columns and any schema.

import sqlContext.implicits._
import org.apache.spark.sql.functions.{array, udf}

// Hash every value in the row (cast to string), sum the hash codes,
// and take the sum modulo 10 to get the bucket id
val getBucket = udf((data: Seq[String]) => data.map(_.hashCode).sum % 10)

val df = sc.parallelize(('a' to 'z').map(_.toString) zip (1 to 26)).toDF("c1", "c2")

// array(...) collects all of the row's columns into a single array column
df.withColumn("bucket", getBucket(array(df.columns.map(c => df(c).cast("string")): _*))).show()
+---+---+------+
| c1| c2|bucket|
+---+---+------+
|  a|  1|     6|
|  b|  2|     8|
|  c|  3|     0|
|  d|  4|     2|
|  e|  5|     4|
|  f|  6|     6|
|  g|  7|     8|
|  h|  8|     0|
|  i|  9|     2|
|  j| 10|     3|
|  k| 11|     5|
|  l| 12|     7|
|  m| 13|     9|
|  n| 14|     1|
|  o| 15|     3|
|  p| 16|     5|
|  q| 17|     7|
|  r| 18|     9|
|  s| 19|     1|
|  t| 20|     4|
+---+---+------+
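
As an aside (not part of the original answer): on Spark 2.0+ the built-in functions.hash can replace the UDF entirely; it computes a Murmur3 hash across any set of columns. A minimal sketch, assuming the same two-column df:

import org.apache.spark.sql.functions.{col, hash, lit, pmod}

// hash(...) computes a Murmur3 hash over all listed columns;
// pmod keeps the bucket non-negative even when the hash is negative
df.withColumn("bucket", pmod(hash(df.columns.map(col): _*), lit(10))).show()

Note that the bucket values will differ from those in the table above, since Murmur3 hashing is not the same as summing Java string hash codes.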

How many columns can df have?
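
Any number: the column list is built programmatically from df.columns, so nothing in the expression is tied to a fixed width. A hedged sketch with a hypothetical four-column DataFrame (the names c1 to c4 are made up for illustration), reusing getBucket from above:

// The same expression adapts to however many columns the DataFrame has
val wide = sc.parallelize(Seq(("a", 1, 2.0, true), ("b", 2, 3.0, false)))
  .toDF("c1", "c2", "c3", "c4")
wide.withColumn("bucket", getBucket(array(wide.columns.map(c => wide(c).cast("string")): _*))).show()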