
Spark DataFrame: adding the result of a window function to a regular function such as max, for auto-incrementing IDs

Tags: sql, apache-spark, dataframe, pyspark, spark-dataframe

I need to generate auto-incrementing values for the id field. My approach is to use a window function together with the max function.

I am trying to find a pure DataFrame solution (without RDDs).

So, after performing a right outer join, I end up with this DataFrame:

df2 = sqlContext.createDataFrame([(1,2), (3, None), (5, None)], ['someattr', 'id'])

# Notice the null values? These are new records that don't have an id just yet.
# The task is to generate them. Preferably with one query.

df2.show()

+--------+----+
|someattr|  id|
+--------+----+
|       1|   2|
|       3|null|
|       5|null|
+--------+----+
I need to generate auto-incrementing values for the id field. My approach is to use a window function:

from pyspark.sql.functions import when, row_number, max
from pyspark.sql.window import Window

df2.withColumn('id', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id))
When I do this, the following exception is raised:

AnalysisException                         Traceback (most recent call last)
<ipython-input-102-b3221098e895> in <module>()
     10 
     11 
---> 12 df2.withColumn('hello', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id)).show()

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
   1371         """
   1372         assert isinstance(col, Column), "col should be Column"
-> 1373         return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
   1374 
   1375     @ignore_unicode_prefix

/Users/ipolynets/workspace/spark-2.0.0/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    931         answer = self.gateway_client.send_command(command)
    932         return_value = get_return_value(
--> 933             answer, self.gateway_client, self.target_id, self.name)
    934 
    935         for temp_arg in temp_args:

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: u"expression '`someattr`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;"

You are adding a column, so the resulting DataFrame will also contain someattr.

You have to either include someattr in a group by or use it inside some aggregate function.
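
To see why, it may help to isolate the aggregate (a minimal sketch, assuming the df2 defined above; the F alias for pyspark.sql.functions is mine):

from pyspark.sql import functions as F

# F.max is an aggregate function, so it only makes sense in an aggregated query.
df2.agg(F.max('id')).show()             # works: a one-row DataFrame with max(id) = 2
# df2.select('someattr', F.max('id'))   # raises an AnalysisException for the same reason:
#                                       # someattr is neither grouped nor aggregated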

However, it is simpler to do it like this:

df2.registerTempTable("test")
df3 = sqlContext.sql("""
    select t.someattr,
           nvl(t.id, row_number() over (partition by t.id order by t.id) + maxId.maxId) as id
    from test t
    cross join (select max(id) as maxId from test) as maxId
""")
Of course, you could translate this into the DataFrame DSL, but SQL seems easier for this task; a rough DSL sketch follows.
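
A possible DSL version (an untested sketch, assuming the df2 from the question; it reuses the partition-by-id trick so that row_number() numbers only the rows whose id is null):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Current maximum id as a plain Python value (2 for the sample data);
# fall back to 0 if no ids exist yet.
max_id = df2.agg(F.max('id')).collect()[0][0] or 0

# All null ids land in the same partition, so row_number() gives them 1, 2, ...
w = Window.partitionBy('id').orderBy('id')

df3 = df2.withColumn(
    'id',
    F.when(F.col('id').isNull(), F.row_number().over(w) + F.lit(max_id))
     .otherwise(F.col('id')))

df3.show()
# The two null ids should come out as 3 and 4; the existing id 2 is kept.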

df2.registerTempTable("test")
df3 = sqlContext.sql("""
    select t.someattr, nvl (t.id, row_number(partition by id) over () + maxId.maxId) as id
    from test t
    cross join (select max(id) as maxId from test) as maxId
""")