Apache Spark: create a DataFrame column containing a range of numbers without a UDF


In Spark 2.4.3 with Python, how can I create a new column containing a range of numbers? I have it working with a UDF, but I would prefer to avoid one. Here is the code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp, explode, udf, col, minute, second
from pyspark.sql.types import ArrayType, IntegerType

def seconds_range(start_date,end_date):
    start_seconds = start_date.minute * 60 + start_date.second
    end_seconds = end_date.minute * 60 + end_date.second
    return list(range(start_seconds, end_seconds+1))

spark = SparkSession.builder.appName('MyApp').master('local[2]').getOrCreate()

# register udf function with spark
seconds_range_udf = udf(seconds_range, ArrayType(IntegerType()))


# create dataframe with sample data.
df1 = spark.createDataFrame([('user1', '2019-12-01 9:02:30', '2019-12-01 09:04:00'),\
    ('user2', '2019-12-01 9:02:30', '2019-12-01 09:04:00'),\
    ('user3', '2019-12-01 9:03:23', '2019-12-01 09:03:50')],\
    ['user', 'login_start_dt', 'login_end_dt'])

df1 = df1.\
    withColumn('user', df1.user).\
    withColumn('login_start_dt', to_timestamp(df1.login_start_dt , 'yyyy-MM-dd HH:mm:ss')).\
    withColumn('login_end_dt', to_timestamp(df1.login_end_dt, 'yyyy-MM-dd HH:mm:ss'))

df2 = df1.\
    withColumn('login_offset', (minute(df1.login_start_dt) * 60 + second(df1.login_start_dt)).cast(IntegerType())).\
    withColumn('logout_offset', (minute(df1.login_end_dt) * 60 + second(df1.login_end_dt)).cast(IntegerType())).\
    withColumn('arr_logged_seconds', list(range(col('login_offset'), col('logout_offset'))))  # would like to get this line to work
    # withColumn('arr_logged_seconds', seconds_range_udf('login_start_dt', 'login_end_dt'))  # this UDF version works

df2.show()
I get the error "'Column' object cannot be interpreted as an integer". I would also like to make sure the end second is included, since `range` excludes its stop argument.


IIUC, starting with Spark 2.4 you can achieve the same thing with the built-in `sequence` function:

import pyspark.sql.functions as f


df2 = df1.\
    withColumn('login_offset', (minute(df1.login_start_dt) * 60 + second(df1.login_start_dt)).cast(IntegerType())).\
    withColumn('logout_offset', (minute(df1.login_end_dt) * 60 + second(df1.login_end_dt)).cast(IntegerType())).\
    withColumn('arr_logged_seconds', f.sequence('login_offset', 'logout_offset'))  # sequence(start, stop, step=None): step defaults to 1

df2.show()
+-----+-------------------+-------------------+------------+-------------+--------------------+
| user|     login_start_dt|       login_end_dt|login_offset|logout_offset|  arr_logged_seconds|
+-----+-------------------+-------------------+------------+-------------+--------------------+
|user1|2019-12-01 09:02:30|2019-12-01 09:04:00|         150|          240|[150, 151, 152, 1...|
|user2|2019-12-01 09:02:30|2019-12-01 09:04:00|         150|          240|[150, 151, 152, 1...|
|user3|2019-12-01 09:03:23|2019-12-01 09:03:50|         203|          230|[203, 204, 205, 2...|
+-----+-------------------+-------------------+------------+-------------+--------------------+
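Note that `sequence` is inclusive of its stop value, so the end second is kept, which addresses the concern about `range` excluding its second argument. As a minimal sketch (assuming the `df1` built in the question; `df3` is just an illustrative name), the offsets can also be computed inline so no intermediate columns are needed:

import pyspark.sql.functions as f

# Sketch: compute the second offsets inline and build the inclusive range in one step.
df3 = df1.withColumn(
    'arr_logged_seconds',
    f.sequence(
        (f.minute('login_start_dt') * 60 + f.second('login_start_dt')).cast('int'),
        (f.minute('login_end_dt') * 60 + f.second('login_end_dt')).cast('int')
    )
)
df3.show()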


You could also go from the DataFrame to an RDD and back to a DataFrame (sketched below); that avoids a UDF, but I think the UDF is the better solution in that case.

Thanks, this is exactly what I was looking for!
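For completeness, a minimal sketch of the RDD round-trip mentioned above, assuming the `df1` from the question; each row's range is computed in plain Python and the result is converted back to a DataFrame (the variable and column names here are illustrative):

from pyspark.sql.types import ArrayType, IntegerType, StructField, StructType

# Sketch: map each row to a tuple with an extra Python list, then rebuild a DataFrame.
rows = df1.rdd.map(lambda r: (
    r.user,
    r.login_start_dt,
    r.login_end_dt,
    list(range(
        r.login_start_dt.minute * 60 + r.login_start_dt.second,
        r.login_end_dt.minute * 60 + r.login_end_dt.second + 1,  # +1 keeps the end second
    )),
))

schema = StructType(df1.schema.fields + [StructField('arr_logged_seconds', ArrayType(IntegerType()))])
df_rdd = spark.createDataFrame(rows, schema)
df_rdd.show()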