Pyspark won't convert timestamp

Tags: python, apache-spark, pyspark, jupyter-notebook, simpledateformat

I have a very simple CSV, call it test.csv:

name,timestamp,action
A,2012-10-12 00:30:00.0000000,1
B,2012-10-12 01:00:00.0000000,2 
C,2012-10-12 01:30:00.0000000,2 
D,2012-10-12 02:00:00.0000000,3 
E,2012-10-12 02:30:00.0000000,1
I am trying to read it with pyspark and add a new column that indicates the month.

First I read the data, and everything looks fine:

df = spark.read.csv('test.csv', inferSchema=True, header=True)
df.printSchema()
df.show()
Output:

root
 |-- name: string (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- action: double (nullable = true)

+----+-------------------+------+
|name|          timestamp|action|
+----+-------------------+------+
|   A|2012-10-12 00:30:00|   1.0|
|   B|2012-10-12 01:00:00|   2.0|
|   C|2012-10-12 01:30:00|   2.0|
|   D|2012-10-12 02:00:00|   3.0|
|   E|2012-10-12 02:30:00|   1.0|
+----+-------------------+------+
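As an aside, inferSchema recognized the second column as a timestamp because the strings follow a standard pattern. A minimal stdlib sketch of the equivalent parse (no Spark required; note that Python's %f accepts at most six fractional digits, so the seventh is trimmed):

```python
import csv
import io
from datetime import datetime

data = """name,timestamp,action
A,2012-10-12 00:30:00.0000000,1
B,2012-10-12 01:00:00.0000000,2
"""

for row in csv.DictReader(io.StringIO(data)):
    # Trim to 26 chars: date + time (19) + '.' + 6 fractional digits
    ts = datetime.strptime(row["timestamp"][:26], "%Y-%m-%d %H:%M:%S.%f")
    print(row["name"], ts)
```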
But when I try to add the column, the format option doesn't seem to do anything:

from pyspark.sql.functions import col, to_date

df.withColumn('month', to_date(col('timestamp'), format='MMM')).show()
Output:

+----+-------------------+------+----------+
|name|          timestamp|action|     month|
+----+-------------------+------+----------+
|   A|2012-10-12 00:30:00|   1.0|2012-10-12|
|   B|2012-10-12 01:00:00|   2.0|2012-10-12|
|   C|2012-10-12 01:30:00|   2.0|2012-10-12|
|   D|2012-10-12 02:00:00|   3.0|2012-10-12|
|   E|2012-10-12 02:30:00|   1.0|2012-10-12|
+----+-------------------+------+----------+

What is going on here?

So far you have been using the format argument of to_date, which is for parsing string-typed columns. What you need is date_format:

from pyspark.sql.functions import date_format

df.withColumn('month', date_format(col('timestamp'), format='MMM')).show()

# +----+-------------------+------+-----+
# |name|          timestamp|action|month|
# +----+-------------------+------+-----+
# |   A|2012-10-12 00:30:00|   1.0|  Oct|
# |   B|2012-10-12 01:00:00|   2.0|  Oct|
# |   C|2012-10-12 01:30:00|   2.0|  Oct|
# |   D|2012-10-12 02:00:00|   3.0|  Oct|
# |   E|2012-10-12 02:30:00|   1.0|  Oct|
# +----+-------------------+------+-----+
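For intuition, the same parse-versus-format distinction exists in the Python standard library: strptime turns a string into a datetime (what to_date's format argument is for), while strftime turns a datetime into a string (what date_format does). A minimal sketch, no Spark required:

```python
from datetime import datetime

# Parsing: string -> datetime (analogous to to_date with a format)
ts = datetime.strptime("2012-10-12 00:30:00", "%Y-%m-%d %H:%M:%S")

# Formatting: datetime -> string (analogous to date_format)
# %b is the abbreviated month name, e.g. "Oct" in a C/English locale
print(ts.strftime("%b"))
```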

What do you want to convert it to? A month?

Yes. According to the docs on the Oracle page, MMM should do it, but none of the formats I've tried have had any effect.

There is a built-in function called month.

@RameshMaharjan That's very useful, I didn't know such a function existed! You'll realize this is a simplified example, though; I'd still like the custom format to work, or to understand why it doesn't.

What you are doing is a column-based transformation, and the to_date function in the link above doesn't take a format parameter, so it has no effect for you. I think what you're looking for is a udf.
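The comments mention Spark's built-in month function, which returns the numeric month, whereas date_format(col, 'MMM') returns the month's name. As a point of comparison, the same two views of a date in plain Python:

```python
from datetime import datetime

ts = datetime.strptime("2012-10-12 00:30:00", "%Y-%m-%d %H:%M:%S")

# Numeric month, analogous to Spark's month() function
print(ts.month)

# Abbreviated month name, analogous to date_format(col, 'MMM')
# (locale-dependent; "Oct" in a C/English locale)
print(ts.strftime("%b"))
```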