Pandas PySpark date column error: "AttributeError: Can only use .dt accessor with datetimelike values"

pandas, pyspark, apache-spark-sql, user-defined-functions

I have the following python script:

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
import pandas as pd
import os, shutil
import numpy as np

spark = SparkSession.builder \
    .master("local[2]") \
    .appName("chapter 2") \
    .config('spark.jars.packages', 'io.delta:delta-core_2.11:0.4.0') \
    .config('spark.executor.memory', '6gb') \
    .getOrCreate()
sc = spark.sparkContext
spark.sql("set spark.sql.shuffle.partitions=1")

delta_path = "folder/with/delta_format"
series = spark.read.format("delta").load(delta_path)
series = series.withColumn("Volume", col("Volume").cast("double"))
series = series.withColumn("Date", to_date(col("Date"), "MM/dd/yy"))
series.show()
series.printSchema()
I have the following dataframe:

+----------+-------+---------+-------+-------+-------+
|      Date|  Close|   Volume|   Open|   High|    Low|
+----------+-------+---------+-------+-------+-------+
|2015-06-01|2109.25|1337694.0| 2109.5|2117.75|2100.25|
|2015-06-02|2106.75|1442673.0| 2106.5| 2116.0| 2094.0|
|2015-06-03| 2116.0|1310989.0|2116.25|2120.75|2106.75|
|2015-06-04| 2099.0|1716475.0| 2099.0| 2116.5|2091.25|
|2015-06-05|2092.25|1459933.0| 2092.0|2102.75| 2083.5|
|2015-06-08|2078.25|1290580.0| 2079.0|2093.25|2076.25|
|2015-06-09| 2080.0|1446234.0| 2080.5|2084.75|2068.75|
|2015-06-10| 2107.0|1664080.0| 2106.0| 2108.0| 2080.0|
|2015-06-11|2109.25|1480391.0|2109.25|2114.75|2103.25|
|2015-06-12| 2093.0|1130566.0| 2094.0|2109.25|2090.25|
|2015-06-15| 2084.0|1077154.0|2083.75|2089.75|2071.25|
|2015-06-16| 2097.5| 790233.0|2097.25|2098.25| 2070.5|
|2015-06-17|2089.25|1577521.0|2088.75|2098.75|2078.75|
|2015-06-18|2114.75|1899198.0| 2114.0|2119.25| 2082.0|
|2015-06-19|2097.75|1236103.0|2097.75|2117.75| 2097.0|
|2015-06-22|2112.75|1095590.0|2113.25| 2122.0| 2103.5|
|2015-06-23| 2116.5| 835219.0| 2117.0| 2120.5|2111.25|
|2015-06-24| 2099.5|1153248.0| 2099.5| 2118.5| 2099.0|
|2015-06-25| 2094.0|1213961.0| 2094.0|2112.75| 2092.0|
|2015-06-26|2095.75|1318744.0|2095.75|2100.75|2086.25|
+----------+-------+---------+-------+-------+-------+
only showing top 20 rows

root
 |-- Date: date (nullable = true)
 |-- Close: double (nullable = true)
 |-- Volume: double (nullable = true)
 |-- Open: double (nullable = true)
 |-- High: double (nullable = true)
 |-- Low: double (nullable = true)


After that I declare some UDFs that I want to execute in Spark:

def get_bt(data):
    # sign of consecutive price changes; a zero change inherits the previous sign
    s = np.sign(np.diff(data))
    for i in range(1, len(s)):
        if s[i] == 0:
            s[i] = s[i-1]
    return s

def get_theta_t(bt):
    # cumulative signed-tick imbalance
    return np.sum(bt)

def ewma(data, window):
    # vectorized exponentially weighted moving average over a 1-D numpy array
    alpha = 2 / (window + 1.0)
    alpha_rev = 1 - alpha
    scale = 1 / alpha_rev
    n = data.shape[0]
    r = np.arange(n)
    scale_arr = scale**r
    offset = data[0] * alpha_rev**(r + 1)
    pw0 = alpha * alpha_rev**(n - 1)
    mult = data * pw0 * scale_arr
    cumsums = mult.cumsum()
    out = offset + cumsums * scale_arr[::-1]
    return out
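
As a side note on the ewma helper: it looks like the standard vectorized form of the recursive (adjust=False) exponential moving average, so, assuming that is the intent, it can be sanity-checked against pandas like this (the values and window below are arbitrary):

values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = 3
# pandas' recursive EWMA as a reference
reference = pd.Series(values).ewm(span=window, adjust=False).mean().values
print(ewma(values, window))   # should match `reference` up to floating point error
print(reference)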

schema = series.select('*').schema
column_name = 'Close'; volume_column = 'Volume'; datetimecolumn = 'Date'; initital_T = 100; min_bar = 10; max_bar = 1000;

@pandas_udf(schema, PandasUDFType.GROUPED_MAP)             
def process_column(pdf):
    #pdf = pdf.set_index(pd.to_datetime(pdf[datetimecolumn], infer_datetime_format = True,format='%Y-%m-%d'))
    init_bar = pdf[:initital_T][column_name].values.tolist()
    ts = [initital_T]
    bts = [bti for bti in get_bt(pdf[column_name])]  
    res = []
    buf_bar, vbuf, T = [], [], 0.
    for i in range(initital_T, len(pdf)):

        di = pdf.index.values[i]
        buf_bar.append(pdf[column_name].iloc[i])
        bt = get_bt(buf_bar)
        theta_t = get_theta_t(bt)
        try:
            e_t = ewma(np.array(ts), initital_T / 10)[-1]
            e_bt = ewma(np.array(bts), initital_T)[-1]
        except:
            e_t = np.mean(ts)
            e_bt = np.mean(bts)
        finally:                   
            if np.isnan(e_bt):
                e_bt = np.mean(bts[int(len(bts) * 0.9):])
            if np.isnan(e_t):
                e_t = np.mean(ts[int(len(ts) * 0.9):])

        condition = np.abs(theta_t) >= e_t * np.abs(e_bt)

        if (condition or len(buf_bar) > max_bar) and len(buf_bar) >= min_bar:
            o = buf_bar[0]
            h = np.max(buf_bar)
            l = np.min(buf_bar)
            c = buf_bar[-1]
            v = np.sum(vbuf)

            res.append({
                datetimecolumn: di,
                'Open': o,
                'High': h,
                'Low': l,
                'Close': c,
                'Volume': v
            })

            ts.append(T)
            for b in bt:
                bts.append(b) 

            buf_bar = []
            vbuf = []
            T = 0.           
        else:
            vbuf.append(pdf[volume_column].iloc[i])
            T += 1
    res = pd.DataFrame(res).set_index(datetimecolumn)
    return res 

However, when I execute the following:

imbtick_bars = series.withColumn('Date', unix_timestamp(col('Date'), "yyyy-MM-dd").cast("timestamp")) \
    .groupBy('Date').apply(process_column)
imbtick_bars.show() 


I get the following error: AttributeError: Can only use .dt accessor with datetimelike values. But I don't understand why it does not accept the 'Date' column as a date-type column (which I think is the source of the error). I would appreciate it if someone could point out my mistake or tell me what I should do, since I have been struggling for days, changing various parts of the code, and I still cannot find a solution.

I ran into the same problem and solved it by switching from datetime.date to datetime.datetime. Here is a minimal example: two small Spark DataFrames, one with a datetime, one with a date.

from datetime import date, datetime

date_df = spark.createDataFrame(pd.DataFrame({"id": 1, "recording_date": date(2016, 4, 1)}, index=[0]))
date_df = date_df.groupBy("id")

datetime_df = spark.createDataFrame(pd.DataFrame({"id": 1, "recording_date": datetime(2016, 4, 1)}, index=[0]))
datetime_df = datetime_df.groupBy("id")

Then a simple pass-through function and the two runs. This one fails with the same error as yours:

[IN]
@pandas_udf("id bigint, recording_date timestamp", PandasUDFType.GROUPED_MAP)
def f(df):
    return df

date_df.apply(f).show()

[OUT]
AttributeError: Can only use .dt accessor with datetimelike values

This one, however, gives the expected output:

[IN]
datetime_df.apply(f).show()

[OUT]
+-----+-------------------+
|   id|     recording_date|
+-----+-------------------+
|    1|2016-04-01 00:00:00|
+-----+-------------------+

I think this solution is what the error message itself suggests, but I don't understand why one works and the other doesn't. Maybe someone can explain in more detail why this solves the problem.
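
Carrying the idea back to the question's series DataFrame, a minimal sketch of the same workaround (the ts_schema and normalize_dates names are made up here, and the pass-through body only illustrates the dtype handling, not the bar-building logic) would be to declare Date as a timestamp in the schema handed to pandas_udf and to coerce the column to datetime64 before returning it:

from pyspark.sql.types import StructType, StructField, TimestampType, DoubleType

# same columns as series, but Date declared as timestamp instead of date
ts_schema = StructType([
    StructField('Date', TimestampType(), True),
    StructField('Close', DoubleType(), True),
    StructField('Volume', DoubleType(), True),
    StructField('Open', DoubleType(), True),
    StructField('High', DoubleType(), True),
    StructField('Low', DoubleType(), True),
])

@pandas_udf(ts_schema, PandasUDFType.GROUPED_MAP)
def normalize_dates(pdf):
    # .dt only works on datetime-like columns; datetime.date objects arrive as dtype 'object',
    # so force datetime64[ns] before handing the frame back to Spark
    pdf['Date'] = pd.to_datetime(pdf['Date'])
    return pdf

series.groupBy('Date').apply(normalize_dates).show()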

Did you check whether the Date column contains any missing values?
@ndricca Yes, I checked and there are no missing values.
I had some trouble reproducing your error: with your sample data I get a KeyError inside the pandas_udf, right at the end, because the for loop for i in range(initital_T, len(pdf)): turns out to be empty. Tomorrow I will try to simulate more data.
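
For what it's worth, that KeyError can be reproduced without Spark by calling the wrapped function directly on a pandas frame (process_column.func should hold the undecorated Python function): with fewer than initital_T rows the loop never runs, res stays empty, and pd.DataFrame(res).set_index('Date') fails because the empty frame has no 'Date' column. A small synthetic check along those lines, with made-up values:

n = 20  # same number of rows as the sample shown above, i.e. fewer than initital_T
pdf = pd.DataFrame({
    'Date': pd.date_range('2015-06-01', periods=n, freq='D'),
    'Close': 2100 + np.cumsum(np.random.randn(n)),
    'Volume': np.random.uniform(8e5, 2e6, n),
})
for c in ['Open', 'High', 'Low']:
    pdf[c] = pdf['Close']
process_column.func(pdf)   # raises KeyError: 'Date' because res is empty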