Python PySpark: calculating an exponential moving average


I want to calculate a 9-period exponential moving average.

Below is my data:

+---------+-------------------+--------+--------+--------+--------+
|   Symbol|           DateTime|    Open|    High|     Low|   Close|
+---------+-------------------+--------+--------+--------+--------+
|BANKNIFTY|2019-01-01 09:15:00| 27235.5|27239.55| 27087.7| 27119.8|
|BANKNIFTY|2019-01-01 09:30:00| 27120.3| 27123.3|27073.85| 27103.6|
|BANKNIFTY|2019-01-01 09:45:00| 27104.7| 27119.1|27035.25| 27052.7|
|BANKNIFTY|2019-01-01 10:00:00|27051.65|27078.35| 27038.8| 27070.7|
|BANKNIFTY|2019-01-01 10:15:00|   27071|27092.85| 27061.3| 27076.9|
|BANKNIFTY|2019-01-01 10:30:00| 27078.2| 27099.1|27064.95| 27079.1|
|BANKNIFTY|2019-01-01 10:45:00|27077.85| 27094.5|27057.45|27085.65|
|BANKNIFTY|2019-01-01 11:00:00|27082.85| 27116.5|27082.35| 27112.7|
|BANKNIFTY|2019-01-01 11:15:00|27111.75| 27119.1| 27081.6|27083.35|
|BANKNIFTY|2019-01-01 11:30:00|27084.25|27106.65|27080.65| 27099.3|
|BANKNIFTY|2019-01-01 11:45:00| 27100.5|   27114| 27095.3|27109.15|
|BANKNIFTY|2019-01-01 12:00:00|27108.95|27122.85|27105.55| 27111.9|
|BANKNIFTY|2019-01-01 12:15:00| 27114.4| 27121.1|27086.55| 27087.8|
|BANKNIFTY|2019-01-01 12:30:00|27087.95|27088.45| 27060.4| 27074.3|
|BANKNIFTY|2019-01-01 12:45:00| 27072.8| 27081.7|27054.75| 27062.2|
|BANKNIFTY|2019-01-01 13:00:00|27062.35|27094.55| 27059.5|27093.15|
|BANKNIFTY|2019-01-01 13:15:00|27093.45|27094.85|27075.65|27085.05|
|BANKNIFTY|2019-01-01 13:30:00| 27084.8|27087.15|27019.05| 27043.2|
|BANKNIFTY|2019-01-01 13:45:00|27045.75|27068.85|27035.85|27062.35|
|BANKNIFTY|2019-01-01 14:00:00|27062.45|27100.35|   27057|27080.65|
|BANKNIFTY|2019-01-01 14:15:00|27081.05|27188.25| 27076.6|27188.25|
|BANKNIFTY|2019-01-01 14:30:00|27186.85|27309.45| 27179.8|27294.15|
|BANKNIFTY|2019-01-01 14:45:00| 27294.1|27356.85| 27294.1|27346.35|
|BANKNIFTY|2019-01-01 15:00:00| 27346.1|27395.35|27326.05| 27386.8|
|BANKNIFTY|2019-01-01 15:15:00|   27385|27430.55|   27380|27411.15|
+---------+-------------------+--------+--------+--------+--------+
I want to add an EMA column that holds the exponential moving average of the closing price over the last 9 periods.

The formula for calculating the EMA:

Multiplier = 2 ÷ (number of time periods + 1) => 2 ÷ (9 + 1) => 2 ÷ 10 => 0.2
EMA = {Close - EMA(previous row)} x Multiplier + EMA(previous row)
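The formula above can be sketched in plain Python. This is my own illustration, not part of the question: the helper name `ema` is hypothetical, and it seeds the recursion with the simple average of the first 9 closes, which is the standard convention and reproduces the desired output shown below (27087.17 on row 9, 27089.59 on row 10).

```python
def ema(closes, periods=9):
    """Return a list of EMA values; the first periods-1 entries are None."""
    multiplier = 2 / (periods + 1)          # 2 / (9 + 1) = 0.2
    out = [None] * len(closes)
    if len(closes) < periods:
        return out
    # Seed: simple moving average of the first `periods` closes.
    prev = sum(closes[:periods]) / periods
    out[periods - 1] = prev
    # Recurrence: EMA = (Close - previous EMA) * multiplier + previous EMA
    for i in range(periods, len(closes)):
        prev = (closes[i] - prev) * multiplier + prev
        out[i] = prev
    return out

closes = [27119.8, 27103.6, 27052.7, 27070.7, 27076.9, 27079.1,
          27085.65, 27112.7, 27083.35, 27099.3, 27109.15]
print([round(v, 2) if v is not None else None for v in ema(closes)])
```

Note that the first 8 rows have no EMA at all; the 9th row's value is the seed average rather than an application of the recurrence.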
Desired output:

+---------+----------------+--------+--------+--------+--------+--------+
|   Symbol|        DateTime|    Open|    High|     Low|   Close|     EMA|
+---------+----------------+--------+--------+--------+--------+--------+
|BANKNIFTY|01-01-2019 09:15| 27235.5|27239.55| 27087.7| 27119.8|        |
|BANKNIFTY|01-01-2019 09:30| 27120.3| 27123.3|27073.85| 27103.6|        |
|BANKNIFTY|01-01-2019 09:45| 27104.7| 27119.1|27035.25| 27052.7|        |
|BANKNIFTY|01-01-2019 10:00|27051.65|27078.35| 27038.8| 27070.7|        |
|BANKNIFTY|01-01-2019 10:15|   27071|27092.85| 27061.3| 27076.9|        |
|BANKNIFTY|01-01-2019 10:30| 27078.2| 27099.1|27064.95| 27079.1|        |
|BANKNIFTY|01-01-2019 10:45|27077.85| 27094.5|27057.45|27085.65|        |
|BANKNIFTY|01-01-2019 11:00|27082.85| 27116.5|27082.35| 27112.7|        |
|BANKNIFTY|01-01-2019 11:15|27111.75| 27119.1| 27081.6|27083.35|27087.17|
|BANKNIFTY|01-01-2019 11:30|27084.25|27106.65|27080.65| 27099.3|27089.59|
|BANKNIFTY|01-01-2019 11:45| 27100.5|   27114| 27095.3|27109.15| 27093.5|
|BANKNIFTY|01-01-2019 12:00|27108.95|27122.85|27105.55| 27111.9|27097.18|
|BANKNIFTY|01-01-2019 12:15| 27114.4| 27121.1|27086.55| 27087.8|27095.31|
|BANKNIFTY|01-01-2019 12:30|27087.95|27088.45| 27060.4| 27074.3|27091.11|
|BANKNIFTY|01-01-2019 12:45| 27072.8| 27081.7|27054.75| 27062.2|27085.32|
|BANKNIFTY|01-01-2019 13:00|27062.35|27094.55| 27059.5|27093.15|27086.89|
|BANKNIFTY|01-01-2019 13:15|27093.45|27094.85|27075.65|27085.05|27086.52|
|BANKNIFTY|01-01-2019 13:30| 27084.8|27087.15|27019.05| 27043.2|27077.86|
|BANKNIFTY|01-01-2019 13:45|27045.75|27068.85|27035.85|27062.35|27074.76|
|BANKNIFTY|01-01-2019 14:00|27062.45|27100.35|   27057|27080.65|27075.93|
|BANKNIFTY|01-01-2019 14:15|27081.05|27188.25| 27076.6|27188.25| 27098.4|
|BANKNIFTY|01-01-2019 14:30|27186.85|27309.45| 27179.8|27294.15|27137.55|
|BANKNIFTY|01-01-2019 14:45| 27294.1|27356.85| 27294.1|27346.35|27179.31|
|BANKNIFTY|01-01-2019 15:00| 27346.1|27395.35|27326.05| 27386.8|27220.81|
|BANKNIFTY|01-01-2019 15:15|   27385|27430.55|   27380|27411.15|27258.88|
+---------+----------------+--------+--------+--------+--------+--------+

Link to the formula:

Your question is not entirely clear, because your test data set does not span multiple days, so the EMA cannot use days prior to the data in your example.

However, to create the dataset, you need:

from datetime import datetime

from pyspark.sql import SQLContext, Window
from pyspark.sql import functions as F
from pyspark.sql.types import FloatType, StringType, StructField, StructType, TimestampType

schema = StructType(
    [
        StructField("Symbol", StringType(), nullable=False),
        StructField("DateTime", TimestampType(), nullable=False),
        StructField("Open", FloatType(), nullable=False),
        StructField("High", FloatType(), nullable=False),
        StructField("Low", FloatType(), nullable=False),
        StructField("Close", FloatType(), nullable=False),
    ]
)
data = [
    ("BANKNIFTY", datetime(2019, 1, 1, 9, 15), 27235.5, 27239.55, 27087.7, 27119.8),
    ("BANKNIFTY", datetime(2019, 1, 1, 9, 30), 27120.3, 27123.3, 27073.85, 27103.6),
    ("BANKNIFTY", datetime(2019, 1, 1, 9, 45), 27104.7, 27119.1, 27035.25, 27052.7),
    ...
]
sql_context = SQLContext(sc)
df = sql_context.createDataFrame(data, schema)
From there, you just need to create the window:

w = Window().partitionBy("Symbol").orderBy(F.col("DateTime"))
Then use the window to create the new column:

df = df.withColumn(
    "EMA",
    F.avg("Close").over(w)
)

Note: here I have just used a simple average, not the EMA you asked for. To get the EMA, I would need to add an extra column that computes the EMA for each day and then use it in the calculation.
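For reference (this is my own sketch, not part of the original answer): because the EMA is recursive, one practical way to compute it in PySpark is to run a pandas function per symbol via `groupBy(...).applyInPandas(...)` (available since Spark 3.0). The helper name `add_ema` below is hypothetical; it sorts each group by time, seeds with the simple average of the first 9 closes, and then applies the recurrence from the question.

```python
import pandas as pd

def add_ema(pdf: pd.DataFrame, periods: int = 9) -> pd.DataFrame:
    """Add an EMA column to one symbol's rows (seeded with a 9-period SMA)."""
    pdf = pdf.sort_values("DateTime").reset_index(drop=True)
    multiplier = 2 / (periods + 1)
    closes = pdf["Close"].tolist()
    ema = [None] * len(closes)
    if len(closes) >= periods:
        prev = sum(closes[:periods]) / periods   # seed: simple moving average
        ema[periods - 1] = prev
        for i in range(periods, len(closes)):
            prev = (closes[i] - prev) * multiplier + prev
            ema[i] = prev
    pdf["EMA"] = ema
    return pdf

# In Spark this would be applied per symbol, e.g. (sketch):
# df = df.groupBy("Symbol").applyInPandas(
#     add_ema,
#     schema="Symbol string, DateTime timestamp, Open float, "
#            "High float, Low float, Close float, EMA double",
# )
```

The per-group function is plain pandas, so it can be unit-tested without a Spark session.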

Check this - sorry for the misunderstanding. By "previous day" I meant the previous row's EMA value. I just need to find the EMA value using the standard EMA calculation. Also, since I am computing a 9-period EMA, the first 8 rows will not have any EMA value; the EMA values start from the 9th row onward.

Thanks for the clarification. In that case, what you need is the
lag
function, for example
F.lag(F.col("Close"), 9)
where 9 is the number of rows to look back. The answer you shared applies to a simple moving average calculation; let me know if you need the answer updated for the EMA calculation.