Python: Windowing and aggregating a PySpark DataFrame

I'm trying to process incoming events from a socket, then window and aggregate the event data. I've hit a snag with the windowing: it seems that even though I specify a schema for the DataFrame, it is not converted into columns.

import sys
from pyspark.sql.types import StructType, StringType, TimestampType, FloatType, IntegerType, StructField

from pyspark.sql import SparkSession
import pyspark.sql.functions as F


if __name__ == "__main__":
    # our data currently looks like this (tab separated).
    # -SYMBOL   DATE            PRICE   TICKVOL BID         ASK
    # NQU7  2017-05-28T15:00:00 5800.50 12      5800.50     5800.50
    # NQU7  2017-05-28T15:00:00 5800.50 1       5800.50     5800.50
    # NQU7  2017-05-28T15:00:00 5800.50 5       5800.50     5800.50
    # NQU7  2017-05-28T15:00:00 5800.50 1       5800.50     5800.50

    if len(sys.argv) != 3:
        print("Usage: network_wordcount.py <hostname> <port>", file=sys.stderr)
        exit(-1)

    spark = SparkSession \
        .builder \
        .appName("StructuredTickStream") \
        .getOrCreate()
    sc = spark.sparkContext
    sc.setLogLevel('WARN')

    # Schema for the tab-separated tick events
    tickSchema = StructType([
        StructField("symbol", StringType(), True),
        StructField("dt", TimestampType(), True),
        StructField("price", FloatType(), True),
        StructField("tickvol", IntegerType(), True),
        StructField("bid", FloatType(), True),
        StructField("ask", FloatType(), True)
    ])

    events_df = spark \
        .readStream \
        .option("sep", "\t") \
        .option("host", sys.argv[1]) \
        .option("port", sys.argv[2]) \
        .format("socket") \
        .schema(tickSchema) \
        .load()

    events_df.printSchema()
    print("columns = ", events_df.columns)

    ohlc_df = events_df \
        .groupby(F.window("dt", "5 minutes", "1 minutes")) \
        .agg(
            F.first('price').alias('open'),
            F.max('price').alias('high'),
            F.min('price').alias('low'),
            F.last('price').alias('close')
        ) \
        .collect()


    query = ohlc_df \
        .writeStream \
        .outputMode("complete") \
        .format("console") \
        .start()

    query.awaitTermination()

Any idea what I'm doing wrong?

Your DataFrame has only one column, and here you are trying to access the column dt on that events_df. That is the main cause of the problem.

The statement below clearly shows that it has a single column:

print("columns = ", events_df.columns)
You need to take a closer look at this:

events_df = spark \
    .readStream \
    .option("sep", "\t") \
    .option("host", sys.argv[1]) \
    .option("port", sys.argv[2]) \
    .format("socket") \
    .schema(tickSchema) \
    .load()
Why does this create a df with only one column? Because the socket source always produces a single string column named value; the schema(tickSchema) you pass is not applied to it (recent Spark versions actually raise an error if you try to set a user-specified schema on the socket source).
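
A minimal sketch of one way around this, assuming Spark 2.2+ and reusing the question's variable names: drop the schema(tickSchema) call, take the socket source's single value column, and split and cast the tab-separated fields yourself. After that, dt is a real column and the windowed aggregation resolves. Note there is no .collect() here, since a streaming DataFrame cannot be collected; the aggregation goes straight to writeStream, as in the question. This is untested against the asker's feed.

import sys

from pyspark.sql import SparkSession
from pyspark.sql.types import TimestampType, FloatType, IntegerType
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("StructuredTickStream").getOrCreate()

# The socket source yields a single string column named 'value';
# do not pass a schema here.
raw_df = spark \
    .readStream \
    .format("socket") \
    .option("host", sys.argv[1]) \
    .option("port", sys.argv[2]) \
    .load()

# Split each tab-separated line and cast the pieces to the intended types.
parts = F.split(raw_df["value"], "\t")
events_df = raw_df.select(
    parts.getItem(0).alias("symbol"),
    parts.getItem(1).cast(TimestampType()).alias("dt"),
    parts.getItem(2).cast(FloatType()).alias("price"),
    parts.getItem(3).cast(IntegerType()).alias("tickvol"),
    parts.getItem(4).cast(FloatType()).alias("bid"),
    parts.getItem(5).cast(FloatType()).alias("ask"),
)

# 'dt' now exists, so the sliding window can be built on it.
ohlc_df = events_df \
    .groupBy(F.window("dt", "5 minutes", "1 minute")) \
    .agg(
        F.first("price").alias("open"),
        F.max("price").alias("high"),
        F.min("price").alias("low"),
        F.last("price").alias("close")
    )

query = ohlc_df \
    .writeStream \
    .outputMode("complete") \
    .format("console") \
    .start()

query.awaitTermination()

Alternatively, if the ticks are written as files to a directory, spark.readStream.schema(tickSchema).option("sep", "\t").csv(path) would honor the schema directly, since the file source, unlike the socket source, supports a user-specified schema.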
