Apache Spark: unable to run query in PySpark

Tags: apache-spark, pyspark, apache-spark-sql

I need to select, for each year, the month with the most occurrences, ordered from highest to lowest. On SQL Server this query works:

SELECT TOP 1 WITH TIES month, year, COUNT(day) AS occurrences
FROM occurrences_sample
GROUP BY year, month
ORDER BY ROW_NUMBER() OVER (PARTITION BY year ORDER BY COUNT(day) DESC)
But reading online I learned that TOP has to be replaced with LIMIT, which I did:

SELECT month, year, COUNT(day) AS occurrences
FROM occurrences_sample
GROUP BY year, month
ORDER BY ROW_NUMBER() OVER (PARTITION BY year ORDER BY COUNT(day) DESC)
LIMIT 1
But in PySpark it doesn't work. I'm very new to PySpark; how can I get this to run?

This is the traceback:

Traceback (most recent call last):
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/main.py", line 24, in <module>
    df_mes_anual_maior_ocorrencia = sqlContext.sql("SELECT  month, year, COUNT(day) as occurrences FROM occurrences_sample GROUP BY year, month ORDER BY ROW_NUMBER() OVER(PARTITION BY year ORDER BY COUNT(day) DESC ) limit 1")
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/venv/lib/python3.7/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/context.py", line 371, in sql
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/venv/lib/python3.7/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 649, in sql
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/venv/lib/python3.7/site-packages/pyspark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/venv/lib/python3.7/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 128, in deco
  File "/home2/ead2020/SEM2/andre.dilay/PycharmProjects/ATP - Andre Filay/venv/lib/python3.7/site-packages/pyspark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sql.
: java.lang.ClassCastException: org.apache.spark.sql.catalyst.plans.logical.Project cannot be cast to org.apache.spark.sql.catalyst.plans.logical.Aggregate
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$$anonfun$apply$20.applyOrElse(Analyzer.scala:2177)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$$anonfun$apply$20.applyOrElse(Analyzer.scala:2154)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$3(AnalysisHelper.scala:90)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:90)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:84)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$2(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:84)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$2(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:84)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$.apply(Analyzer.scala:2154)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$.apply(Analyzer.scala:2153)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
        at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
        at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
        at scala.collection.immutable.List.foldLeft(List.scala:89)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:176)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:170)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:130)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:116)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:116)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:154)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:153)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:68)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:68)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:66)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:58)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)


I am running the query using sqlContext.
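
For context, a minimal sketch of how such a query might be submitted; the SparkSession setup, sample data, and column types are assumptions for illustration, since the post only shows the sqlContext.sql call:

from pyspark.sql import SparkSession

# In Spark 2.x+, SQLContext.sql delegates to SparkSession.sql, so a
# SparkSession is enough to reproduce the behaviour.
spark = SparkSession.builder.appName("occurrences").getOrCreate()

# Hypothetical sample data standing in for the real occurrences_sample table.
df = spark.createDataFrame(
    [(2019, 1, 5), (2019, 1, 6), (2019, 2, 7), (2020, 3, 1), (2020, 3, 2)],
    ["year", "month", "day"],
)
df.createOrReplaceTempView("occurrences_sample")

# This reproduces the failure: Spark SQL rejects a window function such as
# ROW_NUMBER() OVER (...) placed directly in the ORDER BY of an aggregate query.
spark.sql("""
    SELECT month, year, COUNT(day) AS occurrences
    FROM occurrences_sample
    GROUP BY year, month
    ORDER BY ROW_NUMBER() OVER (PARTITION BY year ORDER BY COUNT(day) DESC)
    LIMIT 1
""")  # raises the Py4JJavaError shown in the traceback above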

Now the error has changed, so I think my query itself is wrong: "The query operator Sort contains one or more unsupported expression types Aggregate, Window or Generate. Invalid expressions: [row_number() OVER (PARTITION BY occurrences_sample.year ORDER BY occurrences DESC NULLS LAST ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)]". One variant did run, but it returned only a single row; what I expect is one row per year, with the month that has the highest count in that year.

It seems Spark does not handle window functions in ORDER BY the way standard SQL does. Computing the rank in a subquery and filtering on it should work; hope this does the trick:
SELECT * FROM 
(SELECT *, RANK() OVER(PARTITION BY year ORDER BY occurrences DESC) AS rn
FROM
(SELECT month, year, COUNT(day) as occurrences
FROM occurrences_sample
GROUP BY year, month)) 
WHERE rn = 1;
Yes, this works, thank you very much :-)
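
For reference, the same result can be obtained with the PySpark DataFrame API; a minimal sketch reusing the assumed df from the setup above (RANK keeps ties within a year, matching the accepted SQL):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Aggregate first, then rank the months within each year by their count.
counts = df.groupBy("year", "month").agg(F.count("day").alias("occurrences"))

w = Window.partitionBy("year").orderBy(F.desc("occurrences"))

# rank() == 1 keeps every month tied for the yearly maximum.
top_per_year = (
    counts.withColumn("rn", F.rank().over(w))
          .where(F.col("rn") == 1)
          .drop("rn")
)
top_per_year.orderBy(F.desc("occurrences")).show()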