Apache Spark: how can I see the SQL statements that Spark sends to my database?

Tags: apache-spark, pyspark, vertica, pyspark-sql

I have a Spark cluster and a Vertica database. I load a Spark DataFrame into the cluster with

spark.read.jdbc( # etc
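(For context, the full call looks roughly like the sketch below; the host, port, database, table name, and credentials are hypothetical placeholders:)

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# All connection details here are placeholders -- substitute your own.
df = spark.read.jdbc(
    url="jdbc:vertica://vertica-host:5433/mydb",
    table="my_schema.pnl_table",
    properties={
        "user": "dbadmin",
        "password": "secret",
        "driver": "com.vertica.jdbc.Driver",
    },
)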
When I then run a groupby and aggregation, for example

from pyspark.sql import functions as F  # needed for F.stddev

df2 = df.groupby('factor').agg(F.stddev('sum(PnL)'))
df2.show()
I then get a Vertica syntax exception:

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1916)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2193)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2199)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1935)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1934)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2576)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1934)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2149)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLSyntaxErrorException: [Vertica][VJDBC](4856) ERROR: Syntax error at or near "Window"
    at com.vertica.util.ServerErrorData.buildException(Unknown Source)
    at com.vertica.io.ProtocolStream.readExpectedMessage(Unknown Source)
    at com.vertica.dataengine.VDataEngine.prepareImpl(Unknown Source)
    at com.vertica.dataengine.VDataEngine.prepare(Unknown Source)
    at com.vertica.dataengine.VDataEngine.prepare(Unknown Source)
    at com.vertica.jdbc.common.SPreparedStatement.<init>(Unknown Source)
    at com.vertica.jdbc.jdbc4.S4PreparedStatement.<init>(Unknown Source)
    at com.vertica.jdbc.VerticaJdbc4PreparedStatementImpl.<init>(Unknown Source)
    at com.vertica.jdbc.VJDBCObjectFactory.createPreparedStatement(Unknown Source)
    at com.vertica.jdbc.common.SConnection.prepareStatement(Unknown Source)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:400)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:379)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: com.vertica.support.exceptions.SyntaxErrorException: [Vertica][VJDBC](4856) ERROR: Syntax error at or near "Window"
    ... 27 more
What I want to know is: what exactly is Spark executing against the Vertica database? Is there a trace configuration I can set somewhere?

Thanks!
You can watch what Spark is doing in the Spark web UI at http://<host ip>:4040. The SQL tab shows each query's physical plan, including the JDBC scan node and any filters pushed down to Vertica; calling df2.explain(True) prints the same plan to the console.
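If you need the literal statement text, you can also ask Vertica what it received. A minimal sketch, assuming the same (placeholder) connection details as above and that your user is allowed to read the v_monitor.query_requests system table:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical connection details -- substitute your own.
url = "jdbc:vertica://vertica-host:5433/mydb"
props = {
    "user": "dbadmin",
    "password": "secret",
    "driver": "com.vertica.jdbc.Driver",
}

# v_monitor.query_requests records each statement Vertica receives.
# Passing a parenthesized subquery as the "table" lets Spark run it as-is.
recent = spark.read.jdbc(
    url=url,
    table="(SELECT start_timestamp, request "
          "FROM v_monitor.query_requests "
          "ORDER BY start_timestamp DESC LIMIT 20) AS q",
    properties=props,
)
recent.show(truncate=False)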