spark-redshift works in the pyspark shell but fails when using spark-submit

Our cluster is running Spark 2.0.1, which uses Scala 2.11.8.

According to this document:

I should use: groupId: com.databricks, artifactId: spark-redshift_2.11, version: 3.0.0-preview1

I am able to query from and write back to tables with either of:

- spark-shell --packages com.databricks:spark-redshift_2.11:3.0.0-preview1
- pyspark --packages com.databricks:spark-redshift_2.11:3.0.0-preview1
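For context, the reads and writes go through the spark-redshift data source, along the lines of this minimal sketch (the JDBC URL, table names, and S3 tempdir are placeholders, not our real values):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("redshift-test").getOrCreate()

    # Read a table through the spark-redshift data source.
    # url, dbtable, and tempdir below are placeholders.
    df = (spark.read
          .format("com.databricks.spark.redshift")
          .option("url", "jdbc:redshift://host:5439/mydb?user=...&password=...")
          .option("dbtable", "my_table")
          .option("tempdir", "s3n://my-bucket/tmp")
          .load())

    # Write the result back to another (placeholder) table.
    (df.write
       .format("com.databricks.spark.redshift")
       .option("url", "jdbc:redshift://host:5439/mydb?user=...&password=...")
       .option("dbtable", "my_table_copy")
       .option("tempdir", "s3n://my-bucket/tmp")
       .mode("error")
       .save())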
But if I run the same code from a pyspark script using spark-submit, I get the error below.

I run spark-submit with --packages com.databricks:spark-redshift_2.11:3.0.0-preview1.
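In full, the failing invocation looks roughly like this (my_script.py stands in for our actual pyspark script):

- spark-submit --packages com.databricks:spark-redshift_2.11:3.0.0-preview1 my_script.py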

I get the following error:

Caused by: java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
    at com.databricks.spark.redshift.RecordReaderIterator.<init>(RecordReaderIterator.scala:30)
    at com.databricks.spark.redshift.RedshiftFileFormat$$anonfun$buildReader$1.apply(RedshiftFileFormat.scala:93)
    at com.databricks.spark.redshift.RedshiftFileFormat$$anonfun$buildReader$1.apply(RedshiftFileFormat.scala:80)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:279)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:263)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:116)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 26 more

Any idea why spark-submit fails?

Comments:

- If you have installed Spark in standalone (local) mode, use these steps to run it locally; you can swap in the right dependencies later.
- Spark is installed on a cluster (YARN), but I am running it in local mode.
- Add these jars to the IDE build path and let me know.
- @PraveenKumarKrishnaiyer I have already added the jars to spark.driver.extraClassPath and spark.executor.extraClassPath in /etc/spark/conf.dist/spark-defaults.conf. Do I need to add the jars anywhere else?
- Please check whether earlier versions of the jars you added are already installed; if so, replace those jars individually. Also, if you are using an IDE (such as IntelliJ or Eclipse), I would stick with adding the jar files to the project's build path.
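For reference, the extraClassPath entries mentioned above would look roughly like this in /etc/spark/conf.dist/spark-defaults.conf (the jar path is a placeholder for wherever the jars actually live):

    spark.driver.extraClassPath     /path/to/spark-redshift_2.11-3.0.0-preview1.jar
    spark.executor.extraClassPath   /path/to/spark-redshift_2.11-3.0.0-preview1.jar

One thing the last comment hints at: if an older Scala 2.10 build of any of these jars also sits on either classpath, it can shadow the 2.11 artifact pulled in by --packages, and a NoClassDefFoundError on scala.collection internals is a typical symptom of that kind of Scala version mismatch.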