Linux can't run `spark-submit` via a system call from inside Scala, apparently because the `--jars` argument (with a `*` wildcard) is not expanded


The following `spark-submit` call runs fine from the shell:

/bin/bash -c '/local/spark-2.3.1-bin-hadoop2.7/bin/spark-submit --class analytics.tiger.agents.spark.Orsp --master spark://analytics.broadinstitute.org:7077 --deploy-mode client --executor-memory 1024m --conf spark.app.id=Orsp --conf spark.executor.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --conf spark.driver.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --jars "/home/unix/analytics/TigerETL3/spark-jars/*.jar" /home/unix/analytics/TigerETL3/spark-agents.jar'
However, when I simply turn it into a system call from Scala, like this:

val cmd = Seq("/bin/bash", "-c", s"""/local/spark-2.3.1-bin-hadoop2.7/bin/spark-submit --class analytics.tiger.agents.spark.Orsp --master spark://analytics.broadinstitute.org:7077 --deploy-mode client --executor-memory 1024m --conf spark.app.id=Orsp --conf spark.executor.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --conf spark.driver.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --jars "/home/unix/analytics/TigerETL3/spark-jars/*.jar" /home/unix/analytics/TigerETL3/spark-agents.jar""")
import scala.sys.process._
val log = cmd.lineStream.toList
println(log.mkString)
it throws an error:

Warning: Local jar /home/unix/analytics/TigerETL3/spark-jars/*.jar does not exist, skipping.
Exception in thread "main" java.lang.NoClassDefFoundError: scalikejdbc/DB
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:739)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: scalikejdbc.DB
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 10 more
The exception shows that the `*.jar` pattern is, for some reason, not being expanded (even though it works fine in the shell). Enumerating all the jars in a comma-separated list is not appealing: it would be a monster of a list, 187 jars. I've tried every trick I can think of and failed miserably; I haven't been this frustrated in a long time.
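If listing the jars by hand is the only obstacle, one workaround is to build the comma-separated list (which `--jars` accepts) programmatically instead of relying on shell globbing. A minimal sketch; the directory path is the one from the question and `jarCsv` is a hypothetical helper name:

```scala
import java.io.File

// Build the comma-separated jar list that --jars expects, expanding the
// directory ourselves instead of depending on the shell's * wildcard.
def jarCsv(dir: String): String =
  Option(new File(dir).listFiles)        // listFiles returns null if dir is missing
    .getOrElse(Array.empty[File])
    .filter(_.getName.endsWith(".jar"))
    .map(_.getAbsolutePath)
    .sorted
    .mkString(",")

// Usage (path from the question):
// s"""... --jars ${jarCsv("/home/unix/analytics/TigerETL3/spark-jars")} ..."""
```

This sidesteps the expansion question entirely, at the cost of one extra directory listing.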

Thanks for any help!

You need to remove the double quotes `""` when specifying `--jars`. Could you try this:

val cmd = Seq("/bin/bash", "-c", s"""/local/spark-2.3.1-bin-hadoop2.7/bin/spark-submit --class analytics.tiger.agents.spark.Orsp --master spark://analytics.broadinstitute.org:7077 --deploy-mode client --executor-memory 1024m --conf spark.app.id=Orsp --conf spark.executor.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --conf spark.driver.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --jars /home/unix/analytics/TigerETL3/spark-jars/*.jar /home/unix/analytics/TigerETL3/spark-agents.jar""")
import scala.sys.process._
val log = cmd.lineStream.toList
println(log.mkString)

OK, I figured it out. I had to read through Spark's launch scripts to realize that when SPARK_HOME and JAVA_HOME are missing, Spark goes through a series of steps to infer them. My original Scala command (double quotes included) was perfectly fine; I just needed to define those two variables, like this:

val cmd = Seq("/bin/bash", "-c", s"""JAVA_HOME=/broad/software/free/Linux/redhat_7_x86_64/pkgs/jdk1.8.0_121 SPARK_HOME=/local/spark-2.3.1-bin-hadoop2.7 /local/spark-2.3.1-bin-hadoop2.7/bin/spark-submit --class analytics.tiger.agents.spark.Orsp --master spark://analytics.broadinstitute.org:7077 --deploy-mode client --executor-memory 1024m --conf spark.app.id=Orsp --conf spark.executor.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --conf spark.driver.extraJavaOptions=-Dconfig.file=/home/unix/analytics/TigerETL3/application.conf --jars "/home/unix/analytics/TigerETL3/spark-jars/*.jar" /home/unix/analytics/TigerETL3/spark-agents.jar""")

and it works like a charm.
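Instead of prefixing the variables inside the command string, the same fix can be expressed with `scala.sys.process.Process`'s `extraEnv` parameter. A sketch of the mechanism; the actual `spark-submit` command line is elided here, with `echo` standing in so the example stays self-contained, and the paths are the ones from the answer above:

```scala
import scala.sys.process._

// Environment the child process should see (paths from the question;
// adjust for your machine).
val env = Seq(
  "JAVA_HOME"  -> "/broad/software/free/Linux/redhat_7_x86_64/pkgs/jdk1.8.0_121",
  "SPARK_HOME" -> "/local/spark-2.3.1-bin-hadoop2.7"
)

// In the real call, replace the echo with the full spark-submit command;
// bash -c is kept so the "*.jar" glob is still expanded by the shell.
val cmd = Seq("/bin/bash", "-c", "echo SPARK_HOME=$SPARK_HOME")

val log = Process(cmd, None, env: _*).lineStream.toList
```

This keeps the command string itself free of environment plumbing, which is easier to read when the argument list is already long.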

I removed the double quotes as suggested, and now I get a
java.lang.ClassNotFoundException: analytics.tiger.agents.spark.Orsp
which makes no sense, because I'm certain the class is in the jar supplied as the rightmost argument of the call. I read somewhere that in script mode the jars don't get picked up (whereas in interactive mode they do, and all the jars in that folder are added correctly).