Spark SQL Thrift Server on CDH 5.3.0

I am trying to run Spark's Thrift Server on CDH 5.3.0. I am following the Spark SQL instructions, but I cannot even get the --help option to run successfully. In the output below, it dies because it cannot find the HiveServer2 class:

$ /usr/lib/spark/sbin/start-thriftserver.sh --help
Usage: ./sbin/start-thriftserver [options] [thrift server options]
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.

Thrift server options:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hive/service/server/HiveServer2
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
Caused by: java.lang.ClassNotFoundException: org.apache.hive.service.server.HiveServer2
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 13 more

As the error indicates, the class is not on the classpath. Unfortunately, setting the CLASSPATH environment variable does not work. The only solution I could find was to edit /usr/lib/spark/bin/compute-classpath.sh and add this line (it can go almost anywhere, but put it on the last line to make clear that it is an addition):

CLASSPATH="$CLASSPATH:/usr/lib/hive/lib/*"

Cloudera explicitly states that Spark SQL is still an experimental, unsupported feature in CDH, so it is probably no surprise that tweaks like this may be needed. Also, a similar question about CDH 5.2 indicates that Cloudera deliberately excludes the hive jars for size reasons.

I ran into the same problem, but I solved it a different way.

My Cloudera CDH version was not 5.3.0 but something earlier, so you may find the paths slightly different.

The simple solution was to replace the spark-assembly-*.jar file that ships with Cloudera CDH with one from another version.

I downloaded Spark from its official download page; the build I picked was made for Hadoop 2.4 and later. I extracted the downloaded archive and looked for spark-assembly-*.jar.

In the Cloudera installation I looked for the same file and found it at /usr/lib/spark/lib/spark-assembly-*.jar.

That path is actually a symlink to the real file. I uploaded the jar from the Spark download to the same directory and re-pointed the symlink at the new jar (ln -f -s <target> <link>), as sketched below.

After that, everything worked fine for me.
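
A rough sketch of those steps (the Spark version, the assembly jar file name, and the spark-assembly.jar symlink name are illustrative assumptions; check your own installation):

# Extract a stock Spark build for Hadoop 2.4+ (version is illustrative)
tar -xzf spark-1.2.0-bin-hadoop2.4.tgz

# Copy its assembly jar next to the CDH one
sudo cp spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-*.jar /usr/lib/spark/lib/

# Re-point the symlink at the new jar: ln -f -s <target> <link>
sudo ln -f -s /usr/lib/spark/lib/spark-assembly-1.2.0-hadoop2.4.0.jar /usr/lib/spark/lib/spark-assembly.jar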

/usr/lib/spark/bin/compute-classpath.sh sets CLASSPATH=$SPARK_CLASSPATH. On a parcel-based CDH install, you can add the hive jars to SPARK_CLASSPATH like this:

SPARK_CLASSPATH=$(ls -1 /opt/cloudera/parcels/CDH/lib/hive/lib/*.jar | sed -e :a  -e 'N;s/\n/:/;ta') /opt/cloudera/parcels/CDH/lib/spark/sbin/start-thriftserver.sh --help
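
That sed expression simply joins the ls output into one colon-separated string; an equivalent and arguably more readable way to build the same list (assuming paste is available) would be:

# Join the hive jar paths with ':' and pass them to Spark via SPARK_CLASSPATH
SPARK_CLASSPATH=$(ls -1 /opt/cloudera/parcels/CDH/lib/hive/lib/*.jar | paste -sd: -) \
  /opt/cloudera/parcels/CDH/lib/spark/sbin/start-thriftserver.sh --help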

Instructions from the Cloudera Community forums:

-Phive and -Phive-thriftserver are the key pieces there.

There is an open request to add the Spark Thrift Server to CDH;
upvote it if you would like to see that happen.

Now in Spark 1.4 there is no file named compute-classpath.sh; where should I add the classpath?
compute-classpath.sh does not exist in CDH 5.5 either.. was it renamed or just removed? Any other workarounds?
Do you have to rebuild Spark?
No, I did what is described in the answer:
git clone https://github.com/cloudera/spark.git

cd spark

./make-distribution.sh -DskipTests \
  -Dhadoop.version=2.6.0-cdh5.7.0 \
  -Phadoop-2.6 \
  -Pyarn \
  -Phive -Phive-thriftserver \
  -Pflume-provided \
  -Phadoop-provided \
  -Phbase-provided \
  -Phive-provided \
  -Pparquet-provided
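
If the build succeeds, make-distribution.sh leaves a runnable Spark tree under dist/ (or a tarball if you pass --tgz); a sketch of starting the Thrift server from it (the master URL is illustrative):

cd dist
sbin/start-thriftserver.sh --master yarn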