
Scala: Error when connecting to HBase from Spark using Amazon EMR?


I am trying to connect to an HBase table from Spark on Amazon EMR. I am using the following driver versions:

HBase: 1.1.2.2.3.4.0-3485
Phoenix driver: 4.2.0.2.2.0.0-2041

When I run my fat jar on EMR, I get the following error. I have tried to resolve it, but I am stuck:
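For context, the stack trace below (JDBCRelation, PhoenixDriver) indicates the job reads a Phoenix table through Spark's generic JDBC data source. A minimal sketch of that kind of read follows; the table name, ZooKeeper quorum, and app name are hypothetical, not taken from the original code:

```scala
import org.apache.spark.sql.SparkSession

object PhoenixJdbcReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("phoenix-read").getOrCreate()

    // Spark's generic JDBC source, pointed at the Phoenix thick driver.
    // With a conflicting protobuf on the classpath, load() is where the
    // IllegalAccessError in the trace below would surface.
    val df = spark.read
      .format("jdbc")
      .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
      .option("url", "jdbc:phoenix:zk-host:2181:/hbase") // hypothetical quorum
      .option("dbtable", "MY_TABLE")                     // hypothetical table
      .load()

    df.show()
    spark.stop()
  }
}
```

This is only a sketch of the access pattern implied by the trace, not the asker's actual `spid_part1.scala`.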

 java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1658)
        at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1613)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:924)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1168)
        at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:349)
        at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:215)
        at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:159)
        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:304)
        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:294)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:215)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:210)
        at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:183)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:127)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:345)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
        at spid_part1$.main(spid_part1.scala:71)
        at spid_part1.main(spid_part1.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1176)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1169)
        at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1646)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Any help on this? Thanks.

Were you able to solve this? I am running into the same issue.

No, I have not solved it yet; I am still looking for a solution.

A fat jar does not help with version conflicts in Spark, because your jar runs after the Spark libraries have already been loaded into the JVM, so the versions will always conflict. If you have a version problem you are essentially stuck, unless the dependent library is small enough and does not conflict with third-party dependencies of its own. In that case, shade the library and it should no longer conflict.
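The shading suggested in the last comment can be expressed as an sbt-assembly shade rule that relocates the conflicting protobuf classes. This is a sketch under assumptions: it presumes the project builds with sbt and the sbt-assembly plugin, and the `shaded.` prefix is an arbitrary choice. Note that `HBaseZeroCopyByteString` deliberately lives in the `com.google.protobuf` package, so the rule must relocate the whole package together, keeping it and `LiteralByteString` in the same (renamed) package and loaded by the same classloader:

```scala
// build.sbt fragment (assumes the sbt-assembly plugin is enabled).
// Relocate every com.google.protobuf class bundled into the fat jar so it
// cannot clash with the protobuf classes already on Spark's classpath.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.protobuf.**" -> "shaded.com.google.protobuf.@1").inAll
)
```

An alternative often used for this specific `HBaseZeroCopyByteString` error is to put the `hbase-protocol` jar directly on the driver and executor classpaths (e.g. via `spark.driver.extraClassPath` / `spark.executor.extraClassPath`) so both classes come from one jar, rather than shading.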