
Duplicate guava.jar on the Java classpath

Tags: java, hbase, apache-storm

I am using storm-0.10 to write data into hbase-1.0.1. Storm uses guava-12.0 and HBase uses guava-18.0; both jars end up on the classpath, and this causes my job to fail.

How can I make sure Storm and HBase each use the correct version of the jar?

Here is my pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.0.0-cdh5.4.5</version>
        <exclusions>
            <exclusion>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.3.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.3.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.3.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.10.0</version>

    </dependency>

    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.10.0</version>

    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.zookeeper</groupId>
                <artifactId>zookeeper</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.json</groupId>
        <artifactId>org.json</artifactId>
        <version>2.0</version>
    </dependency>
</dependencies>

The exception:

java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:434) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1122) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1109) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1261) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1125) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1513) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1107) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:182) ~[stormjar.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:175) ~[stormjar.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.PrepaidFunction.execute(PrepaidFunction.java:79) ~[stormjar.jar:?]
at storm.trident.planner.processor.EachProcessor.execute(EachProcessor.java:65) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.planner.SubtopologyBolt$InitialReceiver.receive(SubtopologyBolt.java:206) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.planner.SubtopologyBolt.execute(SubtopologyBolt.java:146) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:370) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
If you are using Maven, you can exclude the conflicting Guava from the dependency that pulls it in:

<dependency>
  <groupId>sample.ProjectA</groupId>
  <artifactId>storm</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions> 
</dependency>
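If Guava is pulled in transitively from more than one place (both storm-core and hbase-client depend on it), another option is to pin a single version for the whole build with dependencyManagement instead of adding exclusions to every dependency. A minimal sketch; 12.0.1 is an assumed choice here and should be replaced by whichever Guava version both clients actually tolerate:

<dependencyManagement>
  <dependencies>
    <!-- Sketch only: forces every transitive request for Guava to resolve
         to this version. 12.0.1 is an assumption, not a verified choice. -->
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>12.0.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>

You can check which version actually wins in the resolved build with mvn dependency:tree.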

In my topology I use both Storm and HBase, so guava-12.0 should be on the classpath for HBase and guava-18.0 for Storm. I think the cleaner fix would be simply to upgrade HBase to the latest version (1.2.1). For now I worked around the problem by putting both guava-12.0.jar and guava-18.0.jar into storm/lib, but I don't think that is a good solution. Why don't the two jars on the classpath conflict with each other?
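Putting both jars into storm/lib does not really make them coexist: for duplicate class names the JVM simply loads the first definition it finds on the classpath, so only one Guava is ever in effect, and which one wins depends on jar ordering. A more robust option for a Storm topology is to relocate Guava inside the topology fat jar with the maven-shade-plugin, so the Guava that your code and hbase-client need can no longer collide with the one Storm ships. A minimal sketch; the plugin version and the shadedPattern name are assumptions, not requirements:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <!-- assumed version; use whatever your build already standardizes on -->
      <version>2.4.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <relocation>
                <!-- Rewrites Guava references in the classes packaged into the
                     topology jar to a private namespace, so they cannot clash
                     with the Guava in storm/lib -->
                <pattern>com.google.common</pattern>
                <shadedPattern>shaded.com.google.common</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

For this to work, the Guava version that hbase-client expects has to be packaged inside the topology jar (i.e. not excluded as in the pom above), so that the relocated copy is self-contained. Upgrading HBase, as suggested above, is also reasonable; relocation just removes the dependence on whichever Guava happens to win on the worker's classpath.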