Unable to submit a Storm topology from Java


I am trying to submit a Storm topology to a remote host from Eclipse.

Here is my code:

Config conf = new Config();
conf.setDebug(false);
conf.setNumWorkers(1);
conf.put(Config.NIMBUS_HOST, "hostName");
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList(new String[]{"hostName"}));
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);

// Remote submission
StormSubmitter.submitTopology("classMain", conf, topology);
But I get this exception:

Exception in thread "main" java.lang.RuntimeException: org.apache.thrift7.TApplicationException: Binary field exceeded string size limit
  at backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:250)
 at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:271)
  at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:157)
  at com.rbc.rbccm.hackathon.Countersearch.submitTopology(Countersearch.java:111)
  at com.rbc.rbccm.hackathon.Countersearch.main(Countersearch.java:37)
Caused by: org.apache.thrift7.TApplicationException: Binary field exceeded string size limit
  at org.apache.thrift7.TApplicationException.read(TApplicationException.java:111)
  at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:71)
  at backtype.storm.generated.Nimbus$Client.recv_submitTopology(Nimbus.java:184)
  at backtype.storm.generated.Nimbus$Client.submitTopology(Nimbus.java:168)
  at backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:236)
... 4 more
Is there a string size limit on the arguments we can pass to the submitTopology function?

When I follow the stack trace a little further, it leads to:

public void submitTopology(String name, String uploadedJarLocation, String jsonConf, StormTopology topology) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException, org.apache.thrift.TException
{
    send_submitTopology(name, uploadedJarLocation, jsonConf, topology);
    recv_submitTopology();
}

It is the recv_submitTopology() call that causes the problem. Any ideas?

You need to increase the nimbus.thrift.max_buffer_size parameter. You can set it either in storm.yaml or on the Config object.
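
For example, a minimal sketch of raising the limit on the submission Config (the 20 MB value below is only illustrative, and per the answer the same nimbus.thrift.max_buffer_size key can instead go into storm.yaml on the Nimbus host):

Config conf = new Config();
// Illustrative value only: raise the Thrift buffer limit to roughly 20 MB,
// i.e. larger than the serialized topology/conf being submitted.
// The same key can alternatively be set in storm.yaml on the Nimbus machine.
conf.put("nimbus.thrift.max_buffer_size", 20971520);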

If you look at that code in StormSubmitter.java in the Storm source, it looks like this:

public static void submitTopology(String name, Map stormConf, StormTopology topology)
        throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {
    submitTopology(name, stormConf, topology, null, null);
}
The Thrift error occurs because the specified name is too long (more than 2 MB?), or because stormConf carries too much information, or, more likely, because when you build the topology you are filling your spout or bolt instances with too much data.

In my case, I was creating a bolt that I had initialized with too much data:

builder.setBolt(genBolt, new GenBolt(myTable1.getHashMap(), myTable2.getHashMap(), myTable3.getHashMap(), myTable4.getHashMap()), 2)
  .fieldsGrouping(iterSpout, new Fields(con.BATCH_ID));
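
One common way around this (a sketch only; GenBolt and the table-loading helper are hypothetical stand-ins for the classes above, assuming the reference data can be fetched on the worker itself) is to load the heavy data in the bolt's prepare() method, which runs on the worker, instead of passing it to the constructor, which is serialized into the submitTopology request:

import java.util.HashMap;
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Sketch: keep the bolt's serialized form small by loading reference data
// in prepare() on the worker rather than in the constructor.
public class GenBolt extends BaseRichBolt {
    private transient Map<String, String> table1; // hypothetical lookup table
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Hypothetical: load the table from a database or file on the worker node.
        this.table1 = loadTable1();
    }

    @Override
    public void execute(Tuple input) {
        // ... use table1 to enrich or filter the tuple ...
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // declare output fields here
    }

    private Map<String, String> loadTable1() {
        return new HashMap<String, String>(); // placeholder for the real loading logic
    }
}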

How long is the absolute path of your jar file? Is "C:\\adasd\\sdsd\\workspace\\adasdsad\\target\\sample-mainClass-0.0.1.jar" too long? What happens when you try to submit the jar to Storm using the command-line client?

Thanks Matthias, but since max_buffer_size is deprecated now, how is this problem solved today? I am running into the same error.

The problem is that your jar file is too big. Storm 1.0 introduced a distributed cache; I have never tried it, but you should be able to upload the jar file there (instead of shipping it through Nimbus).