Ruby Logstash codec avro_schema_registry: NameError, undefined local variable or method `esponse'

Tags: ruby, apache-kafka, logstash, jruby, confluent-schema-registry

I have a Logstash conf that reads a Kafka topic in JSON format and uses the avro_schema_registry codec to serialize the output to Avro. Here is the conf file:

input {
  kafka{
    group_id => "test_group"
    topics => ["logs_json"]
    bootstrap_servers => "server2:9094, server1:9094, server3:9094"
    codec => "json"
    consumer_threads => 1
  }
}

output {
  kafka {
    codec => avro_schema_registry {
      endpoint => "http://host_schema_registry:port"
      schema_id  => 1
    }
    value_serializer => "org.apache.kafka.common.serialization.ByteArraySerializer"
    bootstrap_servers => "server1:9094, server1:9094, server1:9094"
    topic_id => "logs_avro"
  }
}
But I get this error:

org.jruby.exceptions.NameError: (NameError) undefined local variable or method `esponse' for #<SchemaRegistry::Client:0x3c5ad39>
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.schema_registry_minus_0_dot_1_dot_0.lib.schema_registry.client.request(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/schema_registry-0.1.0/lib/schema_registry/client.rb:127) ~[?:?]
        at uri_3a_classloader_3a_.META_minus_INF.jruby_dot_home.lib.ruby.stdlib.net.http.start(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:914) ~[?:?]
        at uri_3a_classloader_3a_.META_minus_INF.jruby_dot_home.lib.ruby.stdlib.net.http.start(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:609) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.schema_registry_minus_0_dot_1_dot_0.lib.schema_registry.client.request(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/schema_registry-0.1.0/lib/schema_registry/client.rb:101) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.schema_registry_minus_0_dot_1_dot_0.lib.schema_registry.client.schema(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/schema_registry-0.1.0/lib/schema_registry/client.rb:40) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_codec_minus_avro_schema_registry_minus_1_dot_1_dot_1.lib.logstash.codecs.avro_schema_registry.get_schema(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-avro_schema_registry-1.1.1/lib/logstash/codecs/avro_schema_registry.rb:158) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_codec_minus_avro_schema_registry_minus_1_dot_1_dot_1.lib.logstash.codecs.avro_schema_registry.encode(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-avro_schema_registry-1.1.1/lib/logstash/codecs/avro_schema_registry.rb:246) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_kafka_minus_10_dot_0_dot_0_minus_java.lib.logstash.outputs.kafka.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-kafka-10.0.0-java/lib/logstash/outputs/kafka.rb:219) ~[?:?]
        at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1800) ~[jruby-complete-9.2.8.0.jar:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_integration_minus_kafka_minus_10_dot_0_dot_0_minus_java.lib.logstash.outputs.kafka.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-integration-kafka-10.0.0-java/lib/logstash/outputs/kafka.rb:217) ~[?:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:118) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101) ~[logstash-core.jar:?]
        at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:262) ~[?:?]
[2020-02-11T13:11:41,720][ERROR][org.logstash.execution.WorkerLoop][main] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
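
The failing frame is client.rb:127 in the schema_registry gem: its error-handling code references a name, esponse, that was never defined, which is the classic signature of a typo for response. A minimal, hypothetical Ruby reduction of that failure class (not the gem's actual code):

def check(response)
  # Typo in the error branch: `esponse` instead of `response`. Any non-200
  # reply therefore raises NameError instead of reporting the real HTTP error.
  raise "Unexpected response: #{esponse}" unless response == "200"
  response
end

check("500")  # => NameError: undefined local variable or method `esponse'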
That codec is broken.

See the referenced issue -


There is no reason to serialize JSON to Avro and then insert it into Elasticsearch, since Elasticsearch stores JSON. But if you really want to go that route, I'd suggest using Confluent's Elasticsearch Kafka connector instead, for example with a config like the one below.
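
For illustration, a sink configuration for standalone Kafka Connect might look like this (the connector name, topic, and Elasticsearch URL are placeholders, not from the original post, and the connector plugin must already be installed):

# Hypothetical Elasticsearch sink connector properties
name=logs-json-to-es
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=logs_json
connection.url=http://elasticsearch:9200
type.name=_doc
key.ignore=true
schema.ignore=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false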

And if you are not even using Elasticsearch, I don't think you should really be using Logstash here.


KSQL supports what you're trying to do -
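
As a rough sketch, the JSON-to-Avro re-serialization could look like this in KSQL (the stream and column names are made up; writing AVRO output requires KSQL to be configured with your Schema Registry URL):

-- Declare the existing JSON topic as a stream (columns are illustrative).
CREATE STREAM logs_json_src (message VARCHAR, level VARCHAR)
  WITH (KAFKA_TOPIC='logs_json', VALUE_FORMAT='JSON');

-- Re-serialize to Avro; this registers the schema in Schema Registry.
CREATE STREAM logs_avro_out
  WITH (KAFKA_TOPIC='logs_avro', VALUE_FORMAT='AVRO') AS
  SELECT * FROM logs_json_src;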

After debugging the code further, I found that the internal server error was caused by a line in client.rb that sets the 'Accept' header on the GET request:

/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/schema_registry-0.1.0/lib/schema_registry/client.rb:112
        request['Accept'] = "application/vnd.schemaregistry.v1+json"

By commenting that line out, or by changing it to request['Accept'] = "application/json", the GET request succeeds.
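
You can reproduce the difference outside Logstash with plain Net::HTTP (the registry host, port, and schema id below are placeholders; substitute your own):

require "net/http"
require "uri"

# Placeholder registry endpoint and schema id; replace with your own.
uri = URI("http://host_schema_registry:8081/schemas/ids/1")

req = Net::HTTP::Get.new(uri)
# The header shipped in schema_registry 0.1.0, which our registry rejected:
# req["Accept"] = "application/vnd.schemaregistry.v1+json"
# The workaround that made the GET succeed for us:
req["Accept"] = "application/json"

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts res.code
puts res.body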

How did you install that codec?

@cricket_007 -- sudo /usr/share/logstash/bin/logstash-plugin install logstash-codec-avro_schema_registry

Can you open /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/schema_registry-0.1.0/lib/schema_registry/client.rb and look at line 127? I assume `esponse` should be `response`... The latest version is 1.1.1, not 0.1.0 -

Thanks @cricket_007 -- the version of logstash-codec-avro_schema_registry is 1.1.1, but schema_registry itself is 0.1.0; that is a separate package:

Thanks for the tip; after fixing 'response' the error changed. I'll keep this post updated.

Even there, the latest version is 5.3.2... I think you meant this one.

I need the messages in the Kafka topic to be in Avro format (not in Elasticsearch). Originally I did this conversion with NiFi, but the conversion overhead was too high, so I thought I could use this codec.

Well, it seems that issue has been fixed, since I tagged the developers... Maybe try reinstalling your codec and see what happens. Also, Confluent tests Kafka Connect thoroughly with both Avro and Elasticsearch. But your Logstash flow clearly shows consuming as JSON and trying to output to Kafka as Avro, so I'd suggest reading this.

According to the registry's API spec, that is not an incorrect header, though.

Which one is incorrect? This one? request['Accept'] = "application/json"

Yes, the value is just a string. vnd.schemaregistry.v1+json is in Confluent's documentation.

I see your point, but some of the lines after the content-type differ from the docs: in line