Kafka Elasticsearch connector "confluentinc-kafka-connect-elasticsearch-5.5.0" doesn't work on on-prem Kubernetes — java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet

Tags: kubernetes, apache-kafka, apache-kafka-connect, strimzi

Whenever the connector task starts, it fails with:
java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet
	at io.searchbox.client.AbstractJestClient.<init>(AbstractJestClient.java:38)
	at io.searchbox.client.http.JestHttpClient.<init>(JestHttpClient.java:43)
	at io.searchbox.client.JestClientFactory.getObject(JestClientFactory.java:51)
	at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:149)
	at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:141)
	at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:122)
	at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:51)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:305)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
I have read in several posts that jar files/dependencies are missing from the Elasticsearch connector, and I added them as you can see above, but no luck.

Here is my connector configuration:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: "elastic-files-connector"
  labels:
    strimzi.io/cluster: mssql-minio-connect-cluster
spec:
  class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  config:
    connection.url: "https://escluster-es-http.dev-kik.io:9200"
    connection.username: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.username}"
    connection.password: "${file:/opt/kafka/external-configuration/elasticcreds/connector.properties:connection.password}"
    flush.timeout.ms: 10000
    max.buffered.events: 20000
    batch.size: 2000
    topics: filesql1.dbo.Files
    tasks.max: '1'
    type.name: "_doc"
    max.request.size: "536870912"
    key.converter: io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    value.converter: io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url: http://schema-registry-cp-schema-registry:8081
    internal.key.converter: org.apache.kafka.connect.json.JsonConverter
    internal.value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    schema.compatibility: NONE
    errors.tolerance: all
    errors.deadletterqueue.topic.name: "dlq_filesql1.dbo.Files"
    errors.deadletterqueue.context.headers.enable: "true"
    errors.log.enable: "true"
    behavior.on.null.values: "ignore"
    errors.retry.delay.max.ms: 60000
    errors.retry.timeout: 300000
    behavior.on.malformed.documents: warn
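The ${file:...} placeholders in this config only resolve if the Connect cluster enables the FileConfigProvider and mounts the credentials secret as external configuration. A minimal sketch of the matching KafkaConnect excerpt follows; the resource and secret names here are assumptions inferred from the connector config above, not confirmed by the question:

```yaml
# Hypothetical KafkaConnect excerpt; names assumed from the connector config.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnect
metadata:
  name: mssql-minio-connect-cluster
spec:
  config:
    # Enables ${file:...} placeholder resolution in connector configs
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
      # Mounted at /opt/kafka/external-configuration/elasticcreds
      - name: elasticcreds
        secret:
          secretName: elasticcreds
```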
I changed the username/password to plain text; no luck.
I tried connecting to Elasticsearch over both http and https; no luck.
Here is my Elasticsearch service info:
devadmin@vdi-mk2-ubn:~/kafka$ kubectl get svc -n elastic-system
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
elastic-webhook-server   ClusterIP      10.104.95.105   <none>           443/TCP          21h
escluster-es-default     ClusterIP      None            <none>           <none>           8h
escluster-es-http        LoadBalancer   10.108.69.136   192.168.215.35   9200:31214/TCP   8h
escluster-es-transport   ClusterIP      None            <none>           9300/TCP         8h
kibana-kb-http           LoadBalancer   10.102.81.206   192.168.215.34   5601:31315/TCP   20h
devadmin@vdi-mk2-ubn:~/kafka$
No matter what I do, the exception never changes, and I don't know what else to try; my brain is burning and I'm about to go crazy.

Am I missing something, or can you tell me how you got this connector running on on-prem Kubernetes?
Thanks and regards.

Answer:

I am using kafka_2.12-2.5.0 and ran into the same problem. I noticed that the Guava jar Kafka depended on up to 2.4.0 is missing from $KAFKA_HOME/libs. As a workaround, I manually copied the jar (guava-20.0.jar) from a previous Kafka distribution, and everything works.

Comment: How are you installing the connector in the Kafka Connect worker?

Reply: I deploy the connector with the Strimzi Kafka Operator. I first got it working standalone, and then tried to get it working with Strimzi.

Answer:

Adding the missing guava-20.0.jar to /opt/kafka/libs/ in the worker node's container fixed the problem. My environment: Strimzi 0.18, Kafka 2.5.0. Just copy guava-20.0.jar into /opt/kafka/libs/. I believe Guava was a dependency of Kafka until 2.4.0 but no longer is, so you need to include it in your Kafka Connect image, since it is no longer in the base image. The bigger question for me is how this ever worked, because I thought Kafka Connect's classpath isolation was supposed to prevent you from using Kafka's Guava. You can download it from here:
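One way to apply this fix with Strimzi is to bake the jar into a custom Connect image instead of copying it into a running pod. This is only a sketch under assumptions: the registry/image names are placeholders, and the base image name follows the strimzi/kafka:<operator-version>-kafka-<kafka-version> convention for Strimzi 0.18 / Kafka 2.5.0.

```shell
# Sketch: layer the missing Guava jar into a custom Kafka Connect image.
# Step 1 (run where you build images): fetch guava-20.0.jar from Maven Central:
#   curl -fLO https://repo1.maven.org/maven2/com/google/guava/guava/20.0/guava-20.0.jar
# Step 2: write a Dockerfile that copies the jar into /opt/kafka/libs/:
cat > Dockerfile <<'EOF'
FROM strimzi/kafka:0.18.0-kafka-2.5.0
COPY ./guava-20.0.jar /opt/kafka/libs/
EOF
# Step 3: build, push, and point the KafkaConnect resource's spec.image at it:
#   docker build -t <your-registry>/kafka-connect-es:guava .
#   docker push <your-registry>/kafka-connect-es:guava
```

You can then confirm the jar actually made it into the worker with `kubectl exec <connect-pod> -- ls /opt/kafka/libs | grep guava`.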
For reference, Elasticsearch was reachable from inside the Connect worker pod over both the service ClusterIP and the hostname:
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://10.108.69.136:9200"
{
  "name" : "escluster-es-default-0",
  "cluster_name" : "escluster",
  "cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$ curl -u "elastic:5NM0Pp25sFzNu578873BWFnN" -k "https://escluster-es-http.dev-kik.io:9200"
{
  "name" : "escluster-es-default-0",
  "cluster_name" : "escluster",
  "cluster_uuid" : "TP5f4MGcSn6Dt9hZ144tEw",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[kafka@mssql-minio-connect-cluster-connect-d9859784f-ffj8r plugins]$