
Configuring a Kafka Connect sink for Elasticsearch 7.1 using docker-compose


I am setting up a producer that sends messages as (key, value) [the key is a generated unique string, the value is a JSON payload] to a Kafka topic (v1.0.0), which is pulled in by Kafka Connect (v5.3.1) and sent on to an Elasticsearch container (v7.1).

Kafka Connect is configured to look up an index in ES named after the topic (the index is already mapped on ES with a schema) and to use the Kafka key as the unique id (_id) of each document inserted into the index. Once the producer puts something on the Kafka topic, it must be pulled in through Connect and sent to ES.

Kafka Connect (5.3.1) requires the value sent to it from the Kafka topic to be in the following format, so that it can map it onto the Elasticsearch index:

{
"schema": {es_schema },
"payload":{ es_payload }
}
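For concreteness, here is a sketch of what the enveloped message would look like for the payload shown at the end of this question (the Connect field types here are my own guess at the equivalents of the index mapping below: int64 for long, int32 for integer, string for keyword and the formatted date strings):

{
  "schema": {
    "type": "struct",
    "fields": [
      { "field": "ppid",   "type": "int64",  "optional": false },
      { "field": "field1", "type": "int64",  "optional": true },
      { "field": "field2", "type": "int64",  "optional": true },
      { "field": "time1",  "type": "string", "optional": true },
      { "field": "time2",  "type": "string", "optional": true },
      { "field": "status", "type": "string", "optional": true },
      { "field": "field3", "type": "int32",  "optional": true },
      { "field": "field4", "type": "int32",  "optional": true }
    ],
    "optional": false
  },
  "payload": { "ppid": 1, "field1": 2, "field2": 1, "time1": "2019-09-25 07:36:48", "time2": "2019-09-25 07:36:48", "status": "SUCCESS", "field3": 30, "field4": 16 }
}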
My producer can only send:

{
es_payload
}
I am simulating this setup locally with Docker / docker-compose containers.

I have the producer successfully sending to Kafka and the messages being picked up by Kafka Connect, but it fails when sending to Elastic, saying that the schema cannot be found on the payload.
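To see what is actually on the topic (key and value), the messages can be inspected with the console consumer; a sketch, assuming the bitnami Kafka container from the compose file below with the Kafka scripts on its PATH:

docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 \
  --topic test-kafka \
  --property print.key=true \
  --from-beginning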

My Kafka Connect sink configuration:

curl -X POST \
  http://localhost:8083/connectors/ \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "adn-kafka",
    "key.ignore": "false",
    "schema.ignore": "false",
    "connection.url": "http://elasticsearch:9200",
    "type.name": "",
    "name": "elasticsearch-sink",
    "value.converter.schemas.enable": "false",
    "key.converter.schemas.enable":"false"
  }
}'
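After posting the config, the state of the connector and its task (including the cause of any failure) can be read back from the Connect REST API:

curl -s http://localhost:8083/connectors/elasticsearch-sink/status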
The error I get:

Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
     at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:338)
     at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:510)
     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
     ... 13 more
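One thing worth noting: as far as I know, a per-connector value.converter.schemas.enable override only takes effect if the connector config also overrides value.converter itself; otherwise the worker-level converter settings win. A sketch of the relevant lines in the sink config:

"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"key.converter": "org.apache.kafka.connect.storage.StringConverter"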

If I set schema.ignore=true, it does not look up the index with the schema. I don't think that is the right approach, since my index is already mapped and I don't want Kafka Connect to create a new index.
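If the concern is Connect implicitly creating a new index, one optional safeguard (an assumption on my part, not something the connector requires) is to disable index auto-creation in Elasticsearch, so writes can only go to the pre-mapped index:

curl -X PUT http://localhost:9400/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "action.auto_create_index": "false" } }'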

My docker-compose file:

version: '3'
services:
  zookeeper:
    container_name : zookeeper
    image: zookeeper
    ports:
     - 2181:2181
     - 2888:2888
     - 3888:3888

  kafka:
    container_name : kafka
    image: bitnami/kafka:1.0.0-r5
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: "42"
      KAFKA_ADVERTISED_HOST_NAME: "kafka"
      ALLOW_PLAINTEXT_LISTENER: "yes" 

  elasticsearch:
    container_name : elasticsearch
    image: elasticsearch:7.1.1
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    environment:
      - cluster.name=docker-cluster
      - node.name=node1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      - discovery.type=single-node

    ports:
      - "9400:9200"
      - "9500:9300"
    deploy:
      resources:
        limits:
          memory: 6G
        reservations:
          memory: 6G
  kibana:
    container_name : kibana
    image: docker.elastic.co/kibana/kibana:7.1.1
    # environment:
      # - SERVER_NAME=Local kibana
      # - SERVER_HOST=0.0.0.0
      # - ELASTICSEARCH_URL=elasticsearch:9400
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  kafka-connect:
    container_name : kafka-connect
    image: confluentinc/cp-kafka-connect:5.3.1
    ports:
      - 8083:8083
    depends_on:
      - zookeeper
      - kafka
    volumes:
      - $PWD/connect-plugins:/connect-plugins
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: docker-kafka-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-kafka-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-kafka-connect-status
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER-SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER-SCHEMAS_ENABLE: "false"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: '/usr/share/java'
      # Interceptor config
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.3.1.jar
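Once the stack is up, it is worth confirming that the worker actually picked up the Elasticsearch sink plugin from CONNECT_PLUGIN_PATH:

curl -s http://localhost:8083/connector-plugins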

Kafka topic name: test-kafka

ES index: test-kafka

ES mapping:

{
    "mappings":{
        "properties" :{
            "ppid":{
                "type":"long"
            },
            "field1":{
                "type":"long"
            },
            "field2":{
                "type":"long"
            },
            "time1":{
                "type":"date",
                "format":"yyyy-MM-dd HH:mm:ss"
            },
            "time2":{
                "type":"date",
                "format":"yyyy-MM-dd HH:mm:ss"
            },
            "status":{
                "type":"keyword"
            },
            "field3":{
                "type":"integer"
            },
            "field4":{
                "type":"integer"
            }
        }
    }
}
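Since the index has to exist with this mapping before the connector writes to it, it can be created from the host like so (the compose file maps ES onto host port 9400; assumes the mapping above is saved as mapping.json):

curl -X PUT http://localhost:9400/test-kafka \
  -H 'Content-Type: application/json' \
  -d @mapping.json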
The payload being sent to the Kafka topic:

{ "ppid" : 1, "field1":2 , "field2":1,"time1":"2019-09-25 07:36:48", "time2":"2019-09-25 07:36:48", "status":"SUCCESS", "field3":30,"field4":16}

@PraveenSureshkumar did you find a solution to this problem? What was it?

You have a typo in your compose file for schemas enable: the dashes should be replaced with underscores. The property is only valid on the JsonConverter, so it is not really needed for the key converter (a StringConverter here). I also had to set "schema.ignore": "true", because the documents could not be mapped otherwise.
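For reference, the corrected worker settings in the compose file would be (underscores, not dashes, so the image actually maps them onto converter properties):

CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"    # optional: the StringConverter ignores it
CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"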