Unable to send records from Kafka MSK to Elasticsearch using Kafka Connect
This is a follow-up to my previous question. That question was getting long, so I wanted to create a new one (here is my old question). My connector throws some errors at runtime, which is why my records are not making it into Elasticsearch. Here is my properties file, quickstart-elasticsearch.properties:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=fspauditlambda
key.ignore=true
connection.url=https://drtrrterterterterterst-1.es.amazonaws.com
type.name=kafka-connect
And here are the details of my connect-standalone.properties:
bootstrap.servers=b-3.rtyrtyty.amazonaws.com:9092,b-6.rtyrtyty.amazonaws.com:9092,b-1.rtyrtyty.us-east-1.amazonaws.com:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/confluent/share/java
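For reference, in standalone mode both property files are passed to the connect-standalone script; a minimal sketch, with the script path assumed from the plugin.path above (adjust to your installation):

# assumed installation path and file locations
/usr/local/confluent/bin/connect-standalone \
    connect-standalone.properties \
    quickstart-elasticsearch.properties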
Then I start my connector, and when I do, I get the following error:
org.apache.kafka.connect.errors.ConnectException: Cannot create mapping
{"kafka-connect":{"properties":{"ID":{"type":"text","fields":{"keyword": -- {"root_cause":[{"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."}],"type":"illegal_argument_exception","reason":"Types cannot be provided in put mapping requests,
unless the include_type_name parameter is set to true."}
When I change the properties to the following:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
it does create the index in Elasticsearch, but no data goes in, and I get the error below:
[2020-01-03 12:27:12,906] ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Cannot infer mapping without schema.
at io.confluent.connect.elasticsearch.Mapping.inferMapping(Mapping.java:84)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.createMapping(JestElasticsearchClient.java:292)
at io.confluent.connect.elasticsearch.Mapping.createMapping(Mapping.java:66)
at io.confluent.connect.elasticsearch.ElasticsearchWriter.write(ElasticsearchWriter.java:260)
I even tried this configuration as well:
topic.schema.ignore=true
but I got the same error.
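For context on the "Cannot infer mapping without schema" error: with value.converter.schemas.enable=true, the JsonConverter expects every record value on the topic to carry both a schema and a payload envelope. A trimmed sketch, with invented field values for illustration:

{
  "schema": {
    "type": "struct",
    "name": "FSP_AUDIT",
    "fields": [
      { "type": "string", "optional": false, "field": "ID" },
      { "type": "string", "optional": true, "field": "ACTION_TYPE" }
    ]
  },
  "payload": {
    "ID": "abc-123",
    "ACTION_TYPE": "CREATE"
  }
}

With schemas.enable=false the converter only passes the raw JSON through, so the sink has no Connect schema to infer a mapping from, which is what the stack trace above is complaining about.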
UPDATE: here is my MySQL table definition:
CREATE TABLE FSP_AUDIT (
ID NVARCHAR(255) NOT NULL,
VERSION numeric(10,0) ,
ACTION_TYPE NVARCHAR(255) ,
EVENT_TYPE NVARCHAR(255) ,
CLIENT_ID NVARCHAR(25) ,
DETAILS TEXT(40000) ,
OBJECT_TYPE NVARCHAR(255) ,
UTC_DATE_TIME TIMESTAMP(6) NOT NULL,
POINT_IN_TIME_PRECISION NVARCHAR(255) ,
TIME_ZONE NVARCHAR(255) ,
TIMELINE_PRECISION NVARCHAR(255) ,
GROUP_ID NVARCHAR(255) ,
OBJECT_DISPLAY_NAME NVARCHAR(200) ,
OBJECT_ID NVARCHAR(255) ,
USR_DISPLAY_NAME NVARCHAR(1500) ,
USR_ID NVARCHAR(255) ,
PARENT_EVENT_ID NVARCHAR(255) ,
NOTES NVARCHAR(1000) ,
SUMMARY NVARCHAR(4000) ,
ADTEVT_TO_UTC_DT TIMESTAMP(6) ,
ADTEVT_TO_DATE_PITP NVARCHAR(255) ,
ADTEVT_TO_DATE_TZ NVARCHAR(255) ,
ADTEVT_TO_DATE_TP NVARCHAR(255) ,
PRIMARY KEY(ID)
);
Do I need to pre-create the schema in Elasticsearch? Is the following correct? Please suggest changes.
{
"schema":{
"type":"struct",
"fields":[
{
"type":"string",
"optional":false,
"field":"ID"
},
{
"type":"integer",
"optional":true,
"field":"VERSION"
},
{
"type":"string",
"optional":true,
"field":"ACTION_TYPE"
},
{
"type":"string",
"optional":true,
"field":"EVENT_TYPE"
},
{
"type":"string",
"optional":true,
"field":"CLIENT_ID"
},
{
"type":"string",
"optional":true,
"field":"DETAILS"
},
{
"type":"string",
"optional":true,
"field":"OBJECT_TYPE"
},
{
"type":"int64",
"optional":false,
"name":"org.apache.kafka.connect.data.Timestamp",
"version":1,
"field":"UTC_DATE_TIME"
},
{
"type":"string",
"optional":true,
"field":"POINT_IN_TIME_PRECISION"
},
{
"type":"string",
"optional":true,
"field":"TIME_ZONE"
},
{
"type":"string",
"optional":true,
"field":"TIMELINE_PRECISION"
},
{
"type":"string",
"optional":true,
"field":"GROUP_ID"
},
{
"type":"string",
"optional":true,
"field":"OBJECT_DISPLAY_NAME"
},
{
"type":"string",
"optional":true,
"field":"OBJECT_ID"
},
{
"type":"string",
"optional":true,
"field":"USR_DISPLAY_NAME"
},
{
"type":"string",
"optional":true,
"field":"USR_ID"
},
{
"type":"string",
"optional":true,
"field":"PARENT_EVENT_ID"
},
{
"type":"string",
"optional":true,
"field":"NOTES"
},
{
"type":"string",
"optional":true,
"field":"SUMMARY"
},
{
"type":"int64",
"optional":true,
"name":"org.apache.kafka.connect.data.Timestamp",
"version":1,
"field":"ADTEVT_TO_UTC_DT"
},
{
"type":"string",
"optional":true,
"field":"ADTEVT_TO_DATE_PITP"
},
{
"type":"string",
"optional":true,
"field":"ADTEVT_TO_DATE_TZ"
},
{
"type":"string",
"optional":true,
"field":"ADTEVT_TO_DATE_TP"
}
],
"optional":false,
"name":"FSP_AUDIT"
}
}
The second error says you need a schema, so your first version of the properties file was fine. The first error is due to a behavior change in Elasticsearch 6+; you need to define the index mapping yourself.
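As a rough sketch of what "define the index mapping yourself" could look like (index name assumed to match the topic, only a few of the table's fields shown, typeless syntax as required by newer Elasticsearch versions; sent via the Kibana console or an equivalent curl request):

PUT /fspauditlambda
{
  "mappings": {
    "properties": {
      "ID":            { "type": "keyword" },
      "VERSION":       { "type": "long" },
      "ACTION_TYPE":   { "type": "keyword" },
      "UTC_DATE_TIME": { "type": "date" }
    }
  }
}

With the mapping created up front and schema.ignore=true in the sink properties, the connector should index records without trying to issue its own put-mapping request.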
Have you tried the schemas.enable=false mentioned in the error? @sun007 Yes, but I got an error with that as well:
"Types cannot be provided in put mapping requests." Did you look into that? @cricket_007 Yes, I went through it... As you suggested, we should pre-create the schema in Elasticsearch, but then why doesn't the connector do that here? It is not a connector problem; Elasticsearch is rejecting the request... Connect still uses the 2.x Elasticsearch client. I think the JDBC connector is not creating the correct schema; I tried printing the key schema and some of it looks wrong, e.g. "field":"VERSION".
I have edited my question with the schema generated by JDBC. If I have to pre-create the schema in Elasticsearch, can I use this same schema, or do I have to change something here? I would need to see your MySQL table definition to know whether it is correct. No, Elasticsearch has its own property types; you can only use the Connect schema there as a reference. OK. Numeric types are mapped to bytes in Connect; as far as I can tell that is not wrong, and I am not sure it can be "fixed" directly, at least not without transforming the message in the Elasticsearch sink config, somehow making the Elasticsearch mapping parse the bytes as a number, or making this change on the source side: "numeric.mapping":"best_fit"
That did not do anything either? What if I switch to Elasticsearch version 6? Anything above version 5 will return the same message.
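For reference, numeric.mapping=best_fit belongs on the Confluent JDBC source connector, not on the Elasticsearch sink. A hedged sketch of a source config for the table above (connection URL, credentials, names, and mode are assumptions for illustration):

# JDBC source connector sketch; adjust connection details and mode
name=mysql-fsp-audit-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=user
connection.password=password
table.whitelist=FSP_AUDIT
mode=timestamp
timestamp.column.name=UTC_DATE_TIME
topic.prefix=fsp
# map NUMERIC(10,0) columns such as VERSION to the closest integer type
# instead of the default Decimal (raw bytes) representation
numeric.mapping=best_fit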