Spring Boot publishes garbage values as the key to a Kafka topic
spring-boot, apache-kafka, spring-kafka, kafka-producer-api

After publishing messages to a Kafka topic, using a Long serializer for the key and a String serializer for the value, I see a garbage value in place of the key when retrieving the messages, like this:
^@^@^@^AÏÃ<9a>ò
I see those garbage values when I look at the key in Kafka Tool (kafkatool.com).
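For reference, reading the displayed characters as Latin-1, those bytes appear to be `00 00 00 01 cf c3 9a f2`: the 8-byte big-endian encoding Kafka's `LongSerializer` produces. A minimal JDK-only sketch (no kafka-clients dependency; the key value is inferred from the displayed bytes and may not match the actual key):

```java
import java.nio.ByteBuffer;

public class LongKeyBytes {
    public static void main(String[] args) {
        // Kafka's LongSerializer encodes a long key as 8 big-endian bytes,
        // the same encoding ByteBuffer.putLong uses.
        long key = 7780670194L; // hypothetical key, decoded from the bytes above
        byte[] raw = ByteBuffer.allocate(Long.BYTES).putLong(key).array();

        StringBuilder hex = new StringBuilder();
        for (byte b : raw) {
            hex.append(String.format("%02x ", b));
        }
        // Prints: 00 00 00 01 cf c3 9a f2 -- rendered as raw text, these
        // bytes show up as control characters and accented letters.
        System.out.println(hex.toString().trim());
    }
}
```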
It is not "garbage": it is a long value displayed as a String, without the appropriate deserialization. I'm not familiar with that tool, or whether it lets you specify a deserializer for the key, but with the command-line tool kafka-console-consumer.sh you can specify which deserializers to use:
$ kafka-console-consumer
This tool helps to read data from Kafka topics and outputs it to standard output.
Option Description
------ -----------
--bootstrap-server <String: server to REQUIRED: The server(s) to connect to.
connect to>
--consumer-property <String: A mechanism to pass user-defined
consumer_prop> properties in the form key=value to
the consumer.
--consumer.config <String: config file> Consumer config properties file. Note
that [consumer-property] takes
precedence over this config.
--enable-systest-events Log lifecycle events of the consumer
in addition to logging consumed
messages. (This is specific for
system tests.)
--formatter <String: class> The name of a class to use for
formatting kafka messages for
display. (default: kafka.tools.
DefaultMessageFormatter)
--from-beginning If the consumer does not already have
an established offset to consume
from, start with the earliest
message present in the log rather
than the latest message.
--group <String: consumer group id> The consumer group id of the consumer.
--help Print usage information.
--isolation-level <String> Set to read_committed in order to
filter out transactional messages
which are not committed. Set to
read_uncommitted to read all
messages. (default: read_uncommitted)
--key-deserializer <String:
deserializer for key>
--max-messages <Integer: num_messages> The maximum number of messages to
consume before exiting. If not set,
consumption is continual.
--offset <String: consume offset> The offset id to consume from (a non-
negative number), or 'earliest'
which means from beginning, or
'latest' which means from end
(default: latest)
--partition <Integer: partition> The partition to consume from.
Consumption starts from the end of
the partition unless '--offset' is
specified.
--property <String: prop> The properties to initialize the
message formatter. Default
properties include:
print.timestamp=true|false
print.key=true|false
print.value=true|false
key.separator=<key.separator>
line.separator=<line.separator>
key.deserializer=<key.deserializer>
value.deserializer=<value.
deserializer>
Users can also pass in customized
properties for their formatter; more
specifically, users can pass in
properties keyed with 'key.
deserializer.' and 'value.
deserializer.' prefixes to configure
their deserializers.
--skip-message-on-error If there is an error when processing a
message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms> If specified, exit if no message is
available for consumption for the
specified interval.
--topic <String: topic> The topic id to consume on.
--value-deserializer <String:
deserializer for values>
--version Display Kafka version.
--whitelist <String: whitelist> Regular expression specifying
whitelist of topics to include for
consumption.
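For example, using the `--property` options listed above, the keys of the topic can be printed decoded as longs (broker address and topic name below are placeholders):

```shell
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning \
  --property print.key=true \
  --property key.separator=" : " \
  --property key.deserializer=org.apache.kafka.common.serialization.LongDeserializer
```

With `print.key=true` and the `LongDeserializer`, each line shows the numeric key, the separator, then the String value, instead of the raw key bytes.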
The producer publishes the message with:

longKeyKafkaTemplate.send(topicName, key, message);
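On the Spring Boot consumer side, the same fix applies: configure a key deserializer that matches the producer's `LongSerializer`. A sketch assuming the Spring Boot auto-configured consumer, using the standard `spring.kafka.*` properties in `application.properties`:

```properties
# Key was written with LongSerializer, so read it back with LongDeserializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.LongDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```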