Spring Cloud Stream Kafka transaction configuration
Tags: transactions, spring-kafka, spring-cloud-stream, spring-cloud-stream-binder-kafka

I am following the Spring Cloud Stream Kafka template, but I am stuck trying to make the producer method transactional. I have not used Kafka before. Without the transactional configuration everything works fine, but as soon as the transactional configuration is added, the application times out at startup:
2020-11-21 15:07:55.349 ERROR 20432 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId
Below is my Spring Cloud Stream setup.
pom.xml
<properties>
    <java.version>11</java.version>
    <spring-boot.version>2.3.3.RELEASE</spring-boot.version>
    <spring-cloud.version>Hoxton.SR8</spring-cloud.version>
    <kafka-avro-serializer.version>5.2.1</kafka-avro-serializer.version>
    <avro.version>1.8.2</avro.version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring-boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
application.yml

spring:
  cloud:
    stream:
      default:
        producer:
          useNativeEncoding: true
        consumer:
          useNativeEncoding: true
      bindings:
        input:
          destination: employee-details
          content-type: application/*+avro
          group: group-1
          concurrency: 3
        output:
          destination: employee-details
          content-type: application/*+avro
      kafka:
        binder:
          producer-properties:
            key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: http://localhost:8081
            acks: all
            max.block.ms: 60000
          consumer-properties:
            key.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
            specific.avro.reader: true
          transaction:
            transactionIdPrefix: tx-
            producer:
              enable:
                idempotence: true
              # requiredAcks: all
          brokers:
            - localhost:9094
I am running Kafka in minikube, and below is my topic's configuration:
[2020-11-21 06:18:21,655] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
Topic: employee-details PartitionCount: 1 ReplicationFactor: 1 Configs:
Topic: employee-details Partition: 0 Leader: 0 Replicas: 0 Isr: 0
I also define a transaction manager bean:

@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder("kafka",
            MessageChannel.class)).getTransactionalProducerFactory();
    return new KafkaTransactionManager<>(pf);
}

Logs from the Kafka controller:
TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-11-24 06:56:21,379] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2020-11-24 06:56:21,379] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
Check the server logs. The transactional producer times out if the transaction state log has fewer replicas than required. By default it needs 3 replicas, with at least 2 of them in sync. See the broker properties transaction.state.log.replication.factor and transaction.state.log.min.isr.
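On a single-broker cluster those defaults can never be satisfied, so InitProducerId hangs until max.block.ms expires. A minimal sketch of the broker-side override in server.properties, assuming a one-broker development setup like the minikube instance above (do not relax these in production):

```properties
# The transaction state log (__transaction_state topic) defaults to
# replication.factor=3 and min.isr=2, which one broker cannot satisfy.
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```

After changing these, restart the broker; the settings only apply when the internal __transaction_state topic is created.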
Thanks for the reply, but I already mentioned ReplicationFactor: 1 and Isr: 0 in the question above. Isn't that correct?

That information is about your topic; those values in the log only say that your topic sits on broker zero and has one in-sync replica (namely broker 0). The properties I mentioned are broker properties and need to be configured on the broker. They apply to the transaction log topic, which by default requires 3 replicas and 2 in-sync replicas. Try changing those broker properties.

The transactional stream works now.
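Since the broker here runs inside minikube, the override usually has to be applied through the container spec rather than a local server.properties file. If the broker uses the Confluent cp-kafka image, broker properties map to KAFKA_-prefixed environment variables; this is a sketch assuming that image's convention (the env block goes on the Kafka container in the pod/statefulset spec):

```yaml
env:
  - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
    value: "1"
  - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
    value: "1"
```

Other images (e.g. Bitnami) use their own variable names, so check the image's documentation for the exact mapping.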