Spring Integration: Spring Cloud AWS Kinesis stream binder fails to start due to incorrect bean initialization
I am trying to run the simple Kinesis message consumer below. It is the only class in the application. I have been getting this error ever since I updated to the latest snapshot version of the Kinesis binder:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
@EnableBinding(Sink.class)
@EnableAutoConfiguration
public class ProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProducerApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String message) {
        System.out.println("Message has been received: " + message);
    }
}
application.yml:
server.port: 8081

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: my.sink
          content-type: application/json

cloud:
  aws:
    region:
      static: us-east-1
    credentials:
      accessKey: <accessKey>
      secretKey: <secretKey>
I get a bean initialization exception; something seems to go wrong while creating the DynamoDbMetadataStore bean:
2018-07-10 10:53:22.629 INFO 18332 --- [esis-consumer-1] a.i.k.KinesisMessageDrivenChannelAdapter : Got an exception java.lang.IllegalStateException: The component has not been initialized: DynamoDbMetadataStore{table={SpringIntegrationMetadataStore: {AttributeDefinitions: [{AttributeName: KEY,AttributeType: S}],TableName: SpringIntegrationMetadataStore,KeySchema: [{AttributeName: KEY,KeyType: HASH}],TableStatus: ACTIVE,CreationDateTime: Wed Jun 27 10:51:53 IST 2018,ProvisionedThroughput: {NumberOfDecreasesToday: 0,ReadCapacityUnits: 1,WriteCapacityUnits: 1},TableSizeBytes: 0,ItemCount: 0,TableArn: arn:aws:dynamodb:us-east-1:1234567:table/SpringIntegrationMetadataStore,TableId: d0cf588b-e122-406b-ad82-06255dfea6d4,}}, createTableRetries=25, createTableDelay=1, readCapacity=1, writeCapacity=1, timeToLive=null}.
Is it declared as a bean? during [ShardConsumer{shardOffset=KinesisShardOffset{iteratorType=LATEST, sequenceNumber='null', timestamp=null, stream='my.sink', shard='shardId-000000000000', reset=false}, state=NEW}] task invocation.
Process will be retried on the next iteration.
This error started after I updated to the latest snapshot version of the Kinesis binder. Could you check whether there is a problem?
I have just fixed this. The problem was that when the table already exists in DynamoDB, we simply returned from afterPropertiesSet(), leaving initialized as false. The latest BUILD-SNAPSHOT should work now. Could you share a simple project with us somewhere on GitHub? Also, I am not aware of a cloud.aws.kinesis.endpoint configuration property. What is it, and who uses it, how?

@ArtemBilan Thanks for looking into this. I have added the sample project here, please check it and help. That cloud.aws.kinesis.endpoint property has no meaning, thank you. Now I get this error, which is caused by an access problem:

c.a.s.d.AmazonDynamoDBLockClient : Could not acquire lock because of an exception from DDB: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User: arn:aws:iam::123:user/aws-kinesis-developer is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:123:table/SpringIntegrationLockRegistry (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: R5S2G137V3NN625GD52ASOHG4BV4KQNSO5AEMVJF66Q9AAJG)

That is already outside the responsibility of the Spring Cloud Stream Kinesis Binder project. You need to grant the dynamodb:GetItem permission to that user.

It worked after granting sufficient permissions. Thanks a lot.
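The bug described above can be illustrated with a small standalone sketch (hypothetical names; this is not the actual DynamoDbMetadataStore source): an early return on the "table already exists" path skips setting the initialized flag, so a later guard sees the component as uninitialized and fails, exactly like the IllegalStateException in the logs.

```java
// Simplified sketch of the initialization bug pattern (hypothetical class,
// not the real DynamoDbMetadataStore). The buggy path returns early and
// never flips the 'initialized' flag; the fixed path sets it on every
// successful branch.
public class MetadataStoreSketch {

    private boolean initialized;
    private final boolean tableAlreadyExists;

    public MetadataStoreSketch(boolean tableAlreadyExists) {
        this.tableAlreadyExists = tableAlreadyExists;
    }

    // Buggy version: early return leaves 'initialized' as false.
    public void afterPropertiesSetBuggy() {
        if (tableAlreadyExists) {
            return; // bug: skips the flag assignment below
        }
        // ... create the table ...
        this.initialized = true;
    }

    // Fixed version: the flag is set on every successful path.
    public void afterPropertiesSetFixed() {
        if (!tableAlreadyExists) {
            // ... create the table ...
        }
        this.initialized = true; // always mark the component as ready
    }

    public boolean isInitialized() {
        return initialized;
    }

    public static void main(String[] args) {
        MetadataStoreSketch buggy = new MetadataStoreSketch(true);
        buggy.afterPropertiesSetBuggy();
        System.out.println("buggy initialized=" + buggy.isInitialized());

        MetadataStoreSketch fixed = new MetadataStoreSketch(true);
        fixed.afterPropertiesSetFixed();
        System.out.println("fixed initialized=" + fixed.isInitialized());
    }
}
```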
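For the AccessDeniedException above, a minimal IAM policy sketch that grants the binder's user access to the two DynamoDB tables it uses (table and account ARNs are taken from the logs; the exact action list is an assumption, so widen or narrow it for your own setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:DescribeTable",
        "dynamodb:CreateTable"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123:table/SpringIntegrationLockRegistry",
        "arn:aws:dynamodb:us-east-1:123:table/SpringIntegrationMetadataStore"
      ]
    }
  ]
}
```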