Amazon Web Services: Why does my Spring Cloud Config server on AWS intermittently fail to respond?
I'm running a Spring Cloud Config server on AWS. It is just a Docker container running a Spring Boot application, and it reads its properties from a git repo. Our client applications read configuration from the server at startup and then intermittently at runtime. About a third of the time, a client application times out while pulling configuration at startup, which crashes the application. At runtime the requests seem to succeed about four times out of five, and when a request fails the clients simply keep using their existing configuration.

I'm using a single EC2 instance behind an ALB that handles SSL termination. I originally used a t3.micro but upgraded to an m5.large, on the guess that the t3 class might not support continuous availability.

The ALB requires two subnets, so I created a second subnet that initially had nothing in it. I'm not sure whether the ALB at some point tries to route to that second subnet, which could cause the failures. The target group's health check is returning correctly, but I don't know ALBs well enough to rule out round-robining to the empty subnet. I tried creating a second EC2 instance running the config server in the second subnet, in parallel with the first. However, I can't SSH into the second instance, even though it uses the same security group and configuration as the first. I don't know why that fails, but I suspect there is some other problem with my setup.

All of the infrastructure is deployed with Terraform and is included below.

resources.tf
provider "aws" {
  region  = "us-east-2"
  version = ">= 2.38.0"
}

data "aws_ami" "amzn_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}

resource "aws_vpc" "config-vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_security_group" "config_sg" {
  name        = "config-sg"
  description = "http, https, and ssh"
  vpc_id      = aws_vpc.config-vpc.id

  ingress {
    from_port   = 9000
    to_port     = 9000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_subnet" "subnet-alpha" {
  cidr_block        = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 1)
  vpc_id            = aws_vpc.config-vpc.id
  availability_zone = "us-east-2a"
}

resource "aws_subnet" "subnet-beta" {
  cidr_block        = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 2)
  vpc_id            = aws_vpc.config-vpc.id
  availability_zone = "us-east-2b"
}

resource "aws_internet_gateway" "config-vpc-ig" {
  vpc_id = aws_vpc.config-vpc.id
}

resource "aws_route_table" "config-vpc-rt" {
  vpc_id = aws_vpc.config-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.config-vpc-ig.id
  }
}

resource "aws_route_table_association" "subnet-association-alpha" {
  subnet_id      = aws_subnet.subnet-alpha.id
  route_table_id = aws_route_table.config-vpc-rt.id
}

resource "aws_route_table_association" "subnet-association-beta" {
  subnet_id      = aws_subnet.subnet-beta.id
  route_table_id = aws_route_table.config-vpc-rt.id
}

resource "aws_alb" "alb" {
  name            = "config-alb"
  subnets         = [aws_subnet.subnet-alpha.id, aws_subnet.subnet-beta.id]
  security_groups = [aws_security_group.config_sg.id]
}

resource "aws_alb_target_group" "alb_target_group" {
  name     = "config-tg"
  port     = 9000
  protocol = "HTTP"
  vpc_id   = aws_vpc.config-vpc.id

  health_check {
    enabled = true
    path    = "/actuator/health"
    port    = 9000
    protocol = "HTTP"
  }
}
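If the intermittent timeouts line up with targets cycling between healthy and unhealthy, it may help to make the health check more tolerant so that a single slow response does not drain a target. This is only a sketch, not something verified against this setup, and every numeric value is illustrative; it would replace the `health_check` block in `config-tg` above:

```hcl
  health_check {
    enabled             = true
    path                = "/actuator/health"
    port                = 9000
    protocol            = "HTTP"
    interval            = 15  # seconds between checks (illustrative)
    timeout             = 10  # must be less than interval; gives the JVM time to answer
    healthy_threshold   = 2   # consecutive successes before a target is healthy
    unhealthy_threshold = 5   # consecutive failures before a target is drained
    matcher             = "200"
  }
```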
resource "aws_instance" "config_server_alpha" {
  ami                         = data.aws_ami.amzn_linux.id
  instance_type               = "m5.large"
  vpc_security_group_ids      = [aws_security_group.config_sg.id]
  key_name                    = "config-ssh"
  subnet_id                   = aws_subnet.subnet-alpha.id
  associate_public_ip_address = true
}

resource "aws_instance" "config_server_beta" {
  ami                         = data.aws_ami.amzn_linux.id
  instance_type               = "m5.large"
  vpc_security_group_ids      = [aws_security_group.config_sg.id]
  key_name                    = "config-ssh"
  subnet_id                   = aws_subnet.subnet-beta.id
  associate_public_ip_address = true
}

resource "aws_alb_target_group_attachment" "config-target-alpha" {
  target_group_arn = aws_alb_target_group.alb_target_group.arn
  target_id        = aws_instance.config_server_alpha.id
  port             = 9000
}

resource "aws_alb_target_group_attachment" "config-target-beta" {
  target_group_arn = aws_alb_target_group.alb_target_group.arn
  target_id        = aws_instance.config_server_beta.id
  port             = 9000
}

resource "aws_alb_listener" "alb_listener_80" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 80

  default_action {
    type = "redirect"

    redirect {
      port        = 443
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_alb_listener" "alb_listener_8080" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 8080

  default_action {
    type = "redirect"

    redirect {
      port        = 443
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_alb_listener" "alb_listener_https" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "arn:..."

  default_action {
    target_group_arn = aws_alb_target_group.alb_target_group.arn
    type             = "forward"
  }
}
Config server
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigserverApplication.class, args);
    }
}
application.yml
spring:
  profiles:
    active: local
---
spring:
  profiles: local, default, cloud
  cloud:
    config:
      server:
        git:
          uri: ...
          searchPaths: '{application}/{profile}'
          username: ...
          password: ...
  security:
    user:
      name: admin
      password: ...
server:
  port: 9000
management:
  endpoint:
    health:
      show-details: always
  info:
    git:
      mode: FULL
bootstrap.yml
spring:
  application:
    name: config-server
encrypt:
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
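Independent of the root cause, the client-side startup crashes can be mitigated with Spring Cloud Config's fail-fast/retry support: with `spring-retry` and `spring-boot-starter-aop` on the client's classpath, the initial config fetch is retried with backoff before the application gives up. A sketch for the client's bootstrap.yml (the intervals and attempt count are illustrative, not values from this setup):

```yaml
spring:
  cloud:
    config:
      fail-fast: true            # abort startup only after retries are exhausted
      retry:
        initial-interval: 1000   # ms before the first retry
        multiplier: 1.5          # backoff factor applied per attempt
        max-attempts: 6
```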
Your example is missing some key bits and also has a few typos in it, so it is probably not exactly the code you are using, and the problem may be hiding there. It always helps when a question contains a complete example that reproduces the problem; it lets answerers see where to look.

There isn't much in the application files, but I've added them as well. The app is just bundled into a Docker image and then run on the EC2 instance with `docker run…`. The application is boilerplate, which is why I don't think it's the problem. I did fix the second instance by adding a route table association for its subnet, but I don't think the second instance should even be necessary as long as the health check returns 200 for the first one.