Bad GKE latency between REST pods


We are seeing very strange behavior with unacceptably high latency for communication within our Kubernetes cluster (GKE). For endpoints that involve Memorystore get/store operations and CloudSQL queries, the latency jumps between 600ms and 1s. The same setup running locally in our dev environment (although without k8s) does not show this latency.
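To pin down the Memorystore leg on its own, a probe along these lines could be run from inside a pod (a minimal sketch assuming the ioredis client; host, port, and key are placeholders, not values from this setup):

  // Hypothetical probe: time a single GET against Memorystore.
  // Host, port, and key are placeholders, not values from this setup.
  import Redis from "ioredis";

  const redis = new Redis({ host: "10.0.0.3", port: 6379 });

  async function timeGet(key: string): Promise<void> {
    const start = process.hrtime.bigint();
    await redis.get(key);
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`GET ${key} took ${ms.toFixed(2)}ms`);
  }

  timeGet("probe-key").finally(() => redis.disconnect());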

About our architecture: We are running a k8s cluster on GKE, created with Terraform and service/deployment (yaml) files (I have added these below). We are running two Node APIs (koa.js 2.5). One API is exposed to the public and is connected to the API pod through a NodePort.

The other API pod is private and reachable through an internal load balancer from Google. This API connects to all the resources we need (CloudSQL, Cloud Storage).

Both APIs also connect to Memorystore (Redis).

The communication between these pods is secured with self-signed server/client certificates (this is not the problem; we temporarily removed them for testing).

We checked the logs and found that a request from the public API to the private API takes only ~200ms to arrive. However, the private API's response to the public API took ~600ms (measured from the point where the private API's entire business logic had finished until we received that response at the public API).
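Timing like this can be taken at the edge of the Koa middleware stack, so app time and network time can be told apart in the logs; a minimal sketch (not the actual code; the port matches the private API below):

  import Koa from "koa";

  const app = new Koa();

  // Log how long each request spends inside the app itself.
  app.use(async (ctx, next) => {
    const start = Date.now();
    await next();
    const ms = Date.now() - start;
    ctx.set("X-Response-Time", `${ms}ms`);
    console.log(`${ctx.method} ${ctx.url} took ${ms}ms`);
  });

  app.listen(3333);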

We are really running out of things to try... We have connected all the Google Cloud resources to our local environment, and it does not show this bad latency.

In the fully local setup, the latency is only 1/5 to 1/10 of what we see in the cloud setup. We also tried pinging the private pod from the public pod, which was in the 0.100ms range.
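The bare HTTP round trip between the pods can also be timed directly, independent of either API's business logic; a hypothetical probe run from the public pod (the service name and port follow the manifests below, the /health path is a placeholder):

  // Time one raw HTTP round trip to the private "crud" service.
  import http from "node:http";

  const start = process.hrtime.bigint();
  http.get("http://crud:3333/health", (res) => {
    res.resume(); // drain the body so the "end" event fires
    res.on("end", () => {
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      console.log(`status ${res.statusCode}, round trip ${ms.toFixed(1)}ms`);
    });
  });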

Do you have any idea where we could investigate further? This is the Terraform script for the Google Cloud setup:

  // Configure the Google Cloud provider
  provider "google" {
    project = "${var.project}"
    region  = "${var.region}"
  }
  data "google_compute_zones" "available" {}
  # Ensuring relevant service APIs are enabled in your project. Alternatively visit and enable the needed services
  resource "google_project_service" "serviceapi" {
    service            = "serviceusage.googleapis.com"
    disable_on_destroy = false
  }
  resource "google_project_service" "sqlapi" {
    service            = "sqladmin.googleapis.com"
    disable_on_destroy = false
    depends_on         = ["google_project_service.serviceapi"]
  }
  resource "google_project_service" "redisapi" {
    service            = "redis.googleapis.com"
    disable_on_destroy = false
    depends_on         = ["google_project_service.serviceapi"]
  }
  # Create a VPC and a subnetwork in our region
  resource "google_compute_network" "appnetwork" {
    name                    = "${var.environment}-vpn"
    auto_create_subnetworks = "false"
  }
  resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" {
    name          = "${var.environment}-vpn-subnet"
    ip_cidr_range = "10.2.0.0/16"
    region        = "europe-west1"
    network       = "${google_compute_network.appnetwork.self_link}"
    secondary_ip_range {
      range_name    = "kubernetes-secondary-range-pods"
      ip_cidr_range = "10.60.0.0/16"
    }
    secondary_ip_range {
      range_name    = "kubernetes-secondary-range-services"
      ip_cidr_range = "10.70.0.0/16"
    }
  }
  # GKE cluster setup
  resource "google_container_cluster" "primary" {
    name               = "${var.environment}-cluster"
    zone               = "${data.google_compute_zones.available.names[1]}"
    initial_node_count = 1
    description        = "Kubernetes Cluster"
    network            = "${google_compute_network.appnetwork.self_link}"
    subnetwork         = "${google_compute_subnetwork.network-with-private-secondary-ip-ranges.self_link}"
    depends_on         = ["google_project_service.serviceapi"]
    additional_zones = [
      "${data.google_compute_zones.available.names[0]}",
      "${data.google_compute_zones.available.names[2]}",
    ]
    master_auth {
      username = "xxxxxxx"
      password = "xxxxxxx"
    }
    ip_allocation_policy {
      cluster_secondary_range_name  = "kubernetes-secondary-range-pods"
      services_secondary_range_name = "kubernetes-secondary-range-services"
    }
    node_config {
      oauth_scopes = [
        "https://www.googleapis.com/auth/compute",
        "https://www.googleapis.com/auth/devstorage.read_only",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring",
        "https://www.googleapis.com/auth/trace.append"
      ]
      tags = ["kubernetes", "${var.environment}"]
    }
  }
  ##################
  # MySQL DATABASES 
  ##################
  resource "google_sql_database_instance" "core" {
    name             = "${var.environment}-sql-core"
    database_version = "MYSQL_5_7"
    region           = "${var.region}"
    depends_on       = ["google_project_service.sqlapi"]
    settings {
      # Second-generation instance tiers are based on the machine
      # type. See argument reference below.
      tier = "db-n1-standard-1"
    }
  }
  resource "google_sql_database_instance" "tenant1" {
    name             = "${var.environment}-sql-tenant1"
    database_version = "MYSQL_5_7"
    region           = "${var.region}"
    depends_on       = ["google_project_service.sqlapi"]
    settings {
      # Second-generation instance tiers are based on the machine
      # type. See argument reference below.
      tier = "db-n1-standard-1"
    }
  }
  resource "google_sql_database_instance" "tenant2" {
    name             = "${var.environment}-sql-tenant2"
    database_version = "MYSQL_5_7"
    region           = "${var.region}"
    depends_on       = ["google_project_service.sqlapi"]
    settings {
      # Second-generation instance tiers are based on the machine
      # type. See argument reference below.
      tier = "db-n1-standard-1"
    }
  }
  resource "google_sql_database" "core" {
    name     = "project_core"
    instance = "${google_sql_database_instance.core.name}"
  }
  resource "google_sql_database" "tenant1" {
    name     = "project_tenant_1"
    instance = "${google_sql_database_instance.tenant1.name}"
  }
  resource "google_sql_database" "tenant2" {
    name     = "project_tenant_2"
    instance = "${google_sql_database_instance.tenant2.name}"
  }
  ##################
  # MySQL USERS
  ##################
  resource "google_sql_user" "core-user" {
    name     = "${var.sqluser}"
    instance = "${google_sql_database_instance.core.name}"
    host     = "cloudsqlproxy~%"
    password = "${var.sqlpassword}"
  }
  resource "google_sql_user" "tenant1-user" {
    name     = "${var.sqluser}"
    instance = "${google_sql_database_instance.tenant1.name}"
    host     = "cloudsqlproxy~%"
    password = "${var.sqlpassword}"
  }
  resource "google_sql_user" "tenant2-user" {
    name     = "${var.sqluser}"
    instance = "${google_sql_database_instance.tenant2.name}"
    host     = "cloudsqlproxy~%"
    password = "${var.sqlpassword}"
  }
  ##################
  # REDIS
  ##################
  resource "google_redis_instance" "redis" {
    name               = "${var.environment}-redis"
    tier               = "BASIC"
    memory_size_gb     = 1
    depends_on         = ["google_project_service.redisapi"]
    authorized_network = "${google_compute_network.appnetwork.self_link}"
    region             = "${var.region}"
    location_id        = "${data.google_compute_zones.available.names[0]}"
    redis_version = "REDIS_3_2"
    display_name  = "Redis Instance"
  }
  # The following outputs allow authentication and connectivity to the GKE Cluster.
  output "client_certificate" {
    value = "${google_container_cluster.primary.master_auth.0.client_certificate}"
  }
  output "client_key" {
    value = "${google_container_cluster.primary.master_auth.0.client_key}"
  }
  output "cluster_ca_certificate" {
    value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"
  }
The service and deployment of the private API:

  # START CRUD POD
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: crud-pod
    labels:
      app: crud
  spec:
    template:
      metadata:
        labels:
          app: crud
      spec:
        containers:
          - name: crud
            image: eu.gcr.io/dev-xxxxx/crud:latest-unstable
            ports:
              - containerPort: 3333
            env:
            - name: NODE_ENV
              value: develop
            volumeMounts:
            - [..MountedConfigFiles..]
          # [START proxy_container]
          - name: cloudsql-proxy
            image: gcr.io/cloudsql-docker/gce-proxy:1.11
            command: ["/cloud_sql_proxy",
                      "-instances=dev-xxxx:europe-west1:dev-sql-core=tcp:3306,dev-xxxx:europe-west1:dev-sql-tenant1=tcp:3307,dev-xxxx:europe-west1:dev-sql-tenant2=tcp:3308",
                      "-credential_file=xxxx"]
            volumeMounts:
              - name: cloudsql-instance-credentials
                mountPath: /secrets/cloudsql
                readOnly: true
          # [END proxy_container]
        # [START volumes]
        volumes:
          - name: cloudsql-instance-credentials
            secret:
              secretName: cloudsql-instance-credentials
          - [..ConfigFilesVolumes..]
        # [END volumes]
  # END CRUD POD
  ---
  # START CRUD SERVICE
  apiVersion: v1
  kind: Service
  metadata:
    name: crud
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
  spec:
    type: LoadBalancer
    loadBalancerSourceRanges: 
      - 10.60.0.0/16
    ports:
    - name: crud-port
      port: 3333
      protocol: TCP # default; can also specify UDP
    selector:
      app: crud # label selector for Pods to target
  # END CRUD SERVICE
And the public one (including the ingress):


  # START SAPI POD
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: sapi-pod
    labels:
      app: sapi
  spec:
    template:
      metadata:
        labels:
          app: sapi
      spec:
        containers:
          - name: sapi
            image: eu.gcr.io/dev-xxx/sapi:latest-unstable
            ports:
              - containerPort: 8080
            env:
              - name: NODE_ENV
                value: develop
            volumeMounts:
              - [..MountedConfigFiles..]
        volumes:
          - [..ConfigFilesVolumes..]
  # END SAPI POD
  ---
  # START SAPI SERVICE
  kind: Service
  apiVersion: v1
  metadata:
    name: sapi # Service name
  spec:
    selector:
      app:  sapi
    ports:
    - port: 8080
      targetPort: 8080
    type: NodePort
  # END SAPI SERVICE
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: dev-ingress
    annotations:
      kubernetes.io/ingress.global-static-ip-name: api-dev-static-ip
    labels:
      app: sapi-ingress
  spec:
    backend:
      serviceName: sapi
      servicePort: 8080
    tls:
    - hosts:
      - xxxxx
      secretName: xxxxx

Have you tried a different Linux distribution's kernel? Kubernetes relies heavily on iptables (unless you are using Cilium), which could be related to what you are seeing.

The VPC that Kubernetes runs in is entirely managed by Google. Our containers are running the node:carbon-alpine Docker image. Do you think the Docker image could be the problem?

We solved this by removing @google-cloud/logging-winston from our logTransport. For some reason it was blocking our communication, which is why our latency was so bad.
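For illustration, here is a sketch of what that fix amounts to in a typical winston setup (an assumed shape, not the actual code): the Stackdriver transport is simply dropped from the transports list.

  import winston from "winston";
  // import { LoggingWinston } from "@google-cloud/logging-winston";

  const logger = winston.createLogger({
    level: "info",
    transports: [
      new winston.transports.Console(),
      // new LoggingWinston(), // removed: this transport stalled our requests
    ],
  });

  logger.info("request handled");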