Changing the timeout when Pulumi deploys Kubernetes resources

When I deploy resources to Kubernetes with Pulumi and I make a mistake, Pulumi waits for the Kubernetes resources to become healthy:

     Type                                                                               Name                               Status                  Info
 +   pulumi:pulumi:Stack                                                                aws-load-balancer-controller-dev   **creating failed**     1 error
 +   ├─ jaxxstorm:aws:loadbalancercontroller                                            foo                                created
 +   ├─ kubernetes:yaml:ConfigFile                                                      foo-crd                            created
 +   │  └─ kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition             targetgroupbindings.elbv2.k8s.aws  created                 1 warning
 +   ├─ kubernetes:core/v1:Namespace                                                    foo-namespace                      created
 +   ├─ kubernetes:core/v1:Service                                                      foo-webhook-service                **creating failed**     1 error
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:Role                                    foo-role                           created
 +   ├─ pulumi:providers:kubernetes                                                     k8s                                created
 +   ├─ aws:iam:Role                                                                    foo-role                           created
 +   │  └─ aws:iam:Policy                                                               foo-policy                         created
 +   ├─ kubernetes:core/v1:Secret                                                       foo-tls-secret                     created
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole                             foo-clusterrole                    created
 +   ├─ kubernetes:admissionregistration.k8s.io/v1beta1:ValidatingWebhookConfiguration  foo-validating-webhook             created                 1 warning
 +   ├─ kubernetes:admissionregistration.k8s.io/v1beta1:MutatingWebhookConfiguration    foo-mutating-webhook               created                 1 warning
 +   └─ kubernetes:core/v1:ServiceAccount                                               foo-serviceAccount                 **creating failed**     1 error
^C
Diagnostics:
  kubernetes:core/v1:ServiceAccount (foo-serviceAccount):
    error: resource aws-load-balancer-controller/foo-serviceaccount was not successfully created by the Kubernetes API server : ServiceAccount "foo-serviceaccount" is invalid: metadata.labels: Invalid value: "arn:aws:iam::616138583583:role/foo-role-10b9499": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

  kubernetes:core/v1:Service (foo-webhook-service):
    error: 2 errors occurred:
        * resource aws-load-balancer-controller/foo-webhook-service-4lpopjpr was successfully created, but the Kubernetes API server reported that it failed to fully initialize or become live: Resource operation was cancelled for "foo-webhook-service-4lpopjpr"
        * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods

Is there any way to disable this behaviour, so that I don't have to send Pulumi a termination signal?

Pulumi has special await logic for Kubernetes resources. You can read more about this in the Pulumi documentation.

Pulumi will wait for Kubernetes resources to be "healthy". The definition of "healthy" can differ depending on the resource being created, but generally Pulumi waits for the resource to:

  • Exist
  • Have a ready state (if the resource reports one)
You can skip this logic by adding an annotation to the resource, like so:

pulumi.com/skipAwait: "true"
You can also change the timeout, i.e. how long Pulumi will wait, with the following:

pulumi.com/timeoutSeconds: "600"
This can be added to any Kubernetes resource you manage with Pulumi, so, for example, a Service resource might look like this (using Pulumi's TypeScript SDK):

import * as k8s from "@pulumi/kubernetes";

// "name" is assumed to be defined elsewhere, e.g. const name = "foo";
const service = new k8s.core.v1.Service(`${name}-service`, {
  metadata: {
    namespace: "my-service",
    annotations: {
      "pulumi.com/timeoutSeconds": "60", // only wait 1 minute before Pulumi times out
      "pulumi.com/skipAwait": "true",    // or skip the await logic entirely
    },
  },
  spec: {
    ports: [{
      port: 443,
      targetPort: 9443,
    }],
    selector: {
      "app.kubernetes.io/name": "my-deployment",
      "app.kubernetes.io/instance": "foo",
    },
  },
});
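
Note that the stack in the question also deploys raw manifests through kubernetes:yaml:ConfigFile. You can stamp the same annotation onto every object a ConfigFile produces with its transformations option, rather than editing the YAML itself. A minimal sketch, assuming a hypothetical crds.yaml path:

import * as k8s from "@pulumi/kubernetes";

// Each transformation receives every parsed manifest object and can mutate it
// in place before Pulumi registers it as a resource.
const crds = new k8s.yaml.ConfigFile("foo-crd", {
  file: "crds.yaml", // hypothetical path to the CRD manifest
  transformations: [
    (obj: any) => {
      obj.metadata = obj.metadata || {};
      obj.metadata.annotations = {
        ...obj.metadata.annotations,
        "pulumi.com/skipAwait": "true", // skip the await logic for every object in the file
      };
    },
  ],
});

This applies the same per-resource annotations shown above to everything the file defines.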