Docker microservice application restarts repeatedly on Kubernetes


I am trying to run a microservice application with Kubernetes. I run rabbitmq, elasticsearch, and the Eureka discovery service on Kubernetes. Besides these, I have three microservice applications. When I run two of them everything is fine; however, when I run the third one, they all start restarting again and again for no apparent reason.

One of my configuration files:

apiVersion: v1
kind: Service
metadata:
  name: hrm
  labels:
    app: suite
spec:
  type: NodePort
  ports:
    - port: 8086
      nodePort: 30001
  selector:
    app: suite
    tier: hrm-core
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hrm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: suite
        tier: hrm-core
    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
      imagePullSecrets:
      - name: regsecret
Result of kubectl describe for the hrm pod:

 State:     Running
      Started:      Mon, 12 Jun 2017 12:08:28 +0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 12 Jun 2017 12:07:05 +0300
    Ready:      True
    Restart Count:  5
  18m       18m     1   kubelet, minikube               Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)"
kubectl get pods:

NAME                        READY     STATUS    RESTARTS   AGE
discserv-189146465-s599x    1/1       Running   0          2d
esearch-3913228203-9sm72    1/1       Running   0          2d
hrm-3288407936-cwvgz        1/1       Running   6          46m
parabot-1262887100-6098j    1/1       Running   9          2d
rabbitmq-279796448-9qls3    1/1       Running   0          2d
suite-ui-1725964700-clvbd   1/1       Running   3          2d
kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
minikube version:

minikube version: v0.18.0
When I look at the pod logs there are no errors; the application seems to start without any problem. What is wrong here?
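For reference, a sketch of how the crashed instance itself can be inspected — the pod name below is taken from the kubectl get pods output above, and the --previous flag returns the logs of the last terminated container rather than the freshly restarted one:

# logs of the previously terminated (crashed) container
kubectl logs hrm-3288407936-cwvgz --previous

# restart count, last state and exit code of the container
kubectl describe pod hrm-3288407936-cwvgz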

EDIT: output of kubectl get events:

19m        19m         1         discserv-189146465-lk3sm    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Pulling                   kubelet, minikube       pulling image "private repo"
19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Created                   kubelet, minikube       Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m        19m         1         discserv-189146465-lk3sm    Pod          spec.containers{discserv}   Normal    Started                   kubelet, minikube       Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m        19m         1         esearch-3913228203-6l3t7    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Pulled                    kubelet, minikube       Container image "elasticsearch:2.4" already present on machine
19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Created                   kubelet, minikube       Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
19m        19m         1         esearch-3913228203-6l3t7    Pod          spec.containers{esearch}    Normal    Started                   kubelet, minikube       Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
18m        18m         1         hrm-3288407936-d2vhh        Pod                                      Normal    Scheduled                 default-scheduler       Successfully assigned hrm-3288407936-d2vhh to minikube
18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Pulling                   kubelet, minikube       pulling image "private repo"
18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Created                   kubelet, minikube       Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m        18m         1         hrm-3288407936-d2vhh        Pod          spec.containers{hrm}        Normal    Started                   kubelet, minikube       Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m        18m         1         hrm-3288407936              ReplicaSet                               Normal    SuccessfulCreate          replicaset-controller   Created pod: hrm-3288407936-d2vhh
18m        18m         1         hrm                         Deployment                               Normal    ScalingReplicaSet         deployment-controller   Scaled up replica set hrm-3288407936 to 1
19m        19m         1         minikube                    Node                                     Normal    RegisteredNode            controllermanager       Node minikube event: Registered Node minikube in NodeController
19m        19m         1         minikube                    Node                                     Normal    Starting                  kubelet, minikube       Starting kubelet.
19m        19m         1         minikube                    Node                                     Warning   ImageGCFailed             kubelet, minikube       unable to find data for container /
19m        19m         1         minikube                    Node                                     Normal    NodeAllocatableEnforced   kubelet, minikube       Updated Node Allocatable limit across pods
19m        19m         1         minikube                    Node                                     Normal    NodeHasSufficientDisk     kubelet, minikube       Node minikube status is now: NodeHasSufficientDisk
19m        19m         1         minikube                    Node                                     Normal    NodeHasSufficientMemory   kubelet, minikube       Node minikube status is now: NodeHasSufficientMemory
19m        19m         1         minikube                    Node                                     Normal    NodeHasNoDiskPressure     kubelet, minikube       Node minikube status is now: NodeHasNoDiskPressure
19m        19m         1         minikube                    Node                                     Warning   Rebooted                  kubelet, minikube       Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300
19m        19m         1         minikube                    Node                                     Normal    NodeNotReady              kubelet, minikube       Node minikube status is now: NodeNotReady
19m        19m         1         minikube                    Node                                     Normal    Starting                  kube-proxy, minikube    Starting kube-proxy.
19m        19m         1         minikube                    Node                                     Normal    NodeReady                 kubelet, minikube       Node minikube status is now: NodeReady
8m         8m          1         minikube                    Node                                     Warning   SystemOOM                 kubelet, minikube       System OOM encountered
18m        18m         1         parabot-1262887100-r84kf    Pod                                      Normal    Scheduled                 default-scheduler       Successfully assigned parabot-1262887100-r84kf to minikube
8m         18m         2         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Pulling                   kubelet, minikube       pulling image "private repo"
8m         18m         2         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
18m        18m         1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Created                   kubelet, minikube       Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
18m        18m         1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Started                   kubelet, minikube       Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
8m         8m          1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Created                   kubelet, minikube       Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
8m         8m          1         parabot-1262887100-r84kf    Pod          spec.containers{parabot}    Normal    Started                   kubelet, minikube       Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
18m        18m         1         parabot-1262887100          ReplicaSet                               Normal    SuccessfulCreate          replicaset-controller   Created pod: parabot-1262887100-r84kf
18m        18m         1         parabot                     Deployment                               Normal    ScalingReplicaSet         deployment-controller   Scaled up replica set parabot-1262887100 to 1
19m        19m         1         rabbitmq-279796448-pcqqh    Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Pulling                   kubelet, minikube       pulling image "rabbitmq"
19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "rabbitmq"
19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Created                   kubelet, minikube       Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m        19m         1         rabbitmq-279796448-pcqqh    Pod          spec.containers{rabbitmq}   Normal    Started                   kubelet, minikube       Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m        19m         1         suite-ui-1725964700-ssshn   Pod                                      Normal    SandboxChanged            kubelet, minikube       Pod sandbox changed, it will be killed and re-created.
19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Pulling                   kubelet, minikube       pulling image "private repo"
19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Pulled                    kubelet, minikube       Successfully pulled image "private repo"
19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Created                   kubelet, minikube       Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
19m        19m         1         suite-ui-1725964700-ssshn   Pod          spec.containers{suite-ui}   Normal    Started                   kubelet, minikube       Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a

Check kubectl logs for any obvious errors. In this case, as suspected, it looks like a lack of resources (or a service with a resource leak).
If possible, try increasing the resources and see if that helps.
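For example, a minimal sketch of what that could look like for the pod template of the hrm Deployment above — the request and limit values here are placeholders only and would need to be tuned to the application's actual footprint:

    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
        resources:
          # illustrative values only; tune to the real memory/CPU usage
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
      imagePullSecrets:
      - name: regsecret

With explicit requests the scheduler knows up front how much memory the pod needs, and with a limit an over-consuming container is killed in isolation (typically reported as OOMKilled) instead of pushing the whole node into memory pressure.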

Just an optimistic guess: exit code 137 means termination signal 9 (137 minus 128), so the node may not have enough memory and the process may have been killed by the OS. Could you increase the node's resources, or reduce the number of other services, and see if that helps?

I was thinking the same thing, but when I describe the node there seems to be enough memory. It reports: OutOfDisk False, MemoryPressure False, DiskPressure False, Ready True. Now I am wondering whether the discovery service might be the problem.

Does the order in which you start them matter? For example, is it always hrm that fails to start, or is it always the third one regardless of the start order? Combined with the other comments, that would point to a resource problem. I also noticed the server is 1.6.0; given that this is the very first 1.6 release, have you tried the 1.6.4 server?

Hi DanMurphy, the order does not matter — as you said, it is always the third one. It does look like a resource problem, but according to the output of the describe node command everything seems fine, which is puzzling. I installed kubectl via curl but could not find anything on how to upgrade the server version. Could you help me further?

Hi @DanMurphy, I found how to upgrade, so I will try 1.6.4 and get back to you. Thanks :)

I will try starting minikube with more memory and update the question afterwards. Thanks.

Hi @AshishVyas, it was indeed a memory problem. They do not seem to have any issues now. Thank you very much.

Glad I could help.
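For anyone landing on the same issue: 137 - 128 = 9, i.e. SIGKILL, which here turned out to be the node running out of memory. A minimal sketch of the fix hinted at above — the 4096 MB value is only an example, and on this minikube version the VM has to be recreated for a new memory size to take effect:

# recreate the minikube VM with more memory (value in MB; 4096 is just an example)
minikube delete
minikube start --memory 4096

# optionally also give the VM more CPUs:
#   minikube start --memory 4096 --cpus 2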