Docker Go microservices with the Ambassador API gateway
I'm having some trouble getting Ambassador working. I'm new to Kubernetes and just teaching myself.

I have successfully worked through the demo material Ambassador provides; for example, the /httpbin/ endpoint works fine. But when I try to deploy a Go service, it keeps crashing. Hitting the "qotm" endpoint returns the following response:

upstream request timeout

Pod status:

CrashLoopBackOff

From my research this seems to be related to the YAML file not being configured correctly, but I'm struggling to find any documentation relevant to this use case.

My cluster is running on AWS EKS, and the images are pushed to AWS ECR.
main.go:
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	PORT := os.Getenv("PORT")
	if PORT == "" {
		PORT = "3001"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello World from path: %s\n", r.URL.Path)
	})
	// Log the error so a failed bind shows up in the container logs.
	log.Fatal(http.ListenAndServe(":"+PORT, nil))
}
Dockerfile:
FROM golang:alpine
ADD ./src /go/src/app
WORKDIR /go/src/app
EXPOSE 3001
ENV PORT=3001
CMD ["go", "run", "main.go"]
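Since the accepted answer mentions the container being OOMKilled, it is worth noting that `go run` compiles the program inside the running container, which costs memory and time at every start. A multi-stage build that ships a precompiled binary avoids that. This is only a sketch, assuming the `./src` layout from the Dockerfile above:

```dockerfile
# Build stage: compile once, at image build time.
FROM golang:alpine AS build
WORKDIR /go/src/app
COPY ./src .
RUN CGO_ENABLED=0 go build -o /app main.go

# Run stage: a small image containing just the binary.
FROM alpine
COPY --from=build /app /app
ENV PORT=3001
EXPOSE 3001
CMD ["/app"]
```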
test.yaml:
apiVersion: v1
kind: Service
metadata:
  name: qotm
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: qotm_mapping
      prefix: /qotm/
      service: qotm
spec:
  selector:
    app: qotm
  ports:
  - port: 80
    name: http-qotm
    targetPort: http-api
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qotm
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: qotm
    spec:
      containers:
      - name: qotm
        image: ||REMOVED||
        ports:
        - name: http-api
          containerPort: 3001
        readinessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 3
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
Pod description:
Name:               qotm-7b9bf4d499-v9nxq
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-89-69.eu-west-1.compute.internal/192.168.89.69
Start Time:         Sun, 17 Mar 2019 17:19:50 +0000
Labels:             app=qotm
                    pod-template-hash=3656908055
Annotations:        <none>
Status:             Running
IP:                 192.168.113.23
Controlled By:      ReplicaSet/qotm-7b9bf4d499
Containers:
  qotm:
    Container ID:   docker://5839996e48b252ac61f604d348a98c47c53225712efd503b7c3d7e4c736920c4
    Image:          IMGURL
    Image ID:       docker-pullable://IMGURL
    Port:           3001/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 17 Mar 2019 17:30:49 +0000
      Finished:     Sun, 17 Mar 2019 17:30:49 +0000
    Ready:          False
    Restart Count:  7
    Limits:
      cpu:     100m
      memory:  200Mi
    Requests:
      cpu:        100m
      memory:     200Mi
    Readiness:    http-get http://:3001/health delay=30s timeout=1s period=3s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5bbxw (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  default-token-5bbxw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5bbxw
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                                                  Message
  ----     ------     ----                 ----                                                  -------
  Normal   Scheduled  12m                  default-scheduler                                     Successfully assigned default/qotm-7b9bf4d499-v9nxq to ip-192-168-89-69.eu-west-1.compute.internal
  Normal   Pulled     10m (x5 over 12m)    kubelet, ip-192-168-89-69.eu-west-1.compute.internal  Container image "IMGURL" already present on machine
  Normal   Created    10m (x5 over 12m)    kubelet, ip-192-168-89-69.eu-west-1.compute.internal  Created container
  Normal   Started    10m (x5 over 11m)    kubelet, ip-192-168-89-69.eu-west-1.compute.internal  Started container
  Warning  BackOff    115s (x47 over 11m)  kubelet, ip-192-168-89-69.eu-west-1.compute.internal  Back-off restarting failed container
In your Kubernetes deployment file you have pointed the readiness probe at port 5000, while the application is exposed on port 3001. Also, while running the container I got OOMKilled a few times, so I increased the memory limit. Either way, the deployment file below should work fine:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qotm
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: qotm
    spec:
      containers:
      - name: qotm
        image: <YOUR_IMAGE>
        imagePullPolicy: Always
        ports:
        - name: http-api
          containerPort: 3001
        readinessProbe:
          httpGet:
            path: /health
            port: 3001
          initialDelaySeconds: 30
          periodSeconds: 3
        resources:
          limits:
            cpu: "0.1"
            memory: 200Mi
Comments:

Can you add the logs of the failed container? — Done; the original logs have been redacted. Thanks.

That looks like kubectl describe output; could you try kubectl logs instead?

When I follow the logs I'm seeing this error, any ideas: 2019/03/17 12:14:02 listen tcp :80: bind: permission denied — did you try adding EXPOSE 3001 to the Dockerfile?

I tried it on minikube and didn't run into any problems. Could you also check the event details with kubectl describe pod? It might give you an idea of why the crash is happening.

I just realized I made a mistake when deploying your changes! Everything is fine now. Thank you very much.