Authenticating Grafana, Prometheus, and Kiali with Azure AD, and an internal load balancer for Istio
I am deploying Istio in Azure Kubernetes Service (AKS) and I have the following questions: can Grafana, Prometheus, and Kiali be authenticated with Azure AD, and is it possible to deploy Istio with an internal load balancer? By default, Istio appears to be deployed in Azure with a public load balancer. What changes do I need to make so that it uses an internal one?

Answering the second question: you can add the AKS annotation for an internal load balancer. To create an internal load balancer, create a service manifest named internal-lb.yaml with the service type LoadBalancer and the azure-load-balancer-internal annotation, as in the following example:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
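A minimal sketch of applying the manifest above and watching for the private IP (this assumes a kubectl context that points at the AKS cluster, so it only runs against a live cluster):

```shell
# Apply the example manifest. Until Azure finishes provisioning the
# internal load balancer, EXTERNAL-IP shows <pending>; afterwards it is
# a private IP from the cluster's VNet rather than a public address.
kubectl apply -f internal-lb.yaml
kubectl get service internal-app --watch
```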
This annotation can therefore also be set when installing with Helm via the corresponding --set flag.

As mentioned in the comments, you should stick to one question per post, so I suggest creating a second post for the authentication question. Hope this helps.
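A sketch of that Helm form, reconstructed from the command quoted in the comments (the chart path and value names follow the Istio 1.x Helm charts and may differ in your version; the dots in the annotation key must be escaped so --set does not treat them as path separators):

```shell
# Render the chart with the internal-LB annotation on the ingress-gateway
# Service, then apply the resulting manifest to the cluster.
helm template install/kubernetes/helm/istio \
  --name istio \
  --namespace istio-system \
  --set 'gateways.istio-ingressgateway.serviceAnnotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true" \
  > aks-istio.yaml
kubectl apply -f aks-istio.yaml
```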
Update: with istioctl you can do the following:

Generate the manifest file for your Istio deployment; for this example I used the demo profile.
Modify istio.yaml and search for the text type: LoadBalancer.
Add the annotation for the internal load balancer as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
After saving the changes, deploy the modified istio.yaml to the K8s cluster with kubectl.
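The manual edit can also be scripted. A minimal sketch: the heredoc below stands in for the manifest that istioctl generates (the real file contains many more resources), and GNU sed appends the annotation after the Service's metadata: line. On a full manifest you would need a more targeted match, since every resource has its own metadata: block.

```shell
# Stand-in for the generated istio.yaml (normally produced by istioctl;
# the real file is much larger).
cat > istio.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
EOF

# Append the internal-LB annotation under metadata: (GNU sed syntax).
sed -i '/^metadata:/a\
  annotations:\
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"' istio.yaml

grep -c 'azure-load-balancer-internal' istio.yaml   # → 1
```

After editing the real manifest this way, it is deployed with kubectl apply -f istio.yaml as described above.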
After that, you can verify that the annotation is present on the istio-ingressgateway service:
$ kubectl get svc istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"labels":{"app":"istio-ingressgateway","istio":"ingressgateway","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15020,"targetPort":15020},{"name":"http2","port":80,"targetPort":80},{"name":"https","port":443},{"name":"kiali","port":15029,"targetPort":15029},{"name":"prometheus","port":15030,"targetPort":15030},{"name":"grafana","port":15031,"targetPort":15031},{"name":"tracing","port":15032,"targetPort":15032},{"name":"tls","port":15443,"targetPort":15443}],"selector":{"app":"istio-ingressgateway"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  creationTimestamp: "2020-01-27T13:51:07Z"
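To check just the annotation without reading through the whole output, a grep over the YAML works (assuming the same service name and namespace as above; this again requires a live cluster):

```shell
# Print only the annotation lines from the Service definition; grep exits
# non-zero if the annotation is missing, which makes this usable in scripts.
kubectl get svc istio-ingressgateway -n istio-system -o yaml \
  | grep 'azure-load-balancer-internal'
```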
Hope this helps.

Why do you have two separate questions in one post? They are not related in any way.

Thank you for pointing out the community guidelines. I will ask the authentication question separately. I did not use Helm; do you know how to do this with istioctl?

I have edited the answer to include a solution for istioctl.

I could not run this command on a Windows box because I kept getting the error message: failed to configure logging: cannot open sink /dev/null. However, I finally got it working by installing istio-init with Helm. Then I ran helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set 'gateways.istio-ingressgateway.serviceAnnotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true" > aks-istio.yaml to set the annotation for the internal load balancer, and then installed the manifest with kubectl.