Kubernetes HPA labelSelector not filtering external metrics
I am trying to set up an autoscaler based on custom metrics on an EKS cluster (v1.13.10-eks-5ac0f1), but the labelSelector filter on the external metric labels does not seem to filter anything. Using … and … (v0.3.6), I managed to export Prometheus metrics as Kubernetes external metrics. The metric is exported correctly and is visible through the Kubernetes API:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/*/sqs_queue_messages"
{
  "kind": "ExternalMetricValueList",
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/external.metrics.k8s.io/v1beta1/namespaces/%2A/sqs_queue_messages"
  },
  "items": [
    {
      "metricName": "sqs_queue_messages",
      "metricLabels": {
        "__name__": "sqs_queue_messages",
        ...
        "queue_name": "temp-queue"
      },
      "timestamp": "2019-11-07T21:14:44Z",
      "value": "0"
    },
    {
      "metricName": "sqs_queue_messages",
      "metricLabels": {
        "__name__": "sqs_queue_messages",
        ...
        "queue_name": "random-queue"
      },
      "timestamp": "2019-11-07T21:14:44Z",
      "value": "0"
    }
  ]
}
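For comparison, the filtering that the adapter is expected to perform server-side can be reproduced client-side. The sketch below is hypothetical helper code (not part of the adapter), applying matchLabels semantics to a trimmed-down copy of the items list above: a series is kept only if every selector key/value pair appears in its metricLabels.

```python
def match_labels(metric_labels, selector):
    """Kubernetes matchLabels semantics: every key=value pair in the
    selector must be present with the same value in the metric's labels."""
    return all(metric_labels.get(k) == v for k, v in selector.items())

# Trimmed-down copy of the two series returned by the external metrics API.
items = [
    {"metricName": "sqs_queue_messages",
     "metricLabels": {"queue_name": "temp-queue"}, "value": "0"},
    {"metricName": "sqs_queue_messages",
     "metricLabels": {"queue_name": "random-queue"}, "value": "0"},
]

# With labelSelector=queue_name=temp-queue only one series should survive.
selected = [i for i in items
            if match_labels(i["metricLabels"], {"queue_name": "temp-queue"})]
```

With a working adapter, the `?labelSelector=` query would return only the `temp-queue` series, exactly as this client-side filter does.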
horizontal-pod-autoscaler.yml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: api
  namespace: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metricName: sqs_queue_messages
      metricSelector:
        matchLabels:
          queue_name: temp-queue
      targetAverageValue: "100"
The problem is that the HPA does not select only the series with the matching label; in fact, looking at the logs, I can see that the following call is performed:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/*/sqs_queue_messages?labelSelector=queue_name%3Dtemp-queue"
The expected result would be a single item (the one matching the queue_name: temp-queue label), but the filter is ignored and every series is returned.

Comments:

- Can you try changing the apiVersion to autoscaling/v2beta2? See … for an explanation.
- Hi @HelloWorld, thanks for the reply. By the way, I tried changing the apiVersion to v2beta2, but the HPA still does not work correctly; in particular, the request `kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/*/sqs_queue_messages?labelSelector=queue_name%3Dtemp-queue"` ignores the labelSelector and returns all series instead of only the selected one.
- Hey @GaruGaru, any update on this issue?