MongoDB RAM usage in a Kubernetes pod – not aware of node limits
In Google Kubernetes Engine I have 3 nodes, each with 3.75 GB of RAM. I also have an API that is called from a single endpoint. That endpoint performs batch inserts into MongoDB like this:
IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);
foreach (var batch in entities.Batch(1000))
{
    await stageCollection.InsertManyAsync(batch);
}
Update
In the meantime I have also configured pod anti-affinity to make sure that nothing else competes for RAM on the node running MongoDB. But we still hit OOM kills.
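For reference, the anti-affinity rule was along these lines (a minimal sketch under the pod template's `spec`; the `role: api` label selector is an assumption – substitute the labels of whatever pods should not share a node with mongo):

```yaml
# Sketch: keep the mongo pod off nodes that already run the labeled workloads.
# "role: api" is illustrative, not from the actual manifest.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: role
          operator: In
          values:
          - api
      topologyKey: "kubernetes.io/hostname"
```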
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.6
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - "0.0.0.0"
        - "--noprealloc"
        - "--wiredTigerCacheSizeGB"
        - "1.5"
        resources:
          limits:
            memory: "2Gi"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
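One detail worth noting about the manifest above: the mongo container declares a memory limit but no memory request, so the scheduler does not reserve that RAM on the node. A sketch of the same `resources` block with an explicit request (the values mirror the existing 2Gi limit and are illustrative, not a tested fix):

```yaml
# Sketch: request + limit for the mongo container.
# With a request, the scheduler accounts for this memory when placing pods;
# the limit is still the threshold at which the container is OOM-killed.
resources:
  requests:
    memory: "2Gi"
  limits:
    memory: "2Gi"
```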