I am trying to get my Manticore cluster working properly on Kubernetes, but I am facing a memory usage issue that leads to my pods being systematically OOMKilled and going into CrashLoopBackOff.
I have two nodes with 4 CPUs and 16 Gi of RAM each, each running one manticore container. I have set up requests and limits for each container at roughly 80% of the node's capacity, but my pods keep being OOMKilled (Out Of Memory) by Kubernetes.
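For reference, the resources stanza of my container spec looks roughly like this (a sketch; the values are the ones reported in the pod description below, and the container name is the only real identifier):

```yaml
# resources section of the manticore container in the Deployment spec
# (illustrative sketch; values taken from `kubectl describe pod`)
containers:
  - name: manticore
    resources:
      requests:
        cpu: 50m
        memory: 20Mi
      limits:
        cpu: 3700m
        memory: 6000Mi
```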
Containers:
  manticore:
    Container ID:   docker://2c71c25298154b09ecb00****6c58d1103096d0fc42732aa316516ec82a9
    Image:          *****/manticore:7e740fe43b50
    Image ID:       docker-pullable://*****/manticore@sha256:c38b116***38cf383fa583c8a76705c9ac2dd417851649df608134ee2cf8a2
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Wed, 30 Dec 2020 13:06:58 +0100
      Finished:     Wed, 30 Dec 2020 13:11:18 +0100
    Ready:          False
    Restart Count:  8
    Limits:
      cpu:     3700m
      memory:  6000Mi
    Requests:
      cpu:     50m
      memory:  20Mi
    Environment:    <none>
    Mounts:
      /var/lib/manticore from manticore-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-csf8n (ro)
I am using a real-time (RT) index, and every time I run some “REPLACE INTO” queries, memory usage keeps increasing. I have been monitoring this with:
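The writes are plain REPLACE INTO statements against the RT index, something like the following (the index and column names here are made up for illustration, not my actual schema):

```sql
-- illustrative only: hypothetical RT index and columns
REPLACE INTO rt_products (id, title, price)
VALUES (42, 'example product title', 9.99);
```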
kubectl top pod manticore-6446***6c-6bcd8
NAME                        CPU(cores)   MEMORY(bytes)
manticore-64466f86c-6bcd8   1m           3429Mi
CPU stays low, but memory never stops increasing, and when it reaches the pod’s limit, the pod is destroyed, again and again. Is there a way to “cap” memory usage, or to automatically flush it to disk before the pod gets destroyed?
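For context, the only related knobs I have found so far in the Manticore documentation are the rt_mem_limit index option and the FLUSH RAMCHUNK statement, sketched below as I understand them (my_rt_index is a placeholder, and I am not sure these alone explain or fix the growth I am seeing):

```sql
-- Index option in the Manticore config (not a SQL statement):
--   rt_mem_limit = 512M
-- caps the size of the in-memory RAM chunk before it is saved
-- as a disk chunk.

-- Manually save the current RAM chunk of an RT index to disk:
FLUSH RAMCHUNK my_rt_index;
```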