
Allocating Excessive Resources

Understand how Kubernetes manages resource allocation when memory requests exceed node capacity. Explore why Pods enter a pending state due to insufficient memory, how to interpret scheduling failures, and the importance of setting realistic resource requests based on actual usage to optimize cluster performance.

Allocating excessive memory

Let’s explore another possible situation through yet another updated definition, go-demo-2-insuf-node.yml. Just as before, the only change is in the resources section of the go-demo-2-db Deployment.

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-demo-2-db
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: db
        image: mongo:3.3
        resources:
          limits:
            memory: 12Gi
            cpu: 0.5
          requests:
            memory: 8Gi
            cpu: 0.3

This time, the requested memory (8Gi) exceeds the total memory of the node (6GB), and the memory limit (12Gi) is twice the node’s capacity. Since the scheduler can only place a Pod on a node whose allocatable memory covers the request, this Pod cannot be scheduled anywhere.
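
Before applying the definition, it can be useful to confirm how much memory the node can actually offer to Pods. The commands below are a minimal sketch: kubectl describe nodes prints each node’s Capacity and Allocatable sections, and the JSONPath query extracts just the allocatable memory. The exact values depend on your cluster.

Shell
kubectl describe nodes

kubectl get nodes \
    -o jsonpath='{.items[*].status.allocatable.memory}'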

Applying the definition

Let’s apply the change and observe what happens.

Shell
kubectl apply \
    -f go-demo-2-insuf-node.yml \
    --record

kubectl get pods

The output of the latter command is as follows:

Shell
NAME                READY   STATUS    RESTARTS   AGE
go-demo-2-api-...   1/1     Running   0          8m
go-demo-2-api-...   1/1     Running   0          8m
go-demo-2-api-...   1/1     Running   0          9m
go-demo-2-db-...    0/1     Pending   0          13s
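
To see why the new Pod cannot be scheduled, we can inspect its events. The commands below are a sketch: kubectl describe matches Pods by name prefix, and the event listing is filtered on the FailedScheduling reason. The exact event message (typically mentioning insufficient memory) varies between Kubernetes versions.

Shell
kubectl describe pods go-demo-2-db

kubectl get events \
    --field-selector reason=FailedScheduling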

This time, the status of the Pod ...