LABS – CERTIFIED KUBERNETES ADMINISTRATOR WITH PRACTICE TESTS > SCHEDULING – Resource Limits
Scheduling
01. A pod called rabbit is deployed. Identify the CPU requirements set on the pod
in the current (default) namespace.
Answer: 1
kubectl describe pod rabbit
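The `describe` output lists requests and limits under each container. A jsonpath query can also pull the value directly; this is a sketch that assumes the request is on the first container and requires a live cluster with the rabbit pod:

```shell
# Show only the resource section of the describe output
kubectl describe pod rabbit | grep -A3 Requests

# Or extract the CPU request of the first container directly
kubectl get pod rabbit -o jsonpath='{.spec.containers[0].resources.requests.cpu}'
```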
02. Delete the rabbit Pod.
Once deleted, wait for the pod to fully terminate.
- Delete Pod rabbit
controlplane ~ ➜ kubectl delete pod rabbit
pod "rabbit" deleted
03. Another pod called elephant has been deployed in the default namespace.
It fails to get to a running state.
Inspect this pod and identify the Reason why it is not running.
1) Running
2) Ready
3) CrashLoopBackOff
4) OOMKilled (correct answer)
controlplane ~ ➜ kubectl describe pod elephant
Name:         elephant
Namespace:    default
Priority:     0
Node:         controlplane/172.25.0.21
Start Time:   Wed, 19 Jan 2022 08:54:56 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.42.0.10
IPs:
  IP:  10.42.0.10
Containers:
  mem-stress:
    Container ID:  containerd://ca4fea45db41b58394ae6e05dd265a72d9ac5cfa2630e296d8cff4dda33416a8
    Image:         polinux/stress
    Image ID:      docker.io/polinux/stress@sha256:b6144f84f9c15dac80deb48d3a646b55c7043ab1d83ea0a697c09097aaad21aa
    Port:          <none>
    Host Port:     <none>
    Command:
      stress
    Args:
      --vm
      1
      --vm-bytes
      15M
      --vm-hang
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    1
      Started:      Wed, 19 Jan 2022 09:00:43 +0000
      Finished:     Wed, 19 Jan 2022 09:00:43 +0000
    Ready:          False
    Restart Count:  6
    Limits:
      memory:  10Mi
    Requests:
      memory:     5Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhcz7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-rhcz7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  8m9s                 default-scheduler  Successfully assigned default/elephant to controlplane
  Normal   Pulled     8m7s                 kubelet            Successfully pulled image "polinux/stress" in 1.172580167s
  Normal   Pulled     8m6s                 kubelet            Successfully pulled image "polinux/stress" in 180.697193ms
  Normal   Pulled     7m53s                kubelet            Successfully pulled image "polinux/stress" in 171.402384ms
  Normal   Pulled     7m30s                kubelet            Successfully pulled image "polinux/stress" in 260.126705ms
  Normal   Created    7m30s (x4 over 8m7s) kubelet            Created container mem-stress
  Normal   Started    7m30s (x4 over 8m7s) kubelet            Started container mem-stress
  Normal   Pulling    6m48s (x5 over 8m8s) kubelet            Pulling image "polinux/stress"
  Normal   Pulled     6m48s                kubelet            Successfully pulled image "polinux/stress" in 168.356311ms
  Warning  BackOff    3m1s (x25 over 8m5s) kubelet            Back-off restarting failed container
04. The status OOMKilled indicates that it is failing because the pod ran out of memory.
Identify the memory limit set on the POD.
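The limit appears in the `describe` output above (10Mi). To check it without scanning the full output, the following sketch filters for the Limits block or reads the field directly via jsonpath (field path assumed from the Pod API; requires the pod to exist):

```shell
# Show the limits block from the describe output
kubectl describe pod elephant | grep -A1 Limits

# Or read the memory limit of the first container directly
kubectl get pod elephant -o jsonpath='{.spec.containers[0].resources.limits.memory}'
```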
05. The elephant pod runs a process that consumes 15Mi of memory.
Increase the limit of the elephant pod to 20Mi.
Delete and recreate the pod if required. Do not modify anything other than the required fields.
- Pod Name: elephant
- Image Name: polinux/stress
- Memory Limit: 20Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: elephant
  namespace: default
spec:
  containers:
  - args:
    - --vm
    - "1"
    - --vm-bytes
    - 15M
    - --vm-hang
    - "1"
    command:
    - stress
    image: polinux/stress
    name: mem-stress
    resources:
      limits:
        memory: 20Mi
      requests:
        memory: 5Mi
Create the file elephant.yaml by running:
kubectl get po elephant -o yaml > elephant.yaml
Edit the file so that the memory limit is set to 20Mi, then replace the pod:
kubectl replace -f elephant.yaml --force
controlplane ~ ➜ kubectl replace -f elephant.yaml
Error from server (Conflict): error when replacing "elephant.yaml": Operation cannot be fulfilled on pods "elephant": the object has been modified; please apply your changes to the latest version and try again
controlplane ~ ✖ kubectl replace -f elephant.yaml --force
pod "elephant" deleted
pod/elephant replaced
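Since pod resource fields are immutable, the edit-and-replace can also be piped in one step. This is a hypothetical one-liner that assumes "10Mi" appears only in the limit field of the exported manifest, so verify before using it:

```shell
# Export, rewrite the limit in-stream, and force-replace the pod
kubectl get pod elephant -o yaml | sed 's/10Mi/20Mi/' | kubectl replace --force -f -
```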
06. Inspect the status of the pod and make sure it's running.
kubectl get po
07. Delete the elephant Pod.
Once deleted, wait for the pod to fully terminate.
kubectl delete pod elephant
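kubectl delete blocks until the object is removed by default; to confirm termination explicitly, kubectl wait can watch for deletion (timeout value is an arbitrary choice here):

```shell
# Delete and block until the pod is gone (--wait=true is the default)
kubectl delete pod elephant --wait=true

# Or, from another shell, watch for the pod's removal
kubectl wait --for=delete pod/elephant --timeout=60s
```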