CKA (Certified Kubernetes Administrator)/Kode Kloud

MockExam(2)

seulseul 2022. 2. 7. 16:03

01. Take a backup of the etcd cluster and save it to /opt/etcd-backup.db.

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379  \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /opt/etcd-backup.db


ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save <backup-file-location>
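
A quick way to sanity-check the backup afterwards (not required by the grader) is to print the snapshot status; this only reads the local file, so the TLS flags are not needed:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-backup.db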

 

02. Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir 

that lasts for the life of the Pod.


Specs are given below.

 
  • Pod named 'redis-storage' created
  • Pod 'redis-storage' uses Volume type of emptyDir
  • Pod 'redis-storage' uses volumeMount with mountPath = /data/redis
apiVersion: v1
kind: Pod
metadata:
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - mountPath: /data/redis
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
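
A quick way to verify (assuming the manifest above was saved as redis-storage.yaml; the filename is not specified in the task):

kubectl apply -f redis-storage.yaml
kubectl get pod redis-storage
kubectl describe pod redis-storage | grep -A3 Volumes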

 

03. Create a new pod called super-user-pod with image busybox:1.28.

Allow the pod to be able to set system_time.


The container should sleep for 4800 seconds.

 
  • Pod: super-user-pod
  • Container Image: busybox:1.28
  • SYS_TIME capability for the container?
apiVersion: v1
kind: Pod
metadata:
  name: super-user-pod
spec:
  containers:
  - image: busybox:1.28
    name: super-user-pod
    command: [ "sh", "-c", "sleep 4800" ]
    securityContext:
      capabilities:
        add: ["SYS_TIME"]

 

04. A pod definition file is created at /root/CKA/use-pv.yaml.

Make use of this manifest file and mount the persistent volume called pv-1.

Ensure the pod is running and the PV is bound.

 

mountPath: /data
persistentVolumeClaim Name: my-pvc

  • persistentVolume Claim configured correctly
  • pod using the correct mountPath
  • pod using the persistent volume claim?
---
# use-pv.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
    volumeMounts:
    - mountPath: "/data"
      name: config
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: my-pvc
---
# pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
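
Before creating the PVC, check the capacity and access mode of pv-1 so the claim can bind to it (the 10Mi request above assumes pv-1 offers at least that much); then apply both manifests and verify:

kubectl get pv pv-1
kubectl apply -f pvc.yaml
kubectl apply -f /root/CKA/use-pv.yaml
kubectl get pvc my-pvc   # STATUS should be Bound
kubectl get pod use-pv   # STATUS should be Running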

 

05. Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.

Next upgrade the deployment to version 1.17 using rolling update.


 
  • Deployment : nginx-deploy. Image: nginx:1.16
  • Image: nginx:1.16
  • Task: Upgrade the version of the deployment to 1.17
  • Task: Record the changes for the image upgrade
kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1
kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
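
To confirm the rolling update completed (optional check):

kubectl rollout status deployment/nginx-deploy
kubectl rollout history deployment/nginx-deploy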

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

 


 

06. Create a new user called john.

Grant him access to the cluster.

John should have permission to create, list, get, update and delete pods in the development namespace.

The private key exists in the location: /root/CKA/john.key and csr at /root/CKA/john.csr.

Important Note: As of kubernetes 1.19, the CertificateSigningRequest object expects a signerName.

Please refer the documentation to see an example.

The documentation tab is available at the top right of terminal.
  • CSR: john-developer Status:Approved
  • Role Name: developer, namespace: development, Resource: Pods
  • Access: User 'john' has appropriate permissions

https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/

 

Solution manifest file to create a CSR as follows:

---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUt2Um1tQ0h2ZjBrTHNldlF3aWVKSzcrVVdRck04ZGtkdzkyYUJTdG1uUVNhMGFPCjV3c3cwbVZyNkNjcEJFRmVreHk5NUVydkgyTHhqQTNiSHVsTVVub2ZkUU9rbjYra1NNY2o3TzdWYlBld2k2OEIKa3JoM2prRFNuZGFvV1NPWXBKOFg1WUZ5c2ZvNUpxby82YU92czFGcEc3bm5SMG1JYWpySTlNVVFEdTVncGw4bgpjakY0TG4vQ3NEb3o3QXNadEgwcVpwc0dXYVpURTBKOWNrQmswZWhiV2tMeDJUK3pEYzlmaDVIMjZsSE4zbHM4CktiSlRuSnY3WDFsNndCeTN5WUFUSXRNclpUR28wZ2c1QS9uREZ4SXdHcXNlMTdLZDRaa1k3RDJIZ3R4UytkMEMKMTNBeHNVdzQyWVZ6ZzhkYXJzVGRMZzcxQ2NaanRxdS9YSmlyQmxVQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQ1VKTnNMelBKczB2czlGTTVpUzJ0akMyaVYvdXptcmwxTGNUTStsbXpSODNsS09uL0NoMTZlClNLNHplRlFtbGF0c0hCOGZBU2ZhQnRaOUJ2UnVlMUZnbHk1b2VuTk5LaW9FMnc3TUx1a0oyODBWRWFxUjN2SSsKNzRiNnduNkhYclJsYVhaM25VMTFQVTlsT3RBSGxQeDNYVWpCVk5QaGhlUlBmR3p3TTRselZuQW5mNm96bEtxSgpvT3RORStlZ2FYWDdvc3BvZmdWZWVqc25Yd0RjZ05pSFFTbDgzSkljUCtjOVBHMDJtNyt0NmpJU3VoRllTVjZtCmlqblNucHBKZWhFUGxPMkFNcmJzU0VpaFB1N294Wm9iZDFtdWF4bWtVa0NoSzZLeGV0RjVEdWhRMi80NEMvSDIKOWk1bnpMMlRST3RndGRJZjAveUF5N05COHlOY3FPR0QKLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  usages:
  - digital signature
  - key encipherment
  - client auth
  groups:
  - system:authenticated
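
The long request field above is simply the base64-encoded content of the CSR file provided in the task; assuming it is at /root/CKA/john.csr, it can be generated with:

cat /root/CKA/john.csr | base64 | tr -d "\n"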
  
  
To approve this certificate, run: kubectl certificate approve john-developer
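
After approval, the CSR condition can be checked (it should show Approved,Issued):

kubectl get csr john-developer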

Next, create a role named developer and a rolebinding named developer-role-binding

by running the commands:

$ kubectl create role developer --resource=pods --verb=create,list,get,update,delete \
--namespace=development


$ kubectl create rolebinding developer-role-binding --role=developer --user=john \
--namespace=development

To verify the permission from kubectl utility tool:

$ kubectl auth can-i update pods --as=john --namespace=development
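
The other verbs can be spot-checked the same way; the first command below should return yes and the second no, since the role only exists in the development namespace:

$ kubectl auth can-i delete pods --as=john --namespace=development
$ kubectl auth can-i create pods --as=john --namespace=default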

07. Create an nginx pod called nginx-resolver using image nginx, and

expose it internally with a service called nginx-resolver-service.


Test that you are able to look up the service and pod names from within the cluster.

Use the image busybox:1.28 for the DNS lookup.

Record the results in /root/CKA/nginx.svc and /root/CKA/nginx.pod.


Use kubectl run to create the nginx pod and a busybox pod,

then resolve the nginx service and the nginx pod's name from the busybox pod.


To create a pod nginx-resolver and expose it internally:
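
A minimal way to do this, assuming the service should listen on nginx's default port 80:

kubectl run nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80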

Next, create a pod test-nslookup.

Test that you are able to look up the service and pod names from within the cluster:


kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never \
-- nslookup nginx-resolver-service

kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never \
-- nslookup nginx-resolver-service > /root/CKA/nginx.svc



Get the IP of the nginx-resolver pod and replace the dots (.)
with hyphens (-); that form is used below.


kubectl get pod nginx-resolver -o wide

kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never \
-- nslookup <P-O-D-I-P.default.pod> > /root/CKA/nginx.pod


For example, if the pod IP is 10.50.192.2:

kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never \
-- nslookup 10-50-192-2.default.pod > /root/CKA/nginx.pod

 

08. Create a static pod on node01 called nginx-critical with image nginx and


make sure that it is recreated/restarted automatically in case of a failure.

Use /etc/kubernetes/manifests as the Static Pod path for example.
 
  • static pod configured under /etc/kubernetes/manifests?
  • Pod nginx-critical-node01 is up and running
apiVersion: v1
kind: Pod
metadata:
  name: nginx-critical
spec:
  containers:
  - name: nginx-critical
    image: nginx
  restartPolicy: OnFailure
# Connect to node01 first
ssh node01

Then create the above YAML file under the /etc/kubernetes/manifests path on node01; the kubelet watches that directory and will run it as a static pod.
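
To double-check, confirm the kubelet's static pod path on node01 (the config path below assumes a typical kubeadm setup), then verify the mirror pod from the controlplane; the mirror pod gets the node name appended:

# on node01
grep staticPodPath /var/lib/kubelet/config.yaml

# back on the controlplane
kubectl get pod nginx-critical-node01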
