alias k=kubectl
complete -F __start_kubectl k
01.
Create a new service account with the name pvviewer.
Grant this Service account access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image: redis and serviceAccount: pvviewer in the default namespace.
- ServiceAccount: pvviewer
- ClusterRole: pvviewer-role
- ClusterRoleBinding: pvviewer-role-binding
- Pod: pvviewer
- Pod configured to use ServiceAccount pvviewer ?
# Create the service account
k create sa pvviewer
---
# ClusterRole
kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
# yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: pvviewer-role
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["list"]
# ClusterRoleBinding
# >> the service account must be given as namespace:name (default:pvviewer)!!
kubectl create clusterrolebinding pvviewer-role-binding \
--clusterrole=pvviewer-role --serviceaccount=default:pvviewer
---
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  serviceAccountName: pvviewer
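To check that the RBAC objects work together, impersonation with kubectl auth can-i should print yes once the role and binding are in place (the --as string follows the standard system:serviceaccount:<namespace>:<name> format):
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer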
02.
List the InternalIP of all nodes of the cluster.
Save the result to a file /root/CKA/node_ips.
Answer should be in the format:
InternalIP of controlplane<space>InternalIP of node01 (in a single line)
- Task Completed
root@controlplane:~# k describe nodes node01 | grep -i ip
InternalIP: 10.14.131.3
root@controlplane:~# vi ip
root@controlplane:~# k describe nodes controlplane | grep -i ip
InternalIP: 10.14.131.12
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
# /root/CKA/node_ips should contain: 10.14.131.12 10.14.131.3 (controlplane, then node01, on a single line)
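To save the result in the required single-line format, the same jsonpath output can be redirected into the file named in the task:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips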
03.
Create a pod called multi-pod with two containers.
Container 1: name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800
Environment Variables:
Container 1: name: alpha
Container 2: name: beta
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
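A quick sanity check of the per-container environment variables (env is available in both the nginx and busybox images):
kubectl exec multi-pod -c alpha -- env | grep name
kubectl exec multi-pod -c beta -- env | grep name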
04.
Create a Pod called non-root-pod, image: redis:alpine
runAsUser: 1000
fsGroup: 2000
- Pod non-root-pod fsGroup configured
- Pod non-root-pod runAsUser configured
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: non-root-pod
    image: redis:alpine
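To confirm the security context took effect, the reported uid should be 1000 (assuming the id applet is available, which busybox in the alpine-based image provides):
kubectl exec non-root-pod -- id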
05.
We have deployed a new pod called np-test-1 and a service called np-test-service.
Incoming connections to this service are not working. Troubleshoot and fix it.
Create a NetworkPolicy, by the name ingress-to-nptest, that allows incoming connections to the service over port 80.
Important: Don't delete any current objects deployed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
# Test the connection against the service ClusterIP from the lab:
curl http://10.108.1.146:80
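Alternatively, a throwaway pod can exercise the service by name (the pod name test-np and the busybox wget call are illustrative, not part of the graded solution):
kubectl run test-np --image=busybox --rm -it --restart=Never -- wget -qO- -T 2 http://np-test-service:80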
06.
Taint the worker node node01 to be Unschedulable.
Once done, create a pod called dev-redis, image redis:alpine,
to ensure workloads are not scheduled to this worker node.
Finally, create a new pod called prod-redis and image: redis:alpine with toleration to be scheduled on node01.
key: env_type, value: production, operator: Equal and effect: NoSchedule
- Key = env_type
- Value = production
- Effect = NoSchedule
- pod 'dev-redis' (no tolerations) is not scheduled on node01?
- Create a pod 'prod-redis' to run on node01
# To add taints on the node01 worker node:
kubectl taint node node01 env_type=production:NoSchedule
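To confirm the taint was applied before moving on:
kubectl describe node node01 | grep -i taint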
# Now, deploy dev-redis pod and to ensure that workloads are not scheduled
# to this node01 worker node.
kubectl run dev-redis --image=redis:alpine
# To view the node name of recently deployed pod:
kubectl get pods -o wide
# Solution manifest file to deploy new pod called prod-redis with toleration
# to be scheduled on node01 worker node.
---
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: prod-redis
    image: redis:alpine
  tolerations:
  - effect: NoSchedule
    key: env_type
    operator: Equal
    value: production
# To view only prod-redis pod with less details:
kubectl get pods -o wide | grep prod-redis
Reference: Taints and Tolerations (kubernetes.io):
https://kubernetes.io/ko/docs/concepts/scheduling-eviction/taint-and-toleration/
07.
Create a pod called hr-pod in the hr namespace belonging
to the production environment and frontend tier.
image: redis:alpine
Use appropriate labels and create all the required objects
if they do not exist in the system already.
- hr-pod labeled with environment production?
- hr-pod labeled with tier frontend?
Create a namespace if it doesn't exist:
kubectl create namespace hr
and then create the hr-pod with the given details (in the hr namespace):
kubectl run hr-pod --image=redis:alpine -n hr -l environment=production,tier=frontend
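To confirm both labels on the pod:
kubectl get pod hr-pod -n hr --show-labels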
08.
A kubeconfig file called super.kubeconfig has been created under /root/CKA.
There is something wrong with the configuration.
Troubleshoot and fix it.
- Fix /root/CKA/super.kubeconfig
Verify that the host and port for kube-apiserver are correct.
Open super.kubeconfig in the vi editor,
change the port 9999 to 6443, and run the command below to verify:
kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
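One quick way to spot the wrong port is to look at the server entry directly (grep -n just prints the matching line with its line number):
grep -n "server:" /root/CKA/super.kubeconfig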
09.
We have created a new deployment called nginx-deploy.
Scale the deployment to 3 replicas.
Have the replicas increased? Troubleshoot the issue and fix it.
- deployment has 3 replicas
Use the command kubectl scale to increase the replica count to 3.
kubectl scale deploy nginx-deploy --replicas=3
The controller-manager is responsible for scaling up pods of a replicaset.
If you inspect the control plane components in the kube-system namespace,
you will see that the controller-manager is not running.
kubectl get pods -n kube-system
The command configured in the controller-manager static pod manifest is incorrect:
the binary name is misspelled as kube-contro1ler-manager (with a digit 1).
Fix all occurrences in the file and wait
for the controller-manager pod to restart.
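To locate the misspelling before editing (grep -n prints matching lines with their line numbers):
grep -n kube-contro /etc/kubernetes/manifests/kube-controller-manager.yaml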
Alternatively, you can run sed command to change all values at once:
sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' \
/etc/kubernetes/manifests/kube-controller-manager.yaml
This will fix the issues in the controller-manager manifest file.
Finally, inspect the deployment using the command below:
kubectl get deploy