
08. Networking - Explore Environment

seulseul 2022. 1. 27. 15:52
LABS – CERTIFIED KUBERNETES ADMINISTRATOR WITH PRACTICE TESTS > NETWORKING
 
08. Networking

01. Explore Environment
02. CNI weave
03. Deploy Network Solution
04. Networking Weave
05. Service Networking
06. CoreDNS in Kubernetes
07. CKA – Ingress Networking – 1
08. CKA – Ingress Networking – 2
 
 

01. How many nodes are part of this cluster?

Including the master and worker nodes.

answer : 2

root@controlplane:~# k get nodes
NAME           STATUS   ROLES                  AGE    VERSION
controlplane   Ready    control-plane,master   2m5s   v1.20.0
node01         Ready    <none>                 87s    v1.20.0

 

02. What is the Internal IP address of the controlplane node in this cluster?

answer : 10.11.149.8
 
k describe node controlplane

Addresses:
  InternalIP:  10.11.149.8

 

03. What is the network interface configured for cluster connectivity on the master node?

This is the interface used for node-to-node communication.

Run the ip a / ip link command and identify the interface.

 

answer : eth0

 

You can just grep for the InternalIP:


root@controlplane:~# ip a | grep -B2 10.11.149.8
196: eth0@if197: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:0b:95:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.11.149.8/24 brd 10.11.149.255 scope global eth0
# solution

Run kubectl get nodes -o wide to see the IP address assigned to the controlplane node.

root@controlplane:~# kubectl get nodes controlplane -o wide

NAME           STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
controlplane   Ready    control-plane,master   4h46m   v1.20.0   10.3.116.12   <none>        Ubuntu 18.04.5 LTS   5.4.0-1041-gcp   docker://19.3.0


In this case, the internal IP address used for node-to-node communication is 10.3.116.12.

Important Note : The result above is just an example,
the node IP address will vary for each lab.

Next, find the network interface to which this IP is assigned 
by making use of the ip a command:

root@controlplane:~# ip a | grep -B2 10.3.116.12

16476: eth0@if16477: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:03:74:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.3.116.12/24 brd 10.3.116.255 scope global eth0
    
root@controlplane:~# 
Here you can see that the interface associated with this IP is eth0 on the host.

 

04. What is the MAC address of the master node?

answer : 02:42:0a:0b:95:08
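
This is the link/ether value of eth0. To read it directly (a quick check; the same output appears in the full ip link listing further below):

root@controlplane:~# ip link show eth0
196: eth0@if197: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:0a:0b:95:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0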

 

05. What is the IP address assigned to node01?

 

answer : 10.11.149.10

root@controlplane:~# k get nodes node01 -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node01   Ready    <none>   20m   v1.20.0   10.11.149.10   <none>        Ubuntu 18.04.5 LTS   5.4.0-1062-gcp   docker://19.3.0

06. What is the MAC address assigned to node01?

 

answer : 02:42:0a:0b:95:04

root@controlplane:~# arp node01
Address                  HWtype  HWaddress           Flags Mask            Iface
10.11.149.9              ether   02:42:0a:0b:95:04   C                     eth0
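
Note that arp reads the MAC from the control plane's ARP cache. Alternatively, assuming the lab allows SSH into node01, you could read it from the node itself:

root@controlplane:~# ssh node01 ip link show eth0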

07. We use Docker as our container runtime. What is the interface/bridge created by Docker on this host?

 

answer : docker0
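
One way to list only the bridge interfaces on the host (iproute2 can filter links by type):

root@controlplane:~# ip link show type bridge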

 

08. What is the state of the interface docker0?

 

answer : DOWN

 

root@controlplane:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:29:f3:6a:1d brd ff:ff:ff:ff:ff:ff
    
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether a6:3e:a5:72:12:30 brd ff:ff:ff:ff:ff:ff
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:58:90:03:52:94 brd ff:ff:ff:ff:ff:ff
5: veth0fad46a2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 4e:f8:48:66:d2:26 brd ff:ff:ff:ff:ff:ff link-netnsid 2
6: vethd19df413@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 7e:f8:d1:73:0b:75 brd ff:ff:ff:ff:ff:ff link-netnsid 3
280: eth1@if281: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:19:00:1f brd ff:ff:ff:ff:ff:ff link-netnsid 1
196: eth0@if197: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:0a:0b:95:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0

 

09. If you were to ping google from the master node, which route does it take?

What is the IP address of the Default Gateway?

answer : 172.25.0.1

ip route show default


root@controlplane:~# ip route show default
default via 172.25.0.1 dev eth1

 

10. What is the port the kube-scheduler is listening on in the controlplane node?

 

answer : 10259

root@controlplane:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      5008/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      6259/kube-proxy     
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      4190/etcd           
tcp        0      0 10.11.149.8:2379        0.0.0.0:*               LISTEN      4190/etcd           
tcp        0      0 127.0.0.11:34667        0.0.0.0:*               LISTEN      -                   
tcp        0      0 10.11.149.8:2380        0.0.0.0:*               LISTEN      4190/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      4190/etcd           
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      759/ttyd            
tcp        0      0 127.0.0.1:39153         0.0.0.0:*               LISTEN      5008/kubelet        
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      4205/kube-controlle 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      4341/kube-scheduler 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      494/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      775/sshd            
tcp6       0      0 :::10250                :::*                    LISTEN      5008/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      4016/kube-apiserver 
tcp6       0      0 :::10256                :::*                    LISTEN      6259/kube-proxy     
tcp6       0      0 :::22                   :::*                    LISTEN      775/sshd            
tcp6       0      0 :::8888                 :::*                    LISTEN      5173/kubectl
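
To pick out the scheduler line without scanning the whole table, a simple grep on the same output works:

root@controlplane:~# netstat -nplt | grep scheduler
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      4341/kube-scheduler 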

 

11. Notice that ETCD is listening on two ports. Which of these have more client connections established?

 

answer : 2379

 

root@controlplane:~# k describe pod/etcd-controlplane -n kube-system
Name:                 etcd-controlplane
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 controlplane/10.11.149.8
Start Time:           Thu, 27 Jan 2022 06:41:33 +0000
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.11.149.8:2379
                      kubernetes.io/config.hash: 9122f7dcfb7025e74e0009b716d4c7ce
                      kubernetes.io/config.mirror: 9122f7dcfb7025e74e0009b716d4c7ce
                      kubernetes.io/config.seen: 2022-01-27T06:41:31.135293910Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   10.11.149.8
IPs:
  IP:           10.11.149.8
Controlled By:  Node/controlplane
Containers:
  etcd:
    Container ID:  docker://f48a60f1532b7476d2fd1f769b52266b929a7fac1665318def09e2edb6adbf87
    Image:         k8s.gcr.io/etcd:3.4.13-0
    Image ID:      docker-pullable://k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://10.11.149.8:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://10.11.149.8:2380
      --initial-cluster=controlplane=https://10.11.149.8:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://10.11.149.8:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://10.11.149.8:2380
      --name=controlplane
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 27 Jan 2022 06:41:11 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:                100m
      ephemeral-storage:  100Mi
      memory:             100Mi
    Liveness:             http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:              http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:          <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:            <none>
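
The describe output confirms etcd listens for clients on 2379 and for peers on 2380, but it does not show live connections. To actually count established connections on each port, something like this works (a sketch; the exact counts vary per lab):

root@controlplane:~# netstat -anp | grep etcd | grep 2379 | wc -l
root@controlplane:~# netstat -anp | grep etcd | grep 2380 | wc -l

The count for 2379 comes out far higher, since every control plane component connects to etcd on that port.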

 

12. Correct!

That's because 2379 is the port of etcd to which all control plane components connect.

2380 is only for etcd peer-to-peer connectivity, which matters when you have multiple master nodes. In this case we don't.