CKA (Certified Kubernetes Administrator)/Kode Kloud

09. Install - Cluster Installation using Kubeadm

seulseul 2022. 2. 3. 15:32

 

09. Install

 

01. Install the kubeadm package on the controlplane and node01.

Use the exact version of 1.21.0-00


 
Update the apt package index and install the packages needed to use the Kubernetes apt repository.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg \
https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository.

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] \
https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

For this lab the packages must be pinned to the exact 1.21.0-00 version, so the install step becomes:

sudo apt-get update
sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
sudo apt-mark hold kubelet kubeadm kubectl
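
A quick sanity check that the pinned versions are installed and held (the output shown is what 1.21.0 should report; formatting may differ slightly):

kubeadm version -o short
# v1.21.0
apt-mark showhold
# kubeadm
# kubectl
# kubelet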


Scratch notes from the lab session (the IP addresses, token, and hash below are specific to that session):

kubeadm init --kubernetes-version=v1.21.0

kubeadm join 172.25.0.63:6443 --token n5coby.rqa7zm8c1qp2s9ui \
        --discovery-token-ca-cert-hash sha256:ed185abbc0bd2a4f01b65e3458ac3b7e738799f52b8ab00ebc9312f01be44f13

kubeadm join 10.10.244.9:6443 --token n5coby.rqa7zm8c1qp2s9ui \
        --discovery-token-ca-cert-hash sha256:ed185abbc0bd2a4f01b65e3458ac3b7e738799f52b8ab00ebc9312f01be44f13 \
        --ignore-preflight-errors=SystemVerification
 


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join node01:6443 --token hztb8g.wob54573a8ccemin \
        --discovery-token-ca-cert-hash sha256:fb67d16e9bface421ce1d1e29286a9144521a186fc8142dc987d894546b56928

 

02. What is the version of kubelet installed?

 

root@controlplane:~# kubelet --version
Kubernetes v1.21.0
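
kubeadm and kubectl can be checked the same way; on this cluster both should also report 1.21.0 (exact output formatting may vary):

kubeadm version -o short
kubectl version --client --short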

 

03. How many nodes are part of the Kubernetes cluster currently?

Are you able to run kubectl get nodes?

Answer: 0
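
Since the cluster has not been initialized yet there is no API server to connect to, so kubectl fails with a connection error along these lines (exact wording may differ):

root@controlplane:~# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?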

 

04. Let's now bootstrap a Kubernetes cluster using kubeadm.

 

The latest version of Kubernetes will be installed.

 

05. Initialize Control Plane Node (Master Node). Use the following options:

  1. apiserver-advertise-address - Use the IP address allocated to eth0 on the controlplane node

  2. apiserver-cert-extra-sans - Set it to controlplane

  3. pod-network-cidr - Set to 10.244.0.0/16

Once done, set up the default kubeconfig file and wait for the node to become part of the cluster.

 
  • Master node initialized

 

Run:

kubeadm init --apiserver-cert-extra-sans=controlplane \
--apiserver-advertise-address 10.2.223.3 --pod-network-cidr=10.244.0.0/16

The IP address used here is just an example. It will change for your lab session. 

Make sure to check the IP address allocated to eth0 by running:

root@controlplane:~# ifconfig eth0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.223.3  netmask 255.255.255.0  broadcast 10.2.222.255
        ether 02:42:0a:02:de:0a  txqueuelen 0  (Ethernet)
        RX packets 6223  bytes 769785 (769.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5913  bytes 1483419 (1.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@controlplane:~#
In this example, the IP address is 10.2.223.3
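
If ifconfig is not available, the same address can be read with iproute2 (assuming the interface is still eth0, as in this lab):

ip -4 addr show eth0
# look for the "inet" line, e.g. inet 10.2.223.3/24 ...
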
Once you run the init command, you should see an output similar to below:

[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.223.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [10.2.223.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [10.2.223.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 85.004816 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: gtmdad.olx54xrbafcionbd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.223.3:6443 --token gtmdad.olx54xrbafcionbd \
        --discovery-token-ca-cert-hash sha256:fb08c01c782ef1d1ad0b643b56c9edd6a864b87cff56e7ff35713cd666659ff4 
Once the command has been run successfully, set up the kubeconfig:

root@controlplane:~# mkdir -p $HOME/.kube
root@controlplane:~#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@controlplane:~#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@controlplane:~#
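
At this point kubectl works from the control plane. The node will typically show NotReady until a pod network add-on is installed in a later step (sample output; ages and exact roles will vary):

root@controlplane:~# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
controlplane   NotReady   control-plane,master   2m    v1.21.0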

 

07. Generate a kubeadm join token

 

Or copy the one that was generated by the kubeadm init command.

Ok

08. Join node01 to the cluster using the join token

 

Use the join token provided by the kubeadm command or create a new token.

- Node01 joined the cluster?

 

To create token:

root@controlplane:~# kubeadm token create --print-join-command
kubeadm join 10.2.223.3:6443 --token 50pj4l.0cy7m2e1jlfmvnif --discovery-token-ca-cert-hash sha256:fb08c01c782ef1d1ad0b643b56c9edd6a864b87cff56e7ff35713cd666659ff4 
root@controlplane:~#
Next, run the join command on node01:

root@node01:~# kubeadm join 10.2.223.3:6443 --token 50pj4l.0cy7m2e1jlfmvnif --discovery-token-ca-cert-hash sha256:fb08c01c782ef1d1ad0b643b56c9edd6a864b87cff56e7ff35713cd666659ff4
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@node01:~#
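
Back on the control plane, node01 should now be listed. Both nodes may show NotReady until the network plugin from the next step is applied (sample output; ages will differ):

root@controlplane:~# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
controlplane   NotReady   control-plane,master   15m   v1.21.0
node01         NotReady   <none>                 1m    v1.21.0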

 

08. Install a Network Plugin. As a default, we will go with flannel


Refer to the official documentation for the procedure

 

  • Network Plugin deployed?

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
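
To confirm the plugin rolled out, check the flannel pods and the node status. Depending on the manifest revision the DaemonSet lands in kube-system or in its own kube-flannel namespace, so searching all namespaces is the safest check (output is indicative only):

kubectl get pods -A | grep flannel
kubectl get nodes
# both nodes should move to Ready once the flannel pods are Running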