Install Kubernetes (1 Master & 1 Worker) using kubeadm on AWS

Sundeep Nagumalli
Jun 10, 2021 · 4 min read

Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters.

Steps to create Kubernetes Cluster on AWS:
1. Deploy Master & Worker EC2 Instance (Ubuntu 20.04, 64-bit) with below specs.
a. EC2 Instance Type:
t2.medium for the master (Kubernetes suggests 2 vCPUs as the minimum requirement for the master, and the t2.medium instance type meets this requirement). The worker node can be any EC2 instance type (e.g. t2.small).
b. Bootstrap the EC2 instance with the below code at the "Configure Instance Details" step.

#!/bin/bash
# Install Docker using the convenience script
curl https://get.docker.com | bash
# Kubernetes requires swap to be disabled
swapoff -a
# Add the Kubernetes apt repository and install kubelet, kubeadm and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Hold the packages so unattended upgrades don't change the cluster version
sudo apt-mark hold kubelet kubeadm kubectl
Embed the above code in the User data field and proceed to the next step.

c. EBS Volume: 20 GB
d. Enable the following ports in the Security Group inbound rules (a CLI sketch for these rules follows after this list):
For the Master node: Custom TCP (6443) for kube-apiserver, Custom TCP (2379-2380) for etcd, Custom TCP (10250) for kubelet API, Custom TCP (10251) for kube-scheduler, Custom TCP (10252) for kube-controller-manager, SSH (22).
For the Worker node: SSH (22), Custom TCP (10250) for kubelet API.
e. Name the EC2 instances Master and Worker for identification.
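As a rough sketch, the Master node rules above could be added from the AWS CLI as below. The security group ID and the VPC CIDR 172.31.0.0/16 are placeholders; adjust them for your environment and add the Worker node rules in the same way.

# Placeholder security group ID; replace with your own
SG=sg-0123456789abcdef0
# kube-apiserver
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 6443 --cidr 172.31.0.0/16
# etcd
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 2379-2380 --cidr 172.31.0.0/16
# kubelet API, kube-scheduler, kube-controller-manager
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 10250-10252 --cidr 172.31.0.0/16
# SSH
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 22 --cidr 0.0.0.0/0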

Now both the master and worker instances are created with docker, kubelet, kubeadm and kubectl installed through the bootstrap script from Step 1.
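For reference, launching one of these instances non-interactively with the same user data could look roughly like the AWS CLI sketch below. The AMI ID, key pair, subnet and security group IDs are placeholders, and bootstrap.sh is the user-data script above saved to a local file.

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.medium \
  --key-name my-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
  --user-data file://bootstrap.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Master}]'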

Note:
1. Since the recommended cgroup driver is "systemd", make sure to set Docker's cgroup driver to "systemd" using the command below. Run it on both the Master and Worker nodes.

# cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

2. Restart Docker on all nodes before initializing the cluster with kubeadm.

# systemctl restart docker
# docker info | grep -i cgroup
Cgroup Driver: systemd

2. Initialize the control plane node (Master) using kubeadm:
Pass any pod network CIDR block of your choice; in this case we are choosing 10.244.0.0/16.

# kubeadm init --pod-network-cidr="10.244.0.0/16"

kubeadm init first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes. After it finishes you should see:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.2.17:6443 --token 162s2e.xsn4zcd363058mym \
--discovery-token-ca-cert-hash sha256:acd5af6015782b86964431bb599207f7c128c920b2ba1f20d6bf08e56e06771e

Run the below commands after the kubeadm init command completes successfully.

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
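To confirm that kubectl can now reach the API server, a quick sanity check is:

# kubectl cluster-info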

Note:
1. You can now observe the control plane pods (kube-apiserver, etcd, kube-controller-manager, kube-scheduler, kube-proxy) in Running state as below:

# kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-9dk8g                   0/1     Pending   0          20s
kube-system   coredns-558bd4d5db-wpz6f                   0/1     Pending   0          20s
kube-system   etcd-ip-172-31-2-17                        1/1     Running   0          34s
kube-system   kube-apiserver-ip-172-31-2-17              1/1     Running   0          33s
kube-system   kube-controller-manager-ip-172-31-2-17     1/1     Running   0          33s
kube-system   kube-proxy-zr5xf                           1/1     Running   0          20s
kube-system   kube-scheduler-ip-172-31-2-17              1/1     Running   0          34s

For the coredns pods to move from Pending to Running state, configure a network plugin. (You can configure any network plugin of your choice.)

3. Configure Network Plugin:

# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# kubectl apply -f calico.yaml

Note:
a. The coredns pods will change to Running state within a few minutes.
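To watch the coredns pods (and the Calico pods) come up, you can leave a watch running and stop it with Ctrl-C once everything reports Running:

# kubectl get pods -n kube-system -w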

4. Bootstrap Worker Nodes:

Run the kubeadm join command generated in Step 2 on the worker node.

# kubeadm join 172.31.2.17:6443 --token 162s2e.xsn4zcd363058mym \
--discovery-token-ca-cert-hash sha256:acd5af6015782b86964431bb599207f7c128c920b2ba1f20d6bf08e56e06771e

Once this step is complete, the worker node has joined the master and the two nodes form a working cluster.
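If the join command from Step 2 was lost, or its token has expired (tokens are valid for 24 hours by default), a fresh join command can be printed on the master with:

# kubeadm token create --print-join-command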

Note:
a. kubeadm join will not work if the ports are not opened as described in Step 1. During the join, the worker node communicates with the kube-apiserver; if the required ports are not open on the Master and Worker nodes, the request times out.
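As a quick connectivity check from the worker, you can probe the API server port on the master's private IP (172.31.2.17 in this example). Any HTTP response means the port is reachable; a timeout points to the security group rules.

# curl -k https://172.31.2.17:6443/version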

5. Verify Worker bootstrap in Master node:

# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-2-17     Ready    control-plane,master   8m28s   v1.21.1
ip-172-31-25-193   Ready    <none>                 5m17s   v1.21.1
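If the worker shows NotReady here, the CNI pods are usually still starting; the node conditions and the kube-system pods help narrow it down:

# kubectl describe node ip-172-31-25-193
# kubectl get pods -n kube-system -o wide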

6. Label Worker Node:

# kubectl label node ip-172-31-25-193 node-role.kubernetes.io/worker=
# kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-2-17     Ready    control-plane,master   9m40s   v1.21.1
ip-172-31-25-193   Ready    worker                 6m29s   v1.21.1
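As a final smoke test (the nginx image here is just an example), schedule a pod and confirm it lands on the worker node:

# kubectl create deployment nginx --image=nginx
# kubectl get pods -o wide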

In this way, a Kubernetes cluster is created using the kubeadm command.

PROS:
1. Using kubeadm to spin up a cluster is one of the easiest ways to deploy a cluster for testing purposes.
2. You can join any number of worker nodes by following the steps above.
3. Useful for CKA certification preparation.

CONS:
1. Involves manual setup for creating a cluster.
2. Not considered a production-grade cluster, as no auto scaling is set up.
