Deploy Production-grade Kubernetes Cluster using kOps on Amazon Cloud (AWS)
kOps, a.k.a. Kubernetes Operations, is a project that is considered “the easiest way to get a production-grade Kubernetes cluster up and running”. It is a production-grade project that handles the entire life cycle of a Kubernetes cluster, from infrastructure provisioning to upgrades and deletion when needed, and it keeps track of everything: nodes, masters, load balancers, cloud providers, monitoring, networking, logging, etc.
There are other tools, like kubeadm, that bootstrap the master/worker nodes manually; because of that, kubeadm is not considered an ideal option at production level. Let’s go ahead with the deployment part now.
Steps to deploy a Kubernetes cluster using kOps:
1. Amazon Cloud Account: kOps was initially architected to work on Amazon Cloud, and in order to create a cluster using kOps, make sure you have an account on aws.amazon.com.
2. Domain Name: Purchase a domain of your choice from any domain registrar. In this case we are purchasing from godaddy.com.
Note: Since this is a test cluster, we are choosing a domain from GoDaddy. At enterprise level, companies either register/buy the domain through Amazon Route 53 or maintain a separate team that provides the desired domain name.
3. Route 53 DNS: Create a hosted zone on Route 53 with the domain bought at godaddy.com and configure the GoDaddy domain to use the name servers (NS) of the Route 53 hosted zone.
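Note: If you prefer the command line and already have AWS credentials configured (see step 8), the hosted zone can also be created with the AWS CLI; a minimal sketch (the caller reference just needs to be any unique string, so a timestamp works):
# aws route53 create-hosted-zone --name book-for-fun.online --caller-reference $(date +%s)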
Note: Whenever a user accesses the URL book-for-fun.online in a browser, GoDaddy, as the registrar hosting the domain, routes the request to the nameservers listed under the Manage DNS tab. In the previous step, we replaced the GoDaddy nameservers for the domain with the Amazon hosted zone nameservers.
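To confirm that the delegation has taken effect, you can query the domain’s NS records (assuming the dig utility is available on your machine; the output should list the awsdns nameservers from your Route 53 hosted zone):
# dig NS book-for-fun.online +short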
4. Management Server (EC2): Deploy an EC2 instance (Red Hat, Ubuntu, or Amazon Linux AMI) to install the tools that are needed for cluster deployment.
a. EC2 Instance Type: t2.small
b. EBS Volume: 20 GB
c. Enable the following ports in the Security Group: HTTP (80), HTTPS (443), SSH (22).
d. Purchase an Elastic IP and associate it with the EC2 instance. (This step is optional if you are using the server for testing purposes.)
5. kubectl & kOps: The kubectl and kOps tools are needed to create a Kubernetes cluster, so download both tools on the management server.
a. kubectl: The latest version can be found at this link; follow the steps below to install it.
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubectl
# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
b. kOps: The latest version can be found at this link; follow the steps below to install it.
# curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
# chmod +x ./kops
# sudo mv ./kops /usr/local/bin/
# kops version
Version 1.20.1 (git-5a27dad40a703f646433595a2a40cf94a0c43cd5)
6. AWS S3 Bucket: Kubernetes needs some custom configuration to use during cluster creation. When you initiate kubeadm, i.e. kubeadm init, on any machine, it actually downloads the K8s config, the certificates needed for the etcd database, etc., besides installing the cluster control plane components under the /etc/kubernetes path. In a similar way, kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters that you have created, along with their configuration, the keys they are using, etc. This information is stored in an S3 bucket, and S3 permissions are used to control access to it.
Multiple clusters can use the same S3 bucket, and you can grant your colleagues who administer the K8s clusters permission to access it; this is much easier than passing around kubecfg files. But anyone with access to the S3 bucket will have administrative access to all your clusters, so make sure you don’t share it beyond the operations team.
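The bucket itself can be created from the management server with the AWS CLI; a minimal sketch, assuming your credentials are already configured (step 8). Note that regions other than us-east-1 need an explicit LocationConstraint, and enabling versioning is recommended so that cluster state can be recovered:
# aws s3api create-bucket --bucket book-for-fun.online --region us-west-1 --create-bucket-configuration LocationConstraint=us-west-1
# aws s3api put-bucket-versioning --bucket book-for-fun.online --versioning-configuration Status=Enabled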
7. IAM User: Create a user in IAM and give it programmatic access so that an access key and a secret key are generated at the end. Also grant the user full permissions on S3, EC2, Route 53, VPC, and IAM, as shown below.
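If you prefer the CLI over the IAM console, the equivalent setup looks roughly like this (a sketch following the kOps documentation; the group and user names are illustrative):
# aws iam create-group --group-name kops
# aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
# aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
# aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
# aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
# aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
# aws iam create-user --user-name kops
# aws iam add-user-to-group --user-name kops --group-name kops
# aws iam create-access-key --user-name kops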
8. Configure IAM User on Management Server: Run the below steps on the management server.
Note: Download the AWS CLI if it’s not available on the server. In this case, as we are using an Amazon Linux instance, it is available by default.
# aws configure
AWS Access Key ID [None]: AKIAQEN456MXUXFFGGGG3KPSO
AWS Secret Access Key [None]:
C3WkuCvX0Q9I/V6HlpgHdp8ka/I+8UY8v2343tPfXeT
Default region name [None]: us-west-1
Default output format [None]: json
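Before proceeding, you can verify that the credentials work; aws sts get-caller-identity returns the account ID and user ARN of the configured keys:
# aws sts get-caller-identity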
9. ssh-keys: Create SSH keys by running the below command on the management server.
# ssh-keygen -t rsa -b 4096 -C "sundeep-test"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zsc0V1TFAwSE4LDSy9ZsnsURaL1HWPKuWIOjcFJZ/K8 root@ip-172-31-30-195.us-west-1.compute.internal
The key's randomart image is:
+---[RSA 2048]----+
| ..o+o=+oooo+|
| . *+ ++. . ..|
| . =....o. . .|
| + + ooo. . |
| o = *S==.. |
| = +o*oo+ |
| . +o.+ |
| E |
| |
+----[SHA256]-----+
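By default, kops create cluster (step 11) picks up the public key from ~/.ssh/id_rsa.pub for node access. Should you later need to set or rotate the key explicitly, kOps provides a secret command for it; a sketch, assuming the cluster spec already exists in the state store:
# kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub --name book-for-fun.online --state s3://book-for-fun.online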
10. Export AWS Access and Secret keys:
# export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
# export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
11. Run kops command to create cluster from management server: Running the below commands creates the kops cluster configuration. The S3 bucket was empty when it was created; after running these commands you can see that kops uses that bucket to create and store the cluster configuration, certificates, etc.
*** The below command shows the different parameters available for creating a cluster.
# kops create cluster --help
*** Export KOPS_STATE_STORE
# export KOPS_STATE_STORE=s3://book-for-fun.online
*** --name=book-for-fun.online -> Name of the cluster
*** --state=s3://book-for-fun.online -> Name of the S3 bucket created in the previous step
*** --zones=us-west-1a -> Name of the zone where you want to create the cluster
*** --node-count=2 --node-size=t2.micro -> specifies that this cluster needs 2 worker nodes of EC2 type t2.micro
*** --master-size=t2.micro -> By default, the cluster spins up with a single master, and we are specifying EC2 type t2.micro for it
*** --dns-zone=book-for-fun.online -> provide the hosted zone name you have specified on Route 53
# kops create cluster --name=book-for-fun.online --state=s3://book-for-fun.online --zones=us-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=book-for-fun.online
Cluster configuration has been created.
Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster book-for-fun.online
* edit your node instance group: kops edit ig --name=book-for-fun.online nodes-us-west-1a
* edit your master instance group: kops edit ig --name=book-for-fun.online master-us-west-1a
Finally configure your cluster with: kops update cluster --name book-for-fun.online --yes --admin
After running the above commands, kops stores its configuration in the S3 bucket as shown below, and no instances are spun up at this point.
12. kops update: This command creates the cluster instances (1 master + 2 workers) as desired, visible on the AWS EC2 console.
# kops update cluster --name book-for-fun.online --yes --admin
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster --wait 10m
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.book-for-fun.online
* the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
* read about installing addons at: https://kops.sigs.k8s.io/operations/addons.
13. Validate kops cluster: This command validates the following components:
a. All control plane nodes are running and have “Ready” status.
b. All worker nodes are running and have “Ready” status.
c. All control plane nodes have the expected pods.
d. All pods with a critical priority are running and have “Ready” status.
# kops validate cluster
Using cluster from kubectl context: book-for-fun.online
Validating cluster book-for-fun.online
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west-1a Master t2.micro 1 1 us-west-1a
nodes-us-west-1a Node t2.micro 2 2 us-west-1a
NODE STATUS
NAME ROLE READY
ip-172-20-32-75.us-west-1.compute.internal master True
ip-172-20-38-68.us-west-1.compute.internal node True
ip-172-20-39-190.us-west-1.compute.internal node True
Your cluster book-for-fun.online is ready
14. Run kubectl commands to check the containers installed and running after cluster creation:
# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5489b75945-hzjwt 1/1 Running 0 5m11s
kube-system coredns-5489b75945-v6gzj 1/1 Running 0 6m34s
kube-system coredns-autoscaler-6f594f4c58-44lt9 1/1 Running 0 6m34s
kube-system dns-controller-5f86c9c9dc-4wn2w 1/1 Running 0 6m34s
kube-system etcd-manager-events-ip-172-20-32-75.us-west-1.compute.internal 1/1 Running 0 5m48s
kube-system etcd-manager-main-ip-172-20-32-75.us-west-1.compute.internal 1/1 Running 0 6m23s
kube-system kops-controller-gn6xq 1/1 Running 1 6m34s
kube-system kube-apiserver-ip-172-20-32-75.us-west-1.compute.internal 2/2 Running 0 5m55s
kube-system kube-controller-manager-ip-172-20-32-75.us-west-1.compute.internal 1/1 Running 1 6m2s
kube-system kube-proxy-ip-172-20-32-75.us-west-1.compute.internal 1/1 Running 0 6m6s
kube-system kube-proxy-ip-172-20-38-68.us-west-1.compute.internal 1/1 Running 0 4m16s
kube-system kube-proxy-ip-172-20-39-190.us-west-1.compute.internal 1/1 Running 0 4m16s
kube-system kube-scheduler-ip-172-20-32-75.us-west-1.compute.internal 1/1 Running 1 5m55s
15. kube-config: It’s always important to know where your cluster’s kube configuration is saved; by sharing the config with another host/server, you can access the cluster from there.
# cd .kube
# ls
cache config
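For example, to administer the cluster from another workstation, you could copy the file over and point kubectl at it (the host and paths here are illustrative); kops export kubecfg --admin can also regenerate the file from the S3 state store:
# scp /root/.kube/config user@other-host:~/.kube/config
# kubectl --kubeconfig ~/.kube/config get nodes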
16. Deleting Kubernetes Cluster: If you want to delete the cluster, run the below command. It will wipe away all EC2 instances (master + 2 workers), the volumes attached to them, the state store in the S3 bucket, the Route 53 record sets, etc.
# kops delete cluster --name=book-for-fun.online --state=s3://book-for-fun.online --yes
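Tip: running the same command without the --yes flag performs a dry run, listing all the resources that would be removed without actually deleting anything:
# kops delete cluster --name=book-for-fun.online --state=s3://book-for-fun.online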
PROS:
1. A production-grade Kubernetes cluster can be created using kops.
2. Since the Kubernetes control plane has many components, kops takes care of automating their setup during cluster formation.
3. The cluster is hard to break, as the kube-apiserver constantly monitors new changes to the cluster and maintains the cluster state together with the other components.
CONS:
1. Complex to build.
2. Expensive for testing purposes.
Conclusion:
* Once the above steps are configured on the management server, you are good to configure your applications.
* Compared to kubeadm, a Kubernetes cluster setup using kOps is complex and needs a fair amount of AWS knowledge, but it is regarded as a stable way to spin up a production-grade cluster.