Set Up a Kubernetes Cluster Using kubeadm [Multi-Node]
Summary
TL;DR: This tutorial provides a step-by-step guide on setting up a Kubernetes cluster using kubeadm, with a focus on the importance of hands-on experience for DevOps engineers. The video walks through deploying virtual machines, installing key Kubernetes components, and configuring the cluster control plane and worker nodes. The presenter also emphasizes using a self-hosted cluster for learning and certification preparation. Additionally, the tutorial covers key prerequisites, metrics server setup, and deploying an NGINX app, offering valuable insights into managing Kubernetes clusters efficiently.
Takeaways
- 🚀 Kubernetes setup tutorial using kubeadm, focusing on creating a multi-node cluster for real-world project simulation.
- 🔗 Links to necessary documentation and GitHub repository are provided in the description, with a blog for the latest updates.
- 🛠️ kubeadm simplifies Kubernetes cluster setup, following best practices and providing hands-on experience with system complexities.
- 📚 Self-hosted Kubernetes clusters offer valuable learning for DevOps engineers, especially for certification exams like CKA and CKS.
- 💻 Prerequisites include two or more virtual machines (VMs), static IPs, and sufficient CPU/RAM for both master and worker nodes.
- 📶 Ensure nodes can communicate on required ports and allow proper routing between subnets to avoid IP conflicts.
- 🛑 Swap must be disabled on all nodes, and nodes should have CRI-O as the container runtime and kubeadm, kubelet, and kubectl installed.
- 📄 Scripts provided automate setup tasks for both common node and master node configurations, making the process faster.
- 📊 After setting up the cluster, install the Kubernetes metric server to track CPU and memory usage across nodes and pods.
- 🌐 Final validation of the cluster includes deploying an NGINX app and verifying access via NodePort, ensuring the setup is successful.
Q & A
What is the purpose of using kubeadm in Kubernetes cluster setup?
-Kubeadm is used to simplify the process of setting up a working Kubernetes cluster. It follows best practices for configuring the cluster components, making it faster and easier to deploy Kubernetes clusters.
Why is it recommended to use a self-hosted Kubernetes cluster for learning purposes?
-A self-hosted Kubernetes cluster provides hands-on experience and exposes learners to the complexities of managing a cluster. This deeper understanding of the control plane and worker node components is essential for DevOps engineers and is especially useful when preparing for certifications like CKA or CKS.
What are the prerequisites for setting up a Kubernetes cluster using kubeadm?
-The prerequisites include having at least two nodes: one master node and one worker node. The master node should have a minimum of 2 vCPUs and 2GB of RAM, while worker nodes require at least 1 vCPU and 2GB of RAM. Additionally, nodes should have an IP range in the 10.x or 172.x series with static IPs.
What is the significance of using the Calico Network plugin in this setup?
-The Calico Network plugin is used to enable pod networking in the Kubernetes cluster. It ensures that there are non-overlapping node and pod IP addresses to avoid any routing conflicts, allowing the nodes and pods to communicate efficiently.
What is the purpose of running the 'common.sh' script on all nodes?
-The 'common.sh' script installs necessary components like the container runtime (CRI-O), kubelet, kubectl, and kubeadm. It also disables swap and sets up kubelet's extra arguments to ensure the correct IPs are used in multi-IP environments.
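As an illustrative fragment, the node-IP step mentioned above might look like the minimal sketch below. The IP value and output path are mocked for illustration; the real script derives the IP from the node's interfaces and writes to /etc/default/kubelet.

```shell
#!/usr/bin/env bash
# Sketch of the kubelet extra-args step from common.sh (values mocked for illustration).
local_ip="10.0.0.4"                      # real script: picked from the node's interfaces
extra_args_file="./kubelet-extra-args"   # real script writes /etc/default/kubelet
# kubelet reads this file on startup; --node-ip pins it to the intended interface
echo "KUBELET_EXTRA_ARGS=--node-ip=${local_ip}" > "$extra_args_file"
cat "$extra_args_file"
```

Pinning `--node-ip` matters on cloud VMs with multiple addresses, where kubelet may otherwise advertise the wrong one to the API server.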
What does the 'master.sh' script do on the master node?
-The 'master.sh' script sets up the control plane components by initializing kubeadm, configuring networking with the pod CIDR, pulling control plane images, and starting the kubelet service. It also sets up Calico for networking and allows the API server to be accessed via public or private IPs.
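The public-vs-private endpoint branching described above can be sketched in plain bash. Variable names, IPs, and the `--apiserver-cert-extra-sans` flag are assumptions for illustration; the real script derives the private IP from `ip addr` and the public IP from ifconfig.me, and this sketch only prints the command instead of running it.

```shell
#!/usr/bin/env bash
# Minimal sketch of master.sh's endpoint selection (names and IPs are assumptions).
PUBLIC_IP_ACCESS="false"
NODENAME="k8s-master"
POD_CIDR="192.168.0.0/16"

if [ "$PUBLIC_IP_ACCESS" = "false" ]; then
    MASTER_PRIVATE_IP="10.0.0.10"   # real script: parsed from `ip addr`
    INIT_CMD="kubeadm init --apiserver-advertise-address=$MASTER_PRIVATE_IP \
--pod-network-cidr=$POD_CIDR --node-name=$NODENAME"
else
    MASTER_PUBLIC_IP="$(curl -s ifconfig.me)"   # public-endpoint path
    INIT_CMD="kubeadm init --control-plane-endpoint=$MASTER_PUBLIC_IP \
--pod-network-cidr=$POD_CIDR --node-name=$NODENAME"
fi
echo "$INIT_CMD"   # printed, not executed, in this sketch
```

Only the endpoint flag changes between the two branches: `--apiserver-advertise-address` for a private IP, `--control-plane-endpoint` for a public one.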
What is the process for adding worker nodes to the Kubernetes cluster?
-Worker nodes are added to the cluster by running the 'kubeadm join' command on the worker nodes. This command, generated during the master node setup, allows the worker nodes to connect to the control plane. The TLS certificates required for secure communication between the master and worker nodes are automatically created.
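The join command the answer refers to has a fixed shape; every value in this sketch is a placeholder, and the comment shows how to regenerate it if the original output was lost.

```shell
#!/usr/bin/env bash
# Shape of the join command printed at the end of `kubeadm init`
# (every value below is a placeholder, not a real token or hash).
# If you lost it, regenerate on the master with:
#   kubeadm token create --print-join-command
JOIN_CMD="kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>"
echo "$JOIN_CMD"
```

The CA cert hash lets the joining node verify it is talking to the right control plane before TLS bootstrapping begins.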
Why is it necessary to install the Kubernetes metrics server?
-The metrics server is required to collect and store resource usage data (CPU and memory) from each node in the cluster. Without it, commands like 'kubectl top' would return errors, making it difficult to monitor the performance of the cluster and its components.
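For the lab-only workaround used later in the video (kubeadm's self-signed kubelet certificates), the metrics-server container args are typically patched roughly like this Deployment fragment. This is a sketch, not the full manifest, and the insecure flag should not be carried into production:

```yaml
# Fragment of the metrics-server Deployment (lab use only).
spec:
  containers:
    - name: metrics-server
      args:
        - --kubelet-insecure-tls        # skip kubelet cert verification; labs only
        - --kubelet-preferred-address-types=InternalIP
```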
How can you validate that the Kubernetes cluster is working properly after setup?
-Validation is done by deploying a sample application (such as nginx) and exposing it using a NodePort service. Accessing the application from the public or private IP of the worker nodes on the specified port (e.g., 32000) confirms that the cluster is functioning correctly.
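A minimal manifest for this validation step might look like the following sketch (object names and labels are illustrative; the NodePort 32000 matches the one used in the video):

```yaml
# Sketch of the validation app: an nginx Deployment exposed on NodePort 32000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32000   # must fall in the 30000-32767 NodePort range
```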
How can you manage the Kubernetes cluster from your local workstation?
-To manage the cluster from your local workstation, you need to copy the 'admin.conf' file from the master node to your local machine’s '.kube' directory. This file contains the API server endpoint and authentication details, allowing kubectl commands to interact with the cluster remotely.
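The local-workstation wiring can be sketched as below. A temp directory stands in for `$HOME` so the sketch is safe to run anywhere; on a real workstation the target is `~/.kube/config`, and the actual copy step (shown as a comment) uses scp against your master's IP.

```shell
#!/usr/bin/env bash
# Sketch of pointing a local kubectl at the cluster (temp dir stands in for $HOME).
demo_home="$(mktemp -d)"
mkdir -p "$demo_home/.kube"
# Real step (not run here):
#   scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config
# Simulate the result with a placeholder so the destination layout is visible:
printf 'apiVersion: v1\nkind: Config\n' > "$demo_home/.kube/config"
ls "$demo_home/.kube"
```

Once the real admin.conf is in place, kubectl picks it up automatically from `~/.kube/config`; no further flags are needed.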
Outlines
👋 Introduction to Setting Up a Kubernetes Cluster with kubeadm
The video introduces viewers to setting up a Kubernetes cluster using the kubeadm utility. The speaker highlights the importance of gaining hands-on experience with a self-hosted Kubernetes cluster instead of using tools like Kind or Minikube, which are better suited for development purposes. A self-hosted Kubernetes cluster provides valuable learning opportunities, particularly for those preparing for certifications such as CKA or CKS. The prerequisites for following the tutorial include having at least two nodes, proper IP ranges, and specific ports open for communication between nodes. The video emphasizes that all resources and scripts are available in the linked GitHub repository.
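The specific ports are not enumerated in this outline; per the upstream Kubernetes/kubeadm documentation, the usual defaults are captured in this sketch (verify against your version and CNI plugin):

```shell
#!/usr/bin/env bash
# Default ports a kubeadm cluster needs open (per upstream Kubernetes docs;
# verify for your version and CNI plugin).
control_plane_ports="6443 2379-2380 10250 10257 10259"   # apiserver, etcd, kubelet, controller-manager, scheduler
worker_ports="10250 30000-32767"                         # kubelet, NodePort services
echo "control plane: $control_plane_ports"
echo "workers:       $worker_ports"
```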
⚙️ Deploying Virtual Machines and Configuring Cluster Setup
This section covers the steps for deploying virtual machines and setting up the cluster using shell scripts. The user is instructed to configure the VMs using Terraform and provided with steps to modify the main Terraform file to suit their environment. The speaker explains the common.sh script, which sets the Kubernetes version, installs CRI-O as the container runtime, and configures kubelet on all nodes. Viewers can run these scripts to simplify the process or manually execute the commands to set up the cluster. This part sets the foundation for the control plane setup on the master node.
🚀 Initializing the Control Plane and Joining Worker Nodes
This paragraph explains the steps for initializing the master node's control plane using the master.sh script. The script configures key variables, installs necessary images, and sets up the Kubernetes control plane. The speaker explains the differences when using public vs. private IP addresses for the Kubernetes API server. Once the master node is initialized, a kubeconfig file and a join command for worker nodes are generated. The next step is to join the worker nodes to the master node using kubeadm join, allowing worker nodes to authenticate and join the cluster.
📊 Verifying the Cluster and Deploying the Metric Server
After joining the worker nodes, the speaker demonstrates how to verify that the cluster components are running smoothly by checking the pods in the kube-system namespace. The Kubernetes API server endpoints are tested to ensure everything is working correctly. To collect and expose resource usage data (CPU, memory) from nodes, the metric server must be installed separately. The speaker explains the deployment of the metric server and how it can be used to access pod and node metrics. The importance of TLS certificates in production environments is also briefly mentioned.
🌐 Deploying a Sample Nginx App and Accessing the Cluster
The final section covers deploying a sample Nginx application and exposing it via a NodePort service. The speaker explains how to access the application using the worker node's public IP and port. Additionally, instructions are provided for accessing the Kubernetes cluster from a local workstation by copying the kubeconfig file. This allows users to interact with the cluster using kubectl commands from their own machines. The video ends with a call to check the kubeadm documentation for updates and invites viewers to leave comments or questions.
Keywords
💡Kubernetes
💡kubeadm
💡Control Plane
💡Worker Node
💡Container Runtime (CRI-O)
💡Calico Network Plugin
💡Metric Server
💡TLS Certificates
💡Pod CIDR
💡kubelet
Highlights
Introduction to Kubernetes cluster setup using kubeadm utility.
Emphasizes the importance of hands-on experience in building and maintaining self-hosted Kubernetes clusters.
Kubeadm simplifies setting up Kubernetes clusters by handling all components and configurations.
The tutorial focuses on a real-world multi-node cluster setup with master and worker nodes.
Kubernetes cluster management with kubeadm is part of certification exams like CKA and CKS.
Prerequisites include at least two nodes, with IP ranges in the 10.x or 172.x series and static IPs.
Key focus on setting up virtual machines and installing container runtimes like CRI-O.
Comprehensive walkthrough of kubeadm initialization on master nodes and joining worker nodes.
Explanation of TLS certificates and their role in Kubernetes cluster security.
Detailed steps for setting up Calico network plugin for pod networking.
Installing Kubernetes Metric Server to collect CPU and memory usage data.
The tutorial provides a hands-on demonstration using AWS Cloud and Terraform for deploying virtual machines.
Instructions for using pre-built shell scripts (common.sh and master.sh) to automate Kubernetes cluster setup.
Final validation of the cluster by deploying an NGINX application and exposing it via NodePort.
How to configure kubectl on a local workstation by copying the admin kubeconfig from the Kubernetes cluster.
Transcripts
Hello guys, welcome to another practical DevOps tutorial. In this video I'll be showing you how to set up a Kubernetes cluster using the kubeadm utility.

Please check the description, where I have given all the links to the required documentation and the GitHub repository to follow this tutorial. You can use the blog link in the description as a reference for the entire setup, as it is constantly updated for the latest Kubernetes version.

kubeadm is a great tool to set up a working Kubernetes cluster in less time. It simplifies the process of setting up all the Kubernetes cluster components and follows best practices for cluster configuration. There are solutions like Kind and Minikube which you can set up locally to have a Kubernetes environment. Those tools are great for development purposes, but they abstract away all the cluster configuration. While these tools can save time and reduce complexity, it is essential for a DevOps engineer to have a deep understanding of the various components that make up a Kubernetes cluster. Building and maintaining a self-hosted Kubernetes cluster provides valuable hands-on experience and exposes you to the system's complexities. This experience will help you better understand the cluster control plane and worker node components, so I strongly suggest using a self-hosted Kubernetes cluster during your learning process rather than the easily available solutions. With a multi-node cluster you can have a setup that mimics a real-world project setup. Also, if you are preparing for the CKA or CKS certification exams, note that cluster management using kubeadm is part of the exam syllabus.
Let's look at the prerequisites to follow this tutorial. You should have a minimum of two nodes: one master and one worker node. The master node should have a minimum of 2 vCPUs and 2GB RAM; for the worker nodes, a minimum of 1 vCPU and 2GB RAM is recommended. And here is an important requirement: your nodes should have an IP range in the 10.x or 172.x series, with static IPs for the master and worker nodes. We will be using the 192 series as the pod network range through the Calico network plugin. It is very important to have non-overlapping node and pod IP addresses to avoid any kind of IP routing conflicts.

Your nodes should be able to talk to each other on all the ports required by Kubernetes. If you are setting up the kubeadm cluster on cloud servers, ensure you allow those ports in the respective firewall configuration. Also make sure the subnets have routing rules enabled for the CIDR ranges you use in the setup, to avoid any sort of routing issues.

All the commands and scripts used in this guide are hosted on GitHub. Clone the repository to follow along with this guide.
At a high level, here is what we are going to do: deploy three virtual machines; install a container runtime on all the nodes (we'll be using CRI-O); install kubeadm, kubelet, and kubectl on all the nodes; initiate the kubeadm control plane configuration on the master node (it first pulls all the images from registry.k8s.io); join the worker nodes to the control plane; install the Calico network plugin to enable pod networking; install the Kubernetes metrics server to enable pod and node metrics; validate all the cluster components and nodes; and finally deploy a sample NGINX app and validate the cluster.
Here is how kubeadm works. When you initialize kubeadm, it first runs all the preflight checks to validate the system state, and it downloads all the required cluster container images from the registry.k8s.io container registry. It then generates the required TLS certificates and stores them in the /etc/kubernetes/pki folder. Next, it generates the kubeconfig files for the cluster components in the /etc/kubernetes folder. Then it starts the kubelet service and generates the static pod manifests for all the cluster components, saving them in the /etc/kubernetes/manifests folder. Next, it starts all the control plane components from the static pod manifests. Then it installs the CoreDNS and kube-proxy components. Finally, it generates the node bootstrap token; worker nodes use this token to join the control plane. As you can see, all the key cluster configurations will be present under the /etc/kubernetes folder.
Let's get started with the hands-on labs. You can use any cloud or local virtualization setup of your preference; this setup will work on any platform. All you need is three virtual machines that can talk to each other on the required ports. For this demo I am using AWS to deploy three virtual machines. I have a simple Terraform script that deploys three t2.medium instances, with security groups that allow all traffic between the nodes and allow traffic on port 6443 and the NodePort range 30000 to 32767 from anywhere, so that we can access the API server and applications on NodePorts from our workstation. If you are a Terraform and AWS user, all you have to do is go to the instances folder in the cloned repository, and in main.tf replace the AMI ID, key name, and subnet IDs with your custom values. Then run terraform init, plan, and apply, and you will have three VMs ready in a matter of minutes. Now that we have the VMs ready, let's get started with the setup.
To make the setup easier, I have added all the commands to two shell scripts under the scripts folder: common.sh and master.sh. For demonstration purposes, and to save time, I will run the shell scripts containing all the commands. If you want to set up the cluster by executing the commands individually, you can follow the documentation.

Let's take a look at the common.sh script; it should be run on all the nodes. Here we have the KUBERNETES_VERSION variable to set the required cluster version. Then we disable swap and add a crontab entry to keep swap off across server reboots; this is a requirement for the setup. From lines 21 to 57 of the script, we execute commands to install the CRI-O container runtime matching the Kubernetes version. From lines 63 to 69, we install kubelet, kubectl, and kubeadm based on the version set in the KUBERNETES_VERSION variable. From lines 73 to 76, we set the respective node IPs in the kubelet extra args in the /etc/default/kubelet file. This is required for this setup because the servers might have more than one IP address, and kubelet might end up picking the wrong one. Now you can run these commands one by one on all the nodes, or execute the whole shell script.

Here I'm logged into three nodes as the root user: k8s-master-1 is the master node, and k8s-worker-2 and k8s-worker-3 are the worker nodes. We need to execute the commands with root privileges, so ensure you're logged in as root. I will clone the kubeadm scripts repository on all the VMs and cd into the scripts folder. Let's execute common.sh on all three nodes. It will take a few minutes for the script execution to complete. On all the nodes, the script has executed successfully.
Next we have the master.sh script. It should be run on the master node to set up the control plane components. Let's have a look at the script. Here we are setting three environment variables: PUBLIC_IP_ACCESS, NODENAME, and POD_CIDR. If you are working on a PoC in a personal or sandboxed environment, you might need access to the Kubernetes API server over its public IP address; for that you can set PUBLIC_IP_ACCESS to true, because the kubeadm initialization command parameters need to change for public IP endpoints. Here we are using a pod CIDR in the 192 series; ensure your node IP range doesn't conflict with the pod CIDR range. Then the images pull command will download all the required images for the control plane components, because except for kubelet, every cluster component runs as a container.

Then we have a condition that checks whether PUBLIC_IP_ACCESS is set to true or false. If it is false, we fetch the private IP address of the server using the ip addr command and set it to the MASTER_PRIVATE_IP variable; then we initialize kubeadm with the master private IP, node name, and pod CIDR. If PUBLIC_IP_ACCESS is set to true, we retrieve the public IP using curl against the ifconfig.me service and set it to the MASTER_PUBLIC_IP variable; then we initialize kubeadm with the control-plane-endpoint parameter set to the public IP address instead of the apiserver-advertise-address parameter. That is the only change needed to use the public IP address of the master node.

Next we copy the generated admin kubeconfig file to the home folder so that we can execute kubectl commands from the master node. Finally, we install the Calico CNI plugin to enable pod networking.
Now that we have an understanding of the master script, let's execute it on the master node. I am executing this on a server with a public IP, so first I need to set the PUBLIC_IP_ACCESS variable to true in the master.sh script on the master node. If you don't want to use the public IP, do not make any changes to the script; by default it picks up the private IP address of the server. Let's execute the master.sh script.

After initialization, you should get an output with the kubeconfig file location and the join command with the token. Copy that and save it to a file; we will need the join command for joining the worker nodes to the master. Now let's verify the kubeconfig by executing a kubectl command to list all the pods in the kube-system namespace. Here you can see all the cluster component pods, such as the API server, etcd, CoreDNS, and kube-scheduler, running without any issues. We can also verify the readiness of the API server by querying its health endpoints using kubectl. Here you can see all the API server endpoints returned the ok status, which means we have a working master node without any issues.
Now let's join the worker nodes to the master node using the kubeadm join command we got in the output while setting up the master node. If you don't have the command with you, you can print it using the kubeadm token create command from the master node. I will copy the join command and execute it on the two worker nodes. It performs TLS bootstrapping for the nodes, meaning the TLS certificates required for master and node authentication are automatically created in this process. Once it has executed successfully, you will see output saying this node has joined the cluster. If you have multiple worker nodes, execute the join command on all of them; I am configuring two worker nodes for this setup.

Now, from the master node, let's try to list the nodes. We can see three nodes: one is the control plane node, and the others are the worker nodes, currently without any labels. Let's label the worker nodes as workers using a kubectl command. You need to replace k8s-worker-2 and k8s-worker-3 with the hostnames of your worker nodes. If you list the nodes now, you can see the worker label on the worker nodes.
To get CPU and memory metrics of pods, we need the metrics server component in the cluster. It collects resource usage data such as CPU and memory from each node in the cluster and exposes this data through the Kubernetes API. kubeadm doesn't install the metrics server component during its initialization; we have to install it separately. To verify this, if you run the top command you will see a "Metrics API not available" error.

Let's deploy the metrics server using the metrics server manifest file present under the manifests folder. This manifest is taken from the official metrics server repo; I have added the kubelet-insecure-tls flag to the container to make it work, because in our setup kubeadm uses self-signed certificates. The insecure flag is not recommended in actual projects or production environments, where you should use valid TLS certificates in the cluster. Let's deploy the manifest. If you check the kube-system namespace, you can see the metrics server getting deployed. Once it is up and running, you can check the pod and node metrics using the top command. Here you can see the CPU and memory metrics of nodes and pods; it means the metrics server is working as expected.
Our final step is to validate the cluster by deploying an app and accessing it over a NodePort. We will deploy an NGINX application and expose it using a service of type NodePort. Under the manifests folder, you will find the sample-app.yaml file. In this manifest we have a Deployment object with the latest NGINX image and a Service object that exposes the NGINX deployment on NodePort 32000. Now let's deploy the sample app using kubectl. It is successfully deployed. To verify the NodePort service, let's try to access it using a worker node's IP and port 32000; here I'm using the public IP address of the worker node. We are able to see the NGINX homepage on the NodePort, which means the cluster setup is working as expected.

In this whole setup I have used kubectl from the master node. If you want to access the Kubernetes cluster from your local workstation, you need to copy the admin.conf content, that is, the admin kubeconfig file, to your local .kube folder. I assume you have the kubectl utility installed on your workstation. You can either copy the admin.conf file content or use scp to copy the file to your workstation. First, open the admin.conf file present in the /etc/kubernetes folder. Here you can see the API server public endpoint and the certificate and token details required to authenticate against the API server; this config has full admin access to the cluster. Now I will copy the whole admin.conf file contents to the clipboard. On my workstation, I will open the config file under the .kube folder, paste the admin.conf content into it, and save it. If I execute kubectl commands now, they will interact with my kubeadm cluster using the details present in the config file.
I hope this tutorial was helpful. If you face any issues with the setup or need any clarification, you can drop a comment. Also check the kubeadm documentation link I have added in the description for the latest updates to the setup. In the next video we will look at the important Kubernetes cluster configurations every DevOps engineer should know. Thank you, and see you in the next video.