Kubernetes Architecture in 7 minutes | K8s explained
Summary
TL;DR: This video offers a simple explanation of Kubernetes architecture, breaking down the components of both the master and worker nodes within a Kubernetes cluster. It explains how worker nodes run application containers inside pods, managed by agents like kubelet and networked through kube-proxy. The master node controls the cluster with components such as the API server, scheduler, controller manager, and etcd, which stores cluster data. The video covers how these components interact to manage workloads and ensure everything runs smoothly. It’s a beginner-friendly guide to Kubernetes architecture.
Takeaways
- 🔧 Kubernetes clusters consist of two types of nodes: master nodes and worker nodes.
- ⚙️ Master nodes are responsible for managing the entire cluster, while worker nodes run the application containers.
- 📦 Pods are the smallest deployable units in Kubernetes, and they can contain one or more containers.
- 🛠 The Kubelet agent on worker nodes communicates with the master node to manage workloads and decide where pods should run.
- 🐳 A container runtime, such as Docker or containerd, is required on worker nodes to create and manage containers; the kubelet talks to it through the Container Runtime Interface (CRI).
- 🌐 The Kube Proxy on worker nodes handles networking, load balancing, and communication between pods, making applications accessible to end users.
- 📡 The API server on the master node acts as the entry point for all operations within the cluster, validating and authenticating requests.
- 🗂 The scheduler on the master node assigns pods to worker nodes based on resource availability.
- 🔄 The controller manager ensures that the cluster is running according to the desired state defined in manifest files.
- 📊 The etcd component on the master node stores all cluster data, including the state of applications and pods, and communicates only through the API server.
Q & A
What is a Kubernetes cluster?
-A Kubernetes cluster consists of a group of nodes that run containerized applications. It can be created locally using tools like Minikube or kubeadm, or using cloud provider services like EKS (AWS), AKS (Azure), or GKE (Google Cloud). The cluster has two types of nodes: master nodes and worker nodes.
What is the role of the master node in a Kubernetes cluster?
-The master node, also known as the control plane, is responsible for managing the entire cluster. It handles tasks such as scheduling pods, ensuring the desired state of the cluster, and storing the cluster's data. The master node communicates with worker nodes and coordinates application deployment.
What are worker nodes in a Kubernetes cluster, and what is their role?
-Worker nodes are machines in the Kubernetes cluster responsible for running application containers. They host pods, which are the smallest deployable units in Kubernetes. Each worker node also contains essential components like the kubelet agent, container runtime interface, and kube-proxy for communication.
What is a pod in Kubernetes?
-A pod is the smallest deployable unit in Kubernetes and can contain one or more containers. Pods run on worker nodes, and Kubernetes schedules and manages them based on the available resources in the cluster.
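As a concrete illustration, a minimal Pod manifest might look like the following (the pod name and the nginx image are just examples, not from the video):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80
```

Applying a file like this with `kubectl apply -f pod.yaml` sends it to the API server, which then schedules the pod onto a worker node.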
What is kubelet, and what is its role on a worker node?
-Kubelet is an agent running on every worker node. It receives instructions from the master node and ensures that the pods are running correctly. It is responsible for managing and monitoring the containers on that worker node.
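The kubelet's job can be pictured as a sync step: compare the pods the API server assigned to this node with what is actually running, then ask the container runtime to start what is missing and stop what is extra. A minimal sketch, where the "runtime" object is a stand-in and not a real CRI client:

```python
# Rough sketch of a kubelet sync pass. FakeRuntime is an invented stand-in
# for a real container runtime; pod names are illustrative.

class FakeRuntime:
    def __init__(self):
        self.started, self.stopped = [], []

    def start(self, pod):
        self.started.append(pod)

    def stop(self, pod):
        self.stopped.append(pod)

def sync_node(assigned, running, runtime):
    """Bring this node's running pods in line with its assignment."""
    for pod in sorted(assigned - running):
        runtime.start(pod)   # pod assigned here but not yet running
    for pod in sorted(running - assigned):
        runtime.stop(pod)    # pod no longer assigned to this node

runtime = FakeRuntime()
sync_node({"web-1", "web-2"}, {"web-1", "old-job"}, runtime)
print(runtime.started, runtime.stopped)  # → ['web-2'] ['old-job']
```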
What is the container runtime interface (CRI) in Kubernetes?
-The container runtime is the software that manages the lifecycle of containers on a worker node: creating, running, and stopping them. The kubelet talks to it through the Container Runtime Interface (CRI); popular runtimes include Docker and containerd.
What is kube-proxy, and how does it function in a Kubernetes cluster?
-Kube-proxy is a networking component that runs on worker nodes. It is responsible for enabling communication between pods and making applications accessible to external users. Kube-proxy also handles load balancing and service proxying for traffic within the cluster.
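The load-balancing idea behind kube-proxy can be sketched as one stable service address distributing requests across the pod endpoints behind it. Real kube-proxy does this with iptables/IPVS rules, not Python; the addresses below are invented:

```python
# Toy round-robin service proxy: each request to the "service" lands on
# the next pod endpoint. Endpoint addresses are made up for illustration.
import itertools

class ServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        return next(self._cycle)

svc = ServiceProxy(["10.0.1.5:8080", "10.0.2.7:8080"])
print([svc.route() for _ in range(3)])
# → ['10.0.1.5:8080', '10.0.2.7:8080', '10.0.1.5:8080']
```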
What is the kube API server, and why is it important?
-The kube API server is the entry point for all interactions with the Kubernetes cluster. It exposes Kubernetes APIs that allow users to manage resources like pods. The API server authenticates and validates requests, making it central to all communication between the user and the cluster's components.
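The API server's gatekeeping role — authenticate the caller, validate the request, then act on it — can be sketched as a tiny handler. The token set, request shape, and verb list here are all invented for illustration:

```python
# Minimal sketch of API-server-style request handling: reject callers that
# fail authentication, reject malformed requests, accept the rest.

VALID_TOKENS = {"devops-token"}          # stand-in for real authentication

def handle_request(request):
    if request.get("token") not in VALID_TOKENS:
        return (401, "Unauthorized")     # authentication failed
    if request.get("verb") not in {"get", "create", "delete"}:
        return (400, "Invalid request")  # validation failed
    return (200, f"{request['verb']} accepted")

print(handle_request({"token": "devops-token", "verb": "create"}))
# → (200, 'create accepted')
```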
How does the Kubernetes scheduler work?
-The scheduler assigns pods to worker nodes based on resource availability. When a new pod needs to be created, the scheduler selects the most suitable node that has the capacity to run the pod. It then informs the kube API server, which coordinates the deployment of the pod.
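The scheduler's core decision can be sketched as two steps: filter out nodes without enough free resources, then score the survivors and pick one. Node names and resource numbers below are made up; the real scheduler uses many more criteria:

```python
# Toy scheduler: filter nodes by capacity, then prefer the node with the
# most free resources overall.

def schedule(pod_request, nodes):
    """Return the best node name for a pod, or None if none fits."""
    feasible = {
        name: free for name, free in nodes.items()
        if free["cpu"] >= pod_request["cpu"] and free["mem"] >= pod_request["mem"]
    }
    if not feasible:
        return None  # pod stays Pending until capacity appears
    return max(feasible, key=lambda n: feasible[n]["cpu"] + feasible[n]["mem"])

nodes = {
    "worker-1": {"cpu": 2, "mem": 4},
    "worker-2": {"cpu": 4, "mem": 8},
}
print(schedule({"cpu": 1, "mem": 2}, nodes))   # → worker-2
print(schedule({"cpu": 8, "mem": 16}, nodes))  # → None
```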
What is the role of the controller manager in Kubernetes?
-The controller manager monitors the cluster to ensure it is in the desired state. It compares the desired state (as defined in the cluster's manifest files) with the actual state. If there is any deviation, such as a pod being deleted, the controller manager instructs the kube API server to recreate the pod.
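This desired-vs-actual comparison is the classic reconcile loop. A minimal sketch, where `FakeAPI` is an invented stand-in for the real API server client:

```python
# One pass of a reconcile loop for a single workload: create or delete pods
# through the API server until actual matches desired.

class FakeAPI:
    def __init__(self):
        self.created, self.deleted = 0, []

    def create_pod(self):
        self.created += 1

    def delete_pod(self, pod):
        self.deleted.append(pod)

def reconcile(desired_replicas, running_pods, api):
    """Return the gap between desired and actual, after acting on it."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        for _ in range(diff):
            api.create_pod()            # scheduler then picks a node for it
    elif diff < 0:
        for pod in running_pods[:-diff]:
            api.delete_pod(pod)         # scale down the surplus
    return diff

api = FakeAPI()
reconcile(4, ["pod-a", "pod-b"], api)   # 2 pods missing
print(api.created)  # → 2
```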
What is etcd, and what role does it play in Kubernetes?
-Etcd is a key-value store that stores all the cluster's data, such as which pods are running and their locations. It only communicates with the kube API server. Etcd ensures that the cluster's state information is persistent and can be accessed as needed by other components.
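The shape of the data can be illustrated with a plain dictionary: Kubernetes stores objects in etcd under registry-style keys (simplified here to `/registry/<resource>/<namespace>/<name>`), and components only reach that data via the API server:

```python
# Toy illustration of etcd as a key-value store behind the API server.
# Keys and pod records below are invented examples.

store = {}

def api_server_put(key, value):
    # All writes go through the API server, etcd's only client.
    store[key] = value

def api_server_get(prefix):
    # Prefix queries let the API server list e.g. all pods in a namespace.
    return {k: v for k, v in store.items() if k.startswith(prefix)}

api_server_put("/registry/pods/default/web-1", {"node": "worker-1", "phase": "Running"})
api_server_put("/registry/pods/default/web-2", {"node": "worker-2", "phase": "Pending"})
print(len(api_server_get("/registry/pods/default/")))  # → 2
```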
Outlines
🚀 Introduction to Kubernetes Architecture
In this video, we explore the Kubernetes architecture in the simplest terms. We'll dive deep into the Kubernetes cluster, understanding its components and their interactions. The video starts by creating a Kubernetes cluster using tools like Minikube, kubeadm, or cloud services such as AWS EKS, Azure AKS, or Google GKE. Within the cluster, there are two types of nodes: the master node, which manages everything in the cluster, and the worker node, which runs application containers. We will break down these components starting with the worker nodes.
🖥️ Worker Nodes: The Backbone of Application Deployment
Worker nodes are responsible for running application containers inside Kubernetes, and they do this through pods—the smallest deployable units in the cluster. Pods can hold one or more containers. The 'kubelet' agent running on all worker nodes ensures that pods are deployed correctly, as instructed by the master node. Additionally, each worker node has a container runtime interface like Docker or containerd to create and manage containers. For network communication and load balancing, the 'kube-proxy' handles interactions between pods and ensures application accessibility.
⚙️ Master Node Components: Managing the Cluster
The master node, or control plane, is crucial for managing Kubernetes clusters and contains four key components. The 'Kube API Server' acts as the entry point for all operations within the cluster, validating requests from users or systems and managing communication with other components. The 'Scheduler' assigns pods to worker nodes based on resource availability, ensuring balanced distribution of workloads. The 'Controller Manager' monitors the state of the cluster, ensuring that desired conditions (such as running a certain number of pods) match the actual state.
📦 etcd: Storing Cluster Data
All the cluster's data, including running applications and pods, is stored in 'etcd,' a key-value store. This component only communicates with the Kube API Server. When a user or system needs information about the cluster, they must go through the API Server to access data from etcd. The entire system ensures seamless communication between components within the master node and worker nodes, ultimately orchestrating Kubernetes' dynamic and scalable architecture.
🔑 Recap of Kubernetes Architecture
In summary, Kubernetes architecture consists of a master node that manages the cluster and worker nodes that run application workloads. The master node includes the Kube API Server (the entry point for all operations), the Scheduler (which assigns pods to nodes), the Controller Manager (which ensures the system's desired state is maintained), and etcd (the key-value store holding cluster data). Worker nodes, on the other hand, run pods, with kubelet ensuring container management and kube-proxy enabling communication and access to applications. This architecture provides a robust and scalable environment for deploying and managing containerized applications.
Keywords
💡Kubernetes Cluster
💡Master Node
💡Worker Node
💡Pod
💡Kubelet
💡Container Runtime Interface (CRI)
💡Kube Proxy
💡API Server
💡Scheduler
💡Controller Manager
💡etcd
Highlights
Introduction to Kubernetes architecture explained in a simplified manner.
Two types of nodes in a Kubernetes cluster: master node (control plane) and worker node.
Worker nodes are responsible for running application containers within Kubernetes.
Pods, the smallest deployable units in Kubernetes, can hold one or more containers.
The kubelet agent, running on each worker node, communicates with the master node to manage pod creation and workloads.
Container Runtime Interface (CRI) such as Docker is necessary to manage containers on worker nodes.
Kube-proxy is used for networking and communication between pods, load balancing, and exposing applications to end users.
The master node contains key components like the kube-apiserver, which is the entry point for cluster management.
The kube-apiserver validates requests, authenticates users, and processes commands issued via kubectl or other interfaces.
Scheduler component in the master node assigns pods to worker nodes based on available resources.
The controller manager ensures the desired state of the cluster is maintained, recreating pods if necessary.
Cluster data, including the status of nodes and pods, is stored in etcd, a key-value store.
All communication between etcd and other components in the master node goes through the kube-apiserver.
The Kubernetes architecture overview emphasizes the role of the master node in controlling workloads and the worker node in running them.
For further learning, additional videos on Kubernetes concepts and working are recommended.
Transcripts
In this video I will explain Kubernetes architecture in the simplest manner possible. We'll go deep down into the Kubernetes cluster, understanding the different components and how they actually work, so make sure you watch this video till the end. Let's start.

We start with creating a cluster. You can create a Kubernetes cluster locally using tools like Minikube or kubeadm, or you can use cloud provider services like EKS in AWS, AKS in Azure, or GKE in Google Cloud. Inside this Kubernetes cluster we have two different types of machines, or as we call them, nodes. We have a master node, which is responsible for handling everything inside the cluster, and along with the master node we have another kind of node known as a worker node. The worker node is actually responsible for running your application containers inside the Kubernetes cluster. So you have the master node to handle everything inside the cluster, and you have worker nodes which are responsible for running your application containers.

Let's start with worker nodes and look at the different components inside them. As we know, worker nodes are responsible for running application containers, and they run these containers inside something known as pods. Pods are the smallest deployable units in the Kubernetes cluster, and a pod can hold one or more containers. A single worker node can have one pod or more than one. So the containers are actually running inside an object known as a pod. To deploy and manage pods inside the worker node we have an agent named kubelet. This kubelet agent runs on all worker nodes, and it gets information from the master node to decide where the pods should run and how to manage them. Along with this, for a worker machine to create and manage containers, we need a container runtime such as Docker, so the worker machine will also contain a container runtime; other options supported by Kubernetes are containerd, rkt, etc. Also, to enable communication between pods, and to make these pods or applications accessible to the end users, we have another networking component on the worker node known as kube-proxy. Kube-proxy is used for pod-to-pod communication, for load balancing and service proxying, and to make your application accessible to the end users.

So these are the different components inside the worker nodes: you have kube-proxy for pod-to-pod communication and for exposing your application, you have the container runtime to create and delete containers, and you have kubelet, an agent running on every worker node, which gets information from the master node and handles the workloads inside the worker node. Along with these, you can install more add-ons or plugins on your worker nodes, but these are the most important components.

Now that we have understood the different components inside worker nodes and how they actually work, let's look at the different components inside the master node. The master node, or control plane, has four very important components that are responsible for managing everything inside the cluster.

The first component is the kube API server. The kube API server is like an entry point for everything you do inside the cluster. Let's say you are a DevOps engineer and you want to create a new pod, delete a pod, or just get its status — if you want to do anything inside the cluster, you will have to talk to the kube API server. You can talk to the kube API server either by running commands with the kubectl command-line tool, through the Kubernetes dashboard, or using SDKs. The kube API server will validate whether the request is okay and whether the user is authenticated, and if everything checks out, it will get the information from the other components and provide it to the end user or to the DevOps engineer. So the API server is the entry point, which validates the request and also authenticates the user.

The next master node component is the scheduler. The scheduler is used to assign pods to worker nodes. Let's say you made a request to the API server to create a new pod: the scheduler is going to check which nodes have enough capacity to launch this new pod. Once the scheduler decides which node the pod is going to run on, it provides this information to the kube API server, the kube API server gives it to the kubelet running on that particular node, and the kubelet creates the new pod on the worker node. So the scheduler assigns pods to nodes based on resource capacity and the manifest files that you have defined for your cluster.

Now, what if there's an issue inside the cluster, or one of these pods shows an error or gets deleted? That's when the next master node component comes into play: the controller manager. The controller manager monitors the entire cluster and makes sure that everything is as you have defined inside your manifest files. Let's say you have said you want four pods running; it will make sure that four pods are running at all times. If that's not the case, it will tell the API server, which will recreate the pod following the same procedure: the scheduler first decides which node to create the pod on, then gives the information to the API server, and then the kubelet creates it. So the controller manager monitors and makes sure everything is working as you have defined, and it does that by comparing the desired state with the actual state of the cluster.

But where is this cluster information, or cluster data, stored? It is stored inside the next master node component, which is etcd. etcd is a key-value store that stores all the information about the cluster: which applications are running, which pods are running on which nodes, and everything else. One thing to note is that etcd can only talk to the kube API server, so if you want to get any information from etcd, it has to go through the kube API server. So you can see the kube API server is like a center of communication for every component inside the master node, as well as for the kubelet running on the worker nodes.

So this is the Kubernetes architecture. We have the master node and the worker node. The master node is responsible for managing the workloads inside the cluster; the worker nodes actually run the application workloads. Inside the worker node we have the pods, the smallest units responsible for running the application containers; we have kubelet, an agent that runs on all worker nodes and is also responsible for creating pods; next we have the container runtime; and we also have kube-proxy for networking and for exposing our application. Inside the master node we have the kube API server, which is the entry point and the authentication system for every request you make — it also exposes the Kubernetes API for the end user. The next component is the scheduler, which is used to assign pods to nodes. Then we have the controller manager, which makes sure everything is working as you have desired. And all the cluster information is stored inside etcd.

So this is a simplified explanation of Kubernetes architecture. I hope this video was informative. If you have any questions or doubts, let me know in the comment section, and for more information I would recommend checking out my other video, which explains what Kubernetes is and how it actually works. Do check it out. Thank you and have a good day.