Kubernetes Architecture in 7 minutes | K8s explained

Cloud Champ
9 Apr 2024 · 07:05

Summary

TL;DR: This video offers a simple explanation of Kubernetes architecture, breaking down the components of both the master and worker nodes within a Kubernetes cluster. It explains how worker nodes run application containers inside pods, managed by agents like kubelet and networked through kube-proxy. The master node controls the cluster with components such as the API server, scheduler, controller manager, and etcd, which stores cluster data. The video covers how these components interact to manage workloads and ensure everything runs smoothly. It’s a beginner-friendly guide to Kubernetes architecture.

Takeaways

  • 🔧 Kubernetes clusters consist of two types of nodes: master nodes and worker nodes.
  • ⚙️ Master nodes are responsible for managing the entire cluster, while worker nodes run the application containers.
  • 📦 Pods are the smallest deployable units in Kubernetes, and they can contain one or more containers.
  • 🛠 The Kubelet agent on worker nodes communicates with the master node to manage workloads and decide where pods should run.
  • 🐳 A container runtime, such as containerd or Docker, is required on worker nodes to create and manage containers; kubelet talks to it through the Container Runtime Interface (CRI).
  • 🌐 The Kube Proxy on worker nodes handles networking, load balancing, and communication between pods, making applications accessible to end users.
  • 📡 The API server on the master node acts as the entry point for all operations within the cluster, validating and authenticating requests.
  • 🗂 The scheduler on the master node assigns pods to worker nodes based on resource availability.
  • 🔄 The controller manager ensures that the cluster is running according to the desired state defined in manifest files.
  • 📊 The etcd component on the master node stores all cluster data, including the state of applications and pods, and communicates only through the API server.

Q & A

  • What is a Kubernetes cluster?

    -A Kubernetes cluster consists of a group of nodes that run containerized applications. It can be created locally using tools like Minikube or kubeadm, or using cloud provider services like EKS (AWS), AKS (Azure), or GKE (Google Cloud). The cluster has two types of nodes: master nodes and worker nodes.

  • What is the role of the master node in a Kubernetes cluster?

    -The master node, also known as the control plane, is responsible for managing the entire cluster. It handles tasks such as scheduling pods, ensuring the desired state of the cluster, and storing the cluster's data. The master node communicates with worker nodes and coordinates application deployment.

  • What are worker nodes in a Kubernetes cluster, and what is their role?

    -Worker nodes are machines in the Kubernetes cluster responsible for running application containers. They host pods, which are the smallest deployable units in Kubernetes. Each worker node also contains essential components like the kubelet agent, container runtime interface, and kube-proxy for communication.

  • What is a pod in Kubernetes?

    -A pod is the smallest deployable unit in Kubernetes and can contain one or more containers. Pods run on worker nodes, and Kubernetes schedules and manages them based on the available resources in the cluster.
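As an illustration (not shown in the video), a minimal Pod manifest describes a single-container pod; the name and image below are arbitrary examples:

```yaml
# A minimal Pod: one container running an nginx web server.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # arbitrary example name
spec:
  containers:
    - name: web
      image: nginx:1.25    # example image tag
      ports:
        - containerPort: 80
```

Applying a manifest like this sends it to the API server, which stores it in etcd; the scheduler then picks a worker node, and the kubelet on that node starts the container.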

  • What is kubelet, and what is its role on a worker node?

    -Kubelet is an agent running on every worker node. It receives instructions from the master node and ensures that the pods are running correctly. It is responsible for managing and monitoring the containers on that worker node.

  • What is the container runtime interface (CRI) in Kubernetes?

    -The Container Runtime Interface (CRI) is the API through which kubelet talks to the container runtime on a worker node. Popular runtimes include containerd and Docker. The runtime is responsible for creating, running, and stopping containers on worker nodes.

  • What is kube-proxy, and how does it function in a Kubernetes cluster?

    -Kube-proxy is a networking component that runs on worker nodes. It is responsible for enabling communication between pods and making applications accessible to external users. Kube-proxy also handles load balancing and service proxying for traffic within the cluster.
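The load-balancing idea can be sketched in Python as a toy round-robin over a service's pod endpoints. This is purely illustrative of the concept — the real kube-proxy programs iptables or IPVS rules rather than proxying in application code, and the endpoint addresses below are made up:

```python
from itertools import cycle

class ToyServiceProxy:
    """Toy model: round-robin traffic distribution across pod endpoints."""

    def __init__(self, endpoints):
        self.endpoints = cycle(endpoints)  # rotate through pod addresses forever

    def route(self):
        """Return the pod endpoint that should receive the next request."""
        return next(self.endpoints)

proxy = ToyServiceProxy(["10.0.1.5:8080", "10.0.2.7:8080"])
picks = [proxy.route() for _ in range(4)]
print(picks)  # alternates between the two pod endpoints
```

Each request lands on the next pod in turn, which is the effect a Kubernetes Service gives you when traffic is spread across its backing pods.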

  • What is the kube API server, and why is it important?

    -The kube API server is the entry point for all interactions with the Kubernetes cluster. It exposes Kubernetes APIs that allow users to manage resources like pods. The API server authenticates and validates requests, making it central to all communication between the user and the cluster's components.
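The API server's gatekeeping role — authenticate, then validate, then process — can be sketched as a toy function. The token values, verbs, and return shape here are invented for illustration and are not the real Kubernetes API:

```python
VALID_TOKENS = {"devops-token"}          # stand-in for real authentication
ALLOWED_VERBS = {"get", "create", "delete"}

def handle_request(token, verb, resource):
    """Authenticate the caller, validate the request, then accept it."""
    if token not in VALID_TOKENS:
        return (401, "Unauthorized")      # authentication failed
    if verb not in ALLOWED_VERBS:
        return (400, "Invalid request")   # validation failed
    # In a real cluster the API server would now read or write etcd
    # and notify the scheduler or a kubelet as needed.
    return (200, f"{verb} {resource} accepted")

print(handle_request("devops-token", "create", "pod/nginx"))
print(handle_request("bad-token", "get", "pod/nginx"))
```

The key point mirrored here is ordering: a request that fails authentication or validation never reaches the rest of the control plane.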

  • How does the Kubernetes scheduler work?

    -The scheduler assigns pods to worker nodes based on resource availability. When a new pod needs to be created, the scheduler selects the most suitable node that has the capacity to run the pod. It then informs the kube API server, which coordinates the deployment of the pod.
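The selection step can be sketched as a filter-and-pick over node capacity. This toy version considers only a single CPU dimension; the real scheduler also weighs memory, affinity rules, taints, and more:

```python
def pick_node(nodes, pod_cpu):
    """Return the node with the most free CPU that can fit the pod."""
    candidates = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not candidates:
        return None  # no fit: the pod stays Pending until capacity appears
    return max(candidates, key=candidates.get)

# free CPU (in cores) per worker node -- example numbers
nodes = {"worker-1": 0.5, "worker-2": 2.0, "worker-3": 1.0}
print(pick_node(nodes, pod_cpu=1.0))  # worker-2 has the most free capacity
```

Once a node is chosen, the scheduler writes the decision back through the API server, and the kubelet on that node does the actual pod creation.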

  • What is the role of the controller manager in Kubernetes?

    -The controller manager monitors the cluster to ensure it is in the desired state. It compares the desired state (as defined in the cluster's manifest files) with the actual state. If there is any deviation, such as a pod being deleted, the controller manager instructs the kube API server to recreate the pod.
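That compare-and-correct behaviour is a reconciliation loop. A minimal sketch (illustrative only — real controllers watch the API server for changes rather than being called directly):

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions needed to bring actual state to desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return [f"create pod-{i}" for i in range(diff)]       # scale up
    if diff < 0:
        return [f"delete {p}" for p in running_pods[:-diff]]  # scale down
    return []  # actual state already matches desired state

print(reconcile(4, ["pod-a", "pod-b"]))  # two pods missing -> two creates
```

If a pod is deleted, the next pass of the loop notices the gap and issues a create, which flows through the API server, scheduler, and kubelet as described above.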

  • What is etcd, and what role does it play in Kubernetes?

    -Etcd is a key-value store that stores all the cluster's data, such as which pods are running and their locations. It only communicates with the kube API server. Etcd ensures that the cluster's state information is persistent and can be accessed as needed by other components.
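The key-value idea can be sketched with a dict. The `/registry/...` key layout mirrors how keys look in a real cluster's etcd, but the class itself is a toy stand-in:

```python
class ToyClusterStore:
    """A dict standing in for etcd: flat keys mapping to cluster state."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def list_prefix(self, prefix):
        """etcd supports range reads by key prefix, e.g. all pods in a namespace."""
        return {k: v for k, v in self._data.items() if k.startswith(prefix)}

store = ToyClusterStore()
store.put("/registry/pods/default/nginx", {"node": "worker-2", "phase": "Running"})
store.put("/registry/pods/default/redis", {"node": "worker-1", "phase": "Running"})
print(store.list_prefix("/registry/pods/default/"))
```

In a real cluster only the API server performs these reads and writes; every other component asks the API server rather than touching etcd directly.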

Outlines

00:00

🚀 Introduction to Kubernetes Architecture

In this video, we explore the Kubernetes architecture in the simplest terms. We'll dive deep into the Kubernetes cluster, understanding its components and their interactions. The video starts by creating a Kubernetes cluster using tools like Minikube, kubeadm, or cloud services such as AWS EKS, Azure AKS, or Google GKE. Within the cluster, there are two types of nodes: the master node, which manages everything in the cluster, and the worker node, which runs application containers. We will break down these components starting with the worker nodes.

01:02

🖥️ Worker Nodes: The Backbone of Application Deployment

Worker nodes are responsible for running application containers inside Kubernetes, and they do this through pods—the smallest deployable units in the cluster. Pods can hold one or more containers. The 'kubelet' agent running on all worker nodes ensures that pods are deployed correctly, as instructed by the master node. Additionally, each worker node has a container runtime interface like Docker or containerd to create and manage containers. For network communication and load balancing, the 'kube-proxy' handles interactions between pods and ensures application accessibility.

⚙️ Master Node Components: Managing the Cluster

The master node, or control plane, is crucial for managing Kubernetes clusters and contains four key components. The 'Kube API Server' acts as the entry point for all operations within the cluster, validating requests from users or systems and managing communication with other components. The 'Scheduler' assigns pods to worker nodes based on resource availability, ensuring balanced distribution of workloads. The 'Controller Manager' monitors the state of the cluster, ensuring that desired conditions (such as running a certain number of pods) match the actual state.

📦 etcd: Storing Cluster Data

All the cluster's data, including running applications and pods, is stored in 'etcd,' a key-value store. This component only communicates with the Kube API Server. When a user or system needs information about the cluster, they must go through the API Server to access data from etcd. The entire system ensures seamless communication between components within the master node and worker nodes, ultimately orchestrating Kubernetes' dynamic and scalable architecture.

🔑 Recap of Kubernetes Architecture

In summary, Kubernetes architecture consists of a master node that manages the cluster and worker nodes that run application workloads. The master node includes the Kube API Server (the entry point for all operations), the Scheduler (which assigns pods to nodes), the Controller Manager (which ensures the system's desired state is maintained), and etcd (the key-value store holding cluster data). Worker nodes, on the other hand, run pods, with kubelet ensuring container management and kube-proxy enabling communication and access to applications. This architecture provides a robust and scalable environment for deploying and managing containerized applications.

Keywords

💡Kubernetes Cluster

A Kubernetes Cluster is the set of machines (nodes) that run containerized applications. In the video, the speaker explains that the cluster can be created locally using tools like Minikube or kubeadm, or through cloud services like AWS EKS and Google GKE. The Kubernetes cluster is the central infrastructure where both master and worker nodes operate.

💡Master Node

The Master Node is the central control unit in a Kubernetes cluster. It is responsible for managing the cluster, including scheduling and monitoring workloads. The video describes how the master node handles requests from the user and controls the worker nodes. Key components such as the API server, scheduler, and controller manager operate here.

💡Worker Node

A Worker Node is the machine that actually runs the application containers in Kubernetes. The video highlights how worker nodes host 'pods,' which are the smallest deployable units, and are responsible for executing applications. Each worker node also runs essential components like kubelet and container runtime interfaces.

💡Pod

Pods are the smallest deployable units in Kubernetes, and they can contain one or more containers. The video explains that these pods run within worker nodes and are managed by an agent called kubelet. The pod is critical for hosting and running applications within the Kubernetes ecosystem.

💡Kubelet

Kubelet is an agent running on each worker node that interacts with the master node to manage and deploy pods. In the video, the kubelet is responsible for receiving instructions from the master node about pod scheduling and ensuring that the applications are running as expected within the worker nodes.

💡Container Runtime Interface (CRI)

The Container Runtime Interface (CRI) is the API through which kubelet drives the container runtime that actually runs containers within a pod. Docker is an example of a runtime mentioned in the video. The runtime is responsible for the creation and management of containers on the worker node, which enables Kubernetes to deploy applications.

💡Kube Proxy

Kube Proxy is a network component that runs on each worker node and manages networking, load balancing, and communication between pods. The video explains how it enables pod-to-pod communication and exposes applications to end users, ensuring efficient network traffic routing within the cluster.

💡API Server

The API Server is the main entry point for interacting with the Kubernetes cluster. In the video, it is described as the component that handles all requests from users, such as creating or deleting pods. It also authenticates users and communicates with other master node components to execute commands.

💡Scheduler

The Scheduler is a component of the master node responsible for assigning pods to worker nodes based on resource availability. The video explains how, after a request to create a new pod, the scheduler determines which worker node has enough capacity to host the new pod.

💡Controller Manager

The Controller Manager monitors the entire Kubernetes cluster and ensures that the system matches the desired state as defined in the manifest files. In the video, this component is explained as essential for keeping the system operational by ensuring the right number of pods are always running and automatically recreating them if needed.

💡etcd

etcd is a key-value store used by Kubernetes to store all cluster data, such as the state of pods, nodes, and applications. The video mentions that etcd holds the entire state of the cluster and is only accessible through the API server. This makes it a critical component for ensuring that cluster information is kept consistent and up to date.

Highlights

Introduction to Kubernetes architecture explained in a simplified manner.

Two types of nodes in a Kubernetes cluster: master node (control plane) and worker node.

Worker nodes are responsible for running application containers within Kubernetes.

Pods, the smallest deployable units in Kubernetes, can hold one or more containers.

The kubelet agent, running on each worker node, communicates with the master node to manage pod creation and workloads.

A container runtime such as Docker or containerd, accessed through the Container Runtime Interface (CRI), is necessary to manage containers on worker nodes.

Kube-proxy is used for networking and communication between pods, load balancing, and exposing applications to end users.

The master node contains key components like the kube-apiserver, which is the entry point for cluster management.

The kube-apiserver validates requests, authenticates users, and processes commands issued via kubectl or other interfaces.

Scheduler component in the master node assigns pods to worker nodes based on available resources.

The controller manager ensures the desired state of the cluster is maintained, recreating pods if necessary.

Cluster data, including the status of nodes and pods, is stored in etcd, a key-value store.

All communication between etcd and other components in the master node goes through the kube-apiserver.

The kubernetes architecture overview emphasizes the role of the master node in controlling workloads and the worker node in running them.

For further learning, additional videos on Kubernetes concepts and working are recommended.

Transcripts

00:00

In this video I will explain Kubernetes architecture in the simplest manner possible. We go deep down into the Kubernetes cluster, understanding the different components and how they actually work, so make sure you watch this video till the end. Let's start.

00:13

We start with creating a cluster. You can create a Kubernetes cluster locally using tools like Minikube or kubeadm. Optionally, you can also use cloud provider services like EKS in AWS, AKS in Azure, and GKE in Google Cloud to create Kubernetes clusters. Inside this cluster we have two different types of machines, or as we call them, nodes. We have a master node, which is responsible for handling everything inside the cluster. Along with the master node, we also have another kind of node known as a worker node. This worker node is actually responsible for running your application containers inside the Kubernetes cluster. So you have the master node that handles everything inside the cluster, and you have worker nodes, which are responsible for running your application containers.

01:02

Let's start with worker nodes and look at the different components inside them. As we all know, worker nodes are responsible for running application containers, and they run these containers inside something known as pods. Pods are the smallest deployable units in the Kubernetes cluster, and a pod can hold one or more containers. A single worker node can have more than one pod, or it can have a single pod as well. So these containers are actually running inside an object known as a pod. To deploy and manage pods inside the worker node, we have an agent named kubelet. This kubelet agent runs on all worker nodes, and it gets information from the master node to decide where a pod should run and how to manage it. Along with this, for a worker machine to create and manage containers, we need a container runtime interface with software like Docker, so the worker machine will also contain a CRI with a runtime such as Docker; other options are containerd, rkt, etc., which are supported by Kubernetes as well. Also, to enable communication between these pods, and to make these pods or applications accessible to the end users, we have another networking component on the worker node, which is known as kube-proxy. Kube-proxy is used for pod-to-pod communication, and also for load balancing, service proxying, and making your application accessible to the end users.

02:24

So these are the different components inside the worker nodes: you have kube-proxy for pod-to-pod communication and to expose your application; you have the container runtime interface to create and delete containers; and you have kubelet, an agent that runs on every worker node, gets information from the master node, and handles the workloads inside the worker node. Along with these, you can have more add-ons or plugins that you install on your worker nodes, but these are all the important components inside your worker nodes.

02:50

Now that we have understood the different components inside worker nodes and how they actually work, let's look at the different components inside the master node. The master node, or control plane, has four very important components that are responsible for managing everything inside the cluster. The first component is the kube API server. This kube API server is like an entry point for everything you do inside the cluster. Let's say you are a DevOps engineer and you want to create a new pod, or maybe delete a pod, or just get its status: if you want to do anything inside the cluster, you will have to talk to the kube API server. You can talk to the kube API server either by running commands using the kubectl command-line tool, by using the Kubernetes dashboard, or by using SDKs. The kube API server will validate whether the request is okay and whether the user is authenticated or not, and if everything checks out, it will get the information from the other components and provide it to the end user or DevOps engineer. So the API server is like an entry point, which validates the request and also authenticates the user.

03:53

The next master node component is the scheduler. The scheduler is used to assign pods to the worker nodes. Let's say you made a request to the API server to create a new pod: the scheduler is going to check which nodes have enough capacity to launch this new pod. Once the scheduler decides which node the pod is going to run on, it provides this information to the kube API server; the kube API server gives this information to the kubelet running on that particular node, and thus the new pod is created on the worker node. So the scheduler is used to assign pods to nodes based on resource capacity and the manifest files that you have defined for your cluster.

04:33

Now, what if there's an issue inside the cluster, or one of these pods shows an error or gets deleted? That's when the next Kubernetes master node component comes into play: the controller manager. The controller manager is used to monitor the entire cluster, and it makes sure that everything is as you have defined inside your manifest files. Let's say you have said you want four pods running: it will make sure that the four pods are running at all times. If that's not the case, it will tell the API server, which will recreate the pod following the same procedure: the scheduler first decides what node to create the pod on, then gives the information to the API server, and then kubelet creates it. So the controller manager is used to monitor and make sure everything is working as you have defined, and it does that by comparing the desired state with the actual state of the cluster.

05:21

But where is this cluster information, or cluster data, stored? It is stored inside the next master node component, which is etcd. etcd is a key-value store that stores all the information about the cluster: what application is running, what pods are running on which nodes, and everything else. One thing to note is that etcd can only talk to the kube API server, so if you want to get any information from etcd, it has to go through the kube API server. So you can see the kube API server is like a center of communication for every component inside the master node, as well as for the kubelet running on the worker node.

05:56

So this is the Kubernetes architecture. We have the master node and the worker node. The master node is responsible for managing workloads inside the cluster; the worker nodes actually run the application workloads. Inside the worker node we have the pods, which are the smallest units responsible for running the application containers; we have kubelet, an agent that runs on all the worker nodes and is responsible for creating pods; next we have the container runtime interface; and we also have kube-proxy for networking and for exposing our application. Inside the master node we have the kube API server, which is the entry point and the authentication system for every request that you make; it also exposes a Kubernetes API for the end user to start using it. The next component is the scheduler, which is used to assign pods to the nodes. Then we have the controller manager, which makes sure everything is working as you have desired. And all the cluster information is stored inside etcd.

06:48

So this is a simplified explanation of Kubernetes architecture. I hope this video was informative. If you have any questions or doubts, do let me know in the comment section, and for more information I would recommend checking out my other video, which explains what Kubernetes is and how it actually works. Do check it out. Thank you, and have a good day.

