Setup Kubernetes Cluster Using Kubeadm [Multi-node]

DevopsCube
17 Apr 2023 · 16:20

Summary

TL;DR: This tutorial provides a step-by-step guide to setting up a Kubernetes cluster using kubeadm, with a focus on the hands-on experience it gives DevOps engineers. The video walks through deploying virtual machines, installing the key Kubernetes components, and configuring the cluster control plane and worker nodes. The presenter also makes the case for using a self-hosted cluster for learning and certification preparation. Additionally, the tutorial covers the key prerequisites, metrics server setup, and deploying an NGINX app, offering practical guidance on managing Kubernetes clusters efficiently.

Takeaways

  • πŸš€ Kubernetes setup tutorial using kubeadm, focusing on creating a multi-node cluster for real-world project simulation.
  • πŸ”— Links to necessary documentation and GitHub repository are provided in the description, with a blog for the latest updates.
  • πŸ› οΈ kubeadm simplifies Kubernetes cluster setup, following best practices and providing hands-on experience with system complexities.
  • πŸ“š Self-hosted Kubernetes clusters offer valuable learning for DevOps engineers, especially for certification exams like CKA and CKS.
  • πŸ’» Prerequisites include two or more virtual machines (VMs), static IPs, and sufficient CPU/RAM for both master and worker nodes.
  • πŸ“Ά Ensure nodes can communicate on required ports and allow proper routing between subnets to avoid IP conflicts.
  • πŸ›‘ Swap must be disabled on all nodes, and nodes should have CRI-O as the container runtime and kubeadm, kubelet, and kubectl installed.
  • πŸ“„ The provided scripts automate the common node setup and the master node configuration, making the process faster.
  • πŸ“Š After setting up the cluster, install the Kubernetes metrics server to track CPU and memory usage across nodes and pods.
  • 🌐 Final validation of the cluster includes deploying an NGINX app and verifying access via NodePort, ensuring the setup is successful.

Q & A

  • What is the purpose of using kubeadm in Kubernetes cluster setup?

    -Kubeadm is used to simplify the process of setting up a working Kubernetes cluster. It follows best practices for configuring the cluster components, making it faster and easier to deploy Kubernetes clusters.
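
    As a rough illustration (not the exact commands from the video's scripts, and with placeholder values), bootstrapping a control plane on a prepared node looks like this:

      # Run on the master node after the container runtime, kubelet, and kubeadm
      # are installed. 10.0.0.10 and the pod CIDR are placeholder values; the pod
      # CIDR must not overlap with your node network.
      sudo kubeadm init \
        --apiserver-advertise-address=10.0.0.10 \
        --pod-network-cidr=192.168.0.0/16

      # Copy the generated admin kubeconfig so kubectl works for the current user.
      mkdir -p "$HOME/.kube"
      sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
      sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"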

  • Why is it recommended to use a self-hosted Kubernetes cluster for learning purposes?

    -A self-hosted Kubernetes cluster provides hands-on experience and exposes learners to the complexities of managing a cluster. This deeper understanding of the control plane and worker node components is essential for DevOps engineers and is especially useful when preparing for certifications like CKA or CKS.

  • What are the prerequisites for setting up a Kubernetes cluster using kubeadm?

    -The prerequisites include having at least two nodes: one master node and one worker node. The master node should have a minimum of 2 vCPUs and 2GB of RAM, while worker nodes require at least 1 vCPU and 2GB of RAM. Additionally, nodes should have an IP range in the 10.x or 172.x series with static IPs.

  • What is the significance of using the Calico Network plugin in this setup?

    -The Calico Network plugin is used to enable pod networking in the Kubernetes cluster. It ensures that there are non-overlapping node and pod IP addresses to avoid any routing conflicts, allowing the nodes and pods to communicate efficiently.

  • What is the purpose of running the 'common.sh' script on all nodes?

    -The 'common.sh' script installs the necessary components: the container runtime (CRI-O), kubelet, kubectl, and kubeadm. It also disables swap (and keeps it disabled across reboots) and sets kubelet's extra arguments so the correct node IPs are used in multi-IP environments (see the sketch below).
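
    A condensed sketch of the kind of steps 'common.sh' performs (simplified from the description above; the repository's script is authoritative, and the interface name eth0 is an assumption):

      #!/bin/bash
      set -euo pipefail

      # Disable swap now and keep it off across reboots (kubeadm requires swap off).
      sudo swapoff -a
      (crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab -

      # ... CRI-O, kubeadm, kubelet, and kubectl installation goes here ...

      # Pin kubelet to the correct node IP on hosts with multiple interfaces.
      local_ip="$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}' | head -1)"
      echo "KUBELET_EXTRA_ARGS=--node-ip=${local_ip}" | sudo tee /etc/default/kubelet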

  • What does the 'master.sh' script do on the master node?

    -The 'master.sh' script sets up the control plane components by initializing kubeadm, configuring networking with the pod CIDR, pulling control plane images, and starting the kubelet service. It also sets up Calico for networking and allows the API server to be accessed via public or private IPs.
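
    A simplified sketch of the public/private IP branching described above (variable names mirror the script's; eth0 and the pod CIDR are assumptions):

      PUBLIC_IP_ACCESS="false"
      NODENAME="$(hostname -s)"
      POD_CIDR="192.168.0.0/16"

      # Pre-pull the control plane images.
      sudo kubeadm config images pull

      if [[ "$PUBLIC_IP_ACCESS" == "false" ]]; then
        # Private IP setup: advertise the API server on the node's private address.
        MASTER_PRIVATE_IP="$(ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1)"
        sudo kubeadm init --apiserver-advertise-address="$MASTER_PRIVATE_IP" \
          --pod-network-cidr="$POD_CIDR" --node-name "$NODENAME"
      else
        # Public IP setup: use the control plane endpoint parameter instead.
        MASTER_PUBLIC_IP="$(curl -s ifconfig.me)"
        sudo kubeadm init --control-plane-endpoint="$MASTER_PUBLIC_IP" \
          --pod-network-cidr="$POD_CIDR" --node-name "$NODENAME"
      fi

      # The script then copies the admin kubeconfig and applies the Calico manifest.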

  • What is the process for adding worker nodes to the Kubernetes cluster?

    -Worker nodes are added to the cluster by running the 'kubeadm join' command on the worker nodes. This command, generated during the master node setup, allows the worker nodes to connect to the control plane. The TLS certificates required for secure communication between the master and worker nodes are automatically created.
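
    For reference, a hedged example of the join flow (the token, address, and hash below are placeholders; the real values come from your master node's output):

      # On the master: print a fresh join command if you lost the original output.
      kubeadm token create --print-join-command

      # On each worker node, run the printed command, e.g.:
      sudo kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:<hash-from-master-output>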

  • Why is it necessary to install the Kubernetes metrics server?

    -The metrics server is required to collect and store resource usage data (CPU and memory) from each node in the cluster. Without it, commands like 'kubectl top' would return errors, making it difficult to monitor the performance of the cluster and its components.
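
    A minimal sketch of installing and exercising the metrics server (the manifest path follows the video's repository layout and may differ in your clone):

      # Apply the manifest; the video's copy adds --kubelet-insecure-tls for lab use.
      kubectl apply -f manifests/metrics-server.yaml

      # Wait for it to become ready, then query node and pod metrics.
      kubectl -n kube-system rollout status deployment/metrics-server
      kubectl top nodes
      kubectl top pods -n kube-system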

  • How can you validate that the Kubernetes cluster is working properly after setup?

    -Validation is done by deploying a sample application (such as nginx) and exposing it using a NodePort service. Accessing the application from the public or private IP of the worker nodes on the specified port (e.g., 32000) confirms that the cluster is functioning correctly.
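
    A sketch of the validation step under the same assumptions (manifest name from the video's repository; replace <worker-node-ip> with a real node IP):

      # Deploy the Deployment plus NodePort Service (node port 32000).
      kubectl apply -f manifests/sample-app.yaml

      # Confirm the pods and the service are up.
      kubectl get pods
      kubectl get svc

      # From your workstation, hit any worker node's IP on the node port.
      curl http://<worker-node-ip>:32000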

  • How can you manage the Kubernetes cluster from your local workstation?

    -To manage the cluster from your local workstation, you need to copy the 'admin.conf' file from the master node to your local machine’s '.kube' directory. This file contains the API server endpoint and authentication details, allowing kubectl commands to interact with the cluster remotely.
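
    A hedged example of the copy via scp (user, key, and host are placeholders; admin.conf is root-owned, hence the readable temporary copy):

      # On the master node:
      sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf && sudo chmod 644 /tmp/admin.conf

      # On your workstation:
      mkdir -p ~/.kube
      scp ubuntu@<master-ip>:/tmp/admin.conf ~/.kube/config
      kubectl get nodes   # should now list the cluster's nodes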

Outlines

00:00

πŸ‘‹ Introduction to Setting Up a Kubernetes Cluster with kubeadm

The video introduces viewers to setting up a Kubernetes cluster using the kubeadm utility. The speaker highlights the importance of gaining hands-on experience with a self-hosted Kubernetes cluster instead of using tools like Kind or Minikube, which are better suited for development purposes. A self-hosted Kubernetes cluster provides valuable learning opportunities, particularly for those preparing for certifications such as CKA or CKS. The prerequisites for following the tutorial include having at least two nodes, proper IP ranges, and specific ports open for communication between nodes. The video emphasizes that all resources and scripts are available in the linked GitHub repository.

05:01

βš™οΈ Deploying Virtual Machines and Configuring Cluster Setup

This section covers deploying the virtual machines and setting up the cluster using shell scripts. Viewers are shown how to provision the VMs with Terraform, including the values to change in the main Terraform file for their own environment. The speaker explains the common.sh script, which sets the Kubernetes version, installs CRI-O as the container runtime, and configures kubelet on all nodes. Viewers can run these scripts to simplify the process or execute the commands manually. This part lays the foundation for the control plane setup on the master node.
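
The provisioning workflow, sketched as shell commands (the repository and folder names follow the video's description; the repository URL is in the video description, so it is left as a placeholder here):

  git clone <kubeadm-scripts-repo-url>
  cd kubeadm-scripts/instances
  # In main.tf, set your AMI ID, key name, and subnet IDs.
  terraform init
  terraform plan
  terraform apply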

10:03

πŸš€ Initializing the Control Plane and Joining Worker Nodes

This paragraph explains the steps for initializing the master node's control plane using the master.sh script. The script configures key variables, installs necessary images, and sets up the Kubernetes control plane. The speaker explains the differences when using public vs. private IP addresses for the Kubernetes API server. Once the master node is initialized, a kubeconfig file and a join command for worker nodes are generated. The next step is to join the worker nodes to the master node using kubeadm join, allowing worker nodes to authenticate and join the cluster.

15:04

πŸ“Š Verifying the Cluster and Deploying the Metrics Server

After joining the worker nodes, the speaker demonstrates how to verify that the cluster components are running smoothly by checking the pods in the kube-system namespace. The Kubernetes API server endpoints are tested to ensure everything is working correctly. To collect and expose resource usage data (CPU, memory) from nodes, the metrics server must be installed separately. The speaker explains the deployment of the metrics server and how it can be used to access pod and node metrics. The importance of valid TLS certificates in production environments is also briefly mentioned.

🌐 Deploying a Sample Nginx App and Accessing the Cluster

The final section covers deploying a sample Nginx application and exposing it via a NodePort service. The speaker explains how to access the application using the worker node's public IP and port. Additionally, instructions are provided for accessing the Kubernetes cluster from a local workstation by copying the kubeconfig file. This allows users to interact with the cluster using kubectl commands from their own machines. The video ends with a call to check the kubeadm documentation for updates and invites viewers to leave comments or questions.

Keywords

πŸ’‘Kubernetes

Kubernetes is an open-source platform designed for automating the deployment, scaling, and operation of application containers. In the video, Kubernetes forms the basis of the cluster setup, where the focus is on building a Kubernetes environment using the kubeadm utility.

πŸ’‘kubeadm

kubeadm is a command-line utility that helps in setting up Kubernetes clusters. It handles most of the complexity of bootstrapping a Kubernetes cluster, allowing users to quickly set up the control plane, manage nodes, and maintain best practices. The video demonstrates using kubeadm to initialize and manage a Kubernetes cluster.

πŸ’‘Control Plane

The control plane is the central part of a Kubernetes cluster, consisting of components like the API server, etcd, and controller manager. In the video, the master node is responsible for running the control plane, managing the state of the entire cluster, and coordinating the nodes.

πŸ’‘Worker Node

Worker nodes are machines that run applications inside Kubernetes Pods. In the video, the worker nodes join the control plane through a token and participate in running workloads managed by Kubernetes. Each worker node communicates with the control plane for task scheduling.

πŸ’‘Container Runtime (CRI-O)

A container runtime is responsible for running containers. CRI-O is one such runtime optimized for Kubernetes. In the video, CRI-O is installed on all nodes as the container runtime to manage the lifecycle of containers within the Kubernetes cluster.

πŸ’‘Calico Network Plugin

Calico is a networking and network security solution for Kubernetes. It provides network connectivity and policy enforcement between Pods in the cluster. In the video, the Calico Network plugin is installed to manage pod-to-pod networking, ensuring seamless communication within the Kubernetes environment.

πŸ’‘Metrics Server

The Metrics Server is a Kubernetes add-on that collects resource usage data from nodes and Pods. It enables monitoring and scaling by providing CPU and memory metrics. In the video, the metrics server is deployed to monitor the cluster's health and provide real-time metrics.

πŸ’‘TLS Certificates

TLS certificates are used in Kubernetes to secure communication between various components, such as between nodes and the control plane. The video explains how kubeadm generates these certificates during the cluster setup to ensure encrypted communication within the cluster.

πŸ’‘Pod CIDR

Pod CIDR is a range of IP addresses assigned to Pods within the Kubernetes cluster. It is important to ensure that there is no overlap between the Pod IP range and the node IP range to prevent routing issues. The video demonstrates configuring the Pod CIDR when setting up the network using Calico.

πŸ’‘kubelet

kubelet is a Kubernetes agent that runs on each node in the cluster. It ensures that the containers are running and communicates with the control plane to receive tasks. The video shows how kubelet is installed on both the master and worker nodes to manage container execution.

Highlights

Introduction to Kubernetes cluster setup using kubeadm utility.

Emphasizes the importance of hands-on experience in building and maintaining self-hosted Kubernetes clusters.

Kubeadm simplifies setting up Kubernetes clusters by handling all components and configurations.

The tutorial focuses on a real-world multi-node cluster setup with master and worker nodes.

Kubernetes cluster management with kubeadm is part of certification exams like CKA and CKS.

Prerequisites include at least two nodes, with IP ranges in the 10.x or 172.x series and static IPs.

Key focus on setting up virtual machines and installing container runtimes like CRI-O.

Comprehensive walkthrough of kubeadm initialization on master nodes and joining worker nodes.

Explanation of TLS certificates and their role in Kubernetes cluster security.

Detailed steps for setting up Calico network plugin for pod networking.

Installing the Kubernetes Metrics Server to collect CPU and memory usage data.

The tutorial provides a hands-on demonstration using AWS Cloud and Terraform for deploying virtual machines.

Instructions for using pre-built shell scripts (common.sh and master.sh) to automate Kubernetes cluster setup.

Final validation of the cluster by deploying an NGINX application and exposing it via NodePort.

How to configure kubectl on a local workstation by copying the admin kubeconfig from the Kubernetes cluster.

Transcripts

00:00

Hello guys, welcome to another practical DevOps tutorial. In this video I'll be showing you how to set up a Kubernetes cluster using the kubeadm utility. Please check the description, where I have given all the links to the required documentation and the GitHub repository for following this tutorial. You can use the blog link in the description as a reference for the entire setup, as it is constantly updated for the latest Kubernetes version.

00:22

kubeadm is a great tool for setting up a working Kubernetes cluster in less time. It simplifies the process of setting up all the Kubernetes cluster components and follows best practices for cluster configuration. There are solutions like Kind and Minikube which you can set up locally to have a Kubernetes environment. Those tools are great for development purposes, but they abstract away all the cluster configuration. While these tools can save time and reduce complexity, it is essential for a DevOps engineer to have a deep understanding of the various components that make up a Kubernetes cluster. Building and maintaining a self-hosted Kubernetes cluster provides valuable hands-on experience and exposes you to the system's complexities. This experience will help you better understand the cluster control plane and worker node components, so I strongly suggest using a self-hosted Kubernetes cluster during your learning process rather than the easily available solutions. With a multi-node cluster, you get a setup that mimics a real-world project setup. Also, if you are preparing for the CKA or CKS certification exams, it is important to note that cluster management using kubeadm is part of the exam syllabus.

01:25

Let's look at the prerequisites for following this tutorial. You should have a minimum of two nodes: one master and one worker node. The master node should have a minimum of 2 vCPUs and 2 GB RAM; for the worker nodes, a minimum of 1 vCPU and 2 GB RAM is recommended. And here is an important requirement: your nodes should have an IP range in the 10.x or 172.x series, with static IPs for the master and worker nodes. We will be using the 192 series as the pod network range through the Calico network plugin. It is very important to have non-overlapping node and pod IP addresses to avoid any kind of IP routing conflicts. Your nodes should be able to talk to each other on all the ports required by Kubernetes. If you are setting up the kubeadm cluster on cloud servers, ensure you allow those ports in the respective firewall configuration, and make sure the subnets have routing rules enabled for the CIDR ranges you use in the setup, to avoid any sort of routing issues. All the commands and scripts used in this guide are hosted on GitHub; clone the repository to follow along with this guide.

02:34

At a high level, here is what we are going to do: deploy three virtual machines; install the container runtime on all the nodes (we'll be using CRI-O); install kubeadm, kubelet, and kubectl on all the nodes; initiate the kubeadm control plane configuration on the master node (it first pulls all the images from registry.k8s.io); join the worker nodes to the control plane; install the Calico network plugin to enable pod networking; install the Kubernetes metrics server to enable pod and node metrics; validate all the cluster components and nodes; and finally, deploy a sample nginx app and validate the cluster.

03:16

Here is how kubeadm works. When you initialize kubeadm, it first runs all the preflight checks to validate the system state, and it downloads all the required cluster container images from the registry.k8s.io container registry. It then generates the required TLS certificates and stores them in the /etc/kubernetes/pki folder. Next, it generates the kubeconfig files for the cluster components in the /etc/kubernetes folder. Then it starts the kubelet service and generates the static pod manifests for all the cluster components, saving them in the /etc/kubernetes/manifests folder. Next, it starts all the control plane components from the static pod manifests, and then installs the CoreDNS and kube-proxy components. Finally, it generates the node bootstrap token; worker nodes use this token to join the control plane. As you can see, all the key cluster configurations will be present under the /etc/kubernetes folder.

04:16

Let's get started with the hands-on labs. You can use any cloud or local virtualization setup of your preference; this setup will work on any platform. All you need is three virtual machines that can talk to each other on the required ports. For this demo I am using AWS Cloud to deploy three virtual machines. I have a simple Terraform script that deploys three t2.medium instances with security groups that allow all traffic between the nodes, and that allow traffic on port 6443 and the NodePort range 30000-32767 from anywhere, so that we can access the API server and applications on a NodePort from our workstation. If you are a Terraform and AWS user, all you have to do is go to the instances folder in the cloned repository, and in main.tf replace the AMI ID, key name, and subnet IDs with your custom values. Then do a terraform init, plan, and apply, and you will have three VMs ready in a matter of minutes.

05:20

Now that we have the VMs ready, let's get started with the setup. To make the setup easier, I have added all the commands to two shell scripts under the scripts folder: common.sh and master.sh. For demonstration purposes, and to save time, I will run the shell scripts containing all the commands. If you want to set up the cluster by executing the commands individually, you can follow the documentation.

05:44

The common.sh script should be run on all the nodes. Let's have a look at the script. Here we have the KUBERNETES_VERSION variable to set the required cluster version. Then we disable swap and add a crontab entry to keep swap off during server reboots; it is a requirement for the setup. Then, from lines 21 to 57, we execute commands to install the CRI-O container runtime, matching the Kubernetes version. From lines 63 to 69, we install kubelet, kubectl, and kubeadm based on the version we set in the KUBERNETES_VERSION variable. From lines 73 to 76, we set the respective node IPs in the kubelet extra args in the /etc/default/kubelet file. This is required for a few setups, because the servers might have more than one IP address and kubelet might end up picking the wrong one. Now you can run these commands one by one on all the nodes, or execute the whole shell script.

06:44

Here I'm logged in to the three nodes as the root user: k8s-master-1 is the master node, and k8s-worker-2 and k8s-worker-3 are the worker nodes. We need to execute the commands as a sudo user, so ensure you're logged in as root. I will clone the kubeadm scripts repository on all the VMs and cd into the scripts folder. Let's execute common.sh on all three nodes; it will take a few minutes for the script execution to complete. On all the nodes, the script has executed successfully.

07:27

Next we have the master.sh script. It should be run on the master node to set up the control plane components. Let's have a look at the script. Here we are setting three environment variables: PUBLIC_IP_ACCESS, NODENAME, and POD_CIDR. If you are working on a POC in your personal or sandboxed environment, you might need access to the Kubernetes API server using the public IP address; for that, you can set PUBLIC_IP_ACCESS to true, because the kubeadm initialization command parameters need to be changed for public IP endpoints. Here we are using a pod CIDR in the 192 series; ensure your node IP range doesn't conflict with the pod CIDR range. Then the images pull command here will download all the required images for the control plane components, because, except for kubelet, every cluster component runs as a container. Then we have a condition that checks whether PUBLIC_IP_ACCESS is set to true or false. If it is false, we fetch the private IP address of the server using the ip addr command and set it in the MASTER_PRIVATE_IP variable; then we initialize kubeadm with the master private IP, node name, and pod CIDR. If PUBLIC_IP_ACCESS is set to true, we retrieve the public IP using curl and the ifconfig.me service and set it in the MASTER_PUBLIC_IP variable; then we initialize kubeadm with the control-plane-endpoint parameter set to the public IP address, instead of the apiserver-advertise-address parameter. That is the only change needed if you want to use the public IP address of the master node. Next, we copy the generated admin kubeconfig file to the home folder so that we can execute kubectl commands from the master node. Finally, we install the Calico CNI plugin to enable pod networking.

09:15

Now that we have an understanding of the master script, let's execute it on the master node. I am executing this on a server with a public IP, so first I need to set the PUBLIC_IP_ACCESS variable to true in the master.sh script on the master node. If you don't want to use the public IP, do not make any changes to the script, as by default the script picks up the private IP address of the server. Let's execute the master.sh script.

09:56

After initialization, you should get an output with the kubeconfig file location and the join command with the token. Copy that and save it to a file; we will need the join command for joining the worker nodes to the master. Now let's verify the kubeconfig by executing a kubectl command to list all the pods in the kube-system namespace. Here you can see all the cluster component pods, like the API server, etcd, CoreDNS, and kube-scheduler, running without any issues. We can also verify the readiness of the API server by querying the Kubernetes API server endpoints using kubectl. Here you can see that all the API server endpoints have returned the ok status, which means we have a working master node without any issues.

10:45

Now let's join the worker nodes to the master node using the kubeadm join command that we got in the output while setting up the master node. If you do not have the command with you, you can print it using the kubeadm token create command from the master node. I will copy the join command and execute it on the two worker nodes. It performs the TLS bootstrapping for the nodes, meaning the TLS certificates required for master and node authentication are automatically created in this process. Once it has executed successfully, you will see output saying that the node has joined the cluster. If you have multiple worker nodes, execute the join command on all of them; I am configuring two worker nodes for this setup. Now, from the master node, let's list the nodes. We can see three nodes: one is the control plane node, and the others are the worker nodes, without any role labels. Let's label the worker nodes as workers using kubectl; you need to replace k8s-worker-2 and k8s-worker-3 with the hostnames of your worker nodes. If you list the nodes now, you can see the worker label on the worker nodes.

12:11

To get CPU and memory metrics of pods, we need the metrics server component in the cluster. It collects and stores resource usage data, such as CPU and memory, from each node in the cluster and exposes this data through the Kubernetes API. kubeadm doesn't install the metrics server component during its initialization; we have to install it separately. To verify this, if you run the top command, you will see the "Metrics API not available" error. Let's deploy the metrics server using the metrics server manifest file present under the manifests folder. This manifest is taken from the official metrics server repo; I have added the kubelet-insecure-tls flag to the container to make it work, because in our setup kubeadm uses self-signed certificates. The insecure flag is not recommended in actual projects or production environments, where you should use valid TLS certificates in the cluster. Let's deploy the manifest. If you check the kube-system namespace, you can see the metrics server getting deployed. Once it is registered, you can check the pod and node metrics using the top command. Here you can see the CPU and memory metrics of the nodes and pods; it means the metrics server is working as expected.

13:47

Our final step is to validate the cluster by deploying an app and accessing it over a NodePort. We will deploy an nginx application and expose it using a service of type NodePort. Under the manifests folder you will find the sample-app.yaml file. In this manifest we have a Deployment object with the latest nginx image and a Service object that exposes the nginx deployment on node port 32000. Now let's deploy the sample app using kubectl. It is successfully deployed. To verify the NodePort service, let's try to access it using a worker node's IP and port 32000; here I'm using the public IP address of the worker node. We are able to see the nginx homepage on the NodePort, which means the cluster setup is working as expected.

14:45

In this whole setup I have used kubectl from the master node. If you want to access the Kubernetes cluster from your local workstation, you need to copy the admin.conf content, that is, the admin kubeconfig file, to your local .kube folder. I assume you have the kubectl utility installed on your workstation. You can either copy the admin.conf file content or use scp to copy the file to your workstation. First, open the admin.conf file present in the /etc/kubernetes folder. Here you can see the API server public endpoint and the certificate and token details that are required to authenticate against the API server; this config has full admin access to the cluster. Now I will copy the whole admin.conf file contents to the clipboard. On my workstation, I will open the config file under the .kube folder, paste the admin.conf content into this file, and save it. If I execute kubectl commands now, they will interact with my kubeadm cluster using the details present in the config file.

16:00

I hope this tutorial was helpful. If you face any issues with the setup or need any clarification, you can drop a comment. Also check the kubeadm documentation link I have added in the description for the latest updates to the setup. In the next video, we will look at the important Kubernetes cluster configurations every DevOps engineer should know. Thank you, and see you in the next video.
