What is Kubernetes | Kubernetes explained in 15 mins

TechWorld with Nana
20 Dec 2019 · 14:13

Summary

TLDR: The video gives a detailed explanation of Kubernetes, an open-source container orchestration framework developed by Google. It covers its definition, the problems it solves, its basic architecture with master and worker nodes, and key components such as pods, containers, and services, along with how they are configured. It highlights the high availability, scalability, and disaster recovery features that container orchestration tools like Kubernetes provide.

Takeaways

  • 📚 Kubernetes is an open-source container orchestration framework, originally developed by Google.
  • 🔄 It solves the problem of managing hundreds or thousands of containers across different environments such as physical machines, virtual machines, or cloud environments.
  • 🤖 It guarantees high availability, scalability, and disaster recovery for applications.
  • 🌐 The basic Kubernetes architecture consists of at least one master node and several worker nodes, where the worker nodes run the application containers.
  • 🔧 The master node runs key processes such as the API server, the controller manager, and the scheduler.
  • 💾 etcd is a crucial component that stores the configuration and current state of the Kubernetes cluster.
  • 🌐 A virtual network lets all the nodes in the cluster communicate with each other and act as one powerful machine.
  • 📦 A Pod (the unit of deployment) is the smallest unit in Kubernetes and can contain multiple containers.
  • 🔄 Pods are ephemeral components and can die frequently, which is why services are used to provide permanent IP addresses.
  • 🛠 Kubernetes configuration goes through the API server and is written in YAML or JSON format.
  • 🔄 Kubernetes works declaratively, striving to reach the desired state declared by the user.

Q & A

  • What is Kubernetes?

    -Kubernetes is an open-source container orchestration framework, originally developed by Google, that helps manage applications made up of hundreds or even thousands of containers across different environments such as physical machines, virtual machines, or cloud environments.

  • What problems does Kubernetes solve?

    -Kubernetes solves the problems of managing a large number of containers across multiple environments, guaranteeing high availability, scalability, and disaster recovery for applications.

  • What does a basic Kubernetes architecture look like?

    -A basic Kubernetes architecture consists of at least one master node connected to several worker nodes, where the application containers run and the workload is distributed.

  • Which essential processes run on the Kubernetes master node?

    -The master node runs essential processes such as the API server (the entry point for communicating with the cluster), the controller manager (which keeps track of what is happening in the cluster), the scheduler (responsible for scheduling containers on worker nodes), and etcd (which stores the current state of the cluster).

  • What is a pod in Kubernetes?

    -A pod is the smallest unit of configuration and interaction in Kubernetes; it acts as a wrapper that encapsulates one or more containers. Pods are Kubernetes components that manage the containers inside them without direct user intervention.
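    As an illustration, a minimal pod manifest might look like the following sketch (the names `my-app-pod` and the `nginx` image are hypothetical examples, not taken from the video; in practice pods are usually created via a deployment rather than directly):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app-pod        # hypothetical example name
      labels:
        app: my-app           # label used by services to find this pod
    spec:
      containers:
        - name: my-app            # the main application container
          image: nginx:1.25       # example image; any container image works
          ports:
            - containerPort: 80   # port the container listens on
    ```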

  • How do pods communicate in Kubernetes?

    -Pods in Kubernetes communicate through internal IP addresses assigned to each pod, allowing the applications running in different pods to talk to each other via those IP addresses.

  • What is a service in Kubernetes and what is its function?

    -A service in Kubernetes is a component that provides a permanent IP address for communication between pods and acts as a load balancer, letting applications keep communicating without interruption even when pods restart and get new IP addresses.
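    A minimal service manifest illustrating this idea might look as follows (the name, labels, and ports are hypothetical examples; the service forwards traffic to whatever pods match its label selector):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service    # hypothetical example name
    spec:
      selector:
        app: my-app           # routes traffic to pods labeled app=my-app
      ports:
        - port: 80            # stable port exposed by the service
          targetPort: 8080    # port the container inside the pod listens on
    ```

    The service's IP address stays stable even as the pods behind it are restarted and receive new IPs, and traffic is load-balanced across all matching pods.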

  • How are Kubernetes components such as pods and services configured?

    -Kubernetes components are configured through the API server: Kubernetes clients (such as the UI, scripts, or command-line tools) send configuration requests in YAML or JSON format.
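    For example, a deployment request like the one described in the video, asking for two replica pods of `my-app` with a container based on `my-image`, might be sketched in YAML as follows (the environment variable and port values are illustrative assumptions):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app              # deployment name from the video's example
    spec:
      replicas: 2               # desired state: two pod replicas
      selector:
        matchLabels:
          app: my-app
      template:                 # blueprint for the pods this deployment creates
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-image   # placeholder image name from the video
              env:
                - name: SOME_ENV          # hypothetical environment variable
                  value: "some-value"
              ports:
                - containerPort: 8080     # hypothetical container port
    ```

    Because the request is declarative, if one of the two replicas dies, the controller manager notices that the actual state (one pod) differs from the desired state (two pods) and recreates the missing replica automatically.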

  • What is container orchestration and why is it needed?

    -Container orchestration is a technology that automates the deployment, scaling, monitoring, and maintenance of containerized applications. It is needed because of the rise of microservices and the complexity of managing hundreds or thousands of containers across multiple environments.

  • What happens if a Kubernetes master node fails?

    -If a Kubernetes master node fails, a backup is required to regain access to the cluster. In production environments, having at least two master nodes is recommended to keep the cluster running in case of failure.

  • How does Kubernetes guarantee high availability of applications?

    -Kubernetes guarantees high availability through pod replication, constant monitoring of container state, and automatic restarts of failed containers, ensuring that applications remain accessible to users.

Outlines

00:00

📚 Introduction to Kubernetes

This paragraph starts by explaining what Kubernetes is, its official definition, and how it works at a basic level. Kubernetes is an open-source container orchestration framework originally developed by Google, capable of managing applications made up of hundreds or thousands of containers across different environments such as physical machines, virtual machines, or cloud environments. It also discusses the problems Kubernetes solves, such as high availability, scalability, and disaster recovery, and presents a case study to better understand the need for container orchestration technologies.

05:01

🖥️ Basic Kubernetes Architecture

This paragraph describes the basic architecture of a Kubernetes cluster, made up of at least one master node and several worker nodes. Each node runs a 'kubelet' process that enables communication and task execution. Worker nodes host the containers of different applications, while the master node runs essential processes such as the API server (the entry point to the cluster), the controller manager, and the scheduler. It also highlights the role of the etcd store in keeping the current state of the cluster, and the virtual network that lets the nodes communicate as if they were a single machine.

10:05

📦 Basic Kubernetes Concepts: Pods and Containers

This paragraph covers fundamental Kubernetes concepts such as pods and containers. A pod is the smallest configurable unit: a wrapper that can hold several containers, usually one per application. Pods are ephemeral and may restart frequently, which leads to the concept of a 'service', which provides a permanent IP address and acts as a load balancer for the pods. The explanation includes how Kubernetes clients interact with the API server to configure components such as pods and services.

🔧 Creating and Configuring Kubernetes Components

Finally, it explains how Kubernetes components such as pods and services are created and configured through configuration requests sent by clients to the API server. An example shows how a deployment with replicated pods is configured, and the declarative approach of these requests is highlighted: the desired outcome is declared, and Kubernetes works to reach that state. This guarantees that even if a pod fails, Kubernetes replaces it and keeps the cluster in the desired state.


Keywords

💡Kubernetes

Kubernetes is an open-source container orchestration framework, originally developed by Google. It is essential for managing applications made up of hundreds or thousands of containers across different environments such as physical machines, virtual machines, cloud environments, and even hybrid deployment environments. The video describes it as a tool that helps ensure high availability, scalability, and disaster recovery for applications.

💡Containers

Containers are application runtime units that package software together with its dependencies into a lightweight bundle, allowing the application to run consistently on any system. Kubernetes helps manage these containers, making it easier to deploy and maintain applications composed of multiple containers.

💡Container Orchestration

Container orchestration is the process of coordinating and managing the deployment, execution, and maintenance of many containers in a computing environment. Kubernetes is a container orchestration tool that enables efficient management of containers at scale, guaranteeing high availability, scalability, and disaster recovery.

💡Master Node

In Kubernetes, the master node is the main component that directs the cluster, running essential processes such as the API server, the controller manager, and the scheduler. The master node is crucial to cluster operation, so multiple master nodes are recommended in production environments to guarantee high availability.

💡Worker Node

Worker nodes in Kubernetes are the nodes that run the containerized applications. Each runs a kubelet process that lets it communicate with the master node and execute tasks such as deploying containers. Worker nodes are usually larger and have more resources than master nodes, since they carry the application workload.

💡Pods

Pods are the smallest configurable unit in Kubernetes: a wrapper that can hold one or more containers. Pods are managed by Kubernetes and take care of running and maintaining the containers inside them. Pods are ephemeral, meaning they can be restarted or replaced, and Kubernetes ensures that containers inside a pod restart automatically if they fail.

💡Services

Services in Kubernetes are components that provide a stable IP address and load balancing for pods. Services let applications communicate with each other without worrying about the dynamic IP addresses that change when pods restart. Services are essential for keeping applications connected and accessible in a dynamic environment.

💡High Availability

High availability refers to an application's ability to remain available and accessible to users without downtime. Kubernetes provides high availability through its container orchestration, keeping applications running even in the face of failures or infrastructure problems.

💡Scalability

Scalability refers to an application's ability to increase or decrease its capacity and performance according to demand. Kubernetes enables scalability by orchestrating the deployment and maintenance of containers across multiple worker nodes, dynamically adjusting the number of running containers as needed.

💡Disaster Recovery

Disaster recovery is the process of restoring a system or application to its previous state after a catastrophic event, such as data loss or a server failure. Kubernetes offers disaster recovery mechanisms to ensure that applications can return to their last known state after infrastructure problems.

💡etcd

etcd is a distributed key-value store used by Kubernetes to hold the configuration and current state of the cluster. etcd stores critical information such as node configuration and the status of containers and pods, which makes it possible to recover and restore the cluster after a failure.

Highlights

Kubernetes is an open-source container orchestration framework developed by Google.

It manages containers, such as Docker, in various environments like physical machines, VMs, or cloud setups.

Kubernetes helps manage applications composed of hundreds or thousands of containers.

The rise of microservices increased the use of container technologies due to their suitability for small, independent apps.

Container orchestration tools like Kubernetes address the complexity of managing numerous containers across different environments.

Kubernetes ensures high availability, scalability, and disaster recovery for applications.

A Kubernetes cluster consists of at least one master node and multiple worker nodes, each running a kubelet process.

Worker nodes host Docker containers of different applications, depending on the workload distribution.

The master node runs essential Kubernetes processes like the API server, controller manager, and scheduler.

etcd key-value storage holds the Kubernetes cluster's configuration and status data, enabling backup and restore from snapshots.

A virtual network connects all nodes within the Kubernetes cluster, turning them into a single, resourceful entity.

Pods are the smallest unit in Kubernetes, acting as wrappers for containers; a pod can house multiple containers, but usually runs one application.

Each pod is assigned its own IP address, allowing for internal communication within the cluster.

Pods are ephemeral and can die frequently; Kubernetes automatically restarts stopped containers within pods.

Services in Kubernetes provide a stable IP address and load balancing for pods, overcoming the challenge of dynamic IP addresses.

Kubernetes configurations are declarative, defining the desired state for the system to achieve and maintain.

The API server is the entry point for all configuration requests in the Kubernetes cluster.

Kubernetes deployment is a template for creating pods, allowing for the specification of replicas, container images, and environment variables.

Transcripts

00:00

So in this video I'm going to explain what Kubernetes is. We're going to start off with the definition to see what the official definition is and what it does. Then we're going to look at the problem-solution case study of Kubernetes: basically, why did Kubernetes even come around, and what problems does it solve? We're going to look at the basic architecture: what are the master nodes and the worker nodes, and what are the Kubernetes processes that actually make up the platform mechanism? Then we're going to see some basic concepts and components of Kubernetes, which are pods and containers and services, and what the role of each one of those components is. And finally, we're going to look at a simple configuration that you as a Kubernetes cluster user would use to create those components and configure the cluster to your needs.

01:00

So let's jump right into the definition: what is Kubernetes? Kubernetes is an open-source container orchestration framework which was originally developed by Google. On the foundation, it manages containers, be it Docker containers or containers from some other technology, which basically means that Kubernetes helps you manage applications that are made up of hundreds or maybe thousands of containers, and it helps you manage them in different environments like physical machines, virtual machines, cloud environments, or even hybrid deployment environments.

01:38

So what problems does Kubernetes solve, and what are the tasks of a container orchestration tool, actually? To go through this chronologically: the rise of microservices caused increased usage of container technologies, because containers actually offer the perfect host for small, independent applications like microservices. And the microservice technology actually resulted in applications that are now comprised of hundreds, or sometimes maybe even thousands, of containers. Now, managing those loads of containers across multiple environments using scripts and self-made tools can be really complex and sometimes even impossible, so that specific scenario actually caused the need for having container orchestration technologies. What those orchestration tools like Kubernetes do is actually guarantee the following features. One is high availability; in simple words, high availability means that the application has no downtime, so it's always accessible by the users. A second one is scalability, which means that the application has high performance: it loads fast and users get very high response rates from the application. And the third one is disaster recovery, which basically means that if the infrastructure has some problems, like data is lost or the servers explode or something bad happens with the server center, the infrastructure has to have some kind of mechanism to pick up the data and restore it to the latest state, so that the application doesn't actually lose any data and the containerized application can run from the latest state after the recovery. All of these are functionalities that container orchestration technologies like Kubernetes offer.

03:41

So how does the Kubernetes basic architecture actually look? The Kubernetes cluster is made up of at least one master node, and then connected to it you have a couple of worker nodes, where each node has a kubelet process running on it. Kubelet is actually a Kubernetes process that makes it possible for the nodes in the cluster to talk to each other, to communicate with each other, and actually execute some tasks on those nodes, like running application processes. Each worker node has Docker containers of different applications deployed on it, so depending on how the workload is distributed, you would have a different number of Docker containers running on the worker nodes. The worker nodes are where the actual work is happening, so here is where your applications are running.

04:36

So the question is, what is running on the master node? The master node actually runs several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. One of such processes is the API server, which also is a container. The API server is actually the entry point to the Kubernetes cluster, so this is the process which the different Kubernetes clients will talk to: a UI if you're using the Kubernetes dashboard, an API if you're using some scripts and automation technologies, and a command-line tool. All of these will talk to the API server. Another process that is running on the master node is the controller manager, which basically keeps an overview of what's happening in the cluster: whether something needs to be repaired, or maybe a container died and needs to be restarted, etc. And another one is the scheduler, which is basically responsible for scheduling containers on different nodes based on the workload and the available server resources on each node. So it's an intelligent process that decides on which worker node the next container should be scheduled, based on the available resources on those worker nodes and the load that container needs.

06:02

Another very important component of the whole cluster is actually the etcd key-value storage, which basically holds, at any time, the current state of the Kubernetes cluster. It has all the configuration data inside and all the status data of each node and each container inside of that node. The backup and restore that we mentioned previously is actually made from these etcd snapshots, because you can recover the whole cluster state using that etcd snapshot. And last but not least, also a very important component of Kubernetes, which enables those nodes, worker nodes and master nodes, to talk to each other, is the virtual network that spans all the nodes that are part of the cluster. In simple words, the virtual network actually turns all the nodes inside of the cluster into one powerful machine that has the sum of all the resources of the individual nodes.

07:03

One thing to be noted here is that worker nodes, because they actually carry the most load since they are running the applications, usually are much bigger and have more resources, because they will be running hundreds of containers inside of them, whereas the master node will be running just a handful of master processes, like we see in this diagram, so it doesn't need that many resources. However, as you can imagine, the master node is much more important than the individual worker nodes, because if, for example, you lose master node access, you will not be able to access the cluster anymore, and that means that you absolutely have to have a backup of your master at any time. So in production environments, usually you would have at least two masters inside of your Kubernetes cluster, but in many cases of course you're going to have multiple masters, where if one master node is down, the cluster continues to function smoothly because you have other masters available.

08:09

So now let's look at some Kubernetes basic concepts like pods and containers. In Kubernetes, a pod is the smallest unit that you as a Kubernetes user will configure and interact with. A pod is basically a wrapper of a container, and on each worker node you're going to have multiple pods, and inside of a pod you can actually have multiple containers. Usually per application you would have one pod, so the only time you would need more than one container inside of a pod is when you have a main application that needs some helper containers. So usually you would have one pod per application: a database, for example, would be one pod, a message broker would be another pod, a server would be again another pod, and your Node.js application, for example, or a Java application, would be its own pod. And as we mentioned previously as well, there is a virtual network that spans the Kubernetes cluster, and what it does is assign each pod its own IP address. So each pod is its own self-contained server with its own IP address, and the way that they can communicate with each other is by using those internal IP addresses. To note here: we don't actually configure or create containers inside of the Kubernetes cluster; we only work with the pods, which are an abstraction layer over containers. A pod is a component of Kubernetes that manages the containers running inside itself without our intervention, so for example, if a container stops or dies inside of a pod, it will be automatically restarted inside of the pod.

10:05

However, pods are ephemeral components, which means that pods can also die very frequently, and when a pod dies, a new one gets created. Here is where the notion of service comes into play. What happens is that whenever a pod dies and gets restarted, a new pod is created, and it gets a new IP address. So, for example, if you have your application talking to a database pod using the IP address the pod has, and the pod restarts, it gets a new IP address; obviously it would be very inconvenient to adjust that IP address all the time. Because of that, another component of Kubernetes called a service is used, which basically is an alternative, or a substitute, for those IP addresses. So instead of having those dynamic IP addresses, there are services sitting in front of the pods that talk to each other. Now if a pod behind the service dies and gets recreated, the service stays in place, because their lifecycles are not tied to each other. The service has two main functionalities: one is a permanent IP address which you can use to communicate between the pods, and at the same time it is a load balancer.

11:31

So now that we have seen the basic concepts of Kubernetes, how do we actually create those components, like pods and services, to configure the Kubernetes cluster? All the configuration in a Kubernetes cluster actually goes through the master node, via the process called the API server, which we mentioned briefly earlier. Kubernetes clients, which could be a UI (a Kubernetes dashboard, for example), an API (which could be a script or a curl command), or a command-line tool like kubectl, all talk to the API server and send their configuration requests to it, since it is the main entry point, or the only entry point, into the cluster. These requests have to be either in YAML format or JSON format, and this is how an example configuration in YAML format actually looks. With this, we are sending a request to Kubernetes to configure a component called a deployment, which is basically a template, or a blueprint, for creating pods. In this specific configuration example, we tell Kubernetes to create two replica pods for us, called my-app, with each pod replica having a container based on my-image running inside. In addition to that, we configure what the environment variables and the port configuration of this container inside of the pod should be. As you see, the configuration requests in Kubernetes are declarative in form: we declare what our desired outcome from Kubernetes is, and Kubernetes tries to meet those requirements. Meaning, for example, since we declared we want two replica pods of the my-app deployment to be running in the cluster, if one of those pods dies, the controller manager will see that the "is" and "should" states are now different: the actual state is one pod, our desired state is two. So it goes to work to make sure that this desired state is recovered, automatically restarting the second replica of that pod. And this is how Kubernetes configuration works with all of its components, be it the pods or the services or the deployments, what have you.

13:46

Thanks for watching the video, I hope it was helpful, and if it was, don't forget to like it. If you want to be notified whenever a new video comes out, then subscribe to my channel. If you have any questions, or if something wasn't clear in the video, please post them in the comment section below and I will try to answer them. So thank you, and see you in the next video.
