¿QUE ES KUBERNETES? - Introducción al orquestador más usado (What is Kubernetes? An introduction to the most widely used orchestrator)

Pelado Nerd
22 Sept 2020 · 07:57

Summary

TL;DR: This video offers an introductory explanation of Kubernetes, a container orchestration tool created by Google and now open source. It discusses Kubernetes' ability to manage applications across multiple servers, ensuring high availability and reducing downtime through application replication. The script covers the basic architecture, including Masters and Workers, and introduces key components like the API server, controller manager, and scheduler. It also explains the concept of Pods, Services, and the use of declarative manifests for application deployment, highlighting Kubernetes' role in simplifying container management at scale.

Takeaways

  • 😀 Kubernetes is a tool for managing containerized applications across multiple servers, simplifying deployment and scaling.
  • 🔧 Docker is used to create containerized applications, ensuring they run the same way on any machine, avoiding 'it works on my machine' problems.
  • 👨‍💻 Developers can focus on creating and modifying applications without needing to coordinate with system administrators for configuration.
  • 🚀 Kubernetes was created by Google and is now open-source, allowing anyone to download, modify, and improve it.
  • 🌐 Kubernetes solves the problem of managing a large number of containers across many nodes, providing high availability and reducing downtime.
  • 🔄 High availability in Kubernetes is achieved by creating replicas of applications, ensuring traffic is redirected to functioning copies if one fails.
  • 📈 Scalability is managed by Kubernetes, allowing for the creation of additional application copies to handle increased traffic or capacity needs.
  • 🛠 Kubernetes architecture consists of two main groups: Masters and Workers, with the Masters running several components like the API server, controller manager, and scheduler.
  • 📦 Pods in Kubernetes are sets of containers that share a network namespace and IP; each Pod typically runs a single container, though it can run more when tasks are split up.
  • 🔗 An overlay network in Kubernetes allows pods to communicate across different nodes, facilitating inter-pod communication regardless of their location.
  • 🌐 Services in Kubernetes provide a stable way to access applications, with different types like ClusterIP, NodePort, LoadBalancer, and Ingress to manage traffic.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is Kubernetes, explaining what it is, its purpose, and how it works.

  • What is Docker and how does it relate to Kubernetes?

    -Docker is a tool that allows you to run applications in containers, ensuring that the application runs the same way on any machine. It is related to Kubernetes as Kubernetes is an orchestration tool for managing Docker containers across multiple servers.

  • What problem does Kubernetes solve that Docker does not?

    -Kubernetes solves the problem of managing containers across many servers, providing high availability, scaling, and disaster recovery, which Docker does not handle on its own.

  • What is an orchestrator in the context of container management?

    -An orchestrator is a tool that manages the deployment, scaling, and operations of application containers across a cluster of hosts.

  • Who originally created Kubernetes and what is its current status?

    -Kubernetes was originally created by Google and is now an open-source project that anyone can download, use, and modify.

  • What does Kubernetes provide in terms of application management?

    -Kubernetes provides high availability by creating replicas of applications, scaling capabilities to handle varying traffic, and disaster recovery through its declarative manifests.

  • What are the two main components of a Kubernetes architecture?

    -The two main components of a Kubernetes architecture are the Masters and the Workers. The Masters manage the cluster, while the Workers run the applications in containers.

  • What is a Pod in Kubernetes and why are they considered volatile?

    -A Pod in Kubernetes is a set of containers that share a network space and have a single IP. They are considered volatile because they can be destroyed and recreated during deployments, such as version updates.

  • What is a Service in Kubernetes and why is it used?

    -A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy by which to access them. It is used to provide a stable endpoint for accessing the Pods, which are otherwise volatile.

  • What are some types of Services in Kubernetes?

    -Some types of Services in Kubernetes include ClusterIP, NodePort, LoadBalancer, and Ingress, each providing different methods of accessing the Pods.

  • How does Kubernetes handle the creation and management of Pods?

    -Kubernetes uses deployments, which are templates for creating Pods. When a deployment is applied to a Kubernetes cluster, the controller manager takes care of creating the Pods according to the template.
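
To make the deployment-as-template idea above concrete, here is a minimal sketch of a Deployment manifest. The name my-app, the nginx:1.25 image, and the replica count are illustrative assumptions, not details taken from the video.

```yaml
# deployment.yaml -- hypothetical Deployment used as a template for Pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder application name
spec:
  replicas: 3                  # desired number of Pod copies (replicas)
  selector:
    matchLabels:
      app: my-app              # which Pods this Deployment manages
  template:                    # the Pod template the controller manager stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25    # placeholder container image
          ports:
            - containerPort: 80
```

Applying it with kubectl apply -f deployment.yaml hands the desired state to the API server; the controller manager then creates the Pods and the scheduler places them on worker nodes.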

Outlines

00:00

📚 Introduction to Kubernetes

This paragraph introduces the topic of Kubernetes, explaining its purpose and basic functionality. It mentions that Kubernetes is a tool for managing containerized applications across multiple servers, ensuring high availability and scalability. The speaker compares it to Docker, which is used for running applications in containers but lacks the orchestration capabilities that Kubernetes provides. Kubernetes was created by Google and is now an open-source project, allowing anyone to download, modify, and improve it. The paragraph also touches on the high-level architecture of Kubernetes, including the roles of Masters and Workers, the kubelet agent that runs on each worker, and the API server, controller manager, and scheduler that run on the masters.

05:00

🔧 Deep Dive into Kubernetes Features and Architecture

The second paragraph delves deeper into Kubernetes' features, discussing its architecture and the components that make it function effectively. It explains the concept of Pods, which are groups of containers that share a network namespace and IP, and the volatility of Pods, which are destroyed and recreated during deployments. The paragraph also introduces Services in Kubernetes, which provide a stable entry point to access applications running in Pods, and mentions different types of services like ClusterIP, LoadBalancer, Ingress, and NodePort. The speaker emphasizes the use of declarative manifests for managing the state of the cluster, allowing for easy recovery and scaling of applications. The paragraph concludes with a mention of the practicality of running Kubernetes in various cloud environments and the ease of setting it up locally for practice.

Keywords

💡Kubernetes

Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. In the video, Kubernetes is presented as a solution to manage containers across multiple servers, providing high availability and reducing downtime by creating application replicas and handling traffic routing.

💡Containerization

Containerization refers to the process of packaging software into containers that can be run on any system without modification. In the script, Docker is mentioned as a tool for containerization, allowing applications to run with their environment and dependencies encapsulated within a container, ensuring consistent performance across different machines.

💡Orchestration

Orchestration in the context of the video refers to the automated management of containers across multiple servers. It is the process of coordinating the operation of containers to ensure they work together efficiently. Kubernetes serves as an orchestrator, handling tasks such as load balancing, scaling, and self-healing of containers.

💡High Availability

High availability is the property of a system being continuously operational for a desirable period. In the video, Kubernetes achieves high availability by creating multiple replicas of an application, ensuring that if one replica fails, the others can continue to serve the application without affecting the user experience.

💡Pods

In Kubernetes, a pod is the basic unit of deployment and scaling. It is a group of one or more containers that share the same network namespace and can communicate with each other. The script mentions that pods are volatile, meaning they are destroyed and recreated during deployment, and that they should not be directly addressed via IP for service access.
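
For reference, a bare Pod manifest looks like the minimal sketch below; the name and image are illustrative assumptions. In practice Pods are usually created indirectly through a Deployment, precisely because of the volatility described above.

```yaml
# pod.yaml -- hypothetical single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app          # label a Service can later use to find this Pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25  # placeholder image; all containers in the Pod share one IP / network namespace
```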

💡Services

Services in Kubernetes are abstractions that define a logical set of pods and a policy by which to access them. They provide a stable endpoint for accessing the application running in pods. The video script discusses different types of services, such as ClusterIP, NodePort, LoadBalancer, and Ingress, which offer various ways to expose an application to external traffic.
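
A minimal Service manifest, again with placeholder names, could look like the sketch below; it selects Pods by label and gives them a stable virtual IP. Strictly speaking, ClusterIP, NodePort, and LoadBalancer are Service types, while Ingress is a separate resource that routes HTTP traffic to Services.

```yaml
# service.yaml -- hypothetical ClusterIP Service in front of Pods labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: ClusterIP        # default type; NodePort or LoadBalancer expose traffic externally
  selector:
    app: my-app          # backend Pods are matched by this label
  ports:
    - port: 80           # stable port on the Service's unchanging cluster IP
      targetPort: 80     # port the container listens on
```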

💡Deployment

A deployment in Kubernetes is a strategy to declare the desired state of a set of pods. It is used to manage the application's lifecycle, including creating, updating, and scaling the pods. The script explains that once a deployment is applied to a cluster, the controller manager in Kubernetes takes over to ensure the desired state is maintained.

💡Manifests

In Kubernetes, manifests are files that describe the desired state of the system's resources. They are used to create and manage resources declaratively. The video script mentions that Kubernetes uses manifests to define the deployment of applications, allowing for easy scaling and recovery of services.

💡Etcd

Etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. In the context of Kubernetes, etcd is used to store the cluster state, including all configurations and the desired state of the system, ensuring consistency and persistence of the cluster's data.

💡API Server

The API server in Kubernetes is a component that exposes the Kubernetes API. It allows different clients, such as command-line tools, developer UIs, and third-party integrations, to interact with the cluster. The script mentions the API server as part of the master components that handle external interactions with the Kubernetes cluster.

💡Controller Manager

The controller manager in Kubernetes is responsible for running cluster controllers that watch the state of the cluster and ensure it matches the desired state. It is a key component that handles the reconciliation process, making sure that the actual state of the cluster aligns with the state defined in the manifests.

Highlights

Introduction to Kubernetes, explaining its purpose and how it differs from Docker and Docker Swarm.

Docker as a tool for running applications in a containerized manner, ensuring consistent application behavior across different environments.

The challenge of managing containers across multiple servers and the need for an orchestrator to handle this complexity.

Kubernetes as a container orchestration tool created by Google and now open-sourced for community contributions.

Kubernetes' primary function of managing containers across multiple nodes and providing high availability and reducing downtime.

The concept of creating application replicas in Kubernetes to ensure high availability and seamless traffic redirection in case of failures.

Kubernetes' ability to scale applications by creating additional copies to handle increased traffic or capacity needs.

The role of Kubernetes in disaster recovery, simplifying the process of recreating lost applications based on declarative manifests.

Overview of Kubernetes architecture, distinguishing between Masters and Workers, and the function of each component.

Explanation of 'kubelet', the Kubernetes agent that runs on each worker node.

The function of the API server in Kubernetes, which exposes an interface for various clients to interact with the cluster.

The role of the controller manager in monitoring the cluster and ensuring the correct number of containers are running.

The scheduler's responsibility for placing pods based on the instructions from the controller manager.

The use of etcd as the database for storing the state and configurations of the Kubernetes cluster.

Definition and explanation of 'pods' in Kubernetes, which are sets of containers sharing a network namespace.

The volatility of pods and the process of creating new ones during deployment updates.

Introduction to Kubernetes services, which provide a stable entry point to access applications running in pods.

Different types of Kubernetes services, such as ClusterIP, NodePort, LoadBalancer, and Ingress, and their respective use cases.
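
To make the NodePort variant from the highlight above concrete, a hypothetical manifest might look like the sketch below; the names and port numbers are assumptions chosen for illustration.

```yaml
# service-nodeport.yaml -- hypothetical NodePort Service; traffic to <any-node-ip>:30080 reaches the Pods
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app          # backend Pods are matched by this label
  ports:
    - port: 80           # Service port inside the cluster
      targetPort: 80     # container port
      nodePort: 30080    # port opened on every node (default allowed range 30000-32767)
```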

The importance of Kubernetes' declarative manifests for defining and managing application deployment templates.

Practical advice on when to use Kubernetes in a business environment based on the scale and needs of the operation.

Availability of various Kubernetes providers and the ease of running Kubernetes with the support of cloud services like Digital Ocean, Google Cloud, and Amazon.

A personal account of using Kubernetes in a large-scale operation with approximately 3,000 servers running 100 pods each.

Comparison between using Kubernetes and managing containers with OpenStack's LXD, highlighting the impracticality of the latter for production environments.

Transcripts

00:00
Well, today we're going to talk about Kubernetes and explain what it is. I'm making this video because many people have watched my other, somewhat more advanced videos and may not really know what Kubernetes is or what it's for. A couple of weeks ago I made a video about the differences between Kubernetes, Docker, and Docker Swarm, and now we're going to do a short introduction to Kubernetes so you can understand how it works and then move on to the next videos, where you'll learn how to use it.

[Music]

00:31
By now I imagine you already know what Docker is. Docker is a tool that lets you run your applications in a containerized way, as containers, and create images so you can move an image to any machine and bring the application up with its whole environment, all the libraries and files it needs, inside that same image. This is great because you make sure that when you run your application with Docker on your machine, it works exactly the same as on a server or on a colleague's machine, which avoids the classic "it works on my machine but not on the server" problem. It also lets you shift the work of building that image and that whole application environment onto the developers, so they can make those changes without having to coordinate with the systems people to configure the application.

01:21
Obviously, as I just said, you can run these containers not only on your machine but also on servers. But what happens if you have to manage those containers across many servers, maybe dozens, hundreds, or even thousands of them? Managing all of that by hand is very hard for the systems people, so what you need is an orchestrator. An orchestrator is what lets you manage those containerized applications across many servers; those servers can be virtual, physical, whatever.

01:47
Kubernetes is a container orchestration tool. It was created by Google and is now open source, so anyone can not only download it but also modify it and make it better. The main problem Kubernetes solves is managing many containers across many nodes. It doesn't just give you that; it also gives you high availability and removes downtime. Kubernetes provides that high availability by creating replicas of your application: you can have several copies of it, and if one of those copies stops working, Kubernetes sends the traffic to one that is working, all of it transparently for the user visiting your application. It also lets you scale, that is, create more copies of your application if you get more traffic or want extra capacity for whatever reason. And it takes care of disaster recovery, since it's very easy to recreate applications that died, because everything is based on declarative manifests.

02:45
Let's take a quick look at the Kubernetes architecture. Kubernetes basically has two big groups: the Masters and the Workers. Each worker runs a Kubernetes agent called the kubelet. On the Masters we have several components. Among them is the API server, which exposes an interface so that different clients can interact with Kubernetes, for example web interfaces, other clients that make API calls, or the Kubernetes CLI client, kubectl. The controller manager is the one that manages what happens in the cluster: it is constantly aware of which containers are running and which containers should be running. The scheduler receives the orders from the controller manager and places the pods from node to node. All of this is stored in a database called etcd; that database holds not only the state of your Kubernetes cluster but also all the information and configuration of your cluster.

03:38
The nodes, or workers, are where your application's pods run. A pod is a set of containers that has a single IP, or more precisely, containers that share the network namespace. The most common setup is a single container per pod, but there are cases where you run more than one, since you can split up some tasks. Keep in mind that these pods are volatile: when you do a deploy, for example to change the version, the pods are destroyed and new ones are created. Across all of our nodes we have something called an overlay network, which shares one network among all the pods no matter which node they're on, so pod A can reach pod B even if they're on different nodes.

04:18
As I just said, those pods are volatile, so it's not a very good idea to point at a pod's IP to reach it. What you generally do is create something called a Service, a Kubernetes Service. There are several types of Services, but the most common ones are called ClusterIP, which are characterized by having an IP that never changes. So instead of going pod to pod you go from a pod to a Service, and that Service has, as its backend, the pods that run your application. The Service finds those pods based on a set of rules, or labels, that you put on your pods so the Service can find them. As I said, there are several types of services, for example LoadBalancer, Ingress, ClusterIP, and NodePort; these are different ways of creating services. NodePort, for example, lets you open a port on the node where your traffic will arrive; Ingress lets you create rules based on the subdomain, for example; and LoadBalancer creates a load balancer in your cloud provider, be it Amazon, DigitalOcean, or Google.

05:15
As I said, Kubernetes is managed with declarative manifests. For example, you can create a manifest that contains a template for your application; this kind of template is called a Deployment. A Deployment is basically a template for creating pods. Once you apply that template to your Kubernetes cluster, the controller manager takes care of creating those pods across your whole cluster.

05:39
Well, that was a very basic explanation of what Kubernetes is, and I think it will help you get started. From here you can go to the playlist shown here, with all the Kubernetes videos I've made, where you can learn about the different features and things you can do with Kubernetes. If you came here wanting to know whether or not you should use Kubernetes at your company, I recommend the video linked here about Docker and Docker Compose, where I go over the differences between them and when it makes sense to run Kubernetes at your company. At this point there are many Kubernetes providers that honestly work very well, among them DigitalOcean, Google Cloud, Amazon, and several more, so running Kubernetes today is fairly easy. I'll also leave a video here on how to run Kubernetes locally with just a virtual machine that already has all the Kubernetes components, so you can practice and try out the manifests I mentioned.

06:27
In my case, for example, where I work we have approximately 3,000 servers and run about 100 pods per server, so imagine: it would be impossible to run all those containers by hand, going to each server and executing a command to run each container. That's why we need an orchestrator, so we can manage all those servers in a more abstract way; all I do is tell Kubernetes "create this application for me" and Kubernetes takes care of placing it where it has to go.

06:57
So that was all for today. I hope you learned something. Subscribe for more videos and give it a like if you liked it.

[Music]

07:05
Give it a thumbs down if, instead of using Kubernetes to run your containers, your applications, in pods, you use OpenStack with LXD, and you download DevStack (DevStack, it was called), and even though it says right there in big letters "do not run this in production", you run a script, install DevStack on your servers, run everything there, and tell everyone "look, I have my own Amazon, I have my own Amazon AWS." I never did that. Never.


Related Tags
Kubernetes, Container Orchestration, Docker, Deployment, DevOps, High Availability, Scalability, Cloud Services, Manifests