3.1.6 Containers

Cognitive Class
22 Mar 2020, 09:00

Summary

TL;DR: This video discusses containerization, a technology that packages application code with its libraries and dependencies into executable units that can run on any platform. It contrasts containers with virtual machines, highlighting their smaller size, faster deployment, and portability without the need for a guest OS. It traces the history of container technology back to the Linux kernel's introduction of control groups (cgroups) in 2008 and explains how containers enable agile DevOps practices by avoiding 'works on my machine' issues. It also walks through the container creation process, from manifest to image to container, and shows how containers are more resource-efficient and scalable than VMs.

Takeaways

  • 📦 **Containers Defined**: Containers are a unit of software that packages an application with its libraries and dependencies, allowing it to run consistently across different environments.
  • 🚀 **Efficiency**: Containers are smaller and faster than virtual machines, as they do not require a guest OS, instead leveraging the host OS's features and resources.
  • 🌐 **Portability**: Containers are designed to be portable, ensuring that applications run the same way on a desktop, in traditional IT, or in the cloud.
  • 📚 **Containerization History**: The foundation of modern container technologies was laid in 2008 with the introduction of Linux control groups, which led to tools like Docker and Kubernetes.
  • 🛠️ **Development to Production**: The script uses a node.js application as an example to highlight the benefits of containerization in moving applications from development to production.
  • 🖥️ **Virtual Machines (VMs)**: VMs require a guest OS and additional libraries, leading to larger file sizes and resource consumption compared to containers.
  • 🔄 **Scalability**: Containers are easier to scale due to their lightweight nature; they do not need to duplicate OS dependencies for each instance.
  • 🔧 **Container Process**: The process of containerization typically involves creating a manifest (like a Dockerfile), building the image, and then running the container.
  • 🔀 **Modularity**: Containers allow for modular deployment, where different applications can be scaled independently without being tied to the same VM.
  • 🔄 **Resource Sharing**: Container technologies enable resource sharing among different processes, making efficient use of available hardware resources.
  • 🔄 **Agility**: Containerization supports agile DevOps practices by simplifying the development, deployment, and scaling of cloud-native applications.

Q & A

  • What is a container in the context of software?

    -A container is an executable unit of software that packages application code along with its libraries and dependencies, allowing it to run consistently across different environments like desktop, traditional IT, or the cloud.

  • How do containers differ from virtual machines?

    -Containers are lightweight and do not require a guest OS in each instance. Instead, they leverage the features and resources of the host OS, unlike virtual machines that require a full guest OS for each instance.

  • When was the foundation for modern container technology introduced?

    -The foundation for modern container technology was introduced in 2008, when the Linux kernel added cgroups (control groups).

  • What are some examples of container technologies mentioned in the script?

    -Examples of container technologies mentioned include Docker, Cloud Foundry, and Rocket.

  • What is the smallest size of a node.js VM mentioned in the script?

    -The smallest size of a node.js VM mentioned is over 400 megabytes, which is significantly larger than the node.js runtime and application itself, which would be under 15 megabytes.

  • Why can using VMs create issues when pushing applications into production?

    -Using VMs can create issues in production because applications might work on the local machine but encounter incompatibilities when deployed on a VM, which can hinder agile DevOps, continuous integration, and delivery.

  • What is the three-step process for working with containers?

    -The three-step process for working with containers involves creating a manifest (like a Dockerfile), building the image, and then creating the container itself.
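A minimal sketch of that flow, assuming a hypothetical node.js app whose entry point is server.js listening on port 3000 (the Dockerfile contents, image name, tag, and ports are illustrative, not taken from the video):

```bash
# Step 1: the manifest - a Dockerfile describing how to assemble the image
cat > Dockerfile <<'EOF'
# Base image providing the node.js runtime
FROM node:18-alpine
WORKDIR /app
# Install the app's libraries and dependencies
COPY package*.json ./
RUN npm install
# Copy in the application code and declare how to start it
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF

# Step 2: the image - built from the manifest
docker build -t my-node-app:1.0 .

# Step 3: the container - a running instance of the image
docker run -d -p 3000:3000 --name web1 my-node-app:1.0
```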

  • What is the role of a Dockerfile in containerization?

    -A Dockerfile is a manifest that describes the container, specifying the steps needed to assemble the container image.

  • How does containerization help with scaling applications?

    -Containerization helps with scaling applications by allowing for the deployment of multiple lightweight containers without the need for a guest OS, thus using fewer resources and enabling more efficient scaling.
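Continuing the hypothetical sketch above, scaling out just means starting more containers from the same image; no guest OS is duplicated per copy, only the process and its libraries:

```bash
# Two more copies of the same image alongside web1, each on its own host port
docker run -d -p 3001:3000 --name web2 my-node-app:1.0
docker run -d -p 3002:3000 --name web3 my-node-app:1.0

# All three instances share the host OS kernel
docker ps
```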

  • What is the advantage of container technology when accessing third-party services?

    -Container technology allows for modular and cloud-native architectures, enabling the deployment of individual components like a Python application for accessing a third-party service, without the need to bundle them into a single VM.
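A hedged sketch of that modular layout, reusing the hypothetical image names from above and assuming a separately built Python image called my-python-api; a user-defined Docker network lets the node.js containers reach the Python service by name:

```bash
# A shared network so containers can resolve each other by container name
docker network create app-net

# One copy of the Python service that fronts the third-party API
docker run -d --network app-net --name python-api my-python-api:1.0

# The node.js app joins the same network and can call the service at
# http://python-api:<port>; it can still be scaled independently
docker run -d --network app-net -p 3000:3000 my-node-app:1.0
```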

  • How do shared resources work in container-based technology?

    -In container-based technology, shared resources can be utilized by any container if they are not being used by another process, allowing for efficient resource allocation and utilization.
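That sharing is backed by the kernel's cgroups; as an illustrative sketch, Docker exposes per-container limits through flags such as --memory and --cpus, and docker stats reports actual usage (image name is the hypothetical one used earlier):

```bash
# Cap this container at half a CPU core and 256 MB of RAM; whatever it
# does not use stays available to the other containers on the host
docker run -d --memory=256m --cpus=0.5 -p 3003:3000 my-node-app:1.0

# One-shot view of live CPU and memory consumption per container
docker stats --no-stream
```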

  • What does the script suggest about the future lesson's topic?

    -The script suggests that the next lesson will cover cloud storage, which is likely to discuss storage solutions that complement containerization and cloud-native applications.

Outlines

00:00

💻 Introduction to Containers

The script introduces containers as a method of packaging application code with its libraries and dependencies into an executable unit that can run on any platform, including desktops, traditional IT, and the cloud. Containers are highlighted for their small size, speed, and portability, contrasting with virtual machines that require a guest OS in each instance. Sai Vennam, a developer advocate at IBM, discusses the history of container technology, starting from the Linux kernel's introduction of control groups in 2008. He uses a node.js application as an example to illustrate the advantages of containerization over VMs, such as reduced resource consumption and ease of scaling. Sai also touches on the issues of portability and compatibility that arise with VMs, which containers help to resolve.

05:01

🚀 Containerization Process and Benefits

This paragraph delves into the three-step process of containerization: creating a manifest (like a Dockerfile), building the image, and pushing it to a registry to create a container. It contrasts this with the VM approach, emphasizing the lightweight nature of containers that contain only the necessary libraries and application binaries, thus using fewer resources. The script uses a scenario where a node.js application is containerized and scaled, demonstrating how containers allow for efficient use of resources. It also explores the modularity of containers, showing how they can be combined with other services, like a Python application accessing a cognitive API for image recognition. The benefits of containerization for cloud-native architectures, such as portability, scalability, and facilitation of agile DevOps practices, are highlighted.

Keywords

💡Containers

Containers are a type of software that packages an application and its dependencies together in a way that it can be run consistently across various computing environments. They are integral to the video's theme as they enable developers to create applications that are portable and can run anywhere, from desktops to cloud environments. The script mentions Docker and Kubernetes, which are tools commonly associated with containerization.

💡Executable unit

An 'executable unit' refers to a component that can be run independently. In the context of the video, containers are described as executable units because they encapsulate the application code and its libraries, making it a self-contained and runnable entity. This concept is central to understanding the flexibility and portability of container technology.

💡Virtual Machines (VMs)

Virtual Machines are software-based computing environments that emulate physical hardware. They are contrasted with containers in the script to highlight the efficiency and portability of containers. VMs typically include a guest operating system, which can lead to larger file sizes and slower deployment compared to containers.

💡Portability

Portability in the video refers to the ability of a software application to run across different environments without modification. Containers are highlighted as being portable because they can be run on any system that supports containerization, which is a key advantage over VMs.

💡Linux Kernel

The Linux Kernel is the core part of a Linux operating system, responsible for system resource management. The video mentions that container technology has roots in the Linux kernel's introduction of control groups in 2008, which laid the groundwork for modern containerization technologies.

💡cgroups (control groups)

cgroups, or control groups, are a Linux kernel feature that limits, accounts for, and isolates the resource usage of processes. They are fundamental to container technology because they let containers share the host OS's resources efficiently without needing a guest OS.

💡Docker

Docker is a platform that allows developers to package applications and their dependencies into containers. It is mentioned in the script as a widely recognized tool for containerization. Docker files and Docker images are part of the process of creating and deploying containers.

💡Kubernetes

Kubernetes is an open-source platform designed to automate deploying, scaling, and managing containerized applications. It is mentioned alongside Docker as a tool associated with container technology, indicating its role in managing containerized applications at scale.

💡Cloud Native

Cloud native refers to applications and infrastructure designed to leverage cloud computing models. The video discusses how container technology aligns with cloud-native architectures, allowing for efficient scaling and resource utilization, which is a key advantage in modern cloud computing environments.

💡DevOps

DevOps is a set of practices that automates the processes between software development and IT teams. The script mentions that containers facilitate agile DevOps practices by making it easier to integrate and deploy applications continuously, which is crucial for modern software development workflows.

💡Continuous Integration and Delivery (CI/CD)

CI/CD refers to the practices of automating the integration and delivery of software. The video script indicates that containerization supports CI/CD by making it easier to deploy applications consistently across different environments, which is a significant benefit for software development teams.

Highlights

Containers are an executable unit of software that packages application code with libraries and dependencies.

Containers can run anywhere, including desktop, traditional IT, or the cloud.

They are small, fast, and portable compared to virtual machines.

Containers do not need a guest OS in every instance and can leverage the host OS.

Container technology has been around since 2008 with the introduction of Linux kernel control groups.

Examples of container technologies include Docker, Cloud Foundry, Rocket, and others.

A node.js application example is used to explain the advantages of containerization.

VMs require a guest OS, binaries, and libraries, which can bloat the application size.

The smallest node.js VM is over 400MB, whereas the app itself is under 15MB.

Scaling VMs requires deploying a separate guest OS and libraries for each instance.

Containerization solves the 'it works on my machine' problem.

The three-step process of containerization involves creating a manifest, building the image, and creating the container.

Containers are more lightweight than VMs and use fewer resources.

Container technology allows for better utilization of shared resources.

Containers enable true cloud-native architectures and easier scaling.

Containerization streamlines development and deployment of Cloud Native applications.

The next lesson will cover cloud storage.

Transcripts

play00:08

Containers are an executable unit of software in which application code is packaged, along

play00:14

with its libraries and dependencies, in common ways so that it can be run anywhere, whether

play00:20

it be on desktop, traditional IT, or the cloud.

play00:25

Containers are small, fast, and portable, and unlike virtual machines, they do not need

play00:31

to include a guest OS in every instance and can, instead, simply leverage the features

play00:37

and resources of the host OS.

play00:41

In the rest of this video, we will see how container-based technology really works.

play00:46

Hi everyone.

play00:47

My name is Sai Vennam and I'm a developer advocate with IBM. Today I want to talk about

play00:52

containerization.

play00:53

Whenever I mention containers, most people tend to default to something like Docker or

play00:57

even Kubernetes these days.

play00:59

But container technology has actually been around for quite some time.

play01:02

It's actually back in 2008 that the Linux kernel introduced cgroups, or control groups,

play01:07

that basically paved the way for all the different container technologies we see today.

play01:11

So that includes Docker, but also things like Cloud Foundry, as well as Rocket and other

play01:16

container runtimes out there.

play01:18

Let's get started with an example, and we'll say that I was a developer.

play01:22

I've created a node.js application and I want to push it into production.

play01:29

We'll take two different form factors to kind of explain the advantages of containerization.

play01:34

Let's say that first we'll talk about VMs, and then we'll talk about containers.

play01:39

First things first, let's introduce some of the things that we've got here.

play01:47

We've got the hardware itself, just a big box.

play01:51

We've got the guest, or rather, the host, operating system, as well as a hypervisor.

play01:58

Hypervisor is actually what allows us to spin up VMs.

play02:04

Let's take a look at this shared pool of resources with the host OS and hypervisor.

play02:08

We can assume that some of these resources have already been consumed.

play02:13

Next, let's go ahead and take this node.js application and push it in.

play02:16

And to do that, I need a Linux VM, so let's go ahead and sketch out that Linux VM.

play02:24

In this VM there's a few things to note here.

play02:28

We've got another operating system, in addition to the host OS, it's gonna be the guest OS,

play02:33

as well as some binaries and libraries.

play02:36

That's one of the things about Linux VMs, that even though we're working with a really

play02:39

lightweight application, to create that Linux VM, we have to put that guest OS in there,

play02:45

in a set of binaries and libraries.

play02:47

That really bloats it out.

play02:49

In fact, I think the smallest node.js VM that I've seen out there is 400-plus

play02:54

megabytes, whereas the node.js runtime and app itself would be under 15.

play03:00

So we've got that, and we'll go ahead and push that node.js application into it.

play03:06

Just by doing that alone, we're gonna consume a set of resources.

play03:10

Next, let's think about scaling this out.

play03:14

So we'll create two additional copies of it, and you'll notice that even though it's the

play03:21

exact same application, we have to use and deploy that separate guest OS and libraries

play03:27

every time.

play03:28

And so we'll do that three times.

play03:31

And by doing that, essentially, we can assume that for this particular hardware we've consumed

play03:37

all of the resources.

play03:41

There's another thing that I haven't mentioned here, but this node.js application, I developed

play03:45

it on my MacBook.

play03:46

So when I pushed it into production to get it going on the VM, I noticed that there

play03:50

were some issues and incompatibilities.

play03:51

This is the classic 'he said, she said' issue, better known as 'it works on my machine'.

play03:57

Where things might be working on your local machine, and work great, but when you try

play04:00

to push it into production, things start to break.

play04:03

And this really gets in the way of doing agile DevOps, and continuous integration and delivery.

play04:09

That's solved when you use something like containers.

play04:11

There's a three-step process when kind of doing anything container related, and then

play04:16

pushing, or creating, containers.

play04:18

And it almost always starts with first, some sort of a manifest.

play04:23

So something that describes the container itself.

play04:26

In the Docker world, this would be something like a Docker file.

play04:29

And in Cloud Foundry, this would be a manifest YAML file.

play04:32

Next, what you'll do is create the actual image itself.

play04:35

For the image, again, if you're working with something like Docker, that could be something

play04:41

like a Docker image.

play04:43

If you're working with Rocket it would be an ACI or application container image.

play04:47

So regardless of the different containerization technologies, this process stays the same.

play04:52

The last thing you end up with is an actual container itself.

play04:56

You know, that contains all of the runtimes, and libraries, and binaries needed to run

play05:00

an application.

play05:02

That application runs on a very similar setup to the VMs, but what we've got on this

play05:07

side is, again, a host operating system.

play05:11

The difference here, instead of a hypervisor, we're gonna have something like a runtime

play05:15

engine.

play05:16

So if you're using Docker, this would be the Docker Engine and, you know, different containerization

play05:25

technologies would have a different engine.

play05:26

Regardless, it's something that runs those containers.

play05:29

Again, we've got this shared pool of resources, so we can assume that that alone consumes

play05:35

some set of resources.

play05:37

Next, let's think about actually containerizing this technology.

play05:41

We talked about the three-step process.

play05:42

We create a docker file.

play05:44

We build out the image.

play05:46

We push it to a registry, and we have our container and we can start pushing this out

play05:50

as containers.

play05:51

The great thing is, these are going to be much more lightweight.

play05:54

So deploying out multiple containers, since you don't have to worry about a guest OS this

play06:03

time, you really just have the libraries, as well as the application itself.

play06:09

We've scaled that out three times, and because we don't have to duplicate all of those operating

play06:14

system dependencies and create bloated VMs, we actually will use less resources.

play06:20

So use a different color here... and scaling that out three times, we still have a good

play06:29

amount of resources left.

play06:31

Next, let's say that my coworker decides, hey, for this node.js application, let's take

play06:36

advantage of a third party - let's say a cognitive API - to do something like image recognition.

play06:42

Let's say that we've got our third party service, and we want to access that using maybe a Python

play06:50

application.

play06:52

So he's created that Python service that talks to the third-party API.

play06:58

And with our node.js application, we want to access that Python app, to then access

play07:03

that service.

play07:04

If we wanted to do this in VMs, I'm really tempted to basically create a VM out of both

play07:11

the node.js application and the Python application.

play07:14

Because essentially that would allow me to continue to use the VMs that I have.

play07:18

But that's not truly cloud native, right?

play07:20

Because if I wanted to scale out the node.js app, but not the Python app, I wouldn't be able

play07:25

to if they were running in the same VM.

play07:26

So to do it in a truly cloud native way, essentially I would have to free up some of these resources.

play07:32

Basically get rid of one of these VMs, and then deploy the Python application in it instead.

play07:38

And you know, that's not ideal.

play07:40

But with the container based approach what we can do is simply say, since we're modular,

play07:46

we can say, okay, just deploy one copy of the Python application.

play07:53

So we'll go ahead and do that.

play07:54

There's a different color here.

play07:58

And that consumes a little bit more resources.

play08:01

Then with those remaining resources, the great thing about container technology is

play08:06

that they actually become shared between all the processes running.

play08:09

In fact, another advantage if some of these container processes aren't actually utilizing

play08:14

the CPU or memory, all of those shared resources become accessible for the other containers

play08:21

running within that hardware.

play08:25

So with container-based technology, we can truly take advantage of cloud native based

play08:29

architectures.

play08:30

We talked about things like portability of the containers.

play08:33

We talked about how it's easier to scale them out.

play08:35

And then overall, with this kind of three-step process and the way we push containers, allows

play08:40

for more agile devops and continuous integration and delivery.

play08:45

Containers streamline development and deployment of Cloud Native applications.

play08:50

In the next lesson, we will cover cloud storage.
