OCI, CRI, ??: Making Sense of the Container Runtime Landscape in Kubernetes - Phil Estes, IBM
Summary
TL;DR: The speaker discusses the evolution and standardization of container runtimes, emphasizing the importance of the Open Container Initiative (OCI) in creating common ground for various runtimes and images. They highlight Docker's impact on popularizing containers but also note the emergence of alternatives like containerd, CRI-O, and rkt. The talk delves into Kubernetes' reliance on the Container Runtime Interface (CRI) for container management, showcasing how different runtimes can integrate with Kubernetes. The speaker also touches on ongoing work within the OCI, such as image signing and further standardization, aiming to simplify choices for developers and operators.
Takeaways
- 📚 The presentation aims to clarify the evolution of container runtimes and demystify terms like OCI and CRI.
- 💡 Docker popularized container usage with simple commands that made it easy to run services like Redis or Nginx in milliseconds.
- 🚀 Docker is not the only container runtime; alternatives like rkt, Cloud Foundry's runtime, LXC, LXD, Singularity, and more have been developed for various use cases.
- 🔗 The Open Container Initiative (OCI) was created to standardize container runtimes and image formats, ensuring interoperability among different tools.
- 📈 The OCI has led to the development of the runtime-spec and runc, which are widely recognized and used in the industry.
- 🛠 The Container Runtime Interface (CRI) was introduced to allow Kubernetes to work with any container runtime that implements the CRI API.
- 🔄 Kubernetes itself does not run containers; it relies on a container runtime to manage the execution of containers.
- 🔄 Different runtimes like Docker, containerd, and CRI-O are implementing the CRI to work with Kubernetes.
- 👥 The industry is moving towards more standardization in areas like artifacts, image signing, and distribution specs within the OCI.
- 🔒 There is a growing interest in enhanced container isolation and security, with projects like gVisor and Kata Containers gaining attention.
- 🔄 CRI-O and containerd are becoming more popular as default runtimes in Kubernetes environments, especially in managed services and cloud platforms.
Q & A
What was the main goal of the talk on container runtimes?
-The main goal was to help the audience understand the current state of container runtimes, demystify terms like OCI and CRI, and provide informative content on the topic in an engaging way.
What is the significance of Docker's introduction in 2014-2015?
-Docker's introduction was significant because it brought simplicity and standardization to container usage with commands like 'docker run', making it easier for developers to deploy services quickly and efficiently.
Why did CoreOS develop Rocket as an alternative to Docker?
-CoreOS developed Rocket as an alternative to Docker to offer new ideas and a different approach to connecting developers with the features of the Linux kernel, such as namespaces and cgroups.
What is the purpose of the Open Container Initiative (OCI)?
-The purpose of the OCI is to create a common specification for container runtimes and image formats, allowing for interoperability among different tools and ensuring a consistent definition of what it means to have a container.
What does the Container Runtime Interface (CRI) provide for Kubernetes?
-The CRI provides an abstraction layer for container runtimes, allowing Kubernetes to interact with different runtimes through a standardized API, thus enabling the orchestration of containers without being tied to a specific runtime.
Why was the CRI interface introduced in Kubernetes?
-The CRI interface was introduced to simplify the integration of Kubernetes with various container runtimes, avoiding the need for multiple code bases and ensuring a consistent way to manage container operations.
What is the current state of container runtime usage in Kubernetes?
-Docker remains the most commonly used runtime, but other runtimes like containerd, CRI-O, and others are gaining traction, especially as they become default options in managed Kubernetes services and meet the needs of different use cases.
What is the role of the Runtime Class in Kubernetes?
-Runtime Class in Kubernetes allows users to specify which container runtime should be used for a particular pod, providing flexibility in choosing the right runtime for different workloads.
How does the talk address the topic of container security?
-The talk touches on the topic of container security by mentioning the interest in image signing and the work being done around artifacts and standard media types, indicating a growing focus on securing containers in production environments.
What are some of the ongoing efforts within the OCI?
-Ongoing efforts within the OCI include finalizing the distribution spec, standardizing image signing processes, and exploring new ideas for container images, such as OCI v2, which aims to address current challenges and user needs.
Outlines
😀 Introduction to Container Runtimes and Standards
The speaker aims to clarify the current state of container runtimes, demystify terms such as OCI (Open Container Initiative) and CRI (Container Runtime Interface), and provide informative insights into the evolution of container technology. The talk acknowledges Docker's significant impact on popularizing container usage with its simple commands, but also highlights that Docker is not the sole runtime. Other runtimes like rkt, Cloud Foundry's runtime, LXC, LXD, Singularity, and more have contributed to the diversity of the container ecosystem. The emergence of the OCI was crucial in establishing common ground for container runtimes and image formats, promoting interoperability among different tools.
😉 The Role of OCI in Standardizing Container Technologies
This section delves into the role of the OCI in fostering a unified approach to containerization. It discusses the development of a runtime specification and an image format that has led to the creation of tools like 'runc' and the distribution specification for interacting with registries. The speaker also touches on the ongoing work within OCI to standardize areas such as artifacts, image signing for security, and the distribution specification. The goal is to streamline the container ecosystem and ensure that different tools and runtimes can work together seamlessly.
🎓 Understanding Kubernetes and the CRI Interface
The speaker explains that Kubernetes itself does not run containers; it relies on a container runtime to perform this task. Initially, Docker was the default runtime, but with the emergence of other runtimes, the need for a standardized interface arose. This led to the creation of the CRI in 2016, which abstracts the container runtime interactions with Kubernetes. The speaker outlines the responsibilities defined by the CRI API and how different runtimes can integrate with Kubernetes by responding to these API calls.
🚀 Current Landscape of Container Runtime Implementations
The speaker provides an overview of the current landscape of container runtime implementations, with a focus on how different runtimes like Docker, containerd, and CRI-O have adapted to work with Kubernetes through the CRI interface. It also mentions the development of shim APIs that allow for richer communication between runtimes and isolators like Kata Containers or AWS Firecracker. The concept of runtime classes in Kubernetes is introduced, which allows for specifying the runtime for individual pods, and the speaker references a recent report that shows Docker as the dominant runtime in current Kubernetes clusters.
🛡️ Security and the Future of Container Runtimes
In this segment, the speaker discusses the growing interest in enhancing container security, especially with the use of sandboxing and additional isolation techniques. The speaker references a recent tweet and Atlassian's use of Kata Containers for added security in their pipelines. There is also mention of the ongoing discussions within the OCI to standardize image signing and the exploration of new models for container images. The speaker emphasizes the importance of having a common understanding and standardization to simplify choices and operations for users and operators.
🤔 The Importance of Choice and Standardization in the Container Ecosystem
The speaker concludes by emphasizing the importance of having informed choices in the container ecosystem. The talk highlights the need for standardization to simplify decision-making and to provide a clear path forward for users and operators. The speaker encourages the audience to consider factors such as sandboxing, isolation, compatibility with Kubernetes releases, and security when choosing a runtime. The goal is to maintain momentum within the OCI to address ongoing challenges and to ensure that the container ecosystem remains user-friendly and efficient.
Keywords
💡Container Runtimes
💡OCI (Open Container Initiative)
💡CRI (Container Runtime Interface)
💡Docker
💡rkt (Rocket)
💡containerd
💡CRI-O
💡Kubernetes
💡Sandbox Isolation
💡OCI Runtime Spec
💡Container Registries
Highlights
The goal is to demystify container runtimes and terms like OCI and CRI, providing an informative session.
Container concepts like namespaces and cgroups existed before Docker but gained popularity with Docker's simple commands.
Docker's introduction in 2014-2015 was pivotal in making container technology widely accessible to developers.
Other container runtimes like rkt, Cloud Foundry's runtime, LXC, LXD, Singularity, and more have also been developed.
The Open Container Initiative (OCI) was created to standardize container runtimes and image formats.
OCI has led to the development of a runtime spec and an image format, promoting interoperability among tools.
Kubernetes relies on container runtimes to manage containers but does not have its own runtime.
The Container Runtime Interface (CRI) was introduced to abstract container runtime interactions with Kubernetes.
Docker, containerd, and CRI-O are major implementers of the CRI, allowing for a variety of runtimes to be used with Kubernetes.
Runtime class in Kubernetes allows specifying which runtime should be used for a particular pod.
OCI aims to standardize areas like artifacts, image signing, and distribution specs for better security.
CRI-O has been integrated into various Kubernetes services and platforms, including managed services from cloud providers.
The choice of container runtime can depend on factors like performance, stability, extensibility, and compatibility.
Increased interest in additional isolation layers like sandboxing for security purposes in container deployments.
CRI tool 'crictl' allows operators to interact with different container runtimes without specific client tooling knowledge.
OCI has had a significant impact by creating a level playing field and promoting interoperability in the container ecosystem.
The talk emphasizes the importance of OCI and CRI in making pluggable runtimes a reality in Kubernetes clusters.
Transcripts
My goal, in 25 minutes, which goes by fairly quickly with this topic, is to kind of help you see why we are where we are with container runtimes, demystify some terms like OCI and CRI, and hopefully make that at least a little bit informative and not just a bunch of boring information. So we'll see how we do on that.

Clearly, those of you who maybe have a deeper layer of understanding about what we even call containers know that they're made up of cgroups and namespaces, and that these concepts and technologies existed quite a while before Docker. For example, some of the namespaces in the Linux kernel have been there for over a decade. But I think we can all agree that it was Docker, in 2014-2015, coming on the stage and sort of exciting the developer's mind with the simplicity of some of these commands you see on the screen. Now all of a sudden I just type docker run and some sort of standard open-source software package like Redis or MySQL or Apache or nginx, and in less than 500 milliseconds this service is running. That was magical for lots of developers, and so that's kind of become synonymous with containers in the last four or five years. But that doesn't mean that Docker is the only container runtime.
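The kind of one-line commands the speaker is describing might look like the following; the image names, container names, and published ports here are just illustrative examples:

```shell
# Run a Redis server in the background; Docker pulls the image on first use.
docker run -d --name cache redis

# Run nginx, publishing container port 80 on host port 8080.
docker run -d --name web -p 8080:80 nginx

# MySQL needs a root password supplied via the environment.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql

# List the running containers.
docker ps
```

Once the image is cached locally, each of these goes from command to running service in well under a second, which is the "magic" the talk describes.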
In fact, soon after Docker came on the scene, CoreOS had some other ideas about how to actually connect a developer with some of those features in the Linux kernel, the same features built around the same namespaces and cgroups, and they came out with rkt (Rocket) with some new ideas. Since then we've had containerd and CRI-O; there was a great talk in here earlier about Kata, and we'll talk about sandboxing and isolation a little bit later in the talk. Even Cloud Foundry, and there were some talks that referred to Cloud Foundry here earlier today, was under the covers packaging up and using these same features below its platform-as-a-service model to run containers as well. And even stepping away from this Docker application model, LXC and LXD have been around, produced by Canonical and a team there, for a number of years, and the high-performance computing space has also been interested in containers, with tools like Singularity and some others.

In this explosion of runtimes and ideas for how to use containers, the market obviously didn't want there to be a hundred different ideas of what it meant to be a container, or how to run a container, or how to create a container image. That's really how the OCI was born: this idea that, even if there are different runtimes and different ways of building and running containers, could we all agree on a specification for what it means to have a runtime and an image format, and how to distribute those images? Many of us got together and were involved in founding that in 2015, and many member companies have joined since. Out of that has come a runtime spec, which has been at a 1.0 since the middle of 2017; an implementation of that called runc, which many of you have heard of (you probably at least know that when you run Docker, that talks to containerd, and containerd drives runc); and also an image format, which has then led to the distribution spec, covering how to talk to a registry. Docker obviously had a way to do that which many people use: docker pull and docker push are registry operations. So that is now also part of the OCI, and we're hoping to finalize the 1.0 version of the distribution spec next year.

So again, the idea was to bring all these communities together to an agreement on what it means to have an image and what it means to run a container. People can then go in different directions, and as long as they're OCI compliant we can actually have interoperability among many, many different tools. I would say that in the years that followed we've reached a pretty good place, with this common substrate of container runtimes and container registries meeting the OCI specs and the specs that are in process. These drive how we actually run containers on Linux, and also, as Microsoft has evolved, the OCI specs provide a way to talk about how we run containers on Windows.
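As a concrete sketch of what the runtime spec and runc look like in practice, this is roughly the standard runc workflow (a hedged example: it assumes runc and Docker are installed, and uses busybox purely as a convenient source for a root filesystem):

```shell
# An OCI bundle is just a root filesystem plus a config.json.
mkdir -p mycontainer/rootfs
cd mycontainer

# Fill the rootfs from any source; exporting a busybox container is one easy way.
docker export "$(docker create busybox)" | tar -C rootfs -xf -

# Generate a default OCI runtime spec (writes ./config.json).
runc spec

# Run the container described by the bundle (usually needs root).
sudo runc run demo
```

Any OCI-compliant runtime should be able to consume the same bundle, which is exactly the interoperability point the talk is making.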
You can think of that bubbling up to this mix of runtimes, whether we're talking about Kata Containers or Nabla containers or containerd or Docker or CRI-O. And of course there are many, many different registries: there are some in the CNCF, like Project Harbor; Docker Hub is an instance of an open-source project called Docker Distribution; all the major cloud providers have registries, some of which they've written themselves; JFrog has a registry; and Quay.io was recently open sourced. So again, we're at this place where, thankfully, as a developer or an operator you don't have to worry about whether your developers are using Docker, containerd, or CRI-O, or whether they're playing around with Kata Containers or AWS Firecracker. We can actually interoperate: your container image in Docker Hub can also be pushed to Quay, and it can also be pushed to Azure's or IBM's cloud registry. That's a great place to be, and that's really what we intended to do with the OCI when we started in 2015.
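That registry interoperability is visible in everyday tooling. A hedged sketch, where the registry hosts and the "myorg" namespace are placeholders and each registry would need a prior docker login:

```shell
# Obtain an image once...
docker pull redis:latest

# ...then tag and push the same image to different OCI-compliant registries.
docker tag redis:latest quay.io/myorg/redis:latest
docker push quay.io/myorg/redis:latest

docker tag redis:latest us.icr.io/myorg/redis:latest
docker push us.icr.io/myorg/redis:latest
```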
But just to give you a little taste that we aren't done yet: we'd like to see more areas standardized and finalized. There's some great work around artifacts that was added to the OCI this year. The idea is that registries don't just have to store layers and image configs; maybe you'd like to store Helm charts, or CNAB bundles like in the other talk going on right now, or maybe you'd like to include a bill of materials in a format like SPDX. So there's some great work on defining standard media types and how registries would implement understanding and searching them. Like I said, we'd like to finish the distribution spec. There's also a lot of interest in image signing; security in Kubernetes and containers is a hot topic, and there's a set of registry operators who would like to converge here. We have the CNCF projects Notary and TUF, and Red Hat also has a PGP-based signing process, and I believe there was actually a meeting yesterday, in US time, that I wasn't able to attend, to kick off this new topic about image signing and see if we can agree on what it looks like to sign an image. runc 1.0 is finally finishing up; like I said, the distribution spec is nearing 1.0, and then there's the question of what comes after 1.0, kind of a "one plus." A lot of new ideas have also been presented in the last year about container images: have we picked the best model for how we create these tarballs of layers? There's a lot of great information if you go searching on "OCI v2," which is not an official name of any spec, just a moniker for those discussions.

So that's a whirlwind tour of the OCI: why it was created, what it's done, and what we're still trying to do. Let's turn our attention to Kubernetes.
You probably know this, but just in case you don't: Kubernetes has never had a runtime in Kubernetes itself. It doesn't know how to run containers; it's an orchestrator. It has always relied on some container runtime to do the work of actually running your containers. Since the earliest days of Kubernetes that was Docker, via this dockershim code, which you can still find if you go out on GitHub and poke down into the Kubernetes code base. The dockershim code drives the Docker engine to do things like actually starting a container when you start a pod, interacting with the kubelet about statistics, handling operations like stopping and removing, and pulling images from the registry. So again, this is kind of where we were.

As other runtimes appeared on the scene there were variants of this, like rktnetes to use rkt as the runtime, but as the idea of more runtimes came onto the horizon it was clear that it wasn't going to be feasible to keep integrating code between the kubelet and each container runtime, with this mess of different code bases and trees for different runtimes. So the CRI, the Container Runtime Interface, was announced in late 2016 as a way to abstract out the idea of a container runtime and how it interfaces with Kubernetes. A set of responsibilities is defined by the CRI API, and if you are a runtime and want to plug into Kubernetes, all you have to do is respond to the CRI API calls: start, stop, remove, pull image, and so on. You can go read up on that whole interface spec, and there continues to be growth and change there over time as new features are requested for how Kubernetes interacts and what kind of information needs to be shared between Kubernetes and runtimes.

So, the current landscape today of who is actually implementing this interface: obviously Docker continues to be somewhat of a default; many people still use the Docker engine. Dockershim is sort of grandfathered in; it's not a real CRI implementation, but when we talk about Kubernetes using runtimes it makes sense for us to list it. Containerd and CRI-O are the major implementers of the CRI today. I also break out containerd to show that you can run other isolators underneath containerd. You can do some of these under CRI-O as well, simply by replacing runc with other binaries, but containerd developed a shim API so that there could be a richer level of communication between the runtime and these isolators, like Kata, or Firecracker, or Google's gVisor. Singularity, which I mentioned from the high-performance computing space, created a runtime specific to the needs of HPC cluster workloads, but over time they added OCI compliance, and then more recently, last year, they added an implementation of the CRI. So if you wanted a Kubernetes cluster in an HPC environment and your choice was Singularity, you can now actually plug that into a Kubernetes cluster, and that will drive Singularity to do those operations on containers.

If you were creating your own cluster, the way you would do this, if you're following Kelsey Hightower's "Kubernetes the Hard Way," is that when you start the kubelet you can actually point it at a runtime: point it to the socket that's going to actually respond to the CRI API.
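A hedged sketch of what that kubelet wiring looks like; the socket path depends on which runtime you deploy, and the remaining kubelet flags are elided:

```shell
# Point the kubelet at containerd's CRI socket.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock
  # ...plus the usual kubelet flags (kubeconfig, certificates, etc.)

# For CRI-O the endpoint would instead be something like:
#   --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```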
we don't have a ton of time to go into
runtime class but that's a feature
slightly newer that then allows you to
actually register runtimes with
kubernetes and use that as a way like in
your pod spec to say actually I would
like this pod run with this runtime and
then you can interface with container D
for example to place a specific workload
on firecracker or a cat or G visor so
that's kind of the the path to using
different runtimes within kubernetes
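A minimal RuntimeClass sketch of the feature just described; the class name is illustrative, and the "kata" handler is a hypothetical name that would have to match a handler configured in your CRI runtime:

```shell
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata          # must match a runtime handler known to containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-workload
spec:
  runtimeClassName: sandboxed   # ask for the sandboxed runtime for this pod
  containers:
  - name: app
    image: nginx
EOF
```

Pods without a runtimeClassName keep using the node's default runtime, so this is an opt-in, per-pod choice.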
Obviously, if you're using a managed service from one of the cloud providers, or if your company's operations team is running your clusters, they're going to make this choice for you, and you're going to default to whatever runtime they choose for your needs.

So, where are we today? Sysdig just put out a report, I think within the last month or so, and this was a graphic they added to that report; there's a little more detail textually if you want to follow through to the link. Not totally unexpectedly, given that Docker has been the default runtime under so many clusters operated in production today, Docker takes the lion's share of results in their survey, with containerd up to 18% and CRI-O at 4% and growing. CRI-O has become the default in Red Hat's OpenShift, so you can expect that number will grow as well. We put out a blog post early last year talking about our work to integrate containerd (I'm a maintainer in the CNCF containerd project), and you can read more about our work to bring that to GA. That's led to IBM's managed Kubernetes service using containerd as its runtime; GKE has it as an option; and then last week at AWS re:Invent, Amazon announced that they were using containerd as the default runtime for Fargate. So we see a lot of movement in the space as people make choices other than defaulting to Docker, and it will be interesting in a year to see where this goes.

Now, in 25 minutes it's really almost impossible to detail why I would choose different runtimes if it were up to me and I were creating the clusters. There's obviously a myriad of considerations around performance, stability, and extensibility. Maybe you're really interested in your runtime being directly compatible with a Kubernetes release; the CRI-O team has made that a tenet of how they operate, so with CRI-O 1.16 you know for sure that it was tested fully, end to end, and passes all the suites for Kubernetes 1.16. Maybe you're interested in additional isolation, like Kata, Nabla, gVisor, and so on. I did try to answer these questions in a talk at KubeCon San Diego last month, so you can watch that or see the slides if you want to dig deeper into that topic.

I would like to highlight that there is increased interest in this whole idea that maybe default container isolation isn't enough for my workloads and my threat models. There are a lot of people in the security community talking about this, such as a tweet that came out around KubeCon last month. Because I've worked in runtimes, I've been interested to see the announcements and rollouts of different use cases, and if you were in this room a few hours ago, Atlassian talked about their use of Kata as an additional isolation layer in their pipelines capability, which was quite interesting. There was also an article published on InfoQ in October with a lot more detail about why people are interested in sandboxes, who's using them, and where they might be going from here.

The good news is that no matter who's choosing a runtime, there's a way to interface with it that doesn't rely on your operators having to know the client tooling or the specifics of a particular runtime, and that tool is called crictl. If your cloud provider creates a Kubernetes cluster for you and gives you access to a worker node, hopefully you'd be able to find this command, and you can start talking directly to the CRI socket for your runtime. No matter whether that's Docker, CRI-O, or containerd, you can do things that feel natural, things you'd expect, like ps commands. There are some special commands with extra characters, like stopp to stop a pod versus stop for a container, and there's removing and pulling and listing images. So again, this is a way for your operators to abstract away from the idea that they have to know the exact runtime. There's a user guide that the CRI team for containerd put together, and the code for that is within Kubernetes.
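A few representative crictl invocations, run on a worker node (typically as root); the endpoint variable is only needed if crictl can't auto-detect the socket, and the IDs are placeholders you'd take from the listing commands:

```shell
# Tell crictl which CRI socket to use (containerd shown as an example).
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

crictl pods                   # list pod sandboxes
crictl ps                     # list running containers
crictl images                 # list images known to the runtime
crictl pull nginx:latest      # pull an image through the CRI

crictl stop  <container-id>   # stop a single container
crictl stopp <pod-id>         # note the extra 'p': stop a whole pod sandbox
```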
So, summarizing what I hope to have shared: the OCI was a really valuable step, amid this explosion of interest in the container ecosystem, toward having a level playing field, so that we're not all fighting each other with different tool choices that mean we can't work together. The fact that, as of these last few years, we don't have to worry about whether our developers are using the same exact runtime as our operations or production team, and that these higher-layer abstractions can have real interoperability, has been proven. And the fact that we got all the right people together to collaborate hopefully means we've delivered on making the runtime space boring, after all the talk of the container runtime wars of 2015. Another side effect, which I call the network effect of this positive peer pressure to be OCI compliant, is that the LXC team and the Singularity team added OCI compatibility to their runtimes, which are not necessarily Docker-like. It was great to see that the expectation in the container ecosystem is that OCI compliance is valuable and useful for users. Add to that the CRI, and we really have gotten to the place where pluggability within your Kubernetes cluster is a reality, and we can hear from other talks that people are actually using these features, actually doing this and implementing this.

Some things are still in progress. Choice is always confusing, because people want to know: what should I choose, just tell me the right answer. That's not always easy in a world with lots of choices, and so one of the reasons I give this talk, and hopefully others are sharing their experiences, is so people have some data and some information to make these choices themselves, so it doesn't become just a crazy set of confusing decisions. Hopefully we're helping users determine: do I need sandboxing isolation like gVisor or Kata, are the default runtimes good enough for my use cases, what are my threat models? Again, there's a lot going on in the Kubernetes security space to try to help people figure those things out. Also, in a world where a lot of this is still maturing, companies do like common tooling choices; enterprises like to know that people are all using a standardized thing. So as we see a lot of migration from Docker to other tools and other platforms, where do we send people for the kind of docker run use case that everyone has accepted over the years? And finally, as I tried to hint, I think we're trying to keep momentum in the OCI on the challenges that still exist, the things that are still of interest to users to standardize, so that, for example, we don't end up with a myriad of different ways to sign, verify, or validate images depending on whether I use Azure's registry or IBM Cloud or Google. Hopefully the OCI will continue to be a valuable place where we have those discussions, figure out a common path forward, and make this straightforward for users and operators who want to continue using these valuable tools.

With that, that's what I wanted to share today. We have two and a half minutes officially on the clock, but it's also break time in a few minutes, so I'm happy to answer questions here, and happy just to hang around. Does anyone have a burning question before we officially close? I can barely see, but I'm happy to pass a mic if so. I think people are maybe more interested in break than questions, so let's end there, and thanks very much for listening.
[Applause]