Running Confidential Workloads with Podman - Container Plumbing Days 2023
Summary
TLDR: Sergio presents a solution for running confidential workloads with Podman, enabling hardware-based memory encryption and attestation for container applications. By nesting a lightweight VM created by the libkrun virtual machine monitor inside the container environment, the approach offers a compatible workflow that leverages existing container tools while providing confidential computing guarantees. The demo illustrates transforming a regular container into an encrypted, integrity-protected workload, showcasing the protection against host-level memory and storage inspection. Addressing compatibility with other virtualization technologies, Sergio highlights the strengths of this approach for low-footprint, single-container deployments prevalent in cloud and edge scenarios.
Takeaways
- 😄 Confidential computing protects data and code by performing computations in a hardware-based trusted execution environment (TEE).
- 🔐 It requires hardware support for memory encryption, integrity protection, and remote attestation.
- 🐳 The goal is to enable running confidential workloads within the existing container workflow using Podman and CRI-O.
- 📦 A confidential workload is an OCI image containing an encrypted disk image and TEE parameters.
- 🔒 The disk image is LUKS-encrypted, protecting data at rest, while RAM is encrypted by the hardware.
- ✅ Remote attestation verifies the initial memory state before providing decryption keys.
- 🧩 Confidential workloads are nested inside regular container contexts, preserving existing isolation.
- 🌐 Network activity from confidential workloads appears like regular container traffic.
- ⚖️ Kata Containers and confidential workloads have trade-offs in terms of compatibility and overhead.
- 🔬 A live demo showcased the confidentiality guarantees against memory and disk inspection attacks.
Q & A
What is confidential computing?
-Confidential computing is the protection of data and code by performing computation in a hardware-based trusted execution environment. It provides memory encryption, integrity protection, and the ability to generate attestations of the memory contents.
Why is confidential computing important?
-Confidential computing is important because it prevents the host system from accessing sensitive data and code running in the trusted execution environment, providing a secure isolated environment for running sensitive workloads.
What are the main goals of enabling confidential workloads with Podman?
-The main goals are compatibility with the existing container tools and workflows, self-contained OCI images with all necessary information, meeting the confidential computing requirements (encrypted and integrity-protected disk and measurable memory contents), and limiting host leaks.
How does the proposed solution work?
-The solution involves creating a LUKS-encrypted disk image containing the contents of the original OCI image, and then creating a new OCI image that includes this encrypted disk image and the parameters needed to launch a trusted execution environment with libkrun.
How is the confidential workload protected?
-The confidential workload's memory is encrypted and integrity-protected by the hardware, and the disk image is LUKS-encrypted and mounted inside the trusted execution environment, preventing the host from accessing sensitive data.
What is the role of the attestation server?
-The attestation server stores the expected measurements for registered confidential workloads. It verifies the attestation from the workload's trusted execution environment and provides the encryption key to unlock the disk image if the measurement matches.
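The key-release flow described above can be sketched in a few lines. This is a hypothetical illustration: the class and method names are made up, and the HMAC-style comparison of a SHA-256 hash stands in for a real hardware-signed attestation report (e.g. from AMD SEV), which this sketch does not model.

```python
import hashlib
import hmac
import os

class AttestationServer:
    """Toy stand-in for the attestation server described in the talk."""

    def __init__(self):
        self._workloads = {}  # workload_id -> (expected_measurement, disk_key)

    def register_workload(self, workload_id, expected_measurement, disk_key):
        self._workloads[workload_id] = (expected_measurement, disk_key)

    def attest(self, workload_id, reported_measurement):
        expected, disk_key = self._workloads[workload_id]
        # Release the disk encryption key only if the measurement matches.
        if hmac.compare_digest(expected, reported_measurement):
            return disk_key
        return None

def measure(initial_memory: bytes) -> bytes:
    # Stand-in for the hardware's measurement of the initial memory contents.
    return hashlib.sha256(initial_memory).digest()

server = AttestationServer()
firmware = b"libkrun kernel + initrd"   # well-known initial contents
disk_key = os.urandom(32)
server.register_workload("demo", measure(firmware), disk_key)

# Matching measurement -> key is released; tampered payload -> refused.
assert server.attest("demo", measure(firmware)) == disk_key
assert server.attest("demo", measure(b"tampered payload")) is None
```

The essential property is the same as in the talk: the decryption key never reaches the guest unless the measured initial memory matches what was registered.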
How does this solution differ from Kata Containers?
-Kata Containers can run multiple containers in the same VM, while this solution intends to run one container per trusted execution environment by design. This solution aims to provide confidential computing guarantees with a smaller stack addition.
What are the advantages of this solution for specific deployment scenarios?
-For single-container cloud deployments or Function-as-a-Service scenarios, this solution provides a lower footprint and lower TCO. For edge or embedded deployments, it allows meeting the confidential computing requirements with a minimal addition to the existing container infrastructure.
Can this solution coexist with other virtualization technologies like KVM or VirtualBox?
-Yes, this solution can coexist on the same host with other containers running different runtimes, but by design, it does not support nesting trusted execution environments.
What is the purpose of the entrypoint in the confidential workload image?
-The entrypoint is a binary that displays a message if you attempt to run the OCI image without the krun runtime specified. It serves as a safeguard against inadvertently running the confidential workload without the proper runtime.
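The safeguard entrypoint behaves roughly like the following sketch. The real entrypoint is a small compiled binary shipped in the image; this Python stand-in and its message text are illustrative assumptions, not the actual tool.

```python
import sys

def entrypoint() -> int:
    # Print a warning and refuse to run: this image only makes sense
    # inside a TEE launched by the krun runtime.
    print("This is a confidential workload image; run it with the "
          "krun runtime (e.g. podman run --runtime krun ...).")
    return 1  # non-zero exit: do not proceed outside a TEE

exit_code = entrypoint()
```

Running the image with a regular runtime therefore fails fast with a clear message instead of silently exposing anything.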
Outlines
🔍 Introduction to Confidential Computing
This paragraph introduces the concept of Confidential Computing, explaining its formal definition and practical implications for protecting sensitive data and computations within a hardware-based trusted execution environment. It highlights the key requirements of memory encryption, integrity protection, and attestation for confidential workloads. The paragraph also mentions the availability of virtualization-based Confidential Computing since 2017, but notes the limited adoption due to complexities in implementation.
🎯 Goals and Requirements for Confidential Workloads
This paragraph outlines the set of goals and requirements for enabling confidential workloads in a container environment. The primary goals include compatibility with existing container tools and workflows, self-containment of all necessary information within the OCI image, meeting Confidential Computing requirements (encrypted and integrity-protected disk, measurable memory contents), and limiting host leaks while potentially breaking some container semantics. The idea is to transform a regular OCI image into a confidential workload by encrypting its contents into a LUKS-encrypted disk image, bundled with configuration parameters within a new OCI image.
🛠️ Implementation Approach and Nested Containers
This paragraph explains the implementation approach of nesting confidential workloads within regular container contexts. Podman and CRI-O still create the container environment with namespaces, cgroups, and SELinux, but libkrun then creates the trusted execution environment (VM/TEE) within this container context. The confidential workload runs inside this VM/TEE, preserving the security guarantees of containers while adding hardware-based protection. This nested approach also allows seamless integration with existing container networking and traffic management tools.
🔄 Transformation Process: From OCI to Confidential Workload
This paragraph walks through the actual process of transforming a regular OCI image into a confidential workload. It involves creating a LUKS-encrypted disk image, generating a random encryption key, expanding the original OCI image contents into this encrypted volume, and creating a new OCI image containing the encrypted disk image and necessary configuration parameters. The encryption key and workload parameters are then registered with an attestation server for future verification and key retrieval.
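The transformation steps above can be sketched as a toy pipeline. Everything here is an illustrative assumption: the function names are made up, and the SHA-256 keystream XOR is a deliberately simplified stand-in for LUKS/dm-crypt (it is not real disk encryption).

```python
import hashlib
import json
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # NOT real cryptography -- a placeholder for the LUKS-encrypted disk image.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out[: len(data)]))

def build_confidential_workload(rootfs: bytes, tee_config: dict):
    disk_key = os.urandom(32)                       # random encryption key
    encrypted_disk = keystream_xor(rootfs, disk_key)
    cw_image = {                                    # contents of the new OCI image
        "disk.img": encrypted_disk,
        "tee-config.json": json.dumps(tee_config).encode(),
    }
    # The key (and the expected measurement) would be registered with the
    # attestation server -- never shipped inside the image itself.
    return cw_image, disk_key

rootfs = b"contents of the original OCI image"
cw, key = build_confidential_workload(rootfs, {"workload_id": "demo", "cpus": 2})

assert b"original OCI image" not in cw["disk.img"]   # no plaintext in the image
assert keystream_xor(cw["disk.img"], key) == rootfs  # the key recovers the data
```

The point the sketch makes is structural: the published image carries only ciphertext plus TEE parameters, while the key lives solely on the attestation server.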
🧪 Live Demo: Running a Confidential Workload
This paragraph presents a live demonstration of running a confidential workload. It starts with a simple Go program serving a secret over HTTPS, showing how the secret can be easily extracted from memory and storage in a regular container context. It then transforms the container into a confidential workload using the oci2cw tool, registers it with the attestation server, and runs it with Podman. Attempts to extract secrets from memory and storage fail due to encryption and hardware protection, demonstrating the effectiveness of Confidential Computing.
🔍 Addressing Questions and Closing Remarks
This final paragraph addresses questions from the audience, explaining the acronyms TEE (Trusted Execution Environment) and LUKS (Linux Unified Key Setup), and clarifying that libkrun does not support nesting VMs running KVM or VirtualBox by design. It also mentions the ability to run both confidential and non-confidential containers on the same host. The talk concludes with closing remarks.
Mindmap
Keywords
💡Confidential Computing
💡Podman
💡TEE (Trusted Execution Environment)
💡Memory Encryption and Integrity Protection
💡Attestation
💡OCI (Open Container Initiative)
💡LUKS (Linux Unified Key Setup)
💡Hypervisor
💡Virtual Machine (VM)
💡Container Workflow
Highlights
Confidential Computing is the protection of data in use by performing computation in a hardware-based trusted execution environment, which provides memory encryption, integrity protection, and the ability to generate an attestation of the memory contents.
Virtual machine-based Confidential Computing has been available since at least 2017 with AMD's SEV, but adoption has been low due to the complexity of implementing it correctly.
The goal is to enable Confidential Computing for containers by extending the existing Podman and CRI-O workflow, leveraging the library-based virtual machine monitor libkrun.
The confidential workload must be an OCI image containing a LUKS-encrypted disk image with the original container contents, and the necessary configuration parameters for launching the trusted execution environment.
All components needed to launch the confidential workload must be contained within the OCI image, without requiring any node configuration from users.
Host leaks must be limited, breaking some container semantics like volume mapping and exec to maintain security guarantees.
Confidential workloads are nested within a regular container runtime, preserving container isolation while adding hardware-based memory encryption and attestation.
The confidential workload's network traffic is perceived as coming from the container context, allowing existing network policies and sidecars to work without modification.
The attestation server stores expected measurements and provides encryption keys to unlock the disk image after verifying the workload's memory measurement.
The process of creating a confidential workload involves generating a LUKS-encrypted disk image from an existing OCI image and creating a new OCI image containing that disk image and configuration parameters.
Confidential workloads run one container per trusted execution environment by design, unlike Kata Containers which can run multiple containers in a single VM.
Confidential workloads have a lower overhead by adding a small library component (libkrun) to the existing container stack, making them suitable for edge and embedded deployments.
A live demo showed converting a simple Go HTTP server into a confidential workload, preventing the host from inspecting memory or disk to extract secrets.
The demo used the oci2cw tool to create the LUKS-encrypted disk image, generate parameters, register with the attestation server, and run the confidential workload with Podman.
Confidential workloads can coexist on the same host with regular containers using different runtimes, but nesting of trusted execution environments is not supported by design.
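The TEE parameters bundled in the image, as enumerated in the talk (workload ID, vCPUs, RAM, TEE technology, attestation server URL), might look like the following. The exact key names and values are assumptions for illustration; the real schema is defined by the libkrun/oci2cw tooling.

```python
import json

# Hypothetical tee-config contents; field names are illustrative assumptions.
tee_config = {
    "workload_id": "cpd-demo",
    "cpus": 2,
    "ram_mib": 2048,
    "tee": "sev",                                   # also e.g. "snp" or "tdx"
    "attestation_url": "https://attestation.example:8443",
}

# This JSON travels inside the OCI image, so no per-node configuration
# or runtime annotations are needed to launch the workload.
serialized = json.dumps(tee_config, indent=2)
```

Because this file is part of the image, the self-containment goal is met: pushing and pulling the image moves everything the hypervisor needs along with it.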
Transcripts
All right, I think let's get started. Welcome everyone to the second day of Container Plumbing Days. The session we have right now is "Running Confidential Workloads with Podman" from Sergio. If you have any questions, please put them in the Q&A tab so we can go over them after the talk. Go ahead, Sergio.

Thank you for the introduction. Hello everyone, I'm going to present what we've done so far to enable Podman to run confidential workloads.
But first, let's start with a very brief introduction to confidential computing. This is a complex topic, but we are going to just talk about some of the highlights. The formal definition is that confidential computing is the protection of data in use by performing computation in a hardware-based, attested trusted execution environment. This definition comes from the Confidential Computing Consortium, but what does it mean in practice for us?

Well, for the case we care about today, which is virtualization-based confidential computing, it means that we need hardware that provides us with two features. One is the ability to run virtual machines with memory encryption and integrity protection, with RAM that is both encrypted and integrity-protected. The other is the ability to generate an attestation, which is a signed measurement of the memory contents, in a way that can be provided to a third party. Both abilities must be provided; one without the other doesn't make the cut.

We need memory encryption because otherwise the host would be able to easily extract secrets from the trusted execution environment. We need integrity protection because otherwise the host would be able to alter the contents of the memory of the trusted execution environment and thus potentially alter its behavior. And we also need the attestation because the initial payload that is going to be loaded into this virtual machine / trusted execution environment needs to pass through the host at some point; if we didn't have attestation, it would be very easy for the host to alter the contents of the initial payload and inject malware, or read any kind of sensitive data. So we need both of them at the same time.
Now, the truth is that virtualization-based confidential computing is not exactly new. It has been available on the market at least since 2017, when AMD introduced their EPYC servers with SEV support, and SEV and SEV-ES are in mainline Linux and ship enabled in most distributions. But the truth is that even though it's there and it provides some very interesting features, barely anyone is using it for real.

So a couple of years ago we started thinking about why that was, and we came to the conclusion that doing confidential computing the right way is complicated. It's very complicated, because yes, the hardware gives us the primitives, but it doesn't tell us how to use them: what you need to measure, how you are going to measure it, and when you are going to generate the attestation. Those are questions that depend heavily on the context.

So we started thinking about ways in which we could make confidential computing more accessible to users. And we thought that instead of trying to provide a completely different experience and a completely different workflow, and having to introduce users to that workflow, we could extend an existing workflow, such as the container workflow, which many users are very familiar with, and enable it to actually use confidential computing for running this kind of sensitive workload.

We also noticed that we could very easily do that by extending Podman and CRI-O and integrating them with libkrun. libkrun is a virtual machine monitor written in Rust that, instead of being a separate binary, an executable, is provided as a dynamic library, so you can link to it from other programs and instantly gain virtualization and confidential computing capabilities.
Now that we had an initial idea, we also needed to define exactly what we were going to provide to the users, and what conceptually a confidential workload was in our mind. We started by setting our goals.

The first one is that it must be compatible with the existing container tools and workflows. Obviously this is one of the main goals we had, because we wanted to reuse the existing workflow; that was the leitmotif. This means that this kind of workload needs to be deployed and served as an OCI image, because it needs to be something that you can manipulate with Podman, with Buildah, with Skopeo, that you can push into a registry and pull from it. So it needs to be an OCI image.

Another requirement is that all the information that the container runtime, the VMM and the hypervisor need to actually run this virtual machine / trusted execution environment must be inside that OCI image. It must be contained in it, without users needing to pass any kind of annotations or do any kind of local node configuration just to run this kind of workload. We want all the information to be self-contained in a single image.

On the other hand, we also must meet the confidential computing requirements, because otherwise it would be pointless. This means that the disk must be encrypted and it must be integrity-protected; there is no use in having RAM that is encrypted and integrity-protected if the storage is not protected and encrypted. We also need the memory contents to be easy to measure, and in this context "easy to measure" means that we need to be able to easily identify what we need to measure. In this design that's easy, because all the components that need to be measured are provided by libkrun itself.

And another requirement is that host leaks must be limited, even if that means breaking some of the conventional container semantics. In practice this means that we cannot support things such as volume mapping, and we cannot support things such as running podman exec to run a new process inside a confidential workload, because that would break the walls that we need to provide the confidential computing guarantees.
So, thinking about these goals, we came up with this idea: a confidential workload must be a regular OCI image, because again we need it to be compatible with existing container tools, but it needs to be an OCI image that contains at least the TEE-specific parameters that we need to actually deploy it and create a virtual machine acting as a trusted execution environment, plus a LUKS-encrypted disk image with the contents of the original OCI image.

What this allows is for us to provide tools for users to develop their application as a regular container, and eventually transform this container into a confidential workload, simply by picking up the contents of the container and putting them into an encrypted disk image, which in turn will be part of another, new OCI image that will be the confidential workload.

It has a well-known set of initial memory contents, because all of them are provided by libkrunfw, which provides a minimal Linux kernel, firmware and an initramfs. This also implies that upgrades are very controllable: you just need to regenerate the measurements whenever you update the library, well, the libkrun firmware, and that is something that can be coordinated very easily.

This kind of guest doesn't allow any kind of host leak; the network is the only way in which the TEE can communicate with the outside. And of course this kind of guest provides memory encryption, integrity protection and attestation by relying on the underlying confidential computing hardware. At this moment we support SEV, SEV-ES and SEV-SNP, which is the whole AMD SEV family, and we also support TDX.
Now, this is something I would like to highlight: when we are talking about integrating libkrun with crun, we are not talking about replacing the container context with a virtualization-based context; instead, we are nesting them. When you run one of these confidential workloads, Podman and crun will still create the container context: they will use cgroups, they will use namespaces, they will use SELinux to create that isolated context within the host. Then, inside that container context, is where libkrun will create the VM/TEE, and inside this VM/TEE is where the confidential workload, the application the user has developed, is going to be running. This means that we are not only protecting the VM, the trusted execution environment, against host inspection; we are also still preserving all the security guarantees that a container regularly provides.

And there is another nice advantage to this approach: all the networking activity that happens in the confidential workload is going to be perceived, from the container context, as activity that could come from any other process within a container context. In practice this means that if you are using sidecars for injecting iptables rules, or for measuring the traffic and doing any kind of traffic shaping, that still works; you don't need any kind of specific support for confidential workloads.
Now let's take a look at how each context is protected with confidential workloads. In the center of the image we have basically the same thing we've seen before, but slightly bigger: the container context managed by crun, then the VM/TEE managed by libkrun, and then the guest OS / confidential workload. This VM/TEE accesses a region of memory from the host that is transparently encrypted and decrypted by the hardware, so if the host tries to access this region of memory, it will not be able to, or if it can, it will only find encrypted garbage; this depends on the confidential computing technology the hardware is actually providing. Now, there is an exception to this rule: there will be some regions of memory that are shared with the host and will not be encrypted, for storing things such as virtio queues, but that's an implementation detail. From a conceptual point of view, we can say that all the data that is potentially sensitive is going to be encrypted.

On the right side we also have the storage, which can be any kind of arbitrary storage you can use with a regular container, and inside it we have the confidential OCI image we talked about before, which contains the confidential workload's LUKS-encrypted disk image. The encryption and decryption happen in software inside the context of the TEE: it's the confidential workload itself that opens the LUKS device and operates on it, so the host has no visibility of the data in plain text at any moment.

To be able to open this LUKS-encrypted disk image, the guest, the confidential workload, will retrieve the secrets from the component on the left side, which is called the attestation server. The attestation server is a component that is trusted for some reason, perhaps because it's running in a TEE itself; there are a lot of possible reasons we are not going to cover in this talk. It stores all the expected measurements for the confidential workloads we have registered in the system. Once the TEE starts up, it will ask the hardware to take a measurement of the memory contents, sign this measurement and hand it to the guest operating system. The guest operating system will contact the attestation server with this attestation signature. The attestation server will verify the signature and compare it against the expected measurement, and if it matches, it will pick up the secret and send it back to the guest operating system, to the confidential workload, to be used to unlock the encrypted disk image that we have in the confidential OCI image on the right side.
Before, I talked about this idea of taking a regular container, a regular OCI image, and transforming it. The actual process is fairly simple. We create a disk image, which is basically a file, in the context of the builder or the build operating system. We format this disk image with LUKS, generating a random encryption key; we expand the contents of the original OCI image into this LUKS-encrypted volume; then we create a new OCI image that contains both this disk image and the parameters we said before we need to actually launch a VM with TEE capabilities. Once this image is created, we can push it to a container registry, and we also send the encryption key that is needed to unlock the storage, the measurement, and the workload parameters to the attestation server we've seen on the previous slide and we see here at the bottom.
Now, before jumping into the demo: for the last couple of years, every time I talk about confidential workloads I am asked about the difference with Kata Containers, so this time, instead of waiting for your question, I just went ahead and made it part of the presentation.

The main difference, from a practical point of view, between confidential workloads and Kata Containers is that Kata is able to run multiple containers in the same TEE, in the same VM, while libkrun confidential workloads intend to run just one container per TEE by design. Supporting all the conventional container semantics of course comes with a cost: Kata Containers is more complex and requires more components, but it gives you these additional features. On the other side, for enabling confidential workloads we just need to add libkrun, which is a very small piece of software, to an already existing stack, which allows us to meet the confidential computing guarantees with a very small addition to the stack.

But if you ask me honestly which one to choose over the other, it really depends on what you intend to do with them. If you intend to migrate existing container deployments, Kata will give you better compatibility, so it's likely that you are going to find fewer problems that way. If you intend to do a cloud deployment that is potentially going to have many containers per pod, then again Kata will provide you with a lower footprint. On the other hand, if you intend to do a cloud deployment with many single-container pods, or no pods at all in the sense that everything is just a single container, or even a container per tenant, which is the case for Function-as-a-Service, then with libkrun you have a lower footprint and, as such, a lower TCO.

And of course, if you are aiming for an embedded deployment, which yes, they still exist, with confidential workloads you can just leverage the existing container infrastructure and only need to add libkrun to the mix. It allows you to meet the confidential computing requirements with very little baggage, and this is ideal for scenarios you have at the edge, or automotive, or in general embedded contexts.
So let's jump now to the demo, which is going to be live, so let's hope for the best.

Here I'm connected to an SEV-capable machine, which is an AMD EPYC server, and what I have here is a very simple Go program which basically opens an HTTP server and serves this secret over TLS with self-signed certificates. So what I'm going to do is generate a regular container from it: podman build with this Containerfile, and I'm going to give it the name cpd-demo. Now I'm going to run this container, exposing port 8080, and I should be able to contact it to obtain the secret.
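The demo workload is a small Go program serving a secret; a hypothetical Python equivalent (plain HTTP, no TLS, with a made-up secret value) makes the setup concrete:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = b"the-demo-secret\n"   # illustrative; the real secret is not shown

class SecretHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the secret on every request, like the demo's Go server.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(SECRET)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), SecretHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of the curl call in the demo.
body = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
```

The point of the demo is that even though the wire traffic is encrypted, the secret sits in the process's memory and on its filesystem, which is what the host attacks next.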
Now, while this communication via HTTPS is encrypted, it is very easy, from the context of the host, to extract the secrets from this container. If I go ahead and inspect the container to find out its process ID, and confirm this is basically the binary we built before, I can simply dump its memory contents, and if I inspect this memory dump I can see that the secret is there in plain text.
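The contrast this demo builds toward can be sketched directly: a dump of a regular container's process memory contains the secret in plain text, while a dump of SEV-encrypted guest RAM reveals nothing. The "dumps" below are fabricated stand-ins for real `/proc/<pid>/mem` contents.

```python
import os

SECRET = b"the-demo-secret"   # illustrative secret, as in the sketch above

# Regular container: the secret sits in the heap in plain text.
plain_dump = b"...heap..." + SECRET + b"...stack..."
# TEE: the host only ever sees ciphertext; random bytes stand in for it here.
encrypted_dump = os.urandom(len(plain_dump))

def find_secret(dump: bytes) -> bool:
    # Equivalent of grepping the memory dump for the secret.
    return SECRET in dump

found_plain = find_secret(plain_dump)          # host wins
found_encrypted = find_secret(encrypted_dump)  # host finds only garbage
```

This mirrors exactly what the live demo shows next: the same grep that succeeds against the regular container fails against the confidential workload.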
In addition to attacking it from the memory side of things, I can also do a similar thing from the storage. So I'm going to inspect the container once again, I have the process ID here, and I'm going to find out its mounts. Here they are; let's take a look at them, and we see that we have the binary here, again in plain text, so we can simply obtain the secret here too. So in this case the workload was not protected against host inspection.
so
in this case these were one of these not
protected against hosting by inspection
now we are doing the same thing but this
time with uh we are going to transform
this container into a confidential
workload so what we're going to do is
we're going to make use of this script
this tool which is called OC I choose w
uh ideally this should be uh the uh it
really will that Builder should be able
to do this for us but I for now we are
using this this script
and
to this screen we need to provide it
with the de config
which is uh we we say before that
contains the parameters that are needed
for the hypervisor to actually launch
this T which contains the workload ID
the number of pcpus the amount of ram
the technology we are going to use we
have right now we are supporting both
the CV and SMP
specific data and then the attestational
URL uh to which the world needs to
contact you send the measurement and
obtain the lock secreting change if they
measurement is successful
so I'm going to run ca27u
specifying this container this
configuration file I'm going to pass the
ACB certificate this is something
specific about the CP SMP for instance
doesn't need this uh this argument
I'm going to give the original image
name and I'm going to provide also a new
oci image which is going to be the same
one-cw
now he's asked us to run it into a
building share which we are going to do
right away
and now what it's doing is what we're
talking for is basically automatically
creating a file which is this image it's
going to generate a random encryption
key it's going to be Larry formatting it
with relax it's going to mount it
somewhere copy the contents of the
original image and then as you can see
here create a new oci image with the
this image among all the configuration
file and in action figure first value
the under certificate
now by the at the end of this process it
will also register it with the
destination server we will have run
right here for demonstration purposes
so we can see here we will save a
register wordload
And now we should be able to run it with Podman using the krun runtime, so I'm going to do exactly that, also exposing port 8080 and using the -cw image. This time it will take a little longer, because we are asking the hardware to encrypt the memory and generate the attestation, but by now it has already done that: it has sent the attestation and the measurement to the attestation server; it has received back the key, which has been used to unlock and mount the LUKS-encrypted root filesystem; and the HTTP server has started, just as it happened in the non-confidential case. Now I should be able to curl again, and yes, we have the same secret.

But let's try to extract the secret the same way we did before, from memory and from storage. Let's find out the process ID. And here we find something different, because instead of finding our binary, we see that this is running with crun-krun, which means it's running in a VM, and more precisely, in this case, a trusted execution environment. We can still dump the whole memory of this VM, of course, but since the memory is encrypted, we should not be able to find anything interesting in it.
And we don't. Now, in this case we are able to dump the memory of the VM, of the trusted execution environment, but it's of no use to us because it's encrypted; and there are other confidential computing technologies in which the host cannot even dump that memory. It depends on the technology, but the result is the same: the host is not able to find plain-text secrets in that memory.

So let's try to do the same thing with the storage. I'm going to inspect the container again and find out the mounts. Okay, here we are; let's see what's in there. This time, what we have is not the binary but the disk image, which is LUKS-encrypted, so if we try to find something interesting in it, we won't be able to do so.

Now, you may see that we have other things here that we haven't talked about before, such as the entrypoint. This entrypoint is simply a binary whose only mission is to print a message in case you attempt to run this OCI image without the krun runtime specified. That is its only purpose. There are no secrets here, there are no secrets in the TEE configuration either; the only thing carrying secrets is the encrypted disk image.

And that's basically all I had for today. I hope you enjoyed it, and if you have any questions, I'll be around.
Thank you, Sergio. There are two questions in the Q&A. The first one is: TEE, LUKS — what do these acronyms mean?

TEE is a trusted execution environment, which in this particular context means a VM that is relying on hardware to have memory encryption and attestation. And LUKS is, well, the main way in which you can encrypt disks in Linux; it is one of the mechanisms to do that.

All right, thank you. The next question is: can libkrun coexist with running VMs running KVM and/or VirtualBox?

With running what, exactly? Running VMs that are running KVM and/or VirtualBox. Well, if the question is about nesting, confidential computing does not support nesting by design, not with libkrun nor with any other technology. What you can have, on the same host, is containers that are running with encrypted TEEs alongside regular containers running with other runtimes.

All right, we are out of time over here. Thank you so much, Sergio. I think there are two more questions in the chat, if you can just take a quick look and answer them. But yeah, thank you all, and the next session is "Containers on Cars".