Running Confidential Workloads with Podman - Container Plumbing Days 2023

Red Hat Community
13 Apr 2023 · 29:24

Summary

TL;DR: Sergio presents a solution for running confidential workloads with Podman, enabling hardware-based memory encryption and attestation for container applications. By nesting a libkrun-created virtual machine (the trusted execution environment) inside the regular container context, the approach offers a compatible workflow that leverages existing container tools while providing confidential computing guarantees. The demo illustrates transforming a regular container into an encrypted, integrity-protected workload, showcasing the protection against host-level memory and storage inspection. Addressing comparisons with other virtualization-based approaches such as Kata Containers, Sergio highlights the strengths of this approach for low-footprint, single-container deployments prevalent in cloud and edge scenarios.

Takeaways

  • 😄 Confidential computing protects data and code by performing computations in a hardware-based trusted execution environment (TEE).
  • 🔐 It requires hardware support for memory encryption, integrity protection, and remote attestation.
  • 🐳 The goal is to enable running confidential workloads within the existing container workflow using Podman and CRI-O.
  • 📦 A confidential workload is an OCI image containing an encrypted disk image and TEE parameters.
  • 🔒 The disk image is encrypted with LUKS (via dm-crypt), protecting data at rest, while RAM is encrypted by the hardware.
  • ✅ Remote attestation verifies the initial memory state before decryption keys are released.
  • 🧩 Confidential workloads are nested inside regular container contexts, preserving existing isolation.
  • 🌐 Network activity from confidential workloads appears like regular container traffic.
  • ⚖️ Kata Containers and confidential workloads trade off compatibility against footprint and overhead.
  • 🔬 A live demo showcased the confidentiality guarantees against memory and disk inspection attacks.

Q & A

  • What is confidential computing?

    -Confidential computing is the protection of data and code by performing computation in a hardware-based trusted execution environment. It provides memory encryption, integrity protection, and the ability to generate attestations of the memory contents.

  • Why is confidential computing important?

    -Confidential computing is important because it prevents the host system from accessing sensitive data and code running in the trusted execution environment, providing a secure isolated environment for running sensitive workloads.

  • What are the main goals of enabling confidential workloads with Podman?

    -The main goals are compatibility with the existing container tools and workflows, self-contained OCI images with all necessary information, meeting the confidential computing requirements (encrypted and integrity-protected disk and measurable memory contents), and limiting host leaks.

  • How does the proposed solution work?

    -The solution involves creating a LUKS-encrypted disk image containing the contents of the original OCI image, and then creating a new OCI image that includes this encrypted disk image and the parameters needed to launch a trusted execution environment with libkrun.

  • How is the confidential workload protected?

    -The confidential workload's memory is encrypted and integrity-protected by the hardware, and the disk image is LUKS-encrypted and mounted inside the trusted execution environment, preventing the host from accessing sensitive data.

  • What is the role of the attestation server?

    -The attestation server stores the expected measurements for registered confidential workloads. It verifies the attestation from the workload's trusted execution environment and, if the measurement matches, releases the encryption key that unlocks the disk image (a conceptual sketch of this exchange appears at the end of this Q&A list).

  • How does this solution differ from Kata Containers?

    -Kata Containers can run multiple containers in the same VM, while this solution intends to run one container per trusted execution environment by design. This solution aims to provide confidential computing guarantees with a smaller stack addition.

  • What are the advantages of this solution for specific deployment scenarios?

    -For single-container cloud deployments or Function-as-a-Service scenarios, this solution provides a lower footprint and lower TCO. For edge or embedded deployments, it allows meeting the confidential computing requirements with a minimal addition to the existing container infrastructure.

  • Can this solution coexist with other virtualization technologies like KVM or VirtualBox?

    -Yes, this solution can coexist on the same host with other containers running different runtimes, but by design, it does not support nesting trusted execution environments.

  • What is the purpose of the entrypoint in the confidential workload image?

    -The entrypoint is a binary that prints a message if someone attempts to run the OCI image without specifying the krun runtime. It serves as a safeguard against inadvertently running the confidential workload without the proper runtime.
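The key-release exchange described in the attestation-server answer above can be pictured with a short shell sketch. Everything here is illustrative: the attestation.example.com URL, the /attest endpoint, the JSON fields and the measure-memory helper are invented for the example and are not the API of the server used in the talk; only curl and cryptsetup are real commands.

    #!/usr/bin/env bash
    # Conceptual mock of the attestation / key-release handshake (not a real API).
    set -euo pipefail

    ATTESTATION_URL="https://attestation.example.com"   # hypothetical server
    WORKLOAD_ID="cpd-demo-cw"                            # hypothetical workload ID

    # 1. Inside the TEE: ask the hardware for a signed measurement of the initial
    #    memory contents. Stand-in command; real guests go through the SEV/SNP/TDX
    #    driver for this.
    MEASUREMENT=$(measure-memory --sign)                 # hypothetical helper

    # 2. Send the signed measurement to the attestation server; if it matches the
    #    registered value, the server answers with the disk passphrase.
    PASSPHRASE=$(curl -sSf -X POST "${ATTESTATION_URL}/attest" \
      -H 'Content-Type: application/json' \
      -d "{\"workload_id\":\"${WORKLOAD_ID}\",\"measurement\":\"${MEASUREMENT}\"}")

    # 3. Use the passphrase to open and mount the LUKS-encrypted disk image
    #    inside the guest; the host never sees the plaintext.
    echo -n "${PASSPHRASE}" | cryptsetup open /disk.img workload-root --key-file=-
    mount /dev/mapper/workload-root /mnt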

Outlines

00:00

🔐 Introduction to Confidential Computing

This paragraph introduces the concept of Confidential Computing, explaining its formal definition and practical implications for protecting sensitive data and computations within a hardware-based trusted execution environment. It highlights the key requirements of memory encryption, integrity protection, and attestation for confidential workloads. The paragraph also mentions the availability of virtualization-based Confidential Computing since 2017, but notes the limited adoption due to complexities in implementation.

05:02

🎯 Goals and Requirements for Confidential Workloads

This paragraph outlines the set of goals and requirements for enabling confidential workloads in a container environment. The primary goals include compatibility with existing container tools and workflows, self-containment of all necessary information within the OCI image, meeting Confidential Computing requirements (encrypted and integrity-protected disk, measurable memory contents), and limiting host leaks while potentially breaking some container semantics. The idea is to transform a regular OCI image into a confidential workload by encrypting its contents into a LUKS-encrypted disk image, bundled with configuration parameters within a new OCI image.

10:02

🛠️ Implementation Approach and Nested Containers

This paragraph explains the implementation approach of nesting confidential workloads within regular container contexts. Podman and CRI-O still create the container environment with namespaces, cgroups, and SELinux, but libkrun then creates the trusted execution environment (VM-TE) within this container context. The confidential workload runs inside this VM-TE, preserving the security guarantees of containers while adding hardware-based protection. This nested approach also allows seamless integration with existing container networking and traffic management tools.
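A minimal sketch of what launching such a nested workload could look like from the host, assuming libkrun's OCI runtime is installed as /usr/bin/krun and the image is published as quay.io/example/cpd-demo-cw (both names are assumptions, not details from the talk):

    # Podman still builds the regular container context (namespaces, cgroups,
    # SELinux); the krun runtime then creates the VM/TEE inside that context.
    podman run -d --name cw-demo \
      --runtime /usr/bin/krun \
      --publish 8080:8080 \
      quay.io/example/cpd-demo-cw:latest

    # From the host, the workload's network activity looks like ordinary
    # container traffic, so existing policies and sidecars keep working.
    podman port cw-demo
    curl -k https://localhost:8080/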

15:03

🔄 Transformation Process: From OCI to Confidential Workload

This paragraph walks through the actual process of transforming a regular OCI image into a confidential workload. It involves creating a LUKS-encrypted disk image, generating a random encryption key, expanding the original OCI image contents into this encrypted volume, and creating a new OCI image containing the encrypted disk image and necessary configuration parameters. The encryption key and workload parameters are then registered with an attestation server for future verification and key retrieval.
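A rough sketch of the steps this transformation automates, using standard utilities; this is a conceptual outline under assumed names and sizes (run as root), not the actual oci2cw implementation:

    set -euo pipefail
    IMG=disk.img        # assumed file name
    KEYFILE=luks.key    # assumed key file name

    # 1. Create an empty disk image and a random encryption key.
    truncate -s 1G "$IMG"
    dd if=/dev/urandom of="$KEYFILE" bs=64 count=1

    # 2. Format the image as LUKS, open it and put a filesystem on it.
    cryptsetup luksFormat --batch-mode "$IMG" "$KEYFILE"
    cryptsetup open --key-file "$KEYFILE" "$IMG" cw-root
    mkfs.ext4 /dev/mapper/cw-root

    # 3. Expand the original OCI image's root filesystem into the encrypted volume.
    mount /dev/mapper/cw-root /mnt
    ctr=$(podman create localhost/cpd-demo)    # assumed image name
    podman export "$ctr" | tar -x -C /mnt
    podman rm "$ctr"
    umount /mnt
    cryptsetup close cw-root

    # 4. A new OCI image is then built that bundles disk.img plus the TEE
    #    parameters, and the key and expected measurement are registered with
    #    the attestation server (that part depends on the server's own API and
    #    is omitted here).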

20:03

🧪 Live Demo: Running a Confidential Workload

This paragraph presents a live demonstration of running a confidential workload. It starts with a simple Go program serving a secret over HTTPS, showing how the secret can be easily extracted from memory and storage in a regular container context. It then transforms the container into a confidential workload using the oci2cw tool, registers it with the attestation server, and runs it with Podman. Attempts to extract secrets from memory and storage fail due to encryption and hardware protection, demonstrating the effectiveness of Confidential Computing.
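The host-side inspection attempted in the demo can be approximated with standard tools; container names are assumptions, and the exact commands typed on stage are not visible in the captions:

    # Find the main PID of the running container.
    PID=$(podman inspect --format '{{.State.Pid}}' cw-demo)

    # Dump the process memory with gdb's gcore and search it for the secret.
    gcore -o /tmp/dump "$PID"
    strings "/tmp/dump.$PID" | grep -i secret || echo "nothing found"

    # For the regular container the secret shows up in plain text; for the
    # confidential workload the dump only contains ciphertext, so nothing is found.

    # Storage side: the container's writable layer as seen from the host
    # (overlay storage driver assumed).
    podman inspect --format '{{.GraphDriver.Data.MergedDir}}' cw-demo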

25:10

🔍 Addressing Questions and Closing Remarks

This final paragraph addresses questions from the audience, explaining the acronyms TEE (Trusted Execution Environment) and LUKS (Linux Unified Key Setup), and clarifying that libkrun does not support nesting VMs running KVM or VirtualBox by design. It also mentions the ability to run both confidential and non-confidential containers on the same host. The talk concludes with closing remarks.

Keywords

💡Confidential Computing

Confidential Computing refers to the protection of data in use by performing computation within a hardware-based Trusted Execution Environment (TEE). It ensures data is processed in a way that is encrypted and secure from external access, including the host system. In the script, Confidential Computing is introduced as a crucial component for running sensitive workloads with enhanced security, leveraging hardware features for memory encryption, integrity protection, and attestation to prevent unauthorized access and manipulation.

💡Podman

Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. It operates with an emphasis on security and can run containers without root privileges. In the context of the video, Podman is extended to support running confidential workloads, integrating with virtualization technologies to enhance data security while maintaining compatibility with standard container workflows.

💡TEE (Trusted Execution Environment)

A Trusted Execution Environment (TEE) provides a secure area within a main processor, ensuring that the code and data loaded inside are protected with respect to confidentiality and integrity. The script discusses TEEs in the context of creating secure virtual machines (VMs) for confidential computing, where memory encryption and integrity protection are essential for safeguarding sensitive workloads from the host or other malicious entities.

💡Memory Encryption and Integrity Protection

Memory Encryption and Integrity Protection are hardware features that ensure data stored in RAM is encrypted and cannot be tampered with. These features are pivotal in the realm of Confidential Computing, as they prevent unauthorized users, including those with physical access to the hardware, from reading or modifying sensitive data. The script emphasizes the need for these capabilities to secure the Trusted Execution Environments where confidential workloads are run.

💡Attestation

Attestation in the context of Confidential Computing is the process by which a device or a workload proves its identity and integrity to a remote verifier. This involves generating cryptographic evidence of the system's state, which can be verified to ensure it has not been tampered with. The video script describes attestation as essential for establishing trust between the confidential workload and external parties, ensuring that the software has not been altered and is safe to receive sensitive data.

💡OCI (Open Container Initiative)

The Open Container Initiative (OCI) is a project under the Linux Foundation to create open standards for container formats and runtimes. In the video, the concept of creating OCI images that are compatible with existing container tools, yet capable of running as confidential workloads, is crucial. It allows for the seamless integration of confidential computing capabilities into the broader ecosystem of containerized applications.

💡LUKS (Linux Unified Key Setup)

LUKS (Linux Unified Key Setup) is the standard specification for disk encryption on Linux. In the script, the creation of LUKS-encrypted disk images is discussed as the method for securing the persistent storage of confidential workloads, ensuring that data remains encrypted and protected even at rest.

💡Hypervisor

A Hypervisor is software, firmware, or hardware that creates and runs virtual machines (VMs) by separating the physical hardware from the operating system and applications. The video explains the integration of a minimalist hypervisor within the container context through Libkrun, enabling the execution of confidential workloads in a secure, virtualized environment without replacing traditional container functionalities.

💡Virtual Machine (VM)

A Virtual Machine (VM) is an emulation of a computer system that provides the functionality of a physical computer. It enables multiple instances of operating systems to run on a single physical hardware host. In the context of the script, VMs are crucial for running confidential workloads within a TEE, offering a layer of abstraction and security that isolates these workloads from the host system and other VMs.

💡Container Workflow

The container workflow refers to the processes involved in developing, deploying, and managing containerized applications, including creation, orchestration, scaling, and networking. The video's narrative focuses on extending this familiar workflow to incorporate Confidential Computing capabilities, enabling developers to secure sensitive workloads without deviating from the established practices and tools in the container ecosystem.

Highlights

Confidential Computing is the protection of data in use by performing computation in a hardware-based trusted execution environment, which provides memory encryption, integrity protection, and the ability to generate an attestation of the memory contents.

Virtual machine-based Confidential Computing has been available since at least 2017 with AMD's SEV, but adoption has been low due to the complexity of implementing it correctly.

The goal is to enable Confidential Computing for containers by extending the existing Podman and CRI-O workflow, leveraging libkrun, a library-based Virtual Machine Monitor.

The confidential workload must be an OCI image containing a LUKS-encrypted disk image with the original container contents, and the necessary configuration parameters for launching the trusted execution environment.

All components needed to launch the confidential workload must be contained within the OCI image, without requiring any node configuration from users.

Host leaks must be limited, breaking some container semantics like volume mapping and exec to maintain security guarantees.

Confidential workloads are nested within a regular container runtime, preserving container isolation while adding hardware-based memory encryption and attestation.

The confidential workload's network traffic is perceived as coming from the container context, allowing existing network policies and sidecars to work without modification.

The attestation server stores expected measurements and provides encryption keys to unlock the disk image after verifying the workload's memory measurement.

The process of creating a confidential workload involves generating a LUKS-encrypted disk image from an existing OCI image and creating a new OCI image containing that disk image and configuration parameters.

Confidential workloads run one container per trusted execution environment by design, unlike Kata Containers which can run multiple containers in a single VM.

Confidential workloads have a lower overhead, adding only libkrun, a small library, to the existing container stack, making them suitable for edge and embedded deployments.

A live demo showed converting a simple Go HTTP server into a confidential workload, preventing the host from inspecting memory or disk to extract secrets.

The demo used the oci2cw tool to create the LUKS-encrypted disk image, generate the parameters, register the workload with the attestation server, and run the confidential workload with Podman.

Confidential workloads can coexist on the same host with regular containers using different runtimes, but nesting of trusted execution environments is not supported by design.
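As a sketch of that coexistence point, the same host can run a regular container with the default runtime next to a confidential workload launched with the krun runtime; the image names and runtime path below are assumptions for illustration:

    # Regular container, default OCI runtime (crun/runc):
    podman run -d --name plain-web --publish 8080:8080 quay.io/example/cpd-demo

    # Confidential workload, krun runtime (needs SEV/SEV-SNP- or TDX-capable hardware):
    podman run -d --name confidential-web --runtime /usr/bin/krun \
      --publish 8443:8080 quay.io/example/cpd-demo-cw

    # What is not supported, by design, is nesting another TEE inside either one.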

Transcripts

00:02

All right, I think let's get started. Welcome, everyone, to the second day of Container Plumbing Days. The session we have right now is "Running Confidential Workloads with Podman" from Sergio. If you have any questions, please put them in the Q&A tab so we can go over them after the talk. Go ahead, Sergio.

00:22

Thank you for the introduction. Hello everyone, I'm going to present what we've done so far to enable Podman to run confidential workloads. But first, let's start with a very brief introduction to confidential computing. This is a complex topic, but we are only going to cover some of the highlights. The formal definition is that confidential computing is the protection of data in use by performing computation in a hardware-based trusted execution environment. This definition comes from the Confidential Computing Consortium, but what does it mean in practice for us? For the case we care about today, which is virtualization-based confidential computing, it means that we need hardware that provides us with two features. One is the ability to run virtual machines with memory encryption and integrity protection, with RAM that is both encrypted and integrity-protected. The other is the ability to generate an attestation, which is a signed measurement of the memory contents, in a way that can be provided to a third party. Both abilities must be provided; one without the other doesn't make the cut. We need memory encryption because otherwise the host would be able to easily extract secrets from the trusted execution environment. We need integrity protection because otherwise the host would be able to alter the contents of the memory of the trusted execution environment and, as such, potentially alter its behavior. And we also need attestation because the initial payload that is going to be loaded into this virtual machine / trusted execution environment needs to pass through the host at some point; if we didn't have attestation, it would be very easy for the host to alter the contents of the initial payload and inject malware, or read any kind of sensitive data. So we need both of them at the same time.

02:43

Now, the truth is that virtualization-based confidential computing is not exactly new. It has been available on the market at least since 2017, when AMD introduced their EPYC servers with SEV support, and SEV and SEV-ES are in mainline Linux and shipped enabled in most distributions. But the truth is that, even though it's there and it provides some very interesting features, barely anyone is using it for real. So a couple of years ago we started thinking about why that was, and we came to the conclusion that doing confidential computing the right way is complicated. It's very complicated, because yes, the hardware gives us the primitives, but it doesn't tell us how to use them: what you need to measure, how you are going to measure it, and when and where you are going to generate the attestation. Those are questions that depend heavily on the context. So we started thinking about ways in which we could make confidential computing more accessible to users, and we thought that, instead of trying to provide a completely different experience and a completely different workflow, and having to introduce users to that workflow, we could extend an existing workflow, such as the container workflow, which many users are very familiar with, and enable it to actually use confidential computing for running this kind of sensitive workload. We also noticed that we could very easily do that by extending Podman and CRI-O and integrating them with libkrun. libkrun is a Virtual Machine Monitor written in Rust that, instead of being a separate executable binary, is provided as a dynamic library, so you can link to it from other programs and instantly gain virtualization and confidential computing capabilities.

04:55

So now that we had an initial idea, we also needed to define exactly what we were going to provide to users and what, conceptually, a confidential workload was in our mind. We started by setting our set of goals. The first one is that it must be compatible with the existing container tools and workflows. Obviously this is one of the main goals we had, because we wanted to reuse the existing workflow; that was the leitmotif. This means that this kind of workload needs to be deployed and served as an OCI image, because it needs to be something that you can manipulate with Podman, with Buildah, with Skopeo, that you can push into a registry and pull from it. So it needs to be an OCI image. Another requirement is that all the information that the container engine, the VMM and the hypervisor need to actually run this virtual machine acting as a trusted execution environment must be inside that OCI image. It must be contained in it, without users needing to pass any kind of annotations or do any kind of local node configuration just to run this kind of workload; we want all the information to be self-contained in a single image. But on the other hand, we also must meet the confidential computing requirements, because otherwise it would be pointless. This means that the disk must be encrypted and it must be integrity-protected; there is no use in having RAM that is encrypted and integrity-protected if the storage is not protected and encrypted. We also need the memory contents to be easy to measure, and in this context "easy to measure" means that we need to be able to easily identify what we need to measure. In this design that is easy, because all the components that need to be measured are provided by libkrun itself. And another requirement is that host leaks must be limited, even if that means breaking some of the conventional container semantics. In practice this means that we cannot support things such as volume mapping, and we cannot support things such as running "podman exec" to run a new process inside a confidential workload, because that would break the walls that we need in order to provide the confidential computing guarantees.

07:29

So, thinking about these goals, we came up with this idea: a confidential workload must be a regular OCI image, because again we need it to be compatible with existing container tools, but it needs to be an OCI image that contains at least the TEE-specific parameters that we need to actually deploy it and create a virtual machine acting as a trusted execution environment, and a LUKS-encrypted disk image with the contents of the original OCI image. What this allows is for users to develop their application as a regular container and eventually transform this container into a confidential workload, simply by picking up the contents of the container and putting them into an encrypted disk image that, in turn, will be part of another, new OCI image that will be the confidential workload. It has a well-known set of initial memory contents, because all of them are provided by libkrunfw, which provides a minimal Linux kernel, firmware and an init system. This also implies that upgrades are very controllable: you just need to regenerate the measurements when you update the libkrun or libkrunfw packages, and this is something that can be coordinated very easily. This context doesn't allow any kind of host leaks; the network is the only way in which the TEE can communicate with the outside. And, of course, this kind of context provides memory encryption, integrity protection and attestation by relying on the underlying confidential computing hardware. At this moment we support SEV and SEV-SNP, the whole AMD SEV family, and we also support TDX.

09:47

Now, this is something I would like to highlight: when we are talking about integrating libkrun with the container runtime, we are not talking about replacing the container context with a virtualization-based context; instead, we are nesting them. So when you run one of these confidential workloads, Podman and CRI-O will still create the container context: they will use cgroups, they will use namespaces, they will use SELinux to create that isolated context within the host. And then, inside that container context, is where libkrun will create the VM/TEE, and inside this VM/TEE is where the confidential workload, the application the user has developed, is going to be running. This means that we are not only protecting the VM, the trusted execution environment, against host inspection; we are still preserving all the security guarantees that a container regularly provides. And there is another nice advantage to this approach: all the networking activity that happens in the confidential workload is going to be perceived, from the container context, as activity that could come from any other process within a container context. In practice, this means that if you are using sidecars for injecting iptables rules, or for measuring the traffic and doing any kind of traffic shaping, that still works out of the box; you don't need any kind of specific support for confidential workloads.

11:36

Now let's take a look at how each data context is protected with confidential workloads. In the center of the diagram we have basically the same thing we've seen before, but slightly bigger: the container context managed by the container runtime, then the VM/TEE managed by libkrun, and then the guest OS with the confidential workload. This VM/TEE accesses a region of memory from the host that is transparently encrypted and decrypted by the hardware, so if the host tries to access this region of memory, either it will not be able to access it at all, or it will only find encrypted garbage; this depends on the confidential computing technology the host hardware is actually providing. There is an exception to this rule: there will be some regions of memory that are shared with the host and are not encrypted, used for storing things such as the virtio queues, but that is an implementation detail. From a confidentiality point of view we can say that the whole memory, all data that is potentially sensitive, is going to be encrypted. On the right side we also have the storage, which can be any kind of arbitrary storage you could use with a regular container, and inside it we have the confidential OCI image we talked about before, which contains the confidential workload's LUKS-encrypted disk image. The encryption and decryption happen in software inside the context of the TEE, so it is the confidential workload itself that opens the LUKS device and operates on it; the host has no visibility of the data in plain text at any moment. And to be able to open this LUKS-encrypted disk image, the guest, the confidential workload, will retrieve the secret from the component on the left side, which is called the attestation server. The attestation server is a component that is trusted for some reason, perhaps because it is running inside a TEE itself; there can be a lot of reasons, and we are not going to cover that in this talk, but it is the one that stores all the expected measurements for the confidential workloads we have registered in the system. So once the TEE starts up, it will ask the hardware to take a measurement of the memory contents, sign this measurement and hand it to the guest operating system. The guest operating system will contact the attestation server with this signed attestation and send it to it. The attestation server will verify the signature and compare it against the expected measurement, and if it matches, it will pick up the secret and send it back to the guest operating system, to the confidential workload, to be used to unlock the encrypted disk image that we have in the confidential OCI image on the right side.

14:39

We talked before about this idea of taking a regular container, a regular OCI image, and transforming it. The actual process is fairly simple. What happens is that we need to create a disk image, which is basically a file in the context of the builder, or the build operating system. We format this disk image as LUKS and generate a random encryption key. We expand the contents of the original OCI image into this LUKS-encrypted volume. Then we create a new OCI image that contains both this disk image and the parameters we mentioned before, which we need to actually launch the VM with TEE capabilities. And once this image is created, we are going to potentially push it to a container registry, and also send the encryption key that is needed to unlock the storage, the measurement, and the workload parameters to the attestation server we've seen on the previous slide, which we see here at the bottom.

15:50

Now, before jumping into the demo: for the last couple of years, every time I talked about confidential workloads I was asked about the difference with Kata Containers, so this time, instead of waiting for your question, I just went ahead and made it part of the presentation. The main difference, from a practical point of view, between confidential workloads and confidential containers is that Kata is able to run multiple containers in the same TEE, in the same VM, while libkrun-based confidential workloads intend to run just one container per TEE by design. Kata does this to support all the conventional container semantics, and that of course comes with a cost: Kata Containers / confidential containers is more complex and requires more components, but it gives you these additional features. On the other side, for enabling confidential workloads we just need to add libkrun, which is a very small piece of software, to an already existing stack, which allows us to meet the confidential computing guarantees with a very small addition to the stack. But if you ask me honestly which one to choose, it really depends on what you intend to do with them. If you intend to migrate existing container deployments, Kata will give you better compatibility, so it is likely that you are going to find fewer problems that way. If you intend to do a cloud deployment that is potentially going to have many containers per pod, then again Kata will provide you with a lower footprint. On the other hand, if you intend to deploy a cloud deployment with many single-container pods, or no pods at all, in the sense that everything is just a single container, or even one container per tenant, which is the case of Function-as-a-Service, then with libkrun you have a lower footprint and, as such, a lower TCO. And of course, if you are aiming for an embedded deployment, which yes, those still exist, with confidential workloads you can just leverage the existing container infrastructure and only need to add libkrun to the mix, so it allows you to meet the confidential computing requirements with very little baggage. This is ideal for scenarios such as the edge, automotive, or embedded contexts in general.

18:49

So let's jump now to the demo, which is going to be live, so let's hope for the best. Here I'm connected to an SEV-capable machine, which is an AMD EPYC server, and what I have here is a very simple Golang program which basically opens an HTTPS server and serves this secret over TLS, with some local certificates. So what I'm going to do is generate a regular container from it: I build the Containerfile and give the image the name cpd-demo. Now I'm going to run this container, exposing port 8080, and I should be able to contact it to obtain the secret. Now, while this communication over HTTPS is encrypted, it is very easy, from the context of the host, to extract the secrets from this container. So if I go ahead and inspect the container to find out its process ID, and confirm this is basically the binary we built before, I can simply dump its memory contents, and if I inspect this memory dump I can see that the secret is there in plain text. In addition to attacking it from the memory side of things, I can also do a similar thing from the storage side. So I'm going to inspect the container once again; I have the process ID here, so I'm going to find its mounts. OK, here they are, and if we take a look at them we see that we have the binary here, again in plain text, so we can simply obtain the secret here too. So in this case the workload was not protected against host inspection.
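The build-and-run part of this segment, approximated as commands; the image name cpd-demo, the port and the local certificate handling are assumptions based on the captions, not an exact replay of what was typed on stage:

    # Build the regular container and run it.
    podman build -t cpd-demo -f Containerfile .
    podman run -d --name plain --publish 8080:8080 cpd-demo

    # Fetch the secret over HTTPS (-k because the demo uses a local certificate).
    curl -k https://localhost:8080/

    # The storage-side attack described above boils down to reading the
    # container's root filesystem directly from the host, for example:
    podman mount plain    # prints the merged rootfs path (needs root)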

21:35

Now we are going to do the same thing, but this time we are going to transform this container into a confidential workload. To do that we are going to make use of a tool called oci2cw. Ideally, Buildah should be able to do this for us, but for now we are using this script. To this tool we need to provide the TEE config, which, as we said before, contains the parameters the hypervisor needs to actually launch the TEE: the workload ID, the number of vCPUs, the amount of RAM, the technology we are going to use (right now we support both SEV and SEV-SNP), some technology-specific data, and then the attestation server URL, which the workload needs to contact to send the measurement and obtain the LUKS secret in exchange, if the measurement is successful.
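A hypothetical TEE config with the fields enumerated here (workload ID, vCPUs, RAM, technology, technology-specific data, attestation URL). The file name, format and key names are invented for illustration; the real oci2cw configuration schema may differ:

    # Write a hypothetical TEE config (field names are illustrative only).
    printf '%s\n' \
      '{' \
      '  "workload_id": "cpd-demo-cw",' \
      '  "cpus": 2,' \
      '  "ram_mib": 2048,' \
      '  "tee": "sev",' \
      '  "tee_data": { "policy": "0x0" },' \
      '  "attestation_url": "https://attestation.example.com"' \
      '}' > tee-config.json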

22:37

So I'm going to run oci2cw, specifying this configuration file. I'm going to pass the SEV certificate chain, which is something specific to SEV (SNP, for instance, doesn't need this argument), I'm going to give the original image name, and I'm going to provide a new OCI image name, which is going to be the same one with a "-cw" suffix. Now it asks us to run it inside a "buildah unshare" session, which we are going to do right away. And now what it is doing is what we were talking about before: it automatically creates a file which is the disk image, generates a random encryption key, formats the file with LUKS, mounts it somewhere, copies in the contents of the original image, and then, as you can see here, creates a new OCI image with the disk image, the configuration file and, in this case, also the SEV certificate chain. At the end of this process it also registers the workload with the attestation server, which we have running right here for demonstration purposes. So we can see here that we have a registered workload, and now we should be able to run it using Podman, using the krun runtime. So I'm going to do exactly that, also publishing the 8080 port, with the cpd-demo-cw image, and that should be it. Now, this time it will take a bit longer, because we are asking the hardware to encrypt the memory and generate the attestation, but by now the TEE has already done that: it has already sent the attestation and the measurement to the attestation server, it has received back the key, which has been used to unlock and mount the LUKS-encrypted root filesystem, and the HTTPS server has started, just as it happened in the non-confidential case. Now I should be able to run curl again against this one, and yes, we get the same secret.

24:51

But let's try to extract the secret the same way we did before, from memory and from storage. Let's find out the process ID... and here we find something different, because instead of finding our example binary, we see that this is running with krun, which means it is running in a VM, and more precisely in a trusted execution environment in this case. We can still dump the whole memory of this VM, of course, but since the memory is encrypted, we should not be able to find anything interesting in it... and we don't. Now, in this case we are able to dump the memory of the VM, of the trusted execution environment, but it is of no use to us because it is encrypted; there are other confidential computing technologies in which the host cannot even dump that memory. It depends on the technology, but the result is the same: the host is not able to find plain-text secrets in that memory. So let's try to do the same thing with the storage. I'm going to inspect the container again and find out the mount points... OK, here we are. Let's see what's in there, and this time what we have is not the binary but the disk image, which is LUKS-encrypted, so if we try to find something interesting in it, we won't be able to do so. Now, you may see that we have other things here that we haven't talked about before, such as the entrypoint. This entrypoint is simply a binary whose only mission is to print a message in case you attempt to run this OCI image without specifying the krun runtime; that is its only purpose. There are no secrets here, and there are no secrets in the certificate chain either; the only thing holding secrets is the LUKS-encrypted disk image. And that's basically all I had for today. I hope you enjoyed it, and if you have any questions, I'll be around.
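What "nothing interesting in the storage" looks like from the host can be sketched with standard tools; the disk.img path is an assumption about how the encrypted image appears inside the confidential OCI image:

    # The artifact found in the container's layer is a LUKS volume, not a rootfs.
    file disk.img                  # typically reports something like "LUKS encrypted file"
    cryptsetup luksDump disk.img   # shows header metadata only; no data without the key

    # Scraping the image for the secret finds nothing, because the payload is
    # only ever decrypted inside the TEE.
    strings disk.img | grep -i secret || echo "no plaintext secret found"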

27:45

Host: Thank you, Sergio. There are two questions in the Q&A. The first one is: TEE, LUKS, what do these acronyms mean?

Sergio: A TEE is a trusted execution environment, which in this particular context means a VM that is relying on hardware to provide memory encryption and attestation. And LUKS is the main way in which you can encrypt disks in Linux; it is one of the mechanisms to do that.

Host: All right, thank you. And the next question is: can libkrun coexist with running VMs running KVM and/or VirtualBox?

Sergio: With running what, exactly?

Host: Running VMs that are running KVM and/or VirtualBox.

Sergio: Well, if the question is about nesting, confidential computing does not support nesting by design, not with libkrun, not with any other technology. What you can have is, on the same host, containers that are running with encrypted TEEs and regular containers running with other runtimes.

Host: All right, so we are out of time over here. Thank you so much, Sergio, but I think there are two more questions in chat, if you can just take a quick look and answer them. But yeah, thank you all, and the next session is Containers on Cars.

Rate This
β˜…
β˜…
β˜…
β˜…
β˜…

5.0 / 5 (0 votes)

Related Tags
Confidential ComputingPodmanContainer SecurityHardware EncryptionAttestationCRI-OlibvirtVirtualizationOpen SourceCloud Security