Learn Docker in 7 Easy Steps - Full Beginner's Tutorial

Fireship
24 Aug 2020 · 11:01

Summary

TL;DR: This video script offers an in-depth tutorial on Docker for developers, focusing on containerizing a Node.js application. It covers Docker's basics, including Dockerfiles, images, and containers, and addresses advanced topics like port forwarding and volumes. The tutorial guides viewers through installation, writing Dockerfiles, building and running images, and managing multi-container setups with Docker Compose, aiming to demystify Docker and empower developers to package and deploy applications consistently across any environment.

Takeaways

  • 📦 Docker is a tool for packaging software so it can run on any hardware, helping to standardize development environments.
  • 🛠️ The core components of Docker are Dockerfiles, images, and containers, which work together to build, distribute, and run applications consistently.
  • 💡 Docker solves the 'it works on my machine' problem by allowing developers to define and reproduce environments with Dockerfiles.
  • 📝 A Dockerfile is a script with instructions to build a Docker image, which serves as a template for creating containers.
  • 🚀 The process of Dockerizing an application involves creating a Dockerfile, installing dependencies, and defining how the application runs within a container (a minimal Dockerfile sketch follows this list).
  • 🖥️ Docker Desktop is recommended for Mac or Windows users, providing both command-line access and a GUI for managing containers.
  • 🔍 Docker extensions for IDEs like VS Code enhance development by providing language support for Dockerfiles and integration with registries.
  • 🔑 The 'FROM' instruction in a Dockerfile starts with a base image, which can be an official image like 'node:12' to simplify setup.
  • 🔄 Efficient layer caching in Dockerfiles is achieved by installing dependencies before copying the source code, minimizing rebuild times.
  • 🔒 A '.dockerignore' file excludes unnecessary files like 'node_modules' from being copied into the Docker image, keeping the image lean.
  • 🔌 Port forwarding is essential for accessing applications running inside containers from the host machine, using the '-p' flag in 'docker run'.
  • 🔄 Volumes in Docker allow data persistence and sharing across containers, useful for databases and other stateful services.
  • 🔧 Docker Compose simplifies the management of multi-container applications by defining and running services, volumes, and networking in a YAML file.
  • 🔄 Debugging Dockerized applications can be done through logs, direct CLI access within containers, or using Docker Desktop's GUI.
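
To make these takeaways concrete, here is a minimal sketch of the kind of Dockerfile the video builds for its Node.js demo app; the start script and port value follow the walkthrough, but treat them as illustrative assumptions:

    FROM node:12
    WORKDIR /app
    # install dependencies first so this layer is cached between builds
    COPY package*.json ./
    RUN npm install
    # copy the app source last (node_modules excluded via .dockerignore)
    COPY . .
    # the app reads this variable to choose its listening port
    ENV PORT=8080
    EXPOSE 8080
    # exec form: runs the process directly, without a shell session
    CMD ["npm", "start"]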

Q & A

  • What is one of the leading causes of imposter syndrome among developers mentioned in the script?

    -Not knowing Docker and feeling left out during discussions about advanced topics like Kubernetes, swarms, and sharding at parties.

  • What is the main goal of the video?

    -To teach developers everything they need to know about Docker, including installation, Dockerfile instructions, and advanced concepts like port forwarding, volumes, and Docker Compose.

  • What are the three essential concepts in Docker that one must understand?

    -The three essential concepts are Dockerfiles, images, and containers. A Dockerfile is a blueprint for building a Docker image, an image is a template for running containers, and a container is a running process of that image.
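
    A rough command-line sketch of that relationship (the image name here is illustrative):

        docker build -t demo-app .    # Dockerfile -> image
        docker run demo-app           # image -> container (a running process)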

  • Why is Docker useful for developers?

    -Docker is useful for developers because it packages software so it can run on any hardware, ensuring that applications run consistently across different environments and avoiding issues like 'it works on my machine'.

  • What is the purpose of a Dockerfile?

    -A Dockerfile is used as a blueprint to build a Docker image, which is then used to create containers. It contains a set of instructions that define the environment in which the application will run.

  • How does Docker solve the problem of different environments causing application failures?

    -Docker solves this problem by allowing developers to define the environment in a Dockerfile, creating an immutable snapshot known as an image. This image can be used to spawn the same process in multiple places, ensuring consistency.

  • What command should one memorize after installing Docker?

    -The 'docker ps' command should be memorized, as it lists all the running containers on the system.
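
    For reference, the listing command and a common variant:

        docker ps        # running containers, with their IDs and linked images
        docker ps -a     # include stopped containers as well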

  • Why is it important to install the Docker extension for VS Code or another IDE?

    -The Docker extension provides language support when writing Dockerfiles, and it can also link up to remote registries, offering additional functionality and convenience.

  • What is the significance of the 'FROM' instruction in a Dockerfile?

    -The 'FROM' instruction specifies the base image to start from when building the Docker image. It's typically a specific version of an operating system or a language runtime environment.
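
    Two illustrative FROM lines, one bare OS and one language runtime (a build stage uses only one base; tags shown are examples):

        FROM ubuntu      # bare OS; Node.js would have to be installed manually
        FROM node:12     # official Node.js image used in the video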

  • Why should dependencies be installed before the app source code in a Dockerfile?

    -Installing dependencies first allows Docker to cache them, which means that they don't need to be reinstalled every time the app source code changes, making the build process more efficient.
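
    A sketch of the two orderings and their caching behavior:

        # inefficient: any source edit invalidates the install layer below it
        COPY . .
        RUN npm install

        # efficient: the install layer stays cached until package.json changes
        COPY package*.json ./
        RUN npm install
        COPY . .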

  • What is the purpose of the 'docker build' command?

    -The 'docker build' command is used to build a Docker image from a Dockerfile. It processes each instruction in the Dockerfile and creates an image that can be used to run containers.
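
    The tagged build command as used in the video (the version number is optional):

        # -t names the image; '.' is the build context containing the Dockerfile
        docker build -t fireship/demoapp:1.0 .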

  • How can one access a running Docker container's logs and interact with its command line?

    -One can access logs and interact with a Docker container's command line using Docker Desktop's GUI, which provides a dashboard for viewing logs and a CLI button for executing commands, or by using the 'docker exec' command from the command line.
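
    The equivalent command-line forms (the container ID comes from 'docker ps'; which shell is available depends on the base image):

        docker logs <container-id>          # dump the container's output logs
        docker exec -it <container-id> sh   # open an interactive shell inside it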

  • What is Docker Compose and how does it help with running multiple containers?

    -Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to configure your application's services, networks, and volumes in a 'docker-compose.yaml' file and start all the containers together with a single command.
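
    A sketch of the kind of docker-compose.yaml the video describes; the MySQL password and mount path are illustrative placeholders, not values from the video:

        version: '3'
        services:
          web:
            build: .          # build the Node.js app from the local Dockerfile
            ports:
              - "8080:8080"   # host:container port forwarding
          db:
            image: mysql
            environment:
              MYSQL_ROOT_PASSWORD: password   # placeholder; use a secret in practice
            volumes:
              - db-data:/var/lib/mysql        # persist database files in the volume
        volumes:
          db-data:

    With this file in place, 'docker compose up' starts both services together and 'docker compose down' shuts them down.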

  • Why should each container in a Dockerized application ideally run only one process?

    -Running only one process per container helps keep the containers simple and maintainable. It also aligns with the microservices architecture, making it easier to manage and scale individual components of the application.

Outlines

00:00

📦 Understanding Docker Basics for Developers

This paragraph introduces the concept of imposter syndrome among developers due to unfamiliarity with Docker, a tool for containerization. It outlines the importance of Docker in modern development practices and promises a step-by-step guide to mastering Docker by containerizing a Node.js application. The paragraph explains the basic components of Docker, including Dockerfiles, images, and containers, and emphasizes Docker's role in creating reproducible software environments. It also touches on advanced Docker concepts such as port forwarding, volumes, and Docker Compose for managing multiple containers.

05:01

🛠 Docker Installation and Dockerfile Essentials

The second paragraph delves into the installation process of Docker, recommending Docker Desktop for Mac and Windows users. It covers the first Docker command to memorize ('docker ps') and the importance of the Docker extension for IDEs like VS Code. The main focus then shifts to the Dockerfile, which is the core of building a Docker image. The paragraph walks through creating a Dockerfile, starting with choosing a base image, adding app source code, and installing dependencies. It also discusses best practices for Dockerfile instructions, such as caching layers for efficiency and using a '.dockerignore' file to avoid copying unnecessary files into the image. The paragraph concludes with how to set environment variables, expose ports, and the significance of the 'CMD' instruction in the Dockerfile for running the application.

10:02

🔧 Building and Running Docker Images

This paragraph explains the process of building a Docker image using the 'docker build' command, including how to tag images for easy identification and retrieval. It describes the build process starting from pulling the base image to executing each step in the Dockerfile. The paragraph also covers how to run a Docker container locally using the 'docker run' command and the importance of port mapping for external access to the container's services. Additionally, it discusses the persistence of container data using volumes and how to manage and debug containers using Docker Desktop or command-line tools like 'docker exec'. The paragraph wraps up with a brief introduction to Docker Compose for orchestrating multiple containers, which is particularly useful for complex applications requiring services like databases.

🔄 Orchestrating Containers with Docker Compose

The final paragraph introduces Docker Compose as a tool for defining and running multi-container Docker applications. It provides an overview of creating a 'docker-compose.yaml' file to configure services, such as a Node.js app and a MySQL database, along with volume definitions for data persistence. The paragraph explains how to use YAML syntax to simplify the configuration of multiple containers and services. It concludes with the execution of 'docker compose up' to start all services together and 'docker compose down' to shut them down, emphasizing the ease of managing containerized applications with Docker Compose.

Keywords

💡Imposter Syndrome

Imposter syndrome is a psychological pattern where individuals doubt their skills and fear being exposed as a 'fraud'. In the script, it's mentioned as a leading cause among developers, particularly when they feel they lack knowledge in areas like Docker, which can make them feel out of place in technical discussions.

💡Docker

Docker is a platform that allows developers to package applications into containers that can run consistently across different environments. The video aims to teach viewers how to use Docker, emphasizing its importance for developers to ensure their applications run seamlessly on any hardware.

💡Container

A container in the context of Docker is a lightweight, standalone, and executable package of software that includes everything needed to run an application. The script explains that containers are running processes that can be created from an image, which helps in solving environment-related issues like 'it works on my machine' problems.

💡Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. The script details the importance of Dockerfiles in defining the environment for an application, allowing for the creation of a Docker image that can be shared and reused.

💡Docker Image

A Docker image is a read-only template with instructions for creating a Docker container. The script uses the term to describe the output of a Dockerfile, which can be used to create containers and is stored as an immutable snapshot.

💡Port Forwarding

Port forwarding is a technique that forwards network requests made to a port on one machine to a port on another. In the script, port forwarding is explained as the way to make a port inside a Docker container accessible from the local machine, using the '-p' flag of 'docker run'.
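
The video's run command with port forwarding, mapping host port 5000 (left) to container port 8080 (right):

    docker run -p 5000:8080 fireship/demoapp:1.0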

💡Volumes

In Docker, a volume is a way to persist data generated by and used by Docker containers. The script mentions volumes as a means to share data across multiple containers and to persist data even after the containers are stopped.
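
A sketch of the volume workflow shown in the video; the volume name and mount target are illustrative:

    docker volume create shared-stuff
    docker run --mount source=shared-stuff,target=/stuff fireship/demoapp:1.0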

💡Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. The script introduces Docker Compose as a solution for managing multiple containers, such as a Node.js application and a MySQL database, by using a YAML file to configure the services.

💡Node.js

Node.js is a JavaScript runtime that allows developers to write server-side code in JavaScript. The script uses Node.js as an example application that is containerized using Docker, demonstrating the process of creating a Dockerfile and building a Docker image for a Node.js application.

💡Environment Variables

Environment variables are a set of dynamic values that can affect the way running processes will behave on a computer. In the script, environment variables are used within the Docker container to configure the application, such as setting the port on which the Node.js application listens.
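
The Dockerfile side of that configuration; the Node.js app is assumed to read process.env.PORT at startup:

    ENV PORT=8080    # set inside the container; read by the app at startup
    EXPOSE 8080      # document the port the container listens on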

💡Microservices

Microservices is an architectural style that structures an application as a collection of loosely coupled services. The script suggests that each container should run a single process, advocating for microservices architecture to keep containers simple and maintainable.

Highlights

Imposter syndrome among developers can be caused by a lack of Docker knowledge.

Docker helps package software to run on any hardware, solving 'it works on my machine' problems.

Understanding Docker involves knowing about Dockerfiles, images, and containers.

A Dockerfile is a blueprint for building a Docker image.

Docker images are templates for running Docker containers.

Containers are running processes that can be created from an image.

Docker can reproduce environments defined by developers to ensure consistency.

Images can be uploaded to public and private registries for easy access.

Docker Desktop application is recommended for Mac and Windows users.

Docker extension for VS Code provides language support and remote registry linking.

The Dockerfile contains instructions to build a Docker image and run an application.

Base images like 'node:12' are used to provide a starting environment for Dockerfiles.

Dependencies should be installed first in a Dockerfile to leverage Docker's layer caching.

A .dockerignore file excludes unnecessary files like 'node_modules' from being copied.
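
A minimal .dockerignore for the video's project is a single line:

    node_modules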

Environment variables and port exposure are set in the Dockerfile for container configuration.

The exec form of the 'CMD' instruction is preferred for running the application process.
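
The two CMD forms side by side (the start command itself is an assumption):

    CMD ["npm", "start"]   # exec form: runs the process directly (preferred)
    CMD npm start          # shell form: wraps the command in a shell session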

Docker images are built using the 'docker build' command with a tag for easy access.

Port forwarding is implemented using the '-p' flag to make containers accessible locally.

Volumes are used in Docker to share data across multiple containers and persist data.

Docker Compose is a tool for running multiple Docker containers simultaneously.

Docker Compose uses a 'docker-compose.yaml' file to define and run services and volumes.

Once images are built, tools like Kubernetes and Swarm can scale containers to large workloads.

Debugging Docker containers can be done through logs, CLI access, and Docker Desktop GUI.

Docker encourages the development of simple, maintainable microservices.

Transcripts

[00:00] One of the leading causes of imposter syndrome among developers is not knowing Docker. It makes it hard to go to parties where everybody's talking about Kubernetes, swarms, and shuffle sharding while you hide in the corner googling "what is a container?" We've all been there at one point or another. In today's video you'll learn everything you need to know about Docker to survive as a developer in 2020. We'll take a hands-on approach by containerizing a Node.js application. I'll assume you've never touched a Docker container before, so we'll go through installation and tooling, as well as the most important instructions in a Dockerfile. In addition, we'll look at very important advanced concepts like port forwarding, volumes, and how to manage multiple containers with Docker Compose. We'll do everything step by step, so feel free to skip ahead with the chapters in the video description.

[00:40] What is Docker? From a practical standpoint, it's just a way to package software so it can run on any hardware. Now, in order to understand how that process works, there are three things that you absolutely must know: Dockerfiles, images, and containers. A Dockerfile is a blueprint for building a Docker image. A Docker image is a template for running Docker containers. A container is just a running process. In our case, we have a Node application; we need to have a server that's running the same version of Node and that has also installed these dependencies. It works on my machine, but if someone else with a different machine tries to run it with a different version of Node, it might break. The whole point of Docker is to solve problems like this by reproducing environments. The developer who creates the software can define the environment with a Dockerfile. Then any developer at that point can use the Dockerfile to rebuild the environment, which is saved as an immutable snapshot known as an image. Images can be uploaded to the cloud in both public and private registries. Then any developer or server that wants to run that software can pull the image down to create a container, which is just a running process of that image. In other words, one image file can be used to spawn the same process multiple times in multiple places, and it's at that point where tools like Kubernetes and Swarm come into play to scale containers to an infinite workload.

[01:53] The best way to really learn Docker is to use it, and to use it we need to install it. If you're on Mac or Windows, I would highly recommend installing the Docker Desktop application. It installs everything you need for the command line and also gives you a GUI where you can inspect your running containers. Once installed, you should have access to Docker from the command line, and here's the first command you should memorize: docker ps, which gives you a list of all the running containers on your system. You'll notice how every container has a unique ID and is also linked to an image, and keep in mind you can find the same information from the GUI as well. Now, the other thing you'll want to install is the Docker extension for VS Code or for your IDE. This will give you language support when you write your Dockerfiles and can also link up to remote registries and a bunch of other stuff.

[02:32] Now that we have Docker installed, we can move on to what is probably the most important section of this video, and that's the Dockerfile, which contains code to build your Docker image and ultimately run your app as a container. To follow along at this point, you can grab my source code from GitHub or fireship.io, or better yet, use your own application as a starting point. In this case, I just have a single index.js file that exposes an API endpoint that sends back a response: "docker is easy". Then we expose our app using the PORT environment variable, and that'll come into play later. The question we're faced with now is: how do we dockerize this app? We'll start by creating a Dockerfile in the root of the project. The first instruction in our Dockerfile is FROM, and if you hover over it, it will give you some documentation about what it does. You could start from scratch with nothing but the Docker runtime; however, most Dockerfiles will start with a specific base image. For example, when I type ubuntu, you'll notice it's underlined, and when I control-click it, it will take me to all the base images for this flavor of Linux. You'll notice it supports a variety of different tags, which are just different variations on this base image. Ubuntu doesn't have Node.js installed by default. We could still use this image and install Node.js manually; however, there is a better option, and that's to use the officially supported Node.js image. We'll go ahead and use the Node version 12 base image, which will give us everything we need to start working with Node in this environment.

[03:48] The next thing we'll want to do is add our app source code to the image. The WORKDIR instruction is kind of like when you cd into a directory: any subsequent instructions in our Dockerfile will start from this app directory. Now at this point there is something very important that you need to understand: every instruction in this Dockerfile is considered its own step or layer. In order to keep things efficient, Docker will attempt to cache layers if nothing has actually changed. Normally when you're working on a Node project, you get your source code and then you install your dependencies, but in Docker we actually want to install our dependencies first so they can be cached. In other words, we don't want to have to reinstall all of our node modules every time we change our app source code. We use the COPY instruction, which takes two arguments: the first argument is our local package.json location, and the second argument is the place we want to copy it in the container, which is the current working directory. Now that we have a package.json, we can run the npm install command. This is just like opening a terminal session and running a command, and when it's finished, the results will be committed to the Docker image as a layer. Now that we have our modules in the image, we can then copy over our source code, which we'll do by copying all of our local files to the current working directory. But this actually creates a problem for us, because you'll notice that we have a node_modules folder here in our local file system that would also be copied over to the image and override the node modules that we installed there. What we need is some way for Docker to ignore our local node modules. We can do that by creating a .dockerignore file and adding node_modules to it. It works just like a .gitignore file, which you've probably seen before.

[05:19] Okay, so at this point we have our source code in the Docker image, but in order to run our code, we're using an environment variable. We can set that environment variable in the container using the ENV instruction. When we actually have a running container, we also want it to be listening on port 8080 so we can access the Node.js Express app publicly, and we'll look at ports in some more detail in just a minute when we run the container. That brings us to our final instruction: CMD. There can only be one of these per Dockerfile, and it tells the container how to run the actual application, which it does by starting a process to serve the Express app. You'll also notice that, unlike RUN, we've made this command an array of strings. This is known as exec form, and it's the preferred way to do things: unlike a regular command, it doesn't start up a shell session. And that's basically all there is to it; we now have a full set of instructions for building a Docker image.

[06:04] That brings us to the next question: how do we build a Docker image? You build a Docker image by running the docker build command. There are a lot of different options you can pass with the command, but the one you want to know for right now is tag, or -t. This will give your image a name tag that's easy to remember, so you can access it later. When defining the tag name, I'd first recommend setting up a username on Docker Hub, and then use that username followed by whatever you want to call this image. So in my case it would be fireship/demoapp, and you could also add a version number separated by a colon. From there, you simply add the path to your Dockerfile, which in our case is just a period for the current working directory. When we run it, you'll notice it starts with step one, which is to pull the Node 12 image remotely; then it goes through each step in our Dockerfile, and finally it says "successfully built" with the image ID. Now that we have this image, we can use it as a base image to create other images, or we can use it to run containers. In real life, to use this image you'll most likely push it to a container registry somewhere; that might be Docker Hub or your favorite cloud provider, and the command you would use to do that is docker push. Then a developer or server somewhere else in the world could use docker pull to pull that image back down. But we just want to run it here locally on our system, so let's do that with the docker run command. We can supply it with the image ID or the tag name, and all that does is create a running process called a container. We can see in the terminal it should say "app listening on localhost:8080", but if we open the browser and go to that address, we don't see anything. So why can't I access my container locally? Remember, we exposed port 8080 in our Dockerfile, but by default it's not accessible to the outside world. Let's refactor our command to use the -p flag to implement port forwarding from the Docker container to our local machine. On the left side, we'll map a port on our local machine, 5000 in this case, to a port on the Docker container, 8080, on the right side. And now if we open the browser and go to localhost:5000, we'll see the app running there.

[07:53] One thing to keep in mind at this point is that the Docker container will still be running even after you close the terminal window. Let's go ahead and open up the dashboard and stop the container; you should actually have two running containers here if you've been following along. When you stop the container, any state or data that you created inside of it will be lost. But there can be situations where you want to share data across multiple containers, and the preferred way to do that is with volumes. A volume is just a dedicated folder on the host machine, and inside this folder, a container can create files that can be remounted into future containers, or into multiple containers at the same time. To create a volume, we use the docker volume create command. Now that we have this volume, we can mount it somewhere in our container when we run it. Multiple containers can mount this volume simultaneously and access the same set of files, and the files stick around after all the containers are shut down.

[08:43] Now that you know how to run a container, let's talk a little bit about debugging when things don't go as planned. You might be wondering: how do I inspect the logs, and how do I get into my container and start interacting with the command line? Well, this is where Docker Desktop really comes in handy. If you click on the running container, you can see all the logs right there, and you can even search through them. You can also execute commands in your container by clicking on the CLI button, and keep in mind you can also do this from your own command line using the docker exec command. In any case, it puts us in the root of the file system of that container, so we can then ls to see files or do whatever we want in our Linux environment.

[09:16] That's useful to know, but one of the best things you can do to keep your containers healthy is to write simple, maintainable microservices. Each container should only run one process, and if your app needs multiple processes, then you should use multiple containers. Docker has a tool designed just for that, called Docker Compose. It's just a tool for running multiple Docker containers at the same time. We already have a Dockerfile for our Node app, but let's imagine that our Node app also needs to access a MySQL database, and we also likely want a volume to persist the database across multiple containers. We can manage all that with Docker Compose by creating a docker-compose.yaml file in the root of our project. Inside that file we have a services object, where each key in that object represents a different container that we want to run. We'll use web to define our Node.js app that we've already built, and then we'll use build to point it to the current working directory, which is where it can find the Dockerfile. Then we'll also define the port forwarding configuration here as well. Then we have a separate container called db, which is our MySQL database process. After services, we'll also define a volume to store the database data across multiple containers, and then we can mount that volume in our db container. Hopefully you're starting to see how much easier it is to define this stuff as YAML, as opposed to writing it out as individual commands. Now that we have this configuration set, we can run docker compose up from the command line, which will find this file and run all the containers together. We can mess around with our app for a little while and then run docker compose down to shut down all the containers together.

[10:46] I'm going to go ahead and wrap things up there. If this video helped you, please like and subscribe, and consider becoming a pro member at fireship.io, where we use Docker in a variety of different project-based courses. Thanks for watching, and I will see you in the next one.


Related Tags
Docker, Node.js, Containerization, DevOps, Microservices, Dockerfile, Docker Compose, Kubernetes, Imposter Syndrome, Software Packaging