Introduction to PyTorch
Summary
TL;DR: In this video, Brad Heinz, a partner engineer with Facebook's PyTorch team, introduces PyTorch, an open-source machine learning framework that streamlines the path from research to production. Key features include tensors, autograd for dynamic computation, and tools for model building, data loading, and deployment. The tutorial covers installing PyTorch, tensor operations, building neural networks, training loops, and deploying models with TorchScript. It also highlights PyTorch's community, its use in enterprises, and associated projects like AllenNLP and ClassyVision.
Takeaways
- 😀 Brad Heinz, a partner engineer at Facebook, introduces PyTorch, an open-source machine learning framework.
- 💻 PyTorch is designed for rapid prototyping and deployment, supporting the full ML application lifecycle from research to production.
- 📊 Tensors are the core data structure in PyTorch, functioning as multi-dimensional arrays with built-in operations for computation.
- 🔄 Autograd is PyTorch's automatic differentiation engine that facilitates the computation of derivatives, crucial for model training.
- 🏗️ PyTorch allows building models using modules, which can be composed into complex architectures like convolutional neural networks.
- 🔁 The training loop in PyTorch involves feeding data through the model, computing loss, performing backpropagation, and updating model weights.
- 📚 Torchvision and Torchaudio are associated libraries that provide datasets, pre-trained models, and other utilities for computer vision and audio applications.
- 🚀 PyTorch supports hardware acceleration, particularly on NVIDIA GPUs, to enhance model training and inference performance.
- 🔗 TorchScript is a way to serialize and optimize PyTorch models for deployment, enabling high-performance model serving.
- 🌐 The PyTorch community is extensive, with contributions from around the world, supporting a diverse range of ML projects and applications.
Q & A
What is the primary role of Brad Heinz in the video?
-Brad Heinz is a Partner Engineer working with the PyTorch team at Facebook.
What does the video aim to provide for viewers who are new to machine learning with PyTorch?
-The video provides an introduction to PyTorch, covering its features, key concepts, and associated tools and libraries.
Why is it important to match the CUDA toolkit version with the CUDA drivers on a machine?
-Matching the CUDA toolkit version with the CUDA drivers ensures compatibility for GPU acceleration with PyTorch on Linux or Windows machines with NVIDIA CUDA compatible GPUs.
What is PyTorch and what does it accelerate according to the video?
-PyTorch is an open-source machine learning framework that accelerates the path from research prototyping to production deployment.
What are some of the associated libraries for PyTorch mentioned in the video?
-Some associated libraries for PyTorch include Torchvision for computer vision applications, as well as libraries for text, natural language, and audio applications.
How does PyTorch enable fast iteration on ML models and applications?
-PyTorch enables fast iteration by allowing work in regular idiomatic Python without a new domain-specific language, and by using Autograd, PyTorch's automatic differentiation engine.
What is the significance of the backward method in PyTorch's Autograd?
-The backward method in Autograd is significant as it rapidly calculates the gradients needed for learning by utilizing the metadata of each tensor, which tracks the computation history.
How is a neural network model typically structured in PyTorch code?
-A neural network model in PyTorch typically inherits from torch.nn.Module, has an __init__ method for constructing layers, and a forward method for the actual computation.
What is the purpose of the data loader in PyTorch?
-The data loader in PyTorch is used to efficiently feed data to the model in batches, with options to shuffle the data and to parallelize data loading.
How does the training loop in PyTorch work as described in the video?
-The training loop in PyTorch involves iterating over batches of training data, computing predictions and loss, performing a backward pass to calculate gradients, and updating model weights using an optimizer.
What is TorchScript and how does it relate to model deployment in PyTorch?
-TorchScript is a static, high-performance subset of Python that allows for the serialization and optimization of PyTorch models for deployment, enabling them to run without the Python interpreter for better performance.
Outlines
🤖 Introduction to PyTorch
Brad Heinz, a partner engineer with the PyTorch team at Facebook, introduces PyTorch as an open-source machine learning framework designed to streamline the process from research prototyping to production deployment. The video targets newcomers to machine learning with PyTorch, covering its features, key concepts, and associated tools and libraries. It emphasizes the importance of installing PyTorch and Torchvision for following along with the demonstrations and exercises. PyTorch is highlighted for its hardware acceleration on NVIDIA GPUs, its rich ecosystem of community projects, and its support for a variety of applications including computer vision, text, and natural language processing. The video also touches on the importance of CUDA compatibility for GPU acceleration on non-Mac platforms.
🔢 Deep Dive into PyTorch Tensors and Autograd
The script delves into the fundamentals of PyTorch tensors, which are the core data abstraction and central to all operations in PyTorch. It explains how tensors are multi-dimensional arrays with additional functionalities and come with over 300 mathematical and logical operations. The video demonstrates tensor creation, data type specification, and random tensor generation with seeding for reproducibility. It also covers basic arithmetic operations between tensors and the importance of tensor shape compatibility for element-wise operations. The Autograd feature of PyTorch is introduced as the automatic differentiation engine that facilitates rapid model iteration by computing derivatives with a single function call, crucial for model training.
🏗 Building Neural Networks with PyTorch
This section of the script guides viewers through constructing a neural network using PyTorch, specifically the LeNet-5 model, which is a convolutional neural network designed for image classification. The video outlines the process of importing necessary PyTorch modules, defining the network architecture, and utilizing layers like convolutional and linear layers. It emphasizes the model structure, including the initialization method where layers are constructed, and the forward method where the computation happens. The script also demonstrates how to instantiate the model, pass input through it, and examine the output, highlighting the batch dimension in tensor shapes which is essential for processing multiple data points simultaneously.
📚 Efficient Data Handling with PyTorch
The script discusses the efficient handling of data in PyTorch, necessary for training machine learning models. It introduces the use of the CIFAR-10 dataset and the transformation of images into tensors with normalization to enhance the learning process. The video explains the creation of a dataset instance and the application of transformations to prepare data for model training. It also covers the use of data loaders to organize data into batches, shuffle the order for stochastic training, and parallelize data loading with multiple workers. The script emphasizes the importance of visualizing data batches to ensure correctness before training.
🏋️♂️ Training Neural Networks and Preventing Overfitting
The script describes the process of training a neural network in PyTorch, focusing on the setup of training and test datasets, the choice of a loss function, and the use of an optimizer for updating model weights. It outlines the training loop, which includes gradient zeroing, forward pass for predictions, loss computation, backward pass for gradient calculation, and optimizer updates. The video also addresses the issue of overfitting, suggesting the use of a separate test dataset to evaluate the model's generalization capabilities. The script concludes with a brief mention of deploying trained models using TorchScript, which allows for converting dynamic PyTorch models into a high-performance, serializable format suitable for production environments.
Keywords
💡PyTorch
💡Tensor
💡Autograd
💡TorchScript
💡Neural Network Layers
💡Dataset
💡Optimizer
💡Loss Function
💡Training Loop
💡Model Deployment
Highlights
Introduction to PyTorch, its features, key concepts, and associated tools and libraries.
Assumption that the viewer is new to machine learning with PyTorch.
Overview of PyTorch and related projects, including core data abstraction and autograd.
Instructions on installing PyTorch and Torchvision for demos and exercises.
Details on CUDA compatibility and GPU acceleration for PyTorch on different operating systems.
Definition of PyTorch as an open-source machine learning framework.
Description of PyTorch's full toolkit for building and deploying ML applications.
Explanation of hardware acceleration and associated libraries for computer vision, text, and audio applications.
Discussion on PyTorch's ability to enable fast iteration on ML models and applications.
Introduction to PyTorch's automatic differentiation engine, Autograd.
Overview of building a model with PyTorch modules.
Demonstration of a basic training loop for a model in PyTorch.
Introduction to deployment with TorchScript.
Explanation of tensors as the core data structure in PyTorch.
Tutorial on creating and manipulating tensors in PyTorch.
Discussion on the mathematical operations available on PyTorch tensors.
Introduction to autograd and its role in the training process.
Explanation of how to build and run a simple neural network model in PyTorch.
Discussion on data loading and preprocessing for model training.
Overview of the training loop and its components in PyTorch.
Introduction to testing a model for general learning and avoiding overfitting.
Explanation of TorchScript for model deployment and its advantages.
Conclusion and invitation to further explore PyTorch through additional resources.
Transcripts
hello my name is brad heinz i'm a
partner engineer working with the
pytorch team at facebook
in this video i'll be giving you an
introduction to pytorch
its features key concepts and associated
tools and libraries
this overview assumes that you are new
to doing machine learning with pytorch
in this video we're going to cover an
overview of pytorch and related
projects
tensors which are the core data
abstraction of pytorch
autograd which drives the eager mode
computation that makes rapid iteration
in your model possible
we'll talk about building a model with
pytorch modules
we'll talk about how to load your data
efficiently to train your model
we'll demonstrate a basic training loop
and finally we'll talk about deployment
with torch script
before we get started you'll want to
install pytorch and torchvision so you
can follow along with the demos and
exercises
if you haven't installed the latest
version of pytorch yet visit pytorch.org
the front page has an install wizard
shown here
there are two important things to note
here first cuda drivers are not
available for the mac
therefore gpu acceleration is not going
to be available via pytorch on the mac
second if you're working on a linux or
windows machine with one or more nvidia
cuda compatible gpus attached
make sure the version of cuda toolkit
you install matches the cuda drivers on
your machine
so what is pytorch pytorch.org
tells us that pytorch is an open source
machine learning framework that
accelerates the path from research
prototyping to production deployment
let's unpack that first
pytorch is software for machine learning
it contains a full toolkit for building
and deploying ml applications
including deep learning primitives such
as neural network layer types
activation functions and gradient based
optimizers it has hardware acceleration
on nvidia gpus
and it has associated libraries for
computer vision text and natural
language and audio applications
torchvision the pytorch library for
computer vision applications
also includes pre-trained models and
packaged datasets that you can use to
train your own models
pytorch is built to enable fast
iteration on your ml models and
applications
you can work in regular idiomatic python
there's no new domain specific language
to learn to build your computation graph
with autograd pytorch's automatic
differentiation engine
the backward pass over your model is
done with a single function call
and done correctly no matter which path
through the code a computation took
offering you unparalleled flexibility in
model design
pytorch has the tooling to work at
enterprise scale with tools like
torchscript which is a way to create
serializable and optimizable models from
your pytorch code
torchserve pytorch's model serving
solution and multiple options for
quantizing your model for performance
and finally pytorch is free and open
source software
free to use and open to contributions
from the community
its open source nature fosters a rich
ecosystem of community projects as well
supporting use cases from stochastic
processes to graph based neural networks
the pytorch community is large and
growing with over 1200 contributors to
the project from around the world
and over 50 percent year-on-year growth
in research paper citations
pytorch is in use at top tier companies
like these and provides the foundations
for projects like allennlp
the open source research library for
deep learning with natural language
fastai which simplifies training fast and
accurate neural nets using best modern
practices
classyvision an end-to-end framework for
image and video classification
and captum an open source extensible
library that helps you understand and
interpret your model's behavior
now that you've been introduced to
pytorch let's look under the hood
tensors will be at the center of
everything you do in pytorch
your model's inputs outputs and learning
weights are all in the form of tensors
now if tensor is not a part of your
normal mathematical vocabulary
just know that in this context we're
talking about a multi-dimensional array
with a lot of extra bells and whistles
pytorch tensors come bundled with over
300 mathematical and logical operations
that can be performed on them
though you access tensors through a
python api the computation actually
happens in compiled c++ code
optimized for cpu and gpu
let's look at some typical tensor
manipulations in pytorch
the first thing we'll need to do is
import pytorch with the import torch
call
then we'll go ahead and create our first
tensor here i'm going to
create a two-dimensional tensor with
five rows and three columns and fill it
with zeros
i'm going to query it for the data type
of those zeros
and here you can see i got my requested
matrix of 15 zeros
and the data type is 32-bit floating
point by default
pytorch creates all tensors as 32-bit
floating point
what if you wanted integers instead you
can always override the default
here in the next cell i create a
tensor full of ones i request that they
be 16-bit integers
and note that when i print it without
being asked pytorch tells me that these
are 16-bit integers because it's not the
default that might not be what i expect
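A minimal sketch of the cells being described (variable names are my own; the default-dtype behavior is standard PyTorch):

```python
import torch

z = torch.zeros(5, 3)        # 5 rows, 3 columns, filled with zeros
print(z.dtype)               # torch.float32 -- the default type

i = torch.ones((5, 3), dtype=torch.int16)  # override the default
print(i)                     # dtype is shown because it's not the default
```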
it's common to initialize learning
weights randomly often with a specific
seed for the random number generators
that you can reproduce your results on
subsequent runs
here we demonstrate seeding the pytorch
random number generator with a specific
number
generating a random tensor generating a
second random tensor which we expect to
be different from the first
re-seeding the random number generator
with the same input
and then finally creating another random
tensor which we expect to match the
first
since it was the first thing created
after seeding the rng
and sure enough those are the results we
get the first tensor and the third tensor do
match and the second one does not
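The seeding demo can be sketched like this (the seed value is arbitrary):

```python
import torch

torch.manual_seed(1729)      # seed the rng for reproducibility
r1 = torch.rand(2, 2)
r2 = torch.rand(2, 2)        # expected to differ from r1

torch.manual_seed(1729)      # re-seed with the same value
r3 = torch.rand(2, 2)        # expected to match r1 exactly,
                             # since it's the first tensor after re-seeding
```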
arithmetic with pytorch tensors is
intuitive tensors of similar
shapes may be added or multiplied etc
and operations between a scalar and a
tensor will distribute
over all the cells of the tensor so
let's look at a couple of examples
first i'm just going to create a
tensor full of ones
then i'm going to create another
tensor full of ones but i'm going to
multiply it by a scalar 2 and what's
going to happen
is all of those ones are going to become
twos the multiplication is distributed
over every
element of the tensor then i'll add the
two tensors i can do this
because they're of the same shape the
operation happens element wise between
the two of them
and we get out now a tensor full of
threes when i query that tensor for its
shape
it's the same shape as the two input
tensors
from the addition operation finally i
create two random tensors of different
shapes
and attempt to add them i get a runtime
error because there's no
clean way to do element-wise arithmetic
operations between two tensors of
different shapes
here's a small sample of the
mathematical operations available on
pytorch tensors
i'm going to create a random tensor and
adjust it so that its values are between
-1 and 1
i can take the absolute value of it and
see all the values turn positive
i can take the inverse sine of it
because the values
are between -1 and 1 and get an angle
back
i can do linear algebra operations like
taking the determinant or doing singular
value decomposition
and there are statistical and aggregate
operations as well
means and standard deviations and
minimums and maximums etc
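These operations can be sketched as follows (a 2x2 tensor for brevity; `torch.linalg.svd` is the current spelling of the SVD call):

```python
import torch

r = torch.rand(2, 2) * 2 - 1    # values now lie in [-1, 1)
print(r.abs())                   # absolute value: all entries turn positive
print(r.asin())                  # inverse sine, valid since entries are in [-1, 1]
print(torch.det(r))              # linear algebra: determinant
print(torch.linalg.svd(r))       # singular value decomposition
print(r.mean(), r.std(), r.min(), r.max())  # statistical / aggregate operations
```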
there's a good deal more to know about
the power of pytorch tensors including
how to set them up for parallel
computation on gpu
we'll be going into more depth in
another video
as an introduction to autograd
pytorch's automatic differentiation
engine
let's consider the basic mechanics of a
single training pass
for this example we'll use a simple
recurrent neural network or rnn
we start with four tensors x the input
h the hidden state of the rnn that gives
it its memory
and two sets of learning weights one
each for the input and the hidden state
next we'll multiply the weights by their
respective tensors
mm here stands for matrix multiplication
after that we add the outputs of the two
matrix multiplications
and pass the result through an
activation function here hyperbolic
tangent
and finally we compute the loss for this
output the loss is the difference
between the correct
output and the actual prediction of our
model
so we've taken a training input run it
through a model
gotten an output and determined the loss
this is the point in the training loop
where we have to compute the derivatives
of that loss
with respect to every parameter of the
model and use the gradients over
learning weights to decide how to adjust
those weights
in a way that reduces the loss even for
a small model like this that's a bunch
of parameters and a lot of derivatives
to compute
but here's the good news you can do it
in one line of code
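The single training pass just described can be sketched as follows; the tensor sizes are my own illustrative choices, not from the video:

```python
import torch

x = torch.randn(1, 10)                          # input
h = torch.randn(1, 20)                          # hidden state of the rnn
w_x = torch.randn(20, 10, requires_grad=True)   # learning weights for the input
w_h = torch.randn(20, 20, requires_grad=True)   # learning weights for the hidden state

i2h = torch.mm(w_x, x.t())          # mm = matrix multiplication
h2h = torch.mm(w_h, h.t())
next_h = (i2h + h2h).tanh()         # activation function: hyperbolic tangent

loss = next_h.sum()                 # a stand-in for a real loss term
loss.backward()                     # the one call that computes every gradient
print(w_x.grad.shape)               # gradients now populate each weight tensor
```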
each tensor generated by this
computation knows how it came to be
for example i2h carries metadata
indicating that it came from the matrix
multiplication
of wx and x and so it continues down the
rest of the graph
this history tracking enables the
backward method to rapidly calculate the
gradients your model needs for learning
this history tracking is one of the
things that enables flexibility and
rapid iteration in your models
even in a complex model with decision
branches and loops the computation
history will track the particular path
through the model that a particular
input took and compute the backward
derivatives correctly
in a later video we'll show you how to
do more tricks with autograd
like using the autograd profiler and
taking second derivatives
and how to turn off autograd when you
don't need it
we've talked so far about tensors and
automatic differentiation
and some of the ways they interact with
your pytorch model but what does that
model look like in code
let's build and run a simple one to get
a feel for it
first we're going to import pytorch
we're also going to import torch.nn
which contains the neural network layers
that we're going to compose into our
model
as well as the parent class of the model
itself and we're going to import
torch.nn.functional to give us
activation functions
and max pooling functions that we'll use
to connect the layers
so here we have a diagram of lenet-5
it's one of the earliest convolutional
neural networks and one of the drivers
of the explosion in deep learning
it was built to read small images of
handwritten numbers
the mnist dataset and correctly
classify which digit was represented in
the image
here's the abridged version of how it
works layer c1
is a convolutional layer meaning that it
scans the input image for features that
learn during training
it outputs a map of where it saw each of
its learned features in this
image
this activation map is down sampled in
layer s2
layer c3 is another convolutional layer
this time scanning c1's activation map
for combinations of features
it also puts out an activation map
describing the spatial locations of
these
feature combinations which is
downsampled in layer s4
finally the fully connected layers of
lenet f5
f6 and output are a classifier that
takes the final activation map
and classifies it into one of 10 bins
representing the 10 digits
so how do we express this simple neural
network in code
looking over this code you should be
able to spot some structural
similarities with the diagram above
this demonstrates the structure of a
typical pytorch model
it inherits from torch.nn.module and
modules may be nested
in fact even the conv2d and linear layers
here
are subclasses of torch.nn.module
every model will have an init where it
constructs the layers that it will
compose into its computation graph
and loads any data artifacts it might
need for example an nlp model might load
a vocabulary
a model will have a forward function
this is where the actual computation
happens
an input is passed through the network
layers and various functions to generate
an output a prediction
other than that you can build out your
model class like any other python class
adding whatever properties and methods
you need to support your model's
computation
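The code on screen isn't reproduced in the transcript, but a LeNet-5 definition along these lines (layer sizes follow the standard PyTorch tutorial variant; names are illustrative) shows the `__init__`/`forward` structure being described:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # layers are constructed in __init__
        self.conv1 = nn.Conv2d(1, 6, 5)       # 1 input channel, 6 features, 5x5 kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120) # fully connected classifier layers
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)          # 10 bins for the 10 digits

    def forward(self, x):
        # the actual computation happens here
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # convolve, activate, downsample
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, 16 * 5 * 5)                  # flatten for the classifier
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```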
so let's instantiate this
and run an input through it so there are
a few important things happening here
we're creating an instance of lenet we
are printing the
net object now a subclass of
torch.nn.module
will report the layers it has created
and their shapes and parameters
this can provide a handy overview of a
model if you want to get the gist of its
processing
below that we create a dummy input
representing a 32 by 32
image with one color channel normally
you would load an image tile and convert
it to a tensor of this shape
you may have noticed an extra dimension
to our tensor this is the batch
dimension
pytorch models assume they are working
on batches of data
for example a batch of 16 of our image
tiles would have the shape
16 by 1 by 32 by 32
since we're only using one image we
create a batch of one
with shape one by one by 32 by 32.
we ask the model for an inference by
calling it like a function
net input the output of this call
represents the model's confidence that
the input represents a particular digit
since this instance of the model hasn't
been trained we shouldn't expect to see
any signal in the output
looking at the shape of the output we
can see that it also has a batch
dimension the size of which should
always match the input batch dimension
had we passed in an input batch of 16
instances the output would
have a shape of 16 by 10 you've seen
how a model is built and how to give it
a batch of input and examine the output
the model didn't do much though because
it hasn't been trained yet
for that we'll need to feed it a bunch
of data
in order to train our model we're going
to need a way to feed it data in bulk
this is where the pytorch dataset and
dataloader classes come into play
let's see them in action so here i'm
declaring
matplotlib inline because we'll be
rendering some images in the notebook
i'm importing pytorch i'm also
importing torchvision and torchvision
transforms
these are going to give us our datasets
and some transforms that we need to
apply to the images to make them
digestible by our pytorch model
so the first thing we need to do is
transform our incoming images into a
pytorch tensor here we specify two
transformations for our input
transforms.totensor takes images loaded
by pil the pillow library
and converts them into pytorch tensors
transforms.normalize adjusts the
values of the tensor so that their
average is zero and their standard
deviation is 0.5
most activation functions have their
strongest gradients around the zero
point
so centering our data there can speed
learning
there are many more transforms available
including cropping centering rotation
reflection and most of the other things
you might do to an image
next we're going to create an instance
of the cifar10 dataset
this is a set of 32x32 color image tiles
representing 10 classes of objects
six of animals and four vehicles
when you run the cell above it may
take a minute or two for
the dataset to finish downloading for
you so be aware of that
so this is an example of creating a
dataset in pytorch downloadable datasets
like cifar-10 above are subclasses of
torch.utils.data.dataset
dataset classes in
pytorch include the downloadable
datasets in torchvision
torchtext and torchaudio as well as
utility dataset classes such as
torchvision.datasets.imagefolder
which will read a folder of labeled
images you can also create your own
subclasses of dataset
when we instantiate our data set we need
to tell it a few things
the file system path where we want the
data to go whether or not we're using
this set for training because most data
sets will be split between training and
test subsets
whether we would like to download the
data set if we haven't already
and the transformations that we want to
apply to the images
once you have your data set ready you
can give it to the data loader
now a dataset subclass wraps access to
the data
and is specialized to the type of data
it is serving
the data loader knows nothing about the
data but organizes the input tensors
served by the data set
into batches with the parameters you
specify in the example above we've asked
the data loader to give us batches of
four
images from train set randomizing their
order
with shuffle equals true and we told it
to spin up two workers to load data from
disk
it's good practice to visualize the
batches your data loader serves
running the cell should
show you a strip of four images and you
should see a correct label for each one
and so here are four images which
do in fact look like a cat a deer and
two trucks
we've looked under the hood at tensors
and autograd and we've seen how pytorch
models are constructed and how to
efficiently
feed them data it's time to put all the
pieces together
and see how a model gets trained
so here we are back in our notebook
you'll see the
imports here all of these should look
familiar from
earlier in the video except for
torch.optim which i'll be talking about
soon
the first thing we'll need is training
and test data sets
so if you haven't already run the cell
below and make sure the data set is
downloaded it may take a minute if you
haven't done so already
we'll run our check on the output from
the data loader
and again we should see a strip of four
images
a plane a plane a plane a ship that looks
correct
so our data loader is good this is the
model we'll train
now if this model looks familiar it's
because it's a variant of lenet which
we discussed earlier in this video but
it's adapted to take three color images
the final ingredients we need are a loss
function and an optimizer
the loss function as discussed earlier
in this video is a measure of how far
from our ideal output the model's
prediction was
cross entropy loss is a typical loss
function for classification models like
ours
the optimizer is what drives the
learning here
we've created an optimizer that
implements stochastic gradient descent
one of the more straightforward
optimization algorithms besides
parameters of the algorithm
like the learning rate and momentum we
also pass in net dot parameters
which is a collection of all the
learning weights in the model which is
what the optimizer adjusts
finally all this is assembled into the
training loop
go ahead and run this cell as it'll take
a couple of minutes to execute
so here we're only doing two training
epochs as
you can see from line one that is two
complete passes over the training data
set
each pass has an inner loop that
iterates over the training data
serving batches of transformed images in
their correct labels
zeroing the gradients in line nine is a
very important step
when you run a batch gradients are
accumulated over that batch
and if we don't reset the gradients for
every batch they will keep accumulating
and provide incorrect values and
learning will stop
in line 12 we ask the model for its
actual prediction of the batch
in the following line line 13 we compute
the loss
the difference between the outputs and
the labels in line 14 we do our backward
pass and calculate the gradients that
will direct the learning
in line 15 the optimizer performs one
learning step
it uses the gradients from the backward
call to nudge the learning weights in
the direction it thinks will reduce the
loss
so the remainder of the loop just does
some light reporting on the epoch number
and how many training instances have
been completed and
what the collected loss is over
the training epoch
so note that the loss is monotonically
descending indicating that our model is
continuing to improve its performance on
the training data set
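The loop just described can be sketched end to end; here the CIFAR-10 data and LeNet variant are replaced with a hypothetical stand-in model and random tensors so the loop's shape is the focus:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# hypothetical stand-ins, not the video's model or data
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()          # typical loss for classification
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 3, 32, 32),
                                   torch.randint(0, 10, (64,))),
    batch_size=4, shuffle=True)

for epoch in range(2):                     # two complete passes over the data
    for inputs, labels in loader:
        optimizer.zero_grad()              # gradients accumulate; reset each batch
        outputs = net(inputs)              # forward pass: the model's predictions
        loss = criterion(outputs, labels)  # how far from the correct labels?
        loss.backward()                    # backward pass: compute gradients
        optimizer.step()                   # optimizer nudges the learning weights
```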
as a final step we should check that the
model is actually doing general
learning and not simply memorizing the
data set this is called overfitting
and will often indicate that either your
data set is too small and doesn't have
enough examples
or that your model is too large it's
overspecified
for modeling the data you're feeding
it
so our training is done so the
way we check for overfitting and
guard against it
is to test the model on data it hasn't
trained on
that's why we have a test data set so
here i'm just going to
run the test data through we'll get an
accuracy measure out
55 percent okay so that's not exactly
state-of-the-art but it's much better
than the 10 percent
we'd expect to see from a random output
this demonstrates that some general
learning did happen in the model
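The overfitting check, measuring accuracy on held-out data, can be sketched similarly; again the model and test data below are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
testloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(40, 3, 32, 32),
                                   torch.randint(0, 10, (40,))),
    batch_size=4)

correct, total = 0, 0
with torch.no_grad():                          # no gradients needed at test time
    for images, labels in testloader:
        _, predicted = torch.max(net(images), 1)  # highest-confidence class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'accuracy: {100 * correct / total:.1f}%')  # compare against 10% chance
```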
now when you go to the trouble of
building and training a non-trivial
model it's usually because you want to use
it for something
you need to connect it to a system that
feeds it inputs
and processes the model's predictions if
you're keen on optimizing performance
you may want to do this without a
dependency on the python interpreter
the good news is that pytorch
accommodates you with torchscript
torchscript is a static high
performance subset of python
when you convert a model to torchscript
the dynamic and pythonic nature of your
model is fully preserved
control flow is preserved when converting
to torchscript and you can still
use convenient python data structures
like lists and dictionaries
looking at the code on the right you'll
see a pytorch model defined in python
below that an instance of the model is
created and then we call
torch.jit.script on my_module that one line
of code is all it takes to convert your
python model to torchscript
the serialized version of this gets
saved in the final line
and it contains all the information
about your model's computation graph
and its learning weights
the torchscript rendering of the model
is shown at the right
torchscript is meant to be consumed by
the pytorch just-in-time compiler
or jit the jit seeks runtime
optimizations such as operation
reordering and layer fusion
to maximize your model's performance on
cpu or gpu hardware
so how do you load and execute a
torchscript model you start by loading the
serialized package with torch.jit.load
and then you can call it just like any
other model what's more you can do this
in python
or you can load it into the pytorch c++
runtime to remove the interpreted
language dependency
in subsequent videos we'll go into more
detail about torchscript
best practices for deployment and we'll
cover torchserve pytorch's model
serving solution
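The convert / save / load steps for TorchScript can be sketched as follows (`MyModule` is a hypothetical tiny model, not the one shown in the video):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):                 # a hypothetical tiny model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x).relu()

scripted = torch.jit.script(MyModule())    # one line converts python to torchscript
scripted.save('my_module.pt')              # serializes graph + learning weights

loaded = torch.jit.load('my_module.pt')    # load the serialized package
out = loaded(torch.rand(1, 4))             # call it just like any other model
```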
so that's our lightning fast overview of
pytorch the models and datasets we
used here were quite simple but pytorch
is used in production at large
enterprises for powerful
real-world use cases like translating
between human languages
describing the content of video scenes
or generating realistic human voices
in the videos to follow we'll give you
access to that power we'll go deeper on
all the topics covered here
with more complex use cases like the
ones you'll see in the real world
thank you for your time and attention
and i hope to see you around the pytorch
forums