How does Ray Tracing Work in Video Games and Movies?

Branch Education
17 Aug 2024 · 29:21

Summary

TLDR: This video delves into the intricacies of Ray Tracing, a technology pivotal for creating realistic CGI in movies and TV shows. It explains how path tracing, the industry standard, simulates light interactions to render scenes with lifelike quality. Despite its computational intensity, recent advancements in GPU architecture have made real-time rendering feasible. The video also explores Ray Tracing's application in video games, contrasting methods like path tracing with screen space ray tracing, and highlights the educational resources available for those interested in the field.

Takeaways

  • 🌟 Ray Tracing is a rendering technique used in TV shows and movies to create realistic images by simulating how light interacts with 3D models.
  • 🚀 The industry standard for rendering realistic scenes is path tracing, which requires an immense number of calculations.
  • 💡 Path tracing was considered computationally impossible for decades because of the sheer number of calculations it requires, but advances in GPU hardware and acceleration structures like BVHs have made it feasible.
  • 🎬 Movies like Zootopia, Moana, Finding Dory, and Coco were rendered using path tracing, which required thousands of computers and multiple months to complete.
  • 🧩 Ray Tracing works by sending rays from a virtual camera through a view plane and into a scene, calculating how light bounces off objects to create realistic lighting and shadows.
  • 🖼️ Each pixel in a rendered image is determined by numerous rays, with each ray potentially bouncing multiple times off different surfaces before reaching the camera.
  • 📊 Direct illumination is calculated by sending shadow rays from the point of intersection to each light source to determine the brightness of a pixel.
  • 🔄 Indirect illumination is calculated by bouncing secondary rays off surfaces to simulate light reflecting from one object to another, contributing to global illumination.
  • 💻 The bounding volume hierarchy (BVH) is a technique used to efficiently determine which triangle a ray hits first in a complex 3D scene.
  • 💾 Modern GPUs with specialized Ray Tracing cores have made real-time rendering with path tracing possible, significantly reducing the time required for complex scenes.

Q & A

  • What is Ray Tracing and why is it important for TV shows and movies?

    -Ray Tracing is a computational process that simulates how rays of light bounce off of and illuminate each of the models in a 3D scene, transforming it into a realistic environment. It's important for TV shows and movies because it allows for the creation of highly realistic computer-generated images and special effects.

  • What is the industry standard algorithm for rendering scenes in TV shows and movies?

    -The industry standard algorithm for rendering scenes in TV shows and movies is called path tracing. It requires an unimaginable number of calculations to simulate how light interacts with and bounces off every surface in the scene.

  • How does path tracing work and why does it require so many calculations?

    -Path tracing simulates how light interacts with and bounces off every surface in a scene by sending out rays from a virtual camera and into the scene. It requires many calculations because it needs to determine which objects the rays hit, how those objects are illuminated by light sources, and account for direct and indirect illumination to produce realistic effects.

  • What is a bounding volume hierarchy (BVH) and how does it help with ray tracing?

    -A bounding volume hierarchy (BVH) is a data structure that helps to organize the geometry in a scene by dividing it into a hierarchy of bounding volumes or boxes. This allows for more efficient calculations of ray-triangle intersections by reducing the number of triangles that need to be considered for each ray, thus speeding up the ray tracing process.

  • How has the advancement of GPUs enabled real-time ray tracing for applications like video games?

    -The advancement of GPUs, specifically the inclusion of Ray Tracing or RT cores, has enabled real-time ray tracing by allowing for parallel processing of billions of rays per second. This has made it possible to render complex scenes with realistic lighting and reflections in a fraction of the time that was previously required.

  • What is the difference between primary rays, secondary rays, and shadow rays in ray tracing?

    -Primary rays are the initial rays emitted from the virtual camera into the scene to determine the first object they hit. Secondary rays are generated where a primary ray hits a surface and bounce off in a new direction to simulate indirect illumination. Shadow rays are cast from an intersection point toward each light source to determine whether that point is directly illuminated or in shadow.

  • How does indirect illumination contribute to the realism of a rendered scene?

    -Indirect illumination contributes to the realism of a rendered scene by simulating light bouncing off surfaces and illuminating other objects in the scene. This results in effects like color bleeding from one object to another, and more accurate shading based on the interplay of light and objects in the environment.

  • What are some of the challenges that were historically associated with implementing path tracing for TV shows and movies?

    -Historically, the challenges with implementing path tracing included the immense computational requirements, which made it nearly impossible for anything but supercomputers. Additionally, determining which triangle a ray hits first in a scene with millions of triangles was a significant computational problem.

  • How has the use of ray tracing evolved in video games, and what are some methods employed?

    -Ray tracing in video games has evolved with methods like using low-resolution duplicates of scenes to create light maps for indirect lighting, and screen space ray tracing which uses the game's generated data like depth and normal maps to create reflections and refractions. These techniques are employed in engines like Unreal Engine's Lumen renderer and games like Cyberpunk.

  • What role did Steve Jobs play in the development of computer-generated imagery and ray tracing?

    -Steve Jobs, as the CEO of Pixar from 1986 to 2006, played a significant role in the development of computer-generated imagery and ray tracing. He helped design the computers used to render some of Pixar's first movies, contributing to the advancement of the technology used in CGI.

  • How does the script describe the process of creating a realistic 3D scene, from modeling to rendering?

    -The script describes the process of creating a realistic 3D scene as involving several steps: modeling objects and assigning textures, positioning them in a scene with lights and a camera, and then rendering the scene using path tracing to simulate light interactions. This process transforms a simple collection of 3D models into a realistic environment with accurate lighting and reflections.

Outlines

00:00

🎬 Introduction to Ray Tracing in Visual Effects

This paragraph introduces the concept of Ray Tracing, a technique pivotal for creating realistic computer-generated images (CGI) and special effects in TV shows and movies. It explains how Ray Tracing works by simulating the behavior of light rays as they bounce off objects within a 3D scene, contributing to the rendering process. The industry-standard algorithm for this, path tracing, is highlighted as computationally intensive, with an example given to illustrate the vast number of calculations required, comparing it to the world's population performing calculations continuously. The paragraph sets the stage for a deeper exploration of Ray Tracing, its history, and its application in modern visual effects, including a brief mention of the computational evolution that made it practical for feature films.

05:07

🖼️ The Process of Creating and Rendering a Scene

The paragraph delves into the detailed process of creating a 3D scene for visual effects, starting with the modeling and texturing of objects by artists. It describes how these objects, broken down into triangles, are positioned within a scene, illuminated by lights, and captured by a virtual camera. The rendering process is explained, focusing on path tracing's role in simulating light interactions to produce realistic effects like shadows and reflections. The paragraph also discusses the practical aspects of creating a view plane for a 2D image, the animation of the camera, and the massive number of rays required to render a single image, emphasizing the parallel nature of Ray Tracing computations.

10:09

🌟 Understanding Direct and Indirect Illumination

This section explains the concepts of direct and indirect illumination within the context of Ray Tracing. It describes how primary rays determine the initial color of objects in a scene and how shadow rays help calculate direct illumination by checking for obstructions between objects and light sources. The importance of global illumination, achieved by combining direct and indirect light, is highlighted as crucial for realistic rendering. The paragraph also introduces the process of calculating indirect illumination through secondary rays and shadow rays, which account for light bouncing off surfaces and illuminating other objects, contributing to the scene's overall lighting and color.

15:12

🔄 Advanced Ray Tracing Techniques and Material Interactions

The paragraph explores advanced Ray Tracing techniques, focusing on the calculation of indirect illumination through multiple bounces of secondary rays. It discusses how these rays can transform surfaces into effective light sources, affecting the illumination of other objects in the scene. The concept of path tracing is further expanded upon, explaining how it traces numerous paths of light through a scene to achieve realistic rendering. Additionally, the influence of material properties on the direction of secondary rays is covered, with examples of how different surface roughness levels can drastically alter the appearance of objects, from mirror-like reflections to diffuse surfaces.

20:14

💻 Overcoming Computational Challenges with BVH and GPU Advancements

This section addresses the computational challenges of Ray Tracing, particularly the issue of determining which triangle a ray intersects first in a scene with millions of triangles. It introduces the bounding volume hierarchy (BVH) as a solution, a method that organizes triangles into a hierarchy of boxes to simplify intersection calculations. The paragraph also discusses the evolution of GPU hardware, highlighting the role of RT cores in performing BVH traversal and ray-triangle intersection calculations efficiently. The comparison between the computational power of supercomputers and modern GPUs underscores the significant advancements that have made Ray Tracing feasible for real-time applications.

25:15

🎮 Ray Tracing in Video Games and Educational Resources

The final paragraph shifts focus to the application of Ray Tracing in video games, outlining different methods such as the use of light maps for indirect lighting and screen space ray tracing for reflections. It touches on the limitations of these techniques, especially in dynamic game environments. The paragraph also acknowledges the educational value of understanding Ray Tracing, promoting resources like Brilliant.org for learning related disciplines. It concludes with a call to action for viewers to support the creators' efforts in producing educational content and gives a nod to the Blender Dev Team for their open-source software, which was used to create the scenes discussed in the video.

Keywords

💡Ray Tracing

Ray Tracing is a rendering technique used in computer graphics to simulate how light behaves in a 3D environment. It involves tracing the path of light rays as they bounce off objects in a scene to create realistic lighting, shadows, and reflections. In the video, Ray Tracing is central to the creation of realistic CGI for movies and TV shows, with path tracing being the industry standard algorithm. The script explains how Ray Tracing works by simulating the behavior of light in a scene, which is crucial for creating lifelike visuals.

💡Path Tracing

Path Tracing is a specific type of Ray Tracing algorithm that simulates the behavior of light by tracing complete light paths through a scene: rays are sent out from the camera, bounce off objects, and are connected back to the light sources with shadow rays. It is computationally intensive and produces highly realistic images. The video script uses path tracing as an example to illustrate the complexity of Ray Tracing, noting that it was once considered impossible for real-time rendering due to the vast number of calculations it requires.

💡Rendering

Rendering in the context of computer graphics is the process of generating a 2D image from a 3D model by simulating the behavior of light in a virtual environment. It involves calculating how light interacts with objects in a scene to produce a realistic image. The video script discusses rendering as a critical step in creating TV shows and movies, where Ray Tracing plays a significant role in simulating light to transform 3D models into realistic environments.

💡3D Models

3D Models are digital representations of three-dimensional shapes used in computer graphics. They are the building blocks for creating virtual environments and objects. In the video, 3D artists create models of spaceships, buildings, and other objects, which are then textured, lit, and positioned within a scene to be rendered using Ray Tracing for a realistic outcome.

💡Textures

Textures in computer graphics are digital images that are applied to the surface of 3D models to define their appearance, including color and material properties like roughness, metallic, or glass-like surfaces. The script mentions how 3D artists assign textures to models to give them realistic attributes, which is essential for the path tracing algorithm to calculate how light interacts with these surfaces.

💡Bounding Volume Hierarchy (BVH)

A Bounding Volume Hierarchy (BVH) is a data structure used in computer graphics to organize 3D objects in a scene for efficient processing. It divides the objects into a hierarchy of bounding volumes, which are simple shapes that enclose groups of objects. The video script explains how BVHs are used to solve the computational challenge of determining which triangle a ray hits first in a complex scene, thus optimizing the Ray Tracing process.

💡Direct Illumination

Direct Illumination refers to the light that reaches an object directly from a light source without being reflected or refracted by other surfaces. In the video, calculating direct illumination involves sending rays from the intersection point of a primary ray with a surface towards each light source to determine the brightness and color of that point based on the light's properties.

💡Indirect Illumination

Indirect Illumination is the light that reaches an object after being reflected or refracted by other surfaces in a scene. The script describes how indirect illumination is calculated by sending secondary rays from the point of intersection and then additional shadow rays to determine the illumination from light bouncing off other objects, contributing to the overall realism of the rendered image.

💡Global Illumination

Global Illumination is the combined effect of both direct and indirect illumination in a scene, which produces a more realistic rendering by accounting for light that is both directly emitted by sources and reflected or refracted by other objects. The video script explains global illumination as a key concept in Ray Tracing, where the algorithm calculates the interplay of light throughout the scene to achieve a natural and lifelike appearance.

💡Shadow Rays

Shadow Rays are virtual rays cast in the direction of light sources from a point of intersection to determine whether that point is in shadow or directly illuminated. The video script uses shadow rays to illustrate how the algorithm checks for direct illumination, which is essential for calculating the brightness and color of pixels in the rendered image.

💡GPU

A GPU (Graphics Processing Unit) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. The script highlights the advancements in GPU technology, particularly the inclusion of Ray Tracing cores, which have enabled real-time Ray Tracing and significantly reduced the time required to render complex scenes.

Highlights

Ray Tracing is essential for creating realistic computer-generated images and special effects in TV shows and movies.

Path tracing, the industry standard for rendering, requires an enormous number of calculations.

Rendering a single image with path tracing was considered impossible for decades due to computational demands.

Movies like Zootopia and Moana utilized path tracing, but required massive computational resources.

Path tracing simulates how light interacts with every surface in a scene to produce realistic effects.

A 3D model is broken down into small triangles, which are fundamental to GPU rendering.

Textures define color and material properties, such as roughness or metallic quality, of 3D models.

Direct and indirect illumination are combined to create global illumination, which is crucial for realistic lighting.

Shadow rays determine if a point on a surface is directly illuminated or in shadow.

Secondary rays simulate indirect illumination by bouncing light off other surfaces.

Path tracing involves sending rays from the camera through each pixel to simulate light paths.

The direction of secondary rays is influenced by material properties like roughness or reflectivity.

Bounding volume hierarchies (BVH) optimize the process of determining which triangle a ray hits first.

Modern GPUs with specialized ray tracing cores have made real-time ray tracing feasible.

Ray tracing has been integrated into video games using various techniques like light maps and screen space ray tracing.

The video provides a detailed explanation of how ray tracing works, from primary and secondary rays to shadow rays.

The video also explores the practical applications of ray tracing in creating CGI for TV and movies.

The advancements in GPU technology have revolutionized the rendering process, making it more accessible.

Transcripts

Every new TV show and movie that uses computer-generated images and special effects relies on Ray Tracing. For example, in order to build an interstellar battle, set in a galaxy far, far away, 3D artists model and texture the spaceships, position them around the scene with lights, a background, and a camera, and then render the scene. Rendering is a computational process that simulates how rays of light bounce off of and illuminate each of the models, thus transforming a scene full of simple 3D models into a realistic environment. There are many different ray tracing algorithms used to render scenes, but the current industry standard in TV shows and movies is called path tracing.

This algorithm requires an unimaginable number of calculations. For example, if you had the entire population of the world working together and performing 1 calculation every second, it would take 12 days of nonstop problem solving to turn this scene into this image. Due to these incredible computational requirements, path tracing was considered impossible for anything but supercomputers for decades. In fact, this algorithm for simulating light was first conceptualized in 1986; however, it took 30 years before movies like Zootopia, Moana, Finding Dory and Coco could be rendered using path tracing, and even then, rendering these movies required a server farm of 1000s of computers and multiple months to complete.

So, why does path tracing require quadrillions of calculations? And how does Ray Tracing work? Well, in this video, we'll answer these two questions, and in the process, you'll get a better understanding of how Computer Generated Images or CGI and special effects are created for TV and movies. After that we'll open up this GPU and see how its architecture is specifically designed to execute ray tracing, enabling it to render this scene in only a few minutes. And finally, we'll investigate how video games like Cyberpunk or the Unreal Engine Lumen Renderer use Ray Tracing. So, let's dive right in.

This video is sponsored by Brilliant.org.

Let's first see how Path Tracing works and how this dragon and kingdom are created and turned into a setting for a fantasy show. To make the scene, an artist first spends a few months modeling everything: the islands, the castles, the houses, the trees, and of course, the dragon. Although these models may have some smooth curves or squares and polygons, they're actually all broken down into small triangles. In short, GPUs almost exclusively work with 3D scenes made of triangles, and this scene is built from 3.2 million triangles.

After a model is built, the 3D artist assigns a texture to it which defines both the color and the material attributes, such as whether the surface is rough, smooth, metallic, glass, water-like, or composed of a wide range of other materials. Next, the completed models are properly positioned around the scene and the artist adds lights such as the sky and the sun and adjusts their intensity and direction to simulate the time of day. Finally, a virtual camera is added and the scene is rendered and brought to life.

As mentioned earlier, path tracing simulates how light interacts with and bounces off every surface in the scene, thereby producing realistic effects such as smooth shadows across the buildings or the way light interacts with the water and produces bright highlights in some areas and water-covered sand in others.

In the real world, light rays start at the sun, and when they hit a surface such as this red roof, some light is absorbed while the red light is reflected, thus tinting the light based on the color of the object. These now tinted light rays bounce off the surface and make their way to the camera and produce a 2D image. With this scene, a near infinite number of light rays are produced by the sun and sky and only a small fraction of them actually reach the camera. Calculating an infinite number of light rays is impossible and only the light rays that reach the camera are useful, and therefore with path tracing we don't send rays out from the sky or light source, but rather we send out rays from a virtual camera and into the scene. We then determine which objects the rays hit and calculate how those objects are illuminated by the light sources.

With computer-generated images or CGI, the 2D image is represented by a view plane in front of the virtual camera. This view plane has the same pixel count as the final image, so a 4K image has 8.3 million pixels. Furthermore, by animating the camera around or changing its field of view, the view plane will correspondingly change.

Let's transition to an indoor scene such as this barbershop, which contains 8 million triangles and is actually more complicated than the island kingdom. In order to create this image on the view plane, a total of 8.3 billion rays, which are a thousand rays per pixel, are sent out from the virtual camera through the view plane and into the scene. Ray Tracing is a massively parallel operation because each pixel is independent from all other pixels. This means that the thousand rays from one pixel can be calculated at the same time as the rays from the next pixel over and so on. Once a single pixel's rays finish flying around the scene, the results are combined with the other rays and pixels to form a single image. If we were to show billions of rays, the scene would quickly become inundated with lines, so let's simplify it down to a single ray running through one pixel of the viewing plane.

This ray starts at the camera, travels through a random point in the pixel and into the scene. It flies straight and eventually hits a triangle, and once it does, that object's color becomes associated with that ray and pixel. For example, when the ray hits this chair, then the pixel becomes red. The other nearby rays running through random places in the same pixel will hit pretty close to this ray and have their colors averaged together. These rays are called primary rays and they answer the question of what triangle and object do the rays first hit and what basic color should be in that specific pixel. Another example is that these rays running through this pixel hit the blue stripe on the barbershop pole, turning the pixel blue. The other billions of rays do the same thing, resulting in a single image with the proper 3D perspective from the virtual camera. This image is fairly flat colored because each pixel just has the simple color of the object the rays hit.
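
To make the primary-ray step concrete, here is a minimal Python sketch of the same idea. It is not the renderer used in the video: the scene is a pair of hard-coded spheres standing in for the millions of triangles, and the camera model is simplified, but it shows several jittered rays being fired through one pixel of a view plane, the closest hit being found for each ray, and the hit colors being averaged into that pixel.

```python
import random

# A tiny stand-in scene: spheres instead of the millions of triangles a real
# scene would use, just to illustrate primary rays and per-pixel averaging.
SPHERES = [
    # (center, radius, RGB color in 0..1)
    ((0.0, 0.0, -5.0), 1.0, (0.8, 0.1, 0.1)),   # a red "chair"
    ((2.0, 0.0, -7.0), 1.0, (0.1, 0.1, 0.8)),   # a blue "stripe"
]

def ray_sphere_t(origin, direction, center, radius):
    """Return the distance t to the first intersection, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    d = direction
    b = 2.0 * (d[0] * ox + d[1] * oy + d[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - disc ** 0.5) / 2.0
    return t if t > 0.0 else None

def primary_ray_color(px, py, width, height, samples=16):
    """Average the first-hit colors of `samples` jittered rays through pixel (px, py)."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Random point inside the pixel, mapped onto a view plane at z = -1.
        u = (px + random.random()) / width * 2.0 - 1.0
        v = 1.0 - (py + random.random()) / height * 2.0
        length = (u * u + v * v + 1.0) ** 0.5
        direction = (u / length, v / length, -1.0 / length)
        # Find the closest object this primary ray hits; grey-ish background otherwise.
        hit_color, closest_t = (0.05, 0.05, 0.05), float("inf")
        for center, radius, color in SPHERES:
            t = ray_sphere_t((0.0, 0.0, 0.0), direction, center, radius)
            if t is not None and t < closest_t:
                closest_t, hit_color = t, color
        total = [a + b for a, b in zip(total, hit_color)]
    return tuple(channel / samples for channel in total)

# The center pixel of a 640x480 view plane lands on the red sphere.
print(primary_ray_color(px=320, py=240, width=640, height=480))
```

Because every pixel is computed independently, calls like this one can run for all pixels in parallel, which is the parallelism the transcript describes.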

So the next question is: how is the location where the primary ray hits illuminated by the light sources, and how bright or dark should the pixel's color be shaded? For example, when you look at the blue stripe of the barbershop pole, the entire stripe is just blue, but in the rendered image, there's a gradient from bright to dark across a number of pixels depending on how the triangles are facing the lights and the window. Specifically, the dark blue backside doesn't face any of the light sources and therefore its illumination comes only from light bouncing off the nearby walls. Furthermore, when the lighting conditions change and more light enters the scene, the entire barbershop pole brightens up. This accurate lighting applies to all the objects in the scene and is what transforms the scene and makes it look realistic. In order to accurately determine the brightness of these blue pixels, ray tracing first needs to determine how the surface is illuminated directly by the light sources, which is called direct illumination, and second, how the surface is illuminated by light bouncing off other objects, which is called indirect illumination. Combining direct and indirect illumination is called global illumination.

In order to calculate direct illumination, we start at the intersection point where the primary ray hits the triangle in the barbershop pole and then we generate additional rays called shadow rays and send them in the direction of each light source such as the light bulbs or the sun outside the window. If there are no objects between the intersection point and a light source, then that means that this point on the blue stripe is directly illuminated by that light source. For each light source that directly illuminates this point, we factor in the light source's brightness, size, color, distance, and the direction of the surface that the triangle inside the blue stripe is facing. All these factors are multiplied by the Red, Blue, and Green or RGB values of the blue stripe, which in turn changes the shading or brightness of the pixel that the primary ray went through. Let's brighten the room again, and you can see the RGB values increase for this pixel.

Now let's dim the room once more and look at a different pixel whose primary ray hits the backside of the barbershop pole. A similar set of shadow rays are sent out from this intersection point to each light source, but each of these rays is blocked by other triangles in the pole, and thus this point doesn't receive any direct illumination from any of the light sources, leaving the pixel dark. These rays are called shadow rays because they determine whether a location is directly illuminated by a light source or whether it's in a shadow.
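
Here is a hedged Python sketch of that direct-illumination step, with invented data structures: a light is just a small dict, and `occluded` stands in for the scene's shadow-ray intersection test. One shadow ray is cast per light, the light is skipped if the ray is blocked, and otherwise its contribution is weighted by its color and power, its distance, how squarely the surface faces it, and the surface's own RGB values.

```python
import math

def direct_illumination(hit_point, surface_normal, surface_rgb, lights, occluded):
    """
    Sum the light arriving directly at `hit_point`: one shadow ray per light,
    skipped if something blocks it, otherwise weighted by brightness, color,
    distance, and how squarely the surface faces the light.
    `occluded(point, light_pos)` is assumed to be the scene's ray-intersection test.
    """
    r, g, b = 0.0, 0.0, 0.0
    for light in lights:                       # e.g. {"pos": ..., "rgb": ..., "power": ...}
        lx, ly, lz = (light["pos"][i] - hit_point[i] for i in range(3))
        dist = math.sqrt(lx * lx + ly * ly + lz * lz)
        to_light = (lx / dist, ly / dist, lz / dist)

        if occluded(hit_point, light["pos"]):  # shadow ray blocked -> in shadow for this light
            continue

        # Lambert's cosine term: surfaces facing the light head-on are brighter.
        cos_term = max(0.0, sum(n * l for n, l in zip(surface_normal, to_light)))
        # Inverse-square falloff with distance, scaled by the light's power.
        falloff = light["power"] / (dist * dist)

        r += surface_rgb[0] * light["rgb"][0] * cos_term * falloff
        g += surface_rgb[1] * light["rgb"][1] * cos_term * falloff
        b += surface_rgb[2] * light["rgb"][2] * cos_term * falloff
    return (r, g, b)

# Example: one bulb above and to the right of a point on the blue stripe.
lights = [{"pos": (2.0, 3.0, 0.0), "rgb": (1.0, 0.95, 0.9), "power": 20.0}]
print(direct_illumination(
    hit_point=(0.0, 0.0, 0.0),
    surface_normal=(0.0, 1.0, 0.0),
    surface_rgb=(0.1, 0.1, 0.8),          # blue stripe
    lights=lights,
    occluded=lambda p, q: False,          # stub: nothing blocks the shadow ray here
))
```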

You might think that this backside should be entirely black because it's in the shadows and none of the light rays from the light sources can reach it. However, this backside still has color because it's illuminated by light bouncing off the walls. This light is called indirect illumination, and in order to calculate it, we take the intersection point from the primary ray and generate a secondary ray that bounces off it. This secondary ray then hits a new surface such as this point on the wall. From this secondary point we send out a new set of shadow rays to each light source to see whether the point on the wall is in shadows or whether it's directly illuminated. The results from these new shadow rays and the attributes of the corresponding light sources are combined with the color of the wall's surface, essentially turning this point on the wall into a light source that illuminates the backside of the barbershop pole. Sometimes this point is still in shadows, so we create an additional secondary ray from the point on the wall and send it in a new direction and see what it hits. Then we calculate how that third point is directly illuminated using yet another set of shadow rays, thereby turning this third point into a light source that illuminates the previous point. This secondary ray bouncing happens multiple times, and each time we send shadow rays to the light sources and check how that point is illuminated.

The purpose of bouncing the secondary rays around and sending out shadow rays at each point is to find different paths where light bounces off different surfaces and indirectly illuminates the original point where the primary ray hits. Furthermore, by sending a thousand rays through random points in a single pixel, and by having thousands of secondary rays bounce in different directions, we get an accurate approximation for indirect illumination, or how this pixel is illuminated by light bouncing off the other objects.
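
The bounce loop can be sketched in a few lines of Python. This is a simplified stand-in rather than a production path tracer: `scene.intersect` and `scene.direct_light` are assumed helpers (the latter being the shadow-ray step sketched above), secondary directions are picked crudely at random, and a real renderer would add importance sampling and many other refinements.

```python
import random

def trace_path(origin, direction, scene, max_bounces=4):
    """
    Follow one path: at each hit, gather direct light with shadow rays, then
    bounce a secondary ray in a new direction and keep going for a few bounces.
    `scene.intersect(o, d)` is assumed to return (hit_point, normal, rgb) or None,
    and `scene.direct_light(p, n, rgb)` to perform the shadow-ray step.
    """
    color = [0.0, 0.0, 0.0]
    throughput = [1.0, 1.0, 1.0]          # how much each later bounce still contributes
    for _ in range(max_bounces):
        hit = scene.intersect(origin, direction)
        if hit is None:
            break                          # the ray left the scene
        hit_point, normal, surface_rgb = hit

        # Direct illumination at this bounce (shadow rays to every light source).
        direct = scene.direct_light(hit_point, normal, surface_rgb)
        for i in range(3):
            color[i] += throughput[i] * direct[i]

        # Light picked up on later bounces is tinted by the color of this surface.
        for i in range(3):
            throughput[i] *= surface_rgb[i]

        # Secondary ray: pick a random direction in the hemisphere above the surface.
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            if 0.0 < sum(x * x for x in d) <= 1.0:
                break
        if sum(d[i] * normal[i] for i in range(3)) < 0.0:
            d = [-x for x in d]            # flip it onto the outward side
        length = sum(x * x for x in d) ** 0.5
        direction = tuple(x / length for x in d)
        origin = hit_point                 # a real renderer would nudge this slightly
                                           # off the surface to avoid self-intersection
    return tuple(color)
```

In a real renderer a function like this would run for each of the thousand rays per pixel and the results would be averaged, which is exactly where the enormous ray counts come from.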

It's called path tracing because by using these primary rays, secondary rays and shadow rays, we're finding billions of paths from the camera through different points in the scene and to the light sources. One additional benefit of indirect illumination and the use of secondary rays is that color can bounce from one object to another. For example, when we place a red balloon next to the wall and brighten the scene, some secondary light rays are tinted red by the balloon, and this reddish color can be seen on the wall itself.

An important detail is that the direction the secondary rays bounce off the surface depends on the material and texture properties assigned to the object. For example, here is a set of spheres that are all gray, but have different roughness values that drastically change their look. Essentially, for a perfectly smooth surface with no roughness, the object becomes a mirror because every one of the secondary rays will bounce off in the same perfect reflection direction, and whatever the secondary rays hit will combine together and become visible in the mirror-like surface. However, when a material has a roughness set to 100%, then the secondary rays will bounce in entirely random directions, resulting in a flat gray surface. Furthermore, if an object is assigned a glass material, then additional refraction rays that pass through the glass are generated, and the color and brightness of the pixels in the glass will depend mostly on the direction of the refraction rays and what those rays hit. Here's an interesting scene of some glass and mirror objects that truly show the power of path tracing, and you can see multiple mirror bounces in some of the objects and proper refraction in the glass.
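
As a rough illustration of how roughness can steer those secondary rays, here is a small Python sketch that blends between a perfect mirror reflection and a random direction in the hemisphere above the surface. This is an invented toy model, not Blender's actual shading code (real renderers use proper microfacet distributions, and glass would spawn separate refraction rays via Snell's law), but it reproduces the two extremes shown with the gray spheres.

```python
import math
import random

def reflect(direction, normal):
    """Perfect mirror reflection of `direction` about `normal` (both unit length)."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

def random_hemisphere(normal):
    """Random direction on the hemisphere above `normal`."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in v) <= 1.0:
            break
    if sum(a * b for a, b in zip(v, normal)) < 0.0:
        v = [-x for x in v]
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def bounce_direction(incoming, normal, roughness):
    """
    Blend between the two extremes the video shows: roughness 0 -> every secondary
    ray follows the mirror direction, roughness 1 -> directions are fully random.
    """
    mirror = reflect(incoming, normal)
    scattered = random_hemisphere(normal)
    blended = [(1.0 - roughness) * m + roughness * s for m, s in zip(mirror, scattered)]
    length = math.sqrt(sum(x * x for x in blended))
    return tuple(x / length for x in blended)

# A grazing ray hitting a floor that faces straight up, at three roughness settings.
for rough in (0.0, 0.5, 1.0):
    print(rough, bounce_direction((0.707, -0.707, 0.0), (0.0, 1.0, 0.0), rough))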

Note that for this barbershop scene a thousand rays per pixel and four secondary bounces are the render settings we chose during scene setup. Other scenes use different numbers of rays per pixel, secondary bounces, and light sources. When we multiply these values together with the number of pixels in an image we get the total number of rays required to generate a single image. Furthermore, animations typically have 24 frames a second, so a 20-minute animation requires over a quadrillion rays, and that's why path tracing was considered computationally impossible for TV shows and movies for decades. The other key problem was figuring out which one triangle out of 8 million each of the rays hits first.
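
A quick back-of-the-envelope check of those figures, using only the settings quoted for the barbershop scene and a 4K-class frame. Exactly how shadow rays and secondary rays are counted varies between renderers, so treat this as an order-of-magnitude estimate rather than an exact tally.

```python
pixels_per_frame   = 3840 * 2160          # ~8.3 million pixels in a 4K frame
rays_per_pixel     = 1_000                # render setting quoted for the barbershop scene
secondary_bounces  = 4
frames             = 24 * 60 * 20         # 24 fps for a 20-minute animation

primary_rays_per_frame = pixels_per_frame * rays_per_pixel
# Each primary ray spawns up to `secondary_bounces` secondary rays, and every hit
# point also fires shadow rays, so the true count is several times higher still.
path_segments_per_frame = primary_rays_per_frame * (1 + secondary_bounces)

print(f"primary rays per frame:  {primary_rays_per_frame:.2e}")                # ~8.3e9
print(f"path segments per frame: {path_segments_per_frame:.2e}")               # ~4.1e10
print(f"segments for 20 minutes: {path_segments_per_frame * frames:.2e}")      # ~1.2e15, over a quadrillion
```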

So let's see how these problems are solved, and we'll start by transitioning to a new scene and see how ray-triangle intersections are calculated. Let's simplify the scene down to one ray and two triangles and find which one the ray hits. We start by extending the planes that the triangles are on and then, using the equations of the planes and the ray, we calculate the point at which they intersect. Now that we have a set of intersection points on separate planes, we find whether the point is inside each corresponding triangle. If it is, then that means the ray hits the triangle, and if it isn't, that means it misses the triangle. These steps are relatively simple, and with 10 triangles, we can do this over and over, once for each triangle. If multiple triangles are hit we do a distance calculation to find the closest one. However, when a scene has millions of triangles, finding which one triangle a single ray hits first becomes incredibly repetitive and computationally problematic.
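
For reference, here is a minimal Python version of that two-step test: intersect the ray with the triangle's plane, then check whether the intersection point lies on the inner side of all three edges. The example ray and triangles are made up, and production code (and GPU hardware) typically uses faster formulations of the same idea.

```python
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b):   return sum(x * y for x, y in zip(a, b))

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t where the ray hits triangle (v0, v1, v2), or None."""
    # Step 1: intersect the ray with the plane the triangle lies on.
    normal = cross(sub(v1, v0), sub(v2, v0))
    denom = dot(normal, direction)
    if abs(denom) < eps:
        return None                       # ray is parallel to the plane
    t = dot(normal, sub(v0, origin)) / denom
    if t <= 0.0:
        return None                       # the plane is behind the ray's origin
    point = tuple(o + t * d for o, d in zip(origin, direction))

    # Step 2: is the intersection point inside the triangle? It is if it sits on
    # the inner side of all three edges (each edge cross (point - vertex) points
    # the same way as the triangle's normal).
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        if dot(normal, cross(sub(b, a), sub(point, a))) < 0.0:
            return None                   # outside this edge, so the ray misses
    return t

# One ray, two triangles: keep the closer hit, exactly as described for small scenes.
ray_o, ray_d = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
tris = [
    ((-1, -1, -5), (1, -1, -5), (0, 1, -5)),
    ((-1, -1, -3), (1, -1, -3), (0, 1, -3)),
]
hits = [(t, i) for i, tri in enumerate(tris)
        if (t := ray_triangle_hit(ray_o, ray_d, *tri)) is not None]
print(min(hits))   # the closest triangle wins -> (3.0, 1)
```

Doing this for every one of the millions of triangles per ray is exactly the repetition that the bounding volume hierarchy described next is designed to avoid.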

We solve this by using what's called a bounding volume hierarchy or BVH. Essentially, we take the triangles in the scene and, using their 3D coordinates, we divide them into two separate boxes called bounding volumes. Each of these boxes contains half of all the triangles in the scene. Then we take these 2 boxes with their 1.5 million triangles and divide them again into boxes with 750,000 triangles. We keep on dividing the triangles into more and more progressively smaller pairs of boxes for a total of 19 divides. In the end we've separated 3 million triangles into a hierarchy of 19 divisions of boxes with a total of 525 thousand very small boxes at the bottom, each with around 6 triangles inside.

The key is that all of these boxes have their sides aligned with the coordinate axes, which makes for a far easier calculation. For example, if we have a ray and two axis-aligned boxes, finding whether it hits box A or box B is just a matter of finding the intercept with the plane of Y equals six, and then seeing whether the intercept coordinates fall between box A's bounds or between box B's bounds. Then we do the same thing inside box B but using the axis-aligned coordinates of the two smaller boxes inside of it.

For a scene of 3 million triangles, these 19 box divide branches form a binary tree or hierarchy, hence the name bounding volume hierarchy. At each branch we perform a simple ray-box intersection calculation to see which box the ray hits first, and then the ray travels to the next branch. At the very bottom, once a ray finishes traveling through all the bounding volume branches, which is called BVH traversal, we end up with a small box of only 6 triangles. We then do the ray-triangle intersection calculation that we mentioned earlier with just these 6 triangles. As a result, BVH trees and traversal reduce tens of millions of calculations down to a handful of simple ray-box intersections followed by 6 ray-triangle intersections.
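
Below is a hedged Python sketch of the same idea in software: a tiny BVH node holding an axis-aligned box plus either two children or a handful of triangles, a standard slab test for the ray-box check, and a traversal that only descends into boxes the ray actually hits before running a ray-triangle test (like the one sketched earlier) on the few triangles in a leaf. This is a generic textbook structure, not NVIDIA's hardware implementation.

```python
class BVHNode:
    def __init__(self, box_min, box_max, left=None, right=None, triangles=None):
        self.box_min, self.box_max = box_min, box_max   # axis-aligned bounds
        self.left, self.right = left, right             # child nodes (inner node)
        self.triangles = triangles or []                # ~6 triangles (leaf node)

def ray_hits_box(origin, direction, box_min, box_max, eps=1e-12):
    """Standard 'slab' test: cheap precisely because the box faces are axis-aligned."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        d = direction[axis]
        if abs(d) < eps:                                # ray is parallel to this pair of faces
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - origin[axis]) / d
        t2 = (box_max[axis] - origin[axis]) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def closest_hit(node, origin, direction, ray_triangle_hit):
    """Walk the hierarchy, skipping every box the ray misses, and return (t, triangle)."""
    if node is None or not ray_hits_box(origin, direction, node.box_min, node.box_max):
        return None
    if node.triangles:                                  # leaf: only a handful of triangles left
        best = None
        for tri in node.triangles:
            t = ray_triangle_hit(origin, direction, *tri)
            if t is not None and (best is None or t < best[0]):
                best = (t, tri)
        return best
    hits = [closest_hit(child, origin, direction, ray_triangle_hit)
            for child in (node.left, node.right)]
    hits = [h for h in hits if h is not None]
    return min(hits, default=None)                      # keep the nearer of the two subtrees
```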

Using BVHs helps to solve which triangle a ray will hit first but doesn't fix the fact that a single frame of animation requires over a hundred billion rays. The solution is in the incredibly powerful GPUs we now have. When we open up this GPU, we find a rather large microchip that has 10,496 CUDA or shading cores and 82 Ray Tracing or RT cores. The CUDA cores perform basic arithmetic while the ray tracing cores are specially designed and optimized to execute Ray Tracing. Inside the RT cores are two sections: the BVH traversal section takes in all the coordinates of the boxes and the direction of the ray and executes BVH traversal in nanoseconds. Then, the ray-triangle intersection section uses the coordinates of the six or so triangles in the smallest bounding volume and quickly finds which triangle the ray hits first. The RT cores operate in parallel with one another and pipeline the operations so that a few billion rays can be handled every second, and a complex scene like this one can be rendered in 4 minutes.

Overall, Path Tracing's computationally impossible problems are solved by using bounding volume hierarchies along with improvements in GPU hardware. One crazy fact is that the most powerful supercomputer in the year 2000 was the ASCI White, which cost 110 million dollars and could perform 12.3 trillion operations a second. Compare this with the NVIDIA RTX 3090 GPU, which cost a few thousand dollars when it first came out in 2020, and whose CUDA or shading cores perform 36 trillion operations a second. It's mind-boggling how such an incredible amount of computing power can fit into a graphics card the size of a shoebox and how computer-generated images or CGI and special effects, which used to be only for high-budget films, can now be created on a desktop computer.
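
Putting that comparison into numbers, using only the figures quoted above and treating "operations per second" loosely, since supercomputer FLOPS and GPU shader throughput are not measured identically; the launch price below is an assumption of roughly $1,500.

```python
asci_white_ops  = 12.3e12     # operations per second, year-2000 supercomputer
asci_white_cost = 110e6       # dollars
rtx_3090_ops    = 36e12       # operations per second, quoted for the CUDA/shading cores
rtx_3090_cost   = 1500        # dollars, roughly its launch price (street prices ran higher)

print(f"throughput ratio: {rtx_3090_ops / asci_white_ops:.1f}x")        # ~2.9x faster
print(f"cost ratio:       1/{asci_white_cost / rtx_3090_cost:,.0f}")    # ~1/73,000 of the price
print(f"ops per dollar:   {(rtx_3090_ops / rtx_3090_cost) / (asci_white_ops / asci_white_cost):,.0f}x better")
```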

Ray Tracing is a fusion of a variety of different disciplines, from the physics of light, to trigonometry, vectors, and matrices, and then also computer science, algorithms and hardware. Covering all these topics would require multiple hour-long videos which we don't have time to do, but luckily Brilliant, the sponsor of this video, already has several free and easy to access courses that explore these topics. Brilliant is where you learn by doing, and is a website filled with thousands of fun and interactive modules, loaded with subjects ranging from the fundamentals of math to quantum mechanics to programming in Python to biology, and much more.

When I learn new things on Brilliant, I like to think about Steve Jobs, and how he took a calligraphy class at college. Although at the time it had no practical application in his life, 10 years later when designing the Macintosh computer, he applied all the lessons from that calligraphy course to designing the typefaces and proportionally spaced fonts of the Mac. The key is that as you progress through Brilliant's interactive lessons and learn new things, you may not know how those lessons apply to your job or life, but there will be one or two courses that will click into place and change the trajectory of your career. However, if you don't try out their courses, then you'll never know. The other reason why Steve Jobs is applicable to ray tracing is because he was the CEO of Pixar from 1986 until 2006 and helped to design the computers that rendered some of its first movies. To be a successful inventor like Steve Jobs, you need to be well versed in a wide range of disciplines.

For the viewers of this channel, Brilliant is providing a free 30-day trial with access to all their thousands of lessons and is also offering 20% off an annual subscription. Just go to brilliant.org/brancheducation. The link is in the description below.

We loved making this video because path tracing is an algorithm that we use daily, due to the fact that all our animations are created and rendered using a software called Blender, which uses path tracing in its rendering engine. Specifically, here are all the scenes we used and some statistics that you can pause the video and look at.

It takes a ton of work to create high quality educational videos. Researching this video, writing the script, and then animating the scenes has taken us over 800 hours, so if you could take a quick second to like this video, subscribe to the channel, write a comment below and share it with someone who watches TV or movies, it would help us a ton.

Furthermore, we'd like to give a shout-out to the Blender Dev Team. Blender is an incredibly powerful, free-to-use modeling and animation software. Each of these scenes was made by an incredible artist and you can download them for free from the Blender website.

Finally, one question you may have is: how is ray tracing used in video games?

There are many different methods, so we'll cover just a few of them. The first one is similar to path tracing but with some shortcuts. For a given environment in a video game, a very low-resolution duplicate of all the models in the scene is created. Path tracing is then used to determine direct and indirect lighting for each of these low-resolution objects and the results are saved into a light map on the low-resolution duplicate. Then the light map is applied to the high-resolution version of the objects in the scene, creating realistic indirect lighting and shadows on the high-resolution objects. This method is pretty good at approximating indirect lighting and is one of the ray tracing techniques used in Unreal Engine's Lumen renderer.
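
A hedged sketch of the light-map idea, with invented names and data layout (and not how Lumen is implemented internally): the expensive path tracing runs once, offline, for each texel of the low-resolution proxy, and at runtime the high-resolution model simply looks the result up.

```python
def bake_lightmap(proxy_texels, path_trace_irradiance):
    """
    Offline step: for every texel of the low-resolution proxy mesh, run the
    expensive path-tracing computation once and remember the result.
    `proxy_texels` maps (u, v) -> (world_position, surface_normal);
    `path_trace_irradiance(pos, normal)` is assumed to be a path tracer like the
    earlier sketches, returning an RGB value for direct + indirect light.
    """
    return {uv: path_trace_irradiance(pos, normal)
            for uv, (pos, normal) in proxy_texels.items()}

def shade_at_runtime(lightmap, uv, albedo):
    """
    Runtime step: the high-resolution mesh doesn't trace any rays at all.
    It just samples the nearest baked texel and tints it by its own surface color.
    """
    nearest = min(lightmap, key=lambda key: (key[0] - uv[0])**2 + (key[1] - uv[1])**2)
    baked = lightmap[nearest]
    return tuple(a * b for a, b in zip(albedo, baked))

# Toy usage: two texels on the proxy, a fake "path tracer", one runtime lookup.
texels = {(0.25, 0.5): ((0, 0, 0), (0, 1, 0)), (0.75, 0.5): ((2, 0, 0), (0, 1, 0))}
lightmap = bake_lightmap(texels, lambda pos, n: (0.9, 0.8, 0.7) if pos[0] < 1 else (0.2, 0.2, 0.3))
print(shade_at_runtime(lightmap, uv=(0.3, 0.5), albedo=(0.8, 0.1, 0.1)))
```

The trade-off is that baked lighting is only correct for the scene as it was when it was baked, which is why fully dynamic lights and moving objects need extra handling.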

The second and completely different method for using ray tracing in video games is called screen space ray tracing. It doesn't use the scene's geometries but rather uses the images and data generated from the video game graphics rendering pipeline, where all the objects in the scene undergo 3D transformations to build a flat 2D image on the viewscreen. During the video game graphics process, additional data is created, such as a depth map that shows how far each object and the corresponding pixels are from the camera, as well as a normal map that shows the direction each of the objects and pixels are facing. By combining the view screen, the depth map, and the normal map, we can generate an approximation for the X, Y, and Z values of the various objects in the scene, as well as determine what direction each pixel is facing.

Now that we have a simplified scene, let's say this lake is reflective, and we want to know what pixels should be shown in its reflection. To figure it out, we use ray tracing with this simplified screen space 3D representation and bounce the rays off of the lake's pixels using the normal map. These rays then continue through the simplified geometry and hit the trees behind it, thus producing a reflection of the trees on the lake. One problematic issue with screen space ray tracing is that it can only use the data that's on the screen. As a result, when the camera moves, the trees move out of view, and thus the trees are removed from the screen space data and it's impossible to see them in the reflection. Additionally, screen space ray tracing doesn't allow for reflections of objects behind the camera. This type of ray tracing along with other rendering algorithms are used in games like Cyberpunk.
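
Here is a simplified Python sketch of the core of a screen-space reflection trace, with an invented G-buffer layout (a dictionary mapping on-screen pixels to the view-space position, normal, and color the rendering pipeline already produced) and a basic pinhole projection. Real engines add thickness tests, hierarchical marching, and fallbacks, but the loop captures the idea: reflect the view direction about the pixel's normal, step the ray forward, and at each step check whether it has passed behind the depth of whatever is drawn at that screen location.

```python
def reflect(v, n):
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2.0 * d * b for a, b in zip(v, n))

def project(p, width, height, focal):
    """Pinhole projection of a view-space point (camera looks down -z) to pixel coords."""
    if p[2] >= 0.0:
        return None                                   # behind the camera, so not on screen
    return (int(width * 0.5 + focal * p[0] / -p[2]),
            int(height * 0.5 - focal * p[1] / -p[2]))

def screen_space_reflection(px, py, gbuffer, width, height,
                            focal=500.0, step=0.1, max_steps=256, bias=0.05):
    """
    For the reflective pixel (px, py), march a reflected ray through the screen-space
    scene and return the color of whatever it appears to hit, or None.
    `gbuffer[x, y]` is assumed to hold (view_pos, normal, color) for each on-screen
    pixel, i.e. the position/normal/color data the game's pipeline already produced.
    """
    view_pos, normal, _ = gbuffer[px, py]
    view_len = sum(c * c for c in view_pos) ** 0.5
    view_dir = tuple(c / view_len for c in view_pos)          # camera sits at the origin
    ray_dir = reflect(view_dir, normal)                       # bounce off the lake's surface

    p = view_pos
    for _ in range(max_steps):
        p = tuple(a + step * b for a, b in zip(p, ray_dir))   # advance the reflected ray
        pixel = project(p, width, height, focal)
        if pixel is None or pixel not in gbuffer:
            return None       # ray left the screen: the reflected object isn't in the data
        scene_pos = gbuffer[pixel][0]
        # If the marched point is now farther from the camera than what is drawn at that
        # pixel, the ray has passed behind on-screen geometry: treat that as the hit.
        if -p[2] > -scene_pos[2] + bias:
            return gbuffer[pixel][2]                          # reflect that pixel's color
    return None
```

Note that the march simply returns None when the projected pixel is no longer in the buffer, which is exactly the limitation described above: anything off screen or behind the camera cannot show up in the reflection.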

Additionally, if you're curious as to how video game graphics work, we have a separate video that explores all the steps such as Vertex Shading, Rasterization, and Fragment Shading. The video game graphics rendering pipeline is entirely different from Ray Tracing, so we recommend you check it out. And, that's pretty much it for Ray Tracing.

We'd like to give a shoutout to Cem Yuksel, a professor at the School of Computing at the University of Utah. On his YouTube channel, you can find his lecture series on computer graphics and interactive graphics, which were both instrumental in the research for this video.

This is Branch Education, and we create 3D animations that dive deeply into the technology that drives our modern world. Watch another Branch video by clicking one of these cards or click here to subscribe.


Related Tags
Ray Tracing, CGI, 3D Rendering, Path Tracing, Computer Graphics, Animation, Technology, Visual Effects, GPU, Blender