Nvidia's Breakthrough AI Chip Defies Physics (GTC Supercut)

Ticker Symbol: YOU
19 Mar 2024 · 19:43

Summary

TLDR: The transcript covers the rapid advance of computing, highlighting a 1,000x increase in computation over 8 years and the introduction of the Blackwell GPU as the successor to Hopper. It emphasizes the transformative impact of these technologies on AI and robotics, showcasing the Jetson autonomous processor, the Omniverse simulation platform, and the potential for humanoid robotics. The speaker predicts a future in which everything that moves will be robotic, requiring a digital twin platform like Omniverse for coordination and development.

Takeaways

  • 🚀 Computing power has increased 1,000-fold in the last 8 years, far outpacing the roughly 10x-every-5-years rate the speaker attributes to Moore's Law during the PC era.
  • 🌟 Hopper is the most advanced GPU in production today; NVIDIA's new Blackwell platform marks the next significant leap in computing capability.
  • 🔗 The Blackwell chip packs 208 billion transistors across two abutted dies linked at 10 terabytes per second, so they behave as one giant chip with no memory locality or cache issues.
  • 💡 Blackwell is part of a new platform that redefines what GPUs look like, offering unprecedented computation and networking capabilities.
  • 🔧 Blackwell's new Transformer Engine delivers 2.5x Hopper's FP8 training performance per chip and adds FP6 and FP4 formats; FP4 effectively doubles throughput for inference.
  • 📈 Blackwell's energy efficiency and networking bandwidth optimizations are set to save substantial power, bandwidth, and time in large-scale training (a back-of-envelope sketch of the keynote's figures follows this list).
  • 🤖 NVIDIA envisions a future where everything that moves will be robotic, with a focus on AI robotics and the development of transformative technologies.
  • 🏭 The company is building end-to-end systems for robotics, including the Jetson autonomous processor, Omniverse for simulation, and the DGX AI system for training.
  • 🚗 NVIDIA's technology is being adopted by major automotive companies such as Mercedes-Benz, JLR, and BYD, with the Thor platform, designed for Transformer engines, targeting the next generation of robotics.
  • 🌐 The Omniverse platform is presented as the digital twin platform and operating system for the robotics world, essential for the next Industrial Revolution.
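
As a rough illustration of the efficiency claim above, using only the round figures quoted later in the transcript (training a 1.8-trillion-parameter GPT-class model for about 90 days on roughly 8,000 Hopper GPUs at 15 MW versus about 2,000 Blackwell GPUs at 4 MW), the energy difference can be sketched in a few lines of Python. These are the keynote's approximate numbers, not measurements.

# Back-of-envelope comparison of the keynote's round numbers for training
# a 1.8T-parameter model for ~90 days (figures as quoted, not measured).
HOURS = 90 * 24

hopper = {"gpus": 8_000, "power_mw": 15}
blackwell = {"gpus": 2_000, "power_mw": 4}

for name, cfg in (("Hopper", hopper), ("Blackwell", blackwell)):
    energy_mwh = cfg["power_mw"] * HOURS
    print(f"{name}: {cfg['gpus']} GPUs, {cfg['power_mw']} MW, "
          f"~{energy_mwh:,.0f} MWh over 90 days")

# Ratio of energy consumed for the same job, per the quoted figures.
print("Energy ratio (Hopper / Blackwell):", 15 / 4)  # ~3.75x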

Q & A

  • How has the rate of computing advancement changed over the last 8 years compared to the past?

    -Over the last 8 years, computing has increased by 1,000 times, which is a significant acceleration compared to the historical rate of 10 times every 5 years as per Moore's law during the PC revolution.

  • What is the significance of the Hopper chip in the context of GPU advancement?

    -Hopper is the most advanced GPU in production at the time of the talk and serves as the baseline against which Blackwell is introduced; the speaker credits it with having "changed the world" before unveiling its much larger successor.

  • What is the Blackwell chip and how does it differ from traditional GPUs?

    -Blackwell is the name of a platform rather than a single chip, and it redefines what a GPU looks like. Its GPU joins two abutting dies with a 10-terabytes-per-second link so that they function as one giant chip, with no memory locality or cache issues.

  • How does the new Transformer engine in the Blackwell chip enhance performance?

    -Alongside the new Transformer Engine, Blackwell includes fifth-generation NVLink that is twice as fast as Hopper's. It adds computation in the network, allowing faster information sharing and synchronization among many GPUs, which amplifies effective bandwidth beyond the raw 1.8 terabytes per second.

  • What is the FP6 format and how does it benefit the chip's performance?

    -FP6 is a new computational format that lets the chip hold more parameters in memory at the same computation speed; the companion FP4 format effectively doubles throughput, which is crucial for inference and token generation.

  • What is the significance of the NVLink Switch chip with 50 billion transistors?

    -The NVLink Switch chip enables every single GPU to communicate with every other GPU at full speed simultaneously. With four NVLinks, each capable of 1.8 terabytes per second, it is the building block for systems in which GPUs interact at maximum efficiency.

  • How does the DGX system exemplify the power of the Blackwell chip and NVLink technology?

    -The DGX system, with an NVLink spine carrying 130 terabytes per second, shows what Blackwell and NVLink make possible at system scale. It represents a leap from the original DGX-1's 170 teraflops to 720 petaflops, almost an exaflop for training, all in a single rack.

  • What are the key components of the end-to-end system for AI robotics?

    -The key components of the end-to-end system for AI robotics are the DGX system for training the AI, the Jetson AGX as the onboard autonomous processor, and Omniverse (running on OVX) as the digital representation of the world in which the robot learns and practices.

  • How does the Jetson AGX system support autonomous systems?

    -The Jetson AGX system is designed to be a low-power, high-speed processor for sensor processing and AI, making it ideal for autonomous systems that require real-time decision-making and action, such as self-driving cars or robots.

  • What is the potential impact of the Thor platform on the future of robotics?

    -The Thor platform is designed for Transformer engines and is expected to be used in the next generation of robotics, including humanoid robots. It signifies a shift towards more generalized and adaptable robotic systems, capable of learning and functioning in a variety of environments with human-like adaptability.

  • How does the Isaac simulation engine contribute to the development of robotics?

    -The Isaac simulation engine provides a virtual environment where humanoid robots can learn and adapt to the physical world. It serves as a 'gym' for robots, allowing them to develop and refine their capabilities in a controlled and safe digital setting before real-world application.
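
To make the "gym" idea concrete, here is a minimal, self-contained sketch of the pattern such a simulator enables: an agent repeatedly acts in a simulated environment, receives a reward, and keeps whichever policy parameters perform best. This is a toy stand-in written for illustration only; it does not use the actual Isaac APIs, and every name in it is hypothetical.

import random

# Toy stand-in for a simulator "gym": a 1-D balance task.
# This illustrates the train-in-simulation loop conceptually;
# it is NOT the Isaac API, just a hypothetical sketch.
def rollout(gain: float, steps: int = 200) -> float:
    """Simulate a crude balance task; higher return = better policy."""
    position, velocity, total_reward = 0.0, 0.0, 0.0
    for _ in range(steps):
        action = -gain * position                      # simple proportional policy
        velocity += action * 0.1 + random.uniform(-0.05, 0.05)
        position += velocity * 0.1
        total_reward += 1.0 - min(abs(position), 1.0)  # reward staying near zero
    return total_reward

best_gain, best_return = 0.0, float("-inf")
for episode in range(50):                              # "training" via random search
    candidate = best_gain + random.uniform(-0.5, 0.5)
    score = rollout(candidate)
    if score > best_return:
        best_gain, best_return = candidate, score

print(f"best gain {best_gain:.2f}, return {best_return:.1f}")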

Outlines

00:00

🚀 Computing Power Advancements and the Introduction of Blackwell GPU

The paragraph discusses the significant advancement in computing power over the last eight years, highlighting a 1,000-fold increase, and contrasts this growth with the pace of Moore's Law during the PC revolution. The Blackwell GPU is introduced, and the Blackwell chip, with its 208 billion transistors and a design that lets two abutted dies function as one, is described as a game-changer. The paragraph also notes that ramping up production will be eased by Blackwell's drop-in compatibility with existing Hopper installations and infrastructure.

05:01

🌐 Blackwell System Configurations and the Emergence of Generative AI

This paragraph delves into the two Blackwell system configurations: a version that drops into the current Hopper HGX infrastructure, and a prototype board pairing two Blackwell chips (four dies) with a Grace CPU. It introduces the new Transformer Engine and fifth-generation NVLink with computation in the network, which raises effective system performance. It also covers the new FP6 format, which increases how many parameters fit in memory, and FP4, which doubles inference throughput. The focus then shifts to generative AI, a processor designed for that era, and the importance of content token generation, before introducing a new chip, the NVLink Switch, that allows the platform to scale further.

10:05

🤖 Robotics and AI Integration: The Next Wave of Automation

The paragraph envisions the future of robotics and AI, suggesting that recent advances in AI will soon be applied to robots. It describes end-to-end systems for robotics: the DGX AI training system, the Jetson AGX autonomous processor, and the Omniverse simulation engine running on OVX. It then walks through an example of how AI and Omniverse will work together in a robotics building, a warehouse that acts like an air traffic controller, redirecting autonomous systems and people with new waypoints (a hypothetical sketch of that control loop follows this paragraph). Finally, it discusses humanoid robotics and the Thor computer designed for Transformer engines, a significant step toward general robotics.
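
The warehouse-as-air-traffic-controller idea can be sketched as a simple supervisory loop: the digital twin tracks every agent's position and intended path, and when two agents get too close it issues a replacement waypoint. The sketch below is a hypothetical, self-contained illustration of that control pattern, not NVIDIA's actual Omniverse code; every name and threshold in it is made up.

from dataclasses import dataclass

# Hypothetical sketch of a "warehouse as air-traffic-controller" loop.
# Not Omniverse code; names and thresholds are illustrative only.
@dataclass
class Agent:
    name: str
    pos: tuple[float, float]
    waypoint: tuple[float, float]

def too_close(a: Agent, b: Agent, radius: float = 2.0) -> bool:
    dx, dy = a.pos[0] - b.pos[0], a.pos[1] - b.pos[1]
    return (dx * dx + dy * dy) ** 0.5 < radius

def supervise(agents: list[Agent]) -> None:
    """One tick of the supervisory loop: reroute any pair in conflict."""
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if too_close(a, b):
                # Give the second agent (here: the forklift) a detour waypoint.
                detour = (b.waypoint[0] + 3.0, b.waypoint[1])
                print(f"conflict: {a.name} / {b.name} -> rerouting {b.name} to {detour}")
                b.waypoint = detour

agents = [
    Agent("human-1", (0.0, 0.0), (10.0, 0.0)),
    Agent("forklift-7", (1.0, 1.0), (10.0, 1.0)),
]
supervise(agents)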

15:08

🌟 The Future of Robotics and AI: A Digital Platform for the Robotics World

The final paragraph outlines the broader implications of the advancements in computing and AI for the future of robotics. It predicts a new Industrial Revolution where every moving object will be robotic, emphasizing the safety and convenience of such a future. The paragraph also highlights the importance of a digital platform, Omniverse, as the operating system for the robotics world. The speaker concludes by thanking the audience and expressing excitement for the future of GTC and the transformative impact of these technologies.

Keywords

💡Computing Advancement

Refers to the rapid progress in computing capabilities, particularly the increase in computation power by 1,000 times over the last 8 years. This is a central theme of the video, highlighting the exponential growth in computing and its implications for technological innovation.

💡Hopper

Hopper is NVIDIA's high-performance GPU and the most advanced GPU in production at the time of the talk. It is integral to the video's narrative as the baseline that sets the stage for the introduction of the even more advanced Blackwell GPU.

💡Blackwell

Blackwell is the name of a platform, and its GPU is the most advanced announced in the video. It connects two chip dies in a way that lets them function as one, with no memory locality or cache issues. Blackwell signifies a major breakthrough in GPU technology, with 208 billion transistors and a 10-terabytes-per-second die-to-die link.

💡Generative AI

Generative AI refers to the new era of artificial intelligence that is capable of creating or generating new content, such as language models or images. The video positions generative AI as a fundamental shift in computing, where the focus is on creating and producing rather than just processing information.

💡Transformer Engine

Transformers are the AI model architecture behind modern language models; the Transformer Engine is the hardware feature that accelerates them. In the video, the company introduces a new Transformer Engine alongside fifth-generation NVLink, which is twice as fast as Hopper's and adds computation in the network, allowing more efficient data sharing and synchronization among multiple GPUs.

💡FP4 Format

FP4 is a new low-precision computational format introduced in the video, designed to increase the efficiency of AI inference; it effectively doubles throughput and, per the keynote, delivers 5x Hopper's token-generation capability. A companion FP6 format allows more parameters to be stored in memory at the same computation speed. Together they contribute to the overall performance improvement of the Blackwell GPU.
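
As a rough illustration of why lower-precision formats matter: the memory needed just to hold a model's weights scales with bits per parameter, so halving the precision roughly doubles how many parameters fit in the same memory and, on hardware with matching low-precision math units, roughly doubles arithmetic throughput. The figures below use the 1.8-trillion-parameter model size quoted in the keynote; everything else is simple arithmetic, not a published specification.

# Rough arithmetic: weight-memory footprint of a 1.8T-parameter model
# at different precisions (bits per parameter). Illustrative only.
PARAMS = 1.8e12

for fmt, bits in (("FP16", 16), ("FP8", 8), ("FP6", 6), ("FP4", 4)):
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{fmt:>4}: {gigabytes:,.0f} GB just for the weights")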

💡NVLink Switch

The NVLink Switch is a chip with 50 billion transistors, designed to enable high-speed communication between GPUs. It features four NVLinks, each capable of 1.8 terabytes per second of data transfer, allowing every single GPU to communicate with every other GPU at full speed simultaneously.

💡DGX System

DGX is NVIDIA's product line of AI and deep learning systems. In the video, the DGX system is shown to have evolved dramatically, with the latest Blackwell-based version offering 720 petaflops, almost an exaflop for training, in a single rack. This represents an enormous increase in computational power over the original DGX-1's 170 teraflops, enabling larger AI models and faster training times.
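
For scale, the jump the keynote cites, from the original DGX-1's 170 teraflops to 720 petaflops in one rack, works out to a factor of a few thousand; the quick conversion below simply restates those two quoted numbers in common units.

# Quoted figures: original DGX-1 ~170 teraflops; the new rack ~720 petaflops.
dgx1_pflops = 170 / 1000            # 170 TF = 0.17 PF
new_rack_pflops = 720

print(f"DGX-1: {dgx1_pflops} PF, new rack: {new_rack_pflops} PF")
print(f"speedup factor: ~{new_rack_pflops / dgx1_pflops:,.0f}x")   # ~4,235x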

💡Robotics

In the context of the video, robotics refers to the field of technology that involves the development of machines or systems capable of performing tasks autonomously, often with the ability to interact with the physical world. The video discusses the next wave of AI robotics, indicating a future where AI and robotics will be integrated more deeply, leading to more advanced autonomous systems.

💡Jetson

Jetson is a low-power, high-performance AI computing platform designed for autonomous systems. It is particularly suited for high-speed sensor processing and AI applications in mobile or moving platforms, such as autonomous vehicles. In the video, Jetson is positioned as a key component in the company's end-to-end system for robotics, alongside the DGX for training AI and Omniverse for simulation.

💡Omniverse

Omniverse is a digital platform and operating system for robotics, designed to represent the world digitally for robots to learn and simulate their actions. It serves as a 'gym' for robots, allowing them to adapt to the physical world through simulation before real-world deployment.

Highlights

Computing has advanced by 1,000 times in the last 8 years, far surpassing Moore's Law.

A new chip, Hopper, has been developed, but there's a need for even bigger GPUs.

Introduction of a very large GPU platform named Blackwell.

Blackwell features 208 billion transistors and a unique design in which two dies communicate as one chip over a 10-terabytes-per-second link.

The Blackwell chip eliminates memory locality and cache issues, presenting as a single, giant chip.

Blackwell's ambitions were considered beyond the limits of physics, but the engineering team overcame these challenges.

The Blackwell chip goes into two types of systems: one form-fit-function compatible with existing Hopper installations, and a board pairing two Blackwell chips with a Grace CPU.

The new Transformer Engine in Blackwell delivers 2.5x Hopper's FP8 training performance per chip and introduces the new FP6 and FP4 formats.

The FP4 format delivers 5x Hopper's token generation and inference capability, vital for the future of generative AI.

The new era of computing is focused on generative AI, with a processor designed specifically for this purpose.

Content token generation is a key part of the new generative AI era.

The NVLink Switch chip, with 50 billion transistors, enables every GPU to communicate with every other GPU at full speed simultaneously.

The DGX system, now with Blackwell, achieves 720 petaflops, almost an exaflop for training.

The DGX system in a single rack is the world's first exaflop AI system.

The DGX NVLink spine has a bandwidth of 130 terabytes per second, which the speaker says exceeds the aggregate bandwidth of the internet.

The entire DGX rack is liquid-cooled, saving significant energy and enabling dense high-performance computing (a rough heat-balance check on the quoted coolant figures follows this list).

The future of robotics is discussed, with a focus on AI robotics and physical AI.

The Jetson autonomous processor and the OVX computer for running Omniverse are introduced as key components for robotics.

An example of a robotics building with autonomous systems, including humans and forklifts, is shared to demonstrate the future of AI and robotics integration.

The potential for humanoid robotics is discussed, with the necessary technology now available for generalized humanoid robotics.

The project 'General Robotics 003' is introduced, aiming to bring about a new industrial revolution with robotics at its core.
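
As a sanity check on the liquid-cooling figures quoted in the transcript (coolant entering at about 25 °C, leaving at about 45 °C, at roughly 2 liters per second), the heat a water loop carries is Q = mass flow × specific heat × temperature rise. The sketch below applies that formula to the keynote's round numbers; it is back-of-envelope physics, not a published thermal specification, and the quoted figures are clearly rounded.

# Heat carried by the coolant loop: Q = mass_flow * specific_heat * delta_T
flow_kg_per_s = 2.0         # quoted flow rate (~2 L/s of water ≈ 2 kg/s)
specific_heat = 4186.0      # J/(kg*K) for water
delta_t = 45.0 - 25.0       # quoted inlet/outlet temperatures (°C)

heat_kw = flow_kg_per_s * specific_heat * delta_t / 1000
print(f"coolant loop can carry roughly {heat_kw:.0f} kW")   # ~167 kW, headroom over the ~120 kW rack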

Transcripts

The rate at which we're advancing computing is insane. Over the course of the last 8 years we've increased computation by 1,000 times. Eight years, 1,000 times. Remember back in the good old days of Moore's law, it was 10x every 5 years, 100 times every 10 years, in the middle of the PC revolution. 100 times every 10 years. In the last eight years we've gone 1,000 times, and we have two more years to go. The rate at which we're advancing computing is insane, and it's still not fast enough. So we built another chip. Hopper is fantastic, but we need bigger GPUs. And so, ladies and gentlemen, I would like to introduce you to a very, very big GPU. Ladies and gentlemen, enjoy this.

[Music]

Blackwell is not a chip; Blackwell is the name of a platform. People think we make GPUs, and we do, but GPUs don't look the way they used to. This is the most advanced GPU in the world in production today. This is Hopper. Hopper changed the world. This is Blackwell. It's okay, Hopper. 208 billion transistors. And so you could see, I can see, that there's a small line between two dies. This is the first time two dies have been abutted like this, in such a way that the two dies think it's one chip. There's 10 terabytes of data between them, 10 terabytes per second, so that these two sides of the Blackwell chip have no clue which side they're on. There's no memory locality issues, no cache issues; it's just one giant chip. When we were told that Blackwell's ambitions were beyond the limits of physics, the engineers said, "So what?" And so this is what happened. And so this is the Blackwell chip, and it goes into two types of systems. The first one is form-fit-function compatible with Hopper, and so you slide out Hopper and you push in Blackwell. That's one of the reasons ramping is going to be so efficient: there are installations of Hoppers all over the world, and the infrastructure is the same, same design, the power, the electricity, the thermals, the software, identical; push it right back in. And so this is a version for the current HGX configuration.

The second one looks like this. Now, this is a prototype board, and this is a fully functioning board, so I'll just be careful here. This, right here, is, I don't know, $10 billion. The second one's $5 billion. It gets cheaper after that. The way it's going to go to production is like this one here: two Blackwell chips, four Blackwell dies, connected to a Grace CPU. The Grace CPU has a super-fast chip-to-chip link. What's amazing is that this computer is the first of its kind where this much computation fits into this small of a place. But we need a whole lot of new features in order to push the limits beyond, if you will, the limits of physics. And so one of the things that we did was we invented another Transformer engine. And with this new Transformer engine we have fifth-generation NVLink. It's now twice as fast as Hopper, but very importantly, it has computation in the network. The reason for that is because when you have so many different GPUs working together, we have to share our information with each other, we have to synchronize and update each other. Having extraordinarily fast links and being able to do mathematics right in the network allows us to essentially amplify even further. So even though it's 1.8 terabytes per second, it's effectively higher than that, and so it's many times that of Hopper.

Overall, compared to Hopper, it is two and a half times the FP8 performance for training, per chip. It also has this new format called FP6, so that even though the computation speed is the same, the amount of parameters you can store in the memory is now amplified. FP4 effectively doubles the throughput. This is vitally important for inference. The amount of energy we save, the amount of networking bandwidth we save, the amount of wasted time we save, will be tremendous. The future is generative, which is the reason why we call it generative AI, which is the reason why this is a brand-new industry. The way we compute is fundamentally different. We created a processor for the generative AI era, and one of the most important parts of it is content token generation. We call this format FP4. That's a lot of computation: 5x the token generation, 5x the inference capability of Hopper. Seems like enough, but why stop there?

And so we would like to have a bigger GPU, even bigger than this one, and so we decided to scale it. So we built another chip. This chip is just an incredible chip. We call it the NVLink Switch. It's 50 billion transistors. It's almost the size of Hopper all by itself. This switch chip has four NVLinks in it, each 1.8 terabytes per second. What is this chip for? If we were to build such a chip, we can have every single GPU talk to every other GPU at full speed at the same time. That's insane. And as a result, you can build a system that looks like this. This is what a DGX looks like now. Remember, just six years ago I delivered the first DGX-1 to OpenAI. That DGX, by the way, was 170 teraflops. That's 0.17 petaflops. This is 720. And so this is now 720 petaflops, almost an exaflop for training, and the world's first one-exaflop machine in one rack. Just so you know, there are only a couple, two, three exaflops machines on the planet as we speak. And so this is an exaflops AI system in one single rack.

Well, let's take a look at the back of it. This is what makes it possible. That's the back: the DGX NVLink spine. 130 terabytes per second goes through the back of that chassis. That is more than the aggregate bandwidth of the internet; we could basically send everything to everybody within a second. 5,000 NVLink cables in total, two miles. Now, this is the amazing thing: if we had to use optics, we would have had to use transceivers and retimers, and those transceivers and retimers alone would have cost 20,000 watts, 20 kilowatts, just for transceivers alone, just to drive the NVLink spine. As a result, we did it completely for free over the NVLink switch, and we were able to save the 20 kilowatts for computation. This entire rack is 120 kilowatts, so that 20 kilowatts makes a huge difference. It's liquid-cooled. What goes in is 25 °C, about room temperature. What comes out is 45 °C, about your jacuzzi. So room temperature goes in, jacuzzi comes out, 2 liters per second. We could sell a peripheral. 600,000 parts. Somebody used to say, you know, you guys make GPUs, and we do, but this is what a GPU looks like to me. When somebody says GPU, I see this. Two years ago, when I saw a GPU, it was the HGX: it was 70 lbs, 35,000 parts. Our GPUs now are 600,000 parts and 3,000 lbs. Okay, so 3,000 lbs, a ton and a half, so it's not quite an elephant.

Now let's see what it looks like in operation. If you were to train a GPT model, a 1.8-trillion-parameter model, it took about 3 to 5 months or so with 25,000 Amperes. If we were to do it with Hopper, it would probably take something like 8,000 GPUs and it would consume 15 megawatts. 8,000 GPUs on 15 megawatts, it would take 90 days, about three months. If you were to use Blackwell to do this, it would only take 2,000 GPUs. 2,000 GPUs, same 90 days, but this is the amazing part: only four megawatts of power. So from 15, yeah, that's right. Blackwell would be the most successful product launch in our history, and so I can't wait to see that.

Let's talk about the next wave of robotics, the next wave of AI robotics: physical AI. So far, all of the AI that we've talked about is one computer. Data comes into one computer, we take all of the data, we put it into a system like DGX, we compress it into a large language model; trillions of tokens become billions of parameters, and these billions of parameters become your AI. So I just described, in very simple terms, essentially what just happened in large language models, except the ChatGPT moment for robotics may be right around the corner. And so we've been building the end-to-end systems for robotics for some time, and I'm super, super proud of the work. We have the AI system, DGX. We have the lower system, which is called AGX, for autonomous systems, the world's first robotics processor. When we first built this thing, people asked, "What are you guys building?" It's an SoC, so it's one chip. It's designed to be very low power, but it's designed for high-speed sensor processing and AI. So if you want to run Transformers in a car, or anything that moves, we have the perfect computer for you. It's called the Jetson. And so, the DGX on top for training the AI, the Jetson as the autonomous processor, and in the middle we need another computer: we need a simulation engine that represents the world digitally for the robot, so that the robot has a gym to go learn how to be a robot. We call that virtual world Omniverse, and the computer that runs Omniverse is called OVX, and OVX, the computer itself, is hosted in the Azure cloud. Okay, and so basically we built these three things, these three systems, and on top of them we have algorithms for every single one.

Now I'm going to show you one super example of how AI and Omniverse are going to work together. The example I'm going to show you is kind of insane, but it's going to be very, very close to tomorrow. It's a robotics building. This robotics building is called a warehouse. Inside the robotics building are going to be some autonomous systems. Some of the autonomous systems are going to be called humans, and some of the autonomous systems are going to be called forklifts. And these autonomous systems are going to interact with each other, of course, autonomously, and they're going to be watched over by this warehouse to keep everybody out of harm's way. The warehouse is essentially an air traffic controller, and whenever it sees something happening, it will redirect traffic and give new waypoints, just new waypoints, to the robots and the people, and they'll know exactly what to do. This warehouse, this building, you can also talk to, of course. You could talk to it: "Hey." And all of this is running in real time. What about all the robots? All of those robots you were seeing just now, they're all running their own autonomous robotic stack.

Let's talk about robotics. Everything that moves will be robotic. There's no question about that. It's safer, it's more convenient, and one of the largest industries is going to be automotive. Beginning of next year we will be shipping in Mercedes, and then shortly after that, JLR. Today we're announcing that BYD, the world's largest EV company, is adopting our next generation. It's called Thor. Thor is designed for Transformer engines. Thor, our next-generation AV computer, will be used by BYD. The next generation of robotics will likely be humanoid robotics. We now have the necessary technology to imagine generalized humanoid robotics. In a way, humanoid robotics is likely easier, and the reason for that is because we have a lot more training data that we can provide the robots, because we are constructed in a very similar way. It could be in video form, it could be in virtual-reality form. We then created a gym for it, called Isaac Reinforcement Learning Gym, which allows the humanoid robot to learn how to adapt to the physical world. And then an incredible computer, the same computer that's going to go into a robotic car, this computer will run inside a humanoid robot. It's called Thor. It's designed for Transformer engines. The soul of NVIDIA, the intersection of computer graphics, physics, and artificial intelligence, it all came to bear at this moment. The name of that project: General Robotics 003. I know, super good. Super good.

Well, I think we have some special guests. Do we? Hey, guys. So I understand you guys are powered by Jetson. They're powered by Jetson, little Jetson robotics computers inside. They learned to walk in Isaac Sim. Ladies and gentlemen, this is Orange, and this is the famous Green. They are the BDX robots of Disney. Amazing Disney Research. Come on, you guys, let's wrap up. Let's go. Five things. Where are you going? What are you saying? No, it's not time to eat. It's not time to eat. I'll give you a snack in a moment. Let me finish up real quick.

First, a new Industrial Revolution: every data center should be accelerated; a trillion dollars' worth of installed data centers will become modernized over the next several years. Second, the computer of this revolution, the computer of this generation, generative AI, trillion parameters: this is what we announce to you today. This is Blackwell, amazing, amazing processors, NVLink switches, networking systems, and the system design is a miracle. This is Blackwell, and this, to me, is what a GPU looks like in my mind. Everything that moves in the future will be robotic. You're not going to be the only one. And these robotic systems, whether they are humanoids, AMRs, self-driving cars, forklifts, manipulating arms, they will all need one thing. Giant stadiums, warehouses, factories: there are going to be factories that are robotic, manufacturing lines that are robotic, building cars that are robotic. These systems all need one thing: they need a platform, a digital platform, a digital twin platform, and we call that Omniverse, the operating system of the robotics world. Thank you. Thank you. Have a great, have a great GTC. Thank you all for coming. Thank you.


Related Tags
Computing Revolution, AI Innovation, GPU Technology, Blackwell Chip, Generative AI, Robotics Future, Data Center Modernization, Autonomous Systems, NVIDIA Tech, Digital Twin