NVIDIA Reveals STUNNING Breakthroughs: Blackwell, Intelligence Factory, Foundation Agents [SUPERCUT]

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
19 Mar 2024 · 16:47

Summary

TLDR: The transcript covers the rapid growth of the AI industry, particularly in large language models after the invention of the Transformer. It highlights the computational demands of training such models: OpenAI's latest model is estimated at roughly 1.8 trillion parameters and required several trillion training tokens. The talk introduces Blackwell, a new GPU platform that promises to cut the energy and cost of training next-generation AI models, discusses the growing importance of inference, and closes with Nvidia's approach to training humanoid robots through the Groot model and Isaac Lab.

Takeaways

  • 🚀 The AI industry has seen tremendous growth due to the scaling of large language models, which have doubled in size approximately every six months.
  • 🧠 Doubling the model size requires a proportional increase in training token count, leading to a significant computational scale.
  • 🌐 State-of-the-art models like OpenAI's GPT require training on several trillion tokens, resulting in an enormous number of floating-point operations.
  • 🔄 Training such a model on a single petaflop GPU would take roughly a thousand years, underscoring the need for far more capable hardware.
  • 📈 Multimodal models are the next step, incorporating text, images, graphs, and charts to give models a more grounded understanding of the world.
  • 🤖 Synthetic data generation and reinforcement learning will play crucial roles in training future AI models.
  • 🔒 The Blackwell GPU platform represents a significant leap in computational capability, offering a memory-coherent system for efficient AI training.
  • 🌐 Blackwell's design allows two dies to function as one chip, with 10 terabytes per second of data transfer between them.
  • 💡 Blackwell aims to reduce the cost and energy consumption of training the next generation of AI models.
  • 🏭 The future of data centers is envisioned as AI Factories, focused on generating intelligence rather than electricity.
  • 🤖 Nvidia's Project Groot is an AI model designed for humanoid robots, capable of learning from human demonstrations and executing tasks with human-like movements.

Q & A

  • How did the invention of the Transformer model impact the scaling of language models?

    -The invention of the Transformer model allowed for the scaling of large language models at an incredible rate, effectively doubling every six months. This scaling is due to the ability to increase the model size and parameter count, which in turn requires a proportional increase in training token count.

  • What is the computational scale required to train the state-of-the-art OpenAI model?

    -The state-of-the-art OpenAI model, with approximately 1.8 trillion parameters, requires several trillion tokens to train. The product of the parameter count and the training token count determines a computation scale that demands high-performance computing resources.

  • What is the significance of doubling the size of a model in terms of computational requirements?

    -Doubling the size of a model means that you need twice as much information to fill it. Consequently, every time the parameter count is doubled, the training token count must also be appropriately increased to support the computational scale needed for training.

  • How does the development of larger models affect the need for more data and computational resources?

    -As models grow larger, they require more data for training and more powerful computational resources to handle the increased parameter count and token count. This leads to a continuous demand for bigger GPUs and higher energy efficiency to train the next generation of AI models.

  • What is the role of multimodality data in training AI models?

    -Multimodality data, which includes text, images, graphs, and charts, is used to train AI models to provide them with a more comprehensive understanding of the world. This approach helps models develop common sense and grounded knowledge in physics, similar to how humans learn from watching TV and experiencing the world around them.

  • How does synthetic data generation contribute to the learning process of AI models?

    -Synthetic data generation allows AI models to use simulated data for learning, similar to how humans use imagination to predict outcomes. This technique enhances the model's ability to learn and adapt to various scenarios without the need for extensive real-world data.

  • What is the significance of the Blackwell GPU platform in the context of AI model training?

    -The Blackwell GPU platform represents a significant advancement in AI model training by offering a highly efficient and energy-saving solution. It is designed to handle the computational demands of training large language models and can reduce the number of GPUs needed, as well as the energy consumption, compared to previous generations of GPUs.

  • How does the Blackwell system differ from traditional GPU designs?

    -The Blackwell system is a platform that includes a chip with two dies connected in such a way that they function as one, with no memory locality or cache issues. It offers 10 terabytes per second of data transfer between the two sides, making it a highly integrated and coherent system for AI computations.

  • What is the expected training time for a 1.8 trillion parameter GPT model with the Blackwell system?

    -Using the Blackwell system, training a 1.8-trillion-parameter GPT model is expected to take the same 90 days as with Hopper, but with a significant reduction in the number of GPUs required (from 8,000 to 2,000) and a drop in power draw from 15 megawatts to only four megawatts.

  • How does the Blackwell system enhance inference capabilities for large language models?

    -The Blackwell system is designed for trillion-parameter generative AI and offers inference capability roughly 30 times that of Hopper for large language models. This is due to features such as the FP4 tensor core, the new Transformer engine, and the NVLink switch, which allows much faster communication between GPUs.

  • What is the role of the Jetson Thor robotics chips in the future of AI-powered robotics?

    -The Jetson Thor robotics chips are designed to power the next generation of AI-powered robots, enabling them to learn from human demonstrations and emulate human movements. These chips, along with technologies like Isaac Lab and Osmo, provide the building blocks for advanced AI-driven robotics that can assist with everyday tasks.

Outlines

00:00

🚀 Growth of Large Language Models and Computational Requirements

This paragraph discusses the industry's dramatic growth driven by the scaling of large language models following the invention of the Transformer. Computational requirements have grown exponentially, with models doubling roughly every six months and training token counts rising in proportion to parameter counts. The state-of-the-art OpenAI model, with approximately 1.8 trillion parameters, requires several trillion tokens for training, yielding an immense computational scale measured in total floating-point operations; on a single petaflop GPU, training would take on the order of a thousand years. The emergence of ChatGPT illustrates both the progress made and the need for even larger models trained on multimodal data.
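As a rough illustration of the arithmetic behind these figures, the sketch below estimates total training compute. The 6 × parameters × tokens rule of thumb is an assumption borrowed from the scaling-law literature, not a formula given in the keynote, and 3.5 trillion tokens is an assumed midpoint of "several trillion":

```python
# Back-of-the-envelope training compute. The 6 * params * tokens rule of
# thumb is an assumption from the scaling-law literature; the keynote only
# states the product, "30-50 billion quadrillion" operations.
params = 1.8e12                    # ~1.8 trillion parameters
tokens = 3.5e12                    # "several trillion"; 3.5T is an assumed midpoint

total_flops = 6 * params * tokens  # ~3.8e25 floating-point operations
petaflop_gpu = 1e15                # a 1 petaflop/s GPU, as in the keynote

seconds = total_flops / petaflop_gpu   # ~3.8e10 seconds
years = seconds / (3600 * 24 * 365)    # ~1,200 years

print(f"{total_flops:.1e} FLOPs -> {seconds:.1e} s -> ~{years:,.0f} years")
```

With these assumed inputs the result lands in the same ballpark as the keynote's "30 billion seconds, approximately 1,000 years."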

05:00

🌐 Introducing Blackwell: The Next-Generation GPU Platform

The paragraph introduces Blackwell, a groundbreaking GPU platform named after David Blackwell, which represents a significant advancement in computational capabilities. It explains how Blackwell's design, with two chip dies acting as one, allows for 10 terabytes per second data transfer, eliminating memory locality and cache issues. The integration of Blackwell with existing Hopper infrastructure is discussed, along with its potential to drastically reduce the energy consumption and computational resources needed for training large AI models, such as GPT, from thousands of GPUs and megawatts to a more efficient and cost-effective solution.
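To put the quoted figures side by side, here is a small back-of-the-envelope comparison using only the GPU counts, power draws, and 90-day duration stated in the keynote:

```python
# Side-by-side of the keynote's quoted figures. Energy = power x time;
# with the same 90-day run, the savings come from power draw and GPU count.
days = 90
hours = days * 24  # 2,160 hours

systems = {"Hopper": (8000, 15), "Blackwell": (2000, 4)}  # (GPUs, megawatts)

for name, (gpus, power_mw) in systems.items():
    energy_mwh = power_mw * hours
    print(f"{name}: {gpus} GPUs, {power_mw} MW, {energy_mwh:,} MWh over {days} days")

# Hopper:    8000 GPUs, 15 MW, 32,400 MWh over 90 days
# Blackwell: 2000 GPUs,  4 MW,  8,640 MWh over 90 days (~3.75x less energy)
```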

10:02

🤖 Inference and the Future of AI: Blackwell's Impact

This section emphasizes the importance of training, inference, and generation in the evolution of AI. It points out that inference is hard for large language models because of their sheer size, motivating a system designed for generative AI like Blackwell. Blackwell's inference capability is highlighted as roughly 30 times that of Hopper, thanks to the new FP4 tensor core, Transformer engine, and NVLink switch. The section also discusses the vision of data centers as AI Factories whose goal is to generate intelligence rather than electricity, and of pushing the boundaries of what AI can achieve.
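The communication point can be made concrete with a toy example. The sketch below simulates, in plain NumPy, why multi-GPU inference depends on interconnect speed: a weight matrix is sharded column-wise across hypothetical devices, each computes a partial product, and an all-gather-style concatenation reassembles the result:

```python
import numpy as np

# Toy illustration of why multi-GPU inference leans on the interconnect:
# a weight matrix too large for one device is split column-wise across
# "GPUs", each computes a partial product, and an all-gather reassembles
# the full output. Real systems move these partials over NVLink/NVSwitch.

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 1024))           # one token's activations
W = rng.standard_normal((1024, 4096))        # a layer we pretend is too big

n_gpus = 4
shards = np.split(W, n_gpus, axis=1)         # each "GPU" holds 1/4 of the columns

partials = [x @ shard for shard in shards]   # local partial products
y = np.concatenate(partials, axis=1)         # the all-gather step

assert np.allclose(y, x @ W)                 # identical to one giant GPU
```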

15:02

🤖🌟 Isaac and Groot: AI-Powered Robotics for the Future

The final paragraph introduces Isaac, an AI-powered robot learning application, and Groot, a general-purpose foundation model for humanoid robot learning. It explains how these technologies enable robots to learn from human demonstrations and emulate human movements through observation. The use of Nvidia's technologies for understanding humans from videos, training models, and deploying them to physical robots is highlighted. The paragraph also discusses the Jetson Thor robotics chips designed for Groot, and how these components collectively provide the building blocks for the next generation of AI-powered robotics.

Keywords

💡Transformer

The Transformer is a type of deep learning architecture introduced in the paper 'Attention Is All You Need'. It is fundamental to modern natural language processing and is used for tasks like translation, summarization, and text classification. In the video, it is mentioned as the foundation for scaling large language models, which has significantly benefited the industry by enabling rapid advancements in AI capabilities.
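For concreteness, here is a minimal NumPy sketch of the Transformer's core operation, scaled dot-product attention as defined in 'Attention Is All You Need' (the dimensions are arbitrary toy values):

```python
import numpy as np

# Minimal sketch of scaled dot-product attention:
# softmax(Q K^T / sqrt(d)) V, the heart of the Transformer.

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
seq, d = 5, 8
Q, K, V = (rng.standard_normal((seq, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8): one output vector per position
```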

💡Parameter Count

Parameter count refers to the number of weights in a machine learning model, which determines the complexity of the model and its ability to learn from data. In the context of the video, increasing the parameter count is associated with creating larger and more powerful AI models, necessitating more computational resources and data for training.
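The bookkeeping itself is simple; here is a toy sketch, assuming ordinary dense layers with biases (nothing like the scale of a 1.8-trillion-parameter model, but the same counting):

```python
# Illustrative parameter counting: the "parameter count" is the total
# number of learned weights. For a dense layer it is in*out + out (bias).

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# A toy 3-layer MLP with assumed widths:
layers = [(1024, 4096), (4096, 4096), (4096, 1024)]
total = sum(dense_params(i, o) for i, o in layers)
print(f"{total:,} parameters")  # 25,175,040 parameters
```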

💡Computational Requirements

Computational requirements refer to the resources needed to perform a particular task, such as training or inference in AI models. This includes processing power, memory, and data storage. In the video, it is emphasized that as AI models grow in size, so do their computational requirements, leading to the development of more powerful hardware like GPUs.

💡Multimodality Data

Multimodality data refers to data that includes more than one type of content, such as text, images, graphs, and charts. The use of multimodality data in AI training allows models to develop a more comprehensive understanding of the world by integrating different types of information. In the video, the speaker mentions the future direction of AI involves training models with multimodality data to enhance their capabilities beyond just text.

💡Synthetic Data Generation

Synthetic data generation is the process of creating artificial data that mimics real-world data. This technique is used in machine learning to augment datasets, especially when real data is scarce or expensive to obtain. In the video, synthetic data generation is mentioned as a method to help AI models learn and simulate various scenarios, enhancing their ability to make predictions and decisions.
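A minimal sketch of the idea, assuming a toy physics function standing in for an expensive real-world data source (the projectile example is invented purely for illustration):

```python
import numpy as np

# Synthetic data generation in miniature: sample inputs, run them through
# a known (or simulated) process, add noise, and use the result as
# training data instead of collecting real measurements.

rng = np.random.default_rng(0)

def simulate_range(speed, angle_rad, g=9.81):
    # ideal-physics "simulator" standing in for a costly real experiment
    return speed**2 * np.sin(2 * angle_rad) / g

speeds = rng.uniform(5, 50, size=1000)
angles = rng.uniform(0.1, 1.4, size=1000)
labels = simulate_range(speeds, angles) + rng.normal(0, 0.5, size=1000)  # sensor noise

dataset = np.column_stack([speeds, angles, labels])  # synthetic training set
print(dataset.shape)  # (1000, 3)
```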

💡Reinforcement Learning

Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with its environment. The agent receives rewards or penalties, which guide its learning process towards optimal behavior. In the video, reinforcement learning is mentioned as a technique to train AI models, suggesting that AI will work with other AI in a collaborative manner, similar to how students and teachers interact.
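As a concrete instance of learning from reward feedback, here is the classic tabular Q-learning update (a textbook algorithm, not anything specific to the keynote):

```python
import numpy as np

# Tabular Q-learning update:
# Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

# one (state, action, reward, next_state) transition from a toy environment
s, a, r, s_next = 0, 1, 1.0, 2

Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
print(Q[s, a])  # 0.1: the estimate moved toward the observed return
```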

💡Hopper GPU

The Hopper GPU is a high-performance graphics processing unit developed by Nvidia for complex computational tasks, particularly AI and deep learning. In the video, Hopper is presented as the most advanced GPU currently in production, the generation that Blackwell is designed to succeed.

💡Blackwell GPU

Blackwell GPU is a next-generation graphics processing unit (GPU) platform introduced by Nvidia, succeeding the Hopper architecture. It is designed to provide even greater computational capabilities, allowing for more efficient training of AI models with reduced energy consumption. In the video, Blackwell is described as a revolutionary GPU that will enable the training of larger AI models with less power and computational resources.

💡AI Factory

An AI Factory is a concept that refers to data centers or facilities focused on generating intelligence and AI capabilities rather than traditional outputs like electricity. The goal of an AI Factory is to produce innovative AI solutions that can drive revenue and advance technology. In the video, the speaker envisions future data centers as AI Factories, emphasizing the shift from traditional industrial outputs to the generation of intelligence.

💡Project Groot

Project Groot is an initiative by Nvidia to develop a general-purpose foundation model for humanoid robot learning. The model takes multimodal instructions and past interactions as input and produces actions for robots to execute. It aims to enable robots to learn from human demonstrations and emulate human movements, facilitating AI-powered robotics advancements.
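The keynote describes only the model's inputs and outputs, so the sketch below is a hypothetical interface invented for illustration: every class, field, and the 28-joint figure are assumptions, not a published NVIDIA API:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the interface the keynote describes for Groot:
# multimodal instructions plus past interactions in, next robot action out.
# All names, types, and numbers here are invented for illustration.

@dataclass
class Observation:
    image: bytes        # camera frame
    instruction: str    # natural-language command, e.g. "give me a high five"

@dataclass
class Action:
    joint_targets: List[float]  # next pose for the robot's actuators

class GrootPolicy:
    def act(self, history: List[Observation], current: Observation) -> Action:
        # a real model would run a learned policy; this stub returns a no-op
        return Action(joint_targets=[0.0] * 28)  # 28 joints is an assumption

policy = GrootPolicy()
step = policy.act(history=[], current=Observation(b"", "wave hello"))
print(len(step.joint_targets))
```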

💡Jetson Thor

Jetson Thor is a robotics chip designed by Nvidia for the next generation of AI-powered robots. It is built to support the advanced computational needs of AI models like Groot, enabling robots to perceive, move, reason, and share our world with humans. The chip is part of the technology stack that Nvidia is providing to build AI-powered robotics of the future.

Highlights

The industry of large language models has seen tremendous growth due to scaling capabilities, effectively doubling every six months.

Doubling the size of the model requires twice as much information to fill it, leading to an increase in training token count.

The computational requirements for training large language models have grown exponentially, with the latest OpenAI model estimated at approximately 1.8 trillion parameters.

Training such large models necessitates several trillion tokens, resulting in an immense computational scale.

State-of-the-art AI models demand massive computational resources; the example given requires on the order of 30-50 billion quadrillion floating-point operations in total.

The development of larger models is ongoing, with plans to incorporate multimodality data including text, images, graphs, and charts.

Future models will be grounded in physics and common sense, understanding concepts like an arm not going through a wall.

The use of synthetic data generation and reinforcement learning will be key in training these models, similar to human learning processes.

A new, very large GPU named after David Blackwell is introduced, which is not a chip but a platform.

The Blackwell system features a unique design in which two dies act as one chip, with 10 terabytes per second of data transfer between them.

Blackwell aims to reduce the cost and energy associated with computing, making it more efficient for training next-generation models.

Inference or generation is crucial for AI, with Nvidia GPUs in the cloud often used for token generation.

The Blackwell system has an exceptional inference capability, being 30 times faster than Hopper for large language models.

Blackwell's new Transformer engine and NVLink switch contribute to its superior performance.

Data centers will evolve into AI Factories, focused on generating intelligence rather than electricity.

Nvidia Project Groot is a general-purpose foundation model for humanoid robot learning, capable of taking multimodal instructions.

Isaac Lab and Osmo are tools developed by Nvidia for training and simulation, enabling robots to learn from human demonstrations.

The Jetson Thor robotics chips are designed for Groot, providing the building blocks for the next generation of AI-powered robotics.

Transcripts

[00:00] One of the industries that benefited tremendously from scale, and you all know this one very well: large language models. Basically, after the Transformer was invented, we were able to scale large language models at incredible rates, effectively doubling every six months. Now, how is it possible that by doubling every six months we have grown the industry, we have grown the computational requirements, so far? The reason is quite simply this: if you double the size of the model, you double the size of your brain, and you need twice as much information to go fill it. So every time you double your parameter count, you also have to appropriately increase your training token count. The combination of those two numbers becomes the computation scale you have to support. The latest, state-of-the-art OpenAI model is approximately 1.8 trillion parameters, and 1.8 trillion parameters required several trillion tokens to go train. So a few trillion parameters, on the order of a few trillion tokens: when you multiply the two of them together, approximately 30, 40, 50 billion quadrillion floating-point operations. Now we just have to do some CEO math here, so just hang with me. You have 30 billion quadrillion; a quadrillion is like a peta, and so if you had a one-petaflop GPU, you would need 30 billion seconds to go compute, to go train that model. 30 billion seconds is approximately 1,000 years.

[01:49] And here we are. As we see the miracle of ChatGPT emerge in front of us, we also realize we have a long way to go. We need even larger models. We're going to train them with multimodality data, not just text on the internet: we're going to train them on text and images and graphs and charts, just as we learn by watching TV. And so there's going to be a whole bunch of watching video, so that these models can be grounded in physics and understand that an arm doesn't go through a wall. These models will have common sense from watching a lot of the world's video combined with a lot of the world's languages. They'll use things like synthetic data generation, just as you and I do when we try to learn: we might use our imagination to simulate how it's going to end up, just as I did when I was preparing for this keynote. I was simulating it all along the way. We're sitting here using synthetic data generation. We're going to use reinforcement learning. We're going to practice it in our mind. We're going to have AI working with AI, training each other, just like student-teacher debaters. All of that is going to increase the size of our models, it's going to increase the amount of data that we have, and we're going to have to build even bigger GPUs.

[03:07] Hopper is fantastic, but we need bigger GPUs. And so, ladies and gentlemen, I would like to introduce you to a very, very big GPU, named after David Blackwell. Blackwell is not a chip; Blackwell is the name of a platform. People think we make GPUs, and we do, but GPUs don't look the way they used to. Here is, if you will, the heart of the Blackwell system; inside the company it's not called Blackwell, it's just a number. And this is Blackwell, sitting next to, oh, this is the most advanced GPU in the world in production today. This is Hopper. Hopper changed the world. This is Blackwell. It's okay, Hopper. You're very good. Good boy. Well, good girl.

[04:50] 208 billion transistors. And so you could see, I can see, there's a small line between two dies. This is the first time two dies have abutted like this together in such a way that the two dies think they're one chip. There are 10 terabytes of data between them, 10 terabytes per second, so that these two sides of the Blackwell chip have no clue which side they're on. There are no memory-locality issues, no cache issues; it's just one giant chip. And so when we were told that Blackwell's ambitions were beyond the limits of physics, the engineers said, "So what?" And this is what happened. So this is the Blackwell chip, and it goes into two types of systems. The first one is form-, fit-, and function-compatible with Hopper: you slide off Hopper and you push in Blackwell. That's the reason why the ramp is going to be so efficient; there are installations of Hoppers all over the world, and they can keep the same infrastructure, same design. The power, the electricity, the thermals, the software: identical. Push it right back in. So this is the Hopper-compatible version for the current HGX configuration, and this is what the second one looks like. Now, this is a prototype board. Janine, could I just borrow... ladies and gentlemen...

[06:33] And so this is a fully functioning board, and I'll just be careful here. This, right here, is, I don't know, $10 billion. The second one's five. It gets cheaper after that, so any customers in the audience: it's okay. All right, but this one's quite expensive. This is a bring-up board, and the way it's going to go to production is like this one here. Okay, so you're going to take this: it has two Blackwell chips, four Blackwell dies, connected to a Grace CPU. The Grace CPU has a super-fast chip-to-chip link. What's amazing is that this computer is the first of its kind where this much computation, first of all, fits into this small of a place. Second, it's memory coherent: they feel like they're just one big happy family working on one application together, and so everything is coherent within it. You saw the numbers; there are a lot of terabytes this and terabytes that, but this is a miracle. Let's see, what are some of the things on here? There's NVLink on top, PCI Express on the bottom, and on, uh, your... which one is mine? Your left? One of them, it doesn't matter, one of them is a CPU chip-to-chip link. It's my left or yours, depending on which side; I was just trying to sort that out, and it just kind of doesn't matter. Hopefully it comes plugged in.

[08:39] So okay, this is the Grace Blackwell system. If you were to train a GPT model, a 1.8-trillion-parameter model, it took apparently about, you know, 3 to 5 months or so with 25,000 Amperes [Ampere-generation GPUs]. If we were to do it with Hopper, it would probably take something like 8,000 GPUs, and it would consume 15 megawatts. 8,000 GPUs and 15 megawatts; it would take 90 days, about three months. And that would allow you to train something that is, you know, this groundbreaking AI model. And it's obviously not as expensive as anybody would think, but it's 8,000 GPUs; it's still a lot of money. So, 8,000 GPUs, 15 megawatts. If you were to use Blackwell to do this, it would only take 2,000 GPUs. 2,000 GPUs, same 90 days. But this is the amazing part: only four megawatts of power. So from 15... yeah, that's right. And that's our goal. Our goal is to continuously drive down the cost and the energy (they're directly proportional to each other) associated with the computing, so that we can continue to expand and scale up the computation that we have to do to train the next-generation models.

[10:05] Well, this is training. Inference, or generation, is vitally important going forward. You know, probably some half of the time that Nvidia GPUs are in the cloud these days, they're being used for token generation. They're either doing Copilot this, or ChatGPT that, or all these different models that are being used when you're interacting with them, or generating images, or generating videos, generating proteins, generating chemicals. There's a bunch of generation going on. All of that is in the category of computing we call inference. But inference is extremely hard for large language models, because these large language models have several properties. One, they're very large, so they don't fit on one GPU. So now that you understand the basics, let's take a look at inference on Blackwell compared to Hopper. And this is the extraordinary thing: in one generation, because we created a system that's designed for trillion-parameter generative AI, the inference capability of Blackwell is off the charts. In fact, it is some 30 times Hopper.

[11:23] Yeah, for large language models, for large language models like ChatGPT and others like it. The blue line is Hopper. Imagine we didn't change the architecture of Hopper; we just made it a bigger chip. We just used the latest, greatest, you know, 10 terabytes per second; we connected the two chips together; we got this giant 208-billion-transistor chip. How would we have performed if nothing else changed? It turns out: quite wonderfully, quite wonderfully. And that's the purple line, but not as great as it could be. And that's where the FP4 tensor core, the new Transformer engine, and, very importantly, the NVLink switch come in. And the reason for that is because all these GPUs have to share their results, their partial products, whenever they do all-to-all or all-gather, whenever they communicate with each other. That NVLink switch is communicating almost 10 times faster than what we could do in the past using the fastest networks.

[12:30] Okay, so Blackwell is going to be just an amazing system for generative AI. And in the future, data centers are going to be thought of, as I mentioned earlier, as an AI factory. An AI factory's goal in life is to generate revenues, to generate, in this case, intelligence in this facility: not generating electricity, as the AC generators of the last industrial revolution did, but, in this industrial revolution, the generation of intelligence.

[13:06] [Music] It's not enough for humans to imagine. We have to invent and explore, and push beyond what's been done... We create smarter and faster. We push it to fail, so it can learn. We teach it, then help it teach itself. We broaden its understanding to take on new challenges with absolute precision, and succeed. We make it perceive and move, and even reason, so it can share our world with us. [Applause] [Music]

[14:40] This is where inspiration leads us: the next frontier. This is NVIDIA Project Groot, a general-purpose foundation model for humanoid robot learning. The Groot model takes multimodal instructions and past interactions as input and produces the next action for the robot to execute. We developed Isaac Lab, a robot-learning application, to train Groot on Omniverse Isaac Sim, and we scale out with Osmo, a new compute orchestration service that coordinates workflows across DGX systems for training and OVX systems for simulation. With these tools we can train Groot in physically based simulation and transfer zero-shot to the real world. The Groot model will enable a robot to learn from a handful of human demonstrations, so it can help with everyday tasks, and emulate human movement just by observing us. This is made possible with NVIDIA's technologies that can understand humans from videos, train models in simulation, and ultimately deploy them directly to physical robots. Connecting Groot to a large language model even allows it to generate motions by following natural-language instructions. "Hi, G1, can you give me a high five?" "Sure thing, let's high five." "Can you give us some cool moves?" "Sure, check this out."

[16:23] All this incredible intelligence is powered by the new Jetson Thor robotics chips, designed for Groot, built for the future. With Isaac Lab, Osmo, and Groot, we're providing the building blocks for the next generation of AI-powered robotics.


Related Tags
AI Innovation, Large Language Models, Multimodal Training, Blackwell GPU, Computational Efficiency, AI Industry, Tech Advancements, Transformer Engine, AI Robotics, Nvidia Technologies