Nvidia Announces Game-Changer AI Platform That Changes Everything For AMD & Intel, Worth Trillions
Summary
TLDR: This script summarizes Nvidia's development and the significance of deep learning in generative AI. It recounts the birth of Nvidia and the invention of the programmable shading GPU, through advances in computer graphics and the introduction of AI into industry. It also presents the challenges and solutions around energy efficiency, and lays out the vision of a future AI that not only boosts productivity but also discovers new science and improves energy efficiency.
Takeaways
- 🚀 Accelerating deep learning by a factor of one million enabled the creation of large language models and the rise of generative AI.
- 📉 The cost and energy required to develop generative AI have fallen drastically.
- 🎨 NVIDIA made its name by inventing programmable shading and GPUs for computer graphics.
- 🧠 Its first contact with AI came in 2012 with AlexNet, a breakthrough in computer vision.
- 🔧 NVIDIA refocused its research and development on deep learning to explore and exploit a new way of writing software.
- 🤖 NVIDIA's introduction of the DGX-1 in 2016 marked a turning point in AI development for autonomous vehicles and robotics.
- 🔄 DLSS (Deep Learning Super Sampling) enabled an enormous speedup in ray-traced graphics.
- 🌐 Generative AI will revolutionize many industries in the near future, including scientific computing and autonomous vehicles.
- 🛡️ Recent advances in AI, such as reinforcement learning from human feedback and guardrailing, have increased the controllability and accuracy of generative AI.
- 🔍 Retrieval-augmented generation makes it possible to supply an AI with semantic databases, enabling more grounded, targeted answers.
- 🌍 Generative AI is energy intensive, but new technologies such as Blackwell promise significant gains in energy efficiency.
Q & A
How was deep learning accelerated in recent years?
-Deep learning was accelerated by a factor of one million, which made it possible to build large language models.
What enabled the emergence of generative AI?
-The emergence of generative AI was enabled by that million-fold acceleration and the corresponding reduction in cost, which made general-purpose generative AI feasible.
What is the difference between traditional computer graphics and AI-assisted computer graphics?
-Traditional computer graphics relies on programmable shading and rendering techniques, while AI-assisted computer graphics can perform complex tasks such as ray tracing in real time, yielding higher quality in interactive visualizations.
What role did Tensor Core GPUs and the NVLink switch fabric play in the development of AI?
-Tensor Core GPUs and the NVLink switch fabric are foundational building blocks for accelerating AI applications; they made it possible to train and run deep-learning models faster and more efficiently.
How did the introduction of DLSS (Deep Learning Super Sampling) affect computer graphics?
-DLSS made it possible to render high-resolution, fully ray-traced simulations at high frame rates by using AI and generative techniques to preserve image quality while computing far fewer pixels.
What distinguishes ChatGPT from earlier AI models?
-ChatGPT uses reinforcement learning from human feedback to align the AI with our core values and the skills we want it to learn, leading to greater controllability and accuracy.
How will energy efficiency shape the future development of generative AI?
-Generative AI has the potential to improve energy efficiency in many areas by reducing the need to retrieve data from data centers; content can instead be generated locally.
What are the advantages of domain-specific libraries (DSLs) for generative AI?
-DSLs such as cuDNN for deep learning and cuDF for SQL/data-frame processing provide specialized acceleration for particular applications, increasing performance and efficiency in those domains.
How does NVIDIA plan to address the energy intensity of generative AI?
-NVIDIA plans to address it by developing technologies such as Blackwell, which increases application performance at constant energy and cost, and by siting data centers in regions with surplus energy.
What is the concept behind Omniverse, and how is it used with generative AI?
-Omniverse is a platform for composing multimodal data that makes it possible to connect and control content from different sources. Combined with generative AI, this enables more precise, controlled content creation.
What will future hardware development for generative AI look like?
-Hardware development will focus on further improving generative-AI performance by designing new processors and systems optimized specifically for the demands of AI applications.
Outlines
🚀 The Evolution of AI and Deep Learning
This paragraph describes the enormous acceleration in deep learning that enabled the creation of large language models and generative AI. It points to significant milestones in the computer industry, including the IBM System/360, the invention of modern computer graphics, ray tracing, and programmable shading. The paragraph mentions the founding of Nvidia by Chris Malachowsky, Curtis Priem, and the speaker, the invention of the first programmable shading GPU, and how these technologies contributed to Nvidia's growth. It also covers the company's first encounter with artificial intelligence through AlexNet in 2012 and the subsequent company-wide pivot to deep learning.
🤖 AI in Computer Graphics and in Practice
This paragraph focuses on applying generative AI to computer graphics and how Nvidia used DLSS (Deep Learning Super Sampling) to harness AI for faster rendering. It also addresses the challenges, and the speaker's optimism, around the controllability and accuracy of generative AI. The speaker mentions three breakthrough technologies: reinforcement learning from human feedback, guardrailing to constrain AI outputs, and retrieval-augmented generation for grounding responses in more authoritative data. The paragraph ends with the introduction of Edify, a text-to-2D-image model developed by Nvidia.
🌐 The Future of Generative AI and Its Control
This paragraph addresses the steady improvement of generative AI's controllability and accuracy through specific models and techniques. It describes the use of AI Foundry and Omniverse, which make it possible to steer generative AI through templates and customization. The focus is on creating 3D models and controlling the images generated from text. The speaker also discusses the importance of software development to Nvidia and how it can optimize AI technology to improve energy efficiency and open new markets.
⚡ Energy Consumption and the Future Development of AI
The fourth paragraph addresses the energy consumption of generative AI and how it could pose a challenge in the coming decades. The speaker discusses the continued scaling of AI models and the growing compute required to train them. It also covers the positive effects of generative AI, such as reducing energy use by generating data instead of retrieving it. The speaker emphasizes that generative AI can help improve energy efficiency across sectors, and that the future of data centers may depend on siting them in regions with surplus energy.
Keywords
💡Deep Learning
💡Generative AI
💡Programmable Shading
💡Nvidia
💡RTX
💡DLSS
💡Omniverse
💡Energy Consumption
💡Tensor Core GPUs
💡DSL
Highlights
Deep learning accelerated by a factor of one million, making it possible to build large language models.
A million-fold reduction in the cost and energy of developing generative AI.
Introduction of the first computer built for deep learning, the DGX-1, for autonomous vehicles, robotics, and generative AI.
The Transformer revolutionized modern machine learning.
Introduction of RTX, the world's first real-time interactive ray-tracing platform.
DLSS (Deep Learning Super Sampling) reduces energy consumption through intelligent pixel generation.
ChatGPT as the fastest-growing service in history, affecting industries worldwide.
Optimism about the controllability and accuracy of generative AI thanks to new techniques.
Introduction of guardrailing to focus AI responses on specific domains.
Retrieval-augmented generation to improve the authority and grounding of AI systems.
Development of Edify, a text-to-2D-image foundation model for generative AI.
AI Foundry for building models on customers' own data, for better control and customization.
Omniverse for composing multimodal data for more precise control of generative AI.
Creation of digital characters for interacting with AI and improving the user experience.
The importance of software development to NVIDIA's future and to achieving accelerated applications.
Introduction of domain-specific libraries (DSLs) for generative AI and other applications.
The energy-efficiency challenges of developing generative AI.
Perspectives on the future of data centers with respect to energy consumption and siting.
The role of generative AI in increasing productivity and promoting energy efficiency.
Transcripts
Jensen Huang: We've accelerated deep learning by a million times, which is the reason it's now possible for us to create these large language models. A million-times speedup, a million-times reduction in cost and energy, is what made it possible for us to make general generative AI possible. And so I made a cartoon for you, I made a cartoon for you of our journey.

Interviewer: Did you make it, or did generative AI?

Jensen Huang: I had it made. I had it made; that's what CEOs do, we don't do anything, we just have it be done. And so this cartoon here is really terrific. These are some of the most important moments in the computer industry: the IBM System/360, of course the invention of modern computing; the Utah teapot in 1975; ray tracing in 1979; programmable shading in 1986. Most of the animated movies we see today wouldn't be possible if not for programmable shading, originally done on the Cray supercomputer. And then in 1993 Nvidia was founded; Chris [Malachowsky], Curtis [Priem], and I founded the company. In 1995 the Windows PC revolutionized the personal computer industry and put a personal computer in every home and on every desk. In 2001 we invented the first programmable shading GPU, and that really drove the vast majority of Nvidia's journey up to that point. But in the background of everything we were doing was accelerated computing, so that you can solve problems that normal computers can't. The application we chose first was computer graphics, and it was probably one of the best decisions we ever made, because computer graphics was insanely computationally intensive, and has remained so for the entire 31 years that Nvidia has been here. It was also incredibly high volume, because we applied computer graphics to an application that at the time wasn't mainstream: 3D video games. The combination of very large volume and a very complicated computing problem led to a very large R&D budget for us, which drove the flywheel of our company.
One day in 2012 we made our first contact, you know, Star Trek first contact, with artificial intelligence. That first contact was AlexNet, in 2012. A very big moment. We made the observation that AlexNet was an incredible breakthrough in computer vision, but that at the core of it was a new way of writing software: instead of engineers, given an input, imagining what the output was going to be (writing algorithms), we now have a computer that, given an input and example outputs, figures out what the program in the middle is. That observation, and the realization that we could use this technique to solve a whole bunch of problems that previously weren't solvable, was a great observation, and we changed everything in our company to pursue it, from the processor to the systems to the software stack to all the algorithms. Nvidia basic research pivoted toward working on deep learning. And so in 2016 we introduced the first computer we built for deep learning, and we called it DGX-1. I delivered the first DGX-1 outside of our company; I had built it for Nvidia to build models for self-driving cars and robotics and such, and generative AI for graphics. Somebody saw an example of DGX-1, and Elon reached out to me and said, hey, I would love to have one of those for a startup company we're starting. And so I delivered the first one to a company that at the time nobody knew about, called OpenAI. That was 2016. 2017 was the Transformer, which revolutionized modern machine learning, modern deep learning. In 2018, right here at SIGGRAPH, we announced RTX, the world's first real-time interactive ray-tracing platform. We call it RTX. It was such a big deal that we changed the name of GTX, which everybody referred to our graphics cards as, to RTX.
Interviewer: You mentioned during your SIGGRAPH keynote last year that RTX, ray tracing, was one of the big important moments when computer graphics met AI.

Jensen Huang: That's right, but that had actually been happening for a while. What was so important about RTX: in 2018 we made it possible to use a parallel processor to accelerate ray tracing. But even then we were ray tracing at about five frames per second, depending on how many rays we're talking about tracing, and we were doing it at 1080 resolution. Obviously video games need a lot more than that; obviously real-time graphics need more than that. This crowd definitely knows what that means, but for the folks watching online: the rendering process used to take a really long time when you were making something; it used to take a Cray supercomputer to render just a few pixels. Now we had RTX to accelerate ray tracing, and it was interactive, but it wasn't fast enough to be a video game. So we realized we needed a big boost, probably something along the lines of a 20x, maybe 50x, boost, and the team invented DLSS, which basically renders one pixel while it uses AI to infer a whole bunch of other pixels. We basically taught an AI that is conditioned on what it saw and then fills in the dots for everything else. And now we're able to render fully ray-traced, fully path-traced simulations at 4K resolution at 300 frames per second, made possible by AI.
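The trick Huang describes, rendering only a fraction of the pixels and letting a model infer the rest, can be sketched in a few lines. This is purely illustrative: simple linear interpolation stands in for DLSS's trained neural network (which is also conditioned on motion vectors and frame history), and the function names are invented for the sketch.

```python
# Toy illustration of the DLSS idea: pay the expensive rendering cost
# for only some pixels, then cheaply "infer" the missing ones from
# what was rendered. Linear interpolation stands in for the network.

def render_low_res(width):
    # Pretend this is the expensive path-traced render: one value per pixel.
    return [x * x for x in range(width)]

def upscale_2x(row):
    # Cheap inference step: synthesize the in-between pixels.
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) / 2])   # rendered pixel, then inferred pixel
    out.append(row[-1])
    return out

low = render_low_res(4)    # 4 expensive pixels
high = upscale_2x(low)     # 7 pixels for the cost of 4
print(high)
```

A real super-sampling pipeline does this in two dimensions with a learned model, which is why the inferred pixels can preserve detail that interpolation would blur.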
So 2018 came along, and then 2022: as we all know, ChatGPT came out, the fastest-growing service in history. Just about every industry is going to be affected by this, whether it's scientific computing trying to do a better job predicting the weather with a lot less energy, or, very importantly, robotics. Self-driving cars are all going to be transformed by generative AI.

Interviewer: I've gotten the sense from talking to you recently that you are optimistic these generative AI tools will become more controllable and more accurate. We all know there are issues with hallucinations and low-quality outputs; people are using these tools and maybe not getting exactly the output they're hoping for. Meanwhile, they're using a lot of energy, which we're going to talk about. Why are you so optimistic? What do you think is pointing us in the direction of this generative AI actually becoming that much more useful and controllable?
Jensen Huang: The big breakthrough of ChatGPT was reinforcement learning from human feedback, which was a way of using humans to align the AI to our core values, or to the skills we would like it to perform. Other breakthroughs have arrived since then. Guardrailing causes the AI to focus its energy, to focus its response, in a particular domain, so that it doesn't wander off and pontificate about all kinds of stuff you ask it about; it only focuses on the things it's been trained to do, aligned to perform, and has deep knowledge in. The third breakthrough is called retrieval-augmented generation, which basically is data that has been embedded so that we understand the meaning of that data. It's a more authoritative data set that goes beyond just the training data set. For example, it might be all of the articles you've ever written, all of the papers you've ever written, and it could essentially be a chatbot of you. Everything I've ever written or ever said could be vectorized and turned into a semantic database, and before the AI responds, it would search for the appropriate content in that vector database and augment it in its generative process.

Interviewer: And you think that is one of the most important factors?

Jensen Huang: These three in combination really made it possible for us to do that with text. Now the really cool thing is that we are starting to figure out how to do that with visuals.
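The retrieval step Huang describes, vectorize the content, store it in a semantic database, search it before the model answers, can be sketched as follows. A toy bag-of-words embedding stands in for a real neural embedding model, and the corpus, function names, and documents are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. A real RAG system would use
    # a neural embedding model here; this just makes the pipeline runnable.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": every document embedded ahead of time.
corpus = [
    "Nvidia was founded in 1993.",
    "DLSS renders one pixel and infers the neighboring pixels with AI.",
    "Blackwell accelerates applications at constant energy and cost.",
]
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query, k=1):
    # Search the database for the passages most similar to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def augmented_prompt(query):
    # Augment the generative step: prepend retrieved context to the
    # prompt before handing it to the language model (not shown here).
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(retrieve("When was Nvidia founded?"))
```

The design point is that retrieval happens before generation: the model's answer is conditioned on the authoritative passages rather than only on its training data.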
If you look at today's generative AI, in this particular case, this is an Edify model that Nvidia created. It's a text-to-2D foundation model, and it's multimodal. We partnered with Getty to use their library of data to train the AI model. So this is text to 2D image.

Interviewer: And you also created this slide personally, right?

Jensen Huang: I personally had this slide created. So here's a prompt, and this could be a prompt for somebody who owns a brand, in this case Coca-Cola; it could be a car, it could be a luxury product, it could be anything. You use the prompt and generate the image. However, it's hard to control this prompt, and it may hallucinate; it may create the image in such a way that it's not exactly what you want, and fine-tuning it using words is really hard because words are very imprecise. So the ability to control that image is difficult. We've created a way that allows us to control and align it with more conditioning. The way you do that is we create another model, Edify 3D, one of our foundation models. We've created this AI Foundry where partners can come and work with us and we create the model for them with their data: we invent the model, they bring their data, and we create a model that they can take with them.

Interviewer: Is it their data only?

Jensen Huang: It uses their data. So this one uses all of the data available on Shutterstock that they have the rights to use for training. So we now use the prompt to generate 3D. We put that in a place where you can compose data and content from a lot of different modalities: it could be 3D, it could be AI, it could be animation, it could be materials. We use Omniverse to compose all of this multimodal data, and now you can control it. You can change the pose, you can change the placement, you can change whatever you like. Then you take what comes out of Omniverse and augment it with the prompt. It's a little bit like retrieval-augmented generation; this is now 3D-augmented generation. The Edify model is multimodal, so it understands the image, understands the prompt, and uses them in combination to create a new image. So now this is a controlled image: we can generate images exactly the way we like. Just now I showed you Omniverse-augmented generation for images. This is a RAG, a retrieval-augmented generative AI, and we've created this digital-human front end, basically the I/O of an AI, that has the ability to speak, make eye contact with you, and animate in an empathetic way. You can decide to connect your ChatGPT, or your AI, to the digital human, or you can connect your digital human to our retrieval-augmented generation. This breakthrough is really quite incredible, and it makes it possible ("Amazing graphics researchers, welcome to SIGGRAPH 2024," the demo says) to animate using an AI. You chat with the AI; it generates text; that text is then translated to sound, text to speech; the sound then animates the face; and then RTX path tracing renders the digital human.
Interviewer: What I hear you talking a lot about today are software developments. They're relying on your GPUs, but ultimately this is software; this is Nvidia going further up the stack. Meanwhile there are some companies, some folks in the generative AI space who are in software and cloud services, looking to go further down the stack: they might be developing their own chips, or TPUs, that are competitive with what you're doing. How crucial is this software strategy to Nvidia maintaining its lead and actually fulfilling some of these promises of growth that people are looking at for Nvidia right now?

Jensen Huang: We've always been a software company, even software first. The reason is that accelerated computing is not general-purpose computing. General-purpose computing can take any program, Python say, and just run it, and almost everybody's program can be compiled to run effectively. Unfortunately, when you want to accelerate fluid dynamics, you have to understand the algorithms of fluid dynamics so that you can refactor them in such a way that they can be accelerated, and you have to design an accelerator, you have to design the CUDA GPU, so that it understands the algorithms and can do a good job accelerating them. The benefit, of course, is that by redesigning the whole stack we can accelerate applications 20, 40, 50, 100 times over general-purpose computing. In the case of deep learning, over the course of the last 10 to 12 years we've accelerated it by a million times, which is the reason it's now possible to create these large language models; a million-times speedup, a million-times reduction in cost and energy, is what made general generative AI possible. But that meant designing a new processor, a new system: Tensor Core GPUs, the NVLink switch fabric, which is completely groundbreaking for AI. And if you don't understand the algorithms and the applications above it, it's really hard to figure out how to design that whole stack.
Interviewer: What is the most important part of Nvidia's software ecosystem for Nvidia's future?

Jensen Huang: It takes a new library; we call them DSLs, domain-specific libraries. In generative AI that DSL is called cuDNN; for SQL processing of data frames it's called cuDF; we've got a whole bunch of these "cu" libraries. Every time we introduce a domain-specific library, it exposes accelerated computing to a new market. But notice: every single time we want to open up a new market, like cuDF for data processing. Data processing is probably a third of the world's computing; every company does data processing, and most companies' data is in data frames, in tabular format. Creating an acceleration library for tabular formats was insanely hard, because what's inside those tables could be floating-point numbers, 64-bit integers, letters, all kinds of stuff, and we had to figure out a way to compute all of that. Every single time we open up a new market, it requires us to reinvent everything about that kind of computing. That's the reason we're working on robotics; that's the reason we're working on autonomous vehicles: to understand the algorithms necessary to open up that market, and to understand the computing layer underneath it, so that we can deliver extraordinary results. There's nothing easy about it.
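The difficulty Huang describes, tables mixing 64-bit integers, floating-point numbers, and letters, is exactly what columnar layouts address. Here is a plain-Python sketch (standing in for what a library like cuDF does on the GPU, with illustrative column names and values): each column is homogeneous, so an operation can run as one type-specialized pass instead of inspecting every cell.

```python
# Columnar layout: each column is a homogeneous array, which is what
# lets an acceleration library dispatch one type-specialized kernel
# per column instead of type-checking every cell of a row-oriented table.
from array import array

table = {
    "price":    array("d", [9.99, 14.50, 3.25]),   # float64 column
    "quantity": array("q", [3, 1, 12]),            # int64 column
    "sku":      ["A-17", "B-02", "C-88"],          # string column
}

# A "kernel" over two homogeneous columns: revenue = price * quantity.
revenue = [round(p * q, 2) for p, q in zip(table["price"], table["quantity"])]
print(revenue)
```

On a GPU the same shape of computation runs as one parallel pass over contiguous memory, which is where the 20x-plus speedups for tabular workloads come from.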
Interviewer: Generative AI takes up a lot of energy.

Jensen Huang: I'm just saying my job's super hard.

Interviewer: Yeah. Let's talk about energy. Generative AI is incredibly energy intensive. I am going to read from my note cards here. According to some research, a single ChatGPT query takes nearly ten times the electricity of processing a single Google search. Data centers consume 1 to 2% of overall worldwide energy, but some say it could be as much as 3 to 4%, some say as much as 6%, by the end of the decade. Data center workloads tripled between 2015 and 2019, and that was only 2019; generative AI is taking up a large portion of all of that. Is there going to be enough energy to fulfill the demand of what you want to build and do?
Jensen Huang: Yes. A couple of observations. First, there were three or four model makers pushing the frontier a couple of years ago; there are probably three times that many this year pushing the frontiers of models. The size of the models is, call it, twice as large every year, and in order to train a model that's twice as large you need more than twice as much data. So the computational load is growing, call it a factor of four each year, just by simple thinking. That's one of the reasons Blackwell is so highly anticipated: we accelerated the application so much using the same amount of energy. This is an example of accelerating applications at constant energy and constant cost, making it cheaper and cheaper. Now, the important thing is, I've only highlighted ten companies; the world has tons of companies. Nvidia is selling GPUs to a whole lot of companies and a whole lot of different data centers. So the question is, what's happening at the core? The first thing that's actually happening is the end of CPU scaling and the beginning of accelerated computing. Text completion, speech recognition, recommender systems used in data centers all over the world: everyone is moving from CPUs to accelerated computing, because they want to save energy. Accelerated computing helps you save so much energy, 20 times, 50 times, doing the same processing. Generative AI is probably consuming, let's pick a very large number, probably 1% or so of the world's energy. But remember, even if data centers consume 4% of the world's energy, the goal of generative AI is not training; the goal of generative AI is inference. And with inference, ideally, we create new models for predicting weather, predicting new materials, models that allow us to optimize our supply chains and reduce the energy consumed and the gasoline wasted as we deliver products. So the goal is actually to reduce the energy consumed by the other 96%.
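The arithmetic behind "a factor of four each year" is worth making explicit: if model size doubles each year and the data needed doubles with it, training compute, roughly parameters times tokens, quadruples. A quick sketch with illustrative numbers (these are not Nvidia's projections):

```python
# Relative scaling sketch: model size and data each double per year,
# so training compute (~ parameters x tokens) grows ~4x per year.
params = 1.0   # relative model size, year 0
tokens = 1.0   # relative training-set size, year 0
for year in range(1, 4):
    params *= 2
    tokens *= 2
    compute = params * tokens
    print(f"year {year}: ~{compute:.0f}x the year-0 training compute")
```

Three years of this compounding gives roughly 64x the original training compute, which is why per-chip efficiency gains like Blackwell's matter so much at constant energy budgets.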
The next thing I'll say about generative AI: remember, the traditional way of doing computing is retrieval-based computing. Everything is pre-recorded. All the stories are written and pre-recorded, all the images are pre-recorded, all the videos are pre-recorded; everything is stored off in a data center somewhere. Generative AI reduces the amount of energy necessary to go to a data center over the network, retrieve something, and bring it back over the network. Don't forget, 60% of the energy on the internet is consumed moving the electrons around, moving the bits and bytes around. So generative AI is going to reduce the amount of energy spent on the internet, because instead of having to go retrieve the information, we can generate it right there on the spot: we understand the context, we probably have some content on the device, and we can generate the response so that you don't have to go retrieve it.
AI doesn't care where it goes to school. Today's data centers are built near the power grid, where society is, of course, because that's where we need them. In the future you're going to see data centers being built in different parts of the world where there's excess energy, energy that simply costs a lot of money to bring to society. Maybe it's in a desert; maybe it's in places that have a lot of sustainable energy. We can put data centers where there's less population and more energy. There's a lot of energy in the world, and what we need to do is move data centers out closer to where there's excess energy, not put everything near population. AI doesn't care where it's trained.

Interviewer: I'd never heard that phrase before, "AI doesn't care where it goes to school." That's interesting. I'm going to think on that.

Jensen Huang: Generative AI is going to increase productivity; it's going to enable us to discover new science, make things more energy efficient, so that accelerated comp... the lights just came on! Because we were talking about energy, and all of a sudden it's like the Earth went, okay.

Interviewer: Jensen, thank you so much. I think we're probably going to get kicked off stage soon. Thank you, everybody. We'll be right back.