Nvidia CEO Jensen Huang talks blowout quarter, AI, inferencing, ongoing demand, and more
Summary
TLDR: Nvidia founder and CEO Jensen Huang joins tech editor Dan Howley on Yahoo Finance's Market Domination. Nvidia reported a strong fiscal first quarter, with data center revenue up 427% year-over-year, showing that AI investment momentum continues. Blackwell, the next-generation chip, is set to ship this year, and its revenue contribution is expected to be significant this year as well. The company also announced a 10-for-1 forward stock split and a dividend increase. Blackwell is designed for trillion-parameter AI models and drives data center innovation. Nvidia's platform, built as an AI factory, is a highly complex system that is hard to manufacture at high volume, so supply is likely to remain constrained. Automotive has also become a major data center vertical, with AI playing a key role in self-driving, led by Tesla.
Takeaways
- 🚀 Nvidia reported a strong fiscal first quarter that far exceeded expectations, with data center revenue up 427% year-over-year.
- 📈 AI spending momentum continues, and Nvidia issued another bullish sales forecast.
- 🔄 Nvidia announced a 10-for-1 forward stock split and a dividend increase.
- 💡 Blackwell is Nvidia's next-generation chip; it ships this year, and Blackwell revenue is expected to contribute significantly this year.
- 🌟 Blackwell is designed for trillion-parameter AI models, accounting for the fact that the amount of processing grows roughly four times with every doubling of model size.
- 🛠️ Blackwell is built for generative AI, handling the very complex, high-performance workloads needed to generate information.
- 🔄 Inference used to be simple, but with the arrival of generative AI it has become far more complex. The versatility of Nvidia's architecture lets people keep innovating and creating new AI.
- 📊 Nvidia holds a leading position in the inference market; the vast majority of inference done in data centers and on the web worldwide runs on Nvidia.
- 🤖 In self-driving cars, Tesla is the furthest ahead, and every automaker is using AI in the data center to enhance a range of capabilities.
- 🌐 Nvidia builds extremely complex systems it calls AI factories, then disaggregates them so partners can deploy them in data centers of any kind.
- 📈 Cloud providers currently account for around the mid-40% of data center revenue, but as other industries move into AI, both are expected to grow.
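The scaling trend in the takeaways can be sketched numerically. This is a minimal illustration, assuming the figures Huang cites in the interview (models around 2 trillion parameters, sizes doubling roughly every six months, compute growing about 4x per doubling); the one-year horizon and the helper names are illustrative, not Nvidia's.

```python
def project_params(start_params: float, months: int,
                   doubling_months: int = 6) -> float:
    """Model size after `months`, doubling every `doubling_months`."""
    return start_params * 2 ** (months / doubling_months)


def project_compute(start_compute: float, months: int,
                    doubling_months: int = 6,
                    compute_factor: float = 4.0) -> float:
    """Compute grows ~4x for every doubling of model size (per the interview)."""
    return start_compute * compute_factor ** (months / doubling_months)


params_now = 2e12  # ~2 trillion parameters, the figure cited in the interview

# Two doublings in a year: 2T -> 8T parameters, compute demand up ~16x.
assert project_params(params_now, 12) == 8e12
assert project_compute(1.0, 12) == 16.0
```

Under these assumptions, compute demand grows roughly as the square of model size over time, which is the arithmetic behind the claim that data centers struggle to keep up with model growth.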
Q & A
How much growth is Nvidia seeing?
-Nvidia far exceeded analyst expectations in a strong fiscal first quarter. Data center revenue grew 427% year-over-year.
What is Blackwell, the next-generation chip, and how is it expected to affect revenue this year?
-Blackwell is designed to handle trillion-parameter AI models, is expected to ship this year, and is a more expensive product than Hopper. Revenue is therefore expected to rise significantly this year.
What are the key features of the Blackwell chip?
-Blackwell is designed for trillion-parameter AI models and performs extremely fast inference. It also adapts well to data centers, supporting air cooling, liquid cooling, and x86 or the newly designed Grace Blackwell superchip.
How does Nvidia plan to maintain its competitive advantage in the inference market?
-Nvidia's strategy rests on the fact that inference is a very complex problem, and on the complexity of software stacks that use a wide variety of models. The arrival of Blackwell is part of that strategy.
How will the supply shortage be addressed?
-Demand for Blackwell and Hopper chips is very high, and supply is constrained. Nvidia builds AI factories and delivers them through partners to address this.
How do cloud providers and other industries contribute to Nvidia's data center revenue?
-Cloud providers account for around the mid-40% of data center revenue, but other industries are starting to adopt AI, and both are expected to grow going forward.
How does Nvidia build AI factories?
-Nvidia combines CPUs, GPUs, and sophisticated memory, connected by NVLink, InfiniBand switches, and Ethernet switches. These are tied together by a very complex spine called NVLink and managed by software.
What role does Nvidia's data center business play in the automotive industry?
-Automotive has become the largest enterprise vertical within the data center business. Nvidia provides the AI technology to process the enormous amounts of video data needed to train self-driving cars.
What kinds of data centers can Blackwell be deployed in?
-Blackwell adapts to a wide range of data centers: air cooled, liquid cooled, x86, the Grace Blackwell superchip, InfiniBand data centers, and Ethernet data centers.
What market needs does Nvidia plan to address going forward?
-Nvidia will continue to meet demand across diverse areas of AI, focusing in particular on a new type of AI that understands the physical world and learns from video.
How does Nvidia plan to advance its AI technology?
-Nvidia is developing AI that understands the physical world by learning from video and plans to expand that capability across many industries, while also strengthening video-based training capabilities.
Outlines
🚀 Nvidia's financial results and the introduction of the Blackwell chip
An interview by Yahoo Finance's Julie Hyman and tech editor Dan Howley. Nvidia announced strong fiscal first-quarter results that far exceeded expectations; data center revenue grew 427% year-over-year, showing that AI spending is gaining momentum. The company also announced a 10-for-1 forward stock split and a dividend increase. Nvidia founder and CEO Jensen Huang joined fresh off the conference call and discussed Blackwell, the next-generation chip shipping this year. Blackwell is designed for trillion-parameter AI models and drives data center innovation. It also incorporates new inference technology built for generative AI, which Huang says is essential for generating information.
📈 Nvidia's supply constraints and building AI factories
Nvidia indicated that demand for Hopper and Blackwell chips is so high that supply will be constrained into next year. Blackwell is sold as an AI factory, a highly complex system of CPUs, GPUs, and sophisticated memory. It is connected by a network of NVLink, InfiniBand switches, and Ethernet switches, and requires a great deal of software. Nvidia builds the AI factory as one holistic unit, then disaggregates it so cloud providers and partners can integrate it into data centers of any kind. As a result, many industries are beginning to adopt AI, and Nvidia's chips are sought not only by cloud providers but also by consumer internet service providers, automakers, and others.
🚗 Nvidia's role in the automotive industry
Self-driving cars use Nvidia's technology to train effective models by learning directly from video. Training used to rely on manually labeled images, but now video is fed straight in and the car learns to recognize things itself. This requires enormous training facilities, because the data rate and volume of video are so high. The same approach of learning about the physical world from video is the most effective way to ground AI, and it uses essentially the same technology as large language models. This has made automotive the largest enterprise vertical within Nvidia's data center business, and the same technology is essential to the next generation of AI that understands the physical world.
Keywords
💡Nvidia
💡Blackwell
💡Data center
💡AI model
💡Inferencing
💡Generative AI
💡Stock split
💡Dividend
💡Supply constrained
💡Cloud provider
Highlights
Nvidia exceeded analyst expectations in its fiscal first quarter with data center revenue soaring by 427% year-over-year.
The company provided a bullish sales forecast indicating continued AI spending momentum.
Nvidia announced a 10-for-1 forward stock split and an increase in its dividend.
Blackwell, Nvidia's next-generation chip, is set to ship this year with significant expected revenue contributions.
Blackwell is designed for trillion parameter AI models, addressing the rapid growth in model sizes.
Inference technology has evolved from recognition to generation of information with generative AI.
Blackwell supports various cooling methods and processor architectures for flexible data center deployment.
Nvidia's architecture offers a competitive advantage in the shift towards inference in the AI market.
Nvidia faces supply constraints for both Hopper and Blackwell chips due to high demand until next year.
Nvidia builds AI factories, which are complex systems with CPUs, GPUs, and sophisticated memory.
Cloud providers currently account for mid-40% of data center revenue, but other industries are expected to grow as well.
Meta's investment in large language models and generative AI work is highlighted as particularly significant.
Elon Musk's infrastructure and Tesla's full self-driving technology using generative AI are discussed.
Startup company Recursion uses Nvidia's technology for drug discovery through molecule generation.
Nvidia's technology is being deployed across various industries for tasks like understanding and generating content.
Automotive is now the largest enterprise vertical within Nvidia's data center business.
Tesla is leading in self-driving cars, but all car manufacturers are expected to adopt autonomous capabilities.
Nvidia's technology is used for training AI models with video, which is a more effective approach than labeled images.
The next generation of AI requires grounding in physical AI to understand the world through video training.
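The 10-for-1 forward split highlighted above is simple arithmetic: share count is multiplied by ten, the per-share price is divided by ten, and the total position value is unchanged. A minimal sketch; the share counts and the $1,000 price here are illustrative numbers, not Nvidia's actual quote.

```python
def forward_split(shares: int, price: float, ratio: int = 10) -> tuple[int, float]:
    """Apply a ratio-for-1 forward stock split to a position.

    Shares are multiplied by `ratio`; the price per share is divided
    by `ratio`, so total market value of the position is unchanged.
    """
    return shares * ratio, price / ratio


# 5 shares at an illustrative $1,000 become 50 shares at $100.
shares, price = forward_split(5, 1000.0)
assert (shares, price) == (50, 100.0)
assert shares * price == 5 * 1000.0  # position value unchanged
```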
Transcripts
I'm Julie Hyman host of Yahoo Finance's
Market domination here with our Tech
editor Dan Howley Nvidia has done it
again the chip giant blowing past
analyst expectations in its strong
fiscal first quarter data center Revenue
alone soaring by 427% year-over-year
and the company also gave another
bullish sales forecast which shows that
AI spending momentum continues apace on
top of all that the company also
announced a 10 for one forward stock
split and raised its dividend joining us now
Nvidia founder and CEO Jensen Huang fresh
off the conference call Jensen welcome
thank you so much for being with
us I'm happy to be here nice to see you
guys you too I want to start uh with
Blackwell which is your next Generation
chip it's shipping this year you said on
the call you also said on the call we
will see a lot of Blackwell Revenue this
year so if we're looking at about $28
billion in Revenue in the current
quarter and Blackwell is a more
expensive product than Hopper the chip
series out now what does that imply
about Revenue in the fourth quarter and
for the full
year well it should be significant yeah
Blackwell Blackwell and and as you know
we guide one quarter at a time and but
what I what I could tell you about about
Blackwell is this this is this is um a
giant leap in in um uh in Ai and it was
designed for trillion parameter
AI models and this is as you know we're
already at two trillion parameters uh
models sizes are growing about doubling
every six months and the amount of
processing uh between the size of the
model the amount of data is growing four
times and so the ability for uh these
data centers to keep up with these large
models really depends on the technology
that we bring bring to them and so the
Blackwell is is designed uh also for
incredibly fast inferencing and
inference used to be about recognition
of things but now inferencing as you
know is about generation of information
generative Ai and so whenever you're
talking to chat GPT and it's generating
information for you or drawing a picture
for you or recognizing something and
then drawing something for you that
generation is a brand new uh inferencing
technology is really really complicated
and requires a lot of performance and so
Blackwell is designed for large models
for generative a I and we designed it to
fit into any data center and so it's air
cooled liquid cooled x86 or this new
revolutionary processor we designed
called Grace the Grace Blackwell super chip
and then um uh you know supports uh
InfiniBand data centers like we used
to but we also now support a brand new
type of data center ethernet we're going
to bring AI to ethernet data centers so
the number of ways that you could deploy
Blackwell is way way higher than than
Hopper generation so I'm excited about
that I I I want to talk about the the
inferencing Jensen you know some
analysts have brought up the idea that
as we move over towards inferencing from
the the training that there may be some
inhouse companies uh uh processors from
companies that those made from Microsoft
Google Amazon maybe more suited for the
actual inferencing I guess how does that
impact Nvidia
then well inferencing used to be easy
you know when people started talking
about inference uh generative AI didn't
exist and now generative AI is is uh uh
of course is about prediction but it's
about prediction of the next token or
prediction of the next pixel or
prediction of the next frame and all of
that is complicated and and generative
AI is also used for um understanding the
cont in order to generate the content
properly you have to understand the
context and what what is called memory
and so now the memory size is incredibly
large and you have to have uh context
memory you have to be able to generate
the next token really really fast it
takes a whole lot of tokens to make an
image takes a ton of tokens to make a
video and takes a lot of tokens to be
able to uh reason about a particular
task so that it can make a plan and so
gener the the the gener generative AI um
era really made inference a million
times more complicated and as you know
the number of chips that were intended
for inference uh kind of kind of fell by
the wayside and now people are talking
talk about building new Chips you know
the versatility of Nvidia's
architecture makes it possible for
people to continue to innovate and
create these amazing new Ai and then now
Blackwell's coming so in other words
you think you still have a competitive
Advantage even as the market sort of
shifts to
inferencing we have a great position in
inference because inference is just a
really complicated problem you know and
the software stack is complicated the
type of models that people use is
complicated there's so many different
types it's just going to be a giant
market market opportunity for us the
vast majority of the world's inferencing
today as as people are experiencing in
their data centers and on the web vast
majority of the inferencing today is
done on Nvidia and so we we I expect
that to continue um you said on the call
a couple of times that you'll be Supply
constrained for both Hopper and then
Blackwell uh chips well until next year
because of the vast demand that's out
there um what can you do about that are
there any sort of levers you can pull to
help increase
Supply Hopper demand grew throughout
this
quarter after we announced
Blackwell and so that kind of tells you
how much demand there is out there
people want to deploy these data centers
right now they want to put our gpus to
work right now and start making money
and start saving money and so so that
that demand is just so strong um you
know it's really important to take a
step back and realize that that what we
build is not a GPU chip we call it
Blackwell and we call it GPU but we're
really building AI factories these AI
factories have CPUs and gpus and really
complicated memory the systems are
really complicated it's connected by
NVLink there's an NVLink switch there's
InfiniBand switches InfiniBand NICs and
then now we have ethernet switches and
ethernet NICs and all of this connected
together with this incredibly
complicated spine called NVLink and
then the amount of software that it
takes to build all this and run all this
is incredible and so these AI factories
are essentially what we build we build
it as a as a holistic unit as a holistic
architecture and platform but then we
disaggregate it so that our partners
could take it and put it into Data
Centers of any kind and every single
cloud has slightly different
architectures and different stacks and
our our stacks and our architecture can
now deeply integrated into theirs but
everybody's a little different so we
build it as an AI Factory we then
disaggregated so that everybody can have
ai factories this is just an incredible
thing and we do this at very hard very
high volume it's just very very hard to
do and so every every component every
every part of our data center uh is the
most complex computer the world's ever
made and so it's sensible that almost
everything is
constrained Jess I want to ask about the
uh Cloud providers versus the the other
industries that you said are are getting
into the the gener AI game or or getting
Nvidia chips you had mentioned that uh
in uh comments in the actual release
that we heard from uh CFO Colette Kress uh
that 40% mid 40% of data center Revenue
comes from those Cloud providers as we
start to see these other Industries open
up what does what does that mean for
NVIDIA well will the cloud providers
kind
of uh shrink I guess their share and
then will these other Industries pick up
where those Cloud providers
were I expect I expect them both to grow
uh a couple of different areas of course
uh the consumer internet service
providers this last quarter of course a
big stories from meta the uh the
incredible scale that that um Mark is
investing in uh llama 2 was a
breakthrough llama 3 was even more
amazing they're creating models that
that are that are activating uh large
language model and generative AI work
all over the world and so so the work
that meta is doing is really really
important uh you also saw uh uh Elon
talking about uh the incredible
infrastructure that he's building and
and um one of the things that's that's
really revolutionary about about the the
version 12 of of uh Tesla's uh full
self-driving is that it's an end-to-end
generative model and it learns from
watching video surround video and it it
learns about how to drive uh end to end
and using generative AI uh uh
predict the next path and and
uh how to understand and
how to steer the car and so the the
technology is really revolutionary and
the work that they're doing is
incredible so I gave you two examples a
a startup company that we work with
called Recursion has built a
supercomputer for generating molecules
understanding proteins and generating
molecule molecules for drug Discovery uh
the list goes on I mean we can go on all
afternoon and and just so many different
areas of people who are who are now
recognizing that we now have a software
and AI model that can understand and be
learn learn almost any language the
language of English of course but the
language of images and video and
chemicals and protein and even physics
and to be able to generate almost
anything and so it's basically like
machine translation and uh that
capability is now being deployed at
scale in so many different Industries
Jensen just one more quick last question
I'm glad you talked about um the auto
business and and what you're seeing
there you mentioned that Automotive is
now the largest vertical Enterprise
vertical Within data center you talked
about the Tesla business but what is
that all about is it is it self-driving
among other automakers too are there
other functions that automakers are
using um within data center help us
understand that a little bit better well
Tesla is far ahead in self-driving cars
um but every single car someday will
have to have autonomous capability it's
it's safer it's more convenient it's
more more fun to drive and in order to
do that uh
it is now very well known very well
understood that learning from video
directly is the most effective way to
train these models we used to train
based on images that are labeled we
would say this is a this is a car you
know this is a car this is a sign this
is a road and we would label that
manually it's incredible and now we just
put video right into the car and let the
car figure it out by itself and and this
technology is very similar to the
technology of large language models but
it requires just an enormous training
facility and the reason for that is
because there's videos the data rate of
video the amount of data of video is so
so high well the the same approach
that's used for learning physics the
physical world um from videos that is
used for self-driving cars is
essentially the same um AI technology
used for grounding large language models
to understand the world of physics uh so
technologies that are uh like Sora which
is just incredible um uh and other
Technologies Veo from uh Google
incredible the ability to generate video
that makes sense that are conditioned by
human prompt that needs to learn from
video and so the next generation of AIS
need to be grounded in physical AI needs
to be needs to understand the physical
world and the the best way to teach
these AIS how the physical world behaves
is through video just watching tons and
tons and tons of videos and so the the
combination of this multimodality
training capability is going to really
require a lot of computing demand in the
years to come Jensen as always super
cool stuff and great to be able to talk
to you Dan and I really appreciate it
Jensen Huang everybody founder and CEO of
Nvidia great to see you guys thank you