Nvidia CEO Jensen Huang talks blowout quarter, AI, inferencing, ongoing demand, and more

Yahoo Finance
22 May 2024 · 12:21

Summary

TL;DR: On Yahoo Finance's Market Domination, tech editor Dan Howley is joined by Nvidia founder and CEO Jensen Huang. Nvidia posted a strong fiscal first quarter, with data center revenue up 427% year-over-year, showing that momentum in AI investment continues. Blackwell, the next-generation chip, is slated to ship this year, and its revenue contribution is expected to become significant within the year. The company also announced a 10-for-1 forward stock split and a dividend increase. Blackwell is designed for trillion-parameter AI models and drives data center innovation. Nvidia's platform, built as an AI factory, is a complex system that is hard to manufacture at high volume, so supply constraints are likely to persist. Automotive has also become a major data center segment, with AI playing a key role in self-driving efforts led by Tesla.

Takeaways

  • 🚀 Nvidia reported a strong fiscal first quarter that far exceeded expectations, with data center revenue up 427% year-over-year.
  • 📈 AI spending momentum continues, and Nvidia issued another bullish sales forecast.
  • 🔄 Nvidia announced a 10-for-1 forward stock split and a dividend increase.
  • 💡 Blackwell, Nvidia's next-generation chip, ships this year, and Blackwell revenue is expected to contribute significantly within the year.
  • 🌟 Blackwell is designed for trillion-parameter AI models, accounting for model sizes that double about every six months while the processing required, between model size and data volume, grows about fourfold.
  • 🛠️ Blackwell is built for generative AI, handling the extremely complex, high-performance work required to generate information.
  • 🔄 Inference used to be simple, but with generative AI it has become vastly more complex; the versatility of Nvidia's architecture lets people keep innovating and creating new AI.
  • 📊 Nvidia holds a dominant position in inference: the vast majority of inferencing in data centers and on the web today runs on Nvidia.
  • 🤖 In self-driving, Tesla is furthest ahead, and every automaker is using AI in the data center to enhance a range of functions.
  • 🌐 Nvidia builds a highly complex system it calls an AI factory, then disaggregates it so partners can deploy it into data centers of all kinds.
  • 📈 Cloud providers currently account for the mid-40% range of data center revenue, but as other industries enter AI, both are expected to grow.
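The scaling figures in the takeaways (model sizes roughly doubling every six months, starting from today's two-trillion-parameter models) compound quickly, which is the pressure Blackwell is designed to absorb. A small sketch of that arithmetic, taking the quoted rates at face value:

```python
def projected_size(initial_params: float, months: int, doubling_months: int = 6) -> float:
    """Project model size assuming it doubles every `doubling_months` months."""
    return initial_params * 2 ** (months / doubling_months)

# Starting from the two-trillion-parameter models Huang cites:
start = 2e12
print(projected_size(start, 12))  # two doublings in a year: 8 trillion parameters
print(projected_size(start, 24))  # four doublings: 32 trillion parameters
```

At these rates a model grows 4x per year, and with processing demand growing even faster than model size, data center capacity has to compound at least as quickly.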

Q & A

  • How much growth is Nvidia seeing?

    -Nvidia far exceeded analyst expectations in a strong fiscal first quarter, with data center revenue up 427% year-over-year.

  • What is Blackwell, the next-generation chip, and how is it expected to affect this year's revenue?

    -Blackwell is designed to handle trillion-parameter AI models and is expected to ship this year. It is a more expensive product than Hopper, so its contribution to this year's revenue is expected to be significant.

  • What are the Blackwell chip's features?

    -Blackwell is designed for trillion-parameter AI models and performs extremely fast inference. It also adapts well to data centers, supporting air cooling, liquid cooling, and x86 or the newly designed Grace Blackwell superchip.

  • How does Nvidia plan to maintain its competitive advantage in the inference market?

    -Nvidia's strategy rests on inference being a very hard problem, and on the complexity of software stacks that use a wide variety of models. The arrival of Blackwell is part of that strategy.

  • How will the supply shortage be handled?

    -Demand for the Blackwell and Hopper chips is extremely high, and supply is constrained. Nvidia plans to cope by building AI factories and delivering them through its partners.

  • How do cloud providers and other industries contribute to Nvidia's data center revenue?

    -Cloud providers account for around the mid-40% range of data center revenue, but other industries are also starting to put AI to work, and both are expected to grow.

  • How does Nvidia build its AI factories?

    -Nvidia combines CPUs, GPUs, and sophisticated memory with NVLink, InfiniBand switches, and Ethernet switches. Everything is tied together by a highly complex spine called NVLink and managed by a great deal of software.

  • What role do Nvidia's data centers play in the automotive industry?

    -Automotive has become the largest enterprise vertical within the data center business. Nvidia supplies the AI technology that processes the enormous volumes of video data needed to train self-driving cars.
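The answer above turns on self-driving training needing enormous volumes of video. A back-of-envelope estimate makes the data rates concrete (the bitrate and camera count below are illustrative assumptions, not figures from the interview):

```python
def video_bytes_per_hour(mbps: float) -> float:
    """Bytes of video produced in one hour at a given bitrate in megabits/second."""
    return mbps * 1e6 / 8 * 3600

# Assume a car with 8 surround cameras, each encoded at ~5 Mbit/s
# (illustrative numbers, not figures from the interview):
per_camera = video_bytes_per_hour(5)  # 2.25e9 bytes, i.e. 2.25 GB per camera-hour
per_car = 8 * per_camera              # 18 GB of training data per car per hour
print(per_camera / 1e9, per_car / 1e9)  # 2.25 18.0
```

Multiplied across a fleet of millions of cars, this is why video-trained driving models demand the large training facilities Huang describes.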

  • What kinds of data centers can Blackwell adapt to?

    -Blackwell adapts to a wide variety of data centers: air-cooled, liquid-cooled, x86, Grace Blackwell superchip, InfiniBand, and Ethernet.

  • What market needs does Nvidia plan to address going forward?

    -Nvidia will continue to serve demand across diverse areas of AI, focusing in particular on a new type of AI that understands the physical world by learning from video.

  • How does Nvidia plan to advance its AI technology?

    -Nvidia is developing AI that understands the physical world by learning from video and plans to expand that capability across many industries, while also strengthening its video-based training capacity.

Outlines

00:00

🚀 Nvidia's earnings report and the Blackwell chip

An interview by Yahoo Finance's Julie Hyman and tech editor Dan Howley. Nvidia reported strong fiscal first-quarter results that far exceeded expectations, with data center revenue up 427% year-over-year, showing AI spending gaining momentum. The company also announced a 10-for-1 forward stock split and a dividend increase. Nvidia founder and CEO Jensen Huang joined straight off the earnings call to discuss Blackwell, the next-generation chip shipping this year. Blackwell is designed for trillion-parameter AI models and drives data center innovation. It also incorporates new inference technology built for generative AI, which he says is essential for generating information.

05:00

📈 Nvidia's supply constraints and AI factory buildout

Nvidia said demand for the Hopper and Blackwell chips is so high that supply will be constrained into next year. Blackwell is sold as an AI factory, a highly complex system of CPUs, GPUs, and sophisticated memory, connected by a network of NVLink, InfiniBand switches, and Ethernet switches and requiring a great deal of software. Nvidia builds the AI factory as one holistic unit, then disaggregates it so cloud providers and partners can integrate it into data centers of any kind. As a result, many industries are adopting AI: not only cloud providers but also consumer internet service providers, automakers, and others are seeking Nvidia's chips.

10:01

🚗 Nvidia's role in the automotive industry

Self-driving cars use Nvidia's technology to train effective models by learning directly from video. Training used to rely on manually labeled images; now the dominant approach is to feed video straight into the car and let it figure things out itself. This requires enormous training facilities, because the data rate of video is so high. AI that understands the physical world learns most effectively from video, using essentially the same technology behind large language models. This has made automotive the largest enterprise vertical within Nvidia's data center business, and the same technology is essential to the next generation of AI that understands the physical world.

Keywords

💡Nvidia

Nvidia, the subject of the video, is a chip-making giant. The video covers results that beat expectations and rising data center revenue, along with the newly announced stock split and dividend increase.

💡Blackwell

Blackwell is Nvidia's next-generation chip; the video announces it will ship this year. It is designed for trillion-parameter AI models, and its revenue is expected to become significant within the year.

💡Data center

A data center is a facility for running AI models; the video suggests that the Blackwell chip Nvidia designed for data centers plays a key role in meeting that demand.

💡AI model

An AI model is a learned model built with artificial intelligence; the video explains that trillion-parameter AI models strongly shaped Blackwell's design.

💡Inferencing

Inference is the process by which an AI uses a trained model to make predictions or decisions. The video describes Nvidia optimizing inference with the new Blackwell chip to maintain its leadership in AI.
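The next-token character of generative inference described above can be sketched as a toy autoregressive loop (purely illustrative; this is not Nvidia's software stack, and `toy_model` is a made-up stand-in for a real predictor):

```python
def generate(prompt, predict_next, max_tokens=5):
    """Toy autoregressive loop: inference is repeated next-token prediction,
    each step conditioned on the full context generated so far."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)   # predict the next token from context
        if nxt is None:              # the "model" signals end of sequence
            break
        tokens.append(nxt)
    return tokens

# A trivial stand-in "model": repeat the last token with an extra mark,
# stopping once the sequence reaches five tokens.
def toy_model(context):
    return context[-1] + "'" if len(context) < 5 else None

print(generate(["a", "b"], toy_model))  # ['a', 'b', "b'", "b''", "b'''"]
```

Because every generated token, pixel, or frame requires another full pass conditioned on a growing context, this loop is where the interview's "a million times more complicated" claim about generative inference comes from.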

💡Generative AI

Generative AI is technology in which AI produces new information or content. The video shows Blackwell specializing in this area, where Nvidia expects growth.

💡Stock split

A stock split increases a company's share count and lowers the price per share, making the stock easier for investors to buy. The video covers Nvidia's announced 10-for-1 split and the positive market reaction it anticipates.
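The mechanics of a 10-for-1 forward split are simple proportional arithmetic: the share count multiplies by the ratio, the per-share price divides by it, and the value of a position is unchanged. A minimal sketch (the share count and price are made-up illustration values, not Nvidia's):

```python
def forward_split(shares: int, price: float, ratio: int = 10):
    """Apply an N-for-1 forward stock split: more shares, proportionally lower price."""
    return shares * ratio, price / ratio

# Hypothetical position: 100 shares at $1,000 each, through a 10-for-1 split.
shares, price = forward_split(100, 1000.0, ratio=10)
print(shares, price)                   # 1000 100.0
assert shares * price == 100 * 1000.0  # the position's total value is unchanged
```

Splits change nothing fundamental about the company; the appeal is simply a lower, more accessible per-share price.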

💡Dividend

A dividend distributes part of a company's profits to its shareholders. The video discusses Nvidia raising its dividend, an added draw for investors.

💡Supply constrained

Supply constrained describes demand for a product outstripping supply; in the video, Nvidia says demand for the Blackwell and Hopper chips is so high that supply is constrained.

💡Cloud provider

Cloud providers deliver computing services over the internet. The video notes that cloud providers make up a major share of Nvidia's data center revenue, and that other industries are beginning to enter AI as well.

Highlights

Nvidia exceeded analyst expectations in its fiscal first quarter with data center revenue soaring by 427% year-over-year.

The company provided a bullish sales forecast indicating continued AI spending momentum.

Nvidia announced a 10 for one forward stock split and an increase in its dividend.

Blackwell, Nvidia's next-generation chip, is set to ship this year with significant expected revenue contributions.

Blackwell is designed for trillion parameter AI models, addressing the rapid growth in model sizes.

Inference technology has evolved from recognition to generation of information with generative AI.

Blackwell supports various cooling methods and processor architectures for flexible data center deployment.

Nvidia's architecture offers a competitive advantage in the shift towards inference in the AI market.

Nvidia faces supply constraints for both Hopper and Blackwell chips due to high demand until next year.

Nvidia builds AI factories, which are complex systems with CPUs, GPUs, and sophisticated memory.

Cloud providers currently account for mid-40% of data center revenue, but other industries are expected to grow as well.

Meta's investment in large language models and generative AI work is highlighted as particularly significant.

Elon Musk's infrastructure buildout and Tesla's full self-driving technology using generative AI are discussed.

Startup company Recursion uses Nvidia's technology for drug discovery through molecule generation.

Nvidia's technology is being deployed across various industries for tasks like understanding and generating content.

Automotive is now the largest enterprise vertical within Nvidia's data center business.

Tesla is leading in self-driving cars, but all car manufacturers are expected to adopt autonomous capabilities.

Nvidia's technology is used for training AI models with video, which is a more effective approach than labeled images.

The next generation of AI requires grounding in physical AI to understand the world through video training.

Transcripts

00:00

I'm Julie Hyman, host of Yahoo Finance's Market Domination, here with our tech editor Dan Howley. Nvidia has done it again, the chip giant blowing past analyst expectations in its strong fiscal first quarter, data center revenue alone soaring by 427% year-over-year. The company also gave another bullish sales forecast, which shows that AI spending momentum continues apace. On top of all that, the company announced a 10-for-1 forward stock split and raised its dividend. Joining us now, Nvidia founder and CEO Jensen Huang, fresh off the conference call. Jensen, welcome. Thank you so much for being with us.

I'm happy to be here. Nice to see you guys.

00:42

You too. I want to start with Blackwell, which is your next-generation chip. It's shipping this year, you said on the call. You also said on the call that we will see a lot of Blackwell revenue this year. So if we're looking at about $28 billion in revenue in the current quarter, and Blackwell is a more expensive product than Hopper, the chip series out now, what does that imply about revenue in the fourth quarter and for the full year?

Well, it should be significant. As you know, we guide one quarter at a time, but what I can tell you about Blackwell is this: it is a giant leap in AI. It was designed for trillion-parameter AI models, and as you know, we're already at two trillion parameters. Model sizes are doubling about every six months, and the amount of processing, between the size of the model and the amount of data, is growing four times. So the ability of these data centers to keep up with these large models really depends on the technology that we bring to them. Blackwell is also designed for incredibly fast inferencing. Inference used to be about recognition of things, but now inferencing, as you know, is about generation of information: generative AI. Whenever you're talking to ChatGPT and it's generating information for you, or drawing a picture for you, or recognizing something and then drawing something for you, that generation is a brand-new inferencing technology. It's really complicated and requires a lot of performance. So Blackwell is designed for large models, for generative AI, and we designed it to fit into any data center. It's air-cooled, liquid-cooled, x86 or this new revolutionary processor we designed called the Grace Blackwell superchip. It supports InfiniBand data centers like we used to, but we also now support a brand-new type of data center: Ethernet. We're going to bring AI to Ethernet data centers. So the number of ways you can deploy Blackwell is way higher than the Hopper generation. I'm excited about that.

02:58

I want to talk about the inferencing, Jensen. Some analysts have brought up the idea that as we move over towards inferencing from training, there may be some in-house processors, from companies like Microsoft, Google, and Amazon, that are maybe more suited for the actual inferencing. How does that impact Nvidia?

Well, inferencing used to be easy. When people started talking about inference, generative AI didn't exist. Generative AI is of course about prediction, but it's prediction of the next token, or the next pixel, or the next frame, and all of that is complicated. Generative AI is also used for understanding context: in order to generate the content properly, you have to understand the context, in what is called memory. So now the memory size is incredibly large, you have to have context memory, and you have to be able to generate the next token really, really fast. It takes a whole lot of tokens to make an image, a ton of tokens to make a video, and a lot of tokens to reason about a particular task so that it can make a plan. So the generative AI era really made inference a million times more complicated. As you know, the number of chips that were intended for inference kind of fell by the wayside, and now people are talking about building new chips. The versatility of Nvidia's architecture makes it possible for people to continue to innovate and create these amazing new AIs, and now Blackwell is coming.

So in other words, you think you still have a competitive advantage even as the market shifts to inferencing?

We have a great position in inference because inference is just a really complicated problem. The software stack is complicated, the types of models that people use are complicated, and there are so many different types. It's just going to be a giant market opportunity for us. The vast majority of the world's inferencing today, as people are experiencing it in their data centers and on the web, is done on Nvidia, and I expect that to continue.

05:13

You said on the call a couple of times that you'll be supply constrained for both Hopper and then Blackwell chips until next year, because of the vast demand that's out there. What can you do about that? Are there any levers you can pull to help increase supply?

Hopper demand grew throughout this quarter, after we announced Blackwell, and that kind of tells you how much demand there is out there. People want to deploy these data centers right now. They want to put our GPUs to work right now and start making money and start saving money. So that demand is just so strong. You know, it's really important to take a step back and realize that what we build is not a GPU chip. We call it Blackwell and we call it a GPU, but we're really building AI factories. These AI factories have CPUs and GPUs and really complicated memory. The systems are really complicated. It's connected by NVLink, there's an NVLink switch, there are InfiniBand switches and InfiniBand NICs, and now we have Ethernet switches and Ethernet NICs, and all of this is connected together with this incredibly complicated spine called NVLink. Then the amount of software it takes to build all this and run all this is incredible. So these AI factories are essentially what we build. We build it as a holistic unit, as a holistic architecture and platform, but then we disaggregate it so that our partners can take it and put it into data centers of any kind. Every single cloud has slightly different architectures and different stacks, and our stacks and our architecture can now be deeply integrated into theirs, but everybody's a little different. So we build it as an AI factory, then we disaggregate it so that everybody can have AI factories. This is just an incredible thing, and we do it at very high volume, and it's just very, very hard to do. Every component, every part of our data center, is the most complex computer the world's ever made, so it's sensible that almost everything is constrained.

07:18

Jensen, I want to ask about the cloud providers versus the other industries that you said are getting into the generative AI game, or getting Nvidia chips. You mentioned, in the comments in the actual release that we heard from CFO Colette Kress, that mid-40% of data center revenue comes from those cloud providers. As we start to see these other industries open up, what does that mean for Nvidia? Will the cloud providers shrink their share, and will these other industries pick up where those cloud providers were?

I expect them both to grow, in a couple of different areas. Of course, the consumer internet service providers: this last quarter, a big story was Meta and the incredible scale that Mark is investing in. Llama 2 was a breakthrough; Llama 3 was even more amazing. They're creating models that are activating large language model and generative AI work all over the world, so the work that Meta is doing is really, really important. You also saw Elon talking about the incredible infrastructure that he's building, and one of the things that's really revolutionary about version 12 of Tesla's full self-driving is that it's an end-to-end generative model. It learns from watching video, surround video, and it learns how to drive end to end, using generative AI to predict the path and to understand how to steer the car. The technology is really revolutionary, and the work they're doing is incredible. So I gave you two examples. A startup company that we work with called Recursion has built a supercomputer for understanding proteins and generating molecules for drug discovery. The list goes on; we could go on all afternoon. There are so many different areas of people who are now recognizing that we have software and AI models that can understand and learn almost any language: the language of English, of course, but also the language of images and video and chemicals and proteins and even physics, and that can generate almost anything. It's basically like machine translation, and that capability is now being deployed at scale in so many different industries.

09:49

Jensen, just one more quick last question. I'm glad you talked about the auto business and what you're seeing there. You mentioned that automotive is now the largest enterprise vertical within data center. You talked about the Tesla business, but what is that all about? Is it self-driving among other automakers too? Are there other functions that automakers are using within data center? Help us understand that a little bit better.

Well, Tesla is far ahead in self-driving cars, but every single car, someday, will have to have autonomous capability. It's safer, it's more convenient, it's more fun to drive. And in order to do that, it is now very well known, very well understood, that learning from video directly is the most effective way to train these models. We used to train based on images that were labeled: we would say, this is a car, this is a sign, this is a road, and we would label all of that manually. Now we just put video right into the car and let the car figure it out by itself. This technology is very similar to the technology of large language models, but it requires an enormous training facility, because the data rate of video, the amount of data in video, is so high. The same approach that's used for learning physics, the physical world, from videos, the one used for self-driving cars, is essentially the same AI technology used for grounding large language models so they understand the world of physics. Technologies like Sora, which is just incredible, and other technologies like Veo from Google, also incredible: the ability to generate video that makes sense, conditioned by a human prompt, has to be learned from video. So the next generation of AIs needs to be grounded in physical AI; it needs to understand the physical world. The best way to teach these AIs how the physical world behaves is through video, just watching tons and tons and tons of videos. So the combination of this multimodality training capability is going to require a lot of computing demand in the years to come.

12:06

Jensen, as always, super cool stuff, and great to be able to talk to you. Dan and I really appreciate it. Jensen Huang, everybody, founder and CEO of Nvidia.

Great to see you guys. Thank you.


Related Tags
Nvidia, Earnings, AI technology, Data center, Blackwell, GPU, Inference, Cloud provider, Self-driving, Technology