Why Elon Musk Is Betting Big On Supercomputers To Boost Tesla And xAI

CNBC
23 Sept 2024 · 15:53

Summary

TL;DR: Elon Musk is investing heavily in supercomputers for Tesla and xAI, with plans to spend well over $1 billion on Project Dojo by the end of 2024. These machines, optimized for AI, are meant to advance Tesla's autonomous driving and train the Optimus humanoid robot. Despite challenges such as hardware supply and environmental impact, Musk sees supercomputers as key to Tesla's future in AI and robotics, with the potential to reshape the company's valuation and market position.

Takeaways

  • πŸš€ Elon Musk is investing heavily in supercomputers, with Tesla planning to spend over $1 billion by the end of 2024 on Project Dojo.
  • πŸ’» Supercomputers are distinct from data centers, optimized for high-speed calculations and data processing, crucial for tasks like AI model training.
  • πŸš— The supercomputing power is intended to enhance Tesla's autonomous driving capabilities and realize the long-awaited robotaxis.
  • πŸ€– Supercomputers are also essential for training Tesla's humanoid robot, Optimus, which is slated for factory deployment.
  • πŸ’° Musk's overall AI investment for Tesla is projected to reach $10 billion this year, highlighting a significant commitment to AI technology.
  • πŸ”— Musk's new AI venture, xAI, is developing a chatbot named Grok, competing with established chatbots like ChatGPT and Gemini.
  • 🌟 Tesla's AI supercomputer cluster Cortex, which Musk teased on X, is under construction at the company's headquarters in Austin, Texas.
  • πŸ’‘ The Colossus supercomputer by xAI in Memphis, Tennessee, is operational and is claimed by Musk to be the world's most powerful AI training system.
  • πŸ”‘ GPUs are critical to these supercomputers, with Tesla and xAI competing for these resources, affecting Tesla's AI infrastructure development.
  • 🌐 Environmental concerns are rising with the massive electricity and water consumption of supercomputers, impacting sustainability.
  • 🚦 There are doubts about Tesla's path to full autonomy, with some critics arguing that Dojo alone won't solve the technical challenges of FSD.

Q & A

  • What is Project Dojo and how much is Tesla planning to invest in it by the end of 2024?

    -Project Dojo is Tesla's in-house supercomputer project aimed at improving autonomous driving capabilities. Tesla plans to spend well over $1 billion on it by the end of 2024.

  • How do supercomputers differ from data centers in terms of computation?

    -While both supercomputers and data centers scale up to handle large amounts of computation, supercomputers are designed for extremely high-speed calculations and data processing with tighter interconnections and lower latency, which is crucial for tasks like training large AI models.

  • What is the purpose of using supercomputers in Tesla's autonomous driving technology?

    -Supercomputers are intended to enhance Tesla's autonomous driving capabilities by processing large volumes of data captured by Tesla vehicles to improve their Autopilot and Full Self-Driving (FSD) systems.

  • What is the role of supercomputers in training Tesla's humanoid robot Optimus?

    -Supercomputers are essential for training Optimus, processing and analyzing vast amounts of data so the robot can perform complex tasks in Tesla's factories starting next year.

  • What is the total amount of money Elon Musk plans to spend on AI this year according to the script?

    -According to the video, Musk says Tesla plans to spend $10 billion on AI this year.

  • How does xAI's chatbot Grok compare with other chatbots in the market?

    -xAI's chatbot Grok is designed to compete with OpenAI's ChatGPT and Google's Gemini chatbots, aiming to offer an alternative in the AI chatbot market.

  • What is the significance of Cortex, the AI supercomputer cluster teased by Elon Musk?

    -Cortex, being built at Tesla's Austin, Texas headquarters, represents a significant step in Tesla's AI capabilities, indicating a focus on developing advanced AI systems.

  • What is the Colossus supercomputer and where is it located?

    -The Colossus supercomputer is a powerful AI training system located in Memphis, Tennessee, and is claimed by Musk to be the most powerful in the world, powered by 100,000 Nvidia A100 GPUs.

  • Why did Elon Musk divert Nvidia's H100 GPUs from Tesla to his social media company X?

    -Musk diverted the GPUs because he claimed Tesla was not ready to utilize them, and they would have otherwise remained unused in a warehouse.

  • What is the main goal of Tesla's custom-built supercomputer Dojo?

    -The main goal of Dojo is to process and train AI models using the vast amounts of video and data captured by Tesla vehicles to improve their driver assistance features.

  • What is the controversy surrounding Tesla's Autopilot and FSD systems?

    -There is controversy because despite their names suggesting autonomy, both Autopilot and FSD require active driver supervision. Regulators have criticized Tesla for false advertising, and a report found links between Autopilot and a significant number of Tesla crashes.

  • What is the D1 chip and how does it relate to Project Dojo?

    -The D1 chip is a custom chip designed by Tesla and manufactured on a seven-nanometer process. It is integral to Project Dojo: it is designed specifically for training Tesla's self-driving systems, with an emphasis on machine-learning throughput and low latency.

  • What are the environmental concerns associated with supercomputers like Dojo?

    -Supercomputers require massive amounts of electricity and water for cooling, raising concerns about their environmental impact, especially in terms of energy consumption and water usage.

Outlines

00:00

πŸ’» Elon Musk's Foray into Supercomputing

Elon Musk is expanding his ventures into supercomputing with Project Dojo, on which Tesla plans to spend well over $1 billion by the end of 2024. Supercomputers are designed for high-speed data processing and calculation, which is crucial for training large AI models like those behind Tesla's autonomous driving effort and the humanoid robot Optimus. Musk's new AI venture, xAI, also requires these powerful machines for its chatbot Grok, which competes with established platforms like ChatGPT and Gemini. Tesla's supercomputer projects include Cortex in Austin, Texas, and Dojo in Buffalo, New York, while xAI's Colossus in Memphis, Tennessee is already operational. These machines are central to Musk's vision of AI advancement across his companies.

05:01

πŸš— Tesla's Custom Supercomputer: Dojo

Tesla's custom supercomputer, Dojo, is central to its transformation into an AI robotics company. Announced in 2021, Dojo is designed to enhance Tesla's Autopilot and Full Self-Driving (FSD) systems by processing vast amounts of data from its vehicles. Despite regulatory scrutiny and competition from companies like Waymo, Cruise, and Zoox, Tesla is banking on Dojo to achieve full autonomy. The supercomputer is also expected to boost Tesla's market value significantly. Dojo uses a custom chip, the D1, manufactured by TSMC, which is optimized for machine learning tasks. Tesla's approach to designing Dojo from the ground up allows for optimization across the entire system, potentially giving them an edge in the AI race.

10:03

πŸ€– Broader Applications and Challenges of Supercomputing

The potential of Tesla's supercomputers extends beyond self-driving cars, with possibilities for training robots like Optimus in various tasks. However, there are significant challenges, including hardware supply, especially reliance on Nvidia GPUs, and the technical hurdles of achieving full autonomy without lidar systems. There are also concerns about the environmental impact of these power-hungry machines. Some critics question the business viability of supercomputing and AI for Tesla, suggesting the company should focus on its core EV business. Despite these issues, Musk sees potential for supercomputers to greatly increase Tesla's value and transform industries.

15:04

🌐 The Future of Supercomputing in Tesla's Ecosystem

While some are skeptical about the immediate profitability of Tesla's supercomputing and AI ventures, others see it as a strategic move that could redefine the company's position in the market. The scale of investment and potential applications of supercomputing in Tesla's ecosystem are vast, with the possibility of creating a significant competitive advantage. However, there are valid concerns about the environmental impact and the need for a clear business model to capitalize on these technological advancements.

Keywords

πŸ’‘Supercomputer

A supercomputer is an extremely powerful computer that can perform a vast number of calculations and process data at exceptionally high speeds. In the context of the video, Elon Musk's Tesla is building an in-house supercomputer called Project Dojo to improve autonomous driving capabilities and train AI models like the humanoid robot Optimus. The video mentions that supercomputers are designed for high-speed computation and have tighter interconnections between computations compared to data centers.

πŸ’‘Project Dojo

Project Dojo is Tesla's in-house supercomputer initiative aimed at enhancing the company's autonomous driving technology. The project is highlighted in the video as a significant investment, with Tesla planning to spend over $1 billion by the end of 2024. Dojo is designed to process and train AI models using data captured by Tesla vehicles, which is crucial for advancing driver assistance features like Autopilot and Full Self-Driving (FSD).

πŸ’‘xAI

xAI is Elon Musk's new AI venture, which is developing large language models and AI products, including a chatbot named Grok. The video script positions xAI as a company needing powerful supercomputers to train its AI models, directly competing with established AI entities like OpenAI's ChatGPT and Google's Gemini. xAI's supercomputer, Colossus, is mentioned as being up and running, indicating the scale of Musk's AI ambitions.

πŸ’‘AI Training

AI training refers to the process of teaching AI models to perform tasks by feeding them large amounts of data. The video emphasizes the importance of AI training for Tesla's autonomous driving capabilities and for xAI's chatbot Grok. It mentions that supercomputers like Project Dojo and Colossus are necessary for handling the intensive computation required for effective AI training.

πŸ’‘Bandwidth and Latency

Bandwidth refers to the maximum rate of data transfer across a given path, while latency is the delay before the transfer of data begins. In the video, these terms are critical for the functioning of supercomputers, particularly in the context of AI training. High bandwidth and low latency are necessary for the efficient passing of data between computations, which is essential for training large AI models like those used in autonomous driving and chatbots.
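
To make the distinction concrete, here is a minimal back-of-the-envelope sketch in Python; the link figures are illustrative assumptions, not numbers from the video. It shows why many small transfers are latency-bound while one bulk transfer is bandwidth-bound, which is why training interconnects need both numbers to be good.

# Rough transfer-time model: time = latency + size / bandwidth.
# Both link parameters below are assumed for illustration.
LATENCY_S = 5e-6               # 5 microseconds per message
BANDWIDTH_BYTES_PER_S = 100e9  # 100 GB/s link

def transfer_time(size_bytes: float) -> float:
    """Seconds to move one message of size_bytes over the link."""
    return LATENCY_S + size_bytes / BANDWIDTH_BYTES_PER_S

# One 1 GB bulk transfer vs. a million 1 KB messages (same total data):
print(f"bulk:   {transfer_time(1e9):.4f} s")              # ~0.0100 s, bandwidth-bound
print(f"chatty: {1_000_000 * transfer_time(1e3):.2f} s")  # ~5.01 s, latency-bound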

πŸ’‘Robotaxis

Robotaxis are autonomous, taxi-like vehicles that can transport passengers without a human driver. The video discusses Tesla's ambition to bring robotaxis to market, which is a key driver behind the development of supercomputers like Project Dojo. The technology aims to improve Tesla's autonomous driving capabilities to the point where they can operate commercial autonomous taxi services.

πŸ’‘Optimus

Optimus is Tesla's humanoid robot project, which is intended to be used in the company's factories starting the following year, as mentioned in the video. The development and training of Optimus require supercomputers to process and analyze vast amounts of data, illustrating the diverse applications of AI and supercomputing beyond just autonomous vehicles.

πŸ’‘Nvidia A100 GPUs

Nvidia A100 GPUs are high-performance graphics processing units used in AI training and supercomputing. The video notes that xAI's supercluster in Memphis, called Colossus, is powered by 100,000 of these GPUs, which Musk claims makes it the most powerful AI training system in the world. GPUs are favored for their ability to handle the workloads involved in training large language models and other AI tasks.

πŸ’‘Dojo D1 Chip

The Dojo D1 chip is a custom AI training chip designed by Tesla, as discussed in the video. Manufactured on a seven-nanometer process, it packs 50 billion transistors into a 645-square-millimeter die and is designed specifically for machine-learning training and high-bandwidth data movement. The D1 exemplifies Tesla's commitment to building specialized hardware for the unique demands of AI training.

πŸ’‘Autopilot and FSD

Autopilot and Full Self-Driving (FSD) are Tesla's driver assistance systems, which are not fully autonomous and require driver supervision, as clarified in the video. These systems are integral to Tesla's vision of achieving full vehicle autonomy. The development of these systems is closely tied to the capabilities of Tesla's supercomputers, which process the vast amounts of data generated by Tesla vehicles to improve the AI models underlying Autopilot and FSD.

πŸ’‘Zettascale Supercomputers

Zettascale supercomputers represent the next leap in computing power, with a capability of 1000 exaflops, as mentioned in the video. This is a significant upgrade from exascale supercomputers, which perform at 1 exaflop, or 1 quintillion calculations per second. The mention of zettascale computers in the video underscores the rapid advancement in supercomputing technology and the potential future impact on AI development.
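
Since several FLOPS prefixes appear in the video, a two-line conversion check may help; the only outside facts used here are the standard SI prefixes.

# Standard SI scales for floating-point operations per second (FLOPS).
EXAFLOP = 1e18    # 1 quintillion calculations per second
ZETTAFLOP = 1e21  # the next prefix up

print(ZETTAFLOP / EXAFLOP)     # 1000.0 -> 1 zettaflop = 1,000 exaflops
print(f"{1.1 * EXAFLOP:.2e}")  # Dojo's quoted 1.1 exaflops, in calculations per second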

Highlights

Elon Musk plans to invest over $1 billion by the end of 2024 on Tesla's in-house supercomputer, Project Dojo.

Supercomputers are designed for high-speed calculations and data processing, different from data centers.

Musk aims to use Project Dojo to enhance Tesla's autonomous driving and realize the robotaxi vision.

Supercomputers are essential for training Tesla's humanoid robot Optimus.

Tesla plans to spend $10 billion this year on AI.

xAI, Musk's AI venture, is developing a chatbot named Grok to compete with ChatGPT and Google's chatbots.

Tesla's AI supercomputer cluster Cortex is being built in Austin, Texas.

Tesla announced a $500 million investment to build the Dojo supercomputer in Buffalo, New York.

xAI's Colossus supercomputer in Memphis, Tennessee, is operational.

xAI secured $6 billion in series B funding, raising its valuation to $24 billion.

Colossus is powered by 100,000 Nvidia A100 GPUs, making it one of the world's most powerful AI training systems.

GPUs are crucial for training large language models due to their architecture.

Musk's companies, Tesla and xAI, are in competition for scarce AI chips.

Tesla's Dojo supercomputer is designed to improve the company's AI capabilities in robotics and self-driving cars.

Dojo's custom chip, the D1, is manufactured using seven nanometer technology.

Dojo's infrastructure is designed from the ground up for optimal AI training.

A Dojo ExaPOD, ten cabinets of D1 hardware, is capable of 1.1 exaflops of compute, according to Tesla.

Dojo could potentially train robots like Optimus using data from Tesla vehicles.

Musk envisions Optimus could make Tesla a $25 trillion company.

Challenges for Tesla include securing enough hardware and overcoming skepticism about full autonomy.

Tesla faces criticism for not using lidar systems in its autonomous vehicles.

There are environmental concerns regarding the electricity and water usage of supercomputers.

Some question the business case for supercomputers and AI within Tesla.

Transcripts

Tech titan Elon Musk is known for being a car guy, a rocket guy, a social media guy, and now he's also a supercomputer guy. Musk says Tesla will spend well over $1 billion by the end of 2024 on building an in-house supercomputer known as Project Dojo. Although supercomputers look a lot like data centers, they're designed to perform calculations and process data at extremely high speeds.

Both of them are about scaling up to very large amounts of computation. However, in a data center you have a lot of small parallel tasks that are not necessarily connected to each other. Whereas, for example, when you're training a very large AI model, those are not entirely independent computations. So you do need tighter interconnection between those computations, and the passing of data back and forth needs to be at a potentially much higher bandwidth and a much lower latency.
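
That point about tight coupling can be sketched with a toy cost model. In one common scheme, data-parallel training with a ring all-reduce (a general technique, not something the video attributes to Tesla or xAI), every step ends with workers averaging gradients over the interconnect, so communication sits on the critical path. All figures below are illustrative assumptions.

# Toy cost model for one data-parallel training step.
# All figures are illustrative assumptions, not Tesla or xAI numbers.
N_WORKERS = 1000            # accelerators training in parallel
GRAD_BYTES = 2e9            # ~2 GB of fp16 gradients (a 1B-parameter model)
LINK_BYTES_PER_S = 50e9     # assumed per-worker interconnect bandwidth
COMPUTE_S = 0.5             # assumed forward+backward time per step

# A ring all-reduce moves about 2*(N-1)/N of the gradient bytes per worker.
comm_s = 2 * (N_WORKERS - 1) / N_WORKERS * GRAD_BYTES / LINK_BYTES_PER_S
step_s = COMPUTE_S + comm_s
print(f"comm: {comm_s:.3f} s/step ({comm_s / step_s:.0%} of each step)")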

Musk wants to use the supercomputing power to improve Tesla's autonomous driving capabilities and finally deliver on the company's years-long promise to bring robotaxis to market. Supercomputers are also needed to train Tesla's humanoid robot Optimus, which the company plans to use in its factories starting next year. All in all, Musk says that Tesla plans to spend $10 billion this year on AI. Musk's new AI venture, xAI, also needs powerful supercomputers to train its chatbot Grok, which directly competes with OpenAI's ChatGPT and Google's Gemini chatbots.

Several of Musk's supercomputer projects are already in development. In August, Elon Musk teased Tesla's AI supercomputer cluster, called Cortex, on X. Cortex is being built at Tesla's Austin, Texas headquarters. Back in January, Tesla also announced that it planned to spend $500 million to build its Dojo supercomputer in Buffalo, New York. Meanwhile, Musk just revealed that xAI's Colossus supercomputer in Memphis, Tennessee was up and running. CNBC wanted to learn more about what Musk's bet on supercomputers might mean for the future of his companies, and the challenges he faces in the ultra-competitive world of AI development.

You had supercomputers; if you go to any of the national labs, they're used for everything from simulating materials, to discovery, to climate modeling, to modeling nuclear reactions, and so on and so forth. However, what's unique about AI supercomputers is that they are entirely optimized for AI.

Musk launched xAI in 2023 to develop large language models and AI products like its chatbot Grok, as an alternative to the AI tools being created by OpenAI, Microsoft and Google. Despite being one of its original founders, Elon Musk left OpenAI in 2018 and has since become one of the company's harshest critics. In June, it was announced that xAI would build a supercomputer in Memphis, Tennessee to carry out the task of training Grok. It would represent the city's largest multi-billion-dollar capital investment by a new-to-market company in Memphis history.

The announcement came on the heels of xAI securing $6 billion in Series B funding, raising its valuation at the time from $18 billion to $24 billion. By early September, Musk announced that his training supercluster in Memphis, called Colossus, was online. The supercluster is powered by 100,000 Nvidia A100 graphics processing units, or GPUs, making it the most powerful AI training system in the world, according to Musk. He went on to say that the cluster would double in size in the next few months.

These GPUs have been around for a while. They started off in laptops and desktops, to offload graphics work from the core CPU. So this is an accelerator. If you go back ten years or so, 15 years ago, online gaming was blowing up and people wanted to game at speed, and then they realized that having graphics and the general-purpose work of the game on the same processor just led to constraints. Training a large language model is a very specific task, and doing that on a classic CPU? You can; it works. But it's one of those examples where the particular architecture of a GPU plays well for that type of workload.
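
A rough sketch of why that workload fits GPUs: large-language-model training is dominated by big matrix multiplications, which are almost perfectly parallelizable across a GPU's thousands of cores. The throughput figures below are illustrative assumptions, not benchmarks from the video.

# One (m x k) by (k x n) matrix multiply costs about 2*m*k*n FLOPs.
m = k = n = 4096
flops = 2 * m * k * n   # ~1.4e11 FLOPs, typical of one transformer-sized matmul

CPU_FLOPS = 1e12        # assumed ~1 TFLOPS sustained on a server CPU
GPU_FLOPS = 300e12      # assumed ~300 TFLOPS on a modern AI GPU (mixed precision)

print(f"CPU: {flops / CPU_FLOPS * 1e3:.0f} ms")  # ~137 ms
print(f"GPU: {flops / GPU_FLOPS * 1e3:.2f} ms")  # ~0.46 ms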

In fact, GPUs became so popular that chipmakers like Nvidia for a time had a hard time keeping up with demand. The fight for GPUs has even caused competition among Musk's own companies. Musk's social media company X and xAI are closely intertwined, with X hosting xAI's Grok chatbot on its site and xAI using some capacity in X's data centers to train the large language models that power Grok.

In December 2023, Elon Musk ordered Nvidia to ship 12,000 of its coveted H100 GPUs to X instead of to Tesla, even though they had been reserved for Tesla, effectively delaying Tesla's build-out of data center and AI infrastructure by five or six months. The incident was one example that shareholders used in a lawsuit against Musk and Tesla's board of directors that accused them of breach of fiduciary duty. They argued that after founding xAI, Musk began diverting scarce talent and resources from Tesla to his new company. Musk defended his decision on X, saying that Tesla was not ready to utilize the chips and that they would have just sat in a warehouse had he not diverted them. Musk has gone as far as to suggest that Tesla should invest $5 billion into xAI.

Still, Musk has big plans for how artificial intelligence can transform Tesla. In January, he wrote on X that Tesla should be viewed as an AI robotics company rather than a car company. Key to this transformation is Tesla's custom-built supercomputer, called Dojo, details of which the company first publicly announced during Tesla's AI Day presentation in 2021.

There's an insatiable demand for speed, as well as capacity, for neural network training. And Elon prefetched this a few years back: he asked us to design a super-fast training computer, and that's how we started Project Dojo.

During the company's Q2 earnings call last year, Musk told investors that Tesla would spend over $1 billion on Dojo by the end of 2024. A few months later, Morgan Stanley predicted that Dojo could boost Tesla's value by $500 billion. Dojo's main job is to process and train AI models using the huge amounts of video and data captured by Tesla vehicles. The goal is to improve Tesla's suite of driver assistance features, which the company calls Autopilot, as well as its more robust Full Self-Driving, or FSD, system.

They've sold, what is it, 5 million-plus cars? Each one of those cars typically has eight-plus cameras in it, and they're streaming all of that video back to Tesla. So what can they do with that training set? Obviously they can develop full self-driving, and they're getting close to that.

Despite their names, neither Autopilot nor FSD makes Tesla vehicles autonomous, and both require active driver supervision, as Tesla states on its website. The company has garnered scrutiny from regulators who say that Tesla falsely advertised the capabilities of its Autopilot and FSD systems. A 2024 report by the National Highway Traffic Safety Administration also found that, of the 956 Tesla crashes the agency reviewed, 467 could be linked to Autopilot. But reaching full autonomy is critical for Tesla, whose sky-high valuation is largely dependent on bringing robotaxis to market, analysts say. The company reported lackluster results in its latest earnings report and has fallen behind other automakers working on autonomous vehicle technology. These include Alphabet-owned Waymo, which is already operating fully autonomous taxis commercially in several U.S. cities, GM's Cruise, and Amazon's Zoox. In China, competitors include Didi and Baidu.

Tesla hopes Dojo will change that. According to Musk, Dojo has been running tasks for Tesla since 2023. And since Dojo has a very specific task, training Tesla's self-driving systems, the company decided it was best to design its own chip, called the D1.

This chip is manufactured in seven-nanometer technology. It packs 50 billion transistors into a miserly 645 square millimeters. One thing you'll notice: 100% of the area out here is going toward machine learning, training and bandwidth. This is a pure machine learning machine.
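
As a quick plausibility check on those two figures (an editor's sanity check, not a claim made in the video): 50 billion transistors in 645 square millimeters implies a density in the same general range as published 7 nm-class logic processes.

# Implied transistor density of the D1 die, from the figures quoted above.
transistors = 50e9
die_area_mm2 = 645.0
print(f"{transistors / die_area_mm2 / 1e6:.1f} MTr/mm^2")  # ~77.5 million per mm^2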

For high-performance computing, it is very common to have supercomputers that have CPUs and GPUs. However, increasingly, AI supercomputers also contain specialized chips that are specially designed for AI workloads, and the Dojo D1 is an example of that.

One of the key things that came through when I was looking at D1 is latency. It's training on a video feed that's coming from cameras in cars. So the big thing is, how do you move those big files around, and how do you handle the latency?

Aside from the D1, which is being manufactured by Taiwanese chipmaker TSMC, Tesla is also designing the entire infrastructure of its supercomputer from the ground up. Designing a custom supercomputer gives them the opportunity to optimize the entire stack: to go from the algorithms to the hardware and make sure that they are designed to work perfectly in concert with each other. And it's not just Tesla; if you look at a lot of the hyperscalers, the Googles of the world, the Metas, the Microsofts, the Amazons, they all have their own custom chips and systems designed for AI.

In the case of Dojo, the design looks something like this: 25 D1 chips make up what Tesla calls a training tile, with each tile containing its own hardware for cooling, data transfer and power, and acting as a self-sufficient computer. Six tiles make up a tray, and two trays make up a cabinet. Finally, ten cabinets make up an ExaPOD, which Tesla says is capable of 1.1 exaflops of compute. To put that into context, one exaflop is equal to 1 quintillion calculations per second. This means that if each person on the planet completed one calculation per second, it would still take over four years to do what an exascale computer can do in one second.
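
The hierarchy above pins down the chip count, and the exaflop comparison can be checked with the same arithmetic. Everything here follows from numbers stated in the video except the world population, which is assumed at roughly 8 billion.

# Chips per ExaPOD, from the hierarchy described above.
chips_per_tile, tiles_per_tray, trays_per_cabinet, cabinets_per_pod = 25, 6, 2, 10
d1_chips = chips_per_tile * tiles_per_tray * trays_per_cabinet * cabinets_per_pod
print(d1_chips)                                            # 3000 D1 chips per ExaPOD

# Implied per-chip throughput at the quoted 1.1 exaflops:
print(f"{1.1e18 / d1_chips / 1e12:.0f} TFLOPS per chip")   # ~367

# The "every person on Earth" comparison: years of all-human work, at one
# calculation per person per second, to match one exaflop-second.
population = 8e9                                           # assumed ~8 billion people
years = 1e18 / population / (3600 * 24 * 365)
print(f"{years:.1f} years")                                # ~4.0 years, in line with the narration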

It is impressive, right? But there are certainly other supercomputers performing in that ballpark as well. One of them is located at the Department of Energy's Oak Ridge National Laboratory in Tennessee. The system, called Frontier, operates at 1.2 exaflops and has a theoretical peak performance of 2 exaflops. The supercomputer is being used to simulate proteins to help develop new drugs, model turbulence to improve airplane engine designs, and create large language models. The next generation of zettascale supercomputers is already in development; a zettascale supercomputer has a computing capability equal to 1,000 exaflops.

As for Dojo, Dickens says its utility could go beyond turning Teslas into autonomous vehicles. If you wanted to train a robot on how to dig a hole, how many Tesla cars have driven past somebody digging a hole on the side of the road? And could you then point that at Optimus and say: hey, I've got hundreds of hours of how people dig holes, and I want to train you as a robot that knows how to dig holes? So I think you've got to think wider of Tesla than just a car company.

At a shareholder meeting this summer, Musk claimed that Optimus could turn Tesla into a $25 trillion company.

But not everyone is convinced. It's a daydream of robots to replace people. It's a lofty goal, and the price points don't sound too logical to me, but it's a great aspirational goal. It's something that could be transformational for humanity if we make it work. EVs have worked. Just call me a very, very heavy skeptic on this robot.

Despite all this potential for Musk's supercomputers, the tech titan and his companies have quite a few challenges to overcome as they figure out how to scale the technology and use it to bolster their businesses. One such challenge is securing enough hardware. Although Tesla is designing its own chips, Musk is still highly dependent on Nvidia's GPUs. In June, for example, Musk said that Tesla would spend between $3 billion and $4 billion this year on Nvidia hardware. Here he is talking about the supply of Nvidia chips during Tesla's latest financial results call.

What we are seeing is that demand for Nvidia hardware is so high that it's often difficult to get the GPUs. I'm quite concerned about actually being able to get state-of-the-art Nvidia GPUs when we want them, and I think this therefore requires that we put a lot more effort into Dojo in order to ensure that we've got the training capability that we need.

Even if Musk did have all the chips he wanted, not everyone agrees that Tesla is close to full autonomy, or that Dojo is the solution to achieving this feat.

Unlike many other automakers working on autonomous vehicles, Tesla has chosen to forgo the use of expensive lidar systems in its cars, instead opting for a vision-only system using cameras. The issues in FSD are related to the sensors on the cars. People who drive these vehicles report phantom obstacles on the road, where the car will suddenly brake, or the steering wheel will tweak you aside so that you're dodging something that doesn't exist. If you imagine a white tractor-trailer truck that falls over on a cloudy day, you've got white on white: a scenario that is not easily computed or recognized. A driver who's paying attention is going to see this and hit the brakes hard, but a computer can easily be fooled by that kind of situation. For them to get rid of that, they need to add other sensors onto the vehicles, and they've been vehemently against lidar. They need to change the fundamental design. Dojo is just not going to fix the core problems in FSD.

At times, even Musk has questioned Dojo's future. We're pursuing the dual path of Nvidia and Dojo. Think of Dojo as a long shot. It's a long shot worth taking because the payoff is potentially very high, but it's not something that is a high probability. It's not a sure thing at all. It's a high-risk, high-payoff program.

And then there are the environmental concerns. Supercomputers like those being built by Musk and other tech giants require massive amounts of electricity to power them, even more than the energy used in conventional computing, and an exorbitant amount of water to cool them. For example, one analysis found that after data centers globally consumed an estimated 460 terawatt-hours of electricity in 2022, their total consumption could reach more than 1,000 terawatt-hours in 2026, a demand roughly equivalent to the electricity consumption of Japan. In a study published last year, experts also predicted that global AI demand may account for up to 6.6 billion cubic meters of water withdrawal by 2027.
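
For context, the cited forecast implies a steep compound growth rate, which a quick calculation makes explicit (the two endpoint figures come from the analysis quoted above):

# Implied growth rate of the data-center electricity forecast above.
start_twh, end_twh = 460, 1000   # 2022 estimate -> 2026 projection
years = 2026 - 2022
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"{cagr:.1%} per year")    # ~21.4% compound annual growth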

This summer, environmental and health groups said xAI is adding to smog problems in Memphis, Tennessee, after the company started running at least 18 natural-gas-burning turbines to power its supercomputer facility without securing the proper permits. xAI did not respond to a CNBC request for comment.

But beyond supply chain issues and environmental concerns, some question whether supercomputers and AI are good business. Tesla is a company with car-manufacturer problems and AI and robotics aspirations. They're not about to make any money from AI anytime soon, and the FSD that was promised five years ago isn't really about to happen either. Dojo is a massive project; it's interesting and exciting, and it could open tremendous frontiers for them. It's just that I think we need to be skeptical, right? How does someone make money with this? There's no visibility on that whatsoever. It seems like a shot in the dark.

Instead, Irwin suggests that Musk stick to what he knows: making EVs. I'm very bearish on the stock, but I see the fundamental value in EVs. If Tesla goes into Thailand and India and they invest billions in India, the supply chain will bounce into existence. They're going to be a cost leader in the world. They need to get out there with a mini car, and if they do, the outlook for the company will change.

But Dickens is more positive. I think Tesla is changing the supercomputer paradigm here because of the scale of their investment, the deep pockets that they've got, and their ability to put all that investment toward a single use case. Now, you can argue that FSD has been promised for a long time, and you can think what you want to think of Elon; all of the above is valid. But they are ahead, and they've already built a moat that the likes of GM, Stellantis and Ford won't get close to any time soon, or ever.


Related Tags
Elon Musk, AI Supercomputer, Tesla Dojo, Autonomous Vehicles, Robotaxis, Optimus Robot, xAI, Grok Chatbot, AI Development, Electric Vehicles, Tech Innovation