"A Conversation with Yann LeCun: Is AI a Lifeline or a Landmine?" - World Governments Summit

Will保哥
18 Feb 2024 · 24:50

Summary

TLDR This video covers the exciting progress being made in artificial intelligence (AI) today and its far-reaching impact on the world. From driver-assistance systems to healthcare and scientific research, AI applications are accelerating technological progress. The video also explores AI's key role in content moderation on social networks and stresses that AI is the solution rather than the problem. It discusses the cost of developing large AI models and how open-source technology enables developing nations and small companies to harness AI and drive innovation. It closes with AI's future potential, including better global knowledge sharing and safety, while noting the technical breakthroughs and challenges still required to reach human-level intelligence.

Takeaways

  • 🤖 AI is helping us become smarter; in the short term it is already at work in transportation, healthcare, drug design, and other fields.
  • 🚗 Modern cars come with driver-assistance systems that automatically detect obstacles and stop the car, a sign that cars will eventually drive themselves.
  • 🔬 On the science and technology front, AI promises to accelerate progress in fields such as materials science and chemistry.
  • 🌐 One of the biggest AI deployments today is content moderation on social networks. It is a complex problem, but AI plays the role of the solution to disinformation and hate speech, not the problem.
  • 💻 Staying at the frontier of AI research requires enormous supercomputing resources, for example at least 10,000 GPUs.
  • 🌍 Developing nations and small companies can harness AI by fine-tuning open-source technology and foundation models, without needing enormous investment.
  • 📡 AI will come to be seen as a foundational software platform, much like the Internet, which means large parts of it will be built on open-source software.
  • 👓 We will rely more and more on AI assistants, which will provide information and help through devices such as our smartphones and smart glasses.
  • 🚀 Future progress in AI will require new architectures and technical breakthroughs, such as the ability to learn from video and improved memory systems.
  • 🌟 Advice for governments and policymakers: build AI sovereignty, provide education and training, and create national compute resources to grow an AI ecosystem.

Q & A

  • Why are open-source AI systems important?

    - Open-source AI systems can be customized by anyone and ported to different hardware, so they support a much wider range of applications. Like the open-source software behind the Internet, open-source AI can progress faster.

  • How do today's largest language models compare with a cat's brain?

    - The largest language models have a few hundred billion parameters, the rough equivalent of synapses, while a cat's brain has on the order of 1.6 trillion synapses. Even at a broadly comparable scale, these models are nowhere near as smart as a cat.

  • How far are we from artificial general intelligence?

    - Probably decades, and at least more than 10 years. People consistently overestimate the pace of technological progress.

  • What breakthroughs are still needed to reach general AI?

    - New neural network architectures that learn how the world works from video, the ability to remember and to reason, and the ability to plan and act hierarchically.

  • How much compute does today's largest-scale AI training require?

    - Frontier AI research needs tens of thousands of GPUs at a minimum; a competitive unit is 16,000 GPUs, which costs a billion dollars or more.

  • Will compute requirements come down in the future?

    - Both hardware and algorithms are improving: systems can run at lower numerical precision, and smart devices will ship with small built-in neural network chips, so compute requirements will come down somewhat.

  • What is the difference between open-source and proprietary AI systems?

    - Open-source systems support a wider range of applications and are easier to customize, while proprietary systems are more integrated and simpler to use. Open-source AI will eventually surpass proprietary systems.

  • Why are generative models said not to be the future of AI?

    - If AI systems are to learn how the world works from video, the architectures that can do so are unlikely to be generative; LeCun's proposed JEPA (joint embedding predictive architecture) is not a generative architecture.

  • How should countries achieve AI sovereignty?

    - Countries can share a handful of open-source foundation models, invest in education and compute resources, and build their own customized AI applications on top of that foundation.

  • How big a threat does AI pose to humanity?

    - Today's AI is actually far from a level worth worrying about. In the more distant future, machine intelligence may surpass human intelligence, but such machines would be working for us, toward goals that we set.

Outlines

00:00

Applications of AI

Discusses AI applications in transportation, healthcare, science, and other fields

05:05

Developing AI systems

Discusses the compute power and resources required to develop AI systems

10:07

The development of AI

Discusses the challenges and prospects facing AI development


Keywords

💡artificial intelligence

Artificial intelligence (AI) refers to computer systems that are designed to perform tasks that would otherwise require human intelligence. In the context of this video, AI is discussed in terms of both current capabilities and future possibilities, and how it can be applied to assist humans in areas like transportation, healthcare, and more.

💡machine learning

Machine learning is a subset of AI focused on algorithms that can learn from data to make predictions or decisions without being directly programmed. The video discusses innovations in machine learning models and architectures that could enable more advanced AI.

Highlights

To be at the very frontier of AI research, you need a supercomputer infrastructure made up of tens of thousands of GPUs

Open-source models can give nations and startups a foundation platform for harnessing AI

AI will become a foundational software platform, much like the Internet

AI systems will mediate all of our interactions with the digital world

Open and diverse AI systems are essential, just like diversity in the press

Hardware and algorithms keep improving, making AI systems more efficient

Major breakthroughs are still needed in the years ahead to reach artificial general intelligence

We need new neural network architectures that understand how the world works

The future of AI may not be "generative"

We need AI systems that can store and retrieve memories

We need AI systems that can reason logically

We need AI systems that can plan hierarchically

Open-source AI must not be regulated out of existence over imagined dangers

Education and training are the biggest obstacle to the adoption of AI

Achieving AI sovereignty requires a handful of open-source foundation models

Transcripts

play00:00

We go to events today, all around the world.

play00:03

We see people very excited about artificial intelligence.

play00:07

I would like to know what specifically about AI at this moment excites you.

play00:12

Too many things to list them all.

play00:14

But, you know, ultimately,

play00:18

AI is going to help us

play00:21

be smarter ourselves.

play00:22

It's going to assist us.

play00:24

And intelligence is, you know, the commodity

play00:28

that is in the highest demand.

play00:29

If you, you know, want to make the world better.

play00:33

And so that's the long term effect of AI,

play00:38

the positives, the benefits

play00:41

of AI technology.

play00:43

But in the short term,

play00:46

what does AI do to us today?

play00:49

And it's used in transportation.

play00:52

All of our cars now have, you know, assistance

play00:55

driving assistance systems in them that

play00:58

make the car stop automatically if there is an obstacle.

play01:01

And eventually the cars will drive themselves.

play01:03

Not yet completely

play01:05

in healthcare, in drug design, in science, material design,

play01:10

for example, material science, chemistry.

play01:13

A lot of promises there that are really exciting

play01:16

for the progress of science and technology, where AI is going to help us make

play01:22

faster progress in all kinds of domains.

play01:26

So that is really exciting on the side of applications.

play01:29

And of course,

play01:32

the biggest deployment of AI today might surprise you,

play01:35

but it's a little behind the scenes.

play01:36

It's basically content moderation on communication networks,

play01:39

social networks in particular.

play01:42

And that actually is a very complex problem.

play01:46

A lot of people today are really scared about the use of AI

play01:49

for things like disinformation and hate speech and,

play01:53

you know, all kinds of nefarious purposes.

play01:56

But in fact, for all of those problems,

play01:59

AI currently is actually the solution.

play02:01

It's not the problem. It's the solution.

play02:03

It is with AI that we detect all those bad things

play02:06

that need to be dealt with.

play02:10

So, you know, the negatives.

play02:12

People pay too much attention to the negatives, I think.

play02:15

Yann, if you allow me, these applications that we are seeing now,

play02:18

protein folding, large language models,

play02:21

all of these different technologies that exist today

play02:24

are powered by compute capability.

play02:29

There is a number that we usually hear and is that no nation, no company,

play02:33

no organization, no research institution can be relevant in the world of AI today

play02:38

if they do not have access to a minimum of 10,000 GPUs, 10,000 GPUs.

play02:44

Is this number correct?

play02:47

Yeah, if you want to be on top of the research,

play02:52

like at the frontier of AI research,

play02:55

the threshold is basically a supercomputer composed of lots of GPUs

play03:00

that are tightly interconnected using optical communication.

play03:03

And the basic unit there is 16,000 GPUs.

play03:08

For various economical reasons,

play03:11

that's basically a billion dollars or more.

play03:15

And, you know, it's a big infrastructure.

play03:18

So, large tech companies like Meta,

play03:23

Google, Microsoft and a few others have the capability for this,

play03:27

but it's really in the hands of a small number.

play03:29

Now, these things are for training what we call foundation models

play03:33

or base models, right?

play03:34

This is very expensive, it requires a lot of expertise,

play03:37

a lot of computation.

play03:39

And if you want to be at the top, you know,

play03:42

you need at least that much resources

play03:45

unless you come up with some new concepts

play03:47

that nobody has thought about before, which may occur.

play03:51

But then once those base models have been trained,

play03:53

fine tuning them for your interests,

play03:57

your local language, culture, centers of interest,

play04:01

value system, whatever it is, it's not that expensive.

play04:05

A lot of people can do this, small companies can do this.
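
To make that fine-tuning idea concrete, here is a minimal sketch of adapting an open base model to local text with LoRA adapters. It assumes the Hugging Face transformers, peft, and datasets libraries; "gpt2" is used only as a small, ungated stand-in for an open foundation model (in practice you would start from something like a Llama release), and the two example sentences stand in for a local-language corpus. This is an illustration of the general approach, not code from the talk.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Load a small, openly available base model (stand-in for a larger open foundation model).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach low-rank adapters: only these small matrices are trained, so the cost is a
# tiny fraction of training the base model itself.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Stand-in corpus for "your local language, culture, centers of interest".
texts = ["An example sentence in the local language.",
         "Another text reflecting local culture and interests."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"])

# Standard causal-language-modeling fine-tune: the collator builds the labels.
Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-local", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```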

play04:07

But one of the things that a lot of countries could do

play04:11

to basically, as a springboard to creating an ecosystem of AI

play04:16

in the country, is to provide relatively cheap compute resources,

play04:22

not just to startups.

play04:25

Large companies usually can afford to do this,

play04:28

but not just to startups, but also to academic groups.

play04:31

That's really crucial, because academic groups at the moment are,

play04:35

particularly in certain areas, are wondering whether they can contribute

play04:37

at all, because most of them don't have access to enough

play04:41

computing resources.

play04:43

So a billion dollars, 16,000 GPUs,

play04:46

if a nation or a developing nation or a company or a startup

play04:49

doesn't want to invest that amount, they can leverage open source technology,

play04:53

is what I'm saying.

play04:54

And I know Meta is very big on this with the Llama models.

play04:59

The UAE has invested very heavily with Falcon LLM as well.

play05:04

Please explain to us a little bit more what about open source

play05:07

and how can this be leveraged by developing nations and companies?

play05:11

Yeah, so I think AI is going to be viewed as a

play05:15

foundation platform, a

play05:19

software platform, a little bit like the software platform of the Internet.

play05:23

Right. So the Internet runs on open source software,

play05:26

Linux, Apache, MySQL, you know, et cetera, right?

play05:30

All the software stack.

play05:33

And it's not just the Internet.

play05:35

You know, the entire cell phone system

play05:38

runs on open source software.

play05:39

You don't realize that.

play05:40

But cell phone towers run on an open source software stack.

play05:44

So the reason for this is that whenever a piece of software is infrastructure,

play05:48

it needs to be shared.

play05:49

It needs to be secure and safe.

play05:52

And the best way for it to be secure and safe is to have a lot of eyeballs

play05:56

looking at the source code and fine tuning it and improving it.

play06:00

So I think, you know, AI is going to be a common infrastructure

play06:03

because it's going to be like the repository of all human knowledge.

play06:07

If you want, to some extent,

play06:09

all of our digital diet is going to be mediated by those AI systems.

play06:11

We're not going to go to a search engine anymore.

play06:13

We're just going to be talking to our AI assistant.

play06:16

It's going to be living in our mobile phones, perhaps.

play06:19

But perhaps in smart glasses. Right.

play06:22

So you're walking around and,

play06:24

you know, you don't know where to go.

play06:25

You ask your assistant like, you know, where am I now?

play06:27

What is this building?

play06:28

And the system, you know, there's a camera and it can tell you this.

play06:31

You can basically buy this today or, if not today, within three months.

play06:36

So, you know, we're going to be talking with our assistant all the time,

play06:39

which means our digital diet is going to be mediated by those systems,

play06:42

which is why it needs to be open, diverse, free to some extent,

play06:48

just like the press, just like the media.

play06:52

We need a diversity of AI assistants

play06:56

so that we're not all kind of, you know, getting the same information from the same source.

play07:01

I want to go back to the 16,000 GPUs for just one moment here.

play07:06

How long do you foresee this demand on computing power will remain?

play07:10

We saw in the blockchain when it first came out, it was very compute intensive.

play07:15

There were fundamental shifts in how the blockchain operated,

play07:17

went from proof of work to proof of stake.

play07:20

Will we see such a shift in AI?

play07:22

Will algorithms improve that will be more efficient

play07:24

and will not be in this much need of computing power?

play07:30

It's a two year waiting list today to get access to some of these GPUs

play07:34

from these manufacturers and vendors.

play07:35

Yeah, that's true.

play07:36

And it's in part due to companies like Meta and Microsoft,

play07:42

which are the two biggest buyers of GPUs from Nvidia,

play07:45

from Jensen Huang's company, who was here earlier.

play07:50

As well as the UAE now?

play07:52

That's a good question.

play07:53

I don't know how many GPUs there are in the UAE,

play07:55

but most people actually rent GPUs out of cloud service providers, right?

play08:00

They don't necessarily have their own facilities.

play08:03

But to give you an idea, by the end of the year,

play08:07

Mark Zuckerberg announced that Meta will basically have access to 600,000 GPUs.

play08:13

Many of them are used for research and development,

play08:15

but most of them are actually used for production.

play08:17

So when you talk to the AI assistant, you need a GPU to run it.

play08:22

But anyway, so there's going to be a lot of progress,

play08:24

both on the hardware, the hardware is making a lot of progress,

play08:31

not because of Moore's law, because Moore's law is kind of saturating,

play08:35

but because chips are being designed that are more appropriate,

play08:40

they're more efficient to run the type of neural nets that we are interested in.

play08:44

There's two big families of architectures of neural nets

play08:46

that need to be run efficiently.

play08:48

And you have to figure out, you know,

play08:49

what type of silicon hardware architecture is appropriate for that.

play08:54

And people are making progress along this line.

play08:57

So there's progress on that side.

play08:59

There is exploitation of the fact that when you do the computation

play09:02

for a deep learning system, a neural network,

play09:05

the precision of the computation doesn't need to be very high,

play09:10

only a few bits.

play09:11

The normal computer makes calculation on numbers

play09:16

that are represented on 64 bits or 32 bits.

play09:19

But when it's a neural net, you can actually train it with only 16 bits.

play09:24

And once it's trained, you can quantize that down to eight bits

play09:27

or even sometimes four bits, which allows even very large systems

play09:30

to run on sort of more regular hardware.
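
As a concrete illustration of the low-precision point above, here is a minimal sketch of post-training 8-bit quantization in NumPy. It is my own illustration, not code from any particular framework: a single scale factor maps trained weights to int8, cutting memory roughly 4x relative to float32 at a small accuracy cost.

```python
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in for trained weights

scale = np.abs(weights_fp32).max() / 127.0                  # map the largest weight to +/-127
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# At inference time the int8 weights are rescaled (often fused into the matrix multiply).
weights_dequant = weights_int8.astype(np.float32) * scale
print("max quantization error:", np.abs(weights_fp32 - weights_dequant).max())
```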

play09:35

Laptops that are going to come out this year,

play09:38

most of them will have neural net accelerators hardwired in them

play09:41

with enough memory to run fairly sizable neural net on the laptop.

play09:47

And then, you know, down the line,

play09:50

you're going to have to run very basic front end neural nets

play09:54

on your smart glasses, on your smartphones.

play09:56

Smartphones now come with neural net accelerators at least at the high end.

play10:01

There's a lot of research into very low power electronics

play10:03

to put them into smart glasses as well.

play10:07

And then pretty soon you'll have,

play10:10

you know, AI neural net systems in every embedded device.

play10:13

You know, your vacuum cleaner, your automated lawnmower,

play10:20

you know, a camera maybe on the ceiling of a retirement home

play10:22

that can detect if people fall on the floor, things like that.

play10:25

Those things are going to be everywhere, including in,

play10:27

you know, three dollar microcontrollers.

play10:32

A lot of these are open source based systems or open source models.

play10:35

You know, again, coming back to Meta's open source systems, Falcon in the UAE,

play10:38

Jais in the UAE, why, if it is so good

play10:42

and it enables developing nations to be able to leverage, enable startups

play10:45

to be able to leverage artificial intelligence, why then are we hearing

play10:49

another side where people are sort of pushing against open source

play10:53

and maybe promoting closed source AI systems,

play10:58

whereas open source really built the Internet, built what we see today

play11:01

to a large extent?

play11:03

Well, you know, at the beginning of the Internet,

play11:06

the software infrastructure for the Internet was actually not open source.

play11:09

It was a big battle between Sun Microsystems and Microsoft

play11:11

to provide the operating systems, web servers, et cetera.

play11:15

And they both lost, right?

play11:16

They lost to open source platforms because open source platforms

play11:19

make progress faster.

play11:20

So I think we're going to see a similar phenomenon in AI

play11:23

that proprietary platform, there is space for them.

play11:27

But they belong to a particular business model where, you know,

play11:31

you basically have a subscription to those systems with an API.

play11:36

But they are difficult to customize for your own application

play11:41

because you don't have access to the code.

play11:43

You can't port it to your hardware.

play11:44

You can't run it locally.

play11:46

You know, you have to use servers that are somewhere

play11:48

otherwise close to the U.S.

play11:49

So the

play11:53

spectrum of applications you can do with a closed source system

play11:56

is not as large as with open source.

play12:00

And what we're seeing over the last year or so is that open source models

play12:04

that are being released are, you know, inching their way

play12:07

towards the same level of performance as the best closed source one.

play12:11

So at some point they're going to cross, and then that will be the end of proprietary.

play12:15

What's going to happen after that is that the base layer of open source

play12:21

is going to be used.

play12:22

There's going to be, you know, three or four good open source platforms

play12:26

that are going to be used by everyone to build commercial,

play12:29

possibly closed source applications on top of it

play12:33

for businesses, for consumers, for,

play12:36

you know, a government operation, for science, for whatever you want.

play12:41

You know, as long as the licenses of the open source models

play12:43

are sufficiently liberal for that.

play12:45

So I think that's the future.

play12:47

I think it's the better future that we can imagine.

play12:49

But the reason why there is pressure also

play12:52

to essentially regulate open source out of existence

play12:56

is because of this imaginary fear

play13:00

that powerful AI systems are dangerous.

play13:03

At the moment, they're not.

play13:04

We're really far from human level intelligence.

play13:07

You know, there were stories about the fact that you could use an LLM

play13:11

to give you instructions of how to make a chemical weapon

play13:15

or bioweapon or something.

play13:16

That turns out to be false.

play13:18

Those systems are trained on public data.

play13:19

They can't really invent anything.

play13:21

So, you know, at least today. Now, you know, some time in the future,

play13:26

those systems might actually be smart enough to really give you

play13:29

useful information better than you can get with a search engine.

play13:32

But it's just not true today.

play13:34

I heard you once say that if we take all the data available in the world today

play13:38

and we take all the AI models developed in the world today

play13:41

and we put them together into one system,

play13:43

it will still not be as smart as a household cat.

play13:47

Can you please explain a little bit more on this?

play13:50

Yeah. The brain of a house cat is

play13:54

it's about 800 million neurons.

play13:57

You have to multiply this by about 2000 to get the number of

play14:01

synapses, the connections between neurons,

play14:03

which is the equivalent of number of parameters in an LLM.

play14:05

The biggest LLMs that we have at the moment that are practical

play14:08

have a few hundred billion

play14:11

parameters, the equivalent of synapses.

play14:14

So we're maybe the size of a cat.
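
The back-of-the-envelope arithmetic behind that comparison, using the figures quoted above (the 400-billion-parameter value is only an illustrative stand-in for "a few hundred billion"):

```python
cat_neurons = 800e6                    # ~800 million neurons in a house cat's brain
synapses_per_neuron = 2000             # multiplier quoted in the talk
cat_synapses = cat_neurons * synapses_per_neuron   # ~1.6e12 connections

llm_parameters = 400e9                 # "a few hundred billion" parameters (illustrative)
print(f"cat synapses:   {cat_synapses:.1e}")
print(f"LLM parameters: {llm_parameters:.1e}")
print(f"ratio:          {llm_parameters / cat_synapses:.2f}")   # roughly a quarter of the cat's wiring
```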

play14:18

But why is it that those systems are not nearly as smart as a cat?

play14:21

You know, a cat can remember, first of all,

play14:24

understands the physical world, can plan complex actions,

play14:28

can do some level of reasoning, actually much better than the biggest LLMs.

play14:32

And so what that tells you is that we're missing something really,

play14:36

really conceptually, something really big to get machines

play14:40

to have the type of learning abilities

play14:43

that we observe in animals and humans.

play14:46

We still have some breakthroughs to get to, you know,

play14:49

AGI, whatever you want to call it, human level AI is not just around the corner.

play14:53

There's no question it will happen.

play14:55

There's no question that at some point in the future,

play14:57

we will have machines that are smarter than us in all domains where we're smart.

play15:01

They'll be working for us.

play15:02

They're not going to want to, you know, take over the world.

play15:06

But, you know, we'll set their goals

play15:10

and they'll be executing those goals for us.

play15:13

But they will be smarter than us in many ways.

play15:17

But we're not there yet.

play15:18

You know, we still have to discover some major breakthroughs

play15:22

before we get there.

play15:24

No, thank you.

play15:24

Then, you know, I come from the artificial intelligence

play15:27

office of the federal government.

play15:29

In essence, we are policymakers,

play15:30

but you are coming from the front lines of AI development.

play15:35

We talk about artificial general intelligence.

play15:37

We hear about superhuman intelligence or human level intelligence.

play15:41

How far are we really?

play15:44

Will we see it in our lifetime?

play15:47

Uh, maybe in the lifetime of some people in this room.

play15:52

Not sure about me.

play15:54

No, this is going to it's going to take decades.

play15:56

You know, I mean, we're going to make progress over the next few years.

play15:59

Perhaps, you know, if we're lucky, progress will go faster than we expect.

play16:05

But this is not three years from now.

play16:10

This is most likely not five years from now.

play16:13

Probably more than 10 years.

play16:15

And maybe within 20.

play16:17

OK, so that's a guess.

play16:21

Now, when I'm saying this, I'm taking a huge risk because every single

play16:25

AI researcher in the history of AI for the last 65

play16:29

or 70 years has been overly optimistic about those kinds of predictions

play16:34

and turned out to be wrong.

play16:36

So there is this phenomenon that, you know, when there is sort of a new paradigm,

play16:41

a new way of, you know, getting machines to do new things.

play16:46

We think that's it. That's the secret.

play16:48

Now we have the secret of intelligence.

play16:51

And, you know, within 10 years, we'll have machines that are as smart as humans.

play16:55

People have been saying this every five years since 1955.

play17:00

And they've been wrong, obviously.

play17:03

Some of the companies that are,

play17:05

you know, very well established in the AI business today

play17:09

started out 10 years ago telling everyone, their investors,

play17:13

AGI is just around the corner three years from now.

play17:16

They were wrong.

play17:17

The technique they were advocating turned out to not be as

play17:22

you know, as good as what they thought.

play17:26

And so I may be another one of those when I tell you this,

play17:30

you know, 20-year timeframe.

play17:32

Is there a breakthrough that needs to happen to reach that level

play17:36

of human level intelligence?

play17:37

Are we looking at more data?

play17:39

Are we looking at more computing power?

play17:41

Is there some algorithm that still needs to be developed

play17:44

that will be like, boom, we've unlocked it?

play17:48

OK, so certainly

play17:53

computation, more compute power is going to help.

play17:56

There's no question that's necessary, but it's not sufficient.

play17:59

What we need is new architectures.

play18:01

When you say new algorithm,

play18:02

it depends what kind of algorithms you're talking about.

play18:04

So the basic algorithm we use for deep learning

play18:08

is called back propagation to adjust the parameters, right?

play18:11

That's with us, like that's going to stay with us.

play18:14

We don't have any good replacement for this or any even basic idea

play18:17

of how we could replace this.

play18:18

So this works really well.

play18:20

We're going to keep that.

play18:21

So deep learning is here to stay.

play18:22

That's the basis of future AI systems.

play18:25

But what we need are four breakthroughs.

play18:28

Basically, one is:

play18:30

The ability for systems to learn how the world works,

play18:35

mostly by observation and a bit by interactions,

play18:38

the way babies learn how the world works in the first few months of life

play18:43

and similar to how, you know, baby animals also learn how the world works.

play18:47

So it turns out.

play18:51

You can, in principle, do this by training a system to predict.

play18:54

So you show a system.

play18:56

So that's the way algorithms are trained, right?

play18:58

You show a large neural network a piece of text,

play19:02

and you mask the end of the text and you ask the system

play19:04

to predict the next word in that text.

play19:07

And if the system is properly trained on trillions of words,

play19:12

then it can produce the next word and then you shift that into the input

play19:15

and produce the next word after that, and so on.

play19:18

It's called autoregressive prediction.

play19:20

That's how all LLMs work today.
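
A minimal sketch of the autoregressive loop just described, using a toy bigram "model" in place of a neural network (my own illustration, not how any production LLM is implemented): predict the next token, shift it into the input, and repeat.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build bigram counts: for each word, how often each next word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

# Autoregressive loop: predict, append, feed the output back in as input.
tokens = ["the"]
for _ in range(6):
    tokens.append(predict_next(tokens[-1]))
print(" ".join(tokens))
```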

play19:22

Now, if you want systems like this to understand how the world works,

play19:27

why don't you do this with video?

play19:28

So replace the words by video frames

play19:31

and then ask the system to predict what's going to happen next in the video.

play19:34

Predicting the next frame is too easy.

play19:36

You have to ask it to predict

play19:39

multiple frames.

play19:40

And basically, we don't know how to do this properly.

play19:43

It doesn't work for video.

play19:45

What works for text doesn't work for video.

play19:48

And the only technique so far that has a chance of working for video

play19:53

is a new architecture that I've called JEPA.

play19:55

That means joint embedding predictive architecture.

play19:57

I'm not going to explain to you what it is, but here is a funny thing.

play20:00

It's not a generative architecture.

play20:02

So the joke I'm saying is not a joke at all.

play20:06

I really believe this.

play20:07

The future of AI is not generative.

play20:10

A lot of people now are talking about generative AI like it's,

play20:13

you know, the kind of the new thing.

play20:15

I think if we find ways to get machines

play20:18

to learn how the world works, they're not going to be generative.

play20:22

So, you know, it's new architectures, right?

play20:25

So getting machines to understand how the world works by basically watching video

play20:31

and the amount of data that we already have for this is more than enough.

play20:34

We just don't know what to do with it.

play20:37

To give you a comparison,

play20:39

a four-year-old child has been awake a total of 16,000 hours in their life.

play20:45

16,000 hours of video is 30 minutes of YouTube uploads.

play20:48

So we have plenty of video. No problem.
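
Rough arithmetic behind that comparison; the upload rate of roughly 500 hours of video per minute is a commonly cited public figure for YouTube and is an assumption here, not a number from the talk.

```python
child_waking_hours = 16_000          # waking hours of a four-year-old, as quoted above
upload_rate_hours_per_minute = 500   # assumed YouTube upload rate (commonly cited figure)
minutes_of_uploads = child_waking_hours / upload_rate_hours_per_minute
print(f"{minutes_of_uploads:.0f} minutes of uploads")  # ~32 minutes, i.e. about half an hour
```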

play20:51

And it's much, much richer than all the text available on the Internet,

play20:58

which is why, to get systems to become intelligent,

play21:02

we need them to be trained from high bandwidth signals like video.

play21:06

Text is just not sufficient.

play21:08

So that's the first thing.

play21:09

The second thing is systems that can store and remember.

play21:14

Basically, you have an associative memory in the human brain.

play21:17

There is a particular piece of the brain called the hippocampus

play21:20

that serves as our episodic short term and long term memory.

play21:24

If you don't have hippocampus,

play21:25

you can't remember things for more than a few minutes, two minutes.

play21:30

And LLMs today don't have persistent memory.

play21:34

The only memory is the prompt that you give them.

play21:37

And that's just not a good thing.

play21:39

Third thing is reasoning.

play21:42

LLMs cannot reason.

play21:43

They just produce one word or the other without planning in advance

play21:46

what they're going to say.

play21:48

When most of us speak, we plan in advance what we're going to say.

play21:55

And then the last thing is planning, particularly hierarchical planning.

play21:59

When we want to execute a task, even a very simple one.

play22:05

We plan that task and a cat can do this, a dog can do this.

play22:09

No AI system today can do this.

play22:11

At least no LLM can do this.

play22:13

The only systems that can do a bit of planning are the ones that play games

play22:17

like chess and Go.

play22:18

They predict in advance what the possible moves are.

play22:22

But to some extent, that's actually simple for computers to do this kind of planning.

play22:26

Planning in the real world is much, much harder.

play22:28

We don't know how to do it.

play22:29

Thank you, I want to end with maybe this last question.

play22:32

We are at the World Government Summit.

play22:34

We have government officials.

play22:35

We have academia.

play22:36

We have researchers, policymakers from over 140 countries.

play22:41

It's a very unique opportunity for you to be able to give them

play22:44

maybe one piece of advice when it comes to AI

play22:47

for them to take back when they go home to think about. What would that be?

play22:51

OK, I'm going to give several pieces of advice.

play22:53

First one is you need AI sovereignty.

play22:58

In your country, your region, your cultural community or linguistic community.

play23:03

And you want sovereignty, but those large models are so expensive to train.

play23:09

For the same reason, we don't need, you know, 10 different types of Internet

play23:13

for the same reason we don't need 10 different highways

play23:16

to go from Dubai to Abu Dhabi, you just need one.

play23:19

You only need a few base models

play23:22

that are open source so that anyone can do whatever they want with it.

play23:25

OK, so the first thing is don't legislate open source

play23:29

AI out of existence because of imaginary fears of dangers that don't exist.

play23:37

You know, premature regulation in that sense is very bad.

play23:43

So that's the first recommendation.

play23:45

Second recommendation, the biggest obstacle to the dissemination of AI

play23:50

within industry and everyday use and everything is

play23:56

education and training.

play23:57

So train your population, educate them about AI

play24:03

and access to computing resources.

play24:05

So if you have a way of creating a national

play24:09

computing resource for academics, for startups,

play24:13

so that, you know, it sort of lowers the barrier of entry.

play24:16

You can create an ecosystem of AI on top of it. Do it.

play24:20

And then figure out how to use, you know, your archives,

play24:25

your cultural archives and use them to train, to fine

play24:30

tune these LLMs for your culture, your language, your value systems

play24:34

and your centers of interest.

play24:36

Those are my recommendations.

play24:37

Thank you, Your Highness, Your Excellencies, ladies and gentlemen,

play24:40

thank you very much for joining us and a special round of applause

play24:42

for Yann LeCun for joining us, the World Government Summit.

play24:45

Thank you very much.
