Full interview: "Godfather of artificial intelligence" talks impact and potential of AI

CBS Mornings
25 Mar 2023 · 42:30

Summary

TLDR: This video discusses the current pivotal moment in AI and machine learning, focusing on the capabilities of large language models such as ChatGPT and the scale of the public's reaction to them. It covers neural networks and backpropagation, the evolution of deep learning, and how these technologies contribute to our understanding of the human brain, offering an outlook on AI's future. It also considers how these advances may affect society, the workplace, and ethical questions.

Takeaways

  • 🔍 This moment in AI is a pivotal one: the general public has suddenly realized that language models can do amazing things.
  • 🤖 The public reaction to ChatGPT was larger than even researchers expected.
  • 🧠 The AI field has had two schools of thought: mainstream AI, based on logic and reasoning, and another school based on biological neural nets.
  • ⚙️ Neural networks did not work well in the 1980s because computers were too slow and datasets too small.
  • 💡 Learning in neural networks is modeled on how the brain works, which also relates to the human ability to learn reading, writing, and mathematics.
  • 📈 The rise of deep learning was a key turning point, enabling complex learning with neural nets that have multiple layers of representation.
  • 🖥️ Backpropagation is the process of adjusting the strength of each connection in the network when a prediction is wrong, so that future predictions are more accurate.
  • 🔬 The progress of image-recognition systems, which can now identify objects accurately, became one of AI's major achievements.
  • 🤔 The process by which AI understands language goes beyond merely predicting the next word and requires real understanding.
  • 🚀 The development of AI technology could affect human life on the same scale as the Industrial Revolution or the invention of electricity.

Q & A

  • Given the public's reaction to ChatGPT, how did he react the first time he used it?

    -ChatGPT itself did not surprise him much, but he was somewhat surprised by the scale of the public's reaction.

  • What are the two main schools of thought in AI and neural networks?

    -One is "mainstream AI," which was based on logic and reasoning. The other school, "neural networks," focused on how biological systems learn, that is, on changes in the connections between neurons.

  • What was the main reason neural networks did not work well in the 1980s?

    -Computers were not powerful enough and datasets were not large enough.

  • Does he think the brain uses the same "backpropagation" method used to train ChatGPT?

    -No. He does not think the brain uses backpropagation, and he believes there is a fundamental difference between how AI and the brain learn.

  • What important change took place in AI research in 2006?

    -Research on deep learning began, allowing neural networks with many layers to learn complicated things.

  • How does the neural-network approach to image recognition differ from traditional methods?

    -A neural network starts from random weights and learns feature detectors automatically by adjusting the weights to minimize error, whereas traditional methods relied on people defining features by hand.

  • Why did neural-network research accelerate?

    -Increases in computing power and in the amount of available data made it possible to build larger, more complex neural networks.

  • What is the main difference between neural networks and the human brain?

    -Neural networks run on digital computers and require a lot of computing power, whereas the human brain is analog and runs on far less energy.

  • What is his view on whether coding skills will still be needed in the future?

    -Advances in AI may change the nature of coding work, with more emphasis placed on its creative aspects.

  • Why does he think AI and brain research are heading down different paths?

    -AI's progress depends on backpropagation, but he does not believe the brain uses the same mechanism, so he thinks the two are diverging theoretically.

Outlines

00:00

🤖 The rise of AI and the impact of ChatGPT

Discusses the current state of AI and machine learning, how ChatGPT produced surprising results and caught the public's attention, and how Microsoft's release made its existence widely known.

05:01

🧠 Comparing how AI and the brain learn

Compares the learning processes of AI and the brain, particularly the similarities between neural networks and brain function, and reviews earlier AI theories, developments up to the present, and the key events along the way.

10:04

🖼️ Advances in image recognition

Describes the progress of image-recognition technology and the impact deep learning had on the field, including the construction of large image databases and the results achieved by training on them.

15:06

🌐 The future of AI technology

Discusses the future outlook for AI, particularly low-power operation and the relationship between AI and humans, and touches on the political and economic challenges AI raises.

20:07

🚀 The accelerating pace of AI and its impact

Notes that AI development is accelerating and discusses its possible effects on society, the threats it may pose, and the efforts needed to manage them.

25:07

🛠️ Canada's role in AI research

Explains how AI research developed in Canada and how the country secured leadership in the field, touching on how research funding is provided and how exchanges among researchers are supported.


Keywords

💡ChatGPT

ChatGPT is a large language model and the central topic of this video. It was the technology that made the general public broadly aware of progress in AI. The transcript discusses the huge response when ChatGPT was released and how it marked a "pivotal moment" for AI technology.

💡Neural network

A neural network is a computer algorithm modeled on the brain's network of neurons. The video stresses the importance of neural networks in the development of AI and machine learning, in particular the rival approach that preceded the field's focus on neural networks, and how neural networks now make learning and recognition possible.

💡Deep learning

Deep learning is a technique that uses multi-layer neural networks to learn complex patterns. The video describes how the deep-learning era, beginning around 2006, was a turning point in AI research, citing its success in speech and image recognition and how that contributed to the technology's progress.

💡Backpropagation

Backpropagation is a method for training neural networks that adjusts the weights by propagating errors backwards. The transcript explains how this algorithm overcame the problems of early neural networks and made learning possible, with a detailed account of its application to image recognition.

💡Transformer

The transformer is a model built on a self-attention mechanism that revolutionized language-processing tasks in particular. The video notes that the transformer is the foundational technology behind ChatGPT and that its development opened a new era in deep-learning research.

💡AI ethics

AI ethics deals with the ethical and social questions raised by the development and application of AI. The video includes a deeper reflection on AI's impact, discussing truthfulness, autonomous weapons, the future of work, and other potential effects of AI on human society.

💡General AI

General AI (artificial general intelligence) refers to AI that can perform any intellectual task as well as or better than a human. The video voices concern that such AI may be approaching and touches on its potential dangers.

💡Dataset

A dataset is the collection of data used to train a machine-learning model. The video explains that learning from limited datasets was once a major challenge, and that today's large-scale datasets contribute to improved AI model performance.

💡Computing power

Computing power refers to the processing speed and memory a computer needs to train and run AI models. The video describes how past AI research was constrained by limited computing power, and how modern high-performance computing has removed those constraints.

Highlights

ChatGPT's impact demonstrates the capabilities of large language models, surprising many with its rapid adoption.

Early AI models like GPT-2 and Google's joke-explaining model showcased the potential of AI in natural language understanding.

The public's strong reaction to ChatGPT highlighted the growing interest and potential applications of AI technology.

AI's evolution from focusing on logic and reasoning to neural networks and learning through connections.

The significance of computational power and data availability in the advancement of neural networks since the 1980s.

The role of deep learning in enabling AI to tackle complex tasks like speech recognition and image identification.

Backpropagation's role in improving AI's learning capabilities, despite doubts about its similarity to brain functions.

The transformation in object recognition through neural networks, leading to significant improvements over traditional AI methods.

The emergence of generative models and transformers as key technologies in advancing AI capabilities.

The ongoing challenge of creating AI that can understand and replicate human reasoning and creativity.

The potential for AI to transform various industries and occupations, complementing human creativity and efficiency.

The necessity of considering ethical implications and governance in the development and deployment of AI technologies.

The debate over AI's understanding of truth and its ability to reconcile different worldviews.

The potential risks and benefits of AI in autonomous weapons and the importance of global cooperation in governance.

The foundational role of curiosity-driven research in AI's development and the significance of basic research funding.

The philosophical and practical questions surrounding AI sentience and its implications for society and technology.

Transcripts

00:00

Interviewer: How would you describe this current moment in AI, machine learning, whatever we want to call it?

Hinton: I think it's a pivotal moment. ChatGPT has shown that these big language models can do amazing things, and the general public has suddenly caught on.

Interviewer: Yeah, we have, because Microsoft released something.

Hinton: And they're suddenly aware of stuff that people at the big companies have been aware of for the last five years.

Interviewer: What did you think the first time you used ChatGPT?

Hinton: Well, I've used lots of things that came before ChatGPT that were quite similar, so ChatGPT itself didn't amaze me much. GPT-2, which was one of the earlier language models, amazed me, and a model at Google amazed me because it could actually explain why a joke was funny.

Interviewer: Oh really? In just natural language, it'll tell you?

Hinton: Yeah, you tell it a joke. Not for all jokes, but for quite a few of them, it can tell you why it's funny. And it seems very hard to say it doesn't understand when it can tell you why a joke's funny.

Interviewer: So if ChatGPT wasn't all that surprising or impressive, were you surprised by the public's reaction to it? Because the reaction was big.

Hinton: Yes, I think everybody was a bit surprised by how big the reaction was, that it was the sort of fastest-growing app ever. Maybe we shouldn't have been surprised, but the researchers had kind of got used to the fact that these things actually worked.

Interviewer: You were famously, like, half a century ahead of the curve on this AI stuff. Go ahead, correct me.

Hinton: Not really, because there were two schools of thought in AI. There was mainstream AI, which thought it was all about reasoning and logic, and then there was neural nets, which we wouldn't call AI back then, which thought that you'd better study biology, because those were the only things that really worked. So mainstream AI based its theories on reasoning and logic, and we based our theories on the idea that connections between neurons change, and that's how you learn. And it turned out, in the long run, we came up trumps, but in the short term it looked kind of hopeless.

Interviewer: Looking back, knowing what you know now, do you think there's anything you could have said then that would have convinced people?

Hinton: I could have said it then, but it wouldn't have convinced people. What I could have said then is that the only reason neural networks weren't working really well in the 1980s was that the computers weren't fast enough and the datasets weren't big enough. But back in the '80s, the big issue was this: could you expect a big neural network, with lots of neurons in it, compute nodes and connections between them, that learns by just changing the strengths of the connections, to just look at data and, with no kind of innate prior knowledge, learn how to do things? And people in mainstream AI thought that was completely ridiculous.

Interviewer: It sounds a little ridiculous.

Hinton: It is a little ridiculous, but it works.

Interviewer: And how did you know, or why did you intuit, that it would work?

Hinton: Because the brain works. You have to explain how come we can do things, and how come we can do things we didn't evolve for. Reading is much too recent for us to have had significant evolutionary input to it, but we can learn to do it. And mathematics, we can learn that too. So there must be a way to learn in these neural networks.

Interviewer: Yesterday Nick Frosst, who used to work with you, told us that you're not really that interested in creating AI; your core interest is just in understanding how the brain works.

Hinton: Yes, I'd really like to understand how the brain works. Obviously, if your failed theories of how the brain works lead to good technology, you cash in on that and get grants and things. But I really would like to know how the brain works, and I think there's currently a divergence between the artificial neural networks that are the basis of all this new AI and how the brain actually works. I think they're going different routes now.

Interviewer: So we're still not going about it the right way?

Hinton: That's what I believe. This is my personal opinion.

Interviewer: All of the big models now use a technique called backpropagation, which you helped popularize.

Hinton: Popularize, in the '80s. Very good. And I don't think that's what the brain is doing.

Interviewer: Explain why.

Hinton: Okay, there's a fundamental difference: there are two different paths to intelligence. One path is a biological path, where you have hardware that's a bit flaky and analog. So what we have to do is communicate by using natural language, and also by showing people how to do things, imitation and things like that. But instead of being able to communicate a hundred trillion numbers, we can only communicate what you could say in a sentence, which is not that many bits per second. So we're really bad at communicating compared with these current computer models that run on digital computers.

Interviewer: That communication bandwidth is huge, almost infinite.

Hinton: Yeah, because they're exactly the same model; they're clones of the same model running on different computers. And because of that, they can see huge amounts of data, because different computers can see different data, and then they can combine what they learned.

Interviewer: More than any person could ever comprehend.

Hinton: Far more than any person could. And yet somehow we're still smarter than them. They're like idiot savants. ChatGPT knows much more than any one person; if you had a competition about how much you know, it would just wipe out any one person.

Interviewer: It was amazing at bar trivia.

Hinton: Yes, it would do amazingly. And it can write poems, it can do all that. But they're not so good at reasoning. We're better at reasoning. We have to extract our knowledge from much less data. We've got a hundred trillion connections, most of which we learn, but we only live for a billion seconds, which isn't very long. Whereas things like ChatGPT have run for much more time than that to absorb all this data, but on many different computers.
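The bandwidth gap Hinton gestures at can be put in rough numbers. The constants below (about 10 bits of information per English word, a few words per second of speech, "a hundred trillion" connections from the interview) are back-of-the-envelope assumptions for illustration, not figures from the interview:

```python
# Rough comparison of knowledge-transfer bandwidth:
# speech between humans vs. the weights a model clone could share.
# All constants are order-of-magnitude assumptions.

BITS_PER_WORD = 10           # assumed information content of a word
WORDS_PER_SECOND = 3         # assumed conversational speaking rate
speech_bps = BITS_PER_WORD * WORDS_PER_SECOND   # ~30 bits/s

CONNECTIONS = 100e12         # "a hundred trillion" connections
BITS_PER_WEIGHT = 16         # e.g. half-precision parameters

total_bits = CONNECTIONS * BITS_PER_WEIGHT
seconds = total_bits / speech_bps
years_to_speak = seconds / (3600 * 24 * 365)
print(f"Speaking the weights aloud would take ~{years_to_speak:.1e} years")
```

On these assumptions the answer comes out in the millions of years, which is the sense in which language is "not that many bits per second."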

06:15

Interviewer: In 1986 you published a thing in Nature with the idea that we're going to have a sentence of words and it'll predict the last word. That was the first language model, and that's basically what we're doing now.

Hinton: Yes, and no. 1986 was a long time ago.

Interviewer: Why did people still not say, "Okay, I think he's on to something"?

Hinton: Because back then, if you asked how much data I trained that model on, I had a little simple world of just family relationships. There were 112 possible sentences, and I trained it on 104 of them and checked whether it got the last eight right.

Interviewer: And how did it do?

Hinton: It got most of the last eight right. It did better than symbolic AI.

Interviewer: So it's just that the computers weren't powerful enough at the time?

Hinton: The computers we have now are millions of times faster; they're parallel, and they can do millions of times more computation. So I did a little computation: if I'd taken the computer I had back in 1986 and started learning something on it, it would still be running now and not have got there. And that's stuff that would now take a few seconds to learn.

Interviewer: Did you know that's what was holding you back?

Hinton: I didn't know it; I believed that might be what was holding us back. But people sort of made fun of the idea. The claim that "if I just had a much bigger computer and much more data, everything would work, and the reason it doesn't work now is that we haven't got enough data or enough compute" was seen as a sort of lame excuse for the fact that your thing doesn't work.

Interviewer: Was it hard in the '90s, doing this work?

Hinton: In the '90s computers were improving, but yes. There were other learning techniques that on small datasets worked at least as well as neural networks, were easier to explain, and had much fancier mathematical theory behind them. So people within computer science lost interest in neural networks. Within psychology they didn't, because within psychology they're interested in how people might actually learn, and those other techniques looked even less plausible than backpropagation.

Interviewer: Which is an interesting part of your background. You came to this not because you were interested in computers, necessarily, but because you were interested in the brain.

Hinton: Yes. I decided I was interested in psychology originally; then I decided we were never going to understand how people work without understanding the brain. The idea that you could do it without worrying about the brain was a sort of fashionable idea back in the '70s, but I decided that wasn't on. You had to understand how the brain worked.
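The protocol Hinton describes, training on 104 of 112 fixed sentences and scoring last-word predictions on the held-out eight, can be sketched with a toy model. This is not his 1986 network (which learned distributed representations of family relationships); it is a minimal bigram-count predictor, with invented sentences, that shows only the train/hold-out evaluation shape:

```python
from collections import Counter, defaultdict

# Toy corpus of short fixed sentences (stand-ins for the 112
# family-relationship sentences; the real data is not reproduced here).
sentences = [
    "colin has father james",
    "colin has mother victoria",
    "james has wife victoria",
    "charlotte has mother victoria",
]
train, held_out = sentences[:3], sentences[3:]

# Count which word follows which on the training sentences only.
follows = defaultdict(Counter)
for s in train:
    words = s.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def predict_last(sentence):
    """Predict the final word from the word just before it."""
    *context, target = sentence.split()
    prev = context[-1]
    if not follows[prev]:
        return None
    return follows[prev].most_common(1)[0][0]

# Score the held-out sentences, as Hinton did with his last eight.
for s in held_out:
    print(s, "->", predict_last(s))
```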

08:47

Interviewer: So we fast-forward to the 2000s. Is there a key moment you think back to as a turning point, when it's like, okay, our side is going to prevail?

Hinton: Around 2006 we started doing what we call deep learning. Before then, it had been hard to get neural nets with many layers of representation to learn complicated things, and we found better ways of doing it, better ways of initializing the networks, called pre-training. The P in ChatGPT stands for pre-trained; the T is transformer, and the G is generative. And it was actually generative models that provided this better way of pre-training neural nets. So the seeds of it were there in 2006, and by 2009 we'd already produced something that was better than the best speech recognizers at recognizing which phoneme you were saying, using different technology from all the other speech recognizers, the standard approach that had been tuned for 30 years. There were other people using neural nets, but they weren't using deep neural nets.

Interviewer: And then a big thing happens in 2012.

Hinton: Yes, actually two big things. One is that the speech-recognition research we'd done in 2009, done by two of my students over a summer, got disseminated to all the big speech-recognition labs: Microsoft and IBM and Google. In 2012 Google was the first to get it into a product, and suddenly speech recognition on Android became as good as Siri, if not better. So that was a deployment of deep neural nets applied to speech recognition three years earlier. At the same time, within a few months of that happening, two other students of mine developed an object-recognition system that would look at images and tell you what the object was, and it worked much better than previous systems.

Interviewer: How did this system work?

Hinton: There was someone called Fei-Fei Li who, with her collaborators, created a big database of images, like a million images of a thousand different categories. You'd have to look at an image and give your best guess about what the primary object was. The images would typically have one object in the middle, and you'd have to say things like "bullet train" or "husky". The other systems were getting something like 25% errors, and we were getting something like 15% errors. Within a few years that 15% went down to 3%, which was about human level.

play11:26

explain in a way people would understand

play11:27

the difference between the way they were

play11:29

doing it and the way your team did it

play11:31

I can try that's all we can hope for

play11:35

okay

play11:36

so suppose you wanted to recognize a

play11:38

bird in an image okay the image itself

play11:41

let's suppose it's a

play11:43

200 by 200 image

play11:46

that's got 200 times 200 pixels

play11:49

and each pixel has three values for the

play11:51

three colors RGB

play11:53

and so you've got 200 by 200 by 3

play11:57

numbers in the computer it's just

play11:58

numbers in the computer right

play12:00

and the job is

play12:02

to take those numbers in the computer

play12:04

and convert them to a string that says

play12:07

bird

play12:08

so how would you go about doing that and

play12:09

for 50 years people in standard AI tried

play12:12

to do that and couldn't

play12:14

got a bunch of numbers into a label that

play12:18

says bird

play12:19

so here's a way you might go about it

play12:21

at the first level of features you might

play12:23

make feature detectors things that you

play12:25

take little combinations of pixels okay

play12:27

so you might make a feature detector

play12:29

that said look if all these pixels are

play12:31

dark and all these pixels are bright I'm

play12:34

going to turn on okay and so that

play12:36

feature detector would represent an edge

play12:38

here okay a vertical Edge you might have

play12:40

another one that said if all these

play12:41

pixels are bright and all these picks as

play12:43

a dark I'll turn on that would be if

play12:45

each detector that represent in a

play12:46

horizontal Edge okay and you can have

play12:48

others for edges of different organs we

play12:49

had a lot of work to do all we've done

play12:50

is made a box right so we've got to have

play12:53

a whole lot of feature settings like

play12:54

that and that's what you actually have

play12:55

in your brain okay so if you look in a

play12:57

cattle monkey cortex it's got feature

play12:59

detectors like that

play13:01

um then at the next level

play13:03

you might say if you were worried up by

play13:06

hand you would create all these little

play13:07

feature detectors at the next level you

play13:10

would say

play13:11

um okay suppose I have two two Edge

play13:14

detectors that join at a fine angle

play13:17

that could just be a beak so the next

play13:19

level up will have a feature detector

play13:21

that detects two of the lower level

play13:22

detectors joining a fine angle okay

play13:25

we might also

play13:26

notice a bunch of edges that sort of

play13:28

form a circle we might have a detector

play13:30

for that okay then the next level up we

play13:32

might have a detector that says hey I

play13:34

found this beak like thing and I find a

play13:37

circular thing in roughly the right

play13:38

spatial relationship to make the eye and

play13:40

the beak of a bird

play13:41

and so at the next level up you'd have a

play13:43

bird detector that says if I see those

play13:45

two there I think it might be a bird

play13:47

okay and you could imagine wiring all

play13:50

that up by hand okay and so the idea of

play13:53

back propagation is

play13:54

just put in random weights to begin with

play13:57

and now the featured textures would just

play13:59

be rubbish whether it be garbage okay

play14:01

okay but look to see what it predicts

play14:04

and if it happened to predict bird it

play14:06

wouldn't but if it happened to leave the

play14:08

weights alone

play14:09

um you got it right the connection

play14:10

strings but if it predicts cat

play14:13

then what you do is you go backwards

play14:15

through the network and you ask the

play14:16

following question

play14:18

and you can ask this with a branch of

play14:20

mathematics called calculus but you just

play14:21

need to think about the question and the

play14:23

question is how should I change this

play14:25

connection strength so it's less likely

play14:28

to say cat I'm more likely to say bird

play14:30

that's called the ER the error the

play14:32

discrepancy right okay and you figure

play14:34

out for every connection strength how I

play14:36

should change a little bit to make it

play14:38

more likely to say bird and less likely

play14:40

to say cat and a person's figuring that

play14:42

out or the algorithm is set to work a

play14:44

person has said this is a bird so a

play14:48

person looked at the image and said it's

play14:49

a bird it's not a cat it's a bird so

play14:51

that's a label supplied by a person

play14:54

but then the algorithm back propagation

play14:56

is just a way of figuring out how to

play15:00

change every connection strength to make

play15:02

it more likely to say burden less likely

play15:04

to say cat it just keeps trying keep

play15:05

turning it just keeps doing that and now

play15:07

if you showed enough birds and enough

play15:09

cats when you showed a bird it'll say

play15:11

burden when you showed a cat it'll say

play15:12

cat and it turns out that works much

play15:15

much better than trying to wire

play15:16

everything by hand and that's what your

play15:18

students did on this image database

play15:20

that's why they did on the image check

play15:21

device yes and they got it to work

play15:22

really well now they were very clever

play15:25

students in fact one of them Ilya

play15:27

sutskova

play15:28

is also one of the main people buying

play15:30

chat gbt

play15:32

so that was a huge moment in Ai and chat

play15:34

gbt was another huge moment and he was

play15:36

actually involved in both of them yeah

play15:38

yeah
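The loop Hinton describes (predict, compare with the person-supplied label, and nudge every connection strength toward "bird" and away from "cat") can be sketched in a few lines. The tiny network below, with two invented inputs standing in for "beak-like" and "circle-like" feature detectors and one weight per connection, is an illustrative toy, not the students' actual system:

```python
import math

# Two feature values for one labeled bird image (invented numbers).
x = [1.0, 0.8]
# One weight per (feature, class) connection; a poor starting point,
# standing in for "random weights to begin with".
w = {"bird": [0.1, -0.2], "cat": [0.3, 0.4]}
LABEL = "bird"   # the label supplied by a person
LR = 0.5         # how big a nudge each connection gets

def predict(x, w):
    """Softmax over the two class scores: probabilities for bird/cat."""
    scores = {c: sum(wi * xi for wi, xi in zip(w[c], x)) for c in w}
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

for step in range(50):
    p = predict(x, w)
    # For every connection strength, move it a little so the network
    # is more likely to say "bird" and less likely to say "cat".
    for c in w:
        err = p[c] - (1.0 if c == LABEL else 0.0)
        w[c] = [wi - LR * err * xi for wi, xi in zip(w[c], x)]

print(predict(x, w))  # the "bird" probability now dominates
```

In a deep network the same weight updates are computed layer by layer with the chain rule, which is the "branch of mathematics called calculus" in Hinton's description.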

15:39

Interviewer: I don't know, maybe it's cold in the room, but as you got to the end of the story I got shivers. The idea that you do this little dial thing and it says "bird" feels like just an amazing breakthrough.

Hinton: Yeah, it was, mainly because the other people in computer vision thought: okay, these neural nets work for simple things like recognizing a handwritten digit, but that's not a real, complicated image with a natural background and stuff; it's never going to work for these big, complicated images. And then suddenly it did. And to their credit, the people who'd been really staunch critics of neural nets and said these things were never going to work did something that scientists don't normally do when they worked: they said, "Oh, it worked. We'll do that."

Interviewer: People see it as a huge shift.

Hinton: Yes, it was quite impressive that they flipped very fast, because they saw that it worked better than what they were doing.

Interviewer: You make this point that when people are thinking both about these machines and about ourselves, about the way we think, they assume language in, language out, so it must be language in the middle. And this is an important misunderstanding. Can you just explain that?

Hinton: I think that's complete rubbish. If that were true, and it were just language in the middle, you'd have thought that approach, which is called symbolic AI, would have been really good at doing things like machine translation, which is just taking English in and producing French out, or something. You'd have thought manipulating symbols was the right approach for that. But actually neural networks work much better: Google Translate got really much better when it switched from that kind of approach to using neural nets. What I think you've got in the middle is millions of neurons, some of them active and some of them not, and that's what's in there. The only place you'll find the symbols is at the input and at the output.

17:30

Interviewer: We're not exactly at the University of Toronto, but we're close to it, and at universities here and around the world we're teaching a lot of people to code. Does it still make sense to be teaching so many people to code?

Hinton: I don't know the answer to that. In about 2015 I famously said it didn't make sense to be teaching radiologists to recognize things in images, because within the next five years computers would be better at it.

Interviewer: Are we all about to be radiologists, though?

Hinton: Well, computers are not better yet; I was wrong. It's going to take ten years, not five. I wasn't wrong in spirit; I just got a factor of two. Computers are now comparable with radiologists on a lot of medical images. They're not way better at all of them yet, but they will get better. So I think there'll be a while when it's still worth having coders, and I don't know how long that'll be, but we'll need fewer of them. Maybe. Or we'll need the same number and they'll be able to achieve a whole lot more.

play18:29

We were talking about Cohere. We went over and visited them yesterday; you're an investor in them. Maybe the question is just how they convinced you. What was the pitch that made you say, "I want to invest in this"?

So, they're good people, and I've worked with several of them. And they were one of the first companies to realize that you need to take these big language models being developed at places like Google and OpenAI and make them available to companies. It's going to be enormously valuable to companies to be able to use these big language models. So that's what they've been doing, and they've got a significant lead in that. That's why I think they're going to be successful.

Another thing you've said that I just find fascinating, so I want to get you to talk about it, is the idea that there'll be a new kind of computer suited to this problem. What is that idea?
So, there's the biological route to intelligence, where every brain is different and we have to communicate knowledge from one to another by using language. And there's the current AI version of neural nets, where you have identical models running on different computers, and they can actually share the connection strengths, so they can share billions of numbers. This is how we make a bird: they can share all the connection strengths for recognizing a bird. One can learn to recognize cats and the other can learn to recognize birds, and they can share their connection strengths, and now each of them can do both things. That's what's happening in these big language models: they're sharing. But that only works on digital computers, because they have to be able to do identical things, and you can't make different biological brains behave identically, so you can't share the connections.
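The sharing he describes can be sketched in a few lines. This is an illustrative toy, not Hinton's actual setup: two identical copies of a linear model each see a different slice of the data (the "cats" and "birds" of his analogy), exchange gradients every step, and so stay identical while each ends up knowing what the other saw.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ true_w

# split the data: copy A sees one half, copy B the other
XA, yA = X[:100], y[:100]
XB, yB = X[100:], y[100:]

w = np.zeros(3)  # both copies hold the same weights
for _ in range(500):
    gA = XA.T @ (XA @ w - yA) / len(yA)  # what copy A learned this step
    gB = XB.T @ (XB @ w - yB) / len(yB)  # what copy B learned this step
    w -= 0.05 * (gA + gB) / 2            # pool the learning; copies stay identical

# the single shared weight vector now fits BOTH halves of the data
assert np.allclose(w, true_w, atol=1e-3)
```

Biological brains have no analogue of this step: since no two brains have the same wiring, there is no meaningful way to copy one brain's connection strengths into another, which is Hinton's point.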

But why wouldn't we stick with digital computers?

Because of the power consumption. You need a lot of power. It's getting less as chips get better, but you need a lot of power to do this. To run a digital computer, you have to run it at such high power that it behaves exactly in the right way. Whereas if you're willing to run at much lower power, like the brain does, then you allow a bit of noise, but that particular system will adapt to the kind of noise in that particular system, and the whole thing will work even though you're not running it at such high power that it behaves exactly as you intended. And the difference is: the brain runs on 30 watts, and a big AI system needs something like a megawatt. We're training on 30 watts, and these big AI systems, because they've got lots of copies of the same thing, are using something like a megawatt. So you know, you're talking a factor of the order of a thousand in the power requirements. And so I think there's going to be a phase when we train on digital computers, but once something's trained, we run it on very low-power systems. So if you want your toaster to be able to have a conversation with you, and you want a chip in it that only costs a couple of dollars but can do ChatGPT, that had better be a low-power analog chip.
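Taking his two round figures at face value (they are conversational estimates, not measurements), the gap is easy to compute, and is if anything larger than his "factor of a thousand":

```python
brain_watts = 30           # Hinton's figure for the brain
big_ai_watts = 1_000_000   # "like a megawatt" for a big AI system

ratio = big_ai_watts / brain_watts
print(f"power gap: about {ratio:,.0f}x")  # power gap: about 33,333x
```

Even allowing for loose numbers, the gap is several orders of magnitude, which is why he expects inference to move to cheap, low-power analog hardware once training is done.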

What are kind of the next things you think this technology will do that will impact people's lives?

It's hard to pick one thing. I think it's going to be everywhere. It's already sort of getting to be everywhere; ChatGPT has just made a lot of people realize that it's going to be everywhere. But it already is: when Google does search, it uses big neural nets to help decide what's the best thing to show you. We're at a transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth. It's being trained on lots of inconsistent data. It's trying to predict what someone will say next on the web, and people have different opinions, and it has to have a kind of blend of all these opinions so that it can model what anybody might say. It's very different from a person who tries to have a consistent world view. Particularly if you want to act in the world, it's good to have a consistent world view. And I think one thing that's going to happen is we're going to move towards systems that can understand different world views and can understand that, okay, if you have this world view, then this is the answer, and if you have that other world view, then that's the answer.

We get our own truths.

Well, that's the problem, right? Because what you and I probably believe, unless you're an extreme relativist, is that there actually is a truth to the matter.

Certainly on many topics.

On many topics, or even most topics. Like, the Earth is actually not flat; it just looks flat.

Right. So do we really want a model that says, well, for some people...?

We don't know. That's going to be a big issue, and we don't know how to deal with it at present.

And I don't think Microsoft knows how to deal with it either.

They don't.

And it seems to be a huge governance challenge. Who makes these decisions?
It's very tricky. You don't want some big for-profit company deciding what's true.

But they're controlling how we tune the neurons.

Google is very careful not to do that at present. What Google will do is refer you to relevant documents, which will have all sorts of opinions in them.

Well, they haven't released their chat product, at least as we speak. But we've seen that the people who have released chat products feel there are certain things they don't want said by their voice, so they go in there and meddle with it so it won't say offensive things.

Yeah, but there's a limit to what you can do that way. There's always going to be things you didn't think of. So I think Google is going to be far more careful than Microsoft when it does release its chatbot, and it'll probably come with lots of warnings: this is just a chatbot, don't necessarily believe what it says.

Careful in the labeling, or careful in the way they meddle with it so it doesn't do lousy things?

All of those things. Careful in how they present it as a product, and careful in how they train it, and a lot of work to prevent it from saying bad things.

Well, who gets to decide what a bad thing is?

Some bad things are fairly obvious, but many of the most important ones are not. So that is a big open issue at present. I think Microsoft was extremely brave to release ChatGPT.

Do you see this as a larger... some people see this as a larger societal thing: we need either regulation or big public debates about how we handle these issues.

Well, when it comes to the issue of what's true, I mean, do you want the government to decide what's true?
That's the free speech problem, right?

Yeah. You don't want the government doing it either.

I'm sure you've thought deeply on this question for a long time. How do we navigate the line between "you just send it out into the world" and "we find ways to curate it"?

Like I say, I don't know the answer, and I don't believe anybody really knows how to handle these issues. We're going to have to learn quite fast how to handle them, because it's a big problem at present. How it's going to be done, I don't know. But I suspect that, as a first step at least, these big language models are going to have to understand that there are different points of view, and that the completions they make are relative to a point of view.

Some people are worried that this could take off very quickly, and we just might not be ready for that. Does that concern you?
It does a bit. Until quite recently, I thought it was going to be like 20 to 50 years before we probably have general-purpose AI. And now I think it may be 20 years or less.

Okay. Some people think it could be like five. Is that silly?

I wouldn't completely rule that possibility out now, whereas a few years ago I would have said no way.

Okay. And then some people say AGI could be massively dangerous to humanity, because we just don't know what a system that's so much smarter than us will do. Do you share that concern?
I do a bit. I mean, obviously, what we need to do is make this synergistic, have it so it helps people. And I think the main issue here, well, one of the main issues, is the political systems we have. So I'm not confident that President Putin is going to use AI in ways that help people.

Like, even if, say, the US and Canada and a bunch of countries say, okay, we're going to put these guardrails up, then how do you...

Yeah, it's pretty much like that for things like autonomous lethal weapons. We'd like to have something like the Geneva Conventions. Like chemical weapons: people decided they were so nasty they weren't going to use them, except just occasionally, but basically they don't use them. People would love to get a similar treaty for autonomous lethal weapons, but I don't think there's any way they're going to get that. I think if Putin had autonomous lethal weapons, he would use them right away.

This is like the most pointed version of the question, and you can just laugh it off or not answer it if you want, but what do you think the chances are of AI just wiping out humanity? Can we put a number on that?
It's somewhere between zero and a hundred percent.

Okay.

I think it's not inconceivable.

Okay.

That's all I'll say. I think if we're sensible, we'll try and develop it so that it doesn't. But what worries me is the political situation we're in, where it needs everybody to be sensible.

There's a massive political challenge, it seems to me, and there's a massive economic challenge, in that you can have a whole lot of individuals who pursue the right course, and yet the profit motive of corporations may not be as cautious as the individuals who work for them.
Maybe. I mean, I only really know about Google; that's the only corporation I've worked in, and they've been among the most cautious. They're extremely cautious about AI, because they've got this wonderful search engine that gives you the answers you want to see, and they can't afford to risk that. Whereas Microsoft has Bing. Well, if Bing disappeared, Microsoft would hardly notice.

But it was easy for Google to take it slow when there wasn't someone nipping at their heels, and this seems to be...

Exactly. So Google has actually been in the lead. I mean, Transformers were invented at Google; the early big language models were at Google.

But they kind of kept it in their lab. They're being much more conservative.

Yes. But now they feel this pressure, and so they're developing a system called Bard that they're going to put out there, and they're doing lots and lots of testing of it. But they're going to be, I think, a lot more cautious than Microsoft.
You mentioned autonomous weapons. Let me give you a chance to just tell the story: what's the connection between that and how you ended up in Canada?

Okay, there were several reasons I came to Canada, but one of them was certainly not wanting to take money from the US defense department. This was at the time of Reagan, when they were mining the harbors in Nicaragua. And it was interesting: I was at a big university in Pittsburgh, and I was one of the few people there who thought that mining the harbors in Nicaragua was really wrong. So I felt like a fish out of water.

And you saw that this was where the money was coming from for this kind of work.

In that department, almost all the money came from the defense department.

You started to talk about the concerns that bringing this technology to warfare could present. What are your concerns?
Oh, that the Americans would like to replace their soldiers by AI soldiers, and they're trying to work towards that.

And what evidence do you see of that?

I'm on a mailing list from the US defense department. I'm not sure they know I'm on the mailing list.

It's a big list; they didn't notice you're there. You might be off tomorrow.

I might be off tomorrow.

What's on the list?

Oh, they just describe various things they're going to do. There are some disgusting things on there.

Okay, what disgusted you?

The thing that disgusted me most was a proposal for a self-healing minefield. So the idea is, look at it from the point of view of the minefield: when some silly civilian trespasses into the minefield, they get blown up, and that makes a hole in the poor minefield. It's got a gap in it now, so it's not fit for purpose. So the idea was, maybe nearby mines could communicate, or maybe they could move over a bit. And they call that healing. It was just the idea of talking about healing for these things that blow the legs off children.

And the healing being about the minefield healing.

Yeah. That disgusted me.
There is this argument, though, that the autonomous systems might play a role in helping the warfighter, but it's ultimately a human making the decision.

Here's what worries me: if you wanted to make an effective autonomous soldier, you'd need to give it the ability to create sub-goals. In other words, it has to realize things like, okay, I want to kill that person over there, but how am I going to get over there? And then it has to realize, well, if I could get to that road, I could get there more quickly. So it has a sub-goal of getting to the road. As soon as you give it the ability to create its own sub-goals, it's going to become more effective.
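The sub-goal idea can be made concrete with a hypothetical toy planner (the state names and costs are invented for illustration; nothing here is from a real system): given a goal, it considers intermediate states, and if one makes the goal cheaper to reach, it adopts that state as a sub-goal of its own invention.

```python
# invented costs between named states
COST = {
    ("field", "target"): 10,  # going directly is slow
    ("field", "road"): 2,     # the road is nearby...
    ("road", "target"): 3,    # ...and fast to travel on
}

def plan(start, goal, candidates=("road",)):
    """Return the cheapest route, inserting a sub-goal if it helps."""
    best, best_cost = [start, goal], COST[(start, goal)]
    for mid in candidates:  # consider adopting each state as a sub-goal
        cost = COST[(start, mid)] + COST[(mid, goal)]
        if cost < best_cost:
            best, best_cost = [start, mid, goal], cost
    return best, best_cost

route, cost = plan("field", "target")
print(route, cost)  # ['field', 'road', 'target'] 5
```

The point of the passage is not the search itself but that once an agent invents intermediate objectives on its own, you no longer directly choose what those objectives are, which is the alignment problem he goes on to describe.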

And so people like Putin are going to want robots like that. But as soon as it's got an ability to create sub-goals, you have what's called the alignment problem, which is: how are you sure it's not going to create sub-goals that are not good for people? Not good for you.

Who knows who's on that road?

Who knows who's on that road. And if these systems are being developed by the military, the idea of wiring in some rule that says "never hurt a person"...

Well, they're being designed to kill people.
Do you see any way out of this? Is it a treaty? What is it?

I think the best bet is something like a Geneva Convention, but it's going to be very difficult. I think if there was a lot of public outcry, that might persuade... I can imagine the Biden administration going for something like that with enough public outcry. But then you have to deal with Putin.

Yeah.
Okay, we've covered so much. I think I have like two more things.

There's one more thing I want to say.

Yeah, yeah, go for it.

You can ask me the question: some people say that these big models are just autocomplete.

Well, on some level the models are autocomplete. We're told that the large language models are just predicting the next word. Is it not that simple?

No, that's true: they are just predicting the next word, and so they're just autocomplete. But ask yourself the question: what do you need to understand about what's being said so far in order to predict the next word accurately? Basically, you have to understand what's being said. So you're just autocomplete too. When you hear sentences, you can predict the next word, maybe not as well as ChatGPT, but to do that you have to understand the sentence. So let me give you a little example from translation; it's a very Canadian example. Suppose I take the sentence "The trophy would not fit in the suitcase because it was too big," and I want to translate that into French. Well, when I say "the trophy would not fit in the suitcase because it was too big," you assume the "it" refers to the trophy.

I do.

And in French, trophy has a particular gender, so you know what pronoun to use. But suppose I say "The trophy would not fit in the suitcase because it was too small." Now you think that "it" refers to the suitcase, right? And that has a different gender in French. So in order to translate that sentence into French, you have to know that when it wouldn't fit in because it was too big, it's the trophy that's too big, and when it wouldn't fit in because it was too small, it's the suitcase that's too small. And that means you have to understand about spatial relations and containment and so on. So you have to understand, just to do machine translation, or just to predict that pronoun. If you want to predict that pronoun, you've got to understand what's being said. It's not enough just to treat it as a string of words.
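His example can be made concrete. The French genders below are real ("le trophée" is masculine, "la valise" is feminine); the resolver is hand-coded here, and that hand-coded step is exactly the world knowledge about sizes and containment that a model has to learn before it can pick the pronoun:

```python
def antecedent(adjective):
    # world knowledge: a thing too big to fit is the contents;
    # a container too small is the suitcase
    return "trophy" if adjective == "big" else "suitcase"

FRENCH_PRONOUN = {"trophy": "il",      # le trophée (masculine)
                  "suitcase": "elle"}  # la valise (feminine)

for adj in ("big", "small"):
    noun = antecedent(adj)
    print(f"too {adj}: 'it' = the {noun} -> French pronoun '{FRENCH_PRONOUN[noun]}'")
```

A system that treats the sentence as a mere string of words has no basis for choosing between "il" and "elle"; the two sentences differ only in the final adjective.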

This gets me to another thing you've pointed out, which is kind of an either exciting or troubling idea: that you, having worked intimately in this field for as long as anyone, describe the progress as "we had this idea, and we tried it, and it worked." So we get a couple of decades of backpropagation; we have this idea for a Transformer, and now we'll run with that trick; but there are hundreds of other ideas that haven't been tried out.

Yes. So I think even if we didn't have any new ideas, just making computers go faster and getting more data will make all this stuff work better. We've seen that as they scale up ChatGPT: there aren't radically new ideas there; I think it's just more connections and more data to train it with. But in addition to that, there are going to be new ideas, like Transformers, and they're going to make it work much better.

Are we close to the computers coming up with their own ideas for improving themselves?

Um, yes, we might be.

And then it could just go fast.

That's an issue, right? We have to think hard about how to control that.

Can we?

We don't know. We haven't been there yet, but we can try.

Okay. That seems kind of concerning.
Um, yes.

You're seen as a godfather of this industry. Do you have any concern about what you've wrought?

I do a bit. On the other hand, I think whatever is going to happen is pretty much inevitable. That is, one person stopping doing research wouldn't stop this happening. If my impact is to make it happen a month earlier, that's about the limit of what one person can do.

There's this idea, and I'm going to get it wrong, of the short runway and the long takeoff. Maybe we need time to prepare, or maybe it's better if it happens quickly, because then people will have urgency around the issue rather than creep, creep, creep. Do you have any thoughts on this?

I think time to prepare would be good, and so I think it's very reasonable for people to be worrying about those issues now, even though it's not going to happen in the next year or two. People should be thinking about those issues.

We haven't even touched on job displacement, which is just my mistake for not bringing it up. Is this just going to eat up job after job after job?

I think it's going to make jobs different. People are going to be doing the more creative end and less of the routine end.

But what's the creative end, if it can write the poem and make the movie and all of that?
Well, if you go back in history and look at ATMs: these cash machines came along, and people said that's the end of bank tellers. It wasn't actually the end of bank tellers; the bank tellers now deal with more complicated things. And take coders. People say, you know, these things can do simple coding and usually get it right; you just need to get it to write the program and then check it, so you'll be able to work ten times as fast. Well, either you could have a tenth of the programmers, or you could have the same number of programmers producing ten times as much stuff. And I think there are going to be a lot of trade-offs like that. Once these things start being creative, there will be hugely more stuff created.

Is this the biggest technological advancement since... is this another Industrial Revolution? What is this? How should people think of it?

I think it's comparable in scale with the Industrial Revolution, or electricity.

Electricity, maybe the wheel?

Or maybe the wheel, yeah. That was earlier.

Yeah. Okay, so buckle up.

Yeah.
One of the reasons Toronto got a big lead in AI is because of the policies of the granting agencies in Canada, which don't have much money, but they use some of that money to support curiosity-driven basic research. In the States, when the funding comes, you have to say what products you're going to produce with it and so on. In Canada, some of the government money, quite a lot of it, is given to professors to employ graduate students and other researchers to explore things they're curious about, and if they seem to be good at that, then they get more money three years later. And that's what supported both Yoshua Bengio and me: it was money for curiosity-driven basic research.

And we've seen that pay off, even through decades of not being able to show much.

Yes, even through decades of not being able to show much. So that's one thing that happened in Canada. Another thing that happened was there's a Canadian organization called the Canadian Institute for Advanced Research that provides extra money to professors in areas where Canada is good, and provides money for professors to interact with each other when they're far apart, like in Vancouver and Toronto, but also to interact with researchers in other parts of the world, like America and Britain and Israel and so on. CIFAR set up a program in AI; the original one, in the 1980s, which is the one that brought me to Canada, was in symbolic AI.

Yet you came.

I was an oddball. I was kind of weird, because I did this stuff everybody else thought was nonsense. They recognized that I was good at this kind of nonsense, and so if anyone was going to do the nonsense, it might as well be him. One of my letters of recommendation said, you know, "I don't believe in this stuff, but if you want somebody to do it, Geoff Hinton is the one to go for." And then, after that program finished, I went back to Britain for a few years, and then when I came back to Canada, they decided to fund a program in deep learning.
Essentially... sentience. I think you have complaints with even just how you define that, right?

Yeah. When it comes to sentience, I'm amazed that people can confidently pronounce these things not sentient, and when you ask them what they mean by "sentient," they say, well, they don't really know. So how can you be confident they're not sentient if you don't know what "sentient" means? So maybe they are already; who knows? I think whether they're sentient or not depends on what you mean by "sentient," so you'd better define what you mean by "sentient" before you try and answer the question, "Are they sentient?"

Does it matter what we think, or does it only matter whether it effectively acts as if it is sentient?

That's a very good question, Matt.

And what's your answer?

I don't have one.

Sure, okay. Because if it's not sentient, but it decides, for whatever reason, that it believes it is, and it needs to achieve some goal that is contrary to our interests but that it believes is in its interests, does it really matter...

I think a good context to think of this in is an autonomous lethal weapon. It's all very well saying it's not sentient, but when it's hunting you down to shoot you, you're going to start thinking it's sentient.

So whether it really is sentient is not an important standard anymore. The kind of intelligence we're developing is very different from our intelligence; it's this idiot-savant kind of intelligence.

Yes. So it's quite possible that it is sentient, but sentient in a somewhat different way from us.

But your goal is to make it more like us, and you think we'll get there.

My goal is to understand us.

Oh, okay.

And I think the way you understand us is by building things like us. I mean, the physicist Richard Feynman said you can't understand things unless you can build them. That's the real test of whether you understand it.

And so you've been building.

So I've been building, yeah.