Why AI progress seems "stuck" | Jennifer Golbeck | TEDxMidAtlantic

TEDx Talks
29 Aug 2024 · 12:53

Summary

TL;DR: The video script discusses the current state of artificial intelligence, highlighting the difference between narrow AI and the hypothetical AGI. It questions the hype around AGI, suggesting that current AI tools, while powerful, are not yet at a level that poses an existential threat. The speaker addresses concerns about AI's reliability and potential for 'hallucinations,' the challenges in improving AI with data, and the economic feasibility of advancements. They also touch on the impact of AI on jobs, the issue of AI bias, and the unique aspects of human intelligence that AI cannot replicate, concluding with a humorous note on humanity's control over AI.

Takeaways

  • 🧠 The script discusses the current state of artificial intelligence (AI), highlighting that while AI has surpassed human performance in specific tasks, such as chess, the concept of artificial general intelligence (AGI) is still a topic of debate and concern.
  • 🔮 There is a growing concern among some in the tech industry about the potential dangers of AI, with calls for regulation due to its perceived power and threat to civilization.
  • 💡 The script suggests two reasons for the industry's focus on the dangers of AI: the potential for significant financial gain by emphasizing the power of the technology, and the cinematic allure of AI overtaking humanity, which distracts from current AI-related issues.
  • 🤖 The speaker questions the likelihood of achieving AGI, citing examples of current AI tools that are not yet perfect, such as Google's AI Search tool and the unreliability of generative AI in producing accurate responses.
  • 📈 The script points out that while there is significant investment in AI, the return on investment is not yet clear, and the sustainability of this investment is in question.
  • 🔍 A key challenge identified for AI is reliability, with algorithms often providing incorrect information, which is a hurdle to overcome before AI can live up to its hype.
  • 🎭 The concept of 'AI hallucination' is introduced, referring to the tendency of AI to fabricate information or responses, which is a significant issue that needs addressing.
  • 👨‍💼 The script argues that the fear of AI taking jobs may be overstated, as increased efficiency through AI tools could lead to profit rather than job losses.
  • 🔒 The issue of AI inheriting human biases is highlighted as a persistent problem that has not been solved, which is crucial when considering AI in decision-making roles.
  • 🛠️ The speaker expresses skepticism about the ability to solve the reliability and hallucination problems of AI, suggesting that the technology may have reached a plateau.
  • 🌐 Finally, the script emphasizes that human intelligence is defined by our emotional and creative capabilities, which AI cannot replicate, offering a reassurance that AI will not replace our core humanity.

Q & A

  • What is the concept of Artificial General Intelligence (AGI) discussed in the script?

    -AGI refers to the idea of AI that can perform at or above human levels on a wide variety of tasks, similar to the capabilities of human intelligence.

  • Why are some people in the tech industry concerned about the AI they are building?

    -They believe the AI is so powerful and dangerous that it poses a threat to civilization and may need to be regulated due to its potential to cause existential harm to humanity.

  • What are the two main reasons suggested in the script for the tech industry's concern about AI?

    -One is the potential for significant financial gain by emphasizing the power and danger of their technology, and the other is the cinematic appeal of the concept of AI overtaking humanity, which serves as a distraction from real-world AI problems.

  • What is the current state of AI in terms of achieving AGI according to the script?

    -The script suggests that while there is hype around AGI, the current state of AI, exemplified by tools like Google's AI Search, is far from achieving AGI and may be at a plateau rather than on a sharp upward trajectory.

  • What is the main challenge that needs to be solved to realize the hype around AI?

    -The main challenge is reliability, as AI algorithms often produce incorrect results or 'hallucinations,' which means they cannot be fully trusted to perform tasks without human correction.

  • What is an 'AI hallucination' as mentioned in the script?

    -AI hallucination refers to the phenomenon where AI makes up information or content that did not exist in the training data, leading to incorrect or misleading outputs.

  • Why is solving the AI hallucination problem important for the future of AI?

    -Solving the hallucination problem is important because it affects the reliability and trustworthiness of AI, which are crucial for AI to live up to its hype and be useful in practical applications.

  • What are the two factors mentioned in the script that AI tools need to improve upon?

    -AI tools need to improve upon the amount of data they are trained on and the underlying technology itself to enhance their capabilities and reliability.

  • How does the script address the concern about AI taking all of our jobs?

    -The script suggests that the concern is based on a misunderstanding, as AI can increase efficiency but does not necessarily replace jobs, especially considering the cost and availability of AI tools.

  • What is the fundamental issue with AI that the script suggests we should worry about?

    -The script suggests worrying about the issue of AI adopting human biases from training data, which has not been successfully addressed and can lead to problematic outcomes in decision-making.

  • What is the final point made in the script about human intelligence and AI?

    -The script concludes that human intelligence is defined by our ability to connect, have emotional responses, and creatively integrate information, which AI cannot replicate, thus distinguishing our humanity from AI capabilities.

Outlines

00:00

🧠 The Hype and Concerns Around AGI

The first paragraph discusses the current state of artificial intelligence (AI), highlighting its ability to outperform humans in specific tasks such as playing chess. It introduces the concept of artificial general intelligence (AGI), which is AI that can perform at or above human levels across a wide range of tasks. The speaker expresses concern about the discourse surrounding AGI, noting that while some in the tech industry warn of its potential dangers to civilization, others may be motivated by financial gain or the cinematic allure of AI overtaking humanity. The paragraph also points out that focusing on improbable futures can distract from real-world issues already arising from AI, such as racial bias in AI decision-making for prison release and the challenge of deep fakes.

05:01

🔮 The Reality of AGI and the Challenge of Reliability

The second paragraph delves into the challenges of achieving AGI, starting with the issue of reliability. It mentions AI's tendency to produce incorrect results, using Google's AI Search tool as an example. The speaker argues that the current trajectory of AI improvements may not be sufficient for achieving AGI and discusses 'AI hallucination,' where AI generates false information or images. The paragraph also addresses the high expectations set for AI in fields like law, where it has been used to write legal briefs, only to generate fictitious cases. The need for more data and technological advancements is highlighted, along with skepticism about the availability of sufficient high-quality data and the sustainability of investment in AI improvement.

10:01

🛠 The Future of AI: Improvements, Bias, and Human Connection

The final paragraph contemplates the future of AI, focusing on the need for substantial improvements in data and technology. It questions the economic viability of investing in AI to replace human workforces, given the availability of affordable, open-source AI tools. The speaker emphasizes the persistent issue of AI inheriting human biases and the futility of guardrails in addressing this problem. The paragraph concludes by distinguishing between human intelligence, defined by emotional connection and creativity, and AI, which lacks these core human attributes. It reassures that despite fears of AI overlords, humans retain control over technology, as we can always 'turn it off.'

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is central to the discussion, with a focus on its current capabilities and potential future developments, especially in relation to surpassing human performance in various tasks.

💡Generative AI

Generative AI is a subset of AI that can generate new content, such as text, images, or music, rather than just recognizing patterns or making predictions. The video script discusses the introduction of generative AI to the public and the ensuing discussions about its potential to achieve artificial general intelligence (AGI).

💡Artificial General Intelligence (AGI)

AGI represents the concept of AI that can perform at or above human levels across a broad range of tasks, as opposed to narrow AI that excels in specific tasks. The script explores the idea of AGI and the concerns surrounding the potential for AI to match or exceed human cognitive abilities.

💡Reliability

Reliability, in the context of AI, pertains to the consistency and accuracy of the AI's performance. The video emphasizes the importance of reliability in AI, noting that current algorithms often produce incorrect results, which undermines their usefulness and trustworthiness.

💡AI Hallucination

AI hallucination is a term used to describe the phenomenon where AI generates content that is factually incorrect or makes things up, even when presented with real data. The script provides examples of AI hallucination, such as creating non-existent threats of violence or generating misleading legal cases.

💡Deep Fakes

Deep fakes are AI-generated media, often videos or audio, that convincingly replace or superimpose people's images or voices without their consent. The video mentions deep fakes as an example of real-world problems caused by AI that deserve attention instead of focusing on speculative AGI scenarios.

💡Racial Bias

Racial bias in AI refers to the unfair treatment or discrimination against certain racial groups by AI systems, often due to biased training data. The script discusses the issue of racial bias in AI, particularly in the context of AI being used in decision-making processes such as parole board decisions.

💡Incremental Improvements

Incremental improvements suggest small, gradual advancements in a technology or system. The video contemplates whether the current trajectory of AI development is leading towards AGI or if it is simply reaching a plateau with only incremental improvements expected in the future.

💡Productivity

Productivity in the video is discussed in the context of AI's potential to enhance human efficiency in the workplace. However, it also challenges the notion that increased productivity is the sole measure of human intelligence, emphasizing the importance of emotional intelligence and creativity.

💡Human Intelligence

Human intelligence encompasses not just cognitive abilities but also emotional responses, social connections, and creative thinking. The video contrasts human intelligence with AI, asserting that AI may imitate but cannot truly replicate the depth and complexity of human intelligence.

💡Elon Musk

Elon Musk, mentioned in the script, is a notable figure in the tech industry known for his views on AI, including predictions about the timeline for achieving AGI. His perspective adds to the debate on the pace of AI development and its potential impact on society.

💡Google AI Search

Google AI Search is an example of a specific AI application discussed in the video, which aims to provide direct answers to users' queries. The script uses it to illustrate the current limitations of AI in providing reliable and accurate information.

Highlights

Artificial intelligence has surpassed human performance in specific tasks such as chess.

The concept of artificial general intelligence (AGI) is gaining attention, with concerns about its potential threat to civilization.

Tech industry leaders are warning about the dangers of AI, advocating for regulation.

There is skepticism about the profitability and necessity of regulating powerful AI tools.

The fear of AI overtaking humanity is often sensationalized, distracting from current AI-related issues.

Elon Musk predicts AGI could be achieved within a year, despite current AI tools' limitations.

Google's AI Search tool exemplifies the current limitations of AI in providing accurate information.

The trajectory of AI development needs to be continuously upward to achieve AGI.

Reliability is a significant challenge for AI, as algorithms often produce incorrect results.

AI 'hallucination', or making up information, is a major issue that needs addressing.

The potential solution to AI hallucination may not be achievable with current technology.

Legal applications of AI have faced issues with accuracy and the creation of fictitious cases.

Even the best AI tools still hallucinate a significant percentage of the time.

The need for more data and technological improvements to enhance AI capabilities.

The challenge of finding reliable data to train AI, especially with the prevalence of low-quality content.

Investment in generative AI has not yet resulted in a sustainable financial return.

The debate over AI replacing jobs and the economic implications of increased efficiency.

AI's inability to replicate human emotional intelligence and creativity.

The persistent issue of AI inheriting human biases and the challenges in addressing this.

The importance of solving AI bias before widespread adoption in decision-making roles.

A reminder that, contrary to movies, we can always turn off AI if it becomes a threat.

Transcripts

00:16

We've built artificial intelligence already that, on specific tasks, performs better than humans. There is AI that can play chess and beat human grandmasters. But since the introduction of generative AI to the general public a couple years ago, there's been more talk about artificial general intelligence, or AGI, and that describes the idea that there's AI that can perform at or above human levels on a wide variety of tasks, just like we humans are able to do. And people who think about AGI are worried about what it means if we reach that level of performance in the technology.

Right now there are people from the tech industry coming out and saying the AI that we're building is so powerful and dangerous that it poses a threat to civilization, and they're going to government and saying, "maybe you need to regulate us." Now, normally when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity and needs to be limited. So why are we hearing that language? I think there's two main reasons. One is, if your technology is so powerful that it can destroy civilization, between now and then there's an awful lot of money to be made with that, and what better way to convince your investors to put some money with you than to warn that your tool is that dangerous? The other is that the idea of AI overtaking humanity is truly a cinematic concept. We've all seen those movies, and it's kind of entertaining to think about what that would mean now, with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI. The more we think about these improbable futures, the less time we spend thinking about how we correct deepfakes, or the fact that there's AI right now being used to decide whether or not people are let out of prison, and we know it's racially biased.

But are we anywhere close to actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year; I think he posted this a few weeks ago. But at the same time, Google put out their AI Search tool that's supposed to give you the answer so you don't have to click on a link, and it's not going super well. Please don't eat rocks. Now, of course, these tools are going to get better. But if we're going to achieve AGI, or if they're even going to fundamentally change the way we work, we need to be in a place where they are continuing on a sharp upward trajectory in terms of their abilities. That may be one path, but there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing, and the future is incremental improvements in a plateau. So to understand the AI future, we need to look at all the hype around it, get under there and see what's technically possible, and also think about where are the areas we need to worry and where are the areas we don't.

If we want to realize the hype around AI, the one main challenge we have to solve is reliability. These algorithms are wrong all the time, like we saw with Google, and Google actually came out and said, after these bad search results were popularized, that they don't know how to fix this problem. I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, and so I download that data, ChatGPT helps me write a summary, and it makes me much more efficient than if I had to do it by hand. But I have to correct it every day, because it misunderstands something or takes out the context. And so, because of that, we can't just rely on it to do the job for me. This reliability is really important.

Now, a subpart of reliability in this space is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time. I did this in my newsletter: I said, "ChatGPT, are there any people threatening violence? If so, give me the quotes," and it produced three really clear threats of violence that didn't sound anything like people talk on these message boards. I went back to the data, and nobody ever said it; it just made it up out of thin air. You may have seen this if you've used an AI image generator. I asked one to give me a close-up of people holding hands. That's a hallucination, and a disturbing one at that.

We have to solve this hallucination problem if this AI is going to live up to the hype, and I don't think it's a solvable problem with the way this technology works. There are people who say we're going to have it taken care of in a few months, but there's no technical reason to think that's the case, because generative AI always makes stuff up. When you ask it a question, it's creating that answer, or creating that image, from scratch when you ask. It's not like a search engine that goes and finds the right answer on a page. And so, because its job is to make things up every time, I don't know that we're going to be able to get it to make up correct stuff and then not make up other stuff. That's not what it's trained to do, and we're very far from achieving that.

In fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where they hope it will help write legal briefs or do research. Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court, because it just makes up cases that sound right, and that's a really fast way to get a judge mad at you and to get your case thrown out. Now, there are legal research companies right now that advertise hallucination-free generative AI. I was really dubious about this, and researchers at Stanford actually went in and checked, and they found that the best performing of these "hallucination-free" tools still hallucinates 17% of the time. So on one hand, it's a great scientific achievement that we have built a tool that we can pose basically any query to, and 60 or 70 or maybe even 80 percent of the time it gives us a reasonable answer. But if we're going to rely on using those tools and they're wrong 20 or 30 percent of the time, there's no model where that's really useful.
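The error rates above can be made concrete with a small sketch. The rates come from the talk; the step counts and the assumption that errors are independent are mine, for illustration only:

```python
# Toy arithmetic behind "wrong 20-30% of the time isn't useful":
# once unreviewed answers feed into multi-step work, per-answer
# error rates compound quickly.

def chance_all_correct(error_rate: float, n_steps: int) -> float:
    """Probability a chain of n unreviewed answers contains no errors,
    assuming errors are independent (a simplifying assumption)."""
    return (1 - error_rate) ** n_steps

for p in (0.17, 0.20, 0.30):          # rates cited in the talk
    for n in (1, 5, 10):
        ok = chance_all_correct(p, n)
        print(f"error rate {p:.0%}, {n:2d} steps -> all correct {ok:.1%}")
```

Even at the best-case 17% rate, a ten-answer workflow is fully correct well under a quarter of the time under these assumptions, which is why every answer still needs a human check.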

06:48

And that kind of leads us into how we make these tools that useful. Because even if you don't believe me, and you think we're going to solve this hallucination problem and the reliability problem, the tools still need to get better than they are now. And there's two things they need to do that: one is lots more data, and two is the technology itself has to improve. So where are we going to get that data? They've kind of taken all the reliable stuff online already, and if we were to find twice as much data as they've already had, that doesn't mean they're going to be twice as smart. I don't know if there's enough data out there, and it's compounded by the fact that one way generative AI has been very successful is at producing low-quality content online: bots on social media, misinformation, and these SEO pages that don't really say anything but have a lot of ads and come up high in the search results. And if the AI starts training on pages that it generated, we know from decades of AI research that they just get progressively worse. It's like the digital version of mad cow disease.

play07:50

disease let's say we solve the data

play07:53

problem you still have to get the

play07:55

technology better and we've seen $50

play07:57

billion dollar in the last couple years

play08:00

invested in improving generative Ai and

play08:03

that's resulted in $3 billion in Revenue

play08:06

so that's not sustainable but of course

play08:08

it's early right companies may find ways

play08:10

to start using this technology but is it

play08:13

going to be valuable enough to justify

play08:16

the tens and maybe hundreds of billions

play08:17

of dollars of Hardware that needs to be

play08:20

bought to make these models get better I

play08:23

don't think so and we can kind of start

play08:25

looking at practical examples to figure

play08:27

that out and it leads us to think about

play08:29

where are the spaces we need to worry

play08:30

and not because one place that

play08:33

everybody's worried with this is that AI

play08:34

is going to take all of our jobs lots of

play08:36

people are telling us that's going to

play08:37

happen and people are worried about it

play08:39

and I think there's a fundamental

play08:41

misunderstanding at the heart of that so

play08:43

imagine this scenario we have a company

play08:45

and they can afford to employ two

play08:47

software engineers and if we were to

play08:49

give those software Engineers some

play08:51

generative AI to help write code which

play08:53

is something it's pretty good at let's

play08:54

say they're twice as efficient that's a

play08:57

big overestimate but it makes the math

play08:59

easy easy so in that case the company

play09:01

has two choices they could fire one of

play09:02

those software Engineers because the

play09:04

other one can do the work of two people

play09:06

now or they already could afford two of

play09:10

them and now they're twice as efficient

play09:13

so they're bringing in more money so why

play09:14

not keep both of them and take that

play09:17

extra profit the only way this math

play09:19

fails is if the AI is so expensive that

play09:22

it's not worth it but that would be like

play09:25

the AI is $100,000 a year to do one

play09:28

person's work of work so that sounds

play09:31

really expensive and practically there

play09:34

are already open-source versions of

play09:36

these tools that are low cost that

play09:37

companies can install and run themselves

play09:40

now they don't perform as well as the

play09:41

flagship models but if their half is

play09:44

good and really cheap wouldn't you take

play09:46

those over the one that cost a $100,000

play09:48

a year to do one person's work of course

play09:50

you would and so even if we solve

play09:52

reliability we solve the data problem we

play09:54

make the models better the fact that

play09:57

there are cheap versions of this

play09:58

available suggest that companies aren't

play10:01

going to be spending hundreds of

play10:02

millions of dollars to replace their

play10:03

Workforce with AI there are areas that
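The speaker's two-engineer arithmetic can be written out. The salary and per-engineer value below are my own assumed round numbers; only the "twice as efficient" speedup and the $100,000-a-year failure case come from the talk.

```python
# A sketch of the talk's two-engineer scenario. SALARY and VALUE are
# assumed round numbers, not figures from the talk.

SALARY = 100_000   # assumed cost of one engineer per year
VALUE = 150_000    # assumed value of one engineer's yearly output
SPEEDUP = 2        # the talk's deliberately generous estimate

def profit(engineers: int, use_ai: bool, ai_cost_per_seat: int = 0) -> int:
    """Annual profit for the team, with or without the AI assistant."""
    output = engineers * VALUE * (SPEEDUP if use_ai else 1)
    costs = engineers * (SALARY + (ai_cost_per_seat if use_ai else 0))
    return output - costs

baseline = profit(2, use_ai=False)    # two engineers, no AI
fire_one = profit(1, use_ai=True)     # one engineer + cheap AI
keep_both = profit(2, use_ai=True)    # both engineers + cheap AI
print(baseline, fire_one, keep_both)  # keeping both dominates

# The talk's failure case: AI that itself costs a full salary per seat
# wipes out the gain from firing anyone.
print(profit(1, use_ai=True, ai_cost_per_seat=100_000) == baseline)
```

Under these assumptions, keeping both engineers beats firing one at any AI price the two options share, which is the talk's point: efficiency gains argue for taking extra profit, not for cutting staff.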

10:06

There are areas we need to worry about, though, because if we look at AI now, there are lots of problems that we haven't been able to solve. I have been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases, and we have not been able to fix that. We've seen those biases start showing up in generative AI, and the gut reaction is always, "well, let's just put in some guardrails to stop the AI from doing the biased thing." But one, that never fixes the bias, because the AI finds a way around it. And two, the guardrails themselves can cause problems. Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results, and it turned out it made them wrong. This is a request for a picture of the signing of the Declaration of Independence, and it's a great picture, but it is not factually correct. So in trying to stop the bias, we end up creating more reliability problems. We haven't been able to solve this problem of bias. And if we're thinking about deferring decision-making, replacing human decision makers, relying on this technology, and we can't solve this problem, that's a thing we should worry about and demand solutions to before it's just widely adopted and employed because it's sexy.

And I think there's one final thing that's missing here, which is that our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people, our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. That's something artificial intelligence is not now, nor will it ever be, capable of doing. It may be able to imitate it, and give us a cheap facsimile of genuine connection and empathy and creativity, but it can't do those core things of our humanity. And that's why I'm not really worried about AGI taking over civilization.

But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is: despite what the movies have told you, if it gets really bad, we still can always just turn it off.

Thank you.


Related tags: Artificial Intelligence, AGI Debate, Tech Ethics, Reliability Issues, AI Hallucinations, Data Challenges, Job Automation, Bias in AI, Human Connection, Innovation Future