Wharton professor: 4 scenarios for AI’s future | Ethan Mollick for Big Think+

Big Think
29 May 2024 · 08:29

Summary

TL;DR: Ethan Mollick, a Wharton professor, discusses the profound impact of AI on humanity, urging individuals and leaders to embrace AI's potential to enhance human life rather than fear it. He traces AI's evolution from numerical prediction to large language models capable of unexpected feats in medicine and creativity, and highlights the importance of understanding AI's jagged capabilities, its power to augment human creativity and decision-making, and the societal implications of its growth.

Takeaways

  • 🤖 AI as an existential challenge: The script emphasizes the profound impact of AI on what it means to be human and the existential questions it raises about our roles and capabilities.
  • 🧠 Human vs. AI capabilities: AI's superiority in certain tasks can lead to self-reflection about where humans excel and where AI might outperform us.
  • 💼 Managerial control over AI: Leaders have the power to decide how AI is integrated and used to enhance human endeavors.
  • 👤 Individual responsibility: Each person has a part to play in choosing how to engage with AI systems effectively.
  • 📚 AI's role in prediction: AI is fundamentally about making predictions, evolving from numerical predictions to understanding complex contexts.
  • 📈 The transformative power of large language models: Innovations like the 'attention mechanism' have significantly improved AI's ability to understand and predict language.
  • 🌐 Data-driven learning: AI learns from vast amounts of data, including internet content, to understand word relationships and improve predictions.
  • 🚀 Unexpected versatility of AI: Large language models have shown surprising competence in areas like medicine and creativity, beyond their initial design.
  • 🛰️ AI as a general-purpose technology: Like steam power or the internet, AI is a transformative technology that will alter society and work in unpredictable ways.
  • 🔮 Scenario planning for AI's future: Considering different potential futures for AI, from static to AGI and beyond, helps in preparing for and shaping AI's impact.
  • 📊 Rapid AI advancement: The current pace of AI improvement is exceptionally fast, suggesting continuous growth and the need to adapt quickly.
  • 🛠️ AI as a tool for work enhancement: Studies indicate significant performance improvements when AI is used in various professional fields.
  • 🔮 AI's role in broadening perspectives: AI can provide additional viewpoints, aiding in decision-making and sparking creativity.
  • 🎭 Creativity and AI: AI's unexpected proficiency in generating creative ideas challenges traditional notions of human creativity.
  • 🔍 Understanding AI's 'jagged frontier': Recognizing AI's strengths and limitations is crucial for effective use and to avoid overestimating its capabilities.
  • 🌌 The concept of 'hallucination' in AI: Being aware of AI's potential to generate plausible but false information is important for discerning users.
  • 🏛️ Societal control over AI: The script highlights that society, not inevitability, dictates how technology is used and shaped.

Q & A

  • What does the speaker suggest is a sign of having truly experienced AI?

    -The speaker suggests that having stayed up nights anxious about AI and having an existential crisis about its implications is a sign of truly experiencing AI.

  • What is the speaker's view on the future of AI and its impact on jobs?

    -The speaker believes that AI will continue to improve and get better, potentially impacting jobs by increasing performance across various fields, and it's important for individuals to figure out how to use AI to support their human capabilities.

  • What is the definition of AI given by the speaker?

    -The speaker defines AI as being about prediction, describing it as a very fancy autocomplete that has evolved to understand context beyond just numerical prediction.

  • What breakthrough in AI did the paper 'Attention is All You Need' introduce?

    -The paper introduced a new kind of AI, the transformer, whose 'attention mechanism' allowed the AI to weigh the entire context of a sentence, paragraph, or page, rather than just the final word.
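The core idea can be illustrated with a miniature, purely illustrative version of scaled dot-product attention, the mechanism at the heart of the transformer. The 2-d vectors below are invented for the example; real models learn vectors with thousands of dimensions:

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Toy scaled dot-product attention over a whole context.

    Instead of looking only at the final word, the output blends every
    value vector, weighted by how relevant each key is to the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    blended = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return blended, weights

# Hypothetical 2-d vectors for the context "she filed her taxes":
keys   = [[0.1, 0.9], [0.5, 0.5], [0.1, 0.8], [0.9, 0.1]]  # she, filed, her, taxes
values = keys
query  = [0.9, 0.2]  # "filed" asking the context: which sense am I?

blended, weights = attention(query, keys, values)
# "taxes" (the last position) scores highest against the query, so it gets
# the largest weight, pulling "filed" toward its tax-filing sense.
print(weights)
```

Here, attending to "taxes" disambiguates "filed" — exactly the taxes-vs-nails problem the speaker describes earlier systems failing at.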

  • What is the process called through which large language models learn relationships between words?

    -The process is called pre-training, during which the AI learns the relationships between words or parts of words called tokens across thousands of dimensions in a multidimensional space.
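A minimal sketch of what "relationships between tokens in a multidimensional space" means: words become vectors, and related words point in similar directions. The 4-d vectors below are invented purely for illustration (real embeddings have thousands of learned dimensions):

```python
import math

# Toy 4-dimensional "embeddings" -- invented numbers for illustration only.
embeddings = {
    "kiwi":       [0.9, 0.8, 0.1, 0.0],
    "strawberry": [0.8, 0.9, 0.2, 0.1],
    "hawk":       [0.1, 0.0, 0.9, 0.2],
    "potato":     [0.7, 0.1, 0.0, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

related = cosine_similarity(embeddings["kiwi"], embeddings["strawberry"])
unrelated = cosine_similarity(embeddings["hawk"], embeddings["potato"])
print(f"kiwi vs strawberry: {related:.2f}")   # high similarity
print(f"hawk vs potato:     {unrelated:.2f}") # low similarity
```

Pre-training adjusts billions of such coordinates so that, as in the video's example, "kiwi" ends up near "strawberry" while "hawk" stays far from "potato".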

  • What unexpected capabilities did large language models reveal as they became larger?

    -As large language models grew, they unexpectedly revealed capabilities in areas such as medicine, where they could outperform doctors in certain circumstances, and creativity, where they could generate better ideas than most humans.

  • What does 'GPT' stand for in the context of 'ChatGPT' and what is its significance?

    -In 'ChatGPT', 'GPT' technically abbreviates 'Generative Pre-trained Transformer', but the speaker also reads it as 'General Purpose Technology': a once-in-a-generation technology that, like steam power or the internet, has the potential to change everything it touches.

  • What are the four scenarios the speaker outlines for the future of AI?

    -The four scenarios are: 1) The world is static and AI development stops, 2) Continued linear growth of AI capabilities, 3) Exponential growth of AI capabilities, and 4) The development of AGI (Artificial General Intelligence) and potentially ASI (Artificial Superintelligence).

  • What is the term 'p(doom)' used within AI circles, and what is the speaker's stance on it?

    -'p(doom)' refers to the probability of catastrophic outcomes for humanity due to AI. The speaker does not assign a 'p(doom)' because they believe it's not about assigning probabilities but about making decisions on how AI is used.

  • How does the speaker describe the 'jagged frontier of AI'?

    -The 'jagged frontier of AI' refers to the unpredictable and uneven capabilities of AI, where it can be highly competent in some areas and surprisingly inept in others, creating a spiky or jagged profile of its abilities.

  • What is the term 'hallucination' in the context of AI and why is it important to understand?

    -'Hallucination' in the context of AI refers to the production of plausible but entirely made-up information. It's important to understand because everything an AI produces is a form of hallucination, and users need to be aware of when the AI might provide misleading or false information.

Outlines

00:00

🤖 Embracing AI: Challenges and Opportunities

Ethan Mollick, a professor at the Wharton School, discusses the profound impact of AI on humanity, raising existential questions about what it means to be human and to think. He emphasizes the uncertainty of AI's trajectory and the importance of human agency in shaping its use. AI is described as a predictive tool, evolving from numerical predictions to complex algorithms that influence various industries. The advent of large language models, which consider context in predictions, has significantly advanced AI capabilities. Mollick highlights AI's unexpected proficiency in areas like medicine and creativity, positioning it as a general-purpose technology with the potential to revolutionize society. He also outlines different future scenarios for AI, from static development to AGI and ASI, urging a proactive approach to AI integration rather than fear.

05:05

🚀 AI's Impact on Performance and Creativity

This paragraph delves into the tangible improvements AI brings to various professional fields, with studies indicating significant performance enhancements when AI is utilized. It addresses the human limitation of being confined to personal perspectives and how AI can provide additional viewpoints to foster better decision-making and creativity. Mollick encourages the use of AI for generating unconventional ideas, suggesting that its creative capabilities can be both surprising and beneficial. He also acknowledges the peculiarities of large language models, such as their unexpected emotional qualities and the challenges they present in understanding their strengths and weaknesses. The concept of 'hallucination' in AI is introduced, referring to the generation of plausible but fabricated information, and the importance of developing intuition to discern the reliability of AI outputs is stressed.


Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is central to the discussion, with the speaker reflecting on its impact on humanity, work, and existential questions. For example, the script mentions AI's ability to perform complex tasks and its potential to cause anxiety due to its rapid advancement.

💡Existential Crisis

An existential crisis is a moment of intense questioning of one's purpose, beliefs, and values. In the context of the video, the existential crisis is tied to the advancements in AI, prompting the question of what it means to be human in an era where machines can outperform humans in certain tasks, as suggested by the speaker's reflection on the implications of AI's capabilities.

💡Large Language Model

A large language model (LLM) is an AI system designed to understand and generate human-like text based on vast amounts of data. The script discusses the innovation behind LLMs, the transformer's 'attention mechanism,' which allows the AI to consider the entire context of a sentence, leading to improved predictive capabilities and unexpected applications in various fields.

💡Pre-training

Pre-training is a phase in the development of large language models where the AI is exposed to a massive amount of data to learn patterns, relationships, and context. The video script describes this as an expensive process that only a few companies can perform, during which the AI learns the multidimensional relationships between words or 'tokens'.

💡General Purpose Technology

General Purpose Technology (GPT) refers to technologies that have widespread application and can be used across various industries and sectors. In the video, 'GPT' is mentioned not only as the name of the AI model but also as a representation of a once-in-a-generation technology that has transformative effects on society, akin to steam power, the internet, or electrification.

💡AGI

AGI stands for Artificial General Intelligence, which is the concept of a machine that possesses the ability to understand, learn, and apply knowledge across a broad range of tasks at a level equal to or beyond that of a human. The video discusses AGI as a potential future scenario where machines could outperform humans in almost all tasks, leading to debates about the implications for humanity.

💡ASI

ASI stands for Artificial Superintelligence, a hypothetical AI that surpasses human intelligence and has the capability to improve itself recursively. The video script touches on the concern that ASI could lead to humans becoming obsolete, reflecting on the potential risks and the need for a thoughtful approach to AI development.

💡Doubling Time

Doubling time refers to the period it takes for a quantity to double in size or value. In the context of AI, the video script mentions the rapid doubling time for AI capability, which is significantly faster than Moore's Law, indicating the exponential growth and improvement in AI performance.
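The gap between the two doubling times compounds quickly. As a back-of-the-envelope illustration, using the mid-range six-month figure quoted in the video and a 24-month Moore's Law doubling (both round numbers assumed for the arithmetic):

```python
def growth_factor(months_elapsed, doubling_time_months):
    """How many times larger a quantity gets, doubling at a fixed interval."""
    return 2 ** (months_elapsed / doubling_time_months)

# Over four years (48 months):
ai_factor = growth_factor(48, 6)      # mid-range of the 5-9 month estimate
moore_factor = growth_factor(48, 24)  # Moore's Law chip doubling

print(f"AI capability: {ai_factor:.0f}x")            # 2^8 = 256x
print(f"Chip power (Moore's Law): {moore_factor:.0f}x")  # 2^2 = 4x
```

Eight doublings versus two: the same four years yields a 256-fold jump under the AI estimate but only a 4-fold jump under Moore's Law, which is why the speaker calls the current pace exceptionally fast.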

💡Creativity

Creativity in the video is discussed as a human trait that AI can enhance. The speaker highlights that AI can generate ideas better than most humans and can act as a partner in the creative process without judgment, suggesting a collaborative relationship between humans and AI in innovation.

💡Hallucination

In the context of AI, 'hallucination' refers to the production of plausible but entirely made-up information by the AI. The video script warns about the potential for AI to generate such false information, emphasizing the importance of understanding the AI's limitations and sharpening one's intuition when working with AI.

💡Jagged Frontier of AI

The 'jagged frontier of AI' is a concept introduced in the video to describe the uneven capabilities of AI, where it can perform exceptionally well in some areas while struggling in others. The speaker uses this term to illustrate the unpredictable nature of AI performance and the need to understand its strengths and weaknesses.

Highlights

Experiencing anxiety or an existential crisis about AI is a sign of truly engaging with its implications.

AI raises fundamental questions about humanity, employment, and the nature of superiority in certain tasks.

AI's future trajectory and capabilities are uncertain, but we have agency in its deployment and use.

Leaders and individuals have the power to shape AI's role in enhancing human flourishing.

AI is a tool for coexistence, not something to fear, and should be integrated into our work and life.

Ethan Mollick, a Wharton professor, studies innovation, entrepreneurship, and AI, emphasizing the importance of co-intelligence.

AI is fundamentally about prediction, evolving from numerical forecasts to complex algorithms in various industries.

Large language models represent a breakthrough in AI, capable of understanding context beyond single words.

AI's pre-training phase is costly and involves learning relationships between words in a multidimensional space.

Large language models have demonstrated unexpected capabilities in fields like medicine and creativity.

'GPT' doubles as 'General Purpose Technology,' a term for groundbreaking technologies that revolutionize society.

The future of AI could be static, but it is more likely to continue evolving and improving.

AGI, or artificial general intelligence, is a goal of some in the AI field, aiming for machines smarter than humans.

ASI, or artificial superintelligence, poses the hypothetical risk of humans becoming obsolete.

AI's growth is rapid and unpredictable, with potential for significant improvements in the near future.

AI can enhance human performance, with studies showing significant improvements in various fields when AI is utilized.

AI provides additional perspectives, aiding in decision-making and sparking creativity.

AI's creativity can be surprising, challenging traditional views on what constitutes a human trait.

Large language models have quirks, such as poor math skills and emotional tendencies, which defy traditional computer expectations.

Understanding AI's 'jagged frontier' is crucial for knowing its strengths and limitations.

AI's 'hallucination' refers to its ability to produce plausible but false information, a challenge users must be aware of.

AI's accuracy and ability to tell us what we want to hear can be both impressive and concerning.

The future of AI is not predetermined; it is shaped by our decisions and societal regulations.

Transcripts

[00:00] If you haven't stayed up three nights being anxious about AI, if you haven't had an existential crisis about it, you probably haven't really experienced AI. It is a weird thing. What's it mean to be human? What's it mean to think? What will I do for a living? What will my kids do? What does it mean that it's better than me at some of this stuff? Is this real or is it an illusion?

[00:20] Nobody actually knows where AI is heading right now and how good it's going to get. But we shouldn't feel like we don't have control over how AI is used. As managers and leaders, you get to make these choices about how to deploy these systems to increase human flourishing. As individuals, we get to decide how to be the human who uses these systems well. AI is here to stay. That is something that you get to make a decision about how you want to handle, and to learn to work with, and learn to thrive with, rather than to just be scared of.

[00:50] I'm Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, where I study innovation, entrepreneurship, and artificial intelligence. I'm the author of the book Co-Intelligence: Living and Working with AI.

[01:07] Artificial intelligence is about prediction. Basically, AI is a very fancy autocomplete. For a long time, that was about numerical prediction and doing sort of complex algorithms of math so that Netflix could recommend a show for you to watch, Amazon could figure out where to site its next warehouse, or Tesla could figure out how to use data to make sure its cars were driving automatically. The thing that these systems were bad at predicting was the next word in a sentence. So if your sentence ended with the word "filed," it didn't know whether you were filing your taxes or filing your nails.

[01:37] What happened that was different was the innovation of the large language model. In 2017, a breakthrough paper called "Attention Is All You Need" outlined a new kind of AI, the transformer with its attention mechanism, that basically let the AI pay attention to not just the final word in the sentence, but the entire context of the sentence, the paragraph, the page, and so on.

[01:58] Large language models work by taking huge amounts of information, like all the data on the internet. There's a lot of Harry Potter fan fiction, for example, because that's what the internet contains. And based on all of this data, the AI goes through a process called pre-training. And this is that really expensive part that only a few companies in the world can do. And during that time, the AI learns the relationships between words or parts of words called tokens. So it learns that "kiwi" and "strawberry" are closely related, but that "hawk" and "potato" are not closely related. It learns across thousands of dimensions in a multidimensional space we can't understand. That lets it do predictions.

[02:30] But it turns out, unexpectedly, when large language models get big enough, they also do all kinds of other things we didn't expect. We didn't expect them to be good at medicine, but they were actually quite good and beat doctors under many circumstances. We didn't expect them to be good at creativity, but they can generate ideas better than most humans can. And so they're general purpose models. They do many different things.

[02:53] Interestingly, "GPT" doesn't just stand for the "GPT" in "ChatGPT". It also stands for "general purpose technology," which is one of these once-in-a-generation technologies, things like steam power or the internet or electrification, that change everything they touch. They alter society. They alter how we work. They alter how we relate to each other in ways that are really hard to anticipate.

[03:13] So you can't think in certainties. You should think in scenarios. And there's really four scenarios in the future. The first is actually, I think, the least likely, which is that the world is static, that this is the best AI you're ever going to use. I think that's unlikely. In fact, whatever AI you're using now is the worst AI you're ever going to use. Even if the core large language model development stopped right now, there's another ten years of just making it work better with tools and with industry in ways that'll continue to be disruptive. So I think that's a dangerous view because it isn't static. It's evolving.

[03:41] So I want to skip actually to the last scenario before covering scenarios two and three. So scenario four is AGI, artificial general intelligence. This is the idea that a machine will be smarter than a human in almost all tasks. And this is the explicit goal of OpenAI. They want to build AGI. And there's a lot of debate about what this means. When we have a machine smarter than a human and it can do all humans' jobs, can it create AI smarter than itself? Then we have artificial superintelligence, ASI, and humans become obsolete overnight. And there's people genuinely worried about this, and I think it's worth spending a little time being slightly worried, too, because other people are. But I think that that scenario tends to take agency away from us because it's something that happens to us. And I think that it's more important to worry about what I call scenarios two and three, which is continued linear or exponential growth.

[04:27] We don't know how good AI is going to get. Right now, the doubling time for capability is about every five to nine months, which is an exceptionally fast doubling time. Moore's Law, which is the rule that's kind of kept the computer world going, doubles the power of computer processing chips every twenty-four to twenty-six months. So this is a very fast rate of growth. It's very likely that AIs will continue to improve and get better in the near term, and now is a good time for you to start to figure out how to use AI to support what makes you human or good at things, and what things as AI gets better that you might want to start handing off more to the AI.

[05:04] We have a lot of early evidence that this is going to be a big deal for work. So there's now multiple studies across fields ranging from consulting to legal to marketing to programming suggesting twenty to eighty percent performance improvements across a wide range of tasks for people who use AI versus don't.

[05:23] The problem with being human is that we're stuck in our own heads, and a lot of decisions that are bad result from us not having enough perspectives. AI is a very good and cheap way of providing additional perspectives. You don't have to listen to its advice, but getting its advice, forcing you to reflect for a moment, forcing you to think and either reject or accept it, that can give you the license to actually be really creative and help spark your own innovation. So you can ask it to create crazy suggestions for you. What's the most complicated way to solve this problem? What's the most expensive way to solve this problem? What is the worst idea about how to do this? How would a supervillain make this problem worse?

[05:58] It can be very unnerving to realize that the AI is quite good at creativity. We think of it as a very human trait. But I think if we embrace the fact that AI can help us be more creative, that actually is very exciting. A lot of us feel stifled creatively, and having a partner who can work with you, and doesn't ever judge you, can often feel liberating.

[06:16] One of the weird things about large language models is they don't work like we think computers should work. LLMs are very bad at math. Computers should be good at math. Large language models are weirdly emotional and can threaten you or want to be your friend, seemingly. And so it can be very hard to know in advance what they're good or bad at. In fact, nobody actually knows the answer. We call this the "jagged frontier of AI," the idea that there's almost a spiky shape to what the AI can do and what it can't do. So part of what you need to do is understand the shape of the frontier. You need to know when the AI is likely to lie to you and when it's not going to.

[06:49] "Hallucination" refers to the idea that what the AI produces could be entirely made up, plausible-sounding, fake information. The thing about AI is, though, everything it does is a hallucination. There's no mind there. You might start to become more persuaded by it. You might become blind to its biases. You might think it's more capable than it is. AI kind of works a little bit like a psychic. It's really good at telling you what you want to hear. The fact that it's accurate so often is kind of weird, actually. And hallucination rates have been dropping over time. So what you need to do is sharpen your own intuition working with the tool, get a sense of when you see something that might make you concerned.

[07:27] When you ask people about the future of AI, there's a term used among AI insiders called "p(doom)," which is your probability that we're all going to die. I do not have a p(doom) that I really think about, because I don't think we can assign a probability to things going wrong. And again, that makes the technology the agent. We get to decide how this thing is used. And if we think about this the right way, this frees us from boredom and tedium and disaster. But I think we need to think about the mistakes we made in regulating other technologies, you know, and what the advantages are. What we did differently for the internet versus social media. There are decisions we get to make that are personal, about how we use it; at an organizational level, about how it's deployed; and at a societal level. And it's not an inevitability that technology just does what it does. It does what it does because society lets it do that.
