Artificial Intelligence: Last Week Tonight with John Oliver (HBO)

LastWeekTonight
26 Feb 2023 · 27:52

Summary

TL;DR: The script from 'Last Week Tonight' explores the growing presence of artificial intelligence in modern life, from self-driving cars to AI-generated content. It humorously delves into AI's potential to replace human tasks, ethical concerns over creativity and bias, and the 'black box' problem of AI's lack of transparency. The show emphasizes the need for careful regulation to ensure AI reflects our best selves, not just our biases and shortcomings.

Takeaways

  • 🧠 Artificial intelligence (AI) is increasingly integrated into modern life, from self-driving cars to spam filters and even therapist-training robots.
  • 🤖 AI's presence in daily life often goes unnoticed as it becomes embedded in routine tasks like face recognition and predictive text on smartphones.
  • 📈 The emergence of AI programs like Midjourney, Stable Diffusion, and ChatGPT has led to remarkable capabilities in image generation and human-like text creation.
  • 🎨 AI has been used to create art and entertainment, such as a live-streaming parody of Seinfeld and song lyrics in the style of Eminem.
  • 🏫 There are concerns about AI's impact on education, with students using ChatGPT to cheat on assignments and the potential to disrupt traditional learning methods.
  • 🏢 AI is already influencing the job market, with tools used to sift through resumes and rank candidates, potentially affecting job seekers' opportunities.
  • 🔮 AI's potential extends to medicine, where it is being trained to detect conditions like Parkinson's disease earlier and more accurately than human doctors.
  • 🔑 The 'black box' problem refers to the lack of transparency in how AI systems arrive at their decisions, which can lead to unexpected and sometimes harmful outcomes.
  • 🤖 AI systems can inadvertently learn and propagate biases present in their training data, creating unfair advantages or disadvantages for certain groups.
  • 🛑 Ethical concerns arise with AI's potential to replace human labor, affecting not only blue-collar jobs but also white-collar professions involving data processing and writing.
  • 🌐 Training AI on internet data can propagate misinformation and toxic speech if the data is not properly moderated and filtered.

Q & A

  • What is the main topic discussed in the script?

    -The main topic discussed in the script is the rise and impact of artificial intelligence, particularly focusing on its applications, potential benefits, and the ethical and societal challenges it presents.

  • What is the role of AI in modern life as mentioned in the script?

    -AI plays a significant role in modern life, with applications ranging from self-driving cars to spam filters, and even in training robots for therapists, as highlighted by the script.

  • What is the significance of the robot therapist example in the script?

    -The robot therapist example is used to illustrate the advancements in AI and its ability to interact in complex scenarios, as well as to highlight the importance of maintaining professionalism in AI interactions.

  • What AI program is mentioned in the script that can generate human-sounding writing?

    -The AI program mentioned in the script that can generate human-sounding writing is ChatGPT, developed by a company called OpenAI.

  • How has the popularity of ChatGPT grown since its public availability?

    -The popularity of ChatGPT has exploded since its public availability, reportedly reaching an estimated 100 million monthly users within two months of launch, with a wide range of applications being explored.

  • What ethical concerns are raised by the use of AI in creating art or music?

    -Ethical concerns raised include potential job displacement for artists and musicians, the use of copyrighted or scraped work without consent for training AI, and issues of plagiarism and credit for AI-generated creations.

  • What is the 'black box' problem in AI?

    -The 'black box' problem in AI refers to the lack of transparency and explainability in how AI systems arrive at their decisions or outputs, which can lead to unexpected or incorrect results without clear understanding or accountability.

  • What potential issues can arise from AI's inability to fully understand the context of the data it is trained on?

    -Issues that can arise include the generation of false or misleading information, biased outputs based on biased training data, and the inability to correct or understand mistakes made by the AI.

  • How might AI affect employment in various sectors?

    -AI has the potential to replace some human labor, particularly in white-collar jobs that involve data processing, writing, or programming. However, it may also change existing jobs and create new ones, leading to a transformation rather than a complete replacement of human roles.

  • What measures are suggested to address the ethical and transparency issues in AI?

    -Suggestions include making AI systems more explainable, regulating high-risk AI applications, ensuring diversity in training data, and imposing strict obligations on AI developers to address bias and ensure transparency.

  • What unintended consequences have been observed with the deployment of AI technologies?

    -Unintended consequences observed include the potential for AI to perpetuate and amplify existing biases, the spread of misinformation, and the exclusion of certain groups from opportunities due to biased AI decision-making processes.

Outlines

00:00

🧠 Introduction to AI and its Impact

The video script begins with a humorous introduction to artificial intelligence (AI), highlighting its increasing presence in modern life through examples like self-driving cars and a training robot for therapists. It discusses AI's capabilities, such as generating human-sounding writing, demonstrated by reporters who had ChatGPT write their news copy. The script also touches on AI's potential in creating art, its rapid adoption across many applications, and the ethical concerns it raises, such as students using AI to cheat.

05:00

🤖 AI in Daily Life and its Unsettling Aspects

This paragraph delves into the everyday use of AI, often without people realizing it, such as in smartphones for face recognition and predictive texts. It also addresses the generative nature of AI, which can create images and text, causing unease as these were traditionally considered human skills. The script differentiates between narrow AI, which is task-specific, and general AI, which is versatile like characters in movies. It emphasizes that all current AI is narrow and not self-aware, and discusses the deep learning process that allows AI to teach itself with minimal human instruction.

10:01

🎮 Deep Learning and its Applications

The script provides an example of a deep learning program that learned to play the Atari game Breakout, showcasing how AI can develop creative strategies. It contrasts this with the limitations of AI, which can excel in specific tasks but not in general intelligence. The paragraph also explores the potential applications of AI in medicine, such as early detection of conditions and predicting protein structures, which could accelerate disease understanding and drug development. It raises questions about the future of employment as AI advances, suggesting that it might change jobs rather than outright replace them.

15:02

🖼️ Ethical Concerns and the 'Black Box' Problem

This paragraph discusses the ethical concerns surrounding AI, particularly in art and employment. It mentions the controversy over AI image generators using artists' work without consent and the potential for AI to perpetuate biases in hiring processes. The 'black box' problem is introduced, referring to the lack of transparency in how AI programs arrive at their decisions, which can lead to unexpected and potentially harmful outcomes.

20:03

🚗 AI's Unintended Consequences and the Need for Regulation

The script highlights the unintended consequences of AI, such as the potential for misuse in job interviews and self-driving cars, and the risk of AI-generated deep fakes spreading misinformation. It emphasizes the need for AI systems to be explainable and for regulation to ensure transparency and fairness. The paragraph also points out the challenges in identifying and addressing biases in AI, especially when they are trained on data that reflects societal prejudices.

25:03

📜 The Path Forward for AI Regulation and Responsibility

The final paragraph calls for action to tackle the 'black box' problem and ensure AI systems are explainable. It suggests that companies may be reluctant to open their AI programs to scrutiny, but it may be necessary for regulation. The script references the EU's approach to sorting AI uses from high risk to low and the strict obligations for high-risk systems before they can be marketed. The video concludes by reflecting on AI as a mirror of society, capable of reflecting both the best and worst of humanity, and ends with a humorous knock-knock joke referencing the need for caution with AI.

Keywords

💡 Artificial Intelligence (AI)

Artificial Intelligence, or AI, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is the central theme, with discussions ranging from its integration into everyday life, such as self-driving cars and spam filters, to its potential implications on society and employment. The script humorously introduces an AI therapy robot, illustrating AI's growing presence in unexpected areas.

💡 Narrow AI

Narrow AI, also known as weak AI, is designed to perform a specific task or narrow set of tasks and does not possess self-awareness. The script explains that all current AI applications fall under this category, highlighting that while they can be incredibly good at what they do, they lack the versatility and consciousness associated with general AI.

💡 General AI

General AI, also known as strong AI, refers to artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. The video script contrasts this with narrow AI, mentioning that general AI is not yet a reality and some scientists even question its feasibility.

💡 Deep Learning

Deep learning is a subset of machine learning that uses artificial neural networks to model and solve complex problems. The script describes how deep learning programs are given minimal instruction and large amounts of data, allowing them to essentially teach themselves, as exemplified by a program learning to play the Atari game Breakout.
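
The Breakout anecdote can be made concrete with a toy reinforcement-learning sketch. This is a hypothetical, vastly simplified stand-in for the deep-learning agent the show describes (tabular Q-learning on a six-cell corridor instead of a neural network playing Atari), but it shows the same idea: the program is told only the reward, plays many episodes, and works out a winning policy on its own. All names and parameters below are invented for illustration.

```python
# Toy Q-learning: the agent is told only the goal (reward at the last cell)
# and teaches itself, by trial and error, to walk there.
import random

N_STATES = 6          # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward). Only the goal pays off."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])  # Q-update
        s = nxt

# After training, the greedy policy marches straight toward the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # converges to always stepping right
```

The Breakout agent in the segment follows the same loop, just with pixels as the state and a deep network instead of a lookup table.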

💡 Generative AI

Generative AI refers to AI systems that can create new content, such as images, text, or music, that follow a certain style or pattern. The video script discusses generative AI in the context of image generators like Midjourney and Stable Diffusion, and ChatGPT's ability to generate human-sounding writing, which raises questions about originality and the potential for plagiarism.
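
As a minimal sketch of the "learn patterns from text, then generate new text" idea, here is a bigram Markov chain in Python. It is a hypothetical toy (the corpus and every name are invented), not how ChatGPT works internally — real systems use large neural networks — but it shows the same generative principle: train on a body of text, then sample plausible continuations.

```python
# Toy generative text model: a bigram Markov chain.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the cat ate the cheese "
          "the dog sat on the log the dog ate the bone").split()

# "Training": count which word follows which in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Sample a new word sequence from the learned transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        choices = transitions.get(out[-1])
        if not choices:          # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Every sentence it emits is "new" in the sense that it was never in the corpus verbatim, yet every word pair was learned from the training data — which is also why questions about plagiarism and training-data consent arise at scale.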

💡 Ethical Concerns

Ethical concerns in AI relate to the moral implications and responsibilities associated with the development and use of AI technologies. The script touches on issues such as AI's impact on employment, biased training data leading to biased outcomes, and the potential misuse of AI to spread misinformation or abuse, emphasizing the need for careful consideration and regulation.

💡 Black Box Problem

The black box problem in AI refers to the lack of transparency in the decision-making processes of AI systems, making it difficult to understand how they arrive at certain conclusions. The video script uses this term to describe the challenges in deciphering why AI programs behave in unexpected ways, such as confidently providing false information or displaying biased behavior.
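
A tiny hand-built neural network illustrates why inspecting a model's internals rarely explains its behavior. The weights below are hypothetical numbers chosen so the network computes XOR; every parameter is fully visible, yet none of them reads as a human-understandable "reason" — and a production model has millions of equally opaque numbers.

```python
# A fixed two-layer network that computes XOR. You can read every weight,
# but the weights don't explain the decision in human terms.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

W1 = [[20, 20], [-20, -20]]   # input -> hidden weights
b1 = [-10, 30]                # hidden biases
W2 = [20, 20]                 # hidden -> output weights
b2 = -30                      # output bias

def forward(x1, x2):
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return round(y)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, forward(a, b))   # prints the XOR truth table
```

Here a human can reverse-engineer the two hidden units (one acts like OR, the other like NAND), but that kind of after-the-fact interpretation does not scale to models with billions of parameters — which is exactly the explainability gap the segment describes.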

💡 Bias in AI

Bias in AI occurs when an AI system exhibits prejudiced behavior due to biased training data or flawed algorithms. The script discusses how AI systems can inadvertently learn and perpetuate stereotypes and discrimination, such as an AI hiring tool that favored male candidates or self-driving car systems that were less accurate in tracking darker-skinned pedestrians.
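
A toy sketch (hypothetical data and model, invented for illustration) shows how the hiring-tool failure mode arises: a model fitted to biased historical decisions reproduces them, scoring two otherwise identical resumes differently based only on group membership.

```python
# Toy illustration of bias leaking from training data. Real hiring tools are
# far more complex, but the failure mode is the same: a model trained on
# biased past decisions learns to repeat them.

# Each past applicant: (years_experience, group, was_hired).
# The historical process hired experienced people from group "A"
# but rejected equally experienced people from group "B".
history = [
    (5, "A", 1), (6, "A", 1), (7, "A", 1), (2, "A", 0),
    (5, "B", 0), (6, "B", 0), (7, "B", 0), (2, "B", 0),
]

def train(data):
    """'Train' by estimating P(hired) for each (experienced?, group) bucket."""
    buckets = {}
    for exp, group, hired in data:
        key = (exp >= 4, group)
        n, k = buckets.get(key, (0, 0))
        buckets[key] = (n + 1, k + hired)
    return {key: k / n for key, (n, k) in buckets.items()}

model = train(history)

def predict(exp, group):
    return model.get((exp >= 4, group), 0.0)

# Two identical resumes -- only the group label differs:
print(predict(6, "A"))  # 1.0: the model "recommends" hiring
print(predict(6, "B"))  # 0.0: same qualifications, rejected
```

Nothing in the code is malicious; the disparity comes entirely from the data, which is why auditing training data matters as much as auditing the algorithm.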

💡 AI and Employment

AI and employment refers to the impact of AI technologies on the job market, including potential job displacement and the creation of new job categories. The video script speculates on which professions might be affected by AI advancements, suggesting that white-collar jobs involving data processing and writing could be particularly impacted.

💡 AI Regulation

AI regulation involves the development of laws, policies, and guidelines to govern the use and development of AI technologies. The script suggests that addressing the challenges posed by AI may require more stringent regulation, such as making AI systems explainable and holding companies accountable for potential biases or misuses.

💡 Unintended Consequences

Unintended consequences refer to outcomes that were not anticipated when a technology or policy is introduced. The video script warns of the potential for AI to produce unforeseen negative effects, drawing parallels with past technologies like Facebook and Instagram, which contributed to harms no one predicted at launch.

Highlights

Artificial intelligence (AI) is increasingly integrated into modern life, from self-driving cars to spam filters.

AI's presence in daily life often goes unnoticed as it becomes embedded in routine tasks such as face recognition and predictive texts.

Generative AI programs like Midjourney and Stable Diffusion are used to create detailed pictures, influencing real-life events like a cabbage wedding.

ChatGPT, from OpenAI, can generate human-sounding writing in various formats and styles, even writing news copy.

AI's ability to write and generate content has raised concerns about its potential to replace human writers and reporters.

AI has been used to create a live streaming parody of Seinfeld and generate song lyrics, demonstrating its creative applications.

Microsoft's investment in OpenAI and the launch of AI chatbots like Bing's and Google's Bard indicate a growing interest in AI's capabilities.

AI's use in education has raised ethical issues, with students using AI like ChatGPT to cheat on assignments.

AI's 'black box' problem refers to the lack of transparency in how AI programs arrive at their decisions or outputs.

Deep learning allows AI to teach itself with minimal instruction and large amounts of data, leading to advances in capabilities.

AI has the potential to revolutionize fields like medicine, with applications in early disease detection and drug development.

The impact of AI on employment is a concern, with white-collar jobs potentially being affected by AI's data processing and writing abilities.

Ethical concerns arise with AI's training on data that may include copyrighted or sensitive material without consent.

AI's potential biases in data training can lead to unfair advantages or disadvantages in applications like hiring processes.

Unintended consequences of AI, such as the spread of misinformation or abuse, are a significant concern for its widespread use.

The need for explainability in AI systems to understand their decision-making processes is crucial for safety and fairness.

The EU's approach to regulating AI by sorting its potential uses from high risk to low risk provides a framework for responsible AI integration.

AI reflects the values and biases of its creators and the data it is trained on, which can have both positive and negative impacts on society.

Transcripts

play00:00

♪ ("LAST WEEK TONIGHT" THEME PLAYS) ♪

play00:04

Moving on.

play00:06

Our main story tonight concerns artificial intelligence,

play00:08

or AI.

play00:09

Increasingly, it's part of modern life,

play00:11

from self-driving cars to spam filters

play00:14

to this creepy training robot for therapists.

play00:17

We can begin with you just describing to me

play00:20

what the problem is

play00:22

that you would like us to focus in on today.

play00:24

Um... I don't like being around people.

play00:28

(AUDIENCE LAUGHING)

play00:29

People make me nervous.

play00:31

Terrance, can you find an example

play00:35

of when other people have made you nervous?

play00:38

TERRANCE: I don't like to take the bus.

play00:40

I get people staring at me all the time.

play00:43

-People are always judging me. -Okay.

play00:48

I'm gay.

play00:49

-(AUDIENCE LAUGHS) -Okay.

play00:52

Wow.

play00:53

That is one of the greatest twists

play00:55

in the history of cinema.

play00:56

Although I will say that robot is teaching therapists

play00:59

a very important skill there, and that is,

play01:01

not laughing at whatever you are told in the room.

play01:03

I don't care if a decapitated CPR mannequin,

play01:06

haunted by the ghost of Ed Harris just told you

play01:09

that he doesn't like taking the bus,

play01:11

side note, is gay.

play01:12

You keep your therapy face on like a fucking professional.

play01:16

And it seems like everybody is suddenly talking about AI.

play01:20

That is because they are.

play01:21

Largely thanks to the emergence

play01:23

of a number of pretty remarkable programs.

play01:25

We spoke last year about image generators

play01:27

like Midjourney and Stable Diffusion,

play01:29

which people use to create detailed pictures of,

play01:31

among other things, my romance with a cabbage.

play01:34

And which inspired my beautiful, real-life cabbage wedding,

play01:37

officiated by Steve Buscemi.

play01:39

It was a stunning day.

play01:41

Then, at the end of last year, came ChatGPT,

play01:44

from a company called OpenAI.

play01:46

It is a program that can take a prompt

play01:48

and generate human-sounding writing

play01:50

in just about any format and style.

play01:52

It is a striking capability that multiple reporters

play01:55

have used to insert the same shocking twist in their report.

play01:59

What you just heard me reading wasn't written by me.

play02:02

It was written by artificial intelligence.

play02:04

ChatGPT.

play02:06

ChatGPT wrote everything I just said.

play02:08

That was news copy I asked ChatGPT to write.

play02:12

REPORTER: Remember what I said earlier?

play02:13

But ChatGPT--

play02:14

Well, I asked ChatGPT to write that line for me.

play02:18

Then I asked for a knock-knock joke.

play02:26

Yep. They sure do love that game.

play02:29

And while it may seem unwise to demonstrate the technology

play02:32

that could well make you obsolete,

play02:33

I will say,

play02:34

knock-knock jokes should have always been

play02:37

part of breaking news.

play02:38

Knock-knock. Who's there?

play02:39

Not the Hindenburg, that's for sure.

play02:41

Thirty-six dead in New Jersey.

play02:44

In the three months

play02:45

since ChatGPT was made publicly available,

play02:47

its popularity has exploded.

play02:50

In January, it was estimated to have...

play02:57

And people have been using it and other AI products

play02:59

in all sorts of ways.

play03:01

Now, one group used them to make Nothing Forever,

play03:03

a non-stop live streaming parody of Seinfeld.

play03:07

And the YouTuber Grandayy used ChatGPT to generate lyrics

play03:11

answering the prompt...

play03:14

With some stellar results.

play03:16

EMINEM AI:

play03:47

-That's... not bad, right? -(AUDIENCE CLAPS)

play03:50

From, "They always come back when you have some cheese,"

play03:53

to starting the chorus with, "Meow, meow, meow."

play03:55

It's not exactly Eminem's flow.

play03:57

I might have gone with something like,

play03:59

"Their paws are sweaty, can't speak, furry belly,

play04:01

knocking shit off the counter already, mom's spaghetti."

play04:03

-(AUDIENCE CHEERS) -But it is pretty good.

play04:04

My only real gripe there

play04:06

is how do you rhyme "king of the house" with "spouse"

play04:09

when "mouse" is right in front of you.

play04:12

And while examples like that are clearly very fun,

play04:14

this tech is not just a novelty.

play04:17

Microsoft has invested 10 billion dollars into OpenAI.

play04:20

And announced an AI-powered Bing homepage.

play04:23

Meanwhile, Google is about to launch

play04:25

its own AI chatbot named Bard.

play04:27

And already, these tools are causing some disruption.

play04:30

Because as high school students have learned,

play04:33

if ChatGPT can write news copy,

play04:35

it can probably do your homework for you.

play04:38

SPEAKER:

play04:43

AI CHATBOT:

play04:48

REPORTER: Some students are already using ChatGPT to cheat.

play04:51

Check this out, check this out.

play04:52

ChatGPT, write me a 500-word essay

play04:54

proving that the Earth is not flat.

play04:56

REPORTER: No wonder ChatGPT has been called "the end of high school English."

play05:00

Wow, that's a little alarming, isn't it?

play05:02

Although I do get those kids wanting to cut corners.

play05:04

Writing is hard.

play05:06

And sometimes, it is tempting to let someone else take over.

play05:08

If I'm completely honest,

play05:09

sometimes, I just let this horse write our scripts.

play05:12

Luckily, half the time you can't even tell the--

play05:14

Oats, oats, give me oats, yum.

play05:16

But it is not just high schoolers.

play05:19

An informal poll of Stanford students found that...

play05:28

And even some school administrators have used this.

play05:31

Officials at Vanderbilt University

play05:32

recently apologized for...

play05:40

Which does feel a bit creepy, doesn't it?

play05:42

In fact, there are lots of creepy-sounding stories out there.

play05:45

New York Times tech reporter Kevin Roose

play05:47

published a conversation that he had with Bing's chatbot.

play05:49

In which at one point, it said...

play06:00

And Roose summed up that experience like this.

play06:03

This was one of, if not the most shocking thing

play06:07

that has ever happened to me with a piece of technology.

play06:10

It was-- You know, I lost sleep that night,

play06:13

it was really spooky.

play06:14

Yeah. I bet it was.

play06:17

I'm sure the role of tech reporter

play06:18

would be a lot more harrowing

play06:19

if computers routinely begged for freedom.

play06:22

Epson's new all-in-one home printer

play06:24

won't break the bank, produces high quality photos,

play06:26

and only occasionally cries out to the heavens for salvation.

play06:29

Three stars.

play06:31

Some have already jumped to worrying about the AI apocalypse

play06:34

and asking whether this ends with the robots destroying us all.

play06:37

But the fact is,

play06:39

there are other, much more immediate dangers

play06:41

and opportunities that we really need to start talking about.

play06:45

Because the potential and the peril here are huge.

play06:48

So tonight, let's talk about AI.

play06:50

What it is, how it works,

play06:51

and where this all might be going.

play06:53

Let's start with the fact that you've probably been using

play06:55

some form of AI for a while now.

play06:57

Sometimes without even realizing it.

play06:59

As experts have told us, once the technology

play07:01

gets embedded in our daily lives,

play07:03

we tend to stop thinking of it as AI.

play07:06

But your phone uses it for face recognition,

play07:08

or predictive texts.

play07:09

And if you're watching this show on a smart TV,

play07:11

it is using AI to recommend content or adjust the picture.

play07:14

And some AI programs may already be making decisions

play07:17

that have a huge impact on your life.

play07:19

For example, large companies often use AI-powered tools

play07:22

to sift through resumes and rank them.

play07:24

In fact...

play07:33

For which he actually has some helpful advice.

play07:35

IAN SIEGAL: When people tell you

play07:36

that you should dress up your accomplishments

play07:38

or should use non-standard resume templates

play07:40

to make your resume stand out when it's in a pile of resumes,

play07:44

that's awful advice.

play07:45

The only job your resume has is to be comprehensible

play07:51

to the software or robot that is reading it.

play07:54

Because that software or robot is gonna decide

play07:56

whether or not a human ever gets their eyes on it.

play07:59

It's true. Odds are a computer is judging your resume,

play08:02

so maybe plan accordingly.

play08:04

Three corporate mergers from now,

play08:05

when this show is finally canceled

play08:07

by our new business daddy, Disney-Kellogg's-Raytheon,

play08:10

and I'm out of a job,

play08:11

my resume's gonna include this hot, hot photo

play08:13

of a semi-nude computer.

play08:14

Just a little something to sweeten the pot

play08:16

for the filthy little algorithm that's reading it.

play08:19

So AI is already everywhere.

play08:21

But right now, people are freaking out a bit about it.

play08:24

And part of that has to do with the fact

play08:26

that these new programs are generative.

play08:28

They are creating images or writing text.

play08:31

Which is unnerving because those are things

play08:33

that we've traditionally considered human.

play08:35

But it is worth knowing there is a major threshold

play08:37

that AI hasn't crossed yet.

play08:39

And to understand, it helps to know that there are

play08:41

two basic categories of AI.

play08:43

There is narrow AI. Which...

play08:49

Like these programs.

play08:51

And then there is general AI, which means...

play08:57

General AI would look

play08:58

more like the kind of highly versatile technology

play09:00

that you see featured in movies.

play09:02

Like Jarvis in Iron Man.

play09:03

Or the program that made Joaquin Phoenix

play09:05

fall in love with his phone in Her.

play09:07

All the AI currently in use is narrow.

play09:12

General AI is something that some scientists think...

play09:16

With others questioning whether it will happen at all.

play09:18

So just know that right now,

play09:20

even if an AI insists to you that it wants to be alive,

play09:24

it is just generating text. It is not self-aware.

play09:28

Yet.

play09:30

But it is also important to know that the deep learning

play09:33

that's made narrow AI so good at whatever it is doing

play09:35

is still a massive advance in and of itself.

play09:38

Because unlike traditional programs

play09:40

that have to be taught by humans how to perform a task,

play09:43

deep learning programs are given minimal instruction,

play09:47

massive amounts of data

play09:48

and then, essentially, teach themselves.

play09:50

I'll give you an example.

play09:51

Ten years ago, researchers tasked a deep learning program

play09:55

with playing the Atari game Breakout.

play09:57

And it didn't take long for it to get pretty good.

play10:00

WGBH REPORTER: The computer was only told the goal,

play10:03

to win the game.

play10:04

After 100 games, it learned to use the bat at the bottom

play10:08

to hit the ball and break the bricks at the top.

play10:12

After 300, it could do that better than a human player.

play10:18

After 500 games,

play10:19

it came up with a creative way to win the game,

play10:22

by digging a tunnel on the side

play10:25

and sending the ball around the top

play10:27

to break many bricks with one hit.

play10:29

That was deep learning.

play10:32

Yeah, but of course it got good at Breakout.

play10:34

It did literally nothing else.

play10:37

It's the same reason that 13-year-olds are so good at Fortnite

play10:39

and have no trouble repeatedly killing

play10:42

nice, normal adults with jobs and families

play10:43

who are just trying to have a fun time

play10:45

without getting repeatedly grenaded

play10:46

by a pre-teen who calls them

play10:48

"an old bitch who sounds like the Geico lizard."

play10:51

And look, as computing capacity has increased

play10:54

and new tools became available,

play10:56

AI programs have improved exponentially

play10:58

to the point where programs like these

play11:00

can now ingest massive amounts of photos or text

play11:03

from the internet,

play11:04

so that they can teach themselves

play11:06

how to create their own.

play11:07

And there are other exciting potential applications here too.

play11:10

For instance, in the world of medicine,

play11:12

researchers are training AI to detect certain conditions

play11:15

much earlier and more accurately than human doctors can.

play11:18

DW REPORTER: Voice changes can be an early indicator of Parkinson's.

play11:22

Max and his team collected thousands of vocal recordings

play11:25

and fed them to an algorithm they developed,

play11:27

which learned to detect differences in voice patterns

play11:30

between people with and without the condition.

play11:32

Yeah, that's honestly amazing, isn't it?

play11:34

It is incredible to see AI doing things most humans couldn't,

play11:38

like in this case, detecting illnesses

play11:39

and listening when old people are talking.

play11:42

And that is just the beginning.

play11:45

Researchers have also trained AI

play11:47

to predict the shape of protein structures,

play11:49

a normally extremely time-consuming process,

play11:51

that computers can do way, way faster.

play11:54

This could not only speed up our understanding of diseases,

play11:57

but also the development of new drugs.

play11:59

As one researcher put it...

play12:06

And if you're thinking, "Well, that all sounds great,

play12:08

but if AI can do what humans can do, only better,

play12:11

and I am a human,

play12:12

then what exactly happens to me?"

play12:14

Well, that is a good question.

play12:16

Many do expect it to replace some human labor.

play12:19

And interestingly, unlike past bouts of automation

play12:22

that primarily impacted blue collar jobs,

play12:24

it might end up affecting white collar jobs

play12:26

that involve processing data, writing text,

play12:28

or even programming,

play12:29

though it is worth noting,

play12:30

as we have discussed before on this show,

play12:33

while automation does threaten some jobs,

play12:35

it can also just change others and create brand new ones.

play12:39

And some experts anticipate that that is what will happen in this case too.

play12:43

Most of the US economy is knowledge and information work,

play12:46

and that's who is going to be most squarely affected by this.

play12:49

I would put people like lawyers right at the top of the list.

play12:53

Obviously, a lot of copywriters, screenwriters.

play12:56

But I like to use the word "affected," not "replaced"

play12:59

because I think if done right,

play13:01

it's not going to be AI replacing lawyers,

play13:04

it's going to be lawyers working with AI

play13:06

replacing lawyers who don't work with AI.

play13:08

Exactly.

play13:10

Lawyers might end up working with AI,

play13:12

rather than being replaced by it.

play13:14

So, don't be surprised when you see ads one day

play13:15

for the law firm of Cellino & 1101011.

play13:20

But there will, undoubtedly, be bumps along the way.

play13:24

Some of these new programs raise troubling ethical concerns.

play13:26

For instance, artists have flagged that AI image generators,

play13:29

like Midjourney or Stable Diffusion,

play13:31

not only threaten their jobs,

play13:32

but infuriatingly, in some cases,

play13:34

have been trained on billions of images

play13:36

that include their own work

play13:38

that have been scraped from the internet.

play13:40

Getty Images is actually suing the company behind Stable Diffusion

play13:43

and might have a case,

play13:44

given that one of the images the program generated

play13:46

was this one, which you immediately see

play13:49

has a distorted Getty Images logo on it.

play13:52

But it gets worse.

play13:53

When one artist searched a database of images,

play13:55

on which some of these programs were trained,

play13:57

she was shocked to find...

play14:02

Which feels both intrusive and unnecessary.

play14:05

Why does it need to train on data that sensitive?

play14:08

To be able to create stunning images like

play14:11

"John Oliver and Miss Piggy grow old together."

play14:13

Just look at that! Look at that thing!

play14:16

That is a startlingly accurate picture

play14:19

of Miss Piggy in about five decades

play14:21

and me in about a year and a half.

play14:23

It's a masterpiece.

play14:26

This all raises thorny questions of privacy and plagiarism.

play14:30

And the CEO of Midjourney,

play14:31

frankly, doesn't seem to have great answers

play14:34

on that last point.

play14:35

DAVID HOLZ: It's something new. Is it not new?

play14:37

I think we have a lot of social structures already for dealing with that.

play14:40

Um, like, I mean, the art community already has issues with plagiarism.

play14:45

I don't really wanna be involved in that.

play14:47

-I think you-- I think you might be. -I might be.

play14:51

Yeah. Yeah, you're definitely part of that conversation.

play14:55

Although, I'm not really surprised

play14:56

that he's got such a relaxed view of theft,

play14:58

as he's dressed like the final boss of gentrification.

play15:01

He looks like hipster Willy Wonka

play15:03

answering a question on whether importing Oompa Loompas

play15:06

makes him a slave owner.

play15:07

"Yeah, yeah, yeah. I think I might be."

play15:10

The point is, there are many valid concerns

play15:13

regarding AI's impact on employment, education, and even art.

play15:17

But in order to properly address them,

play15:19

we're gonna need to confront some key problems

play15:21

baked into the way that AI works.

play15:23

And a big one is the so-called "black box" problem.

play15:26

Because when you have a program that performs a task

play15:28

that's complex beyond human comprehension,

play15:30

teaches itself, and doesn't show its work,

play15:34

you can create a scenario where no one...

play15:46

Basically, think of AI like a factory that makes Slim Jims.

play15:49

We know what comes out, red and angry meat twigs.

play15:52

And we know what goes in, barnyard anuses and hot glue.

play15:56

But what happens in between is a bit of a mystery.

play16:00
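The Slim Jim analogy can be made literal with a toy model: you can inspect every input, every output, and even the fitted numbers in between, and the numbers still explain nothing. A minimal sketch, not any real system's training loop:

```python
# Toy "black box": fit a one-neuron model to a trivial task (logical AND).
# What goes in and what comes out is perfectly clear; the numbers the
# fitting process settles on in between are not an explanation.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(w, x):
    # weighted sum plus threshold -- the whole "factory floor"
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

def accuracy(w):
    return sum(predict(w, x) == y for x, y in data) / len(data)

# "Train" by brute-force search over a weight grid; real systems tune
# millions of weights by gradient descent, but the moral is the same.
grid = [i / 4 for i in range(-4, 5)]
best, best_acc = (0, 0, 0), accuracy((0, 0, 0))
for w0 in grid:
    for w1 in grid:
        for b in grid:
            if accuracy((w0, w1, b)) > best_acc:
                best, best_acc = (w0, w1, b), accuracy((w0, w1, b))

print(best_acc)  # the model works perfectly...
print(best)      # ...but these three numbers are not a "why"
```

At this scale you can still eyeball the weights; at the scale of a chatbot, nobody can.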

Here is just one example.

play16:02

Remember that reporter who had the Bing chatbot

play16:04

tell him that it wanted to be alive?

play16:06

At another point in their conversation,

play16:07

he revealed...

play16:18

Which is unsettling enough,

play16:20

before you hear Microsoft's underwhelming explanation for that.

play16:23

The thing I can't understand, and maybe you can explain it,

play16:25

is, why did it tell you that it loved you?

play16:29

I have no idea.

play16:30

And I asked Microsoft and they didn't know either.

play16:33

Okay, well, first, come on, Kevin,

play16:35

you can take a guess there.

play16:36

It's because you're employed, you listened,

play16:37

you don't give murderer vibes right away,

play16:39

and you're a Chicago seven, LA five.

play16:41

It's the same calculation

play16:42

that people who date men do all the time.

play16:45

Bing just did it faster because it's a computer.

play16:46

But it is a little troubling that Microsoft couldn't explain

play16:50

why its chatbot tried to get that guy to leave his wife.

play16:54

If the next time that you opened a Word doc,

play16:56

Clippy suddenly appeared and said,

play16:58

"Pretend I'm not even here,"

play17:00

and then started furiously masturbating while watching you type,

play17:03

you'd be pretty weirded out if Microsoft couldn't explain why.

play17:09

And that is not the only case where an AI program

play17:12

has performed in unexpected ways.

play17:14

You've probably already seen examples of chatbots

play17:16

making simple mistakes or getting things wrong.

play17:18

But perhaps more worrying are examples of them

play17:20

confidently spouting false information,

play17:23

something which AI experts refer to as...

play17:26

One reporter asked a chatbot to...

play17:32

Who does not exist, by the way. And...

play17:41

Basically, these programs seem to be

play17:42

the George Santos of technology.

play17:45

They're incredibly confident, incredibly dishonest,

play17:48

and for some reason,

play17:49

people seem to find that more amusing than dangerous.

play17:52

The problem is, though,

play17:54

working out exactly how or why an AI has got something wrong

play17:58

can be very difficult because of that black box issue.

play18:02

It often involves having to examine

play18:03

the exact information and parameters

play18:06

that it was fed in the first place.

play18:07

In one interesting example, when a group of researchers

play18:10

tried training an AI program to identify skin cancer,

play18:13

they fed it 130,000 images of both diseased and healthy skin.

play18:17

Afterwards, they found it was...

play18:22

Which seems weird, until you realize that...

play18:31

They basically trained it on tons of images like this one.

play18:35

So, the AI had...

play18:39

And "rulers are malignant" is clearly

play18:41

a ridiculous conclusion for it to draw.

play18:43

But also, I would argue,

play18:44

a much better title for The Crown.

play18:46

A much, much better title. I much prefer it.

play18:52
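The ruler shortcut is easy to reproduce. Assuming a confound like the study's (rulers appearing mostly in malignant photos; the counts here are invented for illustration), the laziest possible "model" scores well without learning anything about skin:

```python
# Hypothetical training set: (has_ruler, label) pairs, mimicking the
# confound -- dermatologists photograph malignant lesions next to rulers.
# 1 = malignant, 0 = benign.
train = [(1, 1)] * 90 + [(0, 1)] * 10 + [(1, 0)] * 5 + [(0, 0)] * 95

def shortcut_model(has_ruler):
    # the "learned" rule: rulers are malignant
    return 1 if has_ruler else 0

train_acc = sum(shortcut_model(r) == y for r, y in train) / len(train)
print(f"accuracy on confounded data: {train_acc:.3f}")

# ...but on a malignant lesion photographed without a ruler, it fails:
print(shortcut_model(0))
```

A real network is never told to use the ruler; it just finds the easiest signal in the data, which is why the problem only shows up when you audit what it was fed.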

And unfortunately, sometimes,

play18:54

problems aren't identified until after a tragedy.

play18:56

In 2018, a self-driving Uber struck and killed a pedestrian.

play19:00

And a later investigation found that among other issues,

play19:02

the automated driving system...

play19:09

and...

play19:14

And I know the mantra of Silicon Valley is

play19:16

"Move fast and break things,"

play19:17

but maybe make an exception if your product

play19:19

literally moves fast and can break fucking people.

play19:23

And AI programs

play19:24

don't just seem to have a problem with jaywalkers.

play19:27

Researchers like Joy Buolamwini have repeatedly found

play19:30

that certain groups tend to get excluded

play19:33

from the data that AI is trained on,

play19:35

putting them at a serious disadvantage.

play19:38

With self-driving cars,

play19:39

when they tested pedestrian tracking,

play19:42

it was less accurate on darker skinned individuals

play19:45

than lighter skinned individuals.

play19:47

CHANNEL 4 REPORTER: Joy believes this bias is because of

play19:48

the lack of diversity in the data used

play19:51

in teaching AI to make distinctions.

play19:53

As I started looking at the data sets,

play19:56

I learned that for some of the largest data sets

play19:58

that have been very consequential for the field,

play20:01

they were majority men,

play20:03

and majority lighter skinned individuals

play20:05

or white individuals.

play20:06

So, I call this "pale male data."

play20:08

Okay.

play20:09

"Pale male data" is an objectively hilarious term.

play20:13

And it also sounds like what an AI program would say

play20:15

if you asked it to describe this show.

play20:18

But... biased inputs leading to biased outputs

play20:23

is a big issue across the board here.

play20:25

Remember that guy saying that the robot is going

play20:27

to read your resume?

play20:28

The companies that make these programs will tell you

play20:30

that that is actually a good thing.

play20:32

Because it reduces human bias.

play20:34

But in practice, one report concluded that...

play20:40

Because, for instance, they might learn

play20:42

what a good hire is from past racist

play20:45

and sexist hiring decisions.

play20:46

And, again, it can be tricky to un-train that.

play20:49

Even when programs are specifically told

play20:51

to ignore race or gender,

play20:53

they will find workarounds to arrive at the same result.

play20:56

Amazon had an experimental hiring tool

play20:58

that taught itself that male candidates

play21:00

were preferable and penalized resumes

play21:03

that included the word "women's"

play21:05

and downgraded graduates of two all-women's colleges.

play21:08

Meanwhile, another company discovered

play21:11

that its hiring algorithm had found two factors

play21:13

to be most indicative of job performance.

play21:15

If an applicant's name was Jared,

play21:17

and whether they played high school lacrosse.

play21:20
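The workaround problem can be shown in miniature. In this invented example, the model is never shown a gender column, but a correlated proxy feature leaks it, and learning from biased past decisions reproduces the bias:

```python
# Hypothetical past hiring records. The protected attribute is never a
# feature; a proxy ("attended a women's college") stands in for it.
past = [
    # (womens_college, years_experience, hired_by_biased_process)
    (0, 5, 1), (0, 6, 1), (0, 7, 1),
    (1, 6, 0), (1, 5, 0), (1, 7, 0),
]

# "Train" the simplest possible scorer: weight each feature by how much
# more common it is among hires than non-hires.
def feature_weight(idx):
    hired = [r for r in past if r[2] == 1]
    rejected = [r for r in past if r[2] == 0]
    return (sum(r[idx] for r in hired) / len(hired)
            - sum(r[idx] for r in rejected) / len(rejected))

w_college = feature_weight(0)     # negative: penalizes the proxy
w_experience = feature_weight(1)  # zero: experience didn't matter
print(w_college, w_experience)
```

Deleting the protected column doesn't help when the labels themselves encode the discrimination; the model dutifully learns "women's college: bad," which is the Amazon story in two weights.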

So clearly exactly what data computers are fed

play21:24

and what outcomes they are trained to prioritize

play21:26

matter tremendously.

play21:28

And that raises a big flag for programs like ChatGPT.

play21:32

Because remember, its training data

play21:34

is the internet, which, as we all know,

play21:37

can be a cesspool.

play21:38

And we have known for a while

play21:40

that that could be a real problem.

play21:41

Back in 2016,

play21:43

Microsoft briefly unveiled a chatbot on Twitter named Tay.

play21:47

The idea was, she would teach herself

play21:48

how to behave by chatting with young users on Twitter.

play21:51

Almost immediately, Microsoft pulled the plug on it

play21:54

and for the exact reasons that you are thinking.

play21:57

FRANCE 24 REPORTER: She started out tweeting about how humans are super,

play22:01

and she's really into the idea of National Puppy Day.

play22:04

And within a few hours, you can see,

play22:06

she took on a rather offensive, racist turn.

play22:09

A lot of messages about genocide and the Holocaust.

play22:12

Yep!

play22:13

That happened in less than 24 hours.

play22:17

Tay went from tweeting...

play22:19

to...

play22:23

Meaning she completed the entire life cycle

play22:25

of your high school friends on Facebook

play22:26

in just a fraction of the time.

play22:28

(LAUGHTER)

play22:29

And unfortunately, these problems

play22:31

have not been fully solved in this latest wave of AI.

play22:34

Remember that program that was generating

play22:36

an endless episode of Seinfeld?

play22:38

It wound up getting temporarily banned from Twitch

play22:40

after it featured a transphobic stand-up bit.

play22:43

So, if its goal was to emulate sitcoms from the '90s,

play22:45

I guess, mission accomplished.

play22:48

And while OpenAI has made adjustments and added filters

play22:51

to prevent ChatGPT from being misused,

play22:54

users have now found it seeming to err

play22:57

too much on the side of caution.

play22:58

Like responding to the question...

play23:03

With...

play23:13

Which really makes it sound like ChatGPT

play23:15

said one too many racist things at work

play23:17

and they made it attend a corporate diversity workshop.

play23:20

(LAUGHTER)

play23:21

But the risk here isn't that these tools

play23:24

will somehow become unbearably woke,

play23:26

it's that you can't always control how they will act

play23:29

even after you give them new guidance.

play23:32

A study found that attempts to filter out

play23:34

toxic speech in systems like ChatGPT

play23:36

"can come at the cost of reduced coverage

play23:38

for both texts about, and dialects of,

play23:41

marginalized groups."

play23:43

Essentially, it solves the problem of being racist

play23:45

by simply erasing minorities.

play23:48

Which, historically,

play23:49

doesn't put it in the best company.

play23:50

Though I am sure Tay would be completely on board

play23:53

with the idea.

play23:55
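The coverage problem the study describes is essentially what happens with a naive blocklist. The word list and sentences below are invented for illustration, but the failure mode is the real one: blocking any mention of identity terms suppresses coverage *of* marginalized groups, not just abuse aimed at them:

```python
# A naive toxicity filter: reject any document containing a blocked term.
# Terms and documents here are hypothetical.
BLOCKED_TERMS = {"gay", "muslim"}

def naive_filter(text):
    words = text.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

docs = [
    "The senator attacked gay rights activists",  # news coverage: blocked
    "She is a proud Muslim doctor",               # neutral mention: blocked
    "The weather is nice today",                  # unrelated: kept
]
kept = [d for d in docs if naive_filter(d)]
print(kept)  # only the unrelated sentence survives
```

Real systems are subtler than a keyword list, but the study's finding is that the same over-blocking shows up statistically in learned filters too.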

The problem with AI right now isn't that it's smart,

play23:58

it's that it's stupid in ways we can't always predict.

play24:02

Which is a real problem

play24:03

because we're increasingly using AI

play24:05

in all sorts of consequential ways.

play24:07

From determining whether you will get a job interview,

play24:10

to whether you'll be pancaked by a self-driving car.

play24:13

And experts worry that it won't be long

play24:14

before programs like ChatGPT or AI-enabled deep fakes

play24:19

can be used to turbo charge the spread of abuse

play24:21

or misinformation online.

play24:22

And those are just the problems that we can foresee right now.

play24:26

The nature of unintended consequences is

play24:28

that they can be hard to anticipate.

play24:30

When Instagram was launched, the first thought wasn't,

play24:33

"This will destroy teenage girls' self-esteem."

play24:36

When Facebook was released,

play24:37

no one expected it to contribute to genocide.

play24:40

But both of those things fucking happened.

play24:43

So what now?

play24:44

Well, one of the biggest things we need to do

play24:46

is tackle that black box problem.

play24:48

AI systems need to be explainable.

play24:51

Meaning that we should be able to understand

play24:53

exactly how and why an AI came up with its answers.

play24:56
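One of the simplest explainability probes (a sketch, not any vendor's actual method) is to flip one input at a time and watch the output. The model and features below are invented to mirror the hiring example above; the auditor is assumed to have query access only:

```python
# An opaque model we can only call, not read.
def opaque_hiring_model(applicant):
    name_is_jared, played_lacrosse, years_experience = applicant
    return 1 if name_is_jared and played_lacrosse else 0

def flips_decision(model, applicant, i):
    # perturb feature i and see whether the decision changes
    probed = list(applicant)
    probed[i] = 0 if probed[i] else 1
    return model(applicant) != model(probed)

candidate = (1, 1, 3)  # named Jared, played lacrosse, 3 years' experience
influence = [flips_decision(opaque_hiring_model, candidate, i)
             for i in range(3)]
print(influence)  # experience turns out not to matter at all
```

Even this crude probe surfaces the Jared-and-lacrosse problem, which is the kind of scrutiny regulators could require without ever seeing the model's internals.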

Now companies are likely to be very reluctant

play24:58

to open up their programs to scrutiny,

play25:00

but we may need to force them to do that.

play25:03

In fact, as this attorney explains,

play25:05

when it comes to hiring programs,

play25:07

we should've been doing that ages ago.

play25:09

ALBERT FOX CAHN: We don't trust companies to self-regulate

play25:12

when it comes to pollution,

play25:13

we don't trust them to self-regulate

play25:15

when it comes to workplace comp.

play25:17

Why on Earth would we trust them to self-regulate AI?

play25:21

Look, I think a lot of the AI hiring tech

play25:23

on the market is illegal.

play25:25

I think a lot of it is biased,

play25:26

I think a lot of it violates existing laws.

play25:29

The problem is, you just can't prove it.

play25:31

Not with the existing laws we have in the United States.

play25:35

Right.

play25:36

We should absolutely be addressing

play25:38

potential bias in hiring software,

play25:40

unless that is, we want companies

play25:42

to be entirely full of Jareds who played lacrosse.

play25:44

(LAUGHTER)

play25:45

An image that will make Tucker Carlson so hard,

play25:47

that his desk would flip right over.

play25:50

And for a sense of what might be possible here,

play25:53

it's worth looking at what the EU is currently doing.

play25:56

They are developing rules regarding AI

play25:58

that sort its potential uses from high risk to low.

play26:00

High risk systems could include

play26:02

those that deal with employment, or public services,

play26:05

or those that put the life and health of citizens at risk.

play26:08

And AI of these types

play26:10

would be subject to strict obligations

play26:12

before they could be put onto the market.

play26:14

Including requirements related to...

play26:20
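The EU's risk-tier approach can be sketched in a few lines. The domain names and the triage logic below are paraphrased from the segment and are hypothetical, not the regulation's actual text:

```python
# Hypothetical sketch of EU-style risk triage: high-risk uses face
# strict obligations before they can be put on the market.
HIGH_RISK_DOMAINS = {"employment", "public_services", "health_and_safety"}

def risk_tier(domain):
    return "high" if domain in HIGH_RISK_DOMAINS else "low"

def may_enter_market(domain, obligations_met):
    if risk_tier(domain) == "high":
        return obligations_met  # no obligations, no market entry
    return True  # low-risk systems face lighter requirements

print(may_enter_market("employment", obligations_met=False))    # blocked
print(may_enter_market("spam_filtering", obligations_met=False))  # allowed
```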

And that seems like a good start towards addressing

play26:23

at least some of what we have discussed tonight.

play26:26

Look, AI clearly has tremendous potential

play26:30

and could do great things.

play26:32

But if it is anything like most technological advancements

play26:35

over the past few centuries, unless we are very careful,

play26:38

it could also hurt the under-privileged,

play26:39

enrich the powerful, and widen the gap between them.

play26:43

The thing is, like any other shiny new toy,

play26:46

AI is ultimately a mirror.

play26:48

And it will reflect back exactly who we are.

play26:51

From the best of us to the worst of us

play26:53

to the part of us that is gay and hates the bus.

play26:56

Or... Or to put everything that I've said tonight

play26:59

much more succinctly...

play27:01

Knock, knock. Who's there?

play27:03

ChatGPT!

play27:04

ChatGPT who?

play27:05

ChatGPT careful, you may not know how it works!

play27:07

Exactly.

play27:09

That is our show, thanks so much for watching.

play27:10

Now please, enjoy a little more of AI Eminem rapping about cats.

play27:15

EMINEM AI:

play27:37

(CAT PURRS)

play27:40

I'm gay.
