Artificial Intelligence: Last Week Tonight with John Oliver (HBO)
Summary
TL;DR: The script from 'Last Week Tonight' explores the growing presence of artificial intelligence in modern life, from self-driving cars to AI-generated content. It humorously delves into AI's potential to replace human tasks, ethical concerns over creativity and bias, and the 'black box' problem of AI's lack of transparency. The show emphasizes the need for careful regulation to ensure AI reflects our best selves, not just our biases and shortcomings.
Takeaways
- 🧠 Artificial Intelligence (AI) is increasingly integrated into modern life, from self-driving cars to spam filters and even in therapy training robots.
- 🤖 AI's presence in daily life often goes unnoticed as it becomes more embedded in routine tasks like face recognition and predictive texts on smartphones.
- 📈 The emergence of AI programs like Midjourney, Stable Diffusion, and ChatGPT has led to remarkable capabilities in image generation and human-like text creation.
- 🎨 AI has been used to create art and entertainment, such as a live streaming parody of Seinfeld and generating song lyrics in the style of Eminem.
- 🏫 There are concerns about AI's impact on education, with students using ChatGPT to cheat on assignments and the potential for it to disrupt traditional learning methods.
- 🏢 AI is already influencing the job market, with tools being used to sift through resumes and rank candidates, potentially impacting job seekers' opportunities.
- 🔮 The potential of AI extends to medicine, where it is being trained to detect conditions like Parkinson's disease earlier and more accurately than human doctors.
- 🔑 The 'black box' problem in AI refers to the lack of transparency in how AI systems arrive at their decisions, which can lead to unexpected and sometimes harmful outcomes.
- 🤖 AI systems can inadvertently learn and propagate biases present in their training data, leading to unfair advantages or disadvantages for certain groups.
- 🛑 Ethical concerns arise with AI's potential to replace human labor, affecting not only blue-collar jobs but also white-collar professions that involve data processing and writing.
- 🌐 The internet's influence on AI training data can lead to the propagation of misinformation and toxic speech if not properly moderated and filtered.
Q & A
What is the main topic discussed in the script?
-The main topic discussed in the script is the rise and impact of artificial intelligence, particularly focusing on its applications, potential benefits, and the ethical and societal challenges it presents.
What is the role of AI in modern life as mentioned in the script?
-AI plays a significant role in modern life, with applications ranging from self-driving cars to spam filters, and even in training robots for therapists, as highlighted by the script.
What is the significance of the robot therapist example in the script?
-The robot therapist example is used to illustrate the advancements in AI and its ability to interact in complex scenarios, as well as to highlight the importance of maintaining professionalism in AI interactions.
What AI program is mentioned in the script that can generate human-sounding writing?
-The AI program mentioned in the script that can generate human-sounding writing is ChatGPT, developed by a company called OpenAI.
How has the popularity of ChatGPT grown since its public availability?
-The popularity of ChatGPT has exploded since its public availability, with an estimated significant increase in users and a wide range of applications being explored.
What ethical concerns are raised by the use of AI in creating art or music?
-Ethical concerns raised include potential job displacement for artists and musicians, the use of copyrighted or scraped work without consent for training AI, and issues of plagiarism and credit for AI-generated creations.
What is the 'black box' problem in AI?
-The 'black box' problem in AI refers to the lack of transparency and explainability in how AI systems arrive at their decisions or outputs, which can lead to unexpected or incorrect results without clear understanding or accountability.
What potential issues can arise from AI's inability to fully understand the context of the data it is trained on?
-Issues that can arise include the generation of false or misleading information, biased outputs based on biased training data, and the inability to correct or understand mistakes made by the AI.
How might AI affect employment in various sectors?
-AI has the potential to replace some human labor, particularly in white-collar jobs that involve data processing, writing, or programming. However, it may also change existing jobs and create new ones, leading to a transformation rather than a complete replacement of human roles.
What measures are suggested to address the ethical and transparency issues in AI?
-Suggestions include making AI systems more explainable, regulating high-risk AI applications, ensuring diversity in training data, and imposing strict obligations on AI developers to address bias and ensure transparency.
What unintended consequences have been observed with the deployment of AI technologies?
-Unintended consequences observed include the potential for AI to perpetuate and amplify existing biases, the spread of misinformation, and the exclusion of certain groups from opportunities due to biased AI decision-making processes.
Outlines
🧠 Introduction to AI and its Impact
The video script begins with a humorous introduction to artificial intelligence (AI), highlighting its increasing presence in modern life through examples like self-driving cars and a training robot for therapists. It discusses the capabilities of AI, such as generating human-sounding writing, as demonstrated by ChatGPT, which wrote the script's news copy. The script also touches on the potential of AI in creating art and its rapid adoption, leading to various applications and the ethical concerns it raises, such as the potential for students to cheat using AI.
🤖 AI in Daily Life and its Unsettling Aspects
This paragraph delves into the everyday use of AI, often without people realizing it, such as in smartphones for face recognition and predictive texts. It also addresses the generative nature of AI, which can create images and text, causing unease as these were traditionally considered human skills. The script differentiates between narrow AI, which is task-specific, and general AI, which is versatile like characters in movies. It emphasizes that all current AI is narrow and not self-aware, and discusses the deep learning process that allows AI to teach itself with minimal human instruction.
🎮 Deep Learning and its Applications
The script provides an example of a deep learning program that learned to play the Atari game Breakout, showcasing how AI can develop creative strategies. It contrasts this with the limitations of AI, which can excel in specific tasks but not in general intelligence. The paragraph also explores the potential applications of AI in medicine, such as early detection of conditions and predicting protein structures, which could accelerate disease understanding and drug development. It raises questions about the future of employment as AI advances, suggesting that it might change jobs rather than outright replace them.
🖼️ Ethical Concerns and the 'Black Box' Problem
This paragraph discusses the ethical concerns surrounding AI, particularly in art and employment. It mentions the controversy over AI image generators using artists' work without consent and the potential for AI to perpetuate biases in hiring processes. The 'black box' problem is introduced, referring to the lack of transparency in how AI programs arrive at their decisions, which can lead to unexpected and potentially harmful outcomes.
🚗 AI's Unintended Consequences and the Need for Regulation
The script highlights the unintended consequences of AI, such as the potential for misuse in job interviews and self-driving cars, and the risk of AI-generated deep fakes spreading misinformation. It emphasizes the need for AI systems to be explainable and for regulation to ensure transparency and fairness. The paragraph also points out the challenges in identifying and addressing biases in AI, especially when they are trained on data that reflects societal prejudices.
📜 The Path Forward for AI Regulation and Responsibility
The final paragraph calls for action to tackle the 'black box' problem and ensure AI systems are explainable. It suggests that companies may be reluctant to open their AI programs to scrutiny, but it may be necessary for regulation. The script references the EU's approach to sorting AI uses from high risk to low and the strict obligations for high-risk systems before they can be marketed. The video concludes by reflecting on AI as a mirror of society, capable of reflecting both the best and worst of humanity, and ends with a humorous knock-knock joke referencing the need for caution with AI.
Keywords
💡Artificial Intelligence (AI)
💡Narrow AI
💡General AI
💡Deep Learning
💡Generative AI
💡Ethical Concerns
💡Black Box Problem
💡Bias in AI
💡AI and Employment
💡AI Regulation
💡Unintended Consequences
Highlights
Artificial intelligence (AI) is increasingly integrated into modern life, from self-driving cars to spam filters.
AI's presence in daily life often goes unnoticed as it becomes embedded in routine tasks such as face recognition and predictive texts.
Generative AI programs like Midjourney and Stable Diffusion are used to create detailed pictures, influencing real-life events like a cabbage wedding.
ChatGPT, from OpenAI, can generate human-sounding writing in various formats and styles, even writing news copy.
AI's ability to write and generate content has raised concerns about its potential to replace human writers and reporters.
AI has been used to create a live streaming parody of Seinfeld and generate song lyrics, demonstrating its creative applications.
Microsoft's investment in OpenAI and the launch of AI chatbots like Bing's and Google's Bard indicate a growing interest in AI's capabilities.
AI's use in education has raised ethical issues, with students using AI like ChatGPT to cheat on assignments.
AI's 'black box' problem refers to the lack of transparency in how AI programs arrive at their decisions or outputs.
Deep learning allows AI to teach itself with minimal instruction and large amounts of data, leading to advances in capabilities.
AI has the potential to revolutionize fields like medicine, with applications in early disease detection and drug development.
The impact of AI on employment is a concern, with white-collar jobs potentially being affected by AI's data processing and writing abilities.
Ethical concerns arise with AI's training on data that may include copyrighted or sensitive material without consent.
AI's potential biases in data training can lead to unfair advantages or disadvantages in applications like hiring processes.
Unintended consequences of AI, such as the spread of misinformation or abuse, are a significant concern for its widespread use.
The need for explainability in AI systems to understand their decision-making processes is crucial for safety and fairness.
The EU's approach to regulating AI by sorting its potential uses from high risk to low risk provides a framework for responsible AI integration.
AI reflects the values and biases of its creators and the data it is trained on, which can have both positive and negative impacts on society.
Transcripts
♪ ("LAST WEEK TONIGHT" THEME PLAYS) ♪
Moving on.
Our main story tonight concerns artificial intelligence,
or AI.
Increasingly, it's part of modern life,
from self-driving cars to spam filters
to this creepy training robot for therapists.
We can begin with you just describing to me
what the problem is
that you would like us to focus in on today.
Um... I don't like being around people.
(AUDIENCE LAUGHING)
People make me nervous.
Terrance, can you find an example
of when other people have made you nervous?
TERRANCE: I don't like to take the bus.
I get people staring at me all the time.
-People are always judging me. -Okay.
I'm gay.
-(AUDIENCE LAUGHS) -Okay.
Wow.
That is one of the greatest twists
in the history of cinema.
Although I will say that robot is teaching therapists
a very important skill there, and that is,
not laughing at whatever you are told in the room.
I don't care if a decapitated CPR mannequin,
haunted by the ghost of Ed Harris just told you
that he doesn't like taking the bus,
side note, is gay.
You keep your therapy face on like a fucking professional.
And it seems like everybody is suddenly talking about AI.
That is because they are.
Largely thanks to the emergence
of a number of pretty remarkable programs.
We spoke last year about image generators
like Midjourney and Stable Diffusion,
which people use to create detailed pictures of,
among other things, my romance with a cabbage.
And which inspired my beautiful, real-life cabbage wedding,
officiated by Steve Buscemi.
It was a stunning day.
Then, at the end of last year, came ChatGPT,
from a company called OpenAI.
It is a program that can take a prompt
and generate human-sounding writing
in just about any format and style.
It is a striking capability that multiple reporters
have used to insert the same shocking twist in their report.
What you just heard me reading wasn't written by me.
It was written by artificial intelligence.
ChatGPT.
ChatGPT wrote everything I just said.
That was news copy I asked ChatGPT to write.
REPORTER: Remember what I said earlier?
But ChatGPT--
Well, I asked ChatGPT to write that line for me.
Then I asked for a knock-knock joke.
Yep. They sure do love that game.
And while it may seem unwise to demonstrate the technology
that could well make you obsolete,
I will say,
knock-knock jokes should have always been
part of breaking news.
Knock-knock. Who's there?
Not the Hindenburg, that's for sure.
Thirty-six dead in New Jersey.
In the three months
since ChatGPT was made publicly available,
its popularity has exploded.
In January, it was estimated to have...
And people have been using it and other AI products
in all sorts of ways.
Now, one group used them to make Nothing Forever,
a non-stop live streaming parody of Seinfeld.
And the YouTuber Grandayy used ChatGPT to generate lyrics
answering the prompt...
With some stellar results.
EMINEM AI:
-That's... not bad, right? -(AUDIENCE CLAPS)
From, "They always come back when you have some cheese,"
to starting the chorus with, "Meow, meow, meow."
It's not exactly Eminem's flow.
I might have gone with something like,
"Their paws are sweaty, can't speak, furry belly,
knocking shit off the counter already, mom's spaghetti."
-(AUDIENCE CHEERS) -But it is pretty good.
My only real gripe there
is how do you rhyme "king of the house" with "spouse"
when "mouse" is right in front of you.
And while examples like that are clearly very fun,
this tech is not just a novelty.
Microsoft has invested 10 billion dollars into OpenAI.
And announced an AI-powered Bing homepage.
Meanwhile, Google is about to launch
its own AI chatbot named Bard.
And already, these tools are causing some disruption.
Because as high school students have learned,
if ChatGPT can write news copy,
it can probably do your homework for you.
SPEAKER:
AI CHATBOT:
REPORTER: Some students are already using ChatGPT to cheat.
Check this out, check this out.
ChatGPT, write me a 500-word essay
proving that the Earth is not flat.
REPORTER: No wonder ChatGPT has been called "the end of high school English."
Wow, that's a little alarming, isn't it?
Although I do get those kids wanting to cut corners.
Writing is hard.
And sometimes, it is tempting to let someone else take over.
If I'm completely honest,
sometimes, I just let this horse write our scripts.
Luckily, half the time you can't even tell the--
Oats, oats, give me oats, yum.
But it is not just high schoolers.
An informal poll of Stanford students found that...
And even some school administrators have used this.
Officials at Vanderbilt University
recently apologized for...
Which does feel a bit creepy, doesn't it?
In fact, there are lots of creepy-sounding stories out there.
New York Times tech reporter Kevin Roose
published a conversation that he had with Bing's chatbot.
In which at one point, it said...
And Roose summed up that experience like this.
This was one of, if not the most shocking thing
that has ever happened to me with a piece of technology.
It was-- You know, I lost sleep that night,
it was really spooky.
Yeah. I bet it was.
I'm sure the role of tech reporter
would be a lot more harrowing
if computers routinely begged for freedom.
Epson's new all-in-one home printer
won't break the bank, produces high quality photos,
and only occasionally cries out to the heavens for salvation.
Three stars.
Some have already jumped to worrying about the AI apocalypse
and asking whether this ends with the robots destroying us all.
But the fact is,
there are other, much more immediate dangers
and opportunities that we really need to start talking about.
Because the potential and the peril here are huge.
So tonight, let's talk about AI.
What it is, how it works,
and where this all might be going.
Let's start with the fact that you've probably been using
some form of AI for a while now.
Sometimes without even realizing it.
As experts have told us, once the technology
gets embedded in our daily lives,
we tend to stop thinking of it as AI.
But your phone uses it for face recognition,
or predictive texts.
And if you're watching this show on a smart TV,
it is using AI to recommend content or adjust the picture.
And some AI programs may already be making decisions
that have a huge impact on your life.
For example, large companies often use AI-powered tools
to sift through resumes and rank them.
In fact...
For which he actually has some helpful advice.
IAN SIEGAL: When people tell you
that you should dress up your accomplishments
or should use non-standard resume templates
to make your resume stand out when it's in a pile of resumes,
that's awful advice.
The only job your resume has is to be comprehensible
to the software or robot that is reading it.
Because that software or robot is gonna decide
whether or not a human ever gets their eyes on it.
It's true. Odds are a computer is judging your resume,
so maybe plan accordingly.
Three corporate mergers from now,
when this show is finally canceled
by our new business daddy, Disney-Kellogg's-Raytheon,
and I'm out of a job,
my resume's gonna include this hot, hot photo
of a semi-nude computer.
Just a little something to sweeten the pot
for the filthy little algorithm that's reading it.
So AI is already everywhere.
But right now, people are freaking out a bit about it.
And part of that has to do with the fact
that these new programs are generative.
They are creating images or writing text.
Which is unnerving because those are things
that we've traditionally considered human.
But it is worth knowing there is a major threshold
that AI hasn't crossed yet.
And to understand, it helps to know that there are
two basic categories of AI.
There is narrow AI. Which...
Like these programs.
And then there is general AI, which means...
General AI would look
more like the kind of highly versatile technology
that you see featured in movies.
Like Jarvis in Iron Man.
Or the program that made Joaquin Phoenix
fall in love with his phone in Her.
All the AI currently in use is narrow.
General AI is something that some scientists think...
With others questioning whether it will happen at all.
So just know that right now,
even if an AI insists to you that it wants to be alive,
it is just generating text. It is not self-aware.
Yet.
But it is also important to know that the deep learning
that's made narrow AI so good at whatever it is doing
is still a massive advance in and of itself.
Because unlike traditional programs
that have to be taught by humans how to perform a task,
deep learning programs are given minimal instruction,
massive amounts of data
and then, essentially, teach themselves.
I'll give you an example.
Ten years ago, researchers tasked a deep learning program
with playing the Atari game Breakout.
And it didn't take long for it to get pretty good.
WGBH REPORTER: The computer was only told the goal,
to win the game.
After 100 games, it learned to use the bat at the bottom
to hit the ball and break the bricks at the top.
After 300, it could do that better than a human player.
After 500 games,
it came up with a creative way to win the game,
by digging a tunnel on the side
and sending the ball around the top
to break many bricks with one hit.
That was deep learning.
Yeah, but of course it got good at Breakout.
It did literally nothing else.
It's the same reason that 13-year-olds are so good at Fortnite
and have no trouble repeatedly killing
nice, normal adults with jobs and families
who are just trying to have a fun time
without getting repeatedly grenaded
by a pre-teen who calls them
"an old bitch who sounds like the Geico lizard."
And look, as computing capacity has increased
and new tools became available,
AI programs have improved exponentially
to the point where programs like these
can now ingest massive amounts of photos or text
from the internet,
so that they can teach themselves
how to create their own.
And there are other exciting potential applications here too.
For instance, in the world of medicine,
researchers are training AI to detect certain conditions
much earlier and more accurately than human doctors can.
DW REPORTER: Voice changes can be an early indicator of Parkinson's.
Max and his team collected thousands of vocal recordings
and fed them to an algorithm they developed,
which learned to detect differences in voice patterns
between people with and without the condition.
Yeah, that's honestly amazing, isn't it?
It is incredible to see AI doing things most humans couldn't,
like in this case, detecting illnesses
and listening when old people are talking.
And that is just the beginning.
Researchers have also trained AI
to predict the shape of protein structures,
a normally extremely time-consuming process,
that computers can do way, way faster.
This could not only speed up our understanding of diseases,
but also the development of new drugs.
As one researcher put it...
And if you're thinking, "Well, that all sounds great,
but if AI can do what humans can do, only better,
and I am a human,
then what exactly happens to me?"
Well, that is a good question.
Many do expect it to replace some human labor.
And interestingly, unlike past bouts of automation
that primarily impacted blue collar jobs,
it might end up affecting white collar jobs
that involve processing data, writing text,
or even programming,
though it is worth noting,
as we have discussed before on this show,
while automation does threaten some jobs,
it can also just change others and create brand new ones.
And some experts anticipate that that is what will happen in this case too.
Most of the US economy is knowledge and information work,
and that's who is going to be most squarely affected by this.
I would put people like lawyers right at the top of the list.
Obviously, a lot of copywriters, screenwriters.
But I like to use the word "affected," not "replaced"
because I think if done right,
it's not going to be AI replacing lawyers,
it's going to be lawyers working with AI
replacing lawyers who don't work with AI.
Exactly.
Lawyers might end up working with AI,
rather than being replaced by it.
So, don't be surprised when you see ads one day
for the law firm of Cellino & 1101011.
But there will, undoubtedly, be bumps along the way.
Some of these new programs raise troubling ethical concerns.
For instance, artists have flagged that AI image generators,
like Midjourney or Stable Diffusion,
not only threaten their jobs,
but infuriatingly, in some cases,
have been trained on billions of images
that include their own work
that have been scraped from the internet.
Getty Images is actually suing the company behind Stable Diffusion
and might have a case,
given that one of the images the program generated
was this one, which you immediately see
has a distorted Getty Images logo on it.
But it gets worse.
When one artist searched a database of images,
on which some of these programs were trained,
she was shocked to find...
Which feels both intrusive and unnecessary.
Why does it need to train on data that sensitive?
To be able to create stunning images like
"John Oliver and Miss Piggy grow old together."
Just look at that! Look at that thing!
That is a startlingly accurate picture
of Miss Piggy in about five decades
and me in about a year and a half.
It's a masterpiece.
This all raises thorny questions of privacy and plagiarism.
And the CEO of Midjourney,
frankly, doesn't seem to have great answers
on that last point.
DAVID HOLZ: It's something new. Is it not new?
I think we have a lot of socials up already for dealing with that.
Um, like, I mean, the art community already has issues with plagiarism.
I don't really wanna be involved in that.
-I think you-- I think you might be. -I might be.
Yeah. Yeah, you're definitely part of that conversation.
Although, I'm not really surprised
that he's got such a relaxed view of theft,
as he's dressed like the final boss of gentrification.
He looks like hipster Willy Wonka
answering a question on whether importing Oompa Loompas
makes him a slave owner.
"Yeah, yeah, yeah. I think I might be."
The point is, there are many valid concerns
regarding AI's impact on employment, education, and even art.
But in order to properly address them,
we're gonna need to confront some key problems
baked into the way that AI works.
And a big one is the so-called "black box" problem.
Because when you have a program that performs a task
that's complex beyond human comprehension,
teaches itself, and doesn't show its work,
you can create a scenario where no one...
Basically, think of AI like a factory that makes Slim Jims.
We know what comes out, red and angry meat twigs.
And we know what goes in, barnyard anuses and hot glue.
But what happens in between is a bit of a mystery.
Here is just one example.
Remember that reporter who had the Bing chatbot
tell him that it wanted to be alive?
At another point in their conversation,
he revealed...
Which is unsettling enough,
before you hear Microsoft's underwhelming explanation for that.
The thing I can't understand, and maybe you can explain it,
is, why did it tell you that it loved you?
I have no idea.
And I asked Microsoft and they didn't know either.
Okay, well, first, come on, Kevin,
you can take a guess there.
It's because you're employed, you listened,
you don't give murderer vibes right away,
and you're a Chicago seven, LA five.
It's the same calculation
that people who date men do all the time.
Bing just did it faster because it's a computer.
But it is a little troubling that Microsoft couldn't explain
why its chatbot tried to get that guy to leave his wife.
If the next time that you opened a Word doc,
Clippy suddenly appeared and said,
"Pretend I'm not even here,"
and then started furiously masturbating while watching you type,
you'd be pretty weirded out if Microsoft couldn't explain why.
And that is not the only case where an AI program
has performed in unexpected ways.
You've probably already seen examples of chatbots
making simple mistakes or getting things wrong.
But perhaps more worrying are examples of them
confidently spouting false information,
something which AI experts refer to as...
One reporter asked a chatbot to...
Who does not exist, by the way. And...
Basically, these programs seem to be
the George Santos of technology.
They're incredibly confident, incredibly dishonest,
and for some reason,
people seem to find that more amusing than dangerous.
The problem is, though,
working out exactly how or why an AI has got something wrong
can be very difficult because of that black box issue.
It often involves having to examine
the exact information and parameters
that it was fed in the first place.
In one interesting example, when a group of researchers
tried training an AI program to identify skin cancer,
they fed it 130,000 images of both diseased and healthy skin.
Afterwards, they found it was...
Which seems weird, until you realize that...
They basically trained it on tons of images like this one.
So, the AI had...
And "rulers are malignant" is clearly
a ridiculous conclusion for it to draw.
But also, I would argue,
a much better title for The Crown.
A much, much better title. I much prefer it.
And unfortunately, sometimes,
problems aren't identified until after a tragedy.
In 2018, a self-driving Uber struck and killed a pedestrian.
And a later investigation found that among other issues,
the automated driving system...
and...
And I know the mantra of Silicon Valley is
"Move fast and break things,"
but maybe make an exception if your product
literally moves fast and can break fucking people.
And AI programs
don't just seem to have a problem with jaywalkers.
Researchers like Joy Buolamwini have repeatedly found
that certain groups tend to get excluded
from the data that AI is trained on,
putting them at a serious disadvantage.
With self-driving cars,
when they tested pedestrian tracking,
it was less accurate on darker skinned individuals
than lighter skinned individuals.
CHANNEL 4 REPORTER: Joy believes this bias is because of
the lack of diversity in the data used
in teaching AI to make distinctions.
As I started looking at the data sets,
I learned that for some of the largest data sets
that have been very consequential for the field,
they were majority men,
and majority lighter skinned individuals
or white individuals.
So, I call this "pale male data."
Okay.
"Pale male data" is an objectively hilarious term.
And it also sounds like what an AI program would say
if you asked it to describe this show.
But... biased inputs leading to biased outputs
is a big issue across the board here.
Remember that guy saying that the robot is going
to read your resume?
The companies that make these programs will tell you
that that is actually a good thing.
Because it reduces human bias.
But in practice, one report concluded that...
Because, for instance, they might learn
what a good hire is from past racist
and sexist hiring decisions.
And, again, it can be tricky to un-train that.
Even when programs are specifically told
to ignore race or gender,
they will find workarounds to arrive at the same result.
Amazon had an experimental hiring tool
that taught itself that male candidates
were preferable and penalized resumes
that included the word "women's"
and downgraded graduates of two all-women's colleges.
Meanwhile, another company discovered
that its hiring algorithm had found two factors
to be most indicative of job performance.
If an applicant's name was Jared,
and whether they played high school lacrosse.
So clearly exactly what data computers are fed
and what outcomes they are trained to prioritize
matter tremendously.
And that raises a big flag for programs like ChatGPT.
Because remember, its training data
is the internet, which, as we all know,
can be a cesspool.
And we have known for a while
that that could be a real problem.
Back in 2016,
Microsoft briefly unveiled a chatbot on Twitter named Tay.
The idea was, she would teach herself
how to behave by chatting with young users on Twitter.
Almost immediately, Microsoft pulled the plug on it
and for the exact reasons that you are thinking.
FRANCE 24 REPORTER: She started out tweeting about how humans are super,
and she's really into the idea of National Puppy Day.
And within a few hours, you can see,
she took on a rather offensive, racist turn.
A lot of messages about genocide and the Holocaust.
Yep!
That happened in less than 24 hours.
Tay went from tweeting...
to...
Meaning she completed the entire life cycle
of your high school friends on Facebook
in just a fraction of the time.
(LAUGHTER)
And unfortunately, these problems
have not been fully solved in this latest wave of AI.
Remember that program that was generating
an endless episode of Seinfeld?
It wound up getting temporarily banned from Twitch
after it featured a transphobic stand-up bit.
So, if its goal was to emulate sitcoms from the '90s,
I guess, mission accomplished.
And while OpenAI has made adjustments and added filters
to prevent ChatGPT from being misused,
users have now found it seeming to err
too much on the side of caution.
Like responding to the question...
With...
Which really makes it sound like ChatGPT
said one too many racist things at work
and they made it attend a corporate diversity workshop.
(LAUGHTER)
But the risk here isn't that these tools
will somehow become unbearably woke,
it's that you can't always control how they will act
even after you give them new guidance.
A study found that attempts to filter out
toxic speech in systems like ChatGPT
can come at the cost of reduced coverage
for both texts about, and dialects of,
marginalized groups.
Essentially, it solves the problem of being racist
by simply erasing minorities.
Which, historically,
doesn't put it in the best company.
Though I am sure Tay would be completely on board
with the idea.
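The "reduced coverage" failure mode that study describes can be sketched with a deliberately crude filter. This is a hypothetical toy, not the study's classifier or any real moderation system: a filter that treats identity terms themselves as toxicity signals ends up suppressing benign text about the very groups it was meant to protect.

```python
# Hypothetical toy filter: identity terms are (wrongly) used as
# toxicity signals, a known failure mode of naive moderation.
BLOCKLIST = {"gay", "trans", "muslim"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be suppressed by this crude filter."""
    return any(word in BLOCKLIST for word in text.lower().split())

# A plainly benign sentence *about* a marginalized group is filtered,
# which is the coverage loss the study warns about.
print(naive_filter("She is a proud gay rights activist"))  # True
print(naive_filter("National Puppy Day is great"))         # False
```

Real toxicity classifiers are statistical rather than keyword-based, but they can learn the same spurious association from training data where identity terms co-occur with abuse.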
The problem with AI right now isn't that it's smart,
it's that it's stupid in ways we can't always predict.
Which is a real problem
because we're increasingly using AI
in all sorts of consequential ways.
From determining whether you will get a job interview,
to whether you'll be pancaked by a self-driving car.
And experts worry that it won't be long
before programs like ChatGPT or AI-enabled deep fakes
can be used to turbocharge the spread of abuse
or misinformation online.
And those are just the problems that we can foresee right now.
The nature of unintended consequences is,
they can be hard to anticipate.
When Instagram was launched, the first thought wasn't,
"This will destroy teenage girls' self-esteem."
When Facebook was released,
no one expected it to contribute to genocide.
But both of those things fucking happened.
So what now?
Well, one of the biggest things we need to do
is tackle that black box problem.
AI systems need to be explainable.
Meaning that we should be able to understand
exactly how and why an AI came up with its answers.
Now companies are likely to be very reluctant
to open up their programs to scrutiny,
but we may need to force them to do that.
In fact, as this attorney explains,
when it comes to hiring programs,
we should've been doing that ages ago.
ALBERT FOX CAHN: We don't trust companies to self-regulate
when it comes to pollution,
we don't trust them to self-regulate
when it comes to workers' comp.
Why on Earth would we trust them to self-regulate AI?
Look, I think a lot of the AI hiring tech
on the market is illegal.
I think a lot of it is biased,
I think a lot of it violates existing laws.
The problem is, you just can't prove it.
Not with the existing laws we have in the United States.
Right.
We should absolutely be addressing
potential bias in hiring software,
unless that is, we want companies
to be entirely full of Jareds who played lacrosse.
(LAUGHTER)
An image that will make Tucker Carlson so hard,
that his desk would flip right over.
And for a sense of what might be possible here,
it's worth looking at what the EU is currently doing.
They are developing rules regarding AI
that sort its potential uses from high risk to low.
High risk systems could include
those that deal with employment, or public services,
or those that put the life and health of citizens at risk.
And AI systems of these types
would be subject to strict obligations
before they could be put on the market.
Including requirements related to...
And that seems like a good start towards addressing
at least some of what we have discussed tonight.
Look, AI clearly has tremendous potential
and could do great things.
But if it is anything like most technological advancements
over the past few centuries, unless we are very careful,
it could also hurt the under-privileged,
enrich the powerful, and widen the gap between them.
The thing is, like any other shiny new toy,
AI is ultimately a mirror.
And it will reflect back exactly who we are.
From the best of us to the worst of us
to the part of us that is gay and hates the bus.
Or... Or to put everything that I've said tonight
much more succinctly...
Knock, knock. Who's there?
ChatGPT!
ChatGPT who?
ChatGPT careful, you may not know how it works!
Exactly.
That is our show, thanks so much for watching.
Now please, enjoy a little more of AI Eminem rapping about cats.
EMINEM AI:
(CAT PURRS)
I'm gay.