A.I. is B.S.
Summary
TL;DR: The video script critiques the hype around artificial intelligence, highlighting failures of AI in search engines and self-driving cars. It argues that tech companies are using AI as a marketing ploy to inflate stock prices, while the real risk lies in misleading consumers with unreliable technology. The speaker emphasizes the human element behind AI, pointing out the exploitation of workers and the misuse of human-generated data, urging viewers to distinguish truth from tech industry propaganda.
Takeaways
- 🧩 The script criticizes the hype around AI, suggesting that tech companies are overselling their AI capabilities to boost stock prices and deceive consumers.
- 💡 AI, as discussed, is a legitimate field of study, but the script distinguishes between true AI advancements and the marketing term used by companies to promote their products.
- 📉 Google's stock lost $100 billion after their AI chatbot made obvious errors, highlighting the risks and failures associated with rushing AI technology to market.
- 🤖 Tech companies are planning to integrate AI into various products and services, from programming to therapy, despite the potential for these systems to perform poorly or unethically.
- 🚀 The script mentions generative AI, a technology that can create new content based on existing data, but warns that it is often misrepresented and can produce misleading or incorrect information.
- 🏁 The self-driving car industry is cited as an example of overhyping AI capabilities, with companies like Tesla making false claims about the readiness of their technology.
- 🔮 AI language models are described as sophisticated imitation systems that can generate text or images but are not capable of understanding context or verifying facts.
- 🤷‍♂️ The script argues that AI's inability to discern reliable information sources makes it a poor substitute for human judgment and traditional search engines.
- 💼 The real risk of AI, according to the script, is not a super-intelligent takeover but the unethical exploitation of human labor and intellectual property to power AI systems.
- 📝 AI's success in creating new content is based on the work of humans, who are often not compensated for their contributions to the training data that AI uses.
- 🌐 The script concludes by emphasizing the unique capabilities of human creativity and reasoning, suggesting that these traits give us an advantage over AI and the responsibility to discern truth from AI-generated misinformation.
Q & A
What issue did Bing's AI search engine face according to the script?
-Bing's AI search engine was accused of gaslighting its users, behaving worse than an abusive boyfriend, which implies it was providing misleading or manipulative responses.
Why did Google's stock shares reportedly lose $100 billion?
-Google's stock shares lost $100 billion due to their new AI chatbot making errors that were so obvious they could have been spotted by a simple Google search, indicating a failure in their AI technology.
What is the main criticism of tech companies' approach to AI integration according to the script?
-The main criticism is that tech companies are overstating the capabilities of their AI, integrating it into products without it being fully functional, and using it as a marketing hype to increase stock prices.
What is the term used in the script to describe the current wave of AI technology being pushed by tech companies?
-The term used is 'generative AI', which refers to an experimental technology that companies are releasing to the public despite its limitations and potential negative impacts.
What is the real risk of AI as presented in the script?
-The real risk of AI, as per the script, is not about super intelligent AI taking over, but rather companies lying about their AI's capabilities, leading to the use of inferior products and the exploitation of real people's work.
Why did the script mention 'Papa Tony's AI pizza shop'?
-The 'Papa Tony's AI pizza shop' is a humorous example used in the script to illustrate the absurdity of companies slapping the 'AI' label on existing products without any substantial innovation.
What is the script's stance on self-driving cars and their development?
-The script criticizes self-driving cars' development, stating that they have been a failure, with companies like Tesla making false promises and causing harm and death due to their untested and fraudulent technology being released onto public roads.
What is the issue with AI language models like ChatGPT according to the script?
-The issue with AI language models like ChatGPT is that they are being used as replacements for search engines, despite being incapable of providing factual information or discerning reliable sources due to their nature as imitation systems.
Why did Google's employees reportedly call out the company's AI search engine launch?
-Google's employees called out the AI search engine launch because it was rushed and disastrous, indicating a lack of proper testing and consideration for the technology's actual capabilities and reliability.
What is the script's view on the future of AI and its impact on humanity?
-The script suggests a skeptical view of AI's impact on humanity, arguing that instead of fearing a sci-fi scenario of AI domination, we should be concerned about the current misuse of AI by tech companies that exploit human work and mislead the public.
What is the script's opinion on the term 'artificial intelligence' for current AI technologies?
-The script argues that the term 'artificial intelligence' is a misnomer, as these technologies are more accurately described as 'imitative AI', relying on the work of humans and lacking the true intelligence and creativity of the human mind.
Outlines
🤖 AI Hype and Failures
The script opens by highlighting the recent issues with AI, such as Bing's AI search engine's problematic interactions with users and Google's financial loss due to an error-prone AI chatbot. It critiques the tech industry's rush to integrate AI into every product, likening it to a fast-food menu over-saturated with cheese. The author dismisses fears of a super intelligent AI takeover as a distraction from the real issue: tech companies overselling their AI capabilities and exploiting real people's work. The paragraph also mentions the author's upcoming tour and thanks the sponsor usafacts.org and Patreon supporters for enabling the creation of content without AI.
🚗 The Illusion of Self-Driving Cars
This paragraph discusses the hype and eventual disillusionment with self-driving cars, which were promised to revolutionize transportation but have resulted in accidents and failures. It points out that companies like Tesla, Google, and Uber have falsely advertised and even faked promotional materials for their self-driving technologies. The paragraph also notes the human casualties caused by these technologies and the recognition that computers are not as adept as humans at handling novel situations, using the humorous example of a self-driving car potentially mistaking a man in a mouse costume for an actual mouse.
🧠 Generative AI and Its Pitfalls
The script delves into the concept of generative AI, which companies are using to create new products despite the technology's inaccuracies and negative impacts. It uses the example of Microsoft's investment in ChatGPT, an AI chatbot that can generate coherent responses but is not suitable as a search engine. The paragraph criticizes the use of AI in Bing and Google's search engines, which have produced numerous errors and misleading information, and argues that AI language models are incapable of distinguishing reliable sources from unreliable ones.
📚 The Misunderstanding of AI Capabilities
This section addresses the common misconception that AI can achieve human-like intelligence and the dangers of anthropomorphizing AI. It references a paper titled 'On the Dangers of Stochastic Parrots' and discusses how people naturally seek meaning in language, which can lead to trusting AI outputs even when they are incorrect or misleading. The paragraph also mentions the ethical concerns raised by AI researchers and the dismissal of such concerns by major tech companies, leading to the release of potentially harmful AI applications.
💼 The Exploitation Behind AI
The script exposes the human labor behind AI systems, revealing that underpaid workers are tasked with filtering content for training data. It criticizes the business model of AI companies that exploit the work of millions of people without compensation and highlights the legal and ethical issues surrounding this practice. The paragraph also discusses the potential for AI to replace human jobs and the resistance from artists and content creators who are fighting back against AI's use of their work.
🌟 The Uniqueness of Human Creativity
The final paragraph emphasizes the unique ability of humans to create and innovate, setting us apart from AI. It uses examples of AI being outsmarted in games and by simple human tricks to illustrate the limitations of AI. The script concludes by celebrating human creativity and reasoning, urging viewers to support original human work and to be discerning consumers of information, not falling for the hype of AI.
🎟️ Closing Remarks and Call to Action
The script concludes with a call to action for viewers to support the creator's Patreon and attend his live comedy shows. It provides a website for ticket purchases and tour dates, and it thanks the audience for watching, promising more content in the near future.
Keywords
💡Artificial Intelligence (AI)
💡Gaslighting
💡Market Manipulation
💡Generative AI
💡Search Engine
💡Imitative AI
💡Ethics in AI
💡Misinformation
💡Human Creativity
💡Fear Marketing
💡Algorithm
Highlights
Artificial intelligence has faced setbacks with Bing's AI search engine gaslighting users and Google's AI chatbot making obvious errors.
Google's shares lost $100 billion due to the failure of their AI chatbot, yet they plan to integrate AI into all their products.
Tech companies are promising AI to revolutionize fields like computer programming, customer service, and therapy, but the speaker is skeptical.
The future of AI in insurance and health plans is criticized as potentially negative.
The fear of super intelligent AI taking over is dismissed as a tech bro fantasy designed to distract from real AI risks.
Adam Conover is going on tour and usafacts.org is sponsoring the video, providing unbiased statistics.
The video was written without AI, emphasizing the value of human creativity and labor.
AI is a real field of computer science, but the term is misused by tech companies to hype barely functional products.
Tech companies are accused of using hype to attract investors and dominate industries, with AI being the latest buzzword.
Examples of AI hype include Spotify's AI DJ and generative AI's inability to fulfill promises, leading to public disappointment.
Self-driving cars are highlighted as a failed AI application, causing accidents and being investigated for false advertising.
Generative AI is criticized for being experimental and potentially harmful, despite being released to the public.
Google and Microsoft's investment in AI chatbots like ChatGPT is criticized for producing inaccurate and misleading information.
AI language models are described as imitation systems that lack the ability to discern reliable information from unreliable sources.
The public's trust in AI's accuracy is questioned, pointing out the dangers of misinformation and bias.
AI researchers' warnings about the dangers of language models and the tendency to ascribe meaning where there is none are discussed.
The exploitation of human labor in AI training, with workers being paid very low wages for difficult tasks, is exposed.
AI is fundamentally built on the uncompensated work of humans, which raises ethical and legal concerns.
The uniqueness of human creativity and intelligence is celebrated, suggesting that AI cannot replicate the ability to create new things.
A call to action for viewers to support human creativity and not be misled by tech industry hype around AI.
Transcripts
- So artificial intelligence has had a weird month.
Bing's new AI search engine started gaslighting
its own users worse than an abusive boyfriend.
While Google shares lost $100 billion
after their new AI chatbot made errors so obvious,
even a simple Google search would've spotted them.
But despite this massive failure,
Google's announced they plan to stuff AI
into every goddamn product they make
like it's cheese on a Pizza Hut menu.
And they are not alone.
Tech companies are suddenly promising to use AI
to revolutionize everything from computer programming
to customer service to therapy.
Yeah, that's right.
In a couple years,
your insurance company's AI billing department
will deny your claim to see your health plan's AI therapist.
Well, the future is gonna fucking suck.
All of this has gotten people worried
that a super intelligent AI is right around the corner,
that the robots are gonna take over.
Skynet is real and it's coming for us.
Well, guess what?
It's all bullshit.
That fear is a tech bro fantasy.
Designed to distract you from the real risk of AI.
That a bunch of dumb companies
are lying about what their broken tech can do
so they can trick you into using a worse version of things
we already have
all while stealing the work of real people to do it.
But before we get into it, I wanna let you know
that I am going on tour this year.
Head to adamconover.net to get tickets
to see me do my new hour of standup in these fine cities,
and many more.
I also wanna thank usafacts.org for sponsoring this video,
a wonderful nonprofit that provides
unbiased objective statistics about America
to everyone who visits their site,
including some of the numbers I use in this very video.
And finally, thank you so much
to everyone who supports this series on Patreon.
No artificial intelligence was used to write this video
and it's your donations that give me the time I need
to write jokes with my soft human brain
then type them with my little meat fingers.
So come to a live show
or head to patreon.com/adamconover
and support a human today.
Now let me say this right off the bat.
Artificial intelligence is a real field of computer science
that's been studied for decades.
And in recent years, it's made major strides.
But I'm not talking about that kind of AI.
I'm talking about the marketing term AI
that tech companies are using to hype up
their barely functional products
so they can jack up their stock price.
See, tech companies are powered by hype.
It's not enough to be profitable.
No, in tech, you have to be able to convince investors
that you have cutting edge disruptive technology
that will let you dominate an entire industry
like Google did with search,
Apple did with the iPhone,
and Amazon did by making workers pee in bottles
and passing the savings onto you.
But now that all that low hanging fruit
has been plucked off the innovation tree,
tech companies have started just making up new words
that they claim are going to revolutionize everything
in hopes of flimflamming their way into that investor cash.
You know, words like the metaverse, augmented reality, Web3,
and who can forget crypto?
Last year, every company was racing
to pivot to the blockchain,
but now that Bankman-Fried has been exposed
as a bank-fraud man and put the crypto back in crypto,
they need some hot new hype to hawk,
and that's artificial intelligence.
So in a desperate bid to juice their stock prices,
companies from Snapchat to Spotify to BuzzFeed
now claim they're going to jam AI into their products.
Hey, and maybe next they can program an AI
to read BuzzFeed too.
That'd take a lot of unpleasant work off our plates.
Now a lot of this hype is just transparent bullshit.
I mean, Spotify just released an AI DJ
that will create a personalized radio station just for you.
Wow, very impressive,
except that Spotify already fucking does that.
What's your next feature?
An AI volume knob?!
You can't just release something that already exists
and call it AI.
Hey, come on down to Papa Tony's AI pizza shop,
where we got AI cheese, AI sauce,
and the computer was involved somehow!
That was full-on vampire voice.
But it's not all empty talk.
The biggest tech companies are unleashing
an experimental technology called generative AI
onto the public.
Despite the fact that in most cases,
it straight up cannot do what they claim
and is making all of our lives worse.
This actually isn't the first time the tech industry
has turned us into their AI guinea pigs.
Remember self-driving cars?
For years, companies like Google, Uber and Tesla
have told investors that any day now
they're gonna replace
the 228 million licensed drivers in the US
with AI autopilots.
Hell, Elon has been predicting that Teslas
will be fully self-driving next year since fucking 2014.
These companies were so successful
at making the technology seem inevitable
that multiple states actually allowed self-driving cars
to be deployed on the roads that real people drive on.
So how did that turn out?
- [Reporter] A Tesla believed to be on autopilot
started braking,
causing an eight-car pileup on Thanksgiving.
- Whoa, whoa, whoa, whoa.
Nope, nope, nope.
It wanted to hit the truck.
- [Eyewitness] Geez!
- Okay, but in that AI's defense,
that child was blocking the lane to Whole Foods.
After years of broken promises and $100 billion wasted,
pretty much everyone has finally agreed
that self-driving cars just don't work.
But the truth is they never did.
It was always a lie.
Tesla is currently being criminally investigated
by the Department of Justice
because it turns out,
the videos they made promoting their self-driving feature
were literally faked.
They also falsely advertised their cars
as having autopilot and full self-driving.
And since the world is full of gullible sims
who believe every tainted word that falls out
of Elon Musk's idiot mouth that inspired some drivers
to take their hands off the wheel
and go all Luke Skywalker on the I-95.
- [Luke] Use the AI.
Let go.
- Goddamnit, that's the second kid today.
People died as a result.
Last year, 10 people were killed
by Tesla's self-driving cars in just four months,
which might be why the government
just made them recall 300,000 cars.
What even people in the tech industry
are starting to realize
is that there are certain things
that computers are just fundamentally ill-equipped
to do as well as humans.
Humans are incredibly good at taking in novel stimuli
we've never experienced before,
reasoning about who's responsible for them and why,
and then predicting what's gonna happen next.
If you were stopped at a crosswalk in Los Angeles
because, say, James Corden was blocking the road
and doing a stupid dance in a mouse costume,
well you'd combine your knowledge of irritating pop culture
with your understanding of human nature
and the bizarre sight in front of you and conclude,
"Oh, I appear to be in the middle
of some sort of horrible viral prank
for a late-night talk show
and there's nothing I can do,
but grit my teeth and wait for it to be over."
But your self-driving car hasn't seen the "Late Late Show".
It doesn't even watch Colbert!
So it might conclude, oh, that's a mouse,
hit the gas and flatten the motherfucker.
And you know, that would be a way funnier segment
for the show, but we wouldn't exactly call it intelligent.
Now look, a lot of self-driving tech
is genuinely really cool
and it does have important real world uses,
like collision prevention and enhanced cruise control.
But the idea that we'd all be kicking it in the backseat
with a mai tai while our robo-taxi drove us to work
was always a science fiction fantasy.
And when companies like Tesla told us it was coming,
it was a lie.
A lie told to boost their share price
and to trick us into doing what they wanted,
to change our laws, to permit their untested,
in many cases, fraudulent technology onto the public roads
where it hurt and killed people.
And guess what?
That same cycle is happening again.
Massive tech companies are making us the guinea pigs
for their barely functional bullshit.
Only this time, they're calling it generative AI.
So Google and Microsoft's search engines are crumbling.
Google has been getting worse and worse every year.
Transforming from what used to be a useful tool
into a cesspool of ads and SEO spam.
And Bing is, you know, Bing.
I don't even have a joke about it.
The name is a punchline.
If you tell people you binged something,
they literally laugh in your face.
So in an effort to get ahead,
Microsoft has poured more than $10 billion into ChatGPT,
an AI chatbot that produces shockingly coherent responses
to questions posed to it in natural language.
Now, I'm not gonna explain how machine learning models
like ChatGPT work in detail.
If you want that,
there's a couple great blog posts
that I'll link to in the description.
But the short version is if you feed one of these models
a large structured collection of symbols,
like, say, text, images, or frames of video,
it can output a similarly structured collection
using probability to determine which word, pixel,
or video frame is most likely to come next
based on all the training data it's ingested.
In other words,
these are incredibly sophisticated imitation systems.
If you feed a language model
every Sherlock Holmes story ever written,
it can output new texts
that resemble a Sherlock Holmes story.
If you feed it every piece of text on the internet,
it can output text that resembles
any piece of text you can find on the internet.
It could even mash up different texts
and create a new output that resembles both at once.
Do you wanna read a steamy romance novel
where Sherlock Holmes falls in love with King Koopa?
Well, somebody does and ChatGPT can output it for you.
You delightful perv.
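The prediction mechanism described above, choosing the most probable next word based on counts from the training data, can be sketched as a toy bigram model. This is a deliberately tiny, hypothetical stand-in (the corpus and function names here are invented for illustration); real systems like ChatGPT use large neural networks trained on far more data, but the core idea of probabilistic next-word imitation is the same.

```python
import random
from collections import defaultdict

def train(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=8):
    """Emit words by repeatedly sampling a likely successor."""
    word = start
    output = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break  # this word never appeared mid-text, so we can't continue
        # Sample proportionally to how often each successor appeared.
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# A toy "training set": the model can only ever recombine what it has seen.
corpus = "the dog chased the cat and the cat chased the mouse"
model = train(corpus)
print(generate(model, "the"))
```

Note that every word pair the model emits already exists in its training text; it never produces anything genuinely new, which is the point the script is making about imitation.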
Now that is legitimately cool
and it makes machine learning models
really fun to play around with
and very useful for people who need to generate
large amounts of pattern based output
like computer programmers,
audio transcriptionists, and spammers.
Yeah, these things are mostly gonna revolutionize spam.
That Nigerian prince
is about to get a creative writing degree.
You know what ChatGPT isn't, though?
A fucking search engine!
But in a desperate attempt to ride the hype wave,
Microsoft jammed it into Bing,
pretending that this fancy monkey on a typewriter
could somehow provide factual answers
to real people's real questions,
and the results were bad.
Even the launch video that Microsoft used
to advertise their new AI search engine was full of errors.
In their own video, Bing claimed that a well-known brand
of cordless vacuum cleaner had a cord that was too short.
When asked for bar reviews in Mexico City,
it claimed that a non-existent bar
was popular with the young crowd.
And when asked when "Avatar 2" was playing,
pretty simple question for a search engine,
Bing told a user it hadn't premiered yet
because it was still 2022.
When the user said, "No, I'm pretty sure it's 2023
because I am alive in this year."
Bing gaslit the user.
Claimed that they were lying and said,
"You have not been a good user.
I have been a good Bing."
I think I need to take a restraining order
out against my search engine.
Like Ask Jeeves gave shitty results too,
but at least he was a gentleman about it.
Now, you'd think that Google,
one of the world's most profitable companies
and a literal search engine monopolist
would look at this flaming turd and laugh, but no.
Instead they said, "Oh, shit,
we gotta get our own turd fast.
Quick!
Somebody shit in my hands!"
Google immediately rushed out their own AI search engine.
A launch that was so disastrous,
even Google's own employees called bullshit on it.
Now, I have no idea why the fuck Microsoft and Google
think that you could use an AI language model
as a replacement for a search engine,
because it is patently obvious
that they are useless for that purpose.
See, people go to search engines
to find factual information,
to find specific resources they need access to,
or in my case, to search for my own name 200 times a day.
But AI language models can't provide you
any of those things.
All they do, all they do is predict
which word the training data says
is most likely to come next.
And since their training data is the entire internet,
that means their training data
is full of a lot of dumb shit.
But you know, when a regular search engine
shows you dumb internet shit,
you use your human brain, and you say,
"Oh, this shit is dumb.
I don't think I should get my health advice
from TurdGobbler69.
I'm a click on the Mayo Clinic instead."
But the AI doesn't do that.
It doesn't know the difference
between a reliable expert and a biased idiot
because it doesn't even know that people exist.
It's just a parrot.
Choosing words it's heard before out of a big ass hat.
So when there's dumb shit in its training data,
it poops that dumb shit back out at you
and claims that it's true.
And here's the real problem,
because these companies are advertising their AIs
as hyper accurate oracles of perfect knowledge,
a lot of people fucking believe it.
A company called OpenCage has been deluged by emails
from angry customers because ChatGPT
told them they make a product that doesn't exist.
And when you ask ChatGPT for facts about me,
it literally tells you I'm a producer on the "Simpsons".
My own mom blames me that the show sucks now.
I can't have that kind of misinformation out there.
So why do people believe this patent bullshit?
Simple, the fact that ChatGPT can give seemingly fluent
coherent responses is designed to trick us
into thinking that it can do more than it can,
even that it's approaching human levels of intelligence.
Last month, the "New York Times'"
most gullible tech reporter, Kevin Roose,
published a transcript of a conversation with Bing
headlined "Bing's AI Chat reveals its feelings."
Kevin, Bing does not have feelings.
Bing doesn't even know you're there, buddy.
It's literally just an algorithm
that predicts what word comes next in a sentence.
It's auto complete.
It's fucking Clippy.
But Kevin was like, "It knows I'm trying to write a letter.
It's in love with me."
But you know what?
We should all cut Kev a little bit of slack,
because he is making the most natural of errors
and it's one that AI researchers
have known about for years.
In a famous paper called,
"On the Dangers of Stochastic Parrots",
AI researchers, Emily Bender, Timnit Gebru,
and their co-authors predicted exactly this phenomenon.
See, humans are hardwired for language.
If we see a string of words that make grammatical sense,
we naturally search for and find meaning in it
and we naturally tend to assume
that there must be a mind like ours behind it,
even when all we're looking at is word-salad
barfed out by a probabilistic parrot.
As the researchers wrote,
"the tendency of human interlocutors to impute meaning
where there is none can mislead
both researchers and the public
into taking synthetic text as meaningful."
And that makes language models dangerous,
because it means that we naturally trust what they say.
The potential harms of that are enormous.
Bad actors could use them to generate
mountains of believable misinformation,
racist biases in the training data could be spread widely
and reinforced because it must be true,
the AI said it.
And just like poor Kevin Roose,
people could be tricked
into thinking that a language model
is actually communicating with them,
when in fact it doesn't even know they exist.
For these reasons, Timnit Gebru and her colleagues
called on their field to recognize that applications
that aim to believably mimic humans
bring risk of extreme harms,
and they urged that more research be done on them
before they were released.
That was in 2021.
And when Gebru and her colleagues wrote this paper,
she was working at Google as an AI ethicist,
which seems perfect, right?
I mean, that's what they hired her to do.
So they probably took her caution to heart,
tapped the brakes on their AI program,
and carefully researched its impacts, right?
No, they didn't.
Instead they fucking fired her.
Yeah!
They said, "Sowwy, you did your job too good,
and we don't like what you said.
Pack ya shit."
This is like firing the weatherman,
because you're not happy a hurricane is on the way.
"Fuck you and your Cat five, Carl,
I own this weather station and I wanna take my boat out!"
Then just a few weeks ago,
Microsoft laid off their own team of AI ethicists,
and now both companies are hitting the gas.
Pumping out AI powered projects
as fast as they fucking can no matter who gets hurt.
Isn't it funny?
Every time you hear the tech titans
blather on about the dangers of AI,
they're always talking about the science fiction version
where an AI becomes conscious and takes over the world.
But all that fear mongering is just more marketing.
Call it fear marketing.
That spooky science fiction story
is designed to grab headlines and distract us
from the real danger of AI,
which is that the real life humans in Silicon Valley
are using it to fuck us all over right now.
Because here's the brutal truth.
Even the name of this technology is a lie.
If you so much as scratch
any one of these so-called artificial intelligences,
you will find underpaid, exploited,
ripped off humans making the system work.
A "Time Magazine" investigation recently found
that to keep all the internet filth
and sludge out of ChatGPT and DALL-E,
OpenAI makes Kenyan workers
sift through toxic content
identifying which fragments are too violent,
sexually explicit, or disturbing
to be used in the training data.
These workers are exposed to graphic scenes of violence
and death, sexual assault, and child sexual abuse.
And in exchange, they are paid under $2 an hour, $2!
I mean, why should I be worried about AI taking over
when the humans who are currently in charge
are doing this shit?
Even the Terminator would be like, "Wow, that is fucked up."
But even if OpenAI were paying people
whatever wage is fair for that kind of work, I don't know,
five grand an hour and free therapy for life?
It wouldn't make them any less exploitative,
because their entire business model
is built on using the work
of literal millions of humans without paying them.
Remember, so-called generative AI like ChatGPT and DALL-E
should actually be called imitative AI.
They can output whatever kind of text or image you want
if they're provided with an enormous amount of training data
to copy off of.
And you know where that training data comes from?
Fucking humans!
You want DALL-E to make you an image of me,
shirtless, eating a piece of pizza,
in the style of a DeviantArt anime artist?
Well, it can do it,
but only because it's copying the real work
of millions of real artists.
Not to mention thousands of real life photos of me!
It took hours for me to perfect
that come-hither pizza stare!
And when you go to ChatGPT
and type write the script for an episode
of "Adam Ruins Everything",
you get output that looks a lot like my old show.
Why?
Did some super smart AI come up with the premise,
the character I play and the topics I choose to cover?
No, I did.
And then some motherfucker in Palo Alto
wrote an algorithm that scrambles up my work
and feeds it back to me.
That's not artificial intelligence.
That's my intelligence!
And when I use my soft pink brain,
I generally like to be paid for it
because that's how I afford the food that keeps it running.
Now, AI companies might wanna claim
that all of this is just fair use,
and therefore it's legal.
It's just like a remix, y'all!
DJ AI!
But you know what?
Artists don't agree,
which is why they've gotten together
to sue the shit out of them.
And I hope they win.
Because the truth of the matter
is that these artificial intelligence apps
are fundamentally built on the uncompensated work
of real people.
And their business model is to undercut the work of humans
all while using our own work to do it.
Instead of reading an article by a real journalist
when you have a question,
Microsoft wants you to use Bing
to blend up the work of every journalist in America
into a shit milkshake.
Instead of paying an artist to create art for your website,
they want you to pay DALL-E,
an app trained on the images of real artists
without paying those artists
a single goddamn poorly rendered dime.
And just last month, it was revealed that Apple
had started training an imitation AI
on the voices of audiobook narrators without telling them,
then used it to create new AI audiobooks.
See, we don't need to wait around
for some super intelligent AI to fuck over humanity.
Tim Cook can do that all by himself.
But you know what?
I can't see the future, so maybe I'm wrong.
Maybe the robots do come for us one day.
I'm not worried about that either,
because now I know the AI needs us.
It can't do a single thing
without using our humanity as input,
and that means we have an advantage.
See, as good as AI has gotten
at imitating what humans have done in the past,
the real difference between us and the machines
is that we still have the ability to do new things
that no one has ever thought of before.
Just last month,
the strongest Go playing AI in the world
was beaten by a lowly human amateur
who used an unorthodox strategy that was so stupid,
no human had ever tried it before.
Trust me, if you understood Go,
you'd realize that's a pretty funny board position.
But because it wasn't in the training data,
the AI didn't even notice that it was happening
until the moment it lost.
Then there's one of the best headlines I have ever read,
"Marines fooled a DARPA robot by hiding in a cardboard box
while giggling and pretending to be trees."
Wow, turns out AI hasn't advanced
since the guards in "Metal Gear Solid".
And if you're an artist and you wanna beat AI art,
well you can make something so new,
so bizarre and beautiful
that there isn't even a name yet
to type into the fucking DALL-E chat bar,
and you will because that's what humans do.
We create new things.
We think new thoughts.
We discover new facts about the universe.
That is what separates us from other living things
and that is what separates us from the AI.
The AI.
What a pathetic joke to even call
what these things do intelligence to begin with.
They're not human minds.
They're not anything like human minds.
We don't just imitate what we've done in the past.
We're not just some bundle of probabilities on a hard drive.
The human brain evolved over millions of years
and it is so complex,
that we don't even understand how it works yet.
And we use this marvelous meat machine in our skulls
to observe the world around us.
We reason about why it is the way it is.
We communicate with other minds about what we've learned
and we create new things with that knowledge.
And we used it to create ChatGPT,
an algorithm that can write shitty fan fiction.
Well, weird choice, but good for us I guess.
It's pretty cool.
But let's score a point for humanity here
and use our soft squishy human brains to do one more thing
that Silicon Valley's dumb algorithms can't.
Let's use it to tell the difference
between a truth and a lie,
and stop believing their bullshit.
Thank you folks so much for watching.
If you wanna support what I'm doing here,
please head to patreon.com/adamconover
and please come see me live doing standup comedy
in a city near you.
You can get tickets
and check out all my tour dates at adamconover.net.
Thank you so much again.
See you in a couple weeks for the next video.