AI Just Changed Everything … Again
Summary
TL;DR: The video script by Matt Ferrell from 'Undecided' delves into the complexities of generative AI, emphasizing its long-standing history and recent advancements like OpenAI's ChatGPT 4.0. Ferrell highlights the ethical concerns surrounding AI's data consumption and its potential to replace human creativity. He also addresses the lack of transparency in AI development and the environmental impact of training models. Despite the challenges, Ferrell showcases positive AI applications, like language dubbing and content summarization, and encourages viewers to consider their stance on AI's role in society and its future implications.
Takeaways
- 🧠 AI is not a new concept; generative AI, including large language models and image generators, has been evolving for a long time.
- 💬 ChatGPT 4.0 is a significant milestone in AI, but it's part of a larger trend of AI development rather than a sudden breakthrough.
- 👀 The public is both excited and concerned about AI advancements, reflecting a mix of optimism and apprehension about the future.
- 🔍 Generative AI works by identifying patterns in data and using them to create new outputs, but it is not the same as artificial general intelligence (AGI).
- 🕵️‍♂️ AI's history dates back to the 1940s, with foundational work on algorithms and neural networks, showing a long-standing interest in AI development.
- 🎨 Concerns about AI center around the use of human creativity and data without consent or compensation, raising ethical questions about its development and use.
- 🤖 The reliance on human labor for training AI systems, such as content moderation and data labeling, highlights the ongoing need for human involvement in AI processes.
- 🔮 The 'black box' nature of AI decision-making processes raises transparency and accountability issues, especially when AI is used to make critical decisions.
- 🌐 The sudden accessibility and mass appeal of AI technologies have amplified public awareness and scrutiny of AI's impact on society and the workforce.
- 🛠️ AI tools have practical applications that can streamline tasks and enhance productivity, such as language translation and content creation.
- ♻️ The environmental impact of training and running AI models, including the significant consumption of electricity and water, is a growing concern.
Q & A
What is the primary focus of the video?
-The primary focus of the video is to discuss the current state and implications of generative AI, particularly in light of recent advancements such as OpenAI's ChatGPT 4.0.
What does the speaker mean by 'AI isn’t new'?
-The speaker means that AI technologies have been developing for many years, even though the recent advancements in generative AI have brought it to the forefront of public attention.
What are some examples of generative AI mentioned in the video?
-Examples of generative AI mentioned include large language models like ChatGPT and image generators like Midjourney.
What is the difference between AI and AGI as discussed in the video?
-AI, as discussed, refers to systems designed for specific tasks using pattern recognition, whereas AGI (Artificial General Intelligence) refers to hypothetical systems capable of performing any intellectual task that a human can do, which remains a goal rather than a reality.
What concerns are raised about the use of data in training AI models?
-Concerns include the ethical implications of using vast amounts of human-created data without consent, the potential for AI to replace human creativity, and the lack of a social contract for AI training.
What historical examples of AI are provided to illustrate its long development history?
-Historical examples include ELIZA, an early chatbot from the 1960s, and the perceptron, an early neural network model developed in the 1950s.
How does the video address the suddenness of AI advancements?
-The video explains that while AI has been around for decades, its recent rapid development and widespread accessibility as consumer products have made its impact more noticeable and concerning to the public.
What ethical issues related to AI development and deployment are highlighted?
-Ethical issues include the exploitation of human labor for training data, the lack of transparency in AI operations, and the environmental impact of AI training and usage.
What are some positive applications of AI mentioned in the video?
-Positive applications include AI tools for dubbing videos in multiple languages, automating tedious tasks, aiding in medical discoveries, and enhancing productivity in various fields.
What does the speaker suggest about the future handling of AI technologies?
-The speaker suggests that we should hold tech companies accountable for their use of training data, advocate for regulation, and support human creators to navigate the rapid advancements in AI technology.
Outlines
🤖 AI's Evolution and Public Perception
Matt Ferrell introduces the topic of generative AI, spurred by the OpenAI ChatGPT 4.0 announcement, and dispels the notion that AI is a new phenomenon. He emphasizes that while AI technologies are indeed powerful and rapidly developing, their underlying concepts have been around for decades. Ferrell discusses the public's mixed feelings towards AI and aims to provide a balanced perspective, highlighting the importance of understanding AI's true nature and its implications on society. He clarifies that AI, in this context, refers specifically to generative AI, such as large language models and image generators, which function by identifying patterns in data. These tools are not examples of artificial general intelligence (AGI) and are more specialized in their applications.
🧠 Historical Context of AI and Neural Networks
The video script delves into the history of AI, tracing its roots back to the 1940s with the work of McCulloch and Pitts, who laid the mathematical foundation for classifying input data. It continues through the development of the perceptron by Frank Rosenblatt in 1957, which was an early attempt to simulate neural networks. The script discusses how AI systems, including chatbots like ELIZA and more recent ones like 'Eugene Goostman,' have always been anthropomorphized, leading to misconceptions about their capabilities. It also touches on the use of AI in various industries and the author's personal experiences with AI, from gaming to professional applications, emphasizing the reliance of AI on human data and the ethical considerations surrounding its development and use.
🔍 Ethical and Social Implications of AI
The script addresses the ethical concerns surrounding AI, particularly the use of human-generated content to train AI models without consent or compensation. It highlights the lack of a social contract for AI training and the potential for AI to replace human creators. The sudden increase in AI's public presence and its portrayal as a consumer product raise concerns about transparency, biases, and the exploitation of human labor behind the scenes. The 'black box' nature of neural networks and their lack of full understanding present further challenges, as these systems are increasingly used to make important decisions. The script also points out the environmental impact of training and using AI models, noting the massive resources required and questioning the sustainability of current practices.
🛠 Practical Applications and Future Outlook of AI
Matt Ferrell shares his personal experiences with AI tools, such as using AI to dub videos in different languages and AI-assisted content summarization in Notion. He also discusses the use of AI in Photoshop for creating video thumbnails and acknowledges the potential of AI in accelerating discoveries in fields like medicine and energy. However, he cautions against over-reliance on AI, stressing that it is still in a developmental stage and requires human oversight. The script concludes by posing questions to the audience about how they will engage with AI and its implications, encouraging a thoughtful and proactive stance towards the technology's rapid advancement and its integration into society.
Keywords
💡Generative AI
💡ChatGPT 4.0
💡Artificial General Intelligence (AGI)
💡Neural Networks
💡Pattern Recognition
💡Data Training
💡Ethical Concerns
💡Historical AI
💡Human-AI Interaction
💡Transparency
Highlights
Generative AI is evolving rapidly, highlighted by OpenAI's latest ChatGPT 4.0 announcement.
OpenAI's new video-generating model, Sora, demonstrates the power and fast development of AI technologies.
AI of the 2020s isn’t new, but its consequences are profound and affect everyone.
Generative AI tools like ChatGPT and image generators are powerful, yet not examples of artificial general intelligence (AGI).
AI has been around for a long time, solving problems across various industries, from space telescopes to sales recommendations.
The history of AI includes early programs like ELIZA, which laid the groundwork for today's chatbots.
The Turing test, though not universally agreed upon, has been used to evaluate the humanness of AI chatbots.
The mathematical groundwork for neural networks dates back to 1943; these networks loosely simulate the human brain, enabling AI to learn with less human intervention.
OpenAI has profited off the work of YouTubers and other content creators without their consent, raising ethical concerns.
There is no existing social contract for generative AI training, unlike how humans learn from each other’s work.
The sudden public availability and mass appeal of AI have changed how people interact with these technologies.
AI tools still require significant human input to produce quality outputs, indicating humans aren't going anywhere.
The resources required to train and use AI models, such as electricity and water, pose sustainability concerns.
AI and machine learning are changing the world rapidly, necessitating accountability and preparation for the future.
The discussion around AI’s impact should continue, focusing on regulation, supporting human creators, and ethical use of data.
Transcripts
Let’s Stop Pretending AI Is New
I’ve been thinking a lot about generative AI lately. It’s kind of hard not to with
the latest OpenAI ChatGPT announcement. Can you write a short poem about the OpenAI announcement
on ChatGPT 4.0? "Sure, here's a short poem about the ChatGPT 4.0 announcement. A spark
in the realm of mind so vast, ChatGPT 4.0 is here at last. With thoughts that weave like
threads of gold, And tales new and wisdom old." The technologies we’re witnessing are powerful,
impressive, and developing fast. Everything you've been seeing on screen, for example,
is footage from OpenAI’s new video-generating model, Sora. But let’s peel back the
algorithmically patterned wallpaper for a moment, and take a hard look at the structure behind it.
The AI of the 2020s isn’t new. But its consequences are. If you’re watching this, they’ve
already affected you. So how should we, the public, respond to tools that rely upon more data
than we could ever fathom? How can they change our relationship to work? And…do we need to panic?
I’m Matt Ferrell … welcome to Undecided.
This video is brought to you by Brilliant, but more on that later.
A lot of people are both excited and scared about the state of AI right now, and rightfully so. One
of my goals with this channel, though, is to provide you with reasons to remain optimistic.
Today, I’m going to try to put the recent explosion of interest in AI into context.
Before we get into it, I want to be clear. When I use the word “AI,”
I’m specifically referring to generative AI. That includes large language models,
or LLMs, like ChatGPT, and image generators like Midjourney.
Basically, these programs are meant to perform specific tasks. And to
describe the way they work as simply as possible, they identify patterns. When
they find patterns in a given input that match the data they've been trained on,
they use that data as a springboard to form a new output. Or at least that’s the idea.
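To make that pattern-then-generate loop concrete, here's a toy sketch in Python. It's a bigram Markov chain, vastly cruder than any real LLM, but it shows the same basic idea: record which outputs tend to follow which inputs in the training data, then sample from those patterns to produce new text. The corpus and names below are invented for illustration.

```python
import random
from collections import defaultdict

# A toy "find patterns, then generate" sketch: a bigram Markov chain.
# Real LLMs learn vastly richer patterns with neural networks, but the
# basic loop is the same: learn continuations from training data, then
# sample from those patterns to produce new output.

def train(text):
    """Map each word to the words that followed it in the training data."""
    patterns = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        patterns[current].append(following)
    return patterns

def generate(patterns, start, length=10):
    """Produce new text by repeatedly sampling a learned continuation."""
    word, output = start, [start]
    for _ in range(length):
        if word not in patterns:
            break  # dead end: no continuation was ever observed
        word = random.choice(patterns[word])
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat"
```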
What’s key is that these tools are not examples of artificial general intelligence (AGI),
or the Marvins and HALs of sci-fi spaceships. They’re far more narrow
than that. Overeager or not, tech companies do recognize that AGI is still a goal.
My main goal with this video is to contribute nuance to larger
conversations about AI as a whole. Which is why I want to start by reminding you:
AI Isn’t New
I know that to some that statement might seem obvious, and to others that might be confusing,
so let me clarify. Actually, I have a couple of friends that can help me with that.
“Most people don't realize this, but AI has been around for a long time,
and it helps solve all kinds of problems across all kinds of industries. Before
starting my channel, I spent eight years as a rocket scientist at MIT,
and part of my job was deploying machine learning algorithms to help space telescopes and long range
radars detect really small and fast moving objects. We ended up building a few neural
networks and training them to understand what to look for and what they can ignore.”
“Let's start with my experiences before the 2020s. I was a software engineer at
Salesforce, and we had this product called Einstein and it brought AI to your data. It
was a lot like Netflix's recommendations. When you watch one show, it'll tell you,
hey, you'll like this show as well. But it was largely pattern based.”
For reference, Einstein launched in 2016. But we can go even further back than the
2000s. Researchers have been picking at what we now know as generative AI
for way longer than you might think. Let me tell you about the time I first met ELIZA.
Our first family computer was a Commodore 64. Yup...64KB of RAM with no disk or hard
drive of any kind. My brother, Sean, and I would spend hours sitting in our little
upstairs playroom nook plugging in lines of programming code from a book of BASIC.
ELIZA is one program that’s stuck with me all these years. It would ask you questions
and then follow up on your answers in the style of a Rogerian psychologist.
This was important to the illusion because Rogerians encourage therapy patients to do
most of the talking. The technological trick behind the scenes was that ELIZA searched for
“keys” in your sentence. In other words, it was looking for patterns. For example:
“What did you do today?”
“I played with a Hot Wheels car.”
“Tell me more about the Hot Wheels car.”
For a little kid in the 1980s, this was mind-blowing, and it felt like you were talking
to something alive inside the computer … until you turned it off and lost the entire program.
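For the curious, that "keys" trick is simple enough to sketch in a few lines of Python. The historical ELIZA used ranked keyword lists and reassembly rules (in MAD-SLIP, not Python); this toy version only captures the core pattern-matching idea, and the rules here are invented for illustration.

```python
import re

# A toy ELIZA-style exchange: scan the input for a "key" pattern and
# echo part of it back, Rogerian-style. These rules are made up; the
# original program used ranked keyword lists and reassembly rules.

RULES = [
    (re.compile(r"i played with (.+)", re.I), "Tell me more about {0}."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (\w+)", re.I), "Tell me about your {0}."),
]

def eliza_reply(sentence):
    """Return a canned follow-up built from the first matching key."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            fragment = match.group(1).rstrip(".!?")  # drop trailing punctuation
            return template.format(fragment)
    return "Please, go on."  # default reply when no key matches

print(eliza_reply("I played with a Hot Wheels car."))
# -> Tell me more about a Hot Wheels car.
```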
Sound familiar? In any case, ELIZA is just one of many in a long line of precursors
to the chatbots we know today. And if you observe collective reactions to
these types of programs across history, you’ll notice that people’s tendency to
anthropomorphize AI helps perpetuate false ideas about its capacities.
We can look at the persona of the chatbot known as “Eugene Goostman” for another example.
You’ve probably heard of Turing tests, which are basically an interpretation of a concept famously
discussed by mathematician Alan Turing. In a formative 1950 paper, he proposed a theoretical
“imitation game” to determine a machine’s ability to exhibit behavior indistinguishable from a
human’s. Since then, various groups have organized competitions with panels of judges to evaluate the
“humanness” of chatbots — though it’s important to know that Turing tests don’t have universally
agreed upon rules, and not everyone finds this form of assessment valuable.
When it comes to Goostman, its creators sought to give the bot a “personality” by establishing
a backstory. He…I mean it… is meant to act like a 13-year-old Ukrainian boy with a pet
guinea pig…so you can probably see how this might have made the bot more convincing during
Turing tests. I mean, when have middle school conversations not been awkward and clunky?
So, is this cast of characters all that removed from what we’re
contending with now? Yes and no. Yes in the sense that, speaking broadly,
these Bots from Before operated within systems that directly involved human hands, whether
through programming languages or mimicking inputs from crowd-sourced conversations.
This is unlike the popular large language models of today, which use machine learning.
And more specifically, it’s the “deep” kind of learning:
AKA neural networks. The whole point of these networks is to simulate the human brain,
therefore allowing AI systems to “learn” with less intervention.
The chatbots that have already set the past few years abuzz are built upon different foundations,
yes. But these foundations themselves are just as old. Within the context of U.S. history,
it was in 1943 that scientists Warren McCulloch and Walter Pitts laid out the
mathematical groundwork for an algorithm to classify input data. You know…the same sort
of task you perform every time a website asks you to complete a CAPTCHA to prove your humanity.
Then, in 1957, psychologist Frank Rosenblatt further advanced what would become the basis
of neural networks through what he called “the perceptron.” He then married math to
metal by building a “Mark I” version of the machine. Its purpose? To recognize images.
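Rosenblatt's perceptron is still simple enough to write out in full. Below is a minimal Python sketch of the textbook perceptron rule, a weighted sum pushed through a threshold, with weights nudged after every mistake; the learning rate, epoch count, and OR task are arbitrary illustrative choices, not Rosenblatt's originals.

```python
# A minimal sketch of a Rosenblatt-style perceptron: a weighted sum of
# inputs pushed through a threshold, with weights nudged after every
# misclassification. Hyperparameters here are illustrative only.

def predict(weights, bias, inputs):
    """Fire (output 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=10, lr=0.1):
    """Learn weights from (inputs, label) pairs with the perceptron rule."""
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn a simple linearly separable pattern (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 1, 1, 1]
```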
So let's just take a quick second to read some news. Here are a few quotes
from the introduction to a piece from the New York Times on machine learning:
“Computer scientists, taking clues from how the brain works, are developing new
kinds of computers that seem to have the uncanny ability to learn by themselves.
…The new computers are called neural networks because they contain units that function roughly
like the intricate network of neurons in the brain. Early experimental systems,
some of them eerily human-like, are inspiring predictions of amazing advances.”
Oh wait, hang on. This paper is dated…1987. Right
around the time I was punching ELIZA code into my Commodore 64.
To give you a more recent peek into how long we’ve tinkered with machine learning,
I can discuss my own career. Once upon a time, I used to work on competitive multiplayer games.
You could win prizes by beating other players, so there was a huge incentive for people to
cheat. To counter that, the development team created bot detection systems. They
would allow us to analyze move history data from previous matches, which would reveal
the subtle differences between how humans and cheat programs play. It was pretty effective.
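The video doesn't describe that detection system's internals, but one simple version of the "subtle differences" idea is timing: cheat programs tend to act with unnaturally uniform rhythm, while humans are noisy. Here's a hypothetical Python sketch along those lines; the threshold and data are made up for illustration and are not the author's actual system.

```python
import statistics

# Hypothetical move-history bot detection, NOT the system described in
# the video: flag players whose move timing is far more regular than a
# human baseline. Real systems would combine many such signals.

def interval_stddev(move_times):
    """Standard deviation of the gaps between consecutive move timestamps."""
    gaps = [b - a for a, b in zip(move_times, move_times[1:])]
    return statistics.stdev(gaps)

def looks_like_bot(move_times, human_baseline_stddev, threshold=0.25):
    """Flag timing that is suspiciously uniform compared to real humans."""
    return interval_stddev(move_times) < threshold * human_baseline_stddev

human = [0.0, 1.4, 2.1, 4.0, 4.9, 6.7]  # irregular, human-like gaps
bot = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]    # suspiciously uniform gaps

print(looks_like_bot(bot, interval_stddev(human)))    # -> True
print(looks_like_bot(human, interval_stddev(human)))  # -> False
```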
But we needed human data to make a comparison. And like the chatterbots of yore,
our modern Bards and Copilots fundamentally rely upon human data to operate. Be it a
quirky conversation partner in the 1980s or an aspiring assistant in the 2010s, AI systems
interpret massive amounts of information and make their best guesses as to what to do with
it. Without all the data that we produce, they can’t do much. And that’s part of the problem.
Wrapping your head around the concepts of AI and LLMs can be overwhelming.
That’s why I spent time going through the new course, “How LLMs Work” at today’s sponsor,
Brilliant. It gets hands-on with real language models and helps you learn how to tune an LLM
to generate different kinds of output. I found it extremely helpful. Brilliant does a wonderful job
breaking complex topics down with hands-on problem solving that lets you play with the concepts. It
builds your critical thinking skills through doing and not by memorizing. If you’re like me,
you’re probably very busy and may not think you have the time to take a course, but Brilliant
is built around bite-sized lessons to break down concepts into very understandable parts … in just
a few minutes every day. They have something for everyone, like “Thinking in Code,” which
develops your mind to think like a programmer and write robust programs. To try everything
Brilliant has to offer for free for a full 30 days, visit https://brilliant.org/Undecided
or click on the link in the description. You’ll also get 20% off an annual premium
subscription. Thanks to Brilliant and to all of you for supporting the channel.
Why Are People So Concerned?
“The question I always have is, where does that data and training come from? It does
come from human art, right? Whether it's writers or artists, painters,
or videographers. So I do worry, are we using our creativity to train AI to basically replace us?”
As a YouTube creator, I think it’s for the best that I start with the
AI-generated elephant in the room. OpenAI has profited off my work. OpenAI has profited off
of every YouTuber’s work. OpenAI has profited off of any work that’s ever been published on
the Internet. And we know this because the company ran out of online text to scrape,
so it went out of its way to develop a transcription program to capture every sound
it could find on the Internet. Every video, every podcast, every audiobook. It's already done,
and none of us have seen a cent for it, or so much as an acknowledgment that we had a part in it.
Companies now want your forum replies and blog posts, too, while they’re at it.
Late last year, Ed Newton-Rex, a musician who uses AI himself,
pointed out that there’s no existing social contract for generative AI training. Meaning,
you can’t justify the mass consumption of virtually all the communications published
on the internet by comparing the practice to how humans learn. As he wrote in a tweet:
“Every creator who wrote a book, or painted a picture, or composed a song,
did so knowing that others would learn from it. That was priced in. This is definitively
not the case with AI. Those creators did not create and publish their work in the
expectation that AI systems would learn from it and then be able to produce competing content
at scale. The social contract has never been in place for the act of AI training.”
Don’t get me wrong: OpenAI is not the only one doing
this. That’s another thing. The act of hijacking people’s voices, art styles,
and identities without their consent is already being legitimized because of how easy
it is with generative AI. Just a few weeks ago, someone trained a model on Marques Brownlee’s
reviews to build a product recommendation tool using his likeness. Did he have anything to do
with it or any idea it was even being created? I'll give you one answer: no.
Another reason for the negative response toward the spike in AI advancement is,
well, the suddenness of it all. I know I just said that this stuff isn’t anything
new. But what I mean is that up until very recently, the average person didn’t interact
with AI…in a way that they were immediately aware of. What’s changed is that companies
are now presenting AI as a consumer product for everyone. It’s leapt from research computers
to social media and smartphone apps. In other words, it’s more accessible than ever before.
“So my experience with machine learning before 2020 was pretty minimal, mostly playing games
against the computer, which was some form of machine learning or rule-based system,
though I didn't really know it at the time. Currently, day to day though,
I use it a lot more. I use it in coding during my PhD,
and also when I'm exploring broad topics, both in the PhD and during YouTube video research.”
Then there are even more big-picture problems that threaten both livelihoods and lives, and a lot
of it comes down to transparency. For years, tech giants have deliberately obscured the human labor
they exploit to reinforce incorrect assumptions that AI has reached major milestones. In essence,
these systems have been behaving more like a Mechanical Turk. By mid-2022, over a thousand
workers in India were remotely reviewing transactions for Amazon’s “Just Walk Out” shopping
system. They make the magic happen, not fully autonomous “deep learning techniques.”
In Amazon’s words, though, they’re a vague group of “associates” keeping things accurate.
Similarly, the ChatGPT we know wouldn’t exist without Kenyan workers. In late 2021, OpenAI
partnered with the data labeling company Sama to outsource the excruciating process of identifying
graphic content — that way, it could train GPT-3 to not reproduce it. After reading up to hundreds
of passages depicting violent topics like suicide and sexual abuse in explicit detail for nine hours
a day, Kenya-based Sama employees would take home less than $2 an hour for their trouble.
Another major issue is that the mechanics of neural networks still aren’t entirely
understood. That lends itself to a host of complicated consequences
that are best summed up by the concept of the “black box.” The black box is the
opaque middle of a hypothetical system. You know your input and you know your
output…but you can’t see the process that got you from point A to point B.
But if you can’t decipher the internal workings of a tool that is being used to make decisions,
how do you ensure that it’s working properly? How do you prevent it from furthering biases
that cause harm? These questions are not just the stuff of dystopian stories. Algorithms determining
the “riskiness” of human beings have already been around for a while. Steven Spielberg’s
movie adaptation of “Minority Report” came out in 2002, but England and Wales had already begun
implementation of the Offender Assessment System (or OASys) in 2001. It’s still in use today.
Again, what’s changed is the public availability and mass appeal of the
technology, not so much the actual systems. The innovations that at least seemed incremental are
now overpowering in their speed, scale, and scope. It’s like we can’t catch our
collective breath. Developers are continuing to concentrate more and more resources into AI,
businesses are rushing to brand themselves as “AI-first,” and every month there’s another
eye-popping spectacle…that might really just be a dumpster fire.
So, remember those Sora clips I showed earlier? Yeah, about that…the Toronto-based
video production company Shy Kids actually used Sora to produce its short film “Air Head.” The
ratio of footage the team generated versus what actually made it into the final minute
and a half cut was about 300:1. And there was a lot of “we’ll fix that in post.” I’d
suggest you read the fine print before you use generators, but I doubt it would be legible.
What Does This All Mean?
Well, you’ve heard from my peers already. What do I think about all this? Overall,
I’d say I'm torn. AI is amazing, but the origins of the current suite of products are unethical
for a number of reasons. And most critically, the damage has already been done. We’ve already
explored that angle, so let’s move into the positives, the more optimistic side of this stuff.
The number of use cases for these tools is dizzying, so I’ll stick to talking about the
applications that I can vouch for, ones that are workable right now — not what’s plausible,
promised, or someday possible. All that could be its own video.
If you haven’t noticed, I have actually been using an AI tool to dub my videos in other languages now
for quite a while. It’s been a little hit or miss, but offering multiple audio tracks helps me reach
more viewers across the world. We’ve received some pretty good feedback (and…some bad). It’s kind of
trippy to hear my own voice speaking a language that I can’t. You can check it out on this video.
Then there’s what’s available in Notion, which is the platform I use to plan my videos. Since
it introduced AI, I’ve been able to make the video production process more convenient. I’ve set up a
system that automatically pulls online articles relevant to topics I cover, then summarizes them
into a short paragraph. This makes it super easy to comb through countless headlines.
I also use a lot of Photoshop’s AI tools when making my video thumbnails. I don’t generate
images from scratch, but oftentimes I like an existing photo that’s been shot vertically.
That won’t work for the aspect ratio I need, so I scale the canvas up, click Content-Aware Fill,
and bam … instant landscape orientation. And I’m not alone — other YouTubers do this, too.
This is barely scraping the surface, considering all the tedious tasks we could automate,
and all the discoveries that can be sped up using AI. New drugs, improved battery chemistries,
nuclear fusion calculations: some of this stuff is already happening right now.
But we can’t get ahead of ourselves here. AI is still reliant on humans. You don’t
push a toddler on a tricycle down a steep hill, so we shouldn't expect proficiency
from technology that is quite literally still in training. Over and over again,
businesses have placed too much confidence into an AI tool and regretted the decision immediately.
“...one thing that I learned from actually using these tools every day is just how
important people are to the generative AI process. It still takes a lot of work to get
the outputs that you want in the quality that you need, so humans aren't going anywhere.”
On top of all the other problems I’ve mentioned, the amount of resources required
to train and use these models can’t be ignored. It’s not just electricity,
but water for cooling and space for data centers. According to a 2023 study, Google,
Microsoft, and Meta withdrew about 2.2 billion cubic meters’ worth of water in 2022…which
is twice the total annual water use of all of Denmark. “Not sustainable” is an understatement.
What Do We Do?
I’ve given you a lot to digest so far, and even then my points are far from exhaustive. But I’d
like to come back to the question I posed earlier: should we be freaking out? I don’t think we need
to panic. I think we need to hold these tech companies accountable for how they handled the
training data…and be prepared for where this tech is heading. AI and machine learning may
not be new, but these new AI tools are already changing our world … and fast. So, how will you
move forward? Will you change your relationship to social media? Will you advocate for regulation?
Will you prioritize doing the inconvenient thing — supporting human creators like me?
What do you think? Jump into the comments and let me know and be sure to listen to
my follow up podcast Still TBD where we’ll keep this conversation going. And as always,
I include a link in the description to my full script with citations
and sources if you want to learn more. I’ll see you in the next one.