AI Just Changed Everything … Again

Undecided with Matt Ferrell
28 May 2024 · 18:28

Summary

TL;DR: The video by Matt Ferrell of 'Undecided' delves into the complexities of generative AI, emphasizing its long history and recent advancements like OpenAI's GPT-4o. Ferrell highlights the ethical concerns surrounding AI's data consumption and its potential to replace human creativity. He also addresses the lack of transparency in AI development and the environmental impact of training models. Despite the challenges, Ferrell showcases positive AI applications, like language dubbing and content summarization, and encourages viewers to consider their stance on AI's role in society and its future implications.

Takeaways

  • 🧠 AI is not a new concept; generative AI, including large language models and image generators, has been evolving for a long time.
  • 💬 GPT-4o is a significant milestone in AI, but it's part of a larger trend of AI development rather than a sudden breakthrough.
  • 👀 The public is both excited and concerned about AI advancements, reflecting a mix of optimism and apprehension about the future.
  • 🔍 Generative AI works by identifying patterns in data and using them to create new outputs, but it is not the same as artificial general intelligence (AGI).
  • 🕵️‍♂️ AI's history dates back to the 1940s, with foundational work on algorithms and neural networks, showing a long-standing interest in AI development.
  • 🎨 Concerns about AI center around the use of human creativity and data without consent or compensation, raising ethical questions about its development and use.
  • 🤖 The reliance on human labor for training AI systems, such as content moderation and data labeling, highlights the ongoing need for human involvement in AI processes.
  • 🔮 The 'black box' nature of AI decision-making processes raises transparency and accountability issues, especially when AI is used to make critical decisions.
  • 🌐 The sudden accessibility and mass appeal of AI technologies have amplified public awareness and scrutiny of AI's impact on society and the workforce.
  • 🛠️ AI tools have practical applications that can streamline tasks and enhance productivity, such as language translation and content creation.
  • ♻️ The environmental impact of training and running AI models, including the significant consumption of electricity and water, is a growing concern.

Q & A

  • What is the primary focus of the video?

    -The primary focus of the video is to discuss the current state and implications of generative AI, particularly in light of recent advancements such as OpenAI's GPT-4o.

  • What does the speaker mean by 'AI isn’t new'?

    -The speaker means that AI technologies have been developing for many years, even though the recent advancements in generative AI have brought it to the forefront of public attention.

  • What are some examples of generative AI mentioned in the video?

    -Examples of generative AI mentioned include large language models like ChatGPT and image generators like Midjourney.

  • What is the difference between AI and AGI as discussed in the video?

    -AI, as discussed, refers to systems designed for specific tasks using pattern recognition, whereas AGI (Artificial General Intelligence) refers to hypothetical systems capable of performing any intellectual task that a human can do, which remains a goal rather than a reality.

  • What concerns are raised about the use of data in training AI models?

    -Concerns include the ethical implications of using vast amounts of human-created data without consent, the potential for AI to replace human creativity, and the lack of a social contract for AI training.

  • What historical examples of AI are provided to illustrate its long development history?

    -Historical examples include ELIZA, an early chatbot from the 1960s, and the perceptron, an early neural network model developed in the 1950s.

  • How does the video address the suddenness of AI advancements?

    -The video explains that while AI has been around for decades, its recent rapid development and widespread accessibility as consumer products have made its impact more noticeable and concerning to the public.

  • What ethical issues related to AI development and deployment are highlighted?

    -Ethical issues include the exploitation of human labor for training data, the lack of transparency in AI operations, and the environmental impact of AI training and usage.

  • What are some positive applications of AI mentioned in the video?

    -Positive applications include AI tools for dubbing videos in multiple languages, automating tedious tasks, aiding in medical discoveries, and enhancing productivity in various fields.

  • What does the speaker suggest about the future handling of AI technologies?

    -The speaker suggests that we should hold tech companies accountable for their use of training data, advocate for regulation, and support human creators to navigate the rapid advancements in AI technology.

Outlines

00:00

🤖 AI's Evolution and Public Perception

Matt Ferrell introduces the topic of generative AI, spurred by OpenAI's GPT-4o announcement, and dispels the notion that AI is a new phenomenon. He emphasizes that while AI technologies are indeed powerful and rapidly developing, their underlying concepts have been around for decades. Ferrell discusses the public's mixed feelings toward AI and aims to provide a balanced perspective, highlighting the importance of understanding AI's true nature and its implications for society. He clarifies that AI, in this context, refers specifically to generative AI, such as large language models and image generators, which function by identifying patterns in data. These tools are not examples of artificial general intelligence (AGI) and are more specialized in their applications.

05:04

🧠 Historical Context of AI and Neural Networks

The video script delves into the history of AI, tracing its roots back to the 1940s with the work of McCulloch and Pitts, who laid the mathematical foundation for classifying input data. It continues through the development of the perceptron by Frank Rosenblatt in 1957, which was an early attempt to simulate neural networks. The script discusses how AI systems, including chatbots like ELIZA and more recent ones like 'Eugene Goostman,' have always been anthropomorphized, leading to misconceptions about their capabilities. It also touches on the use of AI in various industries and the author's personal experiences with AI, from gaming to professional applications, emphasizing the reliance of AI on human data and the ethical considerations surrounding its development and use.

10:06

🔍 Ethical and Social Implications of AI

The script addresses the ethical concerns surrounding AI, particularly the use of human-generated content to train AI models without consent or compensation. It highlights the lack of a social contract for AI training and the potential for AI to replace human creators. The sudden increase in AI's public presence and its portrayal as a consumer product raise concerns about transparency, biases, and the exploitation of human labor behind the scenes. The 'black box' nature of neural networks and their lack of full understanding present further challenges, as these systems are increasingly used to make important decisions. The script also points out the environmental impact of training and using AI models, noting the massive resources required and questioning the sustainability of current practices.

15:09

🛠 Practical Applications and Future Outlook of AI

Matt Ferrell shares his personal experiences with AI tools, such as using AI to dub videos in different languages and AI-assisted content summarization in Notion. He also discusses the use of AI in Photoshop for creating video thumbnails and acknowledges the potential of AI in accelerating discoveries in fields like medicine and energy. However, he cautions against over-reliance on AI, stressing that it is still in a developmental stage and requires human oversight. The script concludes by posing questions to the audience about how they will engage with AI and its implications, encouraging a thoughtful and proactive stance towards the technology's rapid advancement and its integration into society.

Keywords

💡Generative AI

Generative AI refers to artificial intelligence systems that can create new content, such as text, images, or videos, based on the data they were trained on. In the video, generative AI is exemplified by tools like ChatGPT and image generators like Midjourney. These tools perform specific tasks by identifying patterns in input data to produce new outputs.

💡GPT-4o

GPT-4o is a version of OpenAI's language model known for its advanced natural language processing capabilities. The video discusses its announcement and its significance in the context of generative AI. GPT-4o represents a leap in the ability of AI to understand and generate human-like text, highlighting both the potential and the ethical concerns of such technology.

💡Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) is the concept of a machine with the ability to understand, learn, and apply intelligence across a wide range of tasks, similar to a human being. The video clarifies that the current AI technologies, including ChatGPT and other generative AIs, are not AGI but are more narrowly focused systems.

💡Neural Networks

Neural networks are computing systems inspired by the human brain's neural networks. They consist of layers of nodes that process data and are fundamental to deep learning, a subset of machine learning. The video explains how neural networks form the backbone of modern AI systems, allowing them to learn and make predictions based on data.
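
As a toy illustration of those "layers of nodes" (not from the video; the weights here are random rather than learned, so the output is meaningless), a minimal two-layer forward pass might look like this in Python:

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied at each hidden node
    return np.maximum(0, x)

# Toy network: 3 inputs -> 4 hidden nodes -> 1 output.
# Training would adjust W1, b1, W2, b2; here they stay random.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    # Each layer weights its inputs, adds a bias, and passes the
    # result through an activation before handing it to the next layer.
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

print(forward(np.array([0.5, -1.0, 2.0])))  # one (untrained) prediction
```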

💡Pattern Recognition

Pattern recognition is the ability of AI systems to identify patterns in input data and use these patterns to generate new outputs. This concept is central to how generative AI, including ChatGPT, operates. The video uses examples like Netflix recommendations and historical AI programs like ELIZA to illustrate pattern recognition.

💡Data Training

Data training involves feeding an AI system large amounts of data to learn from. The video highlights the ethical concerns surrounding data training, such as the use of human-created content without consent. Training data is crucial for the functioning of AI systems, as it forms the basis for their ability to generate accurate and relevant outputs.

💡Ethical Concerns

Ethical concerns in AI refer to issues related to data privacy, consent, and the potential for AI to replace human jobs. The video addresses these concerns by discussing how AI companies use vast amounts of data without proper acknowledgment or compensation to creators and the broader implications for society.

💡Historical AI

Historical AI refers to the early developments and milestones in the field of artificial intelligence. The video traces AI's history back to early programs like ELIZA and mentions key figures and developments, such as the creation of neural networks and the perceptron. This context helps viewers understand that while AI's principles are not new, their applications are rapidly evolving.

💡Human-AI Interaction

Human-AI interaction describes the ways in which people engage with AI systems. The video explores this interaction through examples like AI-generated content, language translation, and tools that assist with tasks like video production. Understanding this interaction is crucial for assessing AI's impact on daily life and work.

💡Transparency

Transparency in AI refers to the clarity and openness about how AI systems operate and make decisions. The video discusses the 'black box' nature of neural networks, where the internal workings of AI systems are not fully understood, leading to concerns about accountability and fairness. Transparency is essential for building trust and ensuring ethical AI use.

Highlights

Generative AI is evolving rapidly, highlighted by OpenAI's latest GPT-4o announcement.

OpenAI's new video-generating model, Sora, demonstrates the power and fast development of AI technologies.

AI of the 2020s isn’t new, but its consequences are profound and affect everyone.

Generative AI tools like ChatGPT and image generators are powerful, yet not examples of artificial general intelligence (AGI).

AI has been around for a long time, solving problems across various industries, from space telescopes to sales recommendations.

The history of AI includes early programs like ELIZA, which laid the groundwork for today's chatbots.

The Turing test, though not universally agreed upon, has been used to evaluate the humanness of AI chatbots.

Neural networks, developed as early as 1943, simulate the human brain to enable AI learning with less human intervention.

OpenAI has profited off the work of YouTubers and other content creators without their consent, raising ethical concerns.

There is no existing social contract for generative AI training, unlike how humans learn from each other’s work.

The sudden public availability and mass appeal of AI have changed how people interact with these technologies.

AI tools still require significant human input to produce quality outputs, indicating humans aren't going anywhere.

The resources required to train and use AI models, such as electricity and water, pose sustainability concerns.

AI and machine learning are changing the world rapidly, necessitating accountability and preparation for the future.

The discussion around AI’s impact should continue, focusing on regulation, supporting human creators, and ethical use of data.

Transcripts

00:00
Let’s Stop Pretending AI Is New

00:02
I’ve been thinking a lot about generative AI lately. It’s kind of hard not to with the latest OpenAI ChatGPT announcement. Can you write a short poem about the OpenAI announcement on GPT-4o? "Sure, here's a short poem about the GPT-4o announcement. A spark in the realm of mind so vast, GPT-4o is here at last. With thoughts that weave like threads of gold, And tales new and wisdom old." The technologies we’re witnessing are powerful, impressive, and developing fast. Everything you’ve been seeing on screen, for example, is footage from OpenAI’s new video-generating model, Sora. But let’s peel back the algorithmically patterned wallpaper for a moment and take a hard look at the structure behind it.

00:46
The AI of the 2020s isn’t new. But its consequences are. If you’re watching this, they’ve already affected you. So how should we, the public, respond to tools that rely upon more data than we could ever fathom? How can they change our relationship to work? And… do we need to panic?

01:02
I’m Matt Ferrell … welcome to Undecided.

01:11
This video is brought to you by Brilliant, but more on that later.

01:14
A lot of people are both excited and scared about the state of AI right now, and rightfully so. One of my goals with this channel, though, is to provide you with reasons to remain optimistic. Today, I’m going to try to put the recent explosion of interest in AI into context.

01:28
Before we get into it, I want to be clear. When I use the word “AI,” I’m specifically referring to generative AI. That includes large language models, or LLMs, like ChatGPT, and image generators like Midjourney.

01:39
Basically, these programs are meant to perform specific tasks. And to describe the way they work as simply as possible, they identify patterns. When they find patterns in a given input that match the data they’ve been trained on, they use that data as a springboard to form a new output. Or at least that’s the idea.
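
To make "identify patterns, then use them as a springboard" concrete, here is a deliberately tiny sketch (mine, not the video's): a bigram model that records which word follows which in some training text, then samples from those recorded patterns to produce new output. Real LLMs learn enormously richer patterns, but the basic loop is the same.

```python
import random
from collections import defaultdict

# "Training": record which word was observed to follow which.
training_text = "the cat sat on the mat and the cat slept on the rug"
words = training_text.split()

follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)  # pattern: nxt appeared after current

# "Generation": springboard off the learned patterns, one word at a time.
def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no pattern recorded for this word
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```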

01:56
What’s key is that these tools are not examples of artificial general intelligence (AGI), or the Marvins and HALs of sci-fi spaceships. They’re far more narrow than that. Overeager or not, tech companies do recognize that AGI is still a goal.

02:11
My main goal with this video is to contribute nuance to larger conversations about AI as a whole. Which is why I want to start by reminding you:

02:19
AI Isn’t New

02:21
I know that to some that statement might seem obvious, and to others it might be confusing, so let me clarify. Actually, I have a couple of friends who can help me with that.

02:29
“Most people don't realize this, but AI has been around for a long time, and it helps solve all kinds of problems across all kinds of industries. Before starting my channel, I spent eight years as a rocket scientist at MIT, and part of my job was deploying machine learning algorithms to help space telescopes and long-range radars detect really small and fast-moving objects. We ended up building a few neural networks and training them to understand what to look for and what they can ignore.”

02:55
“Let's start with my experiences before the 2020s. I was a software engineer at Salesforce, and we had this product called Einstein, and it brought AI to your data. It was a lot like Netflix's recommendations. When you watch one show, it'll tell you, hey, you'll like this show as well. But it was largely pattern based.”

03:10
For reference, Einstein launched in 2016. But we can go even further back than the 2000s. Researchers have been picking at what we now know as generative AI for way longer than you might think. Let me tell you about the time I first met ELIZA.

03:23
Our first family computer was a Commodore 64. Yup... 64KB of RAM with no disk or hard drive of any kind. My brother, Sean, and I would spend hours sitting in our little upstairs playroom nook plugging in lines of programming code from a book of BASIC.

03:38
ELIZA is one program that’s stuck with me all these years. It would ask you questions and then follow up on your answers in the style of a Rogerian psychologist. This was important to the illusion because Rogerians encourage therapy patients to do most of the talking. The technological trick behind the scenes was that ELIZA searched for “keys” in your sentence. In other words, it was looking for patterns. For example:

04:00
“What did you do today?”
“I played with a Hot Wheels car.”
“Tell me more about the Hot Wheels car.”
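
The core of ELIZA's trick fits in a few lines. Here is a hedged sketch with a handful of invented rules (the original had a much larger script and also swapped pronouns like "my" for "your"):

```python
import re

# Each rule pairs a "key" pattern with a reflective response template.
RULES = [
    (re.compile(r"i played with (.+)", re.I), "Tell me more about {}."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Reflect the matched fragment back, Rogerian style.
            return template.format(match.group(1).rstrip("."))
    return "Please, go on."  # default keeps the "patient" talking

print(respond("I played with a Hot Wheels car."))
# -> Tell me more about a Hot Wheels car.
```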

04:05
For a little kid in the 1980s, this was mind-blowing, and it felt like you were talking to something alive inside the computer … until you turned it off and lost the entire program. Sound familiar? In any case, ELIZA is just one of many in a long line of precursors to the chatbots we know today. And if you observe collective reactions to these types of programs across history, you’ll notice that people’s tendency to anthropomorphize AI helps perpetuate false ideas about its capacities.

04:31
We can look at the persona of the chatbot known as “Eugene Goostman” for another example. You’ve probably heard of Turing tests, which are basically an interpretation of a concept famously discussed by mathematician Alan Turing. In a formative 1950 paper, he proposed a theoretical “imitation game” to determine a machine’s ability to exhibit behavior indistinguishable from a human’s. Since then, various groups have organized competitions with panels of judges to evaluate the “humanness” of chatbots — though it’s important to know that Turing tests don’t have universally agreed-upon rules, and not everyone finds this form of assessment valuable.

05:04
When it comes to Goostman, its creators sought to give the bot a “personality” by establishing a backstory. He… I mean it… is meant to act like a 13-year-old Ukrainian boy with a pet guinea pig… so you can probably see how this might have made the bot more convincing during Turing tests. I mean, when have middle school conversations not been awkward and clunky?

05:30
So, is this cast of characters all that removed from what we’re contending with now? Yes and no. Yes in the sense that, speaking broadly, these Bots from Before operated within systems that directly involved human hands, whether through programming languages or mimicking inputs from crowd-sourced conversations.

05:47
This is unlike the popular large language models of today, which use machine learning. And more specifically, it’s the “deep” kind of learning: AKA neural networks. The whole point of these networks is to simulate the human brain, therefore allowing AI systems to “learn” with less intervention.

06:03
The chatbots that have already set the past few years abuzz are built upon different foundations, yes. But these foundations themselves are just as old. Within the context of U.S. history, it was in 1943 that scientists Warren McCulloch and Walter Pitts laid out the mathematical groundwork for an algorithm to classify input data. You know… the same sort of tasks you complete every time a website asks you to complete a CAPTCHA to prove your humanity.

06:27
Then, in 1957, psychologist Frank Rosenblatt further advanced what would become the basis of neural networks through what he called “the perceptron.” He then married math to metal by building a “Mark I” version of the machine. Its purpose? To recognize images.
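
Here is a small sketch in the spirit of Rosenblatt's perceptron: weighted inputs, a hard threshold, and an error-driven update rule. The task (separating points by whether their coordinates sum past 1) is an invented stand-in for the image recognition the Mark I was built for.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))    # input data: 200 random points
y = (X.sum(axis=1) > 1.0).astype(int)   # labels for a linearly separable task

w, b = np.zeros(2), 0.0
for _ in range(20):                     # a few passes over the data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)      # weighted sum through a threshold
        error = target - pred
        w += 0.1 * error * xi           # nudge weights only when wrong
        b += 0.1 * error

accuracy = np.mean([int(w @ xi + b > 0) == t for xi, t in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")  # approaches 1.0
```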

06:43
So let’s take a quick second to read some news. Here are a few quotes from the introduction to a piece from the New York Times on machine learning:

06:50
“Computer scientists, taking clues from how the brain works, are developing new kinds of computers that seem to have the uncanny ability to learn by themselves. …The new computers are called neural networks because they contain units that function roughly like the intricate network of neurons in the brain. Early experimental systems, some of them eerily human-like, are inspiring predictions of amazing advances.”

07:10
Oh wait, hang on. This piece is dated… 1987. Right around the time I was punching ELIZA code into my Commodore 64.

07:19
To give you a more recent peek into how long we’ve tinkered with machine learning, I can discuss my own career. Once upon a time, I used to work on competitive multiplayer games. You could win prizes by beating other players, so there was a huge incentive for people to cheat. To counter that, the development team created bot detection systems. They would allow us to analyze move history data from previous matches, which would reveal the subtle differences between how humans and cheat programs play. It was pretty effective.
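
The video doesn't spell out how that detection worked, so this is only a hedged sketch of the general idea: cheat programs tend to be unnaturally consistent where humans are erratic. The timing feature and threshold below are invented for illustration.

```python
import statistics

def looks_like_bot(move_times_ms, min_stdev=40.0):
    # Flag players whose spread in "think time" between moves is
    # implausibly small; a real system would combine many such features.
    return statistics.stdev(move_times_ms) < min_stdev

human_moves = [850, 1220, 640, 2100, 930, 1500]
bot_moves = [400, 410, 395, 405, 398, 402]

print(looks_like_bot(human_moves))  # False
print(looks_like_bot(bot_moves))    # True
```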

07:46
But we needed human data to make a comparison. And like the chatterbots of yore, our modern Bards and Copilots fundamentally rely upon human data to operate. Be it a quirky conversation partner in the 1980s or an aspiring assistant in the 2010s, AI systems interpret massive amounts of information and make their best guesses as to what to do with it. Without all the data that we produce, they can’t do much. And that’s part of the problem.

08:11
Wrapping your head around the concepts of AI and LLMs can be overwhelming. That’s why I spent time going through the new course, “How LLMs Work,” at today’s sponsor, Brilliant. It gets hands-on with real language models and helps you learn how to tune an LLM to generate different kinds of output. I found it extremely helpful. Brilliant does a wonderful job breaking complex topics down with hands-on problem solving that lets you play with the concepts. It builds your critical thinking skills through doing and not by memorizing. If you’re like me, you’re probably very busy and may not think you have the time to take a course, but Brilliant is built around bite-sized lessons that break concepts down into very understandable parts … in just a few minutes every day. They have something for everyone, like “Thinking in Code,” which develops your mind to think like a programmer and write robust programs. To try everything Brilliant has to offer for free for a full 30 days, visit https://brilliant.org/Undecided or click on the link in the description. You’ll also get 20% off an annual premium subscription. Thanks to Brilliant and to all of you for supporting the channel.

09:11
Why Are People So Concerned?

09:13
“The question I always have is, where does that data and training come from? It does come from human art, right? Whether it's writers or artists, painters, or videographers. So I do worry, are we using our creativity to train AI to basically replace us?”

09:27
As a YouTube creator, I think it’s for the best that I start with the AI-generated elephant in the room. OpenAI has profited off my work. OpenAI has profited off of every YouTuber’s work. OpenAI has profited off of any work that’s ever been published on the Internet. And we know this because the company ran out of online text to scrape, so it went out of its way to develop a transcription program that could capture every sound on the Internet it could. Every video, every podcast, every audiobook. It’s already done, and none of us have seen a cent for it, so much as an acknowledgement that we had a part in it. Companies now want your forum replies and blog posts, too, while they’re at it.

10:06
Late last year, Ed Newton-Rex, a musician who uses AI himself, pointed out that there’s no existing social contract for generative AI training. Meaning, you can’t justify the mass consumption of virtually all the communications published on the internet by comparing the practice to how humans learn. As he wrote in a tweet:

10:23
“Every creator who wrote a book, or painted a picture, or composed a song, did so knowing that others would learn from it. That was priced in. This is definitively not the case with AI. Those creators did not create and publish their work in the expectation that AI systems would learn from it and then be able to produce competing content at scale. The social contract has never been in place for the act of AI training.”

10:47
Don’t get me wrong: OpenAI is not the only one doing this. That’s another thing. The act of hijacking people’s voices, art styles, and identities without their consent is already being legitimized because of how easy it is with generative AI. Just a few weeks ago, someone trained a model on Marques Brownlee’s reviews to build a product recommendation tool using his likeness. Did he have anything to do with it or any idea it was even being created? I'll give you one answer: no.

11:14
Another reason for the negative response toward the spike in AI advancement is, well, the suddenness of it all. I know I just said that this stuff isn’t anything new. But what I mean is that up until very recently, the average person didn’t interact with AI… in a way that they were immediately aware of. What’s changed is that companies are now presenting AI as a consumer product for everyone. It’s leapt from research computers to social media and smartphone apps. In other words, it’s more accessible than ever before.

11:42
“So my experience with machine learning before 2020 was pretty minimal, mostly playing games against the computer, which was some kind of form of machine learning or rule-based system, though I didn't really know it at the time. Currently, day to day though, I use it a lot more. I use it in coding during my PhD, and also when I'm exploring broad topics, both in the PhD and during YouTube video research.”

12:04
Then there are even more big-picture problems that threaten both livelihoods and lives, and a lot of it comes down to transparency. For years, tech giants have deliberately obscured the human labor they exploit to reinforce incorrect assumptions that AI has reached major milestones. In essence, these systems have been behaving more like a Mechanical Turk. By mid-2022, over a thousand workers working remotely from India were reviewing transactions for Amazon’s “Just Walk Out” shopping system. They make the magic happen, not fully autonomous “deep learning techniques.” In Amazon’s words, though, they’re a vague group of “associates” keeping things accurate.

12:41
Similarly, the ChatGPT we know wouldn’t exist without Kenyan workers. In late 2021, OpenAI partnered with the data labeling company Sama to outsource the excruciating process of identifying graphic content — that way, it could train GPT-3 to not reproduce it. After reading up to hundreds of passages depicting violent topics like suicide and sexual abuse in explicit detail for nine hours a day, Kenya-based Sama employees would take home less than $2 an hour for their trouble.

13:10
Another major issue is that the mechanics of neural networks still aren’t entirely understood. That lends itself to a host of complicated consequences that are best summed up by the concept of the “black box.” The black box is the opaque middle of a hypothetical system. You know your input and you know your output… but you can’t see the process that got you from point A to point B.

13:29
But if you can’t decipher the internal workings of a tool that is being used to make decisions, how do you ensure that it’s working properly? How do you prevent it from furthering biases that cause harm? These questions are not just the stuff of dystopian stories. Algorithms determining the “riskiness” of human beings have already been around for a while. Steven Spielberg’s movie adaptation of “Minority Report” came out in 2002, but England and Wales had already begun implementation of the Offender Assessment System (or OASys) in 2001. It’s still in use today.

13:58
Again, what’s changed is the public availability and mass appeal of the technology, not so much the actual systems. The innovations that at least seemed incremental are now overpowering in their speed, scale, and scope. It’s like we can’t catch our collective breath. Developers are continuing to concentrate more and more resources into AI, businesses are rushing to brand themselves as “AI-first,” and every month there’s another eye-popping spectacle… that might really just be a dumpster fire.

14:24
So, remember those Sora clips I showed earlier? Yeah, about that… the Toronto-based video production company Shy Kids actually used Sora to produce its short film “Air Head.” The ratio of footage the team generated versus what actually made it into the final minute-and-a-half cut was about 300:1. And there was a lot of “we’ll fix that in post.” I’d suggest you read the fine print before you use generators, but I doubt it would be legible.

14:50
What Does This All Mean?

14:51
Well, you’ve heard from my peers already. What do I think about all this? Overall, I’d say I'm torn. AI is amazing, but the origins of the current suite of products are unethical for a number of reasons. And most critically, the damage has already been done. We’ve already explored that angle, so let’s move into the positives, the more optimistic side of this stuff.

15:09
The number of use cases for these tools is dizzying, so I’ll stick to talking about the applications that I can vouch for, ones that are workable right now — not what’s plausible, promised, or someday possible. All that could be its own video.

15:22
If you haven’t noticed, I have actually been using an AI tool to dub my videos in other languages now for quite a while. It’s been a little hit or miss, but offering multiple audio tracks helps me reach more viewers across the world. We’ve received some pretty good feedback (and… some bad). It’s kind of trippy to hear my own voice speaking a language that I can’t. You can check it out on this video.

15:48
Then there’s what’s available in Notion, which is the platform I use to plan my videos. Since it introduced AI, I’ve been able to make the video production process more convenient. I’ve set up a system that automatically pulls online articles relevant to topics I cover, then summarizes them into a short paragraph. This makes it super easy to comb through countless headlines.
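
The video doesn't show how that Notion setup is wired together, but the pull-then-summarize pattern itself is simple. A hypothetical sketch follows (feedparser, the feed URL, and the summarize stub are all stand-ins, not what Matt actually uses):

```python
import feedparser  # third-party: pip install feedparser

def summarize(text: str) -> str:
    # Placeholder: in a real pipeline this would call an LLM or a
    # service such as Notion AI to produce a short paragraph.
    return text[:200] + "..."

FEED_URL = "https://example.com/energy-news.rss"  # assumed topic feed

# Pull recent articles and reduce each to a skimmable blurb.
for entry in feedparser.parse(FEED_URL).entries[:5]:
    print(entry.title)
    print(summarize(entry.get("summary", "")))
    print()
```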

16:07
I also use a lot of Photoshop’s AI tools when making my videos’ thumbnails. I don’t generate images from scratch, but oftentimes I like an existing photo that’s been shot vertically. That won’t work for the aspect ratio I need, so I scale the canvas up, click Content-Aware Fill, and bam … instant landscape orientation. And I’m not alone — other YouTubers do this, too.

16:29
This is barely scraping the surface, considering all the tedious tasks we could automate, and all the discoveries that can be sped up using AI. New drugs, improved battery chemistries, nuclear fusion calculations, some of this stuff is already happening right now.

16:42
But we can’t get ahead of ourselves here. AI is still reliant on humans. You don’t push a toddler on a tricycle down a steep hill, so we shouldn't expect proficiency from technology that is quite literally still in training. Over and over again, businesses have placed too much confidence in an AI tool and regretted the decision immediately.

17:00
“...one thing that I learned from actually using these tools every day is just how important people are to the generative AI process. It still takes a lot of work to get the outputs that you want in the quality that you need, so humans aren't going anywhere.”

17:15
On top of all the other problems I’ve mentioned, the amount of resources required to train and use these models can’t be ignored. It’s not just electricity, but water for cooling and space for data centers. According to a 2023 study, Google, Microsoft, and Meta withdrew about 2.2 billion cubic meters’ worth of water in 2022… which is twice the total annual water use of all of Denmark. “Not sustainable” is an understatement.

17:38
What Do We Do?

17:40
I’ve given you a lot to digest so far, and even then my points are far from exhaustive. But I’d like to come back to the question I posed earlier: should we be freaking out? I don’t think we need to panic. I think we need to hold these tech companies accountable for how they handled the training data… and be prepared for where this tech is heading. AI and machine learning may not be new, but these new AI tools are already changing our world … and fast. So, how will you move forward? Will you change your relationship to social media? Will you advocate for regulation? Will you prioritize doing the inconvenient thing — supporting human creators like me?

18:14
What do you think? Jump into the comments and let me know, and be sure to listen to my follow-up podcast Still TBD, where we’ll keep this conversation going. And as always, I include a link in the description to my full script with citations and sources if you want to learn more. I’ll see you in the next one.


Related Tags
AI Ethics, Generative AI, ChatGPT, Machine Learning, Neural Networks, Creator Rights, Tech Impact, AI History, Innovation Concerns, Future of AI