AI is terrifying, but not for the reasons you think!

Aperture
17 Jul 2024, 13:34

Summary

TL;DR: This script explores the rapid evolution of AI and its potential to replace human jobs, causing anxiety among workers. It discusses the environmental impact of AI's energy consumption, copyright issues in AI training, and the ethical concerns surrounding AI biases and content moderation. The narrative emphasizes the need for a sustainable, ethical path for AI development.

Takeaways

  • πŸ€– The rapid evolution of artificial intelligence (AI) has sparked fears of robots taking over and replacing human jobs, with some estimates suggesting AI could replace 300 million full-time jobs.
  • 🌐 Generative AI is increasingly accessible, leading to anxiety among workers about their roles being automated, as highlighted by a PricewaterhouseCoopers survey from May 2022 showing nearly one-third of respondents were concerned about job displacement by technology within three years.
  • 🎨 Creatives worldwide are worried about the impact of AI on the authenticity of art, questioning whether human creativity will still be necessary in a world dominated by AI-generated content.
  • πŸš€ Concerns about AI reaching a point of uncontrollable self-improvement are prevalent, with the fear that advanced AI could surpass human intelligence and become autonomous, leading to unpredictable consequences.
  • 🌳 The environmental impact of AI is significant, with even a relatively small model like BLOOM consuming as much energy in training as 30 homes use in a year and emitting 20 tons of carbon dioxide, highlighting the need for sustainable AI development.
  • 🌞 Solar Slice is a startup aiming to mitigate the environmental impact of AI by allowing individuals to fund the construction of large-scale solar farms, supporting the transition to clean energy and reducing carbon emissions.
  • πŸ“š Copyright issues surrounding AI training data have become a major concern, with companies like Open AI facing legal challenges for using copyrighted content from sources like YouTube without permission.
  • πŸ“ˆ The growth of large language models has been exponential, increasing 2,000 times in size over the last five years, which raises questions about the environmental and ethical implications of such massive data consumption.
  • πŸ–ΌοΈ Artists, musicians, and writers are increasingly concerned about their work being used in AI training without consent or compensation, leading to legal disputes and calls for clearer regulations on AI data use.
  • πŸ” AI biases can perpetuate societal prejudices, with discriminatory data leading to unfair outcomes in applications like law enforcement, healthcare, and job recruitment, emphasizing the need for unbiased AI training data.

Q & A

  • What is the primary concern regarding the evolution of artificial intelligence?

    -The primary concern is that artificial intelligence might evolve at an incomprehensibly fast pace, potentially leading to AI systems that are uncontrollable and could replace human jobs.

  • What did Goldman Sachs report about AI's impact on employment?

    -Goldman Sachs published a report stating that AI could replace the equivalent of 300 million full-time jobs.

  • What did a PricewaterhouseCoopers survey from May 2022 reveal about workers' concerns?

    -The survey found that almost one-third of respondents were worried about their employment roles being replaced by technology in the next three years.

  • How does the proliferation of AI affect creatives worldwide?

    -Creatives worldwide are fearful that art as they know it faces an existential threat with the proliferation of AI, questioning whether human authenticity is necessary in the world anymore.

  • What is the environmental impact of AI models like Bloom?

    -BLOOM, an AI model built with an emphasis on ethics, transparency, and consent, was found to have used as much energy in training as 30 homes use in one year and to have emitted 20 tons of carbon dioxide, highlighting the environmental impact of AI.

  • What is the significance of the AI model BLOOM's energy consumption in comparison to larger models?

    -BLOOM's energy consumption is relatively small compared to larger models like GPT, which are assumed to use at least 20 times more energy, indicating a significant environmental cost.

  • What is Solar Slice and how does it aim to address the climate crisis?

    -Solar Slice is a startup that allows individuals to fund the construction of large-scale solar farms, accelerating the transition to clean energy. Users can sponsor a slice of a solar farm, track its energy production and carbon savings, and earn eco points for further environmental contributions.

  • What are the copyright issues surrounding AI model training?

    -AI models require massive amounts of data to work effectively, and companies have scraped content from platforms like YouTube without permission to obtain it. This raises ethical and legal questions about the use of copyrighted material in AI training.

  • How did the New York Times respond to AI companies using their content for training?

    -The New York Times sued OpenAI and its financial backer Microsoft for copyright infringement, demanding the destruction of chatbot models and training data that include copyrighted material.

  • What are the implications of AI biases in various sectors?

    -AI biases can lead to tangible damage in sectors like law enforcement, healthcare, and job applicant tracking. These biases perpetuate societal prejudices and can result in lower accuracy results for certain demographics.

  • What is the role of Sama in AI content moderation and what were the reported issues?

    -Sama provided laborers to OpenAI for content moderation, sifting through extremist, sexual, and violent content. Employees, mostly based in Kenya, reportedly suffered from post-traumatic stress disorder and were paid less than $2 an hour, highlighting the human cost of AI development.

  • What are some potential solutions to the ethical and legal issues in AI development?

    -Tools like Spawning and Code Carbon are being developed to help artists control their work's use in AI training and measure AI's environmental impact, respectively. These tools could lead to better understanding and regulation of AI's social, legal, and environmental impacts.
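Code Carbon's core idea, estimating emissions as measured energy use multiplied by the carbon intensity of the local electricity grid, can be sketched in a few lines. This is a minimal illustration, not the codecarbon package's actual API, and the grid-intensity figures below are assumed values invented for the example:

```python
# Minimal sketch of the idea behind Code Carbon: emissions are estimated as
# energy consumed times the carbon intensity of the local electricity grid.
# This is NOT the codecarbon package's real API; intensities are illustrative.

ASSUMED_GRID_INTENSITY = {  # kg CO2 per kWh (assumed values; vary by region)
    "coal_heavy_grid": 0.82,
    "mixed_grid": 0.45,
    "renewable_grid": 0.05,
}

def estimate_emissions_kg(energy_kwh: float, grid: str) -> float:
    """Estimate kilograms of CO2 for a workload run on a given grid mix."""
    return energy_kwh * ASSUMED_GRID_INTENSITY[grid]

# The same hypothetical 10,000 kWh training run, on three different grids:
for grid in ASSUMED_GRID_INTENSITY:
    print(f"{grid}: {estimate_emissions_kg(10_000, grid):.0f} kg CO2")
```

In this toy, moving the same workload from a coal-heavy grid to a renewable one changes the footprint by more than an order of magnitude, which is why tools that surface this number can inform model and datacenter choices.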

Outlines

00:00

πŸ€– AI's Impact on Jobs and the Environment

The first paragraph discusses the widespread fear of AI replacing human jobs, citing a Goldman Sachs report that suggests AI could displace the equivalent of 300 million full-time jobs. It also touches on the anxiety among creatives about AI's potential to disrupt the art world and questions the necessity of human authenticity. The environmental impact of AI is highlighted with the creation of the AI model BLOOM, which emphasizes ethics and sustainability but still consumed significant energy in training. The paragraph also mentions the energy consumption of large AI models like GPT and the lack of transparency from tech companies about their energy use. Finally, it introduces Solar Slice, a startup that allows individuals to fund solar farms and contribute to clean energy.
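The energy comparison above can be turned into rough numbers. The sketch below uses only the figures quoted in the video (30 homes, 20 tons of CO2, "at least 20 times more energy") plus one assumed constant, an average household consumption of about 10,000 kWh per year, which is an illustrative value and not from the source:

```python
# Back-of-the-envelope scaling of AI training footprints, using the figures
# quoted in the video plus one assumed constant (average household usage).

BLOOM_CO2_TONS = 20          # reported emissions from training BLOOM
BLOOM_HOMES_EQUIV = 30       # "as much energy as 30 homes in one year"
HOME_KWH_PER_YEAR = 10_000   # assumed average household use (illustrative)
GPT_SCALE_FACTOR = 20        # "at least 20 times more energy" (lower bound)

bloom_energy_kwh = BLOOM_HOMES_EQUIV * HOME_KWH_PER_YEAR
gpt_energy_kwh_min = bloom_energy_kwh * GPT_SCALE_FACTOR
# If emissions scale roughly with energy, a lower bound on CO2 follows:
gpt_co2_tons_min = BLOOM_CO2_TONS * GPT_SCALE_FACTOR

print(f"BLOOM training energy (est.): {bloom_energy_kwh:,} kWh")        # 300,000 kWh
print(f"Larger-model energy (lower bound): {gpt_energy_kwh_min:,} kWh") # 6,000,000 kWh
print(f"Larger-model CO2 (lower bound): {gpt_co2_tons_min} tons")       # 400 tons
```

Even under these conservative assumptions, a single large training run lands in the hundreds of tons of CO2, which is the scale of concern the video raises.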

05:02

πŸ“š AI and Copyright Law: A Complex Relationship

This paragraph delves into the legal and ethical issues surrounding AI training, particularly in relation to copyright law. It describes how OpenAI's Whisper model transcribed YouTube videos to produce training text for chatbots, potentially violating copyright and YouTube's terms of service. The paragraph also covers the New York Times' lawsuit against OpenAI and Microsoft for using its content in AI training, marking a significant legal challenge. The discussion extends to the broader implications for content creators, such as visual artists and musicians, whose work may be used in AI training without permission or compensation. The ethical and legal debates around AI training are highlighted, with no clear consensus on the right course of action.

10:02

🌐 AI Biases and the Real-World Consequences

The third paragraph addresses the issue of AI biases, which can perpetuate societal prejudices and lead to tangible harm, particularly in law enforcement and healthcare. It discusses how AI models trained on discriminatory data can result in misidentification and unfair treatment, such as in facial recognition systems. The paragraph also touches on the impact of AI on content moderation, where workers are exposed to traumatic content, leading to mental health issues. The ethical implications of AI training and use are explored, including the need for transparency and fair compensation for content creators. The paragraph concludes by suggesting that while AI is advancing rapidly, addressing its social, legal, and environmental impacts is crucial for creating a responsible AI ecosystem.
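The mechanism behind cases like the Amazon hiring algorithm mentioned in the video can be shown with a toy scorer. Everything below is invented for illustration: the word weights stand in for patterns a model might "learn" from skewed historical hiring data, where verbs more common on men's résumés end up favored.

```python
# Toy résumé scorer illustrating bias learned from skewed historical data.
# The weights are hypothetical: they stand in for a model trained mostly on
# past (male-dominated) hires, which ends up rewarding certain verbs.

learned_weights = {
    "executed": 4,      # overrepresented in the historical training data
    "captured": 3,
    "collaborated": 1,  # equally strong work, underweighted by the model
    "organized": 1,
}

def score_resume(text: str) -> int:
    """Sum the learned weight of every known word in the resume text."""
    return sum(learned_weights.get(word.strip(".,"), 0) for word in text.lower().split())

resume_a = "Executed product launch and captured new market share"
resume_b = "Organized product launch and collaborated across teams"

print(score_resume(resume_a))  # 7
print(score_resume(resume_b))  # 2
```

Two résumés describing the same accomplishment score very differently purely because of vocabulary; that is the shape of the bias the video describes, and why auditing training data matters.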

Keywords

πŸ’‘Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. In the video, AI is portrayed as a rapidly evolving technology that has the potential to replace human jobs, which is a major concern for many. The script discusses the fear of AI taking over and the ethical implications of its development.

πŸ’‘Generative AI

Generative AI is a type of AI that can create new content, such as text, images, or music, based on existing data. The script mentions generative AI as a source of anxiety for workers and creatives, as it is becoming more accessible and could potentially replace human roles in various industries.

πŸ’‘Ethics

Ethics in the context of AI refers to the moral principles that guide the development and use of AI technologies. The script highlights the importance of ethics in AI, particularly in terms of transparency, consent, and environmental impact. It also touches on the ethical concerns related to AI's impact on employment and the potential for bias in AI models.

πŸ’‘Environmental Impact

The environmental impact of AI refers to the effects that AI technologies have on the environment, particularly in terms of energy consumption and carbon emissions. The script discusses the high energy usage and carbon footprint associated with training large AI models, such as Bloom and GPT, and the need for more sustainable AI practices.

πŸ’‘Copyright Law

Copyright law is a legal framework that protects the rights of creators over their intellectual property. In the video, copyright law is discussed in relation to AI, as companies like OpenAI have been accused of using copyrighted content to train their AI models without permission, raising legal and ethical questions.

πŸ’‘Data Scraping

Data scraping is the process of extracting data from websites, often without permission. The script mentions OpenAI's use of data scraping to train its AI models, particularly by transcribing YouTube videos into text documents, which has led to legal disputes and questions about the legality and ethics of such practices.

πŸ’‘Bias

Bias in AI refers to the unfair or prejudiced treatment of certain groups or individuals by AI systems, often as a result of the data used to train the models. The script highlights the potential for AI to perpetuate societal biases, such as racism and sexism, and the negative consequences this can have in various applications, including law enforcement and healthcare.

πŸ’‘Content Moderation

Content moderation involves the review and management of content on digital platforms to ensure it complies with community standards or legal requirements. The script discusses the challenges of content moderation in the context of AI, particularly the emotional toll it can take on human workers who are tasked with reviewing disturbing content to train AI models.

πŸ’‘Sustainability

Sustainability in the context of AI refers to the development and use of AI technologies in a way that minimizes negative environmental and social impacts. The script mentions the need for AI to be more sustainable, both in terms of its environmental footprint and its social implications, such as the impact on employment and the potential for bias.

πŸ’‘Transparency

Transparency in AI refers to the openness and clarity with which AI developers and companies disclose information about their technologies and practices. The script emphasizes the importance of transparency in AI, particularly in relation to the data used to train AI models and the potential for bias or unethical practices.

πŸ’‘Legal Implications

Legal implications in the context of AI refer to the potential legal consequences and challenges that may arise from the development and use of AI technologies. The script discusses various legal issues related to AI, including copyright infringement, the misuse of personal data, and the potential for AI to perpetuate discrimination.

Highlights

The fear of robots taking over due to the rapid evolution of artificial intelligence is widespread.

AI could replace the equivalent of 300 million full-time jobs, according to a Goldman Sachs estimate, causing anxiety among workers.

Generative AI is accessible and poses an existential threat to creatives worldwide.

Concerns about human authenticity in a world increasingly dominated by AI are growing.

The potential for AI to become uncontrollable through self-improvement is a significant fear.

AI's environmental impact, such as energy consumption and carbon emissions, is a growing concern.

BLOOM, an AI model focused on ethics and sustainability, still consumed significant energy in training.

Large language models like GPT and Gemini have grown exponentially, increasing environmental impacts.

The energy required for AI systems primarily comes from non-renewable sources, exacerbating the climate crisis.

Solar Slice is a startup aiming to accelerate clean energy transition through large-scale solar farms.

Copyright issues surrounding AI training data, such as using YouTube videos, are extensively discussed.

OpenAI's use of YouTube videos for training models raises legal and ethical questions.

The New York Times sued OpenAI for copyright infringement over the use of its content in AI training.

News Corp has a licensing deal with OpenAI, contrasting with the New York Times' legal action.

AI training on human-created content raises questions about artists' rights and compensation.

AI models can perpetuate societal biases, leading to issues in law enforcement and healthcare.

Content moderation for AI training involves dealing with disturbing content, impacting workers' mental health.

Tools like Spawning and Code Carbon are emerging to help manage AI's social, legal, and environmental impacts.

The need for guardrails and new regulations on artificial intelligence is becoming increasingly apparent.

Transcript

00:00

The robots are going to take over. That's the fear, isn't it? With the evolution of artificial intelligence moving at an almost incomprehensibly fast pace, it's easy to understand why we get preoccupied with this idea. Everywhere we turn, there are headlines about AI stealing human jobs. Goldman Sachs even published a report last year saying that AI could replace the equivalent of 300 million full-time jobs. Generative AI is more accessible than ever, and workers are anxious: a PricewaterhouseCoopers survey from May 2022 found that almost one-third of respondents were worried about their employment roles being replaced by technology in the next three years. Creatives worldwide are fearful that art as we know it faces an existential threat with the proliferation of AI, and for the first time we're seriously asking whether human authenticity is a necessary part of the world anymore. Of course, the worst fear is that artificial intelligence will reach a point of self-improvement so advanced that it will become uncontrollable. If AI can teach itself and achieve intelligence superior to us mere mortals, what will become of our future? These doomsday scenarios are an important part of the conversation. The truth is, nobody knows what will happen in 10 or 20 years, let alone 10 or 20 minutes. We can try to predict the path that AI will take, but two short years ago we were all playing around with the first public release of ChatGPT, completely enthralled by its mere existence, and now it's just a regular part of many people's lives. Besides, we don't need to preoccupy ourselves with being controlled by robots. There's plenty happening right now that should raise some red flags.

01:34

Generally speaking, we think advanced technology is synonymous with sustainability, but that's often not the case. There are always trade-offs. The hope is that the technology is beneficial enough to society and the environment that the trade-offs are worth it. It might feel like AI exists out there in the cloud, pinging our computers and phones when we need it, and that's not wrong. However, as we all know, the cloud isn't just floating up in the sky. AI's cloud is built of metal and silicon, it's powered by energy, and every AI query that comes through is a cost to the planet. A team of 1,000 researchers joined together to try and address this growing concern. They created an AI model called BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) that emphasizes ethics, transparency, and consent. They discovered that training this environmentally friendly model used as much energy as 30 homes use in one year and emitted 20 tons of carbon dioxide. In comparison to a behemoth like ChatGPT, BLOOM is small potatoes, so AI researchers assume that bigger models like GPT use at least 20 times more energy. The exact number remains a mystery, though, because tech companies aren't required to disclose information on energy consumption. Not to mention that the current trend in AI follows the rule of "bigger is better": large language models like ChatGPT and Google's Gemini grew 2,000 times in size over the last five years. With that growth come inevitable and often undiscussed environmental impacts. One of these is the amount of energy that computers need to process the large volume of information required to run these AI systems. Most of this energy comes from non-renewable sources, which is only worsening our climate crisis.

03:10

If you want to do something about the climate crisis, then you should check out the sponsor of today's episode, Solar Slice. Solar Slice is a startup that lets you fund the construction of large-scale solar farms, accelerating the transition to clean energy. All you need to do is sponsor a slice of their large-scale solar farm, a "solar slice," which adds 50 W of solar to the grid and reduces harmful emissions. To measure just how much impact you're making, their app lets you track real-time data on your slices' energy production and carbon savings. As your slices generate clean energy, you earn eco points, which you can use to buy more slices, plant trees, or fund other meaningful climate-friendly projects. To make even more impact, you can share your progress with others, create group impact goals with friends, or send solar slices to your eco-conscious friends as gifts. To learn more, visit solarslice.com. There you'll find a link to their Kickstarter campaign, which will help fund the construction of their first solar farm and the development of their app.

04:06

Back to our story. On the other hand, the growing copyright issues surrounding how these AI models are trained have been discussed extensively. Simply stated, copyright law protects intellectual property and content from being used or sold without permission from the copyright holder. Until recently, the implications were relatively easy to define and prosecute when necessary. With AI, it's a different story. Recently, OpenAI was called out for using YouTube videos to train its models. These large language models need massive amounts of data to work effectively. Yes, it's important that they can answer simple questions, like what temperature to cook chicken at, but perhaps more importantly, they need to be able to generate coherent, human-like sentences. How do they learn to talk like a human? From other humans, of course. But is it ethical, or even legal, for a company like OpenAI to scrape online sources like YouTube that might not approve of such scraping? OpenAI reportedly used its audio transcription model, Whisper, in an attempt to get over the hump of hazy AI copyright law. The model transcribed audio from YouTube videos into plain text documents, creating the data sources needed to train its AI chatbots. Whisper transcribed over a million hours of YouTube videos uploaded by millions of users, some of whom derive part or all of their income from creating content on the platform. OpenAI knew this was legally questionable but believed it could claim fair use of online content. OpenAI president Greg Brockman was hands-on in collecting videos used in the training, and the company maintains that it uses publicly available data to train its AI models. The scraping violated YouTube's rules, which ban the use of content for applications independent of the site. Interestingly, Google, which owns YouTube, knew about OpenAI's actions but didn't report them, because it is allegedly doing some content scraping of its own for the Gemini AI model.

05:51

YouTube isn't the only company pushing back against AI training. In 2023, the New York Times accused OpenAI of stealing intellectual property and sued both it and Microsoft, OpenAI's financial backer, for copyright infringement. With this move, the Times became the first major American media organization to sue an artificial intelligence company over its content being used to train chatbots. The suit called for companies like OpenAI to destroy chatbot models and training data that include copyrighted New York Times material. It's the first test of the legal issues around generative AI technology and could have major implications for training large language models. While the Times understandably has issues with its catalog of millions of articles being used without permission, News Corp, which owns the New York Post and the Wall Street Journal, has taken the polar opposite approach. As of May 2024, the company has a multi-year licensing deal in place, reportedly worth $250 million, that grants OpenAI access to much of its content. OpenAI has also inked deals with Vox Media and The Atlantic, perhaps a sign of the harsh reality that AI companies will be facing moving forward: all of the major players creating these massive language models are starting to hit the limit of the data available to train them. Google now has a deal with Reddit to license content from the website to train Gemini. Meta even considered buying book publisher Simon & Schuster, and its 100 years of material, outright so it could get access to all of its content.

07:14

While these companies fight it out over who gets access to what, there are real implications for the people who create this content. Visual artists, musicians, and writers are watching their work show up in AI texts and images. This happens when an AI is trained on certain texts and images and learns to identify and replicate patterns in the data. When the program is meant to generate music, art, or text, the data it trains on has to be created by humans. Notable authors like Jonathan Franzen, George R.R. Martin, and John Grisham filed a lawsuit after learning that AI had absorbed tens of thousands of books. Actress and comedian Sarah Silverman sued Meta and OpenAI for using her memoir as a training text. Just as with chatbots, it's difficult to identify what art has been used to train these models, because companies like OpenAI, maker of the popular DALL-E image generator, don't disclose their data sets. Others, like Stability AI, which owns the generative AI model Stable Diffusion, are clear about which data they're using, but they are still taking artists' work without permission or payment. The legal recourse for artists is difficult. Experts are of two minds: some feel that this type of AI training infringes on copyright law, while others feel it's still above board and that the lawsuits will fail. The truth is that nobody knows, because we're in uncharted territory that once seemed like merely the subject of science-fiction movies. In the 2013 Spike Jonze movie Her, Joaquin Phoenix's character falls in love with an AI virtual assistant voiced by Scarlett Johansson. Eleven years later, life is imitating art. After OpenAI announced a new personal assistant called Sky, it was easy to notice that its voice sounded a lot like Johansson's. Sam Altman, the company's CEO, has noted that Her is one of his favorite movies. It turns out he'd been courting Johansson to voice the new AI assistant, but she declined the offer. After hearing Sky's voice, Johansson threatened a lawsuit against OpenAI. For actors, politicians, athletes, or anyone else in the public eye, it's easy to see how AI could completely upend someone's life if their image, voice, or likeness is replicated. That upending is already happening right now.

09:14

While it is clear that AI companies are knowingly pushing the limits of copyright law, they're also inadvertently causing even more harm. Whether or not the companies intend it, AI models are inevitably trained on the discriminatory data littered across the internet, and they encode patterns and beliefs representing racism, sexism, and other prejudices. If these biases are deployed in real-world settings, specifically in law enforcement, they can lead to tangible damage to innocent people. For example, if AI models are shown more images of white faces than darker skin tones, they will have more trouble identifying the features of dark-skinned people. If police use AI to try to catch criminals, the odds are higher that their systems will mistakenly identify dark-skinned individuals more often. Or, if AI is used to generate a forensic sketch, the model will take all of the biases it's been fed and spit them back out in the sketch: prompts like "gang member" or "terrorist" will inevitably whip up a stereotype that could be totally off the mark. The implications in law enforcement are easy to see, but they reach much further. In healthcare, computer-aided diagnosis systems have returned lower-accuracy results for Black patients than for white patients. In job-applicant tracking, Amazon stopped using a hiring algorithm after it saw that the algorithm favored words like "executed" and "captured," which were more often found on men's résumés. AI biases perpetuate human societal biases and can come from historical or current social inequality. If you ask an AI to generate an image of a scientist, it will most likely show a middle-aged white man with glasses. What does that say to young girls of color who want to be scientists? These missteps foster mistrust among marginalized groups and could lead to slower adoption of some AI technology.

10:55

The ethical issues aren't solely embedded in the training and use of these models; they're happening right here in the physical world as well. Content moderation is a famously difficult job. People sift through some of the worst images, descriptions, and sounds on social media platforms, online forums, and retail sites, ensuring that disturbing scenes don't wind up on our screens or in our ears. AI might be getting smart, but it doesn't self-moderate. Time magazine did a deep dive into a company called Sama in January 2023. Sama provided OpenAI with laborers tasked with combing through some of the worst extremist, sexual, and violent content on the internet to ensure it didn't end up in the AI training regimen. Former Sama employees said they suffered post-traumatic stress disorder on the job and after sifting through these horrific things. To make matters worse, the employees, mostly located in Kenya, were paid less than $2 an hour. The company claimed it was lifting people out of poverty, but the Time article described claims of the work being torture. Individuals regularly had to work past their assigned hours, and despite some wellness services offered to them, many experienced irreversible emotional effects. The narrative that AI can eliminate workers is true, but the workers it takes to make AI possible are still suffering.

12:04

So what's the solution? Is there one? For artists, a company called Spawning created a tool that can help them better understand and control which art ends up in training databases. The company Stability AI does train its models on existing text and images available online, but it's looking at ways to ensure that creatives are paid royalties for the use of their work. Another tool, called Code Carbon, has emerged; it runs in parallel to AI training and measures emissions, which might help users make informed choices about which AI model to use based on how sustainable its operations are. These are important and worthy starts, but no single tool can solve such complex issues. By creating tools that can measure AI's social, legal, and environmental impacts, we can start to understand how bad these problems are. This can hopefully lead to creating guardrails and advising legislators on how to develop new regulations on artificial intelligence. It might feel like AI is moving quickly, and that's because it is. The existential worry about robots taking over is a fun and scary one to entertain. However, we have real issues centered around our potential digital overlords happening as we speak. It's not too late to find ways to create an artificially intelligent world that we all want to live in, but users and companies alike have to decide that path together.


Related Tags
Artificial Intelligence, Ethical Concerns, Environmental Impact, Job Replacement, AI Bias, Copyright Issues, Content Scraping, Sustainable AI, Climate Crisis, Creative Rights