AI Deepfakes Could Destroy The World Economy

Valuetainment
19 Jun 2023 · 12:40

Summary

TL;DR: The video addresses the growing concern over AI, particularly deep fakes, which can manipulate public opinion and spread misinformation. It highlights the views of tech leaders like Elon Musk and Brad Smith, who advocate for a pause in AI development. The script traces the evolution of deep fakes, from Thomas Edison's early film manipulation to modern applications, and discusses their increasing prevalence and potential dangers, including political misinformation, fraud, and social unrest. It also offers strategies for protecting against deep fakes, such as media literacy, verification tools, and advocating for legislation, emphasizing awareness and education as the main defenses against AI-generated misinformation.

Takeaways

  • 🚫 Concerns over AI: The script discusses the concerns of influential figures like Elon Musk and Steve Wozniak about the rapid advancement of AI, suggesting a call for a halt to assess potential risks.
  • 🎭 Deep Fakes: Microsoft's President, Brad Smith, and Google's CEO, Sundar Pichai, both express their biggest concern about AI as the creation of deep fakes, which can lead to misinformation and have serious implications.
  • 📊 Deep Fake Growth: The script highlights the exponential growth in the number of deep fakes created, with a projection of over 100 million deep fakes in 2023.
  • 📚 Historical Context: The concept of deep fakes is not new, dating back to 1898 when Thomas Edison manipulated footage to influence public opinion.
  • 🕊️ Positive AI Use: Despite the negative aspects, the script also mentions the potential positive uses of AI, referencing a video about using AI for good.
  • 👥 Impact on Society: Deep fakes can have a wide range of negative impacts, including political misinformation, fraud, corporate espionage, blackmail, defamation, legal implications, national security risks, social unrest, erosion of trust, and invasion of privacy.
  • 🛡️ Protection Measures: The script provides several ways to protect against deep fakes, such as media literacy, verifying information, using detection tools, maintaining privacy, setting up alerts, reporting deep fakes, legislative support, being careful in online communications, and raising awareness.
  • 🔍 Detection Tips: It offers specific tips from DHS for identifying deep fakes, such as blurring, unnatural blinking, and changes in background or lighting.
  • 🤔 Call for Thought: The script encourages viewers to think critically about the content they consume and to verify before reacting, suggesting a return to a more cautious and considered approach to information.
  • 👍 Positive Engagement: The script ends with a call to action for viewers to engage positively with the content, through likes and subscriptions, indicating the importance of community and shared learning.

Q & A

  • What is the main concern expressed by Elon Musk and Steve Wozniak regarding AI in their open letter?

    -Elon Musk and Steve Wozniak, along with a thousand others, expressed concern about the rapid development of AI and suggested a six-month halt to allow for a reassessment of its potential risks and implications.

  • What does Brad Smith, the president of Microsoft, consider as the biggest threat from AI?

    -Brad Smith identifies deep fakes as his biggest concern regarding AI, fearing that they will contribute to the spread of misinformation.

  • How did Sundar Pichai, the CEO of Google, address the issue of deep fakes in his interview with CBS?

    -Sundar Pichai acknowledged in a 60 Minutes interview with CBS that AI will make it easier to create fake news and fake images, including videos, known as deep fakes.

  • What is a deep fake and why is it a significant issue?

    -A deep fake is a manipulated video or audio file that makes it appear as if someone said or did something they did not. It is a significant issue because it can be used to spread misinformation, deceive, and manipulate public opinion, as well as create non-consensual explicit content.

  • What was the first instance of a deep fake, and who was responsible for it?

    -The earliest cited instance of a deep fake was in 1898, when Thomas Edison mixed real footage with staged footage to manipulate the truth and fuel patriotism in America.

  • How has the number of deep fakes grown over the years according to the data provided?

    -The number of deep fakes has grown exponentially, from 14.6 million in 2019 to an expected 106.4 million in 2023.

  • What percentage of deep fake videos found by Deeptrace were non-consensual pornographic content featuring women?

    -According to a study by Deeptrace, 96% of all the deep fake videos they found were non-consensual pornographic content featuring women.

  • What are the different types of deep fakes mentioned in the script?

    -The script mentions several types of deep fakes including puppet deep fakes, mouth swap deep fakes, face swap deep fakes, synthetic media deep fakes, and audio deep fakes.

  • What are some of the potential threats and issues associated with deep fakes as outlined in the script?

    -The potential threats and issues include political misinformation, fraud, corporate espionage, blackmail and defamation, spread of fake news, legal implications, national security risks, social unrest, erosion of trust, and invasion of privacy.

  • What steps can individuals take to protect themselves against deep fakes?

    -Individuals can protect themselves by practicing media literacy, verifying information, using detection tools, maintaining privacy, setting up alerts for their name, reporting deep fakes, supporting legislative measures, being cautious in online communications, using secure communication platforms, and raising awareness and educating others about deep fakes.

  • What advice does the Department of Homeland Security (DHS) offer for identifying deep fakes?

    -DHS advises to look for signs such as blurring in certain areas, unnatural movements, changes in background or lighting, and inconsistencies in tone or speech. They also suggest considering the context of the message and whether it can answer related questions.
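One of those signs, localized blurring, hints at how an automated check might work: variance of the Laplacian is a common sharpness proxy, and a region that scores much lower than the rest of a frame is worth a closer look. Below is a minimal sketch, assuming NumPy; the synthetic patches are hypothetical stand-ins for real face crops, not part of any DHS tooling:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a standard sharpness proxy.
    Blurry regions produce a low score, sharp ones a high score."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # direct 3x3 cross-correlation
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

# Synthetic stand-ins for face crops: one sharp, one box-blurred.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sharp.copy()
for _ in range(4):              # crude repeated box blur of the interior
    blurred[1:-1, 1:-1] = (blurred[:-2, 1:-1] + blurred[2:, 1:-1] +
                           blurred[1:-1, :-2] + blurred[1:-1, 2:]) / 4
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

Comparing the score of a face crop against the surrounding frame gives a rough "is this patch suspiciously blurry?" signal; real detectors combine many such cues.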

Outlines

00:00

🤖 Concerns Over AI and Deepfakes

The script addresses widespread concerns about AI, particularly deepfakes: manipulated videos that can spread misinformation. It mentions an open letter by influential figures like Elon Musk and Steve Wozniak advocating for a pause in AI development. Microsoft president Brad Smith and Google CEO Sundar Pichai express their concerns about deepfakes contributing to fake news and misinformation. The script provides examples of deepfake incidents involving celebrities and political figures, illustrating the potential for market manipulation and public confusion. It also introduces BetterHelp, an online therapy service, as a sponsor and describes its benefits and services.

05:02

📈 The Evolution and Impact of Deepfakes

This paragraph delves into the history of deepfakes, tracing back to 1898 when Thomas Edison manipulated footage to influence public opinion. It discusses the evolution of deepfake technology from video rewrite programs in 1997 to generative adversarial networks (GANs) in 2014. The paragraph outlines various types of deepfakes, including puppet, mouth swap, face swap, synthetic media, and audio deepfakes. It presents data showing the exponential growth in the number of deepfakes created, highlighting the increasing prevalence and potential dangers of this technology. The paragraph also cites a study indicating that 96% of deepfake videos are non-consensual and pornographic, emphasizing the serious implications for privacy and consent.

10:03

πŸ›‘οΈ Protecting Against the Threat of Deepfakes

The final paragraph outlines the potential threats posed by deepfakes, such as political misinformation, fraud, corporate espionage, blackmail, defamation, fake news, legal implications, national security risks, social unrest, erosion of trust, and invasion of privacy. It offers a series of recommendations for individuals to protect themselves against deepfakes, including media literacy, verifying information, using detection tools, maintaining privacy, setting up alerts, reporting deepfakes, supporting legislative measures, being cautious in online communications, using secure communication platforms, and raising awareness through education. The paragraph concludes with advice from DHS on identifying deepfakes, such as inconsistencies in facial features, unnatural blinking, and changes in background or lighting.

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is a central theme, with concerns raised by influential figures like Elon Musk and Steve Wozniak about its rapid development and potential misuse, particularly in the creation of deep fakes.

💡Deep Fakes

Deep fakes are AI-generated media in which a person's likeness is swapped with another's in a video or audio file, creating a convincing but false representation. The video discusses deep fakes extensively, highlighting their potential to spread misinformation and manipulate public opinion, as well as their use in non-consensual pornography.

💡Misinformation

Misinformation refers to false or misleading information that is spread, often unintentionally. The video emphasizes the role of AI, particularly deep fakes, in exacerbating the spread of misinformation, which can have serious societal and political consequences.

💡Microsoft

Microsoft is mentioned in the script as the company whose president, Brad Smith, expressed his concerns about AI, specifically deep fakes, and their potential to create more misinformation. This underscores the significance of deep fakes from the perspective of a leading tech company.

💡Google

Google's CEO, Sundar Pichai, is quoted in the video as acknowledging the role of AI in facilitating the creation of fake news and images, including deep fakes. This highlights the awareness among tech industry leaders about the ethical and societal implications of AI advancements.

💡BetterHelp

BetterHelp is an online therapy service mentioned as a sponsor in the video. It represents a positive application of technology, providing access to mental health resources, contrasting with the video's main focus on the darker aspects of AI and deep fakes.

💡Synthetic Media

Synthetic media refers to content that is artificially generated or manipulated, often using AI. In the context of the video, synthetic media is a category of deep fakes that can create highly realistic but false scenarios, such as a fake press conference in the Rose Garden.

💡Generative Adversarial Networks (GANs)

GANs are a type of AI algorithm used to create deep fakes. The video explains that GANs consist of two parts: a generator that creates images and a discriminator that evaluates the realism of these images. GANs represent a significant technological advancement in the creation of deep fakes.
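The adversarial setup can be illustrated at toy scale. The sketch below is a hedged illustration, assuming only NumPy: the "generator" is a two-parameter linear map on 1-D samples rather than a network producing images, and the "discriminator" is a logistic scorer; the point is only to show the alternating generator/discriminator updates the video describes, not a realistic GAN.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). The generator maps noise z to a
# candidate sample; the discriminator outputs "probability this is real".
a, b = rng.normal(size=2)   # generator params:      g(z) = a*z + b
w, c = rng.normal(size=2)   # discriminator params:  d(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(2000):
    z = rng.normal()
    real = 4.0 + rng.normal()
    fake = a * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)

    # Generator step: ascend log d(fake), i.e. try to fool the discriminator
    df = sigmoid(w * fake + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

# Sample from the trained generator; its output distribution should have
# drifted toward the real data (no convergence guarantee at this scale).
fakes = a * rng.normal(size=1000) + b
print(fakes.mean())
```

The same two-player structure, scaled up to convolutional networks over images, is what produces photorealistic deep fakes.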

💡Political Misinformation

Political misinformation is a specific type of misinformation that targets political figures or issues. The video discusses how deep fakes can be used to create videos of politicians saying or doing things they never did, potentially influencing elections and causing unrest.

💡National Security

National security is mentioned in the video as a concern related to deep fakes, as they can impersonate leaders and potentially trigger military responses based on false information. This highlights the high-stakes implications of AI-generated misinformation.

💡Media Literacy

Media literacy is the ability to critically evaluate and create media in a digital age. The video suggests that media literacy is a crucial defense against deep fakes, encouraging viewers to be skeptical and verify the authenticity of online content before sharing it.

Highlights

Concerns about AI are growing, with influential figures like Elon Musk and Steve Wozniak calling for a halt in AI development for six months.

Microsoft's President, Brad Smith, identifies deep fakes as a major concern regarding AI, fearing their potential to create misinformation.

Google CEO Sundar Pichai acknowledges in a CBS interview that AI can facilitate the creation of fake news and images, including deep fakes.

Deep fakes are multi-dimensional, affecting not just voice or video, but also leading to issues like non-consensual explicit content.

The impact of deep fakes is significant: a fake image of the Pentagon being blown up was picked up by a major Indian news outlet, and the stock market dropped because of it.

BetterHelp is highlighted as a solution for seeking therapy discreetly online, with over 25,000 licensed therapists available.

Deep fakes are not a new phenomenon; the first instance dates back to 1898 with Thomas Edison's manipulated footage.

A timeline of deep fakes from 2018 to the present shows an evolution in technology and increasing prevalence.

Different types of deep fakes are identified, including puppet deep fakes, mouth swap deep fakes, and face swap deep fakes.

The number of deep fakes is rapidly increasing, with projections for 106.4 million deep fakes in 2023.

A study by Deeptrace reveals that 96% of the deep fake videos it found were non-consensual pornographic content featuring women.

Technological advancements in deep fakes include video rewrite programs, active appearance models, and generative adversarial networks.

Today, apps are available for creating deep fakes, allowing anyone to impersonate celebrities like Tom Cruise.

Deep fakes pose various threats, including political misinformation, fraud, corporate espionage, blackmail, and defamation.

National security risks from deep fakes include impersonating leaders, potentially triggering military responses based on false information.

Social unrest can be incited through deep fakes by portraying non-existent incidents or exacerbating social tensions.

The erosion of trust in media due to deep fakes could lead to confusion and cynicism about the authenticity of content.

Invasion of privacy is a concern with deep fakes, as they can use people's images or voices without consent.

Strategies to protect against deep fakes include media literacy, verifying information, using detection tools, maintaining privacy, and legislative support.

DHS provides advice on identifying deep fakes, such as checking for blurring, unnatural movements, and inconsistencies in the video or audio.

Transcripts

00:00

So look, you've seen the articles: people are concerned about AI, and Elon Musk and Steve Wozniak wrote an open letter with another thousand people saying let's pump the brakes on AI for six months. But what part of AI is so deeply concerning to these guys? Recently, six days ago, the president of Microsoft, Brad Smith, said deep fakes are his biggest concern about AI. This was while speaking to a group of government officials, members of Congress, and policy experts last Thursday in Washington, where he shared that he fears deep fakes will help create more misinformation. And it's not just him. Look at Google's CEO: Sundar Pichai conceded in a 60 Minutes interview with CBS that AI will make it easier to create fake news and fake images, including videos, hence deep fakes. And this deep fake problem is multi-dimensional. It's not just voice or video, which we'll cover in a minute. It's Gal Gadot, the Wonder Woman actress, appearing in porn that is a deep fake. Or all these pictures of Donald Trump getting arrested when he went to court. Or deep fakes of the Pentagon being blown up, to the point where one of the most reputable news sources in India was reporting on it and the stock market collapsed because of a deep fake. This is a real issue, and we're going to talk about it today.

01:08

Today's sponsor is BetterHelp. Whether you're rich, poor, middle class, married or not married, sometimes careers do well but relationships don't, and you want to talk to somebody discreetly about your personal life. We just don't want to do it at a therapist's office where everyone's waiting and somebody says, "Hey, Johnny, what are you doing here?" "Oh, well, I was about to see the therapist." There's a solution for that. BetterHelp is the world's largest therapy service, and it's 100% online. At BetterHelp you can tap into a network of over 25,000 licensed and experienced therapists who can help you with a wide range of issues. To get started, you just answer a few questions about your needs and preferences in therapy; that way BetterHelp can match you with the right therapist from their network. Then you can talk to the therapist however you feel comfortable, whether it's via text, chat, phone, or video call. You can message your therapist at any time and schedule live sessions when it's convenient for you. And if your therapist isn't the right fit for any reason, you can switch to a new therapist at any time for no additional charge. Get 10% off your first month by going to betterhelp.com/valuetainment; the link is also below. So if you get value out of this video, give it a thumbs up and subscribe to the channel.

02:15

Let's get right into it. A lot of people will say deep fakes are a new thing, that they must have started in the last 5, 10, 15 years. No: the first deep fake was in 1898, by Thomas Edison. The Edison Company wanted to film the Spanish-American War, but low-quality cameras made it challenging, so Edison chose to mix real footage of marching soldiers and weaponry with staged footage of American soldiers. By manipulating the truth, the footage had the desired effect of fueling patriotism in America. This shows that the intent behind deep fakes is not a new concept; Thomas Edison was making them before anybody else.

02:48

That's 1898, but look at the different kinds of deep fakes we've had in the last five years. DHS did a report on this; if you have not seen it, it's phenomenal and very helpful. It shows a timeline starting in 2018 with President Obama's BuzzFeed deep fake, then the Jama Casa deep fake, Jennifer Lawrence, David Beckham, world leaders singing — imagine all of these — Tom Cruise, Mark Zuckerberg, they have them all, including Joe Rogan. And it categorizes the different types of deep fakes. First, the puppet deep fake; here's what one looks like, and this stuff is not hard to get hold of. Second, what they call the mouth swap deep fake; there are plenty of apps that can do it, and here's what it looks like — with it, our enemies can make it look like anyone is saying anything. Third is the face swap deep fake, and here's an example: "I've been thinking, and I realized that it's been almost 10 years." Next is the synthetic media deep fake — a press conference in the Rose Garden. And then you have audio deep fakes; this is what Morgan Freeman sounds like saying, "I am not Morgan Freeman, and what you see is not real."

03:56

So let's look at the data to see how big a deal this is. Are we really seeing that much growth in the number of deep fakes being made? In 2019 we had roughly 14.6 million deep fakes; in 2020 that doubled to 29.2 million, then 43.8 million, then 58.4 million; in 2023 the total is expected to reach 106.4 million. Imagine the influence and manipulation in that. And there was a study by Deeptrace finding that 96% of all the deep fake videos they found were non-consensual pornographic content featuring women. That's the deep fake industry today.

04:34

Now let's look at the evolution of the technology behind deep fakes. Back in 1997 we had the Video Rewrite program, which would first track facial movements in a video and then rewrite the video with synthesized facial expressions. In 2001 came the active appearance model, which considers the shape and appearance — color and texture — of a face and binds these into a single unified model. Then in 2014 came generative adversarial networks, consisting of two parts: a generator network, which creates the image, and a discriminator network, which tries to tell the difference between real and generated images. Today there are plenty of apps you can download to make deep fakes; you can even look like Tom Cruise, go viral, and build a massive TikTok account like this guy.

05:16

Now let's talk about the threats, and how you can protect yourself — some feedback from us and some from DHS. Number one, political misinformation: deep fakes can be used to create videos of politicians saying or doing things they never did, which could manipulate public opinion, incite unrest, or even influence the outcomes of elections. Number two, fraud: criminals could use deep fakes to impersonate individuals, such as creating false identity proofs or faking video calls to trick victims into revealing sensitive information or sending money. That can happen to a lot of boomers and elderly people — "Hey Ma, hey Grandma, can you give me the credit card information again?" — and they will. Number three is corporate espionage, where deep fakes impersonate CEOs or other high-ranking officials, potentially leading to stock manipulation, the spreading of corporate misinformation, or tricking employees into revealing confidential information. Imagine an Elon Musk coming out and saying, "I've decided to sell Tesla, I'm retiring, I don't want to work anymore" — boom, the stock drops 20 percent. These are the types of things that could happen. Number four, blackmail and defamation: deep fakes could be used to create compromising or damaging images or videos of individuals for blackmail or character assassination. Number five, the spread of fake news: deep fakes can make fake news stories seem more credible by providing seemingly real video or audio evidence to support false claims. Number six, legal implications: deep fakes can potentially undermine the legal system by creating false evidence that could sway court cases or mislead investigations. Criminals could use this to claim "that wasn't really me," and, more importantly, imagine it being used by some dark, dirty, evil politician or government employee to go after innocent people. It's a little scary. Number seven, national security risk: by impersonating political, military, or other leaders, deep fakes could pose a threat to national security, even potentially triggering military responses based on false information. Number eight, social unrest: deep fakes can be used to incite violence and create social unrest by portraying incidents that never occurred or by fanning existing social tensions. Remember when Jussie Smollett claimed two men in MAGA hats attacked him? America lost its mind — "I can't believe racism still exists" — and then it took a few days to realize it wasn't real; it was fake. His contract was coming up and he was trying to get a raise, and it created real tension and division in America. That was a person staging a "deep fake" in real life. Now imagine if he could have made a video showing it happening, and you and I fell for it. It's very possible that can happen; I wouldn't be surprised if it does. Number nine, erosion of trust: as deep fakes become more prevalent, they could contribute to a general erosion of trust in video and audio media, causing confusion and cynicism among the public about what is real and what is not. And last but not least, invasion of privacy: deep fakes can infringe on people's privacy by using their images or voices without consent, or by creating false scenarios involving them. They can also be used to create non-consensual explicit content, which is a serious violation of personal privacy.

08:09

So how do you protect yourself against this stuff? Because if you're watching this, you have to know it's coming; no matter how much the government says you can't do this or that, a 12-year-old kid can take this technology and create a TikTok profile with it. Even right now there's a parody account of AOC: with everything that's been happening, AOC doesn't want to pay for a verified account while this parody account is verified. If you haven't read some of these tweets attributed to AOC, let me read three of them; tell me how funny this is, remembering it's a parody account. "We should have 14,768 separate Pride days, one day for every gender." Now, that's pretty dumb, and that's not AOC, but it looks so real because the parody account is verified and the real one isn't. It's a form of deep fake that's getting people confused. "If we don't move to 100% green energy soon, car emissions will kill off the human race just like it did with the dinosaurs." [Laughter] Again, it's getting people who don't really look at this stuff to say, "I can't believe she said something like this." "Every time my boyfriend farts, I make him plant a tree to offset his carbon emissions." Now, that one she may have said — I can see she's capable of it — but it's actually the parody account. So deep fakes: we've got to be aware of them.

09:24

How to protect yourself. Number one, media literacy: one of the best defenses against deep fakes is to be media literate. Understand that not everything you see on the internet is real, even if it looks and sounds like a real person. Be skeptical and always double-check; look at other accounts and other news sites to see if they're reporting it as well, and if not, maybe hang tight before you share it with other people. Number two, verify information: if a video or audio clip seems suspicious or out of character for the person involved, verify it from multiple sources before you believe or share it. Number three, use detection tools: there are a lot of detection tools out there — one we noticed is called deepware.ai — but you can find many that check whether something is fake or real. Number four, maintain privacy: be careful about what images and information you share online; the more images and videos of you that are available on the internet, the easier it is for someone to create a convincing deep fake of you. ("Perfect — then I'm screwed.") Number five, set up alerts: consider setting up Google Alerts for your name to monitor new content associated with it that appears on the internet. Number six, report deep fakes: if you come across a deep fake, or find yourself the victim of one, report it. Number seven, legislative support: advocate for legislation and regulations that protect against the malicious use of deep fakes. Number eight, be careful in online communications: be wary of strange or unexpected requests in video calls, especially if they involve sensitive information. Number nine, use secure, encrypted communication platforms: for sensitive matters, consider platforms that employ strong encryption to prevent unauthorized access. And number ten, awareness and education: raising awareness about the existing and potential harm of deep fakes is crucial; the more people are aware of the issue, the less likely they are to be fooled.

10:58

I'll also read you some of the advice DHS gives for identifying deep fakes: blurring evident in the face but not elsewhere in the video or image; a change of skin tone near the edge of the face; double chins, double eyebrows, or double edges on the face; whether the face gets blurry when it is partially obscured by a hand or another object; lower-quality sections within the same video; box-like shapes and cropping effects around the mouth, eyes, and neck; blinking movements that are not natural; changes in the background or lighting; varying tone, inflection, and phrasing — would the speaker say it that way? — and the context of the message: is it relevant to recent discussion, and can the speaker answer related questions?

11:31

Anyway, just be aware of it. This is real, it's here, and it's not going away. I guess what this teaches us is to go back to the way it was before. In the 1980s the average person didn't have a mobile phone. If a husband and wife got into a bad fight on a phone call at five o'clock as the husband was leaving work, he had a 40-minute ride home. He wasn't texting every second to say what he felt; he had 40 minutes to cool down before he got home and said, "Hey, what was that all about?" — and then they could work it out. Today, when you see something, don't immediately feel the need to react to it. We're all part of this camp now: "Is this real? Hey, Johnny, is this real? Can you check?" "It's not — it's a deep fake." "Got it. This thing's very realistic, though; look at this." And then move on. Verify before you jump to conclusions on the deep fakes you're going to see, because this is the future; it's not going away.

12:17

Now, a lot of people think only about the negative with AI, but there are a lot of ways to use AI in a positive way, especially ChatGPT. We did a video titled "7 Ways to Use ChatGPT in a Positive Way"; if you've never seen it, click here to watch it. If you got value from the video, give it a thumbs up and subscribe to the channel. Take care, everybody. Bye.

[Music]


Related Tags
Deep Fakes, AI Ethics, Misinformation, Media Literacy, Fake News, AI Concerns, Elon Musk, Steve Wozniak, Brad Smith, Sundar Pichai, Security Threats