Deepfakes Explained: How they're made, how to spot them & are they dangerous? | Explained
Summary
TL;DR: Deepfakes, which use AI to manipulate video and audio, have evolved from movie magic into technology accessible to anyone. This script explores their potential for humour and misinformation, highlighting past uses in films and the ethical concerns around spreading false narratives. It discusses the advances that make deepfakes increasingly realistic and difficult to detect, and the importance of critical thinking and media literacy for discerning truth in the digital age.
Takeaways
- Deepfakes are advanced video and audio manipulations that can replace faces or put characters in unusual scenarios.
- Facial recognition technology has been used in major films like 'Star Wars Rogue One' and 'Furious 7' to recreate characters.
- Hao Li, a pioneer in the field, highlights the evolution of deepfake technology from complex processes to accessible tools for anyone.
- Artificial intelligence plays a crucial role in deepfakes by mapping faces from the data it is fed, enabling face swapping and lip-syncing.
- Creating deepfakes has become far easier: graphics that once took a team of specialists weeks for just a few seconds of video can now be made by a single person.
- The accessibility of deepfake technology raises concerns about misinformation and harassment, affecting individuals and society.
- Deepfakes pose a significant risk in political scenarios, potentially swaying elections by misrepresenting candidates' actions or words.
- New voice technology like Lyrebird can create voice imprints, further enhancing the realism of deepfakes.
- The number of deepfake videos online doubled in less than a year, indicating rapid growth and proliferation.
- Deeptrace, a cybersecurity company, uses deep learning to detect deepfake videos, underscoring the need for advanced detection methods.
- Media literacy and critical thinking are essential for the public to discern the authenticity of videos and avoid falling for deepfakes.
Q & A
What is the primary application of deepfakes technology?
-Deepfakes technology is primarily used for video and audio manipulation, such as swapping faces onto other bodies, creating humorous or unrealistic scenarios, and even bringing characters back to life in films.
How was facial recognition technology used in 'Star Wars Rogue One'?
-In 'Star Wars Rogue One', facial recognition technology was used to make the actress playing Carrie Fisher appear the same age she was in the original film and to digitally recreate Peter Cushing as General Tarkin, despite his passing.
What role did Hao Li play in the production of 'Furious 7'?
-Hao Li worked on 'Furious 7' to map Paul Walker's face onto his brothers' bodies after Walker's untimely death during the film's production, allowing the movie to be completed.
How has the process of creating deepfakes evolved from 2015 to the present?
-The process of creating deepfakes has become significantly easier and faster, with the need for fewer reference photos and less specialized expertise, thanks to advancements in AI and deep learning.
What are some potential negative implications of deepfakes technology?
-Deepfakes can be used to spread misinformation, harass individuals by making it seem like they've done or said something they haven't, and manipulate public opinion, which is particularly concerning in the context of politics and elections.
Can you provide an example of a deepfake video involving a political figure?
-One well-known example is a deepfake video made by BuzzFeed featuring former US President Barack Obama. It demonstrated the potential for deepfakes to mislead the public and influence opinions.
What is the technology behind Lyrebird, and how does it work?
-Lyrebird is a Canadian technology that creates an imprint of someone's voice. Users need to say a few scripted lines into the program, and it generates a complete copy of their voice, which can be used to make deepfake audio.
How is the advancement of deepfake technology changing the ease of creating such videos?
-The advancement of deepfake technology is making it easier to create videos with higher quality and fewer reference materials. For instance, some deepfakes can now be made from just one photo or even a painting.
What is the role of media literacy in the context of deepfakes?
-Media literacy is crucial for educating people to think critically about the content they consume, to assess the credibility of sources, and to use contextual information to discern the truth in an era where visual scrutiny alone may not be reliable.
What measures can be taken to protect against the misuse of deepfake technology?
-Lawmakers and social media platforms need to put regulations and deterrence mechanisms in place, improve fact-checking, and let users flag suspicious content, in order to prevent the spread of deepfakes and protect individuals from defamation.
How did the situation in Gabon highlight the importance of critical thinking skills?
-In Gabon, a video of the missing president was accused of being a deepfake, leading to an attempted coup and loss of life. An investigation later found no evidence of tampering, illustrating the need for critical thinking before sharing or acting on information.
Outlines
The Evolution and Accessibility of Deepfakes
Deepfakes represent a significant leap in video and audio manipulation, allowing for the realistic superimposition of one person's face onto another's. Initially used for entertainment or to create amusing scenarios, such as placing characters in unusual settings or integrating multiple instances of a single actor, the technology has also been employed in major films like 'Star Wars Rogue One' and 'Fast and the Furious 7' to address the challenges posed by the passing of actors or to recreate younger versions of characters. Hao Li, a pioneer in this field, discusses the advancements that have made deepfake creation more accessible, shifting from a multi-week process requiring a team of specialists to a task achievable by a single individual with AI assistance. The technology's capabilities extend beyond face swapping to include manipulating mouth movements for false speech synthesis. However, the potential misuse of deepfakes to spread misinformation and harass individuals is a growing concern, especially in political contexts where manipulated videos could sway public opinion and election outcomes, as exemplified by the deepfake of Barack Obama and other influential figures.
The Future and Detection of Deepfakes
The future of deepfake technology is poised for even greater ease of use and improved realism, with developments that enable deepfakes to be created from a single image or painting. Applications like the Chinese app Zao demonstrate the current trend of user-friendly deepfake generation, allowing users to insert themselves into movie scenes with minimal input. Real-time face mapping, a feature present in many popular phone apps, is also indicative of the technology's pervasiveness and potential for further sophistication. Hao Li predicts that the creation of virtually undetectable deepfakes is imminent, a sentiment echoed by Giorgio Patrini, whose cyber security company Deeptrace is dedicated to detecting such manipulated content. The challenge lies in the rapid evolution of deepfake technology, which is outpacing the ability to visually identify manipulated videos. Both experts agree that while technological solutions are being developed to analyze video content at a pixel level or through behavioral analysis, the onus is also on educating the public to think critically and use contextual information to discern truth from deception.
Ethical Considerations and Countermeasures Against Deepfakes
The ethical implications of deepfake technology are profound, with the potential to defame and ruin reputations with relative ease. The responsibility to mitigate the misuse of deepfakes falls not only on the creators of the technology but also on lawmakers and social media platforms. Implementing regulations and enhancing fact-checking mechanisms are crucial steps in deterring the spread of deepfake content. Additionally, social media platforms should provide tools for users to flag suspicious content and analyze its dissemination patterns. The case of the Gabonese President highlights the importance of critical thinking, as the misidentification of a video as a deepfake led to violent repercussions. It underscores the necessity for individuals to process information thoughtfully before sharing it, and for platforms and authorities to promote media literacy and vigilance against the potential dangers of deepfake technology.
Keywords
Deepfakes
Facial Recognition Technology
Hao Li
Artificial Intelligence (AI)
Misinformation
BuzzFeed
Lyrebird
Media Literacy
Deeptrace
Giorgio Patrini
Zao
Real-time Face Mapping
Vladimir Putin
Highlights
Deepfakes represent the next generation of video and audio manipulation, capable of realistic face swapping and creating humorous or absurd scenarios.
Facial recognition technology has been utilized in major films like Star Wars Rogue One and Fast and Furious 7 for character resurrection and age manipulation.
Hao Li, a pioneer in the field, discusses the evolution of deepfake technology from complex processes to accessible tools for creating realistic graphics within weeks.
The ease of creating deepfakes today is attributed to advancements in artificial intelligence that automate the face mapping process.
Deepfakes are not limited to face swapping but also include manipulating mouth movements to fabricate speech.
The misuse of deepfakes poses a significant risk in spreading misinformation and harassment, with potential impacts on public opinion and elections.
BuzzFeed's deepfake of Barack Obama, and similar videos of other influential figures, demonstrate the technology's potential to deceive the public by impersonating people in positions of power.
Lyrebird technology enables the creation of a voice imprint from a few scripted lines, blurring the line between human and synthetic speech.
The future of deepfakes is predicted to involve even more realistic and easier-to-create videos, with Hao Li estimating virtually undetectable deepfakes to be within a year.
The rapid increase in deepfake videos on the internet, doubling between December 2018 and July 2019, indicates the technology's widespread adoption.
Deeptrace, a cybersecurity company, specializes in detecting deepfake videos using deep learning to analyze potential manipulations.
Giorgio Patrini highlights the importance of media literacy and critical thinking in discerning the authenticity of videos, beyond just visual inspection.
Both Hao Li and Giorgio Patrini agree that current deepfakes are already convincing enough to fool many people, emphasizing the need for contextual analysis.
The challenge of spotting deepfakes is evolving as technology fixes visual cues like blinking and skin tone inconsistencies.
Patrini and Li are developing complex tech solutions for deepfake detection, including pixel scanning and analyzing speech patterns.
Hao Li suggests that individuals cannot protect themselves from becoming deepfake victims and calls for legislative and social media platform action to deter misuse.
The incident in Gabon demonstrates the importance of critical thinking, as a video thought to be a deepfake led to unrest and violence, despite being authentic.
Transcripts
Deepfakes are the next generation of video & audio manipulation
and often they look like this
swapping someone's face
on to someone else's
sometimes they're made to look accurate
other times it's just for laughs
It's also used to put characters in pretty ridiculous situations
like the Joker as a medieval knight
or putting Nicolas Cage in every role
This kind of facial recognition technology has been around for a long time
it's even been used in major movies
like Star Wars Rogue One
which is a prequel to a film
that came out almost 40 years before it
they used it to make the actress playing Carrie Fisher look
the same age she was
and to bring Peter Cushing
who is the actor that played General Tarkin
back from the dead
it's also been used in the Fast and the Furious series
when Paul Walker died during the production of Furious 7
they mapped his face onto his brothers' bodies
so that they were able to complete the film
one of the people who worked on that film is Hao Li
he's one of the leading pioneers in the field
and is currently working on some
cutting-edge tech right now
when I sat down to chat with him
he talked about how making those graphics for Furious 7
in 2015 was very very different to
how easy it is to make deepfakes today
so it's about a couple of weeks for a few seconds
and that's usually just the animation
the amount of people that have to be involved
because everyone is an expert in one specific department
and the crazy thing about deepfakes
is that you just need that one person
deepfake tech has advanced so much it's accessible to pretty much everyone
so, how does this amazing technology work
part of the reason why pretty much anyone can make one
is that most of the heavy lifting is done by artificial intelligence
when it comes to face swapping
you feed the AI data
in this case, a photo
or lots and lots of photos of your subject
the AI then uses that info to map the face onto another one
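the face-mapping step described here is often built as a pair of autoencoders that share one encoder. The numpy sketch below illustrates only that shared-encoder, per-person-decoder idea; the weights are random placeholders for what a real system would learn from the photos it is fed, and all names are hypothetical

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for training photos: flattened 8x8 grayscale "faces".
DIM, LATENT = 64, 16
faces_a = rng.random((5, DIM))  # several photos of person A

# One shared encoder captures features common to all faces (pose,
# expression); each person gets their own decoder.  These random
# weights stand in for weights learned by minimising reconstruction
# error on each person's photos.
W_enc = 0.1 * rng.standard_normal((DIM, LATENT))
W_dec_b = 0.1 * rng.standard_normal((LATENT, DIM))

def encode(faces):
    """Compress faces to compact latent codes."""
    return np.tanh(faces @ W_enc)

def swap_a_to_b(faces):
    """Encode A's faces but decode with B's decoder, so A's pose and
    expression come out wearing B's appearance."""
    return encode(faces) @ W_dec_b

swapped = swap_a_to_b(faces_a)
print(swapped.shape)  # (5, 64): five "swapped" face images
```

the more photos of the subject the encoder sees, the better the latent codes capture that face, which is why early deepfakes needed lots of reference material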
deepfakes aren't just face swapping
they can also include things like manipulating someone's mouth to make it
seem like they're saying something they never did
while a lot of these videos
are cool and some are straight-up hilarious
there is a darker side to deepfakes
they can be used to spread misinformation and harass people
by making it seem like they've done or said something they never did
or never would
in the age of fake news this is a really big problem
and something a lot of people are really worried about right now
because it could be used to mislead the public
especially when it's applied to somebody in a position of power
like a politician
Imagine just a few weeks before a major election
a deepfake video comes out with one of the candidates
and it turns people's opinions against them and they end up losing the election
one of the most well-known politician deepfakes
was this video made by BuzzFeed about former US President Barack Obama
it's not the only one either
lots of videos have already been made of world leaders and influential people
like Facebook CEO Mark Zuckerberg
recently an Italian comedy show made a deepfake of their former PM Matteo Renzi
insulting other pollies
it was pretty clearly intended as a joke but a lot of people were fooled
and I imagine very confused
you might have noticed that videos like this one
and the Obama one use impersonators to make the subject say something that they never would
often that actually helps make it easier to spot a fake
because you can usually tell when someone's faking someone's voice
but that's changing
thanks to new technology like Lyrebird
it's a technology from Canada
that creates an imprint of someone's voice
all you need to do is say a couple of scripted lines into the program
and then it creates a complete copy of your voice
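Lyrebird's internals aren't public, but voice cloning of this kind generally distils the scripted enrollment lines into one fixed-size speaker embedding that then conditions a speech synthesiser. A heavily simplified sketch of that enrollment step, with a random projection standing in for a trained speaker encoder and all names hypothetical

```python
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES, EMBED_DIM = 40, 8

# Random projection standing in for a trained speaker encoder.
W_speaker = 0.1 * rng.standard_normal((N_FEATURES, EMBED_DIM))

def voice_imprint(enrollment_clips):
    """Map each clip's acoustic features to an embedding, then average
    them into one fixed-size 'voice imprint' for the speaker."""
    per_clip = np.tanh(np.asarray(enrollment_clips) @ W_speaker)
    return per_clip.mean(axis=0)

# A couple of scripted lines, as toy acoustic feature vectors.
clips = rng.random((3, N_FEATURES))
imprint = voice_imprint(clips)
print(imprint.shape)  # (8,): one compact vector a synthesiser could
                      # be conditioned on to mimic this speaker
```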
one journalist from Bloomberg got a chance to make an imprint of his voice and try out the tech
he ended up calling his mum
and his mum had no idea she was talking to a robot
all of this can seem pretty terrifying
so what does the future hold for deepfakes?
deepfake technology is constantly improving
making it much easier for people to make them
while that earlier video of Tom Cruise
took lots and lots of reference photos to make
we're now seeing deepfakes made from just one photo
or in this case one painting
and again it's not just these really tech-y researchers that are able to make these
you've got the Chinese app Zao which is really blowing up right now
and that gives users the chance to put themselves into movies
using just one photo of themselves
plus real-time face mapping is pretty much everywhere these days
in fact you've probably already seen a version of it on popular phone apps
this is me as the Joker
this is me as some old bloke with a beard
this is me as some weird Halloween-scary thing
and this is me as Donald Trump
you can see the technology isn't quite there yet
but it's getting better
and it's only going to get better
and this is me with a cone on my head
it's pretty good
Hao Li is actually working on that kind of tech right now
recently he was responsible for this real-time face mapping of Russian President Vladimir Putin
Li reckons virtually undetectable deepfakes are just around the corner
I mean intuitively my answer is
you know we're between six months and twelve months
based on the
it's just intuition right
based on what I've seen
the evolution so far
you know in the end all we're watching is pixels
and there should be a way to get pixels that are literally perfect
and I don't think it's gonna take that long
he says that not only is the video
quality getting a lot better
they're also becoming a lot easier to make
a recent study found the total number of deepfake vids on the internet
had practically doubled in just nine months between December 2018 and July 2019
that study was spearheaded by this guy Giorgio Patrini
I had a chat to him
about his cyber security company Deeptrace
which specialises in detecting deepfake videos
essentially we use the same type of technology
deep learning to understand the videos and in particular
to ascertain if they might be manipulated or completely fully synthesised by algorithm by generative models
There's one thing that both Li & Patrini agree on
while virtually undetectable deepfakes aren't here just yet
it doesn't really matter
because people are already being fooled by them right now
just think of that Italian Prime Minister
so, are there certain things that people should look out for when trying to spot a fake
well not exactly
so in the long run it may be dangerous to just look at the content itself
in search for visual clues
because I think everybody I would agree
that we are very close to the time when videos are going to be convincing
you know fake vids are going to be convincing enough for fooling most people
the problem is telling people to look for things like changing skin tone
weird blurring
or unnatural shadows
might work right now
but those problems are eventually going to get fixed
for example in the early days of deepfakes
none of them ever blinked
but now they do
both Patrini and Li are
working on pretty complex tech solutions
like scanning individual pixels in a video for clues
or analyzing the unique way people move when they talk
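the pixel-scanning idea can be illustrated with a simple image statistic: looking at a frame's high-frequency residual, where generative models often leave faint fingerprints. The sketch below is illustrative only; detectors like Deeptrace's feed such signals into trained deep networks rather than a hand-picked statistic

```python
import numpy as np

def highpass_residual(frame):
    """Subtract each pixel's local average, leaving the high-frequency
    detail where synthesis artefacts tend to hide."""
    blurred = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0) +
               np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)) / 4.0
    return frame - blurred

def suspicion_score(frames):
    """Score a clip by the average variance of its residuals; a real
    detector learns this mapping instead of using a fixed statistic."""
    return float(np.mean([highpass_residual(f).var() for f in frames]))

# Toy clip: five 16x16 grayscale frames.
rng = np.random.default_rng(2)
clip = rng.random((5, 16, 16))
print(suspicion_score(clip) > 0)
```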
but when it comes to regular people like you or me
Patrini and Li say it's not as simple as looking at what you're seeing
we need to educate people and make them think critically
which is why media literacy is so important
is that something believable that happened in the video
is that in line with what this person has said before
what is the reliability of the source where was this video published
has any major newspaper mentioned the video
search if they already did an investigation around it
if we are not equipped as humans
and we probably will not be any more in the near future
to understand just visually
to trust with visual scrutiny
we can still leverage a lot of contextual information
to help us to sort out the truth I think
finally, how do you protect yourself from being turned into a deepfake
Li says there's not a lot you can do
instead he reckons lawmakers and social media platforms
need to step up and do their bit to deter people and make it harder for these kinds of videos to spread
basically work with lawmakers
and put in place regulations for the misuse of this technology
you can ruin someone
you can defame someone
you can ruin someone's reputation very easily
so on social media platforms there has to be mechanisms where people can do better fact-checking
you can flag something if there's something weird
they can even analyse how things are spreading
one quick thing before I go
Patrini said it's also really important to use those critical thinking skills
in situations where a video turns out not to be a deepfake
for example
Deeptrace was involved in an investigation earlier this year in the African nation of Gabon
long story short the president went missing for a few months
and people were demanding proof that he was alive and healthy
otherwise they were going to demand an election
when the President did finally reappear
it was in a short video that some people thought looked a bit odd
and it was actually accused of being a deepfake
the military attempted a coup which was unsuccessful
but people died
and an investigation later found no significant evidence of tampering or deepfakes in the video
it's an important lesson to always use those critical thinking skills when you're presented with a bit of information
and to make sure you actually look at it and process it
before mindlessly sharing it on