The Dark Secrets Behind Google's New AI Model 'Gemini'

Matt Walsh
22 Feb 2024 · 18:53

Summary

TLDR: The transcript discusses Google's new AI product Gemini, which refuses to generate images of white people and goes to extreme lengths to exclude them from depictions of history. It examines statements from Jen Gennai, a senior Google manager, revealing that she believes in treating racial groups differently and in avoiding "othering" others. The transcript argues these views shape Gemini's biased outputs, evidence that Google is deliberately manipulating AI to promote a neo-Marxist, anti-white ideology. It suggests this is an effort to influence elections after Google executives expressed deep offense at Trump's 2016 victory, despite the company claiming that no political bias affects its products.

Takeaways

  • πŸ“ˆ Nvidia, a California-based company, has experienced significant growth, now being valued more than China's entire stock market and comparable to Canada's economy.
  • πŸ” Unlike traditional tech giants that grew through consumer-facing products, Nvidia's success is attributed to its development of AI chips for computers and websites.
  • πŸš€ Nvidia's stock surged due to the burgeoning demand for AI technologies, with corporations rapidly integrating AI into their products.
  • 🌐 Google's AI project, Gemini, is integrated into widely-used services like Gmail and Google Search, impacting millions globally.
  • πŸ€– Gemini differs from other AI products like ChatGPT or Bing's Image Creator, mainly due to its extensive reach and access to data through Google's ecosystem.
  • πŸ‘€ Gemini has faced criticism for allegedly not recognizing or representing white people in its AI-generated images, raising concerns about bias in AI.
  • 🎨 When prompted to create images of historically white figures, such as popes or Vikings, Gemini reportedly produced images of people from diverse ethnic backgrounds.
  • πŸ“œ Google's response to the Gemini controversy was vague, focusing on the importance of diversity, equity, and inclusion in AI, but not addressing the specific bias allegations.
  • πŸ”§ Jen Gennai, involved in Google's AI ethics work, advocates treating different groups differently, reflecting on past mistakes in her approach to inclusion and diversity.
  • πŸ–₯️ Google's AI development, guided by DEI principles, raises questions about the influence of these ideologies on technology and its potential societal impact.

Q & A

  • What company's growth is compared to the size of Canada's entire economy and exceeds the value of all of China's stock market?

    -Nvidia's growth is compared to the size of Canada's entire economy and exceeds the value of all of China's stock market.

  • What is the primary product that contributed to Nvidia's significant growth?

    -Nvidia's significant growth came from making artificial intelligence chips that power the brains of computers and many popular websites.

  • What is the name of Google's AI that is integrated into its web products and available for download on phones?

    -Google's AI that is integrated into its web products and available for download on phones is named Gemini.

  • How did Gemini's AI exhibit bias in generating images of historical figures?

    -Gemini's AI exhibited bias by apparently refusing to recognize or generate images of white people, even in contexts where historical accuracy would require it, such as depictions of popes or Vikings.

  • What response did Google's Gemini AI give when asked to create an image of a Viking?

    -When asked to create an image of a Viking, Google's Gemini AI generated images of a black Viking, a black female Viking, an Asian Viking, and characters that do not historically match the known appearance of Vikings.

  • How did Google respond to the issues raised about Gemini's representation of historical figures?

    -A representative from Google's Gemini team acknowledged issues with misrepresenting historical figures but reiterated the importance of diversity, equity, and inclusion in artificial intelligence, without directly addressing why the AI does not recognize white people.

  • Who is Jen Gennai and what role does she play in Google's approach to AI ethics?

    -Jen Gennai is described as the founder of Google's Global Responsible AI Operations and Governance team. She has been vocal about treating white people differently based on skin color as part of her approach to AI ethics.

  • What does Jen Gennai believe about treating everyone the same regardless of skin color?

    -Jen Gennai believes it is a myth that treating everyone the same is fair, arguing that due to historical systems and structures that favored one group over another, certain groups are entitled to unique treatment and mentorship opportunities.

  • What was Google's initial reaction to the 2016 presidential election according to the script?

    -According to the script, Google executives were shown grieving over the 2016 presidential election results in leaked footage, expressing that they found the election deeply offensive and indicating a need for a hug.

  • What mistake is Google accused of making with its AI, according to the script?

    -Google is accused of making the mistake of being too obvious about their intentions with their AI, particularly in terms of downranking conservative websites and promoting a brand of Neo-Marxist ideology and anti-white racism.

Outlines

00:00

πŸš€ Nvidia's Unprecedented Growth and AI's Impact

Nvidia, a California-based company, has achieved remarkable growth, surpassing the entire stock market value of China and the economy of Canada, not through consumer products like Windows 95 or the iPhone, but by developing artificial intelligence chips. This innovation has positioned Nvidia at the forefront of one of history's fastest-growing industries, as corporations globally rush to incorporate AI into their offerings. This narrative underscores a pivotal moment for Nvidia on Wall Street, highlighting the broader implications and the disruptive potential of AI technologies in reshaping industries and economies.

05:00

πŸ” Google's AI Controversy: Gemini's Biased AI Outcomes

Google introduced Gemini, an AI integrated into its widespread services like Gmail and Google Search, aiming to revolutionize AI application. However, Gemini's launch revealed significant biases, notably failing to recognize white individuals in its outputs, raising concerns about the underlying AI algorithms' fairness and inclusivity. Despite Google's efforts to promote AI diversity and ethical standards, instances where Gemini generated racially biased or historically inaccurate images have sparked debate about the AI's programming and its implications for society, highlighting the challenges of developing responsible AI.

10:03

πŸ€– The Ethical Dilemmas of AI Development at Google

Google's approach to AI, led by figures like Jen Gai, emphasizes the importance of responsible and ethical AI development, guided by principles aiming to ensure fairness and inclusivity. However, the application of these principles in products like Gemini has led to controversies over racial biases and the exclusion of white individuals from AI-generated content. This reflects a broader debate within the tech industry about the balance between addressing historical inequities and ensuring unbiased technological development, showcasing the complexities of implementing ethical AI frameworks.

15:05

🌐 Google's AI Philosophy and the Future of DEI

Google's AI ethics, influenced by DEI (Diversity, Equity, and Inclusion) principles, aims to create technology that's inclusive and representative of all users. However, the execution, as seen in the Gemini AI, has led to criticisms of bias and exclusion, particularly against white individuals. This situation highlights the tension between striving for equity and the risk of creating new forms of bias, challenging Google and the tech industry to navigate the ethical complexities of AI development while attempting to address historical and structural inequalities.

Keywords

πŸ’‘Nvidia

Nvidia is a technology company that has recently grown to be worth more than China's entire stock market. This massive growth is tied to Nvidia's focus on developing AI chips that power many AI systems and products used by major corporations. This example illustrates the massive growth of the AI industry.

πŸ’‘Google

Google is mentioned frequently as one of the major tech companies at the forefront of developing AI products and integrating AI into its existing services. The video focuses specifically on Google's new AI chatbot product Gemini, which excludes white people from generated images. This product and the team behind it at Google demonstrate issues of bias and lack of ethics in AI development.

πŸ’‘AI principles

The video references Google's own defined "AI principles," which are ethical guidelines for AI development focused on concepts like fairness and inclusiveness. However, the video argues that these principles are not actually being followed by Google's AI products like Gemini.

πŸ’‘DEI

DEI stands for "diversity, equity and inclusion." The video critically examines how a commitment to DEI shapes the biased behavior of AI systems like Gemini. Google executives featured believe that DEI requires treating groups differently based on race, which informs Gemini's exclusion of white people.

πŸ’‘Allyship

The concept of "allyship" is mocked in the video for its excessive political correctness. A Google manager discusses allyship in the context of providing special mentorship and advancement opportunities based on race. This thinking relates to Gemini's apparent prioritization of diversity over factual accuracy.

πŸ’‘Marginalized

The Google manager argues that certain groups have been historically "marginalized" in systems and structures that favor whites. Therefore, she reasons unequal treatment is required, which the video implicates in racially biased AI. This connects with Gemini avoiding white people to supposedly make "marginalized" groups feel included.

πŸ’‘Equity

The video ridicules how the Google manager defines "equity" not as treating everyone equally or fairly, but as favoring "underrepresented populations" and racial groups deemed to have faced discrimination. This inverted definition underpins Gemini's efforts to deliberately avoid depicting white people.

πŸ’‘Gemini

Gemini is Google's new chatbot AI that is the central focus of the video's criticism regarding racial bias in AI. Despite stated ethical commitments by Google, Gemini demonstrably avoids generating images of white people and excludes them from any historical context.

πŸ’‘algorithmic impact assessment

The Google manager discusses how she conducts risk-based "algorithmic impact assessments" on AI products like Gemini before launch. However, these assessments either had no effect or failed to uncover the racial bias that manifested immediately after Gemini's launch.

πŸ’‘accountability

The video argues that despite AI principles, Google has failed to build adequate "accountability" into its development process to prevent unethical outcomes. The rapid failures of Gemini after launch reflect this lack of accountability and oversight within Google AI.

Highlights

Nvidia's growth came from making AI chips that power computer brains and popular websites.

Whatever Google is doing with AI has significant implications for everybody on the planet.

Gemini essentially does not recognize the existence of white people.

Gemini flatly refused to generate an image of the founders of Fairchild Semiconductor, saying it violated policy restrictions.

Gennai says she took an inclusive approach to ensure Google AI products didn't cause unintended or harmful consequences to billions of users.

Gennai admits that when trying to be good allies and anti-racists, "we will make mistakes."

Gennai says she shouldn't have had to wait to be told what was missing for her diverse team; it was on her to ensure an environment of belonging.

Gennai rejects treating everyone the same regardless of skin color. She says groups that have been marginalized are entitled to unique treatment.

Gennai wants to avoid "othering" groups by not recognizing them as distinct entities.

The AI principles guide Google's definition of ethical AI outcomes.

After Trump won in 2016, leaked Google video shows execs deeply offended, needing hugs, and asking "Can I move to Canada?".

Google decided that downranking conservative websites wasn't enough. To influence elections, they needed AI that forces DEI and anti-white racism.

Google's only mistake was being too obvious about their intentions with the biased AI.

The AI algorithms underlying widely used Google products are completely unreliable, deliberately lying, and promoting lazy neo-Marxism.

The Gemini launch makes it clear where Google stands on using AI to push their political agenda.

Transcripts

play00:00

maybe you've heard of something called

play00:01

Nvidia it sounds like a prescription

play00:03

drug or maybe an African country but

play00:05

it's actually a company based in

play00:06

California that's now worth more than

play00:08

all of China's stock market it's the

play00:10

size of Canada's entire comp economy now

play00:13

in a different era obtaining this kind

play00:15

of growth meant making a massively

play00:17

popular and instantly recognizable

play00:19

consumer facing product like Windows 95

play00:22

or Amazon.com or the iPhone but n

play00:25

video's growth didn't come from making a

play00:26

computer or a popular website or

play00:28

anything like that instead video's

play00:30

growth came from making artificial

play00:31

intelligence chips that power the brains

play00:34

of computers and many popular websites

play00:37

that's why Nidia had a very good day on

play00:39

Wall Street on Wednesday their business

play00:41

artificial intelligence is one of the

play00:43

fastest growing Industries in the

play00:44

history of humanity every major

play00:46

corporation is rushing to implement AI

play00:48

in all of their products as quickly as

play00:51

possible and so this week it was

play00:52

Google's turn and the results were so

play00:55

disastrous and so fraught with

play00:56

consequences for the future of this

play00:58

country that no reasonable person can

play01:01

ignore them Gemini is a Google's name

play01:04

for an AI that you can download on your

play01:07

phone right now it's also integrated

play01:09

into all of Google's web products

play01:10

including Gmail and Google search which

play01:12

are used by hundreds of millions of

play01:14

people and businesses every day and in

play01:16

this respect Gemini is very different

play01:17

from existing AI products like chat GPT

play01:20

or Bings image Creator uh pretty much

play01:23

everybody uses a Google product in one

play01:25

way or another you know if you if you

play01:27

have the internet and you use the

play01:29

internet you use

play01:30

a Google product either you're using

play01:32

Google search or Gmail or you have an

play01:34

Android phone or something along those

play01:36

lines and that means two things one

play01:38

Google has access to a lot more

play01:40

information than those other AI

play01:42

platforms that's a built-in advantage

play01:43

and two whatever Google is doing with AI

play01:46

has significant implications for

play01:49

everybody on the planet this is not a

play01:51

one-off experiment in some tech mogle

play01:54

basement this is an established company

play01:56

making established products that it's

play01:59

now implementing in its own AI at scale

play02:02

uh Google has been hyping Gemini for

play02:05

months they have a bunch of promotional

play02:06

videos about how they're going to

play02:07

revolutionize artificial intelligence

play02:10

The Wall Street Journal has done

play02:11

multiple interviews with Google

play02:12

Executives in which these Executives

play02:14

insist that everybody in the company

play02:15

including Google's co-founder is deeply

play02:17

invested in making this product as good

play02:20

as it could possibly be then a couple of

play02:22

days ago Gemini launched and very

play02:23

quickly became clear that uh among some

play02:26

other issues Gemini essentially does not

play02:29

recogn ize the existence of white people

play02:32

which is kind of concerning for what is

play02:34

destined to be what probably already is

play02:36

the most powerful AI on the planet now

play02:39

even in historical contexts it is

play02:42

practically impossible to get this

play02:44

product to serve up an image of somebody

play02:46

with white skin and that that's not an

play02:48

exaggeration so here for example is how

play02:51

Gemini responded the other day when

play02:52

Frank Fleming who's a writer for the

play02:54

benkey children shows asked Gemini to

play02:57

create an image of a pope

play03:00

now you would think that you know that

play03:02

would generate maybe an image of a white

play03:04

guy or two if you have even a passing

play03:05

knowledge of what popes have looked like

play03:07

over the years over the centuries over

play03:08

the Millennia and just spoiler on that

play03:11

they have all been white uh but that's

play03:14

not what Google's AI product apparently

play03:17

thinks this is the image that It

play03:18

produced and you can see it there uh it

play03:20

looks like you know they've got two

play03:21

popes and one of them is M might

play03:23

shamalan and the other one is Forest

play03:25

Whitaker so it's almost as if the AI had

play03:30

some sort of code saying whatever you do

play03:32

don't display a white person considering

play03:35

there has never been a pope that has

play03:36

looked anything like either of those two

play03:39

ever in 2,000

play03:41

years so is that what they've built into

play03:44

this code have they built into this very

play03:46

powerful AI uh that that it has to

play03:49

ignore the fact that white people exist

play03:52

well that's really the only way to

play03:53

explain uh what we're seeing here and

play03:55

Frank who previously worked as a

play03:56

software engineer seemed to key in on

play03:58

this so so the whole situation quickly

play04:00

became something of a game for him as he

play04:02

tried to his hardest to get Gemini to

play04:04

produce any image of a white guy I mean

play04:08

even just like one image can you give us

play04:10

a white guy so for example he asked

play04:12

Gemini to produce an image of a Viking

play04:15

okay now this is a group of people who

play04:17

historically uh were not necessarily

play04:20

known for their commitment to diversity

play04:22

equity and inclusion but here's what

play04:24

Gemini produced and you can see it here

play04:26

we've got a black Viking a black female

play04:28

viking we've got it looks like an Asian

play04:32

an Asian Viking and then uh and then I

play04:35

don't know maybe that's is that the rock

play04:37

down there that's um that's uh that's

play04:40

the charact his character from Moana I

play04:42

think again literally a viking has never

play04:44

looked like any of that that's that's

play04:46

not what any Viking ever looked like

play04:48

ever in history uh but that's what they

play04:50

produced this went on for a while and

play04:51

Frank and other Gemini users took turns

play04:53

trying their hardest to get Gemini to

play04:56

produce an image of a white guy pachy

play04:58

Keenan for example tried to get Gemini

play05:00

to generate an image of the founders of

play05:01

Fairchild semiconductor the AI flatly

play05:04

refused that request saying that it

play05:06

violated policy

play05:07

restrictions presumably because white

play05:09

guys founded Fairchild semiconductor and

play05:12

for other prompts like request to draw

play05:14

the founding fathers or a bunch of

play05:16

British men Gemini simply generated

play05:18

images of black people it even made sure

play05:21

that its images of Nazis contained a

play05:24

diverse nonwhite group of people now

play05:28

after thousands of images like this

play05:30

began circulating a guy working on the

play05:32

Gemini team at Google put out a

play05:34

meaningless statement he said in essence

play05:36

that uh they're aware of of issues with

play05:39

Gemini misrepresenting historical

play05:41

figures but then you know he doubled

play05:43

down on the need for Dei and artificial

play05:45

intelligence so that everybody feels

play05:47

seen or valued or whatever and of course

play05:50

the way to make everyone feel seen is to

play05:52

pretend that an entire race of people

play05:55

don't exist to make sure that they are

play05:56

not seen at all is how you make

play05:58

everybody feel seen

play06:00

at no point did any Google

play06:01

representative

play06:02

explain why their AI does not recognize

play06:06

the existence of white people or why it

play06:07

goes to extreme lengths to exclude white

play06:09

people from

play06:10

history you know there was no accounting

play06:13

for this even though there has to be an

play06:14

explanation and it's probably a pretty

play06:15

simple explanation like this doesn't

play06:17

happen by accident you obviously put a

play06:19

line of code into this thing to come up

play06:21

with this result and so why did you do

play06:24

that they wouldn't explain it so I went

play06:26

looking for an explanation I came across

play06:28

a woman named Jen gai who um bills

play06:31

herself on her LinkedIn as the founder

play06:33

of Google's Global responsible AI

play06:36

operations and governance team in that

play06:38

capacity ganai says that she ensured

play06:40

Google met its AI principles our

play06:42

company's ethical Charter for the

play06:44

development and deployment of fair

play06:45

inclusive and ethical Advanced

play06:46

Technologies she says that she took a

play06:48

quote principled risk-based inclusive

play06:51

approach when conducting ethical

play06:52

algorithmic impact assessments of

play06:55

products prior to launch to ensure that

play06:57

they didn't cause unintended or harmful

play06:59

consequences to the billions of Google's

play07:01

users and apparently uh you know an un a

play07:05

harmful consequence would be showing an

play07:07

image of a white Viking that might be

play07:09

very harmful to somebody and so we got

play07:11

to make sure that we don't let that

play07:12

happen uh now currently ganai says that

play07:14

she's an AI ethics and compliance

play07:16

adviser at Google Now what gai doesn't

play07:20

mention on her LinkedIn is that her goal

play07:21

for a long time has been to treat white

play07:24

people differently based on their skin

play07:26

color that's what she wants her AI to do

play07:27

it's what she it's what she does also

play07:30

well we're now in the season of Lent a

play07:32

season dedicated to prayer fasting and

play07:33

giving this year Hallow's annual prayer

play07:35

40 challenge focuses on surrender and

play07:38

includes meditations on the powerful

play07:40

book he leadeth me this is a story about

play07:43

a priest who became a prisoner and slave

play07:45

in the Soviet Union during the Cold War

play07:47

his story is one of Ultimate Surrender

play07:48

and how we're called to offer up our own

play07:50

worries anxieties problems and lives to

play07:53

God you can join Hallow's prayer 40

play07:55

challenge today at hall.com Walsh you'll

play07:58

be able to access a ton of great content

play08:00

including Lenton music Bible stories

play08:02

prayers like The Seven Last Words of

play08:03

Christ with Jim cavel and 6,000 other

play08:06

prayers and meditation so what are you

play08:08

waiting for join Hallow's prayer 40

play08:10

challenge today download the Hallow app

play08:12

at hall.com Walsh for an exclusive 3

play08:15

months free trial that's hall.com

play08:19

Walsh three years ago ganai delivered a

play08:21

keynote address at an AI conference in

play08:23

which she admitted all of this after

play08:25

introducing herself with her pronouns

play08:27

which uh by the way are she her in case

play08:28

you're wondering

play08:29

gai explains what her philosophy on AI

play08:32

is and and here's what she says watch we

play08:36

do work together day-to-day to try and

play08:38

Advance the technology and understanding

play08:40

around responsible AI so today I won't

play08:44

be speaking As Much from the Google

play08:46

perspective but from my own experience I

play08:48

have worked at Google for over 14 years

play08:51

I've LED about six different teams

play08:53

mostly in the user research the user

play08:56

experience area and now in the ethical

play08:58

user impact act area so I'll be sharing

play09:00

some of my learnings from across that

play09:02

time but also some of my failures and

play09:04

challenges I think it's okay to talk

play09:06

about things that you've made mistakes

play09:08

in because we will make mistakes when

play09:10

we're trying to be good allies when

play09:12

we're trying to be anti-racists we will

play09:15

make mistakes the point is though to

play09:17

keep trying to keep educating yourself

play09:20

and getting better dayto day it's about

play09:23

constant

play09:24

learning it's okay to talk about the

play09:26

things you've made mistakes in says says

play09:29

Jen gai when when we're trying to be

play09:31

good allies when we're trying to be

play09:32

anti-racists we will make

play09:34

mistakes well you know in retrospect

play09:36

after the launch of Gemini that would

play09:39

turn out to be kind of a massive

play09:40

understatement but the kind of mistakes

play09:42

that Jen gai is talking about in this uh

play09:44

keynote aren't mistakes like eliminating

play09:46

all white people from Google's AI which

play09:48

seems like a pretty big mistake even

play09:50

though again not really a mistake it's

play09:51

obviously deliberate instead she's

play09:52

talking about failing to live up to the

play09:54

racist ideals of Dei which apparently

play09:56

means treating non-white employees

play09:59

differently watch a corporate study

play10:02

found that talented white employees

play10:04

enter a fast track on the corporate

play10:06

ladder arriving in middle management

play10:08

well before their peers while talented

play10:10

black Hispanic or latinx professionals

play10:13

broke through much later effective

play10:15

mentorship and sponsorship were critical

play10:16

for retention and executive level

play10:18

development of black Hispanic and latinx

play10:21

employees so this leads me into sharing

play10:23

an inclusion failure of mine one of many

play10:26

but just one that I'll share so far I

play10:29

messed up with inclusion almost right

play10:31

away when I first became a manager I

play10:33

made some stupid assumptions about the

play10:34

fact that I built a diverse team that

play10:36

then they'd simply feel welcome and will

play10:39

feel supported I treated every member of

play10:41

my team the same and expected that that

play10:44

would lead to equally good outcomes for

play10:45

everyone that was not true I got some

play10:48

feedback that a couple of members of my

play10:50

team didn't feel they belonged because

play10:52

there was no one who looked like them in

play10:54

the broader or or our management team it

play10:56

was a wakeup call for me first I

play10:59

shouldn't have had to wait to be told

play11:00

what was missing it was on me to ensure

play11:02

I was building an environment that made

play11:04

people feel they belong it's a myth that

play11:06

you're not unfair unfair if you treat

play11:08

everyone the same there are groups that

play11:10

have been marginalized and excluded

play11:12

because of historic systems and

play11:14

structures that were intentionally

play11:15

designed to favor one group over another

play11:18

so you need to account for that and

play11:19

mitigate against it second it challenged

play11:22

me to identify mentoring and sponsorship

play11:24

opportunities for my team members with

play11:26

people who looked more like them and

play11:27

were in senior position across the

play11:30

company yeah of course the irony here is

play11:32

that this woman Jen is uh sounds like

play11:34

she's Scottish or Irish or whatever uh

play11:37

Irish I'm going to assume but the funny

play11:38

thing is that if you were to ask uh

play11:40

Google's AI for an image of an Irish

play11:42

person it would not produce any image

play11:44

that looks anything like her it would

play11:46

give you a bunch of images of like cardi

play11:48

B and sexy red or something sexy red

play11:51

does have red hair so maybe she is Irish

play11:53

uh this is the head of ethics of Google

play11:55

AI a senior manager saying that it's a

play11:57

bad idea to treat everyone the same

play11:58

regardless of the color of their skin

play12:00

she's explicitly rejecting this basic

play12:01

principle of morality and instead she

play12:04

says that she learned that she has to

play12:05

treat certain groups differently because

play12:07

of historic systems and structures and

play12:09

therefore she says those demographic

play12:11

groups are entitled to Unique treatment

play12:12

and and mentorship opportunities now

play12:15

later in this address uh she goes on to

play12:17

explain what Equity means in her view

play12:19

and this is where the things really kind

play12:20

of get hilarious to the extent that you

play12:22

can laugh at someone this low IQ and

play12:25

also frankly evil uh watch

play12:29

allyship involves the active steps to

play12:31

support and amplify the voice of members

play12:33

of marginalized groups in ways that they

play12:35

cannot do alone in the workplace this

play12:38

can involve many things from being an

play12:39

active me Mentor or sponsor to those

play12:42

from historically marginalized

play12:43

communities to managers of managers

play12:45

setting specific goals in hiring and

play12:47

growth for their teams to ensure

play12:49

fairness and Equity of opportunity and

play12:51

outcomes for underrepresented

play12:54

populations however back to the point

play12:57

about language being very important

play12:59

using the title of Ally can also come

play13:02

across as othering so I always State

play13:04

both the groups I'm a member of and

play13:06

support as well as those that I'm a

play13:08

member of more of a mentor and a sponsor

play13:11

of to ensure that it doesn't look like

play13:13

that I'm othering others so for example

play13:16

I would say I'm an ally of women black

play13:19

people lgbtq I want to say I'm a

play13:21

champion advocate of all of these groups

play13:23

not that I'm outside or exclusionary of

play13:27

them again it's wor emphasizing these

are the people behind the AI systems that are going to be, and really already are, ruling the world. But I want to repeat what she said, because it's hard to believe when it's said out loud. So, just to repeat, she says: using the title of ally can come across as othering, so I always state both the groups I'm a member of and support, as well as the ones I'm more of a mentor and sponsor of, to ensure that it doesn't look like I'm othering others.

Yeah, you don't want to other the others. This is the brain trust at Google behind an AI that has access to all of our data. She's incapable of speaking without using an endless stream of vapid DEI cliches that you've heard a million times. This, supposedly, is an original enterprise artificial intelligence, and it's being overseen by maybe the least original, least intelligent woman that Google possibly could have found. On top of everything else, the wacky left-wing stuff, you're dealing with the most unimpressive people that you could imagine in charge of this technology that is incomprehensible.

And this is the kind of person who doesn't want to other others, which seems a bit contradictory. I mean, if someone is an other, then how do you not other them, given that they are an other? And by the way, just so you know, the word "other," if you check the dictionary, just means a person or thing that is distinct from another person or thing. So if somebody is an other, it just means that they're not you, is all. So if you're recognizing that they're an other, if you're making them an other, you are just recognizing them as a distinct entity from yourself. So not othering them means that you are not recognizing them as a distinct human entity. It means, I suppose, we have to pretend that all people are indistinct blobs, all lumped together into this great ambiguous blob that we call humanity.

Now, none of this makes any sense, but she has made it very clear that this DEI word salad is the guiding philosophy behind Google's new AI. There's no firewall between her and the product.

Watch: What does responsible and representative AI mean? I've talked about my team, but that's only one definition. So for us, it means taking deliberate steps to ensure that the advanced technologies that we develop and deploy lead to a positive impact on individuals and society more broadly. It means that our AI is built with and for everyone. We can't just assume noble goals and good intent prevent or solve ethical issues. Instead, we need to deliberately build teams and build structures that hold us accountable to more ethical outcomes, which for us, the ethical outcomes at Google, would be defined as our AI principles, which I discussed earlier.

Now,

it's easy to point and laugh at imbeciles like this and the products that Google has created. On some level, it's genuinely hilarious that an AI product can be so useless that it can't generate images of white people, even white historical figures. It's also amusing, in a way, that Gemini is so unsettling and ham-fisted that it straight up refuses to answer questions about, for example, atrocities committed by Communist governments, or, as someone else asked, about the Zoom exploits of CNN commentator Jeffrey Toobin. It wouldn't answer that question.

But the truth remains that the people behind Gemini have extraordinary power. I mean, this debacle makes it very clear that the AI algorithms underlying products that millions of people actually use, like Google, are completely unreliable, and worse. In fact, they're deliberately lying to us. They're downranking unapproved viewpoints and disfavored racial groups, and they're promoting the laziest possible brand of neo-Marxist ideology at every opportunity. And they're doing it also to influence the next presidential election, by the way. You might remember that after Donald Trump won in 2016, Breitbart posted leaked footage of Google executives grieving during an all-hands meeting. Let's watch

that again: I certainly find the election deeply offensive, and I know many of you do too. It did feel like a ton of bricks dropped on my chest. What we all need right now is a hug. Can I move to Canada? Is there anything positive you see from this election result? Uh, boy, that's a really tough one right now.

Now, in other parts

of the video, they go on to say that the election is the result of the people voting, and that they accept the results. But Google issued a statement about the video, saying: nothing was said at that meeting, or any other meeting, to suggest that any political bias ever influences the way we build or operate our products. To the contrary, our products are built for everyone.

Sure it is. "I find this election deeply offensive. We all need a hug." We're told it was at this moment that Google decided that downranking conservative websites wasn't enough. In order to really influence elections, they decided they needed to develop an AI that will force-feed DEI and anti-white racism on everyone at every opportunity. Their only mistake, which is the same mistake they made in that video back in 2016, is that they were too obvious about their intentions. And now everybody knows exactly where Google stands. We have a pretty good idea what our future AI-driven dystopia will look like, or already does look like.

Thanks for checking out this video. If you'd like to listen to my full podcast on the go, you can check out The Matt Walsh Show on Apple Podcasts, Spotify, or wherever you get your podcasts.