Obama on AI, free speech, and the future of the internet

Decoder - The Verge
7 Nov 2023 · 44:27

Summary

TL;DR: In a detailed conversation on Decoder, hosted at the Obama Foundation, former President Barack Obama discusses the challenges and potential of artificial intelligence (AI) in light of President Biden's sweeping executive order on AI. Obama shares insights into the transformative nature of AI, emphasizing the importance of intentionality in democracy's interaction with technology. Reflecting on lessons from social media's impact and the need for flexible, transparent governance, Obama stresses the significance of balancing innovation with safeguarding the public good. He advocates for collaborative efforts across sectors to navigate AI's rapid development, underscoring the necessity for ethical frameworks and societal involvement in shaping technology's future.

Takeaways

  • 😀 President Barack Obama discusses the transformative potential and risks of artificial intelligence (AI), emphasizing the need for intentional government interaction with technology primarily generated by the private sector.
  • 🙌 The conversation took place in the context of President Biden signing an extensive executive order on AI, aiming to regulate its development and application, including safety regulations and transparency measures like red teaming and watermarking.
  • 📚 Obama reflects on the lessons learned from the rapid transformation of the media landscape by social media, highlighting the importance of preparing for unintended consequences in technological innovations.
  • 📰 The former president points out the fast-paced development of AI technologies, surpassing expectations and necessitating a flexible and nimble regulatory framework to maximize benefits while minimizing harm.
  • 🛠 Discussing tech regulation, Obama notes the failure to effectively regulate social media over the past 15 years and emphasizes the need for a new regulatory framework for AI that is adaptive to its rapid evolution.
  • 💻 Obama supports the idea of bringing in young, talented individuals from the tech industry to work in government and research institutes to contribute to shaping AI regulation and understanding.
  • 📈 He addresses concerns over AI's impact on copyright and the broader economy, stressing the need to reevaluate jobs, education, and economic structures in light of AI's capabilities.
  • 📲 Reflecting on the business models of social media platforms, Obama expresses concern over systems that feed users content reinforcing their existing biases, suggesting the need for platforms that encourage learning and perspective broadening.
  • 📱 Obama shares his personal use of AI tools, treating them as powerful extensions of human capability rather than as companions, and highlights the importance of public education on AI's capabilities and limitations.
  • 📝 As a content creator, Obama acknowledges the challenges and opportunities presented by AI in the realm of intellectual property, advocating for a balance that ensures creators are compensated while fostering innovation.

Q & A

  • What event prompted President Barack Obama's discussion about AI with Nilay?

    -The discussion was prompted by President Biden signing an executive order about AI.

  • How does President Obama describe the executive order on AI?

    -President Obama describes the executive order on AI as sweeping, noting that it is over 100 pages long and contains a lot of ideas, including regulations on biosynthesis with AI and safety regulations.

  • What are some of the specific regulations or measures mentioned in the executive order about AI?

    -The executive order mandates red teaming, transparency, and watermarking among other measures.

  • What is President Obama's general framework for evaluating new technological challenges like AI?

    -Obama's framework focuses on recognizing AI's transformative potential while being intentional about how democracies interact with innovations generated primarily by the private sector. He emphasizes the need to establish rules of the road that maximize benefits and minimize unintended consequences.

  • What was the analogy used by a tech executive to describe the potential impact of AI, as mentioned by Obama?

    -A tech executive compared the potential impact of AI to electricity, highlighting its transformative power.

  • What role does President Obama believe the government should play in the development of AI technologies?

    -Obama believes the government should be aware of AI developments, require transparency from developers, not try to inhibit tech innovation, but instead establish guardrails to anticipate and mitigate risks while guiding technology towards public good.

  • Why do tech leaders, according to Obama, show genuine humility and concern regarding the development of AI technologies?

    -Tech leaders show humility and concern because they recognize the powerful impact and transformative potential of AI technologies, which could lead to significant consequences if not managed properly.

  • What does President Obama identify as a crucial aspect of regulating AI, based on his conversation about the Biden administration's executive order?

    -Obama highlights the importance of starting the process of building out a smart framework for AI regulation, acknowledging that it's the beginning of addressing the complex challenges AI presents.

  • How does Obama's experience and interaction with technology as a former president and an author influence his views on AI and its regulation?

    -His experience as the first digital president, coupled with his background in constitutional law and being an author, provides him a unique perspective on the balance between innovation, free speech, and the necessity of regulations to ensure technology serves the public good.

  • What is Obama's stance on the balance between free speech and regulation in the context of AI and social media?

    -Obama believes in a strong adherence to free speech but also recognizes the need for regulations that protect public health and welfare. He supports a deliberative process where society can agree on principles to guide technology, respecting democracy and the marketplace of ideas.

Outlines

00:00

🤝 Introduction and AI's Impact on Society

The conversation begins with a friendly exchange between Nilay and President Obama at the Obama Foundation. They discuss the transformative impact of AI on society, including potential benefits and risks. Obama shares his thoughts on the need for a framework to address the challenges posed by AI, reflecting on lessons learned from the media landscape's transformation and the importance of setting up rules for the interaction between democracies and technology generated by the private sector.

05:01

📜 Tech Regulation and Social Media

The discussion shifts to the challenges of tech regulation, particularly focusing on social media. Obama highlights his experience as the 'first digital president' and the role of social networks in his election. The conversation delves into the failure to regulate social media effectively and the need for a comprehensive privacy bill. Obama and Nilay explore the application of existing media regulation frameworks to social networks and the unique challenges posed by AI, including the lack of a framework for AI's decision-making and potential risks.

10:03

🧠 AI's Transformative Power and Government's Role

Obama emphasizes AI's transformative potential, comparing it to electricity and discussing the need for a flexible regulatory framework that can adapt to AI's rapid development. He talks about the importance of transparency, safety protocols, and the involvement of smart individuals from various sectors in understanding and guiding AI's trajectory. The conversation touches on the Biden administration's executive order as a starting point for building a framework for AI regulation.

15:05

💡 Balancing Innovation and Regulation

The dialogue explores the tension between innovation and regulation, with Obama arguing for smart regulations that promote safety and public trust without stifling progress. He uses the example of airbags in cars to illustrate how regulations can lead to innovation and better products. Obama also addresses the tech community's perception of regulation and the need for a public conversation around AI's role in society.

20:06

🗣️ Deepfakes, Free Speech, and AI's Ethical Challenges

Obama and Nilay discuss the ethical challenges posed by deepfakes and AI-generated content, particularly in the context of free speech. They debate the balance between government regulation and the First Amendment, considering different rules for public figures and private citizens. The conversation also touches on the role of platforms in content moderation and the societal impact of AI's decision-making processes.

25:07

🌐 International Collaboration on AI Regulation

The conversation highlights the importance of international collaboration in developing safety standards for AI. Obama mentions an upcoming conference in England and the need for cross-border frameworks to manage the global phenomenon of the internet. He emphasizes the Biden administration's efforts to start a conversation on AI and the long-term unfolding of regulations over the next few years.

30:12

🤖 AI as a Tool and Public Education

Obama views AI as a tool rather than a companion and discusses the need for public education on AI's capabilities and limitations. He talks about how AI tools are often designed to anthropomorphize interactions and the importance of not confusing an AI's reflections with consciousness. The conversation concludes with Obama's recommendations for those interested in contributing to the field of AI and his personal experience with technology.

Keywords

💡AI

Artificial Intelligence (AI) is central to the discussion, denoting the development of computer systems able to perform tasks that typically require human intelligence. This includes understanding natural language, recognizing patterns, and making decisions. In the script, Obama emphasizes AI's transformative potential across various sectors, from medicine to energy, while also acknowledging the risks and unintended consequences that come with rapid technological advancement.

💡Executive Order

The Executive Order discussed refers to a directive from President Biden aimed at addressing and regulating AI technologies. This comprehensive document, described as 'sweeping' and extensive, introduces measures like red teaming, transparency, and watermarking to mitigate risks associated with AI. The order is a response to both the promising opportunities and challenges AI presents, emphasizing the need for a balanced approach to technology governance.

💡Regulation

Regulation in this context pertains to the establishment of rules and standards governing AI development and application. The script highlights the necessity of creating guardrails to maximize benefits while minimizing harms. Obama discusses the importance of government involvement in setting these regulations to ensure that AI advancements align with the public good rather than just individual corporate profits.

💡Social Media

Social media is discussed as a cautionary tale of rapid technological advancement without sufficient oversight, illustrating how platforms can have profound societal impacts. Obama reflects on his experiences as the 'first digital president' and the role social media played in political processes. This comparison underscores the urgency of learning from past oversight failures as we confront the challenges posed by AI.

💡Unintended Consequences

This term refers to outcomes that are not foreseen or intended by a purposeful action, especially concerning technology's societal impact. Obama emphasizes that while AI can lead to significant innovations and benefits, it can also result in negative effects not initially anticipated. The script highlights the importance of being proactive and intentional in technology governance to mitigate these unintended consequences.

💡Democracy

Democracy is a recurring theme, with Obama discussing how technologies like AI should be aligned with democratic values and processes. The script stresses the importance of ensuring that AI development and application are transparent, equitable, and subject to public oversight, reflecting the broader goal of maximizing public good and ensuring the technology serves society as a whole.

💡Transparency

Transparency in the context of AI refers to the openness and clarity regarding how AI systems are developed, function, and are deployed. Obama calls for developers of advanced AI systems to be transparent about their safety protocols and testing regimens. This transparency is crucial for public trust, regulatory oversight, and ensuring that AI technologies are used responsibly.

💡Innovation

Innovation is discussed as a driving force behind the development and application of AI, embodying the potential for significant advancements across various sectors. The script balances the enthusiasm for AI's innovative capabilities with caution, underscoring the need for responsible innovation that considers societal impacts and ethical considerations.

💡Privacy

Privacy surfaces directly in the conversation's critique of tech regulation: Nilay notes that, despite 15 years of social media, there is still no comprehensive privacy bill. The broader calls for regulation, transparency, and ethical AI use are tied to the underlying issues of data privacy and the protection of individuals' information in the digital age.

💡Public Good

The public good refers to outcomes that benefit society as a whole, beyond individual or corporate interests. The script emphasizes the role of government and regulation in steering AI development towards maximizing the public good. This includes ensuring AI advancements contribute to societal welfare, address public health and safety concerns, and are aligned with democratic values.

Highlights

President Barack Obama discusses the transformative potential and challenges of AI technology, emphasizing the importance of intentional governance.

Obama highlights the need for transparency and ethical guidelines in AI development to maximize benefits while minimizing harms.

The conversation covers the comparison between the regulation of social media and the emerging field of AI, highlighting past failures and the need for a new regulatory framework.

Obama reflects on the role of AI in accelerating advancements in various fields, including medicine, while stressing the necessity of safety and responsibility.

Discussion on the importance of creating a flexible and nimble regulatory framework for AI that can adapt to rapid technological advancements.

Obama underlines the need for collaboration between government, private sector, and academia in understanding and steering AI development.

The interview touches on the ethical considerations and potential risks of AI, such as the development of harmful applications by non-state actors.

Obama emphasizes the potential of AI to disrupt job markets and the importance of reconsidering education and employment strategies.

The conversation addresses the critical role of copyright and intellectual property laws in the context of AI-generated content.

Obama shares his personal experiences and observations on the use of AI tools, expressing a cautious optimism about their capabilities.

The discussion explores the impact of AI on content generation and distribution, especially in social media, and the need for regulatory interventions.

Obama talks about the importance of public engagement and education in shaping the future of AI and ensuring it serves the common good.

The former president highlights the potential of AI to foster innovation in art and creativity, stressing the irreplaceable value of human expression.

Obama reflects on his own digital legacy as the 'first digital president' and the evolution of digital communication during his presidency.

The interview concludes with Obama encouraging technologists and experts to contribute to public service and help shape the ethical development of AI.

Transcripts

play00:00

- Hello?

play00:00

- [Nilay] Hello, sir.

play00:02

- Nilay, how are you?

play00:03

- [Nilay] Nice to meet you.

play00:04

- Very nice to meet you.

play00:05

It looks like you cleared out my whole office.

play00:06

- Yeah, we got rid of everything.

play00:08

- Man, they're doing some work somewhere.

play00:10

How you been?

play00:11

- I'm doing all right, man.

play00:12

How are you? - I'm doing great.

play00:13

I should have told you, by the way,

play00:14

you didn't have to wear a tie, but you look sharp.

play00:16

You know more about this stuff than I do so.

play00:18

- Well, let's start.

play00:20

(Obama chuckling)

play00:22

President Barack Obama,

play00:23

you're the 44th president of the United States.

play00:25

We're here at the Obama Foundation.

play00:26

Welcome to Decoder.

play00:27

- It is great to be here, thank you for having me.

play00:29

- I am really excited to talk to you.

play00:31

There's a lot to talk about.

play00:33

We are here on the occasion of President Biden

play00:35

signing the executive order about AI.

play00:37

I would describe this order as sweeping.

play00:39

I think it's over 100 pages long.

play00:41

There's a lot of ideas in it.

play00:43

Everything from regulating biosynthesis with AI.

play00:46

There's some safety regulations in there.

play00:48

It mandates something called red teaming,

play00:49

transparency, watermarking.

play00:51

These feel like new challenges, like very new challenges

play00:55

for the government's relationship with technology.

play00:57

I wanna start with a Decoder question.

play00:59

What is your framework for thinking about these challenges

play01:02

and how you evaluate them?

play01:03

- This is something that I've been

play01:04

interested in for a while.

play01:06

So, back in 2015, 2016, as we were watching

play01:13

the landscape transformed by social media

play01:16

and the information revolution

play01:18

impacting every aspect of our lives,

play01:22

I started getting in conversations

play01:24

about artificial intelligence and this next phase,

play01:28

this next wave that might be coming.

play01:29

And I think one of the lessons that we got

play01:33

from the transformation of our media landscape

play01:39

was that incredible innovation, incredible promise,

play01:43

incredible good, can come out of it,

play01:45

but there are a bunch of unintended consequences

play01:48

and that we have to be maybe a little more intentional

play01:51

about how our democracies interact with

play01:57

what is primarily being generated out of the private sector.

play02:02

And you know, what rules of the road are we setting up

play02:05

and how can we make sure that we maximize the good

play02:09

and maybe minimize some of the bad.

play02:11

And so, I commissioned the, you know, my science guy,

play02:16

John Holdren, along with John Podesta,

play02:19

who had been a former Chief of Staff

play02:21

and worked on climate change issues.

play02:25

Let's pull together some experts to figure this out,

play02:27

and we issued a big report in my last year.

play02:32

The interesting thing even then was

play02:37

people felt this was an enormously promising technology,

play02:40

but, you know, we may be overhyping

play02:43

how quick it's gonna come.

play02:45

And as we've seen just in the last year or two,

play02:48

even those who are developing these large language models

play02:52

who are, you know, in the weeds with these programs,

play02:56

are starting to realize this thing is moving faster

play02:59

and is potentially even more powerful

play03:02

than we originally imagined.

play03:05

So, my framework and in conversations

play03:08

with government officials, private sector academics,

play03:14

that the framework I emerge from is

play03:16

that this is going to be a transformative technology.

play03:19

It's already in all kinds of small ways,

play03:24

but very broadly, changing the shape of our economy.

play03:29

In some ways, even our search engines,

play03:32

you know, basic stuff that we take for granted

play03:34

is already operating under some AI principles,

play03:36

but this is gonna be turbocharged.

play03:39

It's gonna impact how we make stuff,

play03:41

how we deliver services, how we get information,

play03:44

and the potential for us

play03:47

to have enormous medical breakthroughs.

play03:51

The potential for us to be able to provide

play03:53

individualized tutoring for kids in remote areas.

play03:57

The potential for us to solve some of our energy challenges

play04:02

and deal with greenhouse gases

play04:05

that this could unlock amazing innovation,

play04:10

but that it can also do some harm.

play04:13

Yeah, we can end up with powerful AI models

play04:16

in the hands of somebody in a basement

play04:18

who develops a new smallpox variant

play04:21

or, you know, non-state actors, who suddenly,

play04:25

because of a powerful AI tool,

play04:28

can hack into critical infrastructure

play04:31

or maybe less dramatically,

play04:34

AI, you know, infiltrating the lives of our children

play04:39

in ways that we didn't intend,

play04:42

in some cases, the way social media has.

play04:44

So, what that means then is, is that I think the government,

play04:50

as an expression of our democracy,

play04:53

needs to be aware of what's going on.

play04:57

Those who are developing these frontier systems

play04:59

need to be transparent.

play05:01

I don't believe that we should

play05:05

try to put the genie back in the bottle and be anti-tech,

play05:09

because of all the enormous potential,

play05:12

but I think we should put some guardrails

play05:14

around some risks that we can anticipate

play05:18

and have enough flexibility

play05:20

that it doesn't destroy innovation,

play05:24

but also, is guiding and steering this technology

play05:29

in a way that maximizes not just individual company profits,

play05:36

but also the public good.

play05:37

- So, lemme make the comparison for you.

play05:38

I would say that the problem in tech regulation

play05:41

for the past 15 years has been social media.

play05:45

How do we regulate social media?

play05:46

How do we get more good stuff, less bad stuff?

play05:48

Make sure that really bad stuff is illegal.

play05:50

You came to the presidency on the back of social media.

play05:54

- I was the first digital president.

play05:56

- You had a Blackberry, I remember,

play05:57

people were very excited about your Blackberry.

play05:59

I wrote a story about your iPad that was transformative.

play06:02

That young people are gonna

play06:03

take to the political environment.

play06:05

They're gonna use these tools,

play06:06

we're gonna change America with it.

play06:07

- You can make an argument.

play06:08

I wouldn't have been elected

play06:09

had it not been for social networks.

play06:11

- Now, we're on the other side of that.

play06:12

There was another guy who got elected

play06:14

on the back of social networks.

play06:15

There was another movement in America

play06:16

that has been very negative on the back of that election.

play06:19

We have basically failed to regulate social networks.

play06:22

I'd say there's no comprehensive privacy bill even.

play06:25

There was already a framework

play06:26

for regulating media in this country.

play06:28

We could apply a lot of what we knew

play06:30

about should we have good media to social networks.

play06:32

There are some First Amendment questions in there,

play06:34

what have you, the important ones,

play06:36

but there was an existing framework.

play06:39

With AI, it's, we're gonna tell computers to do stuff

play06:42

and they're gonna go do it.

play06:43

- Right, we hope.

play06:44

- That we have no framework for that.

play06:47

- We hope they do what...

play06:48

- We hope, right?

play06:49

- Think we're telling them to do.

play06:51

- We also ask computers a question.

play06:52

They might just confidently lie to us

play06:54

or help us lie at scale.

play06:56

There is no framework for that.

play06:58

What do you think you can pull from the sort of

play07:00

failure to regulate social media into this new environment

play07:04

such that we get it right this time,

play07:05

or do anything at all?

play07:06

- Well, this is part of the reason why I think

play07:10

what the Biden administration did today

play07:12

in putting out the EO,

play07:13

The work they've done is so important.

play07:16

Not because it's the end point,

play07:19

but because it's really the beginning

play07:20

of building out a framework.

play07:22

And when you mentioned how this executive order

play07:28

has a bunch of different stuff in it,

play07:32

what that reflects is we don't know all the problems

play07:38

that are gonna arise out of this.

play07:39

We don't know all the, you know,

play07:43

promising potential of AI,

play07:47

but we're starting to put together sort of

play07:51

the foundations for what we hope will be

play07:54

a smart framework for dealing with it.

play07:56

And in some cases, what AI is gonna do

play08:00

is to accelerate advances in, let's say medicine.

play08:08

You know, we've already seen, for example,

play08:12

with, you know, things like protein folding

play08:16

and the breakthroughs that can take place

play08:18

that would not have happened

play08:20

had it not been for some of these AI tools.

play08:22

And, you know, we wanna make sure that that's done safely.

play08:26

We wanna make sure that it's, you know, done responsibly.

play08:30

And it may be that we already have some laws in place

play08:34

that can manage that.

play08:37

There may be some novel developments in AI

play08:43

where an existing agency, an existing law just doesn't work.

play08:47

You know, if we're dealing with the alignment problem

play08:50

and we wanna make sure that

play08:51

some of these large language models

play08:54

where even the developers aren't entirely confident

play08:58

about what these models are doing,

play09:01

what the computer's thinking or doing.

play09:06

Well, in that case, we're gonna have to figure out

play09:09

what are the red teaming, what are the testing regimens?

play09:12

And in talking to the companies themselves,

play09:15

they will acknowledge that their safety protocols

play09:18

and their testing regimens, et cetera,

play09:20

may not be where they need to be yet.

play09:24

And I think it's entirely appropriate then for us

play09:26

to plant a flag and say,

play09:27

"All right, frontier companies,

play09:30

you need to disclose what your safety protocols are

play09:34

to make sure that we don't have rogue programs going off

play09:38

and hacking in our financial system, for example.

play09:45

Tell us what tests you're using.

play09:48

Make sure that we have some independent verification

play09:50

that right now this stuff is working."

play09:54

But that framework can't be a fixed framework,

play09:59

because these models are developing so quickly

play10:03

that oversight and any regulatory framework

play10:08

is gonna have to be flexible

play10:09

and it's gonna have to be nimble.

play10:10

And it's gonna, and by the way, it's also gonna require

play10:15

some really smart people who understand

play10:18

how these programs and these models are working

play10:22

not just in the companies themselves,

play10:24

but also in the nonprofit sector and in government.

play10:28

Which is why I was glad to see

play10:30

that the Biden administration part of the executive order

play10:35

is specifically calling on a bunch of, you know,

play10:40

hotshot young people who are interested in AI

play10:43

to do a stint outside of the companies themselves

play10:47

and, you know, go work for government for a while.

play10:51

Go work, you know, with some of the research institutes

play10:55

that are popping up in places like the Harvard Lab,

play10:58

or the Stanford AI Center, and some other non-profits.

play11:03

Because we're going to need to make sure that

play11:09

everybody can have confidence

play11:12

that whatever journey we're on here with AI,

play11:15

that it's not just being driven by a few people

play11:19

without any kind of interaction

play11:22

or voice from ordinary folks, regular people

play11:27

who are gonna be using-

play11:28

- Well, there's a difference there.

play11:29

- these products and impacted by these products.

play11:30

- There's ordinary folks

play11:31

and there's the people who are building it

play11:32

who need to go help write regulations.

play11:34

And there's a split there.

play11:35

The conventional wisdom in the valley for years

play11:39

is the government is too slow,

play11:41

it doesn't understand technology.

play11:43

And by the time it actually writes a functional rule,

play11:45

the technology it was aiming to regulate will be obsolete.

play11:49

This is markedly different, right?

play11:50

The AI doomers are the ones asking for regulation the most.

play11:54

The big companies have asked for regulation.

play11:57

Sam Altman has toured the capitals of the world

play11:59

politely asking to be regulated.

play12:01

Why do you think there's such a fervor for that regulation?

play12:04

Is it just incumbents wanting to cement their position?

play12:06

- Well, I look, you're raising an important point,

play12:09

which is, and rightly there's some suspicion,

play12:13

I think among some people that,

play12:17

yeah, these companies want regulation,

play12:19

because they wanna lock out competition.

play12:22

And as you know, historically,

play12:27

sort of a central principle of tech culture

play12:30

has been open source.

play12:31

We want everything out there.

play12:33

Everybody's, you know, able to play with models

play12:38

and applications and create new products,

play12:42

and that's how innovation happens.

play12:44

Here, regulation starts looking like,

play12:48

well, maybe we start having closed systems

play12:50

and, you know, the big frontier companies,

play12:53

the Microsofts, the Googles, the OpenAIs, the Anthropics,

play12:56

that they're gonna somehow lock us out.

play13:00

But in my conversations with the tech leaders on this,

play13:07

I think there is for the first time some genuine humility,

play13:13

because they are seeing the power

play13:17

that these models may have.

play13:19

I talked to one executive,

play13:22

and look, there's no shortage of

play13:26

hyperbole in the tech world, right?

play13:29

But this is a pretty sober guy, like an adult-

play13:32

- Now, I have to guess who it is.

play13:34

- who's seen a bunch of these cycles

play13:35

and been through boom and bust.

play13:37

And I asked him, I said,

play13:40

"Well, when you say this technology

play13:42

you think is gonna be transformative,

play13:44

give me sort of some analogy."

play13:46

He said, "You know, I sat with my team

play13:48

and we talked about it.

play13:50

And after going around and around what we decided was

play13:52

maybe the best analogy was electricity."

play13:56

Then I thought, well, yeah, electricity.

play13:58

That was a pretty big deal.

play13:59

- [Nilay] Yeah.

play14:00

- And if that's the case, I think what they recognize is

play14:04

that it's in their own commercial self-interest

play14:09

that there's not some big screw up on this.

play14:12

That if in fact it is as transformative

play14:15

as they expect it to be,

play14:17

then having some rules, some protections,

play14:21

that create a competitive field,

play14:24

allow everybody to participate, come up with new products,

play14:27

compete on price, compete on functionality,

play14:30

but you know, that none of us are taking such big risks-

play14:35

- Yeah, there's a view in the valley.

play14:36

- that the whole thing blows up in our faces.

play14:40

I do think that there is sincere concern that,

play14:44

if we just have an unfettered race to the bottom,

play14:47

that this could end up, you know, choking off the goose

play14:50

that might be laying a bunch of golden eggs.

play14:52

- There is the view in the valley though,

play14:53

that any constraint on technology is bad.

play14:56

- Yeah, and I disagree with that.

play14:57

- Any caution, any principle where you might slow down

play15:00

is the enemy of progress.

play15:02

And the net good is better,

play15:03

if we just race ahead as fast as possible.

play15:04

- In fairness, that's not just in the valley,

play15:06

that's in every business I know.

play15:10

It's not like Wall Street loves regulation.

play15:12

It's not as if manufacturers are really keen

play15:15

for government to micromanage how they produce goods.

play15:20

But one of the things that we've learned,

play15:25

you know, through the industrial age,

play15:27

and the information age, you know, over the last century,

play15:33

is that you can overregulate,

play15:37

you can over-bureaucratize things,

play15:42

but that if you have smart regulations

play15:45

that set some basic goals and standards,

play15:48

making sure you're not creating products

play15:51

that are unsafe to consumers,

play15:52

making sure that if you're, you know, selling food,

play15:56

you know, people who go in the grocery store

play15:59

can trust that they're not gonna die

play16:00

from salmonella or E. coli.

play16:03

Making sure that if somebody buys a car

play16:07

that you know, the brakes work, making sure that,

play16:13

you know, if I take my electric whatever

play16:18

and I plug it into a socket anywhere,

play16:21

any place in the country,

play16:23

that it's not gonna shock me and blow up on my face.

play16:26

It turns out all those various rules, standards,

play16:30

actually create marketplaces and are good for business,

play16:33

and innovation then develops around those rules.

play16:38

So, it's not an argument that I think

play16:41

part of what happens in the tech community

play16:44

is the sense that we're smarter than everybody else,

play16:49

and these people slowing us down

play16:51

are impeding rapid progress.

play16:55

And you know, when you look at the history of innovation,

play16:58

it turns out that having some smart guideposts

play17:02

around which innovation takes place,

play17:06

not only doesn't slow things down,

play17:08

in some cases, it actually raises standards

play17:10

and accelerates progress.

play17:12

There were a bunch of folks who said,

play17:13

"Look, you know, you're gonna kill the automobile,

play17:17

if you put airbags in there."

play17:19

Well, it turns out actually people figured out,

play17:22

you know what, we can actually put airbags in there

play17:24

and make 'em safer, and over time the costs go down.

play17:29

- Everybody's better off.

play17:30

- There's a great TikTok of somebody reacting

play17:33

to drunk driving laws in the eighties.

play17:33

It's great, I'll send it to you.

play17:35

There's a really difficult part in the EO about provenance.

play17:39

Watermarking content, making sure

play17:41

people can see it's AI generated.

play17:42

You are among the most deep faked-

play17:45

- Oh absolutely.

play17:46

- people in the world.

play17:47

- Well, because what I realized is

play17:48

when I left office, I'd probably been filmed

play17:52

and recorded more than any human in history,

play17:54

just 'cause I happened to be the first president

play17:56

when the smartphone came out.

play17:59

- I'm assuming you have some very deep personal feelings

play18:02

about being deep faked in this way.

play18:03

There's a big First Amendment issue here, right?

play18:06

I can use Photoshop one way,

play18:08

and the government doesn't say I have to put a label on it.

play18:10

I use it a slightly different way.

play18:12

The government's gonna show up and tell Adobe,

play18:13

you've gotta put a label on this.

play18:15

How do you square that circle?

play18:18

It seems very challenging to me.

play18:20

- Look, I think this is gonna be an iterative process.

play18:22

I don't think you're gonna be able

play18:24

to create a blanket rule.

play18:25

But the truth is, that's been how

play18:31

our governance of information, media, speech,

play18:35

that's how it's developed for a couple hundred years now.

play18:39

With each new technology, we have to adapt

play18:42

and figure out some new rules of the road.

play18:45

So, let's take my example, a deep fake of me,

play18:51

that is used for political satire

play18:53

or just to, you know, somebody doesn't like me

play18:55

and they wanna deep fake me.

play18:58

I was the President of the United States,

play18:59

and there are some pretty formidable rules

play19:04

that have been set up to protect people

play19:06

making fun of public figures.

play19:09

I'm a public figure,

play19:10

and what you are doing to me as a public figure

play19:14

is different than what you do to a 13-year-old girl

play19:20

in high school, a freshman in high school.

play19:24

And so, we're gonna treat that differently.

play19:27

And that's okay.

play19:28

We should have different rules for public figures

play19:30

than we do for private citizens.

play19:32

We should have different rules for

play19:35

what is clearly sort of political commentary and satire

play19:40

versus cyber bullying, or...

play19:42

- Where do you think those rules land?

play19:43

Do they land on individuals?

play19:45

Do they land on the people making the tools

play19:48

like Adobe or Google?

play19:50

Do they land on the distribution networks like Facebook?

play19:53

- My suspicion is how responsibility is allocated,

play19:56

we're gonna have to sort out.

play20:00

But I think the key thing to understand is,

play20:03

and look, I taught constitutional law.

play20:06

I'm close to a First Amendment absolutist in the sense that

play20:10

I generally don't believe that,

play20:15

you know, even offensive speech, mean speech, et cetera,

play20:20

it should certainly not be regulated by the government.

play20:23

And I'm even game to argue

play20:28

that on social media platforms, et cetera,

play20:31

that the default position should be

play20:33

free speech rather than censorship.

play20:36

I agree with all that.

play20:38

But keep in mind, we've never had

play20:41

completely free speech, right?

play20:43

We have laws against child pornography.

play20:45

We have laws against, you know, human trafficking.

play20:52

We have laws against certain kinds of speech

play20:57

that we deem to be really harmful

play21:00

to the public health and welfare.

play21:04

And the courts, when they evaluate that, they say,

play21:08

hmm, you know, they come up with

play21:10

a whole bunch of time, place, manner restrictions

play21:13

that may be acceptable in some cases

play21:15

aren't acceptable in others.

play21:17

You get a bunch of case law that develops.

play21:19

There's arguments about it in the public square.

play21:22

We may disagree.

play21:23

Should Nazis be able to protest in Skokie?

play21:26

Well, you know, that's a tough one.

play21:28

But, you know, we can figure this out.

play21:31

And that I think is how this is gonna develop.

play21:34

I do believe that the platforms themselves

play21:41

are more than just common carriers like the phone company.

play21:46

They're not passive.

play21:48

There's always some content moderation taking place.

play21:53

And so, you know, once that line has been crossed,

play21:58

it's perfectly reasonable for the broader society to say,

play22:01

"Well, we don't wanna just leave

play22:04

that entirely to a private company.

play22:07

I think we need to at least know

play22:09

how you're making those decisions,

play22:11

what things you might be amplifying through your algorithm

play22:14

and what things you aren't.

play22:16

And you know, it may be that

play22:20

what you're doing isn't illegal,

play22:21

but we should at least be able to know

play22:22

how some of these decisions are made."

play22:25

I think it's gonna be that kind of process that takes place.

play22:29

What I don't agree with is the large tech platform

play22:34

suggesting somehow that we want to be treated

play22:40

entirely as a common carrier.

play22:46

- It's the Clarence Thomas view, right?

play22:48

- Yeah, but on the other hand,

play22:51

we know you're selling advertising based on the idea

play22:54

that you're making a bunch of decisions about your products.

play22:57

- Well, this is very challenging, right?

play22:58

If you say you're a common carrier,

play22:59

then you are in fact regulating them.

play23:01

You're saying you can't make any decisions.

play23:02

- Yes.

play23:03

- You say you are exercising editorial control.

play23:05

They are protected by the First Amendment.

play23:07

- [Obama] Yes.

play23:07

- And then regulations get very, very difficult.

play23:10

It feels like even with AI,

play23:13

when we talk about content generation with AI,

play23:15

or with social networks,

play23:16

we run right into the First Amendment over and over again.

play23:19

And most of our approaches, this is what I worry about,

play23:22

is we try to get around it

play23:24

so we can make some speech regulations

play23:25

without saying we're gonna make some speech regulations.

play23:28

Copyright law is the most effective speech regulation

play23:30

on the internet, because everyone will agree.

play23:32

Okay, Disney owns that, bring it down.

play23:33

- Well, because there's property involved.

play23:35

There's money involved.

play23:36

- There's money, maybe less property than money,

play23:39

but there's definitely money.

play23:40

- Well, IP and hence money, yeah.

play23:43

Well, look, here's my general view.

play23:46

- Yeah, but do you worry

play23:47

that we're making fake speech regulations

play23:49

without actually talking about the balance of equities

play23:51

that you're describing here?

play23:52

- I think that we need to have,

play23:56

and AI I think is gonna force this,

play23:59

that we need to have a much more robust

play24:05

public conversation around these rules

play24:08

and agree to some broad principles to guide us.

play24:14

And the problem is, right now, let's face it,

play24:17

it's gotten so caught up in partisanship,

play24:21

partly because of the last election,

play24:24

partly because of Covid and vax and anti-vax proponents

play24:29

that we've lost sight of our ability

play24:32

to just come up with some principles

play24:35

that don't advantage one party or another,

play24:37

or one position, or another,

play24:39

but do reflect our broad adherence to democracy.

play24:43

But the point I guess I'm emphasizing here is

play24:48

this is not the first time we've had to do this.

play24:50

We had to do this when radio emerged.

play24:53

We had to do this when television emerged.

play24:55

And, you know, it was easier to do back then

play24:58

in part because you had three or five companies

play25:02

or you, you know, the public

play25:04

through the government technically owned the airwaves.

play25:06

And so, you could make these.

play25:08

- No, this is the square on my bingo card.

play25:09

If I could get to the Red Lion case with you,

play25:11

I've won, right?

play25:13

There was a framework here that said

play25:14

the government owns the airwaves,

play25:16

it's gonna allocate them to people in some way,

play25:19

and we can make some decisions.

play25:20

And that is an effective and appropriate speech regulation.

play25:22

- [Obama] That was the hook.

play25:22

- Can you bring that to the internet?

play25:24

- I think you have to find a different kind of hook.

play25:27

- Sure.

play25:28

- But ultimately, even though the idea

play25:30

that the public and the government owned the airwaves,

play25:35

that was really just another way of saying

play25:39

this affects everybody,

play25:41

and so, we should all have a say in how this operates.

play25:44

And we believe in capitalism,

play25:46

and we don't mind you making a bunch of money

play25:49

through the innovation and the products that you're creating

play25:52

and the content that you're putting out there,

play25:54

but we wanna have some say in what our kids are watching

play25:59

or how things are being advertised, et cetera.

play26:02

- If you were the President now,

play26:03

- [Obama] Yeah.

play26:04

- And I was with my family last night,

play26:06

and the idea that the Chinese TikTok

play26:09

teaches kids to be scientists and doctors,

play26:12

in our TikTok the algorithm is different.

play26:15

And we should have a regulation like China has

play26:16

that teaches our kids, we don't, it came up.

play26:18

And all the parents around the table said,

play26:20

"Yeah, we're super into that. We should do that."

play26:23

How would you write a rule like that?

play26:24

Is it even possible with our First Amendment?

play26:26

- Well look, for a long time, let's say under television,

play26:30

there were requirements around children's television.

play26:32

It kept on getting watered down to the point

play26:34

where anything qualified as children's television, right?

play26:38

We had a fairness doctrine that made sure

play26:42

that there was some balance

play26:45

in terms of how views were presented.

play26:47

And I'm not arguing, you know,

play26:51

good or bad in either of those things.

play26:54

I'm simply making the point that we've done it before

play26:57

and there was no sense that somehow that was anti-democratic

play27:00

or that it was squashing innovation.

play27:03

It was just an understanding that we live in a democracy.

play27:06

And so, we kind of set up rules so that

play27:12

we think the democracy works better rather than worse,

play27:16

and everybody has some say in it.

play27:18

The idea behind the First Amendment is

play27:23

we're gonna have a marketplace of ideas

play27:26

that these ideas battle themselves out.

play27:28

And ultimately, we can all judge

play27:31

better ideas versus worse ideas.

play27:34

And I deeply believe in that core principle.

play27:38

We are gonna have to adapt to the fact that now

play27:42

there is so much content, there are so few regulators.

play27:47

Everybody can throw up any idea out there,

play27:51

even if it's sexist, racist, violent, et cetera.

play27:57

And that makes it a little bit harder

play28:00

than it was when we only had three TV stations

play28:02

or a handful of radio stations or what have you.

play28:05

But the principle still applies,

play28:07

which is, how do we create a deliberative process

play28:11

where the average citizen

play28:13

can hear a bunch of different viewpoints

play28:16

and then say, "You know what, here's what I agree with.

play28:21

Here's what I don't agree with."

play28:23

And hopefully, through that process, we get better outcomes.

play28:26

- Let me crash the two themes

play28:28

of our conversations together,

play28:29

AI and the social platforms.

play28:31

Meta just had earnings.

play28:33

Mark Zuckerberg was on the earnings call.

play28:35

And he said for our feed apps, Instagram, Facebook, Threads,

play28:40

"For the feed apps, I think that over time,

play28:43

more of the content that people consume

play28:45

is either going to be generated or edited by AI."

play28:49

So, he envisions a world in which social networks

play28:51

are showing people perhaps exactly what they wanna see-

play28:53

- Absolutely.

play28:54

- inside of their preferences.

play28:55

Much like advertising that keeps 'em engaged.

play28:58

Should we regulate that away?

play28:59

Should we tell 'em to stop?

play29:00

Should we embrace this as a way to show people

play29:03

more content that they're willing to see

play29:05

that might expand their worldview?

play29:07

- This is something I've been wrestling with for a while.

play29:08

I give a speech about misinformation

play29:12

in our information silos at Stanford last year.

play29:16

I am concerned about business models

play29:23

that just feed people exactly

play29:30

what they already believe and agree with,

play29:33

and all designed to sell them stuff.

play29:38

Do I think that's great for democracy? No.

play29:42

Do I think that that's something

play29:47

that the government itself can regulate?

play29:49

I'm skeptical that you can come up

play29:52

with perfect regulations there.

play29:54

What I actually think probably needs to happen though

play29:58

is that we need to think about different platforms

play30:04

and different models, different business models,

play30:12

so that it may be that I'm perfectly happy

play30:15

to have AI mediate how I buy jeans online, right?

play30:22

That could be very efficient.

play30:24

I'm perfectly happy with it.

play30:25

If, you know, and so, if it's a shopping app,

play30:32

or a thread, fine.

play30:35

When we're talking about political discourse,

play30:37

when we're talking about culture, et cetera,

play30:39

can we create other places for people to go

play30:43

that broaden their perspective?

play30:46

Make them curious about

play30:51

how other people are seeing the world

play30:53

where they actually learn something

play30:54

as opposed to just reinforce their existing biases.

play30:57

But I don't think that's something

play30:59

that government is going to be able to sort of legislate.

play31:05

I think that's something that consumers

play31:10

and interacting with companies

play31:12

are gonna have to discover and find alternatives.

play31:16

The interesting thing, look,

play31:18

I'm not obviously 12 years old,

play31:21

I didn't grow up, you know,

play31:24

with my thumbs on these screens.

play31:26

So, I'm an old-ass, you know, 62-year-old guy,

play31:30

that sometimes can't really work all the apps on my phone,

play31:34

but I do have two daughters who are in their twenties.

play31:41

And it's interesting the degree to which

play31:43

at a certain point they have found

play31:46

almost every, you know, app, social media app, thread

play31:55

getting kind of boring after a while.

play31:57

It gets old, precisely because all it's doing

play32:00

is telling 'em what you already know,

play32:02

or what the program thinks you want to know,

play32:05

or what you want to see.

play32:07

So, you're not surprised anymore.

play32:08

You're not discovering anything anymore.

play32:10

You're not learning anymore.

play32:11

And so, I think there's a promise

play32:15

to how we can, there's a market, let's put it that way.

play32:19

I think there's a market for products

play32:22

that don't just do that.

play32:25

It's the same reason why, you know,

play32:28

people have asked me around AI,

play32:31

you know, are there gonna still be artists around

play32:34

and singers and actors, or is it all gonna be,

play32:39

you know, computer-generated stuff?

play32:41

And my answer is, you know, for elevator music,

play32:46

AI's gonna work fine, you know, for...

play32:48

- A bunch of elevator musicians just freaked out, dude.

play32:51

- You know, for the average,

play32:57

you know, even legal brief,

play33:00

or let's say a research memo in a law firm.

play33:03

AI can probably do as good a job

play33:05

as a second year law associate.

play33:07

- Certainly, as good a job as I ever did.

play33:08

- Exactly, but you know,

play33:11

Bob Dylan or Stevie Wonder...

play33:14

- [Nilay] Lemme ask, there's one thing.

play33:15

- That, that is different.

play33:16

And the reason is because part of the human experience,

play33:20

part of the human genius is it's almost a mutation.

play33:23

It's not predictable, it's messy, it's new,

play33:26

it's different, it's rough, it's weird.

play33:29

That is the stuff that ultimately,

play33:33

taps into something deeper in us.

play33:35

And I think there's gonna be a market for that.

play33:40

- So you, in addition to being the former president,

play33:42

you are a bestselling author,

play33:44

you have a production company with your wife,

play33:46

you're in the IP business,

play33:48

which is why you think it's property.

play33:49

It's good, I appreciate that.

play33:52

The thing that will stop AI in its tracks in this moment

play33:55

is copyright lawsuits, right?

play33:57

You ask a generative AI model

play33:58

to spit out a Barack Obama speech

play34:01

and it will do it to some level of passability.

play34:05

Probably C plus, that's my estimation, C plus.

play34:07

- It'd be one of my worst speeches.

play34:08

But it might sound sort of.

play34:10

You fire a cannon of C plus content

play34:12

in any business model of the internet, you upend it.

play34:15

But there are a lot of authors, musicians now.

play34:18

Artists suing the companies saying,

play34:20

"This is not fair use to train on our data

play34:22

to just ingest all of it."

play34:24

Where do you stand on that?

play34:25

Do you think that, as an author,

play34:27

do you think it's appropriate for them

play34:28

to ingest this much content?

play34:29

- Set me aside for a second,

play34:30

'cause you know, Michelle and I,

play34:34

we've already sold a lot of books and we're doing fine.

play34:36

So, I'm not overly stressed about it personally,

play34:39

but what I do think President Biden's

play34:47

executive order speaks to,

play34:49

but there's a lot more work that has to be done on this,

play34:52

and copyright is just one element of this.

play34:57

If AI turns out to be as pervasive

play35:01

and as powerful as its proponents expect,

play35:06

and I have to say, the more I look into it,

play35:08

I think it is going to be that disruptive,

play35:11

we are going to have to think about

play35:15

not just intellectual property,

play35:16

we're gonna have to think about jobs

play35:18

and the economy differently.

play35:20

And not all these problems

play35:22

are going to be solved inside of industry.

play35:26

So, what do I mean by that?

play35:29

I think with respect to copyright law,

play35:34

you will see people with legitimate claims,

play35:41

financing the lawsuits and litigation,

play35:43

and through the courts

play35:46

and various other regulatory mechanisms,

play35:50

you know, people who are creating content,

play35:51

they're gonna figure out ways to get paid

play35:53

and to protect the stuff they create.

play35:58

And it may impede the development

play36:02

of large language models for a while,

play36:03

but over the long term,

play36:06

I think that'll just be a speed bump.

play36:09

The broader question is gonna be

play36:13

what happens when 10% of existing jobs

play36:20

now definitively can be done better by

play36:24

some large language model or other variant of AI?

play36:30

And are we gonna have to reexamine how we educate our kids

play36:41

and what jobs are gonna be available?

play36:43

And you know, the truth of the matter is

play36:47

that during my presidency,

play36:50

there was, I think, a little bit of naivete

play36:53

where people would say, you know,

play36:55

the answer to lifting people out of poverty

play36:58

and making sure they have high enough wages

play37:00

is we're gonna retrain them and we're gonna educate them

play37:02

and they should all become coders,

play37:04

'cause that's the future.

play37:05

Well, if AI is coding better than all

play37:09

but the very best coders,

play37:11

if ChatGPT can generate a research memo

play37:15

better than the third- or fourth-year associate,

play37:18

maybe not the partner, you know,

play37:20

who's got a particular expertise or judgment,

play37:23

you know, now what are you telling young people coming up?

play37:28

And I think we're gonna have to start having conversations

play37:32

about how we pay those jobs that can't be done by AI.

play37:40

How do we pay those better, you know,

play37:44

healthcare, nursing, you know, teaching, childcare, art,

play37:52

things that are really important to our lives,

play37:54

but maybe commercially, historically have not paid as well.

play37:58

Are we gonna have to think about

play38:00

the length of the work week and how we share jobs?

play38:04

Are we gonna have to think about

play38:05

the fact that more people

play38:10

choose to operate like independent contractors,

play38:16

but where are they getting their healthcare from

play38:18

and where are they getting their retirement from, right?

play38:22

Those are the kinds of conversations

play38:24

that I think we're gonna have to start having to deal with.

play38:27

And that's why I'm glad that, you know,

play38:31

President Biden's EO begins that conversation.

play38:34

Again, I can't emphasize enough,

play38:36

because I think you'll see some people saying,

play38:38

"Well, we still don't have tough regulations.

play38:41

Where's the teeth in this?

play38:42

We're not forcing these big companies to do X, Y, Z

play38:45

as quickly as we should."

play38:48

That, I think, this administration understands,

play38:51

and I've certainly emphasized it in conversations with them.

play38:54

This is just the start.

play38:56

And you know, this is gonna unfold

play38:58

over the next 2, 3, 4, 5 years.

play39:01

And by the way, it's gonna be unfolding internationally.

play39:04

You know, there's gonna be a conference this week

play39:07

in England around international safety standards on AI.

play39:15

Yeah, Vice President Harris is gonna be attending.

play39:19

I think that's a good thing,

play39:21

because part of the challenge here is

play39:25

we're gonna have to have some cross-border frameworks

play39:28

and regulations and standards and norms.

play39:31

You know, that's part of what makes this different

play39:35

and harder to manage than, you know,

play39:38

the advent of radio and television.

play39:40

Because the internet by definition

play39:42

is a worldwide phenomenon.

play39:46

- Yeah, you said you were the first digital president.

play39:49

I gotta ask, have you used these tools?

play39:51

Have you had the aha moment

play39:52

where the computer's talking to you?

play39:53

Have you generated a picture of yourself?

play39:54

- I have used some of these tools during the course of,

play39:57

you know, these conversations and this research.

play40:01

And you know, it's fun.

play40:02

- Has Bing flirted with you yet?

play40:03

It flirts with everybody, I hear.

play40:05

(Obama laughing)

play40:06

- Bing didn't flirt with me,

play40:07

but you know, the way they're designed,

play40:10

and I've actually raised this

play40:11

with some of the designers.

play40:16

You know, in some cases,

play40:18

they're designed to anthropomorphize,

play40:21

to make it feel like you are talking to a human, right?

play40:25

It's like, you know, can we pass the Turing test, right?

play40:30

That's a specific objective,

play40:32

'cause it makes it seem more magical.

play40:34

And in some cases it improves function,

play40:36

but in some cases it just makes it cooler.

play40:38

And so, there's a little pizazz there

play40:40

and people are interested in it.

play40:42

I have to tell you that, generally speaking though,

play40:46

the way I think about AI is as a tool, not a buddy.

play40:52

And I think part of what we're gonna need to do

play40:55

as these models get more powerful,

play41:00

and this is where I do think government can help,

play41:02

is also just educating the public

play41:04

on what these models can do and what they can't do.

play41:09

That, you know, these are really

play41:14

powerful extensions of yourself and tools,

play41:19

but also reflections of yourself.

play41:20

And so, don't get confused and think that

play41:25

somehow what you're seeing in the mirror

play41:27

is, you know, some other consciousness.

play41:32

A lot of times this is just feeding back to you.

play41:34

- You just want Bing to flirt with you.

play41:36

This is what I felt personally very, very deeply.

play41:38

- Yeah.

play41:39

- All right, last question.

play41:40

I need to know this.

play41:41

It's very important to me.

play41:42

What are the four apps in your iPhone dock?

play41:45

- Four apps at the bottom.

play41:47

I've got Safari.

play41:48

- Key.

play41:50

- I've got my texts, you know, the green box.

play41:56

- Yeah, you're a blue bubble.

play41:57

Do you give people any crap for being a green bubble?

play42:00

- No, I'm okay.

play42:02

- [Nilay] All right.

play42:04

- I've got my email, and I have my music, that's it.

play42:11

- It's like the stock set, pretty good.

play42:12

- Yeah, you know, if you asked which ones

play42:18

I probably go to more than I should,

play42:22

I might have to put, like, Words With Friends on there

play42:26

where I think I waste a lot of time

play42:27

and maybe my NBA League Pass.

play42:31

- Oh, that's pretty good, that's pretty good.

play42:33

- But you know, I try not to overdo it on those.

play42:38

- League Pass is just one click above the dock.

play42:40

That's what I'm getting outta this.

play42:41

- That's exactly it.

play42:42

- President Obama, thank you so much for being on Decoder.

play42:43

I really appreciate this conversation.

play42:44

- I really enjoyed it.

play42:45

And I wanna emphasize once again,

play42:49

because you've got an audience

play42:51

that understands this stuff, cares about it,

play42:53

is involved in it and working at it,

play42:55

if you are interested in helping to shape

play42:58

all these amazing questions that are gonna be coming up,

play43:01

go to ai.gov and see if there are

play43:03

opportunities for you fresh outta school.

play43:06

Or you might be an experienced, you know,

play43:10

tech coder who's, you know, done fine,

play43:14

you know, bought the house,

play43:16

got everything set up and says,

play43:17

"You know what? I wanna do something for the common good."

play43:22

Sign up, you know, this is part of

play43:24

what we set up during my presidency, the US Digital Service.

play43:28

And it's remarkable how many really high-level folks

play43:37

decided that for six months, for a year, for two years,

play43:41

devoting themselves to questions

play43:44

that are bigger than just,

play43:47

you know, what the latest app or video game was

play43:55

turned out to be really important to them

play43:57

and meaningful to them.

play43:57

And attracting that kind of talent

play44:01

into this field with that perspective,

play44:04

I think is gonna be vital.

play44:05

- Yeah, sounds like it.

play44:06

- All right, great to talk to you.

play44:07

- [Nilay] Thanks so much. - You bet.

play44:09

- [Nilay] Thank you very much.

play44:10

- Really enjoyed it. - I appreciate that.

play44:11

- [Obama] Come on, why don't we get a picture of-

play44:12

- [Nilay] Yeah.

play44:13

- [Obama] the team, 3, 2, 1.

play44:17

One more, okay.

play44:18

Fantastic, really enjoyed it.

play44:20

You did great.

play44:21

- [Nilay] Thank you for saying that.

play44:23

- [Production Assistant] Thank you.

play44:23

- One thing real quick. - Yes, of course.


Related Tags
Obama, AI Regulation, Decoder Interview, Technology Policy, Innovation, Safety Protocols, Democracy, Tech Industry, Future of Work, Public Good