OpenAI's Altman and Makanju on Global Implications of AI

Bloomberg Live
16 Jan 2024 · 38:47

Summary

TLDR: Sam Altman and Anna Makanju of OpenAI discuss AI safety, regulation, energy use, and preparing for artificial general intelligence in an on-stage Bloomberg interview at the 2024 Davos conference. They talk about OpenAI's content moderation efforts, its relationship with publishers and artists, the pace of AI progress, and the need for vastly more energy to power advanced AI while combating climate change.

Takeaways

  • OpenAI aims to restrict harmful uses of AI like misinformation while enabling beneficial ones.
  • Demand for AI compute power will drive breakthroughs in fusion, solar and energy storage.
  • AI will augment and enhance human productivity more than replace jobs.
  • OpenAI seeks partnerships with news publishers to properly attribute content.
  • Advanced AI may discover new scientific knowledge and even do AI research.
  • Regulating AI risks stifling entrepreneurial innovation.
  • AI progress will likely continue at an exponential, continuous pace.
  • AI safety standards and best practices still need development.
  • Well-designed AI can make technology feel more human-compatible.
  • Preparing for transformative AI requires humility about the future.

Q & A

  • What content moderation efforts is OpenAI making regarding AI?

    -OpenAI is banning use of ChatGPT for political campaigns, adding cryptographic watermarks to AI images, and partnering with secretaries of state to surface authoritative voting information.

  • How does OpenAI view AI's impact on jobs and employment?

    -OpenAI sees AI as more likely to augment human productivity than replace jobs, though concedes AI may still significantly alter many occupations.

  • What is OpenAI's perspective on training AI systems?

    -OpenAI believes future AI systems will need far less training data than commonly assumed, learning from smaller, high-quality datasets rather than massive quantities of data.

  • How does OpenAI characterize the pace of AI progress?

    -OpenAI expects AI capabilities to improve at an exponential, continuous pace with each new system delivering impressive advances.

  • What is the greatest uncertainty regarding advanced AI?

    -The societal impacts that will emerge when everyone has access to highly capable AI assistants and collaborators.

  • Why does OpenAI iteratively deploy AI systems?

    -To give people time to gradually adapt to the technology while mistakes can occur at low stakes, allowing co-evolution of technology and society.

  • What role can AI have regarding scientific knowledge?

    -Advanced AI systems may discover entirely new scientific knowledge that benefits humanity.

  • How can AI research best progress safely?

    -Through collaboration on safety standards and best practices among organizations like the new Frontier Model Forum.

  • What makes for good AI system design?

    -Systems that feel natural, intuitive, and human-compatible in how people interact with them.

  • What mindset is most constructive regarding transformative AI?

    -Cautious optimism along with humility about the difficulty inherent in predicting the future societal impacts of advanced AI.

Outlines

00:00

Discussion on AI Guidelines and Elections

Paragraph 1 covers a discussion between the speakers on OpenAI's newly announced guidelines restricting the use of AI tools like ChatGPT in political campaigns. They talk about how OpenAI plans to enforce these guidelines at scale using safety systems and partnerships.

05:00

Bipartisan Nature of AI Regulation

Paragraph 2 continues the discussion on AI regulation, with the speakers highlighting the bipartisan support and agreement so far on the need to regulate AI technology.

10:01

AI Applications for National Security

Paragraph 3 has the speakers discuss OpenAI's evolving policies around use of its AI for military and national security applications. They are open to collaborations focused on cybersecurity and veteran wellbeing, but not to developing weapons.

15:04

Engaging Responsibly with Artists

Paragraph 4 covers the issues around use of AI generative models like DALL-E in art. The speakers talk about respecting artist preferences on use of their style and data while finding ways for them to benefit from and collaborate on the technology.

20:04

Learning from Past Controversies

In Paragraph 5, Anna talks about applying lessons from past industry controversies to OpenAI's relationships with governments, and about helping policymakers understand AI technology early on.

25:05

Preparing for Societal Impacts

Paragraph 6 has Sam highlight the potential for rapid advancement in AI capabilities and the resulting responsibility leaders have for thoughtful governance and policies to shape positive societal impacts.

30:06

Developing Safely with Humility

Sam continues on responsible development of AI in paragraph 7, emphasizing the need for humility and safety-focused iterative deployment to give society time to gradually adapt and government policies to evolve.

35:09

Possibilities of AI-Powered Devices

In the final paragraph, when asked about reports of a collaboration with Jony Ive, Sam leaves open the possibility of building new kinds of AI-powered devices, though not as replacements for smartphones.

Keywords

💡Artificial General Intelligence (AGI)

AGI refers to AI systems that can perform any intellectual task that a human being can. It is considered the holy grail of AI research. Sam talks about AGI as the future capability he expects AI systems like GPT to achieve, where they can discover new scientific knowledge, do AI research, and integrate deeply into the economy. This would profoundly change society in ways we can't fully anticipate.

💡Compute

Sam mentions compute as one of the two key 'currencies' of the future along with intelligence. He believes massive amounts of computing power will be needed to develop AGI. This poses an energy and climate challenge, forcing breakthroughs in fusion power and renewable energy to supply AI systems sustainably.

💡Regulation

With the rapid development of AI, governments are starting to explore regulating it. Sam believes different countries will take different approaches initially. Anna discusses specific regulatory efforts in the EU and US. They agree regulation needs an iterative, experimental approach as AI and society co-evolve.

💡Governance

The governance of AI development companies is discussed, including OpenAI's nonprofit/for-profit structure. Sam admits OpenAI's governance needs reviewing after recent events, but wants to focus first on putting a full board of directors in place.

💡Safety

Ensuring AI is developed safely and responsibly is a core aim. OpenAI uses techniques like red teaming to anticipate and mitigate risks. But Anna notes there is no consensus yet on what AI safety means and entails.

💡Alignment

Sam stresses the importance of AI being human-compatible and human-focused - aligned with human values and creativity rather than autonomous. He is relieved GPT has so far been more of a beneficial tool than a disruptive agent.

💡Climate change

Sam acknowledges AI's voracious demand for compute power could drive climate change, unless matched by clean energy breakthroughs. He thinks AI's needs will force greater investment in fusion, renewables and storage.

💡Exponential growth

Sam argues people underestimate the implications of the exponential improvements in AI capabilities. Each new version represents a huge jump, so the pace of change will likely accelerate rapidly even if deployments remain gradual.

💡Uncertainty

A recurring theme is how little we can predict the societal impacts of more advanced AI. Sam emphasizes remaining humble and cautions against false precision in forecasts, since the effects are beyond human intuition.

💡Co-evolution

Sam and Anna both argue AI deployment should be iterative so that technology and society can co-evolve. This allows time for societal adaptation and policy experimentation to ensure AI benefits humanity.

Highlights

OpenAI introduced new guidelines banning ChatGPT use in political campaigns and adding cryptographic watermarks to AI-generated images.

Sam Altman doesn't believe AI will displace jobs as much as previously thought. He sees it more as a productivity-enhancing tool that lets people do more.

Anna Makanju says governments are becoming more interested in understanding and potentially regulating AI, though most are not yet ready to incorporate it operationally.

Sam believes developing AGI will require a massive increase in energy production and breakthroughs like fusion or much cheaper solar and storage.

Sam thinks the world will change more slowly and then more quickly as AI capabilities advance exponentially, but says no one knows exactly what happens next.

Anna sees government AI regulation accelerating in 2024 as more comprehensive policies like the EU AI Act and US executive order are implemented.

Sam wants leaders at Davos to understand that 2023 woke people up to AI's potential, but that what's coming with future models like GPT-5 will be far more impressive and transformative.

Sam says OpenAI considers its systems tools rather than products or standalone entities.

Sam expects OpenAI's next model after GPT-4 to be very impressive, doing new things that were not possible before and improving on GPT-4's capabilities.

Sam thinks new specialized AI devices could be created to augment humanity, but they won't replace general-purpose tools like smartphones.

Anna wants leaders at Davos to balance realistic concerns about AI with messaging that lets people engage with and benefit from its potential.

Sam believes the way AI products are designed matters hugely for making the technology feel accessible rather than scary or mystical.

Sam thinks no one has strong intuitions for what happens when AI becomes thousands of times cheaper and more capable.

Sam argues that gradual, iterative AI deployment builds public familiarity and helps society and policy co-evolve alongside rapid technical advances.

Anna thinks clearer industry standards are needed for concepts like AI safety that lack common definitions and approaches.

Transcripts

[00:00] You guys made some news today, announcing some new guidelines around the use of AI in elections. I'm sure it's all stuff that the Davos set loved to hear. You banned the use of ChatGPT in political campaigns, you introduced cryptographic watermarks for images created by DALL-E to create a kind of provenance and transparency around the use of AI-generated images. I read it and I thought, you know, this is great. Some of these principles are shared by much larger platforms like Facebook and TikTok and YouTube, and they have struggled to enforce them. How do you make it real?

I mean, a lot of these are things that we've been doing for a long time, and we have a really strong safety systems team that not only has monitoring, but we're actually able to leverage our own tools in order to scale our enforcement, which gives us, I think, a significant advantage. There are also some really important partnerships, like with the National Association of Secretaries of State, so we can surface authoritative voting information. So we have quite a few ways that we are able to enforce this.

Sam, does this put your mind at ease that OpenAI doesn't move the needle in some 77 upcoming critical democratic elections in 2024?

No, we're quite focused on it, and I think it's good that our mind is not at ease. I think it's good that we have a lot of anxiety and are going to do everything we can to get it as right as we can. I think our role is very different than the role of a distribution platform, but still important. We'll have to work with them too; it's like, you generate here and distribute here, and there needs to be a good conversation between them. But we also have the benefit of having watched what's happened in previous cycles with previous technologies, and I don't think this will be the same as before. I think it's always a mistake to try to fight the last war, but we do get to take away some learnings from that. And so I think it'd be terrible if I said, oh yeah, I'm not worried, I feel great. We're going to have to watch this incredibly closely this year: super tight monitoring, super tight feedback loop.

Anna, you were at Facebook before OpenAI, so I almost apologize for asking it this way, probably a trigger phrase, but do you worry about another Cambridge Analytica moment?

I think, as Sam alluded to, there are a lot of learnings that we can leverage, but also OpenAI from its inception has been a company that thinks about these issues; it was one of the reasons it was founded. So I think I'm a lot less concerned, because these are issues that our teams have been thinking about from the beginning of our building of these tools.

Sam, Donald Trump just won the Iowa caucus yesterday. We are now sort of confronted with the reality of this upcoming election. What do you think is at stake in the US election for tech and for the safe stewardship of AI? Do you feel like that's a critical issue that voters should and will have to consider in this election?

I think the "now confronted" is part of the problem. I actually think most people who come to Da— (Say that again, I didn't quite get that.) I think part of the problem is we're saying we're now confronted. You know, it never occurred to us that what Trump is saying might be resonating with a lot of people, and now all of a sudden, after this performance in Iowa, oh man. It's a very Davos-centric view; I've been here for two days, I guess. So I would love it if we had a lot more reflection, and if we had started it a lot sooner, and we didn't feel "now confronted." But I think there's a lot at stake in this election; elections are huge deals. I believe that America is going to be fine no matter what happens in this election. I believe that AI is going to be fine no matter what happens in this election, and we will have to work very hard to make it so. But no one wants to sit up here and hear me rant about politics, so I'm going to stop after this: I think there has been a real failure to learn lessons about what's working for the citizens of America and what's not.

Anna, I want to ask you the same question. Taking your political background into account, what do you feel is at stake for Silicon Valley, for AI, in the US election?

I think what has struck me, and has been really remarkable, is that the conversation around AI has remained very bipartisan. So the one concern I have is that somehow both parties end up hating it. But, you know, this is an area where Republicans of course tend to have an approach where they are not as in favor of regulation, but on this I think there's agreement in both parties that something is needed on this technology.
[05:00] You know, Senator Schumer has this bipartisan effort that he is running with his Republican counterparts. Again, when we speak to people in DC on both sides of the aisle, for now it seems like they're on the same page.

And do you feel like all the existing campaigns are equally articulate about the issues relating to AI?

I don't know that AI has really been a campaign issue to date, so it will be interesting to see.

If we're right about what's going to happen here, this is bigger than just a technological revolution in some sense. I mean, all technological revolutions are societal revolutions in a way, but this one feels like it can be much more of that than usual. And so it is going to become a social issue, a political issue. It already has in some ways, but I think it is strange to both of us that it's not more of that already. But with what we expect to happen this year — not with the election, but just with the increase in the capabilities of the products — and as people really catch up with what's going to happen, what is happening, and what's already happened, there's a lot of unease in society.

Well, there are political figures in the US and around the world, like Donald Trump, who have successfully tapped into a feeling of dislocation, anger of the working class, the feeling of exacerbating inequality or technology leaving people behind. Is there the danger that AI furthers those trends?

Yes, for sure, I think that's something to think about. But one of the things that surprised us very pleasantly on the upside — because when you start building a technology, you start doing research, you kind of say, well, we'll follow where the science leads us, and when you put out a product you say, this is going to co-evolve with society and we'll follow where users lead us. You get to steer it, but only somewhat; some of it is just what the technology can do, how people want to use it, and what it's capable of. And this has been much more of a tool than I think we expected. It is not yet — and again, in the future it'll get better — but it's not yet replacing jobs to the degree that people thought it was going to. It is this incredible tool for productivity, and you can see people magnifying what they can do by a factor of two or five, or in some way where it doesn't even make sense to talk about a number because they just couldn't do the things at all before. And that is, I think, quite exciting: this new vision of the future that we didn't really see when we started — we kind of didn't know how it was going to go, and I'm very thankful the technology did go in this direction — where this is a tool that magnifies what humans do, lets people do their jobs better, lets the AI do parts of jobs. And of course jobs will change, and of course some jobs will totally go away, but the human drives are so strong, and the way that society works is so strong, that I think — and I can't believe I'm saying this, because it would have sounded like an ungrammatical sentence to me at some point — but I think AGI will get developed in the reasonably close-ish future, and it'll change the world much less than we all think. It'll change jobs much less than we all think. And again, I may be wrong, but that wouldn't have even compiled for me as a sentence at some point, given my conception then of how AGI was going to go.

As you've watched the technology develop, have you both changed your views on how significant the job dislocation and disruption will be as AGI comes into focus?

So this is actually an area that we know about — we have a policy research team that studies this — and they've seen pretty significant impact in terms of changing the way people do jobs rather than job dislocation. I think that's actually going to accelerate and that it's going to change more people's jobs, but as Sam said, so far it hasn't been a significant replacement of jobs.

You hear a coder say, okay, I'm two times more productive, three times more productive, whatever, than I used to be, and I can never code again without this tool — you mostly hear that from the younger ones. But it turns out, and I think this will be true for a lot of industries, the world just needs a lot more code than we have people to write right now. So it's not that we run out of demand; it's that people can just do more. Expectations go up, but ability goes up too.

I want to ask you about another news report today that suggested that OpenAI was relaxing its restrictions around the use of AI in military projects and developing weapons. Can you say more about that, and what work are you doing with the US Department of Defense and other military agencies?

A lot of these policies were written before we even knew what people would use our tools for, so this was not actually just an adjustment of the military use-case policies but a change across the board, to make it more clear so that people understand what is possible and what is not possible. But specifically on this area, we actually still prohibit the development of weapons, the destruction of property, and harm to individuals. For example, we've been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure, and we've been exploring whether it can assist with preventing veteran suicide. Because we previously had what was essentially a blanket prohibition on military use, many people felt that would have prohibited any of these use cases, which we think are very much aligned with what we want to see in the world.

Has the US government asked you to restrict the level of cooperation with militaries in other countries?

They haven't asked us, but right now our discussions are focused on United States national security agencies, and I think we have always believed that democracies need to be in the lead on this technology.

Sam, changing topics: give us an update on the GPT Store — maybe explain it briefly — and are you seeing the same kind of explosion of creativity we saw in the early days of the mobile app stores?

Yeah, the same level of creativity and the same level of crap. I mean, that happens in the early days as people feel out a technology; there's some incredible stuff in there too.

Give us an example of the GPTs.

Should I say what GPTs are first?

Yeah, sure.

So GPTs are a way to do a very lightweight customization of ChatGPT. If you want it to behave in a particular way, to use particular data, to be able to call out to an external service, you can make this thing, and you can do all sorts of great stuff with it. And we just recently launched a store where you can see what other people have built and share it. Personally, one that I have loved is AllTrails. Every other weekend I like to go for a long hike, and there's always the version of Netflix other people have where it takes an hour to figure out what to watch — it takes me two hours to figure out what hike to do. And the AllTrails thing, to be able to say I want this, I want that, I've already done this one, and have it say here's a great hike — it sounds silly, but I love that one.

Have you added any GPTs of your own?

Have I made any? Yeah. I have not put any in the store; maybe I will.

Great. Can you give us an update on the volume, or the pace at which you're seeing new GPTs?

The number I know is that there had been 3 million created before we launched the store. I have been in the middle of this trip around the world that has been quite hectic, and I have not been doing my normal daily metrics tracking, so I don't know how it's gone since launch — but I'll tell you, by the slowness of ChatGPT, it's probably doing really well.
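The "lightweight customization" Sam describes — custom instructions, particular data, and the ability to call out to an external service — can be pictured with a minimal sketch against the OpenAI Python SDK. This is illustrative only: GPTs themselves are configured in the ChatGPT builder rather than in code, and the search_trails tool, its parameters, and the prompts below are hypothetical, loosely modeled on the AllTrails example he gives.

```python
# Minimal sketch: custom instructions plus a declared external tool, roughly the
# ingredients of a GPT. The "search_trails" function is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a hiking guide. Recommend one trail and explain why."},
        {"role": "user",
         "content": "Find me a new half-day hike near Davos; nothing I've done before."},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "search_trails",  # hypothetical external service
            "description": "Search a trail database by location and duration.",
            "parameters": {
                "type": "object",
                "properties": {
                    "near": {"type": "string"},
                    "max_hours": {"type": "number"},
                },
                "required": ["near"],
            },
        },
    }],
)

# The model may answer directly or ask to call search_trails; the app would then
# run that call against the external service and send the result back.
print(response.choices[0].message)
```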

[12:18] I want to ask you about OpenAI's copyright issues. How important are publisher relations to OpenAI's business, considering, for example, the lawsuit filed against OpenAI last month by the New York Times?

They are important, but not for the reason people think. There is this belief held by some people that, man, you need all of my training data, and my training data is so valuable. Actually, that is generally not the case. We do not want to train on the New York Times data, for example. And more generally, we're getting to a world where it's been data, data, data — you just need more, you need more — and you're going to run out of that at some point anyway. So a lot of our research has been how we can learn more from smaller amounts of very high quality data, and I think the world is going to figure that out. What we want to do with publishers, if they want, is when one of our users asks what happened at Davos today, be able to say here's an article from Bloomberg, here's an article from the New York Times, and here's a little snippet — or probably not a snippet, there's probably some cooler thing that we can do with the technology. Some people want to partner with us, some people don't. We've been striking a lot of great partnerships and we have a lot more coming. And then some people don't want to — we'd rather they just say "we don't want to do that" rather than sue us, but we'll defend ourselves; that's fine too.

I just heard you say you don't want to train on the New York Times. Does that mean, given the legal exposure, you would have done things differently as you trained your model?

Here's a tricky thing about that. The web is a big thing, and there are people who copy from the New York Times and put an article up on some website without attribution, and you don't know that it's a New York Times article. If the New York Times wants to give us a database of all their articles, or someone else does, and say, hey, don't put anything out that's a match for this, we can probably do a pretty good job. We don't want to regurgitate someone else's content, but the problem is not as easy as it sounds in a vacuum. I think we can get that number down and down and down, have it be quite low, and that seems like a super reasonable thing to evaluate us on: if you have copyrighted content that got put into someone else's thing without our knowledge, and you're willing to show us what it is and say don't put this stuff out as a direct response, we should be able to do that. Again — a thousand monkeys at a thousand typewriters, whatever it is — once in a while the model will just generate something very close, but on the whole we should be able to do a great job with this. So there are all the negatives of this, people saying don't do this, but the positives are that I think there are going to be great new ways to consume and monetize news and other published content. And for every one New York Times situation, we have many more super productive things with people who are excited to build the future and not do the theatrics.

And what about DALL-E? There have been artists who have been upset with DALL-E 2 and DALL-E 3. What has that taught you, and how will you do things differently?

We engage with the artist community a lot, and we try to act on the requests. So one is: don't generate in my style, even if you're not training on my data — super reasonable, so we implement things like that.
[15:25] Another is: let me opt out of training, even if my images are all over the internet and you don't know what they are. And there are a lot of other things too. What I'm really excited to do — and the technology isn't here yet — is get to a point where, rather than the artist saying "I don't want this thing for these reasons," we can deliver something where an artist can make a great version of DALL-E in their style, sell access to it if they want, not if they don't, just use it for themselves, or get some sort of economic benefit when someone does use their stuff. And it's not just about training on their images — it really is about style, and that's the thing that, at least in the artist conversations I've had, people are super interested in. So for now it's: all right, let's know what people don't want, make sure that we respect that — of course you can't make everybody happy, but try to make the community feel like we're being a good partner. But what I think will be better and more exciting is when we can do things that artists look at and say, that's awesome.

Anna, you are OpenAI's ambassador to Washington and other capitals around the world. I'm curious what you've taken from your experience at Facebook, and from the tense relations between a lot of tech companies and governments and regulators over the past few decades, and how you're putting that to use now at OpenAI.

I think one thing that I really learned working in government — and of course I worked in the White House during the 2016 Russia election interference, and people think that was the first time we'd ever heard of it, but it was something we had actually been working on for years, thinking, we know this happens, what do we do about it — one thing I never did during that period was go out and talk to the companies, because it's not actually a typical thing you do in government, and it was much more rare back then, especially with these emerging tools. I thought about that a lot as I entered the tech space: that I regretted it, and that I wanted governments to be able to really understand the technology and how decisions are made by these companies. And also, honestly, when I first joined OpenAI, no one in government had heard of OpenAI, for the most part.

And I thought, every time I used it, my God, if I'd had this for the eight years I was in the administration, I could have gotten ten times more done. So for me it was really about how I get my colleagues to use it — especially with OpenAI's mission to make sure these tools benefit everyone. I don't think that'll ever happen unless governments are incorporating it to serve citizens more efficiently and faster. So this is actually one of the things I've been most excited about: to really get governments to use it for everyone's benefit.

I'm hearing a lot of sincerity in that pitch. Are regulators receptive to it? It feels like a lot are coming to the conversation with a good deal of skepticism because of past interactions with Silicon Valley.

I think I mostly don't even really get to talk about it, because for the most part people are interested in governance and regulation. I think they know, theoretically, that there is a lot of benefit, but many governments are not quite ready to incorporate it — there are exceptions, obviously, people who are really at the forefront. So often I just don't even really get to that conversation.
[18:29] So I want to ask you both about the dramatic turn of events in November. Sam, one day the window on these questions will close.

You think they will? I think at some point they probably will, but it hasn't happened yet, so it doesn't matter.

I guess my question is, have you addressed the governance issues — the very unusual corporate structure at OpenAI, with the nonprofit board and the capped-profit arm — that led to your ouster?

We're going to focus first on putting a great full board in place. I expect us to make a lot of progress on that in the coming months, and then after that the new board will take a look at the governance structure. But I think we debated what that even means — should OpenAI be a traditional Silicon Valley for-profit company? We'll never be a traditional company. But the structure — I think we should take a look at the structure. Maybe the answer we have now is right, but I think we should be willing to consider other things. This is not the time for it, though: the focus is on the board first, and then we'll look at it from all angles.

I mean, presumably you have investors, including Microsoft, including your venture capital supporters, and your employees, who over the long term are seeking a return on their investment.

I think one of the things that's difficult to express about OpenAI is the degree to which our team and the people around us — investors, Microsoft, whoever — are committed to this mission. In the middle of those crazy few days, at one point something like 97 or 98 percent of the company signed a letter saying, we're all going to resign and go to something else. That would have torched everyone's equity, and for a lot of our employees this is all, or the great majority, of their wealth. People being willing to do that is, I think, quite unusual. Our investors, who were also about to watch their stakes go to zero, were just saying, how can we support you, and whatever is best for the mission — Microsoft too. I feel very, very fortunate about that. Of course, I would also like to make all of our shareholders a bunch of money, but it was very clear to me what people's priorities were, and that meant a lot.
[20:47] I sort of smiled, because you came to the Bloomberg Tech conference last June, and Emily Chang asked something along the lines of "why should we trust you," and you very candidly said "you shouldn't," and you said the board should be able to fire me if they want. And of course then they did, and you quite adeptly orchestrated your return.

Actually, let me tell you something. The board did that, and I thought, this is wild — I was super confused, super caught off guard — but this is the structure, and I immediately went to thinking about what I was going to do next. It was not until some board members called me the next morning that I even thought about really coming back.

When they asked — you didn't want to come back? Do you want to talk about that?

But the board did have all of the power there. Now — you know what, I'm not going to say that next thing.

I think you should continue.

I would also just say that there are a lot of narratives out there like, oh, this was orchestrated by all these other forces — it's not accurate. It was the employees of OpenAI who wanted this and who thought it was the right thing for Sam to be back.

Yeah. The thing I will say is, I think it's important that there is an entity that can fire me, but that entity has got to have some accountability too, and that is a clear issue with what happened.

Right. Anna, you wrote a remarkable letter to employees during the saga, and one of the many reasons I was excited to have you on stage today was just to ask you: what were those five days like for you, and why did you step up and write that?

Anna can clearly answer this if she wants to, but is this really what you want to spend our time on — the soap opera — rather than what AI is going to do?

I mean, I'm wrapping it up, but I think people are interested.

Okay, well, we can leave it here if you want.

No, no, let's answer that question and then we can move on.

I would just say, for color, that it happened the day before the entire company was supposed to take a week off. So on Friday we were all preparing to have a restful week after an insane year — and then many of us slept on the floor of the office for a week.

Right. There's a question here that I think is a really good one. We are at Davos; climate change is on the agenda. The question is — well, I'm going to give it a different spin — considering the compute costs and the need for chips, does the development of AI and the path to AGI threaten to take us in the opposite direction on the climate?
[23:23] We do need way more energy in the world than I think we thought we needed before. My whole model of the world is that the two important currencies of the future are compute/intelligence and energy: the ideas that we want and the ability to make stuff happen, and the ability to run the compute. And I think we still don't appreciate the energy needs of this technology. The good news, to the degree there's good news, is that there's no way to get there without a breakthrough: we need fusion, or we need radically cheaper solar plus storage, or something at massive scale — a scale that no one is really planning for. So it's totally fair to say that AI is going to need a lot of energy, but it will force us, I think, to invest more in the technologies that can deliver it, none of which are the ones burning the carbon — like all those unbelievable numbers of fuel trucks.

And by the way, you back one or more nuclear companies.

Yeah. I personally think that is either the most likely or the second most likely approach.

Do you feel like the world is more receptive to that technology now? Certainly historically not in the US.

I think the world is still, unfortunately, pretty negative on fission and super positive on fusion — it's a much easier story — but I wish the world would embrace fission much more. Look, I may be too optimistic about this, but I think we have paths now to a massive energy transition away from burning carbon. It'll take a while — those cars are going to keep driving, there's all the transport stuff, it'll be a while till there's a fusion reactor in every cargo ship — but if we can drop the cost of energy as dramatically as I hope we can, then the math on carbon capture just changes. I still expect, unfortunately, that the world is on a path where we're going to have to do something dramatic with the climate, like geoengineering, as a Band-Aid, as a stopgap, but I think we do now see a path to the long-term solution.

So I want to go back to my question. In terms of moving in the opposite direction, it sounds like the answer is potentially yes on the demand side, unless we take drastic action on the supply side.

There is no — I see no way to manage the supply side without a really big breakthrough.

Right, which is — does this frighten you guys? Because the world hasn't been that versatile when it comes to supply, but AI, as you have pointed out, is not going to wait until we start generating enough power.

It motivates us to go invest more in fusion and invest more in new storage — and not only the technology, but what it's going to take to deliver this at the scale that AI needs and that the whole globe needs. So I think it would not be helpful for us to just sit there and be nervous. We're saying: we see what's coming with very high conviction; it's coming. How can we use our abilities, our capital, our whatever else to do this, and in the process hopefully deliver a solution for the rest of the world, not just for AI training workloads or inference workloads.

Anna, it felt like in 2023 we had the beginning of an almost hypothetical conversation about regulating AI. What should we expect in 2024? Do governments act, does it become real, and what does AI safety look like?

So I think it is becoming real. The EU is on the cusp of actually finalizing this regulation, which is going to be quite extensive, and the Biden administration wrote the longest executive order, I think, in the history of executive orders covering this technology, and it is being implemented in 2024, because they gave agencies a bunch of homework for how to implement this and govern this technology — and it's happening. So I think it is really moving forward. But what exactly safety looks like, or what it even is — I think this is still a conversation we haven't bottomed out on. You know, we founded this Frontier Model Forum in part—

Yeah, maybe explain what that is.

So for now this is Microsoft, OpenAI, Anthropic and Google, but it will, I think, expand to other frontier labs. Really, right now, all of us are working on safety — we all red-team our models, we all do a lot of this work — but we don't have even a common vocabulary or a standardized approach. And to the extent that people think, well, this is just industry — this is in part a response to many governments that have asked us for this very thing: what is it, across industry, that you think are viable best practices?

Is there a risk that regulation starts to discourage entrepreneurial activity in AI?

I think people are terrified of this. This is why I think Germany and France and Italy interjected in the EU AI Act discussion: they are really concerned about their own domestic industries being undercut before they've even had a chance to develop.

Were you satisfied with your old boss's executive order, and was there anything in there that you had lobbied against?

No, and in fact I think it was really good in that it wasn't just "these are the restrictions"; it was also "please go and think about how your agency will actually leverage this to do your work better." So I was really encouraged that they had a balanced approach.

Sam, first time at Davos?

First time.

Okay. You mentioned that you'd prefer to spend more of our time here on stage talking about AGI. What is the message you're bringing to political leaders and other business leaders here, if you could distill it?

Thank you.
[29:26] So I think 2023 was a year where the world woke up to the possibility of these systems becoming increasingly capable and increasingly general. But GPT-4, I think, is best understood as a preview. It was more over the bar than we expected, in terms of utility for more people in more ways — and again, we're thrilled that people love it and use it as much as they do — but it's easy to point out the limitations. The progress here is not linear, and this is the thing that I think is really tricky: humans have horrible intuition for exponentials, at least speaking for myself, but it seems like a common part of the human condition. What does it mean if GPT-5 is as much better than GPT-4 as four was to three, and six is to five? And what does it mean if we're just on this trajectory now?

On the question of regulation, I think it's great that different countries are going to try different things. Some countries will probably ban AI, some countries will probably say no guardrails at all; both of those, I think, will turn out to be suboptimal, and we'll get to see different things work. But as these systems become more powerful, as they become more deeply integrated into the economy, as they become something we all use to do our work, and then as things beyond that happen — as they become capable of discovering new scientific knowledge for humanity, even as they become capable of doing AI research at some point — the world is going to change more slowly, and then more quickly, than we might imagine. But the world is going to change. A thing I always say to people is that no one knows what happens next. I really believe that, and I think keeping humility about that is really important. You can see a few steps in front of you, but not too many.

But when the cost of cognition falls by a factor of a thousand or a million, when its capability augments us in ways we can't even imagine — one example I try to give people is: what if everybody in the world had a really competent company of 10,000 great virtual employees, experts in every area, who never fought with each other, didn't need to rest, got really smart, and got smarter at this rapid pace? What would we be able to create for each other? What would that do to the world that we experience? And the answer is, none of us know, of course, and none of us have strong intuitions for that. I can imagine it, sort of, but it's not a clear picture. And this is going to happen. It doesn't mean we don't get to steer it; it doesn't mean we don't get to work really hard to make it safe and to do it in a responsible way. But we are going to go to the future, and I think the best way to get there in a way that works is the level of engagement we now have. A big part of the reason we believe in iterative deployment of our technology is that people need time to gradually get used to it, to understand it; we need time to make mistakes while the stakes are low; governments need time to make some policy mistakes; and technology and society have to co-evolve in a case like this. So the technology is going to change with each iteration, but so is the way society works, and that's got to be this interactive, iterative process. We need to embrace it, but have caution without fear.

And how long do we have for this iterative process to play out?

I think it's surprisingly continuous. If I try to think about discontinuities, I can sort of see one when AI can do really good AI research — and I can see a few others too, but that's an evocative example — but on the whole, I don't think it's about crossing one line. I think it's about this continuous exponential curve we climb together. And so, how long do we have? Like, no time at all, and infinite.
[33:24] I saw GPT-5 trending on X earlier this week, and I clicked, and it sounded probably misinformed, but what can you tell us about GPT-5, and is it an exponential improvement over what we've seen?

Look, I don't know what we're going to call our next model.

Are you going to get creative with the naming process?

I don't want to be shipping iPhone 27 — that's not quite my style. But the next model we release, I expect it to be very impressive, to do new things that were not possible with GPT-4, to do a lot of things better. And I expect us to take our time and make sure we can launch something that we feel good about and responsible about.

Within OpenAI, some employees consider themselves to be, quote, "building God." Is that—

I haven't heard that. I mean, I've heard people say that facetiously, but I think almost all employees would say they're building a tool, more so than they thought they were going to be, which they're thrilled about. You know, this confusion in the industry of, are we building a creature or are we building a tool — I think we're much more building a tool, and that's much better.

To transition to something—

Yeah, go ahead — no, no, you finish your thought.

Oh, I was just going to say, we think of ourselves as tool builders. AI is much more of a tool than a product, and much, much more of a tool than some kind of entity. One of the most wonderful things about last year was seeing just how much people around the world could do with that tool. They astonished us, and I think we'll just see more and more — human creativity, and the ability to do more with better tools, is remarkable.

And before we have to start wrapping up: there was a report that you were working with Jony Ive on an AI-powered device, either within OpenAI or perhaps as a separate company. I bring it up because CES was earlier this month and AI-powered devices were the talk of the conference. Can you give us an update on that, and does AI bring us to the beginning of the end of the smartphone era?

Smartphones are fantastic. I don't think smartphones are going anywhere. I think what they do, they do really, really well, and they're very general. If there is a new thing to make, I don't think it replaces a smartphone, in the way that I don't think smartphones replaced computers. But if there's a new thing to make that helps us do more, better, in a new way — given that we have this unbelievable change... I don't think we spend enough time marveling at the fact that we can now talk to computers and they understand us and do stuff for us. It is a new affordance, a new way to use a computer, and if we can do something great there, a new kind of computer, we should do that. And if it turns out that the smartphone's really good and this is all software, then fine. But I bet there is something great to be done.

And the partnership with Jony — is that an OpenAI effort, is that another company?

I have not heard anything official about a partnership with Jony.

Okay. Anna, I'm going to give you the last word. As you and Sam meet with business and world leaders here at Davos, what's the message you want to leave them with?

I think that there is a trend where people feel more fear than excitement about this technology, and I understand that — we have to work very hard to make sure that the best version of this technology is realized. But I do think that many people are engaging with this via the leaders here, and that those leaders really have a responsibility to make sure they are sending a balanced message, so that people can actually engage with it and realize the benefit of this technology.

Can I have 20 seconds?

Absolutely.

One of the things that I think OpenAI has not always done right — and the field hasn't either — is to find a way to build these tools, and to talk about them, in a way that doesn't get that kind of response. I think one of the best things ChatGPT did is that it shifted the conversation to the positive, not because we said "trust us, it'll be great," but because people used it and said, oh, I get this, I use this in a very natural way. The smartphone was cool because I didn't even have to use a keyboard; I could use it more naturally. Talking is even more natural. Speaking of Jony: Jony is a genius, and one of the things that I think he has done again and again with computers is figuring out a way to make them very human-compatible, and I think that's super important with this technology — making it feel not like this mystical thing from sci-fi, not this scary thing from sci-fi, but this new way to use a computer that you love. I still remember the first iMac I got and what that felt like to me. It was heavy, but the fact that it had that handle — even though, as a kid, it was very heavy to carry — meant that I had a different relationship with it, because of that handle and because of the way it looked. I thought, oh, I can move this thing around; I could unplug it and throw it out the window if it tried to wake up and take over. That's nice. And I think the way we design our technology and our products really does matter.