OpenAI CEO Sam Altman on the Future of AI

Bloomberg Live
22 Jun 2023 · 22:56

Summary

TLDR This interview takes a deep look at the future of AI technology and its impact. Sam Altman discusses AI's remarkable progress and the effects it is likely to have on people around the world. He shares his thinking on the need to balance AI's risks against its opportunities, and on how society should use and regulate these technologies. The conversation also covers AI's ethical and safety dimensions, the social and economic benefits it could bring, and how AI should be woven into, and shape, humanity's future.


Q & A

  • What was the biggest surprise among world leaders and developers regarding AI?

    -The level of excitement, optimism, and belief in the future of AI was far more intense than what he had felt in the Bay Area.

  • What specific feedback did he gather on AI's development?

    -More than 100 pages of notes from meetings with developers, covering the complaints people have and the things they want.

  • What is the biggest risk in AI's future?

    -He has warned that AI risk should be treated as an extinction-level risk and given the same priority as pandemics and nuclear war.

  • Why not stop developing AI?

    -The upsides of AI, such as better education and medical care and scientific and technological progress, matter enormously for humanity, and given the potential economic benefits, no company would stop development.

  • Why is global regulation of AI important?

    -Global regulation of powerful AI systems can help manage risk and ensure safety while avoiding over-regulation.

  • What is his position on regulating small startups and open-source projects?

    -He takes the position that these should not be over-regulated, and that regulation should take a form that does not stifle innovation.

  • What kind of certification system is envisioned for ensuring AI safety?

    -People training AI models above a certain capability threshold should go through a certification process, with external audits and safety testing.

  • How can bias from human data be kept out of AI systems?

    -Reinforcement learning from human feedback can be used to align models and reduce bias, and GPT-4 is rated as less biased than previous models.

  • How does Sam Altman profit under OpenAI's financial structure?

    -Sam Altman receives no direct financial benefit from OpenAI; his motivation is not money but an interesting life and making an impact.

  • Why shouldn't people simply trust OpenAI?

    -No single company or individual should be trusted blindly; since the benefits of, and access to, AI technology belong to all of humanity, a shift toward a more democratic governance structure is needed.

Outlines

00:00

🌍 AI Today and Its Future Outlook

Altman says he found excitement, optimism, and an appropriate amount of anxiety about AI all over the world, and was struck by people's desire to understand the tension between the two. He acknowledges regional differences in how AI development is perceived, and stresses the importance of gathering specific feedback and folding it into the development process. He also explains why, despite sounding the alarm about AI's dangers, he continues to push the technology forward.

05:02

🚀 AI's Social Impact and the Need for Regulation

Altman says that with the economic benefits of AI now impossible to ignore, no company would stop development. He stresses the importance of global regulation for systems at the existential-risk level, while warning of the dangers of over-regulation, and states that he is pushing for regulation both publicly and privately. He also touches on differing views about AI safety and on his own role as a developer.

10:07

🤝 The Microsoft Partnership and the Future of AI

Altman speaks positively about the relationship with Microsoft, calling it, despite its challenges, the best major partnership he has ever been part of. He reaffirms his belief that AI can help curb humanity's worst impulses and draw out its best, while noting that the further the technology advances, the bigger the challenges society will face. He also touches on ethical considerations in AI development.

15:10

💡 AI and Humanity: The Possibility of Machine Self-Improvement

Altman discusses his outlook on the future of AI technology and its impact on humanity. He emphasizes AI's potential to amplify human creativity and usefulness and to change society for the better through technological innovation. He also acknowledges the idea that AI may one day be able to improve itself, and the need to proceed carefully and ethically in developing such technology.

20:13

🤔 AI's Global Impact and the Question of Trust

Altman says he is open to possible cooperation with China and Russia and stresses the importance of transparency in international AI development. He explains that OpenAI's structure and goals are designed to benefit all of humanity, and warns of the danger of concentrating too much power in a single individual or company. He emphasizes sharing the governance and benefits of AI technology with humanity as a whole, and advocates new structures and systems to make that possible.


Highlights

GPT-4 is less biased on implicit bias tests even compared to humans who think they're good at mitigating bias

Reinforcement learning and human feedback help align AI models and reduce bias over time

AI can be a force for reducing bias in the world, not enhancing it

Global regulation of powerful AI systems that pose existential risk should not over-regulate small startups

Altman has enough money already and wants to contribute back to technological progress

The Microsoft partnership has been OpenAI's best major partnership despite initial challenges

OpenAI aims to figure out governance democratized to all humanity over time

AI progress follows an exponential curve that humans struggle to intuitively understand

Upsides of AI like education, healthcare, science could help end poverty but we must manage risks

Altman took over 100 pages of feedback from developers on how to improve AI systems

World leaders Altman met with want global cooperation on developing safe AGI

Altman believes benefits enabled by advanced AI will be more interesting than prior tech revolutions

Altman takes handwritten notes in meetings to capture feedback

Google remains an intensely competent company that no one should count out in AI

Altman met AI developers, users, leaders globally and found high excitement about future AI potential

Transcripts

play00:04

MY GUEST NOW IS THE ONE AND ONLY PERSON WHO IS GOING TO BE

play00:07

DECIDING OUR FUTURES. SAM: I DON'T THINK SO. [APPLAUSE]

play00:17

EMILY: YOU HAVE BEEN EVERYWHERE. >> THAT WAS A LONG TRIP.

play00:24

EMILY: YOU WERE IN RIO, TOKYO.

play00:33

WHAT SURPRISED YOU MOST?

play00:36

>> A LOT. IT IS LIKE A VERY SPECIAL

play00:42

EXPERIENCE TO GO TALK TO PEOPLE THAT ARE USERS, DEVELOPERS,

play00:47

WORLD LEADERS INTERESTED IN AI. YOU REALLY GET AN UNDERSTANDING

play00:50

OF WHAT IS GOING ON. I THINK THE BIGGEST SURPRISE

play01:01

WAS THE LEVEL OF EXCITEMENT, OPTIMISM, BELIEF IN THE FUTURE

play01:03

AND WHAT THIS IS GOING TO MEAN EVERYWHERE.

play01:08

I KIND OF KNEW WHAT IT WAS LIKE IN THE BAY AREA AND IT WAS MUCH

play01:13

MORE INTENSE EVERYWHERE ELSE. EMILY:

play01:14

MORE EXCITEMENT THAN ANXIETY?

play01:16

>> ANXIETY, TOO, AS THERE SHOULD BE.

play01:19

YOU HAVE TO HAVE BOTH. BUT JUST THE THOUGHTFULNESS,

play01:24

THE UNDERSTANDING, THE NUANCE, THE TENSION BETWEEN THE TWO.

play01:28

THAT EXISTS EVERYWHERE AND PEOPLE'S DESIRE TO REALLY

play01:31

FIGURE OUT HOW TO DRIVE SOCIETAL BENEFIT WITH THIS TECH AND WHAT IT WILL

play01:36

TAKE TO COME TOGETHER AS A PLANET TO REALLY MAKE SURE WE

play01:40

AVOID SOME OF THE DOWNSIDE SCENARIOS WAS QUITE

play01:42

SOPHISTICATED. EMILY: HOW MIGHT YOU CHANGE YOUR

play01:45

APPROACH TO THE DEVELOPMENT OF AI AS A RESULT OF WHAT YOU

play01:46

LEARNED?

play01:52

>> THERE'S A BUNCH OF SPECIFIC FEEDBACK, MORE THAN 100 PAGES

play01:55

OF NOTES FROM MEETING WITH DEVELOPERS ABOUT COMPLAINTS

play01:59

PEOPLE HAVE OR THINGS THEY WANT. EMILY:

play02:00

I SAW YOU TAKING HANDWRITTEN NOTES.

play02:03

>> I DO TAKE HANDWRITTEN NOTES. THERE IS ALL OF THAT AND THERE

play02:12

IS SORT OF THE WAY PEOPLE WANT TO CUSTOMIZE THE TOOLS, MAKE

play02:15

SURE THEIR OWN VALUES, CULTURE, HISTORY, LANGUAGE ARE

play02:17

REPRESENTED, AND WHAT WE HAVE TO DO TO ENABLE THAT.

play02:19

THERE WILL BE A BUNCH OF SPECIFIC CHANGES WE WILL GO

play02:21

MAKE.

play02:28

AND THEN THE DESIRE FOR THE WORLD TO COOPERATE, LIKE THE

play02:31

NUMBER OF WORLD LEADERS WHO WOULD SAY THINGS LIKE I THINK

play02:36

THIS IS REALLY IMPORTANT, WE WANT TO GET AGI RIGHT, TELL ALL

play02:39

OF THE OTHER WORLD LEADERS I AM IN ON IT AND WORK TOGETHER.

play02:42

THAT CAME UP MAYBE EVERY TIME BUT ONE. EMILY:

play02:47

YOU SIGNED A 22 WORD STATEMENT WARNING ABOUT THE DANGERS OF AI.

play02:52

IT READS, "MITIGATING THE RISK OF EXTINCTION FROM AI SHOULD BE

play02:54

A GLOBAL PRIORITY ALONGSIDE OTHER SOCIETAL-SCALE RISKS SUCH AS

play02:57

PANDEMICS AND NUCLEAR WAR." CONNECT THE DOTS FOR US.

play03:03

HOW DO WE GET FROM A COOL CHATBOT TO THE END OF HUMANITY?

play03:10

>> WE ARE PLANNING NOT TO. EMILY:

play03:12

THAT IS THE HOPE, BUT THERE'S ALSO THE FEAR.

play03:17

>> I THINK THERE ARE MANY WAYS IT COULD GO WRONG BUT WE WORK WITH

play03:23

POWERFUL TECHNOLOGY THAT CAN BE USED IN DANGEROUS WAYS VERY

play03:26

FREQUENTLY IN THE WORLD. AND I THINK WE HAVE DEVELOPED

play03:31

OVER THE DECADES GOOD SAFETY SYSTEMS AND PRACTICES IN MANY

play03:36

CATEGORIES. IT IS NOT PERFECT. THINGS WILL GO WRONG.

play03:41

BUT I THINK WE WILL BE ABLE TO MITIGATE SOME OF THE WORST

play03:42

SCENARIOS YOU CAN IMAGINE.

play03:48

BIOTERRORISM, CYBERSECURITY.

play03:53

MANY MORE WE COULD TALK ABOUT. THE MAIN THING I FEEL IS

play03:59

IMPORTANT THROUGHOUT THIS TECHNOLOGY IS WE ARE ON AN

play04:03

EXPONENTIAL CURVE AND A RELATIVELY STEEP ONE.

play04:05

HUMAN INTUITION FOR EXPONENTIAL CURVES IS REALLY BAD IN

play04:07

GENERAL. IT CLEARLY WAS NOT THAT IMPORTANT IN OUR EVOLUTIONARY

play04:13

HISTORY. SO I THINK -- GIVEN WE

play04:17

HAVE THAT WEAKNESS, WE HAVE GOT TO

play04:23

REALLY PUSH OURSELVES TO SAY, OK, GPT-4, HOW SURE ARE WE THAT

play04:28

GPT-9 WON'T BE? IF THERE IS EVEN A SMALL

play04:30

PERCENTAGE CHANCE IT WILL BE BAD. EMILY:

play04:34

IF THERE IS THAT SMALL PERCENTAGE CHANCE, WHY KEEP

play04:35

DOING THIS AT ALL? WHY NOT STOP? >> A BUNCH OF REASONS.

play04:44

I THINK THE UPSIDES HERE ARE TREMENDOUS, THAT OPPORTUNITY

play04:47

FOR EVERYONE ON EARTH TO HAVE A BETTER QUALITY EDUCATION THAN

play04:52

BASICALLY ANYONE CAN GET TODAY, THAT SEEMS REALLY IMPORTANT.

play04:58

MEDICAL CARE AND WHAT I THINK IS GOING TO HAPPEN THERE,

play05:02

MAKING THAT AVAILABLE TRULY GLOBALLY. THAT IS GOING TO BE

play05:04

TRANSFORMATIVE. THE SCIENTIFIC PROGRESS WE ARE

play05:06

GOING TO SEE. I AM A BIG BELIEVER THAT REAL

play05:09

SUSTAINABLE IMPROVEMENTS IN QUALITY OF LIFE COME FROM

play05:14

SCIENTIFIC TECHNOLOGICAL PROGRESS AND I THINK WE WILL

play05:16

HAVE A LOT MORE OF THAT. THERE ARE THE OBVIOUS BENEFITS.

play05:20

I THINK IT WOULD BE GOOD TO END POVERTY.

play05:26

BUT WE HAVE TO MANAGE THROUGH THE RISK TO GET THERE.

play05:30

I ALSO THINK AT THIS POINT, GIVEN HOW MUCH PEOPLE SEE THE

play05:36

ECONOMIC BENEFITS POTENTIAL, NO COMPANY WOULD STOP IT.

play05:40

THE GLOBAL REGULATION, WHICH I THINK SHOULD ONLY BE ON THESE

play05:46

POWERFUL EXISTENTIAL RISK LEVEL SYSTEMS -- OVERREGULATION IS

play05:47

HARD. YOU DON'T WANT TO OVERDO IT FOR

play05:48

SURE. BUT I THINK GLOBAL REGULATION

play05:53

CAN HELP MAKE IT SAFE, WHICH IS A BETTER ANSWER THAN STOPPING

play05:54

IT. I DON'T THINK STOPPING WOULD WORK. EMILY:

play05:58

LET'S TALK ABOUT GLOBAL REGULATION.

play06:01

YOU HAVE MET WITH PRESIDENT BIDEN AND THE CEOS OF MICROSOFT

play06:05

AND GOOGLE, AND YOU ARE FOR REGULATION BUT WITH CAVEATS.

play06:08

THE CRITICS SAY IT SOUNDS LIKE YOU'RE SAYING, REGULATE US BUT

play06:12

NOT REALLY. OR THAT YOU ARE CALLING FOR

play06:16

REGULATION IN PUBLIC BUT LOBBYING FOR SOMETHING ELSE IN

play06:19

PRIVATE. HOW WOULD YOU RESPOND TO THAT?

play06:23

>> WE ARE PUSHING FOR IT IN PRIVATE, TOO.

play06:27

OBVIOUSLY, WE HAVE THOUGHTS ABOUT WAYS TO DO IT THAT WILL BE

play06:28

EFFECTIVE AND INEFFECTIVE. WE, FOR EXAMPLE, DON'T THINK

play06:34

SMALL STARTUPS AND OPEN-SOURCE BELOW A HIGH CAPABILITY

play06:35

THRESHOLD SHOULD BE SUBJECT TO A LOT OF REGULATION.

play06:39

WE HAVE SEEN WHAT HAPPENS TO COUNTRIES THAT TRY TO

play06:41

OVERREGULATE TECH.

play06:49

BUT ALSO, WE THINK IT IS SUPER IMPORTANT THAT AS WE THINK

play06:51

ABOUT A SYSTEM THAT COULD GET A RISK LEVEL LIKE YOU'RE TALKING

play06:58

ABOUT, THAT WE HAVE AS GLOBAL AND COORDINATED A RESPONSE AS

play07:02

POSSIBLE. SO WE HAVE BEEN TALKING ABOUT THAT PUBLICLY,

play07:03

PRIVATELY. I THINK IT IS IMPORTANT.

play07:07

YOU COULD POINT OUT WE ARE TRYING TO DO REGULATORY CAPTURE

play07:13

HERE, WHATEVER, BUT I JUST -- I THINK THAT IS SO TRANSPARENTLY

play07:17

INTELLECTUALLY DISHONEST, I DON'T EVEN KNOW HOW TO RESPOND.

play07:19

EMILY: THERE IS THE SKEPTIC'S VIEW

play07:23

YOU'RE BUILDING THESE RELATIONSHIPS WITH REGULATORS

play07:27

AND IT IS GOING TO BOX OUT OTHER STARTUPS.

play07:29

>> THAT IS WHAT I MEANT ABOUT THE REGULATORY CAPTURE.

play07:35

WE ARE SAYING EXPLICITLY YOU SHOULD NOT REGULATE SMALL STARTUPS.

play07:40

IT IS A BURDEN ON THEM THAT WE DON'T WANT IN SOCIETY. EMILY:

play07:42

WHAT DO YOU THINK ABOUT A CERTIFICATION SYSTEM FOR AI?

play07:47

>> I THINK THERE IS SOME VERSION OF THAT THAT IS REALLY GOOD.

play07:48

I THINK PEOPLE TRAINING MODELS THAT ARE WAY ABOVE ANY MODEL

play07:54

SCALE WE HAVE TODAY, BUT ABOVE SOME CERTAIN CAPABILITY

play07:57

THRESHOLD -- I THINK YOU NEED TO GO THROUGH A CERTIFICATION

play07:59

PROCESS FOR THAT. I THINK THERE SHOULD BE

play08:01

EXTERNAL AUDITS AND SAFETY TESTS. WE DO THIS FOR LOTS OF

play08:06

INDUSTRIES WHERE WE CARE ABOUT SAFETY. EMILY:

play08:11

ELON MUSK WAS SCARED OF GOOGLE. IS GOOGLE STILL A THREAT?

play08:17

>> GOOGLE IS UNBELIEVABLY COMPETENT AND IT SEEMS LIKE

play08:20

THEY'RE FOCUSED WITH AN INTENSITY. EMILY:

play08:23

SO THEY ARE STILL SCARY?

play08:28

>> THEY ARE A COMPANY THAT I DON'T THINK ANYONE SHOULD EVER

play08:29

WRITE OFF. EMILY: WE HAVE SEEN NEW BARBS YOU AND

play08:34

ELON HAVE BEEN TRADING IN PUBLIC AND IN INTERVIEWS.

play08:42

>> I DON'T REALLY -- EMILY: YOU ARE RESPONDING.

play08:44

YOU ARE RESPONDING TO PEOPLE ASKING, PEOPLE LIKE ME.

play08:52

WHY DO YOU THINK HE IS SO FRUSTRATED OR DISAPPOINTED WITH

play08:53

THE DIRECTION THAT OPENAI HAS GONE?

play08:59

>> I MEAN, YOU SHOULD ASK HIM. HE CAN GIVE YOU A BETTER ANSWER.

play09:01

I CAN SPECULATE.

play09:07

I AM HAPPY TO TALK ABOUT THIS.

play09:15

I THINK HE REALLY CARES ABOUT AI SAFETY A LOT, AND I THINK

play09:20

THAT IS WHERE IT IS COMING FROM. A GOOD PLACE.

play09:24

WE JUST HAVE A DIFFERENCE OF OPINION ON SOME PARTS BUT WE

play09:25

BOTH CARE ABOUT THAT. HE WANTS TO MAKE SURE THAT WE

play09:32

THE WORLD HAVE THE MAXIMAL CHANCE. EMILY:

play09:33

YOU'RE NOT WORRIED HE'S GOING TO CALL YOU OUT, CALL YOU TO

play09:39

COME TO SOME CAGE MATCH IN THE VEGAS OCTAGON LIKE HE JUST DID

play09:40

WITH MARK ZUCKERBERG?

play09:48

>> I WILL GO WATCH IF HE AND ZUCK DO THAT. EMILY:

play09:53

MUCH HAS BEEN MADE OF THE MICROSOFT RELATIONSHIP.

play09:56

IT IS NOT JUST HIM, BUT HE HAS SAID HE IS WORRIED MICROSOFT

play10:02

HAS MORE CONTROL THAN THE LEADERSHIP AT OPENAI HAS.

play10:07

>> I THINK WHAT HE MEANS IS THEY COULD BREAK THE CONTRACT

play10:12

AND TAKE AWAY OUR ACCESS TO THE DATA CENTER. EMILY:

play10:15

AND A LOT OF MONEY THAT YOU HAVE ACCESS TO.

play10:19

>> WE HAVE MONEY, IT IS THE DATA CENTER THEY OPERATE. EMILY:

play10:24

HOW WOULD YOU CHARACTERIZE THE RELATIONSHIP?

play10:26

>> WE THINK IT IS GREAT. ANY DEEPLY COMPLEX

play10:31

RELATIONSHIP, IT IS NOT WITHOUT ITS CHALLENGES BUT IT IS REALLY

play10:35

GREAT. IT IS BY FAR THE BEST MAJOR

play10:39

PARTNERSHIP I HAVE EVER BEEN A PART OF.

play10:42

IT IS KIND OF LIKE -- ON BOTH SIDES, IT WAS A CRAZY THING TO

play10:46

JUMP INTO. SURPRISING IT WORKS THIS WELL.

play10:49

BUT IF YOU LOOK AT THE RESULT, WE ARE VERY HAPPY. EMILY:

play10:53

IN 2018, THE LAST TIME WE TALKED IN PERSON, YOU TOLD ME YOU

play10:59

THOUGHT AI WOULD HELP US BE OUR BEST BUT ALSO STOP OUR WORST

play11:01

IMPULSES.

play11:07

WHAT MAKES YOU CONFIDENT ABOUT THAT? BECAUSE SO MANY TIMES,

play11:10

TIME AND TIME AGAIN,

play11:14

TECHNOLOGY HAS ONLY AMPLIFIED OUR WORST. >>

play11:17

PEOPLE WILL DO BAD THINGS, TOO. I DON'T HAVE A ROSY VIEW OF

play11:23

IT, I HAVE A REALISTIC VIEW. IT IS HUMAN NATURE TO TALK

play11:25

ABOUT THE BAD MORE THAN THE GOOD.

play11:29

I THINK YOU CAN LOOK AT OTHER TECHNOLOGIES THAT HAVE DONE A

play11:34

LOT OF GOOD AND PLENTY OF HARM AND TALK 99% ABOUT THE HARM AND

play11:37

1% ABOUT THE GOOD. I DID THAT, TOO.

play11:39

THAT IS UNDERSTANDABLE. IN 2018, THAT WAS WAY BEFORE

play11:45

THE GPT SERIES WAS A THING SO AT THAT POINT WE HAD SOME

play11:49

INKLING IT WOULD GO LIKE THIS, WE CERTAINLY DID NOT KNOW

play11:50

EXACTLY. BUT I THINK WHAT WE ARE HEADING

play11:55

TO IS THIS PERSONAL TOOL THAT CAN HELP YOU IN WHATEVER WAY

play11:58

YOU WOULD LIKE. ONE OF THE FUN PARTS OF THE

play12:04

TRIP WAS HOW DIVERSE AND BROAD THE STORIES ARE OF HOW PEOPLE

play12:10

ARE USING IT AT WHATEVER THEY WANT TO BE BETTER AT AND HELP

play12:11

THEM.

play12:17

I THINK IF YOU GO TALK TO CHATGPT USERS, YOU WILL FIND A LOT

play12:19

OF SUPPORT AND YOU CAN ALSO FIND PEOPLE WHO ARE MISUSING IT.

play12:24

EMILY: THE REALITY IS YOU ARE BUILDING

play12:28

ON THE BACK OF HUMAN DATA, THAT IS BIASED, RACIST, SEXIST,

play12:30

EMOTIONAL, THAT IS WRONG. A LOT IS WRONG.

play12:34

HOW DO YOU SAFEGUARD AGAINST THAT?

play12:37

>> THERE WAS A RECENT STUDY THAT GPT4, THE MODEL THAT IS

play12:44

RELEASED, IS LESS BIASED ON IMPLICIT BIAS TESTS EVEN THAN

play12:47

HUMANS WHO THINK THEY HAVE REALLY TRAINED THEMSELVES TO BE

play12:50

GOOD AT THIS. IF YOU LOOK AT THE MODEL THAT

play12:54

COMES OUT OF THE PRETRAINING PROCESS, THAT MODEL WILL BE

play12:57

QUITE BIASED AND WILL REFLECT THE INTERNET.

play13:01

BUT REINFORCEMENT LEARNING FROM HUMAN FEEDBACK, ONE OF THE

play13:04

TECHNIQUES WE USE TO ALIGN THE MODELS, WORKS QUITE WELL.

play13:07

IF YOU LOOK AT THE PROGRESS FROM MODEL TO MODEL, EVEN SOME

play13:13

OF OUR BIGGEST CRITICS ARE LIKE, WOW, THEY HAVE GOTTEN A

play13:15

LOT OF THE BIAS OUT OF THE MODELS.

play13:17

I THINK IT CAN BE A FORCE FOR REDUCING BIAS IN THE WORLD, NOT

play13:20

FOR ENHANCING IT. THERE ARE QUESTIONS ABOUT WHAT

play13:26

IF THE USER WANTS TO USE THE MODEL IN A BIASED WAY?

play13:29

HOW MUCH CONTROL DO YOU GIVE A USER?

play13:33

WHO DECIDES THE LIMITS OF THE VALUE SYSTEM?

play13:36

THAT WILL BE A TOUGH QUESTION FOR SOCIETY TO WRESTLE WITH.

play13:41

THERE IS NOT A ONE SENTENCE BUTTONED UP ANSWER, BUT THE

play13:43

TECHNOLOGY I THINK HAS GONE MUCH FURTHER THAN PEOPLE

play13:47

THOUGHT IT WAS GOING TO IN TERMS OF BEING ABLE TO ALIGN

play13:51

THESE MODELS TO BEHAVE IN CERTAIN WAYS. EMILY:

play13:55

WE HAVE BEEN TALKING GOING BACK TO YOUR DAYS AT YC

play13:59

AND IT HAS BEEN FUN TO WATCH THAT JOURNEY.

play14:02

I THINK PEOPLE REALLY WANT TO UNDERSTAND YOUR INCENTIVES AND

play14:05

DON'T NECESSARILY UNDERSTAND YOUR INCENTIVES.

play14:07

PEOPLE ARE PERPLEXED. THEY'RE PERPLEXED YOU HAVE NO

play14:11

EQUITY. CAN YOU EXPLAIN THAT A LITTLE?

play14:18

IS THERE ANY FINANCIAL STRUCTURE WHEREBY YOU DO

play14:24

BENEFIT IF OPENAI IS A BIG THING?

play14:28

>> I GET WHY PEOPLE ARE PERPLEXED ABOUT THIS, AND I

play14:31

HAVE WONDERED IF I SHOULD JUST TAKE ONE SHARE OF EQUITY SO I

play14:33

NEVER HAVE TO ANSWER THIS QUESTION AGAIN. A FEW THINGS.

play14:40

ONE, WE ARE GOVERNED BY NONPROFIT WHICH I AM A BOARD

play14:42

MEMBER AND OUR BOARD NEEDS TO HAVE A MAJORITY OF

play14:45

DISINTERESTED DIRECTORS. LIKE, THEY DON'T HAVE EQUITY IN THE

play14:48

COMPANY. I ORIGINALLY DID NOT DO IT FOR

play14:52

THAT REASON. EMILY:

play14:59

BUT ARE THERE ANY FINANCIAL INCENTIVES?

play15:00

LIKE ON A CERTAIN BENCHMARK?

play15:02

>> NO. I HAVE A TINY BIT OF INVESTMENT

play15:04

BUT IT IS IMMATERIAL. EMILY:

play15:10

IF OPENAI IS MASSIVELY PROFITABLE, YOU WON'T BENEFIT

play15:11

FINANCIALLY?

play15:15

>> ONE OF THE TAKEAWAYS I HAVE LEARNED FROM QUESTIONS LIKE

play15:17

THIS IS THIS CONCEPT OF HAVING ENOUGH MONEY IS NOT SOMETHING

play15:21

THAT IS EASY TO GET ACROSS TO OTHER PEOPLE. EMILY:

play15:24

IT IS HARD FOR PEOPLE TO UNDERSTAND. [APPLAUSE]

play15:30

>> I HAVE ENOUGH MONEY. I'M GOING TO MAKE WAY, WAY MORE

play15:33

FROM OTHER INVESTMENTS.

play15:39

IF I JUST HAD TAKEN THE EQUITY, PEOPLE WOULD BE LIKE, THAT

play15:40

MAKES SENSE.

play15:46

IT GOES TO THE NONPROFIT AND I TRUST THE NONPROFIT TO DO A

play15:50

GOOD THING WITH IT, BUT I HAVE ENOUGH MONEY.

play15:53

WHAT I WANT MORE OF IS AN INTERESTING LIFE AND IMPACT.

play15:57

I STILL GET A LOT OF SELFISH BENEFIT FROM THIS LIKE, WHAT

play16:01

ELSE AM I GOING TO DO WITH MY TIME? THIS IS REALLY GREAT.

play16:04

I CAN'T IMAGINE A MORE INTERESTING LIFE THAN THIS ONE

play16:09

AND A MORE INTERESTING THING TO WORK ON.

play16:12

I GET A TON OF BENEFIT BUT, YES, SOMEHOW THIS IDEA OF

play16:15

HAVING ENOUGH -- IT DOESN'T COMPUTE FOR PEOPLE. EMILY:

play16:19

IS THAT ABOUT POWER? CONTROL?

play16:22

>> I WANT TO MAKE MY CONTRIBUTION BACK TO HUMAN

play16:25

TECHNOLOGICAL PROGRESS. I GET TO BENEFIT FROM ALL OF

play16:27

THIS STUFF THAT PEOPLE DID BEFORE.

play16:32

I GET TO USE THIS IPHONE THAT I STILL MARVEL AT EVERY DAY, ALL

play16:35

OF THE WORK THAT HAD TO GO INTO THAT.

play16:39

THOSE PEOPLE, I DON'T KNOW WHO THEY ARE, I AM VERY GRATEFUL TO

play16:40

THEM. THEY KNEW THEY WERE NEVER GOING

play16:43

TO GET RECOGNITION FROM ME PERSONALLY, BUT THEY ALSO

play16:47

WANTED TO DO SOMETHING TO CONTRIBUTE. AND SO DO I.

play16:50

I CAN'T IMAGINE BETTER COMPENSATION OR FEELING.

play16:55

IT WOULD BE MAYBE WEIRD IF I HAD NOT ALREADY MADE A BUNCH OF

play16:59

MONEY AND PLANNED TO MAKE WAY MORE FROM OTHER INVESTMENTS.

play17:02

I JUST DON'T THINK ABOUT IT.

play17:10

-- I THINK THAT THIS WILL BE LIKE, I THINK IT WILL JUST BE

play17:15

THE MOST IMPORTANT STEP YET THAT HUMANITY HAS TO GET

play17:18

THROUGH WITH TECHNOLOGY. AND I CARE ABOUT THAT.

play17:24

EMILY: WHAT IS ONE QUESTION YOU

play17:26

REALLY WISH PEOPLE LIKE ME ASKED? SAM:

play17:30

NOT THE PERSONAL [LAUGHTER] DRAMA OF THE DAY. EMILY:

play17:35

SERIOUSLY. I UNDERSTAND YOU ARE GETTING A

play17:37

LOT OF THOSE QUESTIONS. SAM:

play17:44

I'M ALWAYS EXCITED TO JUST TALK ABOUT WHAT CAN HAPPEN IN THE

play17:47

COMING FEW YEARS AND DECADES WITH THE TECHNOLOGY.

play17:53

EMILY: SO WHAT DO WE HAVE TO DO? SAM:

play18:00

TALK ABOUT ELON MUSK? [LAUGHTER] ONE THING WE DON'T TALK ABOUT: IT'S

play18:07

DEEPLY IN OUR NATURE TO WANT TO CREATE, TO WANT TO BE USEFUL.

play18:12

TO WANT TO LIKE FEEL THE FULFILLMENT OF DOING SOMETHING

play18:13

THAT MATTERS.

play18:19

IF YOU TALK TO PEOPLE FROM THOUSANDS OF YEARS AGO,

play18:21

HUNDREDS OF YEARS AGO, THE WORK WE DO NOW WOULD HAVE SEEMED,

play18:26

YOU KNOW, UNIMAGINABLE AT BEST AND PROBABLY TRIVIAL.

play18:30

THIS IS NOT DIRECTLY NECESSARY FOR OUR SURVIVAL IN THE SENSE

play18:35

OF LIKE FOOD OR WHATEVER. THE SHIFT HAPPENS WITH EVERY

play18:39

TECHNOLOGY REVOLUTION. AND WE WORRY ABOUT WHAT PEOPLE WILL

play18:44

DO ON THE OTHER SIDE, AND EVERY TIME WE FIND THINGS. AND I

play18:46

EXPECT NOT ONLY WILL THIS NOT BE AN EXCEPTION TO THAT,

play18:52

BUT THE THINGS WE FIND WILL BE BETTER, MORE INTERESTING, AND

play18:55

MORE IMPACTFUL THAN EVER BEFORE. A LOT OF PEOPLE TALK ABOUT AI

play18:59

AS THE LAST TECHNOLOGICAL REVOLUTION.

play19:02

I SUSPECT THAT FROM THE OTHER SIDE IT WILL LOOK LIKE THE

play19:06

FIRST. LIKE THE OTHER STUFF WILL BE SO SMALL IN

play19:08

COMPARISON.

play19:14

THE TECHNOLOGICAL REVOLUTION, IT'S A CONTINUOUS ONE.

play19:18

CONTINUING AND EXPONENTIAL. WHAT WILL BE ENABLED, WHAT WE

play19:24

CAN'T IMAGINE ON THE OTHER SIDE, WE WILL HAVE TOO MUCH.

play19:27

IF YOU WANT TO SIT AROUND AND DO NOTHING, THAT WILL BE GOOD,

play19:29

TOO. EMILY: BONBONS AND BEACHES IN MY

play19:33

FUTURE. SAM: I DON'T THINK THAT IS WHAT YOU

play19:36

WILL WANT, BUT IT'S UP TO YOU. EMILY:

play19:40

YOU TALKED ABOUT AI DESIGNING OTHER AI. SAM:

play19:44

THIS IS THE CLASSIC SCI-FI IDEA. THAT AT SOME POINT THESE

play19:51

SYSTEMS CAN HELP IMPROVE THEMSELVES, CAN DISCOVER BETTER

play19:54

ARCHITECTURES AND WRITE THEIR OWN CODE.

play19:57

I THINK WE ARE A WAYS AWAY FROM THAT BUT IT IS WORTH PAYING

play19:58

ATTENTION TO. EMILY: CHINA AND

play20:03

RUSSIA, YOU DIDN'T GO THERE? SAM: I DID SPEAK THERE. EMILY:

play20:08

VIRTUALLY? SAM: YEAH. EMILY: WHERE ARE THEY ON AI AND SHOULD

play20:13

WE BE WORRIED? SAM: I DON'T HAVE A GREAT SENSE.

play20:18

EMILY: DOES IT CONCERN YOU THAT WE

play20:19

DON'T KNOW? SAM: YEAH, I MEAN AGAIN LIKE

play20:26

ANYTHING, IMPERFECT INFORMATION CAUSES CONCERN.

play20:28

I WOULD LOVE TO HAVE A BETTER SENSE.

play20:31

BUT YOU KNOW, I'M OPTIMISTIC THAT WE CAN FIND SOME SORT OF

play20:33

COLLABORATIVE THING AND I THINK THE THING THAT GETS SAID IN THE

play20:38

U.S., THAT IT'S IMPOSSIBLE TO COOPERATE WITH CHINA, THAT IT'S

play20:43

OFF THE TABLE, IT'S ASSERTED AND PEOPLE ARE TRYING TO WILL

play20:47

IT INTO EXISTENCE, BUT IT'S NOT CLEAR TO ME THAT THAT'S TRUE. IN

play20:49

FACT, I SUSPECT IT'S NOT.

play20:55

EMILY: I'M SO GRATEFUL YOU HAVE BEEN

play20:58

AROUND THE WORLD TALKING ABOUT THIS AND ARE WITH US HERE TODAY.

play21:02

EVEN YOU WOULD ACKNOWLEDGE THAT YOU HAVE AN INCREDIBLE AMOUNT

play21:04

OF POWER AT THIS MOMENT IN TIME. WHY SHOULD WE TRUST YOU? SAM:

play21:07

YOU SHOULDN'T. LIKE YOU KNOW, I DON'T, YOU

play21:12

HAVE KNOWN ME FOR A LONG TIME. I WOULD RATHER BE IN THE OFFICE

play21:16

WORKING. BUT AT THIS MOMENT IN TIME

play21:21

PEOPLE DESERVE BASICALLY AS MUCH TIME ASKING QUESTIONS AS

play21:22

THEY WANT.

play21:29

BUT MORE THAN THAT, LIKE NO ONE PERSON SHOULD BE TRUSTED HERE.

play21:35

I DON'T WANT 7 BILLION SHARES. THE BOARD OVER TIME NEEDS

play21:41

TO GET LIKE DEMOCRATIZED TO ALL OF HUMANITY.

play21:44

THERE ARE MANY WAYS THAT COULD BE IMPLEMENTED.

play21:48

BUT THE REASON FOR OUR STRUCTURE, THE REASON IT IS SO

play21:50

WEIRD, THE CONSEQUENCE OF THAT WEIRDNESS, IS WE THINK

play21:54

THIS TECHNOLOGY, THE BENEFITS, THE ACCESS TO IT BELONGS TO THE

play21:56

WHOLE.

play22:02

LIKE IF THIS REALLY WORKS IT'S LIKE QUITE A POWERFUL

play22:05

TECHNOLOGY AND YOU SHOULD NOT TRUST ONE COMPANY AND CERTAINLY

play22:11

NOT ONE PERSON. EMILY:

play22:14

ARE YOU SAYING WE SHOULDN'T TRUST OPENAI? SAM:

play22:21

IF WE ARE A FEW YEARS DOWN THE ROAD AND HAVEN'T FIGURED OUT

play22:23

HOW TO START DEMOCRATIZING CONTROL, YOU SHOULDN'T.

play22:26

BUT LIKE IF WE FIGURE OUT SOME SORT OF NEW STRUCTURE WHERE

play22:32

OPENAI IS LIKE GOVERNED BY HUMANITY, AND THAT COULD HAPPEN

play22:35

IN MANY WAYS, INCLUDING THE ALIGNMENT THAT WE PICK.

play22:41

IT COULD MEAN ACTUAL BOARD CONTROL.

play22:44

WE ARE TALKING TO A LOT OF PEOPLE ABOUT WHAT IT COULD LOOK

play22:48

LIKE. IF WE DON'T DO THAT, I DON'T THINK LIKE JUST TRUST US

play22:49

IS GOOD ENOUGH. EMILY:

play22:55

WELL THANK YOU FOR EXPLAINING WHY WE SHOULD MAYBE CONSIDER

play22:58

TRUSTING YOU. YOU HAVE A PLANE TO CATCH.

play23:00

WE ARE SO GRATEFUL FOR YOUR TIME. SAM:

play23:03

THANK YOU SO MUCH FOR HAVING ME. EMILY: THANK YOU SO MUCH. SAM:

play23:07

FOR SURE.