AI Everywhere: Transforming Our World, Empowering Humanity

Dartmouth Engineering
19 Jun 2024 · 51:42

Summary

TLDR: At a special Dartmouth event, a conversation was held with Mira Murati, one of the leading specialists in artificial intelligence and an alumna of the engineering school. The OpenAI executive discussed the development of innovative models such as ChatGPT and Dall-E, and looked ahead to future applications of AI, safety challenges, and ethical questions. The conversation also covered AI's impact on the job market and education, and the importance of regulating the use of frontier technologies.

Takeaways

  • 🌟 Mira Murati, chief technology officer at OpenAI and a 2012 graduate of Dartmouth's engineering school, is one of the nation's foremost leaders in artificial intelligence.
  • 🎓 Dartmouth had the honor of welcoming Mira Murati together with Jeff Blackburn, a Dartmouth trustee and former senior vice president of Global Media and Entertainment at Amazon.
  • 🏗️ After graduating, Mira Murati began her career in aerospace, then moved to Tesla, where she worked on the Model S and Model X, and later joined a startup applying AI and computer vision to spatial computing.
  • 🤖 She was drawn to OpenAI by its mission to build safe artificial general intelligence and by the chance to work on research in the field.
  • 📈 At OpenAI, Mira took on the development of transformative models such as ChatGPT and Dall-E, which set the stage for future generative AI technologies.
  • 🔧 A key part of OpenAI's work is understanding and improving AI safety and control, including guarding against systems connecting to the internet and acting autonomously without authorization.
  • 🧠 The discussion covered the trajectory of AI and its influence on different industries, with the claim that AI will touch every aspect of cognitive work.
  • 👨‍🏫 Higher education, in Mira's view, should drive innovation in education, using AI to create high-quality, accessible, and personalized learning.
  • 🛠️ Mira stressed the importance of learning the skills needed to work with new technologies and of adapting to the rapid pace of change in AI.
  • 🌐 It is essential to keep AI ethical and safe, including questions of copyright, biometric rights, and the protection of personal identity from misuse of AI.
  • 💡 In closing, Mira Murati advised students to study with joy and curiosity, and to feel less stress about their future career path.

Q & A

  • What event is taking place in the video?

    -The video covers a special event: a conversation with Mira Murati, one of the leading specialists in artificial intelligence and an alumna of Dartmouth's engineering school.

  • What is Alexis Abramson's title?

    -Alexis Abramson is the dean of the Thayer School of Engineering at Dartmouth.

  • What is Mira Murati known for?

    -Mira Murati is known as the chief technology officer of OpenAI and a 2012 alumna of Dartmouth's engineering school. She is recognized for her pioneering work in AI, including the development of ChatGPT and Dall-E.

  • Which special guest is present at the event?

    -The special guest at the event is Joy Buolamwini, renowned for her work in AI ethics and algorithmic justice.

  • What innovation experience does Mira Murati have?

    -Mira Murati has worked in aerospace and at Tesla, where she worked on the Model S and Model X. She is also interested in applying AI and computer vision to self-driving cars and spatial computing.

  • What technologies does OpenAI develop?

    -OpenAI develops transformative models such as ChatGPT and Dall-E, which set the stage for future generative AI technologies.

  • What are generative AI technologies and how do they work?

    -Generative AI technologies are systems that can create new data, such as text, images, or video, by training deep neural networks on large volumes of data.

  • What safety and ethics aspects are considered in AI development?

    -AI development must address safety concerns, such as preventing systems from connecting to the internet and acting autonomously without authorization, as well as societal consequences, such as the impact on jobs and the need for regulation.

  • What opportunities for innovation does AI offer in education?

    -AI offers the chance to create high-quality, accessible, and personalized education that can meet the individual needs of learners around the world.

  • What changes might AI bring to industries?

    -AI could bring change to every industry, especially cognitive work, including finance, content, media, healthcare, and many others.

  • What problems may arise around creative rights and biometric data in AI?

    -Creative-rights problems include questions of consent, compensation, and copyright when data is used to train AI models. Biometric data, such as voices and faces, raises questions of impersonation and rights to one's own likeness.

  • What is the 'Spec' in AI development and how does it help?

    -The Spec is a tool that provides transparency into the values embedded in an AI system. It can be understood as a 'constitution' for AI systems, one that evolves and becomes more precise over time.

  • What approach does OpenAI take to testing and deploying its products?

    -OpenAI uses iterative deployment, starting with limited access for experts and 'red teaming' to study risks, then gradually widening access while gathering feedback and learning the edge cases.

  • What precautions are applied to prevent negative consequences of AI technologies?

    -OpenAI develops and applies measures such as watermarking, content policies, tools for detecting deepfakes and tracing how information spreads, and partnerships with civil-society groups to manage AI-related risks and problems.

  • What are Mira Murati's views on the future of education and its role in integrating AI?

    -Mira Murati believes education should integrate AI to advance creativity and knowledge, providing high-quality, accessible education that adapts to learners' individual needs.

  • What advice does Mira Murati give Dartmouth students?

    -Mira Murati advises students to study with less stress, to approach learning with joy and curiosity, and to seek a broad understanding of different fields of knowledge.

Outlines

00:00

🎓 Introduction and welcome

Alexis Abramson, dean of the Thayer School of Engineering at Dartmouth, opens the event by welcoming guests and attendees. She introduces Mira Murati, a leading figure in artificial intelligence and a Dartmouth Engineering alumna. Special guests are also welcomed, including Joy Buolamwini, known for her work in AI ethics and algorithmic justice. Dartmouth's history of AI innovation is recounted, along with Mira Murati's role at OpenAI, where she led the development of models such as ChatGPT and Dall-E, setting up the conversation about her contribution to generative AI technologies.

05:03

🚀 Early career and the move to OpenAI

Mira Murati describes her move from aerospace to Tesla, a step driven by her interest in the innovative challenges of sustainable transportation. Although her work at Tesla centered on the Model S and Model X, her interest in AI and self-driving cars made her want to learn more about the field. A key step was joining a startup where she applied AI and computer vision to spatial computing. That gave her a new perspective on where AI could be applied, which later led her to join OpenAI and work on safe artificial general intelligence.

10:05

🤖 The evolution of AI and its potential

The conversation turns to how AI has developed over recent decades through the combination of neural networks, abundant data, and compute power. Together, these three ingredients produce transformative AI systems capable of general tasks, even though how they do so is not fully understood. Systems such as GPT-3 demonstrate language understanding at a level comparable to humans and can generate text based on their training. The discussion also covers AI's ability to work with different types of data, including code, images, video, and sound, underscoring the breadth of its applications.

15:07

🔐 AI safety and deployment

The importance of AI safety and its accompanying challenges are discussed in terms of technological and social responsibility. Safety and capability should not be treated as separate concerns; they must be developed together. Reducing risk and providing tools to understand and control AI is a central task. The conversation also covers the need for a scientific approach to predicting AI capabilities and preparing guardrails in advance.

20:08

🌐 Responsible use of AI and shared responsibility

The responsible use of AI technologies is discussed, including copyright, biometric rights, and the protection of personal identity. Responsibility for how AI is used rests with developers and with society as a whole. Providing education, tools, and access so that all stakeholders, including governments and regulators, can understand and oversee AI is highlighted as a key element of managing risk.

25:10

🛠️ AI's impact on industries and the job market

The potential impact of AI on industries and the job market is analyzed. AI touches every sphere of cognitive work and could substantially reshape the professional landscape. The conversation also covers job losses, the creation of new opportunities, and the transformation of the economy, including how economic value gets distributed, public benefits, and support systems such as universal basic income.

30:11

🎨 AI in creativity and education

The potential of AI to expand creativity and teaching is discussed. AI can lower the barriers to creative work by providing tools that help with design, coding, writing, and more. The conversation also covers using AI to improve learning, including personalized curricula and support for learning how to learn.

35:13

🤝 Working with society and protecting creative rights

The closing part of the conversation stresses the importance of collaborating with media companies, content creators, and society to build AI products that advance society and are genuinely useful. It also covers protecting creative and biometric rights, including questions of data use, consent, compensation, and control over data in AI products.

40:16

🎓 Reflecting on returning to school

In closing, Mira Murati answers a question about what she would do differently if she were a student again. She says she would study the same subjects, but with less stress, learning with joy and curiosity. She also stresses the value of studying a broad range of subjects to better understand the world and to grow as a researcher.

Keywords

💡Artificial intelligence (AI)

Artificial intelligence (AI) is the field of computer science that builds systems able to perform tasks requiring human intelligence, such as natural language processing, image recognition, and decision making. It is the central topic of the video: Mira Murati is one of the leading specialists in the field, and the conversation focuses on AI's development and prospects.

💡Generative AI technologies

Generative AI technologies are a class of systems that can create new data, such as text, images, or audio, based on learned models. In the video, Mira Murati points to ChatGPT and Dall-E as examples of generative technologies that set the stage for future AI innovation.
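
As a plain illustration of the generative idea (a toy sketch, nothing like a production model), an autoregressive text generator learns which token tends to follow which and then samples new text one token at a time:

    import random
    from collections import defaultdict

    # "Training": count which token follows which (a bigram table).
    corpus = "the cat sat on the mat the cat ate".split()
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=6):
        """Sample a new sequence one token at a time."""
        out = [start]
        for _ in range(length):
            candidates = following.get(out[-1])
            if not candidates:  # no continuation seen in training
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat the"

Real systems replace the bigram table with a deep neural network trained on vastly more data, but the token-by-token sampling loop is the same.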

💡Machine learning

Machine learning is the subfield of AI in which systems learn and improve from data instead of following rigidly hand-coded rules. In the video, Mira Murati explains that modern AI systems such as neural networks use machine learning to perform general tasks and reach results comparable to human ones.
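
A minimal sketch of what "learning from data" means: instead of programming the rule, fit a parameter to examples by gradient descent (toy numbers, illustration only):

    # Fit y = w * x to data: the rule (w) is learned from examples.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

    w = 0.0    # initial guess
    lr = 0.01  # learning rate
    for _ in range(500):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad

    print(round(w, 2))  # ~2.0, recovered from the data alone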

💡Transformative models

Transformative models are approaches in machine learning that let systems not only analyze data but also generate new solutions and content. The video notes that Mira Murati led the development of such models, which open a new stage in generative AI technologies.

💡Open AI

OpenAI is a research organization working toward safe artificial general intelligence and developing innovative AI technologies. In the video, Mira Murati, who leads technology development at OpenAI, discusses her work on building influential AI products.

💡AI safety

AI safety concerns preventing the negative consequences of artificial intelligence, whether from malicious use or unintended errors. In the video, Mira Murati stresses the importance of building safety into AI development from the very start, together with the corresponding safety mechanisms.

💡Scaling

Scaling, in the context of AI, describes the tendency of AI systems to perform better as the volume of data and compute grows. In the video, Mira Murati mentions 'scaling laws', which show how systems get smarter as data and compute power increase.
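
These 'laws' are empirical power-law fits rather than physical laws. A minimal sketch of the idea, with made-up numbers, assuming loss falls as a power of compute:

    import numpy as np

    # Hypothetical measurements: training compute vs. model loss.
    compute = np.array([1e18, 1e19, 1e20, 1e21])  # FLOPs
    loss = np.array([3.2, 2.6, 2.1, 1.7])

    # A power law is a straight line in log-log space.
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    print(f"fitted exponent: {slope:.3f}")  # negative: loss falls

    # Extrapolate to the next order of magnitude of compute --
    # the 'statistical prediction of capability' Murati describes.
    predicted = np.exp(intercept) * (1e22 ** slope)
    print(f"predicted loss at 1e22 FLOPs: {predicted:.2f}")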

💡AI ethics

AI ethics studies the moral, legal, and social aspects of artificial intelligence, including fairness, non-discrimination, and accountability. Joy Buolamwini, a specialist in AI ethics, attends the event, and the importance of weighing ethics at every stage of developing and using AI is discussed.

💡Dataset

A dataset is a collection of data used to train and test machine learning systems. In the video, Mira Murati notes the importance of selecting and processing datasets to ensure AI models are trained accurately and effectively.
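
One routine piece of dataset processing, sketched below with toy data: deduplicate examples and hold out a test split, so duplicates cannot leak across splits and inflate evaluation scores:

    import random

    raw = ["a", "b", "a", "c", "d", "b", "e", "f"]
    unique = list(dict.fromkeys(raw))  # dedup, preserving order

    random.seed(0)           # reproducible split
    random.shuffle(unique)
    cut = int(0.8 * len(unique))
    train, test = unique[:cut], unique[cut:]
    print(train, test)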

💡Technological progress

Technological progress is discussed in the context of how quickly AI systems are developing in their ability to learn and to generate new content. Mira Murati predicts that the systems following GPT-4 will reach intelligence comparable to a PhD student's on specific tasks.

💡Innovation in education

Innovation in education means using new technologies to improve learning and expand opportunities for students. In the video, Mira Murati expresses confidence that AI can fundamentally change education by offering a personalized approach and access to high-quality resources for everyone.

Highlights

Alexis Abramson, dean of Thayer School of Engineering, welcomed attendees to a special event featuring a conversation with AI leader Mira Murati.

Mira Murati, a Dartmouth Engineering alum and CTO at Open AI, is known for her work on AI technologies like ChatGPT and Dall-E.

Joy Buolamwini, renowned for her work in AI ethics, was also present to receive an honorary degree from Dartmouth.

Dartmouth's history with AI innovation dates back to the first conference on artificial intelligence in 1956.

Mira Murati's early career included working in aerospace and Tesla, where she became interested in the intersection of AI and self-driving cars.

Murati's work at a startup involved applying AI and computer vision to spatial computing, exploring new interfaces for human-computer interaction.

OpenAI's mission to build safe artificial general intelligence aligns with Murati's interest in AI's potential for societal advancement.

The development of transformative AI models like GPT-3 and Dall-E has been driven by a combination of neural networks, data, and compute power.

AI systems have demonstrated the ability to understand and generate language, code, and even create images and videos from text prompts.

Murati discusses the concept of 'scaling laws' in AI, where increased data and compute lead to improved model performance.

Commercializing AI technology has proven challenging, leading OpenAI to move beyond its API and build products like ChatGPT itself.

AI systems are projected to reach human-level intelligence in specific tasks within the next couple of years.

Safety and security in AI development are critical, with OpenAI focusing on building these considerations into technology from the outset.

Murati emphasizes the importance of societal readiness for AI, advocating for shared responsibility and education on AI capabilities and risks.

OpenAI is working on predictive capabilities to prepare for the future risks and safety concerns associated with advanced AI systems.

The potential for AI to impact various industries is vast, with Murati suggesting that AI will transform cognitive work across the board.

Mira Murati's vision for AI in education is to create highly accessible, customized learning experiences globally.

The conversation also touched on the importance of studying the current use of AI tools to predict and prepare for their future impact on jobs and society.

Murati's advice for students is to study with less stress, maintain curiosity, and enjoy the learning process.

Transcripts

play00:03

- Good afternoon, everyone.

play00:06

Great to see a nice packed room here in our new building.

play00:11

My name is Alexis Abramson,

play00:12

dean of Thayer School of Engineering at Dartmouth,

play00:15

and it's truly a pleasure to welcome you all

play00:18

to this very special event,

play00:20

a conversation with Mira Murati,

play00:22

one of our nation's foremost leaders

play00:24

in artificial intelligence

play00:26

and also a Dartmouth Engineering alum.

play00:30

Before we get started,

play00:31

I wanna extend a special welcome to a special guest,

play00:34

Joy Buolamwini,

play00:37

who is also renowned for her work in AI,

play00:40

AI ethics, and algorithmic justice.

play00:43

She'll also be receiving her honorary degree

play00:45

from Dartmouth tomorrow.

play00:48

And a warm welcome to Mira and all of you who

play00:51

either are part of her family now,

play00:53

or are part of her family when she was here at Dartmouth,

play00:56

including her brother, Ernel Murati,

play00:58

also a Thayer alum from the class of 2016.

play01:03

Thank you to our partners

play01:04

at the Neukom Institute for Computational Science

play01:07

and the Department of Computer Science.

play01:11

From Dartmouth's very first seminal conference

play01:14

on artificial intelligence in 1956

play01:17

to our current multidisciplinary research

play01:21

on large language models and precision health,

play01:24

Dartmouth has long been at the forefront of AI innovation.

play01:28

So we are especially thrilled to have Mira,

play01:31

Chief Technology Officer at Open AI

play01:34

and Thayer School of Engineering's class of 2012

play01:37

with us today.

play01:39

She is known for her pioneering work on some

play01:41

of the most talked about AI technologies of our time.

play01:46

At Open AI, she has spearheaded the development

play01:49

of transformative models like ChatGPT and Dall-E,

play01:53

setting the stage for future generative AI technologies.

play01:59

Now, during her time as a student at Thayer,

play02:02

she applied her engineering skills to design

play02:06

and build hybrid race cars

play02:09

with Dartmouth's Formula Racing Team.

play02:14

Tomorrow at commencement,

play02:15

she will receive an honorary doctorate of science

play02:18

from Dartmouth.

play02:20

Finally, moderating our conversation today

play02:22

is Jeff Blackburn, Dartmouth class of 1991

play02:26

and current Dartmouth trustee.

play02:28

Jeff's extensive career is centered on the growth

play02:30

of global digital media and technology.

play02:34

He served as senior vice president

play02:36

of Global Media and Entertainment at Amazon until 2023,

play02:42

and has had various leadership positions at the company,

play02:46

his insights into the intersection of technology,

play02:50

and media, and entertainment

play02:52

will certainly make sure we have

play02:54

an engaging conversation today.

play02:57

So without further ado, I'll turn the conversation over.

play03:01

Please join me in welcoming Mira Murati and Jeff Blackburn.

play03:04

(audience applauding)

play03:13

- Thank you, Alexis.

play03:15

And this beautiful building, so nice.

play03:19

Mira, thank you so much for coming here and spending time.

play03:22

I can only imagine how crazy your days are right now.

play03:25

- It's great to be here.

play03:27

- It is so nice of you to take this time for everybody here.

play03:30

- Really happy to be here.

play03:31

- And I wanna get right to it

play03:33

because I know everybody just wants

play03:34

to hear what's going on in your life

play03:36

and what you're building

play03:37

'cause it's just fascinating.

play03:40

Maybe we should just start with you,

play03:43

and you leave Thayer,

play03:46

you go to Tesla for a bit, then OpenAI.

play03:48

If you could just describe kind of that period

play03:51

and then joining OpenAI in the early days.

play03:55

- Yeah, so I was...

play03:58

Right after Thayer, I actually worked in aerospace briefly,

play04:02

and then I sort of realized that

play04:03

aerospace was kind of slow-moving

play04:07

and I was very interested in Tesla's mission

play04:12

and of course really innovative challenges

play04:16

in building basically a sustainable future

play04:20

for transportation, and I decided to join then.

play04:25

And after working on Model S and Model X,

play04:29

I thought I don't really wanna become a car person.

play04:32

I kind of want to work on different challenges,

play04:36

at the intersection of really advancing

play04:39

society forward in some way,

play04:43

but also in doing this really hard engineering challenges.

play04:48

And at the time when I was at Tesla,

play04:50

I got very interested in self-driving cars

play04:54

and sort of the intersection of these technologies,

play04:57

computer vision and AI, applying them to self-driving cars.

play05:03

And I thought, okay, I'd like to learn more about AI,

play05:06

but in different domains.

play05:08

And that's when I joined the startup

play05:11

where I was leading engineering and product

play05:14

to apply AI and computer vision

play05:16

in the domain of spatial computing,

play05:19

so thinking about the next interface of computing.

play05:23

And at the time, I thought it was going to be

play05:26

virtual reality and augmented reality.

play05:28

Now I think it's a bit different,

play05:32

but I thought, what if you could use your hands

play05:37

to interact with very complex information,

play05:39

whether it's formulas, or molecules,

play05:44

or concepts in topology?

play05:48

You can just learn about these things and interact with them

play05:51

in a much more intuitive way,

play05:53

and that expands your learning.

play05:56

So it turned out VR was a bit too early then.

play06:00

And so...

play06:02

But this gave me enough

play06:05

to learn about AI in a different domain

play06:08

and sort of I think my career has always been kind of

play06:11

at the intersection of technology and various applications,

play06:15

and it gave me a different perspective

play06:17

of how far along AI was

play06:21

and what it could be applied to-

play06:22

- So the Tesla self-driving,

play06:23

you saw machine learning, deep learning.

play06:25

You could see where this is going.

play06:26

- Vision, yes. - Yeah.

play06:28

- But not clearly- - Did you work with Elon?

play06:30

- I did, yes, in the last year especially.

play06:34

But it wasn't totally clear where it was going.

play06:38

At the time, it was still apply AI to narrow applications,

play06:42

not generally.

play06:43

You're applying it to very narrow specific problems,

play06:47

and it was the same in VR and AR.

play06:49

And from then I thought

play06:53

I don't really want to just apply it to specific problems.

play06:56

I want to learn about

play07:00

just the research and really understand what is going on,

play07:03

and, from there, then go apply to other things.

play07:07

So this is when I joined OpenAI,

play07:09

and Open AI's mission was very appealing to me.

play07:12

It was a nonprofit back then,

play07:14

and the mission hasn't changed.

play07:17

The structure has changed,

play07:19

but when I joined six years ago,

play07:21

it was a nonprofit geared to build

play07:24

safe, artificial general intelligence,

play07:26

and it was the only other company doing this,

play07:29

other than DeepMind.

play07:31

Now of course there are a lot of companies

play07:33

that are sort of building some version of this.

play07:35

- A handful, yeah. - Yes.

play07:39

And that's sort of how the journey started to OpenAI.

play07:44

- Got it.

play07:44

And so you've been building a lot since you were there.

play07:47

I mean, maybe we could just, for the group,

play07:51

just some AI basics of

play07:54

machine learning, deep learning, now AI.

play07:59

it's all related, but it is something different.

play08:01

So, what is going on there

play08:03

and how does that come out in a ChatGPT, or a Dall-E,

play08:07

or your video product?

play08:09

How does it work?

play08:13

- It's not something radically new.

play08:15

In a sense, we're building on decades and decades

play08:19

of human endeavor.

play08:20

And in fact, it did start here.

play08:23

And what has happened in let's say the last decade

play08:28

is this combination of these three things

play08:31

where you have neural networks, and then a ton of data,

play08:37

and a ton of compute.

play08:39

And you combine these three things,

play08:42

and you get these really transformative AI systems or models

play08:48

that it turns out they can do these amazing things,

play08:51

like general tasks,

play08:54

but it's not really clear how.

play08:56

Deep learning just works.

play08:58

And of course we're trying to understand

play09:01

and apply tools and research

play09:03

to understand how these systems actually work,

play09:06

but we know it works from

play09:09

just having done it for the past few years.

play09:12

And we have also seen the trajectory of progress

play09:16

and how the systems have gotten better over time.

play09:20

When you look at systems like GPT-3,

play09:24

large language models that we deployed

play09:29

about three, yeah, 3 1/2 years ago.

play09:34

GPT-3 was able to sort of...

play09:37

First of all, the goal of this model

play09:39

is just to predict the next token.

play09:43

- [Jeff] It's really next word prediction.

play09:44

- Yes, pretty much. - Yeah.

play09:46

- And then we found out that if you give this model

play09:50

this objective to predict the next token,

play09:53

and you've trained it on a ton of data,

play09:57

and you're using a lot of compute,

play09:58

what you also get is this model

play10:01

that actually understands language

play10:04

at a pretty similar level to how we can.

play10:05

- [Jeff] 'Cause it's read a lot of books.

play10:07

it's read all the books.

play10:09

- It kinda knows- - Basically all the content-

play10:10

- What words should come next. - On the internet.

play10:15

But it's not memorizing what's next.

play10:19

It is really generating

play10:22

its own understanding of the pattern

play10:26

of the data that it has seen previously.

play10:28

And then we found that, okay, it's not just language.

play10:31

Actually, if you put different types of data in there,

play10:34

like code, it can code too.

play10:36

So, actually, it doesn't care

play10:38

what type of data you put in there.

play10:40

It can be images,

play10:41

it can be video,

play10:43

it can be sound,

play10:46

and it can do exactly the same thing.

play10:48

- [Jeff] Oh, we'll get to the images.

play10:50

Yeah.

play10:51

(Jeff laughs)

play10:52

But yes, text prompt can give you images or video,

play10:56

and now you're seeing even the reverse.

play10:58

- Yes, yes, exactly.

play10:59

So you can do...

play11:02

So we found out that this formula

play11:04

actually works really well,

play11:06

data, compute, and deep learning,

play11:09

and you can put different types of data,

play11:12

you can increase the amount of compute,

play11:14

and then the performance of these AI systems

play11:17

gets better and better.

play11:19

And this is what we refer to as scaling laws.

play11:22

They're not actual laws.

play11:23

It's essentially like a statistical prediction

play11:28

of the capability of the model

play11:30

improving as you put in more data and more compute into it.

play11:36

And this is what's driving AI progress today.

play11:39

- [Jeff] Why did you start with a chatbot?

play11:44

- So, yeah, in terms of product,

play11:47

actually, we started with the API.

play11:49

We didn't really know how to commercialize GPT-3.

play11:53

It's actually very, very difficult

play11:55

to commercialize AI technology.

play12:00

And initially, we took this for granted,

play12:03

and we were very focused on building the technology

play12:05

and doing research.

play12:06

And we thought, here is this amazing model,

play12:10

commercial partners, take it

play12:12

and go build amazing products on top of it.

play12:15

And then we found out that that's actually very hard.

play12:19

And so this is why we started doing it ourselves.

play12:23

And we-

play12:24

- [Jeff] That led you to build a chatbot

play12:26

'cause you just wanted to-

play12:27

- Yes, because we were trying to figure out,

play12:29

okay, why is it so hard for this

play12:31

really amazing successful companies

play12:33

to actually turn this technology into a helpful product?

play12:37

- I see.

play12:38

- And it's because it's a very odd way to build products.

play12:42

You're starting from capabilities.

play12:45

You're starting from a technology.

play12:47

You're not starting from what is the problem in the world

play12:50

that I'm trying to address.

play12:52

It's very general capability.

play12:55

- And so that leads to pretty quickly

play12:59

what you just described there, which is more data,

play13:02

more compute,

play13:04

more intelligence.

play13:05

How intelligent is this gonna get?

play13:08

I mean, it sounds like your description is

play13:09

the scaling of this is pretty linear,

play13:13

you add more of those elements and it gets smarter.

play13:19

Has it gotten smarter ChatGPT in the last couple years,

play13:22

and how quickly will it get to

play13:25

maybe human-level intelligence?

play13:28

- So yeah, these systems are already human-level

play13:31

in specific tasks,

play13:34

and of course in a lot of tasks, they're not.

play13:38

If you look at the trajectory of improvement,

play13:42

systems like GPT-3 were maybe

play13:46

let's say toddler level intelligence.

play13:50

And then systems like GPT-4 are more like

play13:53

smart high schooler intelligence.

play13:56

And then in the next couple of years,

play13:59

we're looking at PhD-level intelligence for specific tasks.

play14:06

- [Jeff] Like?

play14:07

- So things are changing and improving pretty rapidly.

play14:11

- Meaning like a year from now?

play14:13

- Yeah, a year and a half let's say.

play14:17

- Where you're having a conversation with ChatGPT

play14:22

and it seems smarter than you.

play14:25

- In some things, yeah.

play14:27

In a lot of things, yes.

play14:29

- Maybe a year away from that.

play14:31

- I mean, yeah, could be.

play14:32

- Pretty close. - Roughly.

play14:33

Roughly.

play14:34

Well, I mean, it does lead to these other questions,

play14:36

and I know you've been very vocal on this,

play14:39

which I'm happy and proud that you are doing

play14:44

on the safety aspects of it,

play14:45

but, I mean, people do want to hear from you on that.

play14:49

So I mean, what about three years from now

play14:53

when it's unbelievably intelligent?

play14:56

It can pass every single bar exam everywhere

play14:58

and every test we've ever done.

play15:00

And then it just decides it wants to

play15:03

connect to the internet on its own and start doing things.

play15:06

Is that real, and is that...

play15:10

Or, is that something you're thinking about

play15:12

as the CTO and leading the product direction?

play15:16

- Yes, we're thinking a lot about this.

play15:18

It's definitely real that you'll have

play15:20

AI systems that will have agent capabilities,

play15:24

connect to the internet, talk to each other,

play15:26

agents connecting to each other and doing tasks together,

play15:31

or agents working with humans and collaborating seamlessly.

play15:36

So sort of working with AI

play15:38

like we work with each other today.

play15:41

In terms of safety, security,

play15:45

the societal impacts aspects of this work,

play15:50

I think these things are not an afterthought.

play15:53

It can be that you sort of develop the technology

play15:56

and then you have to figure out

play15:57

how to deal with these issues.

play15:59

You kind of have to build them alongside the technology

play16:03

and actually in a deeply embedded way to get it right.

play16:07

And for capabilities and safety,

play16:11

they're actually not separate domains.

play16:14

They go hand in hand.

play16:17

It's much easier to direct a smarter system by telling it,

play16:22

okay, just don't do these things.

play16:25

than to direct a less intelligent system.

play16:30

It's sort of like training

play16:34

a smarter dog versus a dumber dog,

play16:37

and so intelligence and safety go hand in hand.

play16:41

- [Jeff] It understands the guardrails better

play16:43

because it's smarter. - Right, yeah, exactly.

play16:45

And so there is this whole debate right now around,

play16:48

do you do more safety or do you do more capability research?

play16:52

And I think that's a bit misguided

play16:54

because of course you have to think about

play16:58

the safety into deploying

play17:01

a product and the guardrails around that.

play17:04

But in terms of research and development,

play17:07

they actually go hand in hand.

play17:10

And from our perspective,

play17:13

the way we're thinking about this is

play17:16

approaching it very scientifically.

play17:17

So let's try to predict the capabilities

play17:21

that these models will be,

play17:26

the capabilities that these models will have

play17:28

before we actually finish training.

play17:30

And then along the way,

play17:32

let's prepare the guardrails for how we handle them.

play17:36

That's not really been the case in the industry so far.

play17:40

We train these models,

play17:41

and then there are these emergent capabilities we call them,

play17:47

because they emerge.

play17:49

We don't know they're going to emerge.

play17:50

We can see sort of the statistical performance,

play17:54

but we don't know whether that statistical performance

play17:56

means that the model is better at translation,

play18:00

or at doing biochemistry, or coding or something else.

play18:08

And developing this new science of capability prediction

play18:14

helps us prepare for what's to come.

play18:17

And that means...

play18:18

- [Jeff] You're saying all that safety work,

play18:19

it's kind of consistent with your development.

play18:21

- Yes, that's right. - It's a similar path.

play18:23

- Yeah, so you have to kind of bring it along and-

play18:25

- But What about these issues, Mira, like

play18:29

the video of Volodymyr Zelensky saying, "We surrender,"

play18:35

the Tom Hanks video, or a dentist ad?

play18:39

I can't remember what it was.

play18:40

What about these types of uses?

play18:44

Is that in your sphere

play18:46

or does it need to be regulation around that?

play18:49

How do you see that playing out?

play18:51

- Yeah, so I mean, my perspective on this is that

play18:53

this is our technology.

play18:54

So it's our responsibility how it's used,

play18:58

but it's also shared responsibility

play19:01

with society, civil society, government,

play19:04

content makers, media, and so on,

play19:07

to figure out how it's used.

play19:10

But in order to make it a shared responsibility,

play19:12

you need to bring people along,

play19:14

you need to give them access,

play19:15

you need to give them tools

play19:18

to understand and to provide guardrails.

play19:22

And I think-

play19:23

- [Jeff] Those things are kind of hard to stop though,

play19:25

right?

play19:28

- Well, I think it's not possible to have zero risk,

play19:33

but it's really a question of, how do you minimize risk?

play19:39

And providing people the tools to do that.

play19:44

And in the case of government, for example,

play19:48

it's very important to bring them along

play19:50

and give them early access to things,

play19:54

educate them on what's going on.

play19:56

- Governments.

play19:57

- Yes, for sure, and regulators.

play19:59

And I think perhaps the most significant thing

play20:03

that ChatGPT did was bring AI

play20:06

into the public consciousness,

play20:08

give people a real intuitive sense

play20:11

for what the technology is capable of and also of its risks.

play20:17

It's a different thing when you read about it

play20:19

versus when you try it and you try it in your business,

play20:22

and you see, okay, it cannot do these things,

play20:24

but it can do this other amazing thing,

play20:27

and this is what it actually means for the workforce

play20:31

or for my business.

play20:33

And it allows people to prepare.

play20:36

- Yeah, no, that's a great point.

play20:37

I mean, just these interfaces that you've created, ChatGPT,

play20:42

are informing people about what's coming.

play20:44

I mean, you can use it.

play20:45

You can see now what's underneath.

play20:47

Do you think there's...

play20:48

Just to finish on the government point.

play20:51

I mean, let's just talk the US right now.

play20:53

Do you wish there was

play20:55

certain regulations that we're actually just

play20:58

putting into place right now?

play20:59

Before you get to that year or two from now.

play21:03

It's extremely intelligent, a little bit scary.

play21:06

So are there things that should just be done now?

play21:10

- We've been advocating for more regulation

play21:13

on the frontier models which will have this

play21:20

amazing capabilities that also have a downside

play21:24

because of misuse.

play21:26

And we've been very open with policy makers

play21:29

and working with regulators on that.

play21:32

On the more sort of near term and smaller models,

play21:38

I think it's good to allow for

play21:42

a lot of breadth and richness in the ecosystem

play21:47

and not let people that don't have as many resources

play21:51

in compute or data fall behind,

play21:54

sort of not block the innovation in those areas.

play21:56

So we've been advocating for more regulation

play22:00

in the frontier systems

play22:02

where the risks are much higher.

play22:04

And also, you can kind of get ahead of what's coming

play22:08

versus trying to keep up with changes

play22:10

that are already happening really rapidly.

play22:13

- But you probably don't want Washington, D.C.

play22:16

regulating your release of GPT-5,

play22:20

like that you can or cannot do this.

play22:23

- I mean, it depends, actually.

play22:25

It depends on the regulation.

play22:27

So there is a lot of work that we already do

play22:29

that has now been sort of, yeah, codified in

play22:34

the White House's commitments, and this-

play22:38

- So it's underway - Work already been done.

play22:40

And it actually informed the White House's commitments

play22:45

or what the UN Commission is doing

play22:48

with the principles for AI deployments.

play22:51

And usually, I think the way to do it

play22:53

is to actually do the work,

play22:55

understand what it means in practice,

play22:57

and then create regulation based on that.

play23:01

And that's what has happened so far.

play23:03

Now, getting ahead of these frontier systems requires that

play23:07

we do a lot more forecasting

play23:09

and science of capability prediction

play23:12

in order to come up with correct regulation on that.

play23:16

- [Jeff] Well, I hope the government has people

play23:17

that can understand what you're doing.

play23:20

- It seems like more and more

play23:23

folks are joining the government

play23:25

that have better understanding of AI, but not enough.

play23:29

- Okay.

play23:31

In terms of industries,

play23:32

you have the best seat maybe in the world

play23:35

to just see how this is gonna impact different industries.

play23:38

I mean, it already is in finance, and content,

play23:42

and media, and healthcare.

play23:45

But what industries do you think, when you look forward,

play23:49

do you think are gonna be most impacted by AI

play23:52

and the work that you're doing at OpenAI?

play23:58

- Yeah, this is sort of similar to the question

play24:01

that I used to get from entrepreneurs

play24:03

when we started building a product on top of GPT-3,

play24:09

where people would ask me,

play24:12

"What can I do with it?

play24:13

What is it good for?"

play24:14

And I would say everything.

play24:16

So just try it.

play24:19

And so it's kind of similar in the sense

play24:22

that I think it'll affect everything,

play24:24

and there's not going to be an area that won't be,

play24:27

in terms of cognitive work and

play24:32

cognitive labor.

play24:35

Maybe it's gonna take a little bit longer

play24:37

to get into the physical world,

play24:40

but I think everything will be impacted by it.

play24:42

Right now we've seen...

play24:45

So I'd say there's been a bit of a lag

play24:47

in areas that have a lot of,

play24:51

that are high risk, such as healthcare or legal domains.

play24:55

And so there is a bit of a lag there and rightfully so.

play25:00

First, you want to understand and bring it in,

play25:03

use cases that are lower risk, medium risk,

play25:06

really make sure those are handled with confidence

play25:09

before applying it to things that are higher risk.

play25:14

And initially, there should be more human supervision,

play25:17

and then the delegation should change,

play25:19

and to the extent they can be more collaborative, but-

play25:24

- [Jeff] Are there use cases that you personally love,

play25:28

or are seeing, or are about to see?

play25:29

- Yeah, so I think basically

play25:36

the first part of anything that you're trying to do,

play25:39

whether it is creating new designs,

play25:43

whether it's coding,

play25:46

or writing an essay, or writing an email

play25:51

or basically everything,

play25:55

the first part of everything that you're trying to do

play25:58

becomes so much easier.

play26:01

And that's been my favorite use of it.

play26:06

so far I've really used it- - First draft for everything.

play26:08

- Yeah, first draft for everything.

play26:10

It's so much faster.

play26:12

It lowers the barrier to doing something

play26:15

and you can kind of focus on

play26:19

the part that's a bit more creative and more difficult,

play26:24

especially in coding.

play26:25

You can sort of outsource a lot of the tedious work.

play26:31

- Documentation and all that kinda stuff.

play26:33

- Yeah, documentation and...

play26:35

But in industry, we've seen so many applications.

play26:38

Customer service is definitely

play26:40

a big application with chatbots,

play26:44

and writing,

play26:48

also analysis,

play26:50

because right now we've sort of connected a lot of tools

play26:53

to the core model,

play26:56

and this makes the models far more usable

play26:59

and more productive.

play27:01

So you have tools like code analysis.

play27:04

It can actually analyze a ton of data.

play27:06

You can dump all sorts of data in there,

play27:07

and it can help you analyze and filter out the data,

play27:12

or you could use images

play27:15

and you could use browsing tool.

play27:18

So if you're preparing let's say a paper,

play27:23

the research part of the work can be done much faster

play27:27

and in a more rigorous way.

play27:30

So I think this is kind of

play27:34

the next layer that's going to be added to productivity,

play27:38

adding these tools to the core models,

play27:41

and making it very seamless.

play27:42

The model decides when to use say the analysis tool

play27:47

versus search versus something else.

play27:50

- [Jeff] Write a program.

play27:51

yeah, yeah.

play27:53

Interesting.

play27:54

Has it watched every TV show and movie in the world,

play27:57

and is it gonna start writing scripts and making films?

play28:05

- Well, it's a tool.

play28:07

And so

play28:10

it certainly can do that as a tool,

play28:14

and I expect that we will actually,

play28:17

we will collaborate with it,

play28:18

and it's going to make our creativity expand.

play28:23

And right now if you think about

play28:26

how humans consider creativity,

play28:28

we see that it's sort of this very special thing

play28:32

that's only accessible

play28:33

to this very few talented people out there.

play28:36

And these tools actually make it,

play28:40

lower the barrier for anyone

play28:42

to think of themselves as creative

play28:45

and expand their creativity.

play28:47

So in that sense, I think it's

play28:50

actually going to be really incredible.

play28:52

- Yeah, could give me 200 different cliffhangers

play28:54

for the end of episode one or whatever,

play28:56

very easily. - Yes.

play28:58

And you can extend the story,

play28:59

the story never ends.

play29:01

You can just continue. - Keep going.

play29:03

I'm done writing, but keep going.

play29:06

That's interesting.

play29:07

- But I think it's really going to be a collaborative tool,

play29:11

especially in the creative spaces where-

play29:14

- I do too.

play29:15

- Yeah, more people will become more creative.

play29:19

- There's some fear right now. - Yes, for sure.

play29:21

- But you're saying that'll switch

play29:23

and humans will figure out how to make

play29:26

the creative part of the work just better?

play29:28

- I think so, and

play29:31

some creative jobs maybe will go away,

play29:36

but maybe they shouldn't have been there in the first place

play29:40

if the content that comes out of it

play29:42

is not very high quality,

play29:44

but I really believe that using it

play29:46

as a tool for education, creativity

play29:49

will expand our intelligence,

play29:51

and creativity, and imagination.

play29:54

- Well, people thought CGI and things like that

play29:57

were gonna wreck the film industry at the time.

play29:59

They were quite scared.

play30:00

This is, I think a bigger thing,

play30:04

but yeah, anything new like that,

play30:07

the immediate reaction is gonna be,

play30:08

"Oh god, this is..."

play30:11

But I hope that you're right about film and TV.

play30:18

Okay, the job part you raised,

play30:22

and let's forget Hollywood stuff,

play30:24

but there's a lot of jobs that people are worried about

play30:28

that they think are at risk.

play30:31

What's your view on job displacement in AI

play30:35

and really not even just the work you're doing at OpenAI,

play30:39

just over overall.

play30:41

Should people be really worried about that,

play30:44

and which kind of jobs,

play30:46

or how do you see it all working out?

play30:49

- Yeah, I mean the truth is that we don't really understand

play30:53

the impact that AI is going to have on jobs yet.

play30:59

And the first step is to actually help people understand

play31:02

what the systems are capable of, what they can do,

play31:05

integrate them in their workflows,

play31:08

and then start predicting and forecasting the impact.

play31:12

And also, I think people don't realize how much

play31:18

these tools are already being used,

play31:19

and that's not being studied at all.

play31:22

And so we should be studying what's going on right now

play31:26

with the nature of work, the nature of education,

play31:29

and that's going to help us predict

play31:31

for how to prepare for these increased capabilities.

play31:35

In terms of jobs specifically, I'm not an economist,

play31:39

but I certainly anticipate that a lot

play31:43

of jobs will change, some jobs will be lost,

play31:46

some jobs will be gained.

play31:48

We don't know specifically what it's going to look like,

play31:52

but you can imagine a lot of jobs that are repetitive,

play31:57

that are just strictly repetitive

play31:59

and people are not advancing further,

play32:03

those would be replaced. - People like QA,

play32:05

and testing code, and things like that,

play32:06

those jobs are-

play32:09

- Unless they are- - They're done.

play32:10

- Yes, and if it's strictly just that or strictly-

play32:15

- And it's just one example.

play32:16

There's many things like that. - Yeah, many things.

play32:17

- Do you think there'll be enough jobs created elsewhere

play32:20

to compensate for that?

play32:24

- I think there are going to be a lot of jobs created,

play32:26

but the weight of how many jobs are created,

play32:30

how many jobs are changed, how many jobs are lost,

play32:34

I don't know.

play32:36

And I don't think anyone knows really,

play32:38

because it's not being rigorously studied,

play32:41

and it really should be.

play32:45

And yeah, but I think the economy will transform

play32:50

and there is going to be a lot of value created

play32:54

by these tools.

play32:55

And so the question is, how do you harness this value?

play33:01

If the nature of jobs really changes,

play33:04

then how are we distributing

play33:07

sort of the economic value into society?

play33:10

Is it through public benefits?

play33:12

Is it through UBI?

play33:13

Is it through some other new system?

play33:15

So there are a lot of questions to explore and figure out.

play33:20

- There's a big role for higher ed

play33:22

in that work that you're describing there.

play33:24

It's just not quite happening yet.

play33:26

- Yeah.

play33:28

- What else for higher ed and this future of AI?

play33:33

What do you think is the role of higher ed

play33:35

in what you see and how this is evolving?

play33:39

- I think really figuring out

play33:44

how we use these tools and AI to advance education.

play33:48

Because I think one of the most powerful

play33:51

applications of AI is going to be in education,

play33:55

advancing our creativity and knowledge.

play33:59

And we have an opportunity

play34:02

to basically build super high quality education

play34:06

and very accessible and ideally free for anyone in the world

play34:12

in any of the languages or cultural nuances

play34:15

that you can imagine.

play34:18

You can really have customized understanding

play34:22

and customized education for anyone in the world.

play34:26

And of course in institutions like Dartmouth,

play34:29

the classrooms are smaller and you have a lot of attention,

play34:34

but still you can imagine having just one-on-one tutoring,

play34:40

even here, let alone in the rest of the world.

play34:41

- Supplementing. - Yes.

play34:44

Because we don't spend enough time learning how to learn.

play34:47

That sort of happens very late, maybe in college.

play34:52

And that is such a fundamental thing, how you learn,

play34:56

otherwise you can waste a lot of time.

play35:00

And the classes, the curriculum, the problem sets,

play35:05

everything can be customized

play35:07

to how you actually learn as an individual.

play35:10

- So you think it could really, at a place like Dartmouth,

play35:12

it could complement some of the learning that's happening.

play35:14

- Oh, absolutely, yeah. - Just have AIs

play35:16

as tutors and what not.

play35:21

Should we open it up?

play35:22

Do you mind taking some questions from the audience?

play35:25

Is that okay? - Happy to, yeah.

play35:26

- All right.

play35:27

Why don't we do that.

play35:30

Dave, you wanna start?

play35:32

- [Dave] Sure, if you don't.

play35:33

- [Speaker] Hold on one second.

play35:34

I'll give you a microphone.

play35:39

- One of Dartmouth's first computer scientists,

play35:42

John Kemeny, once gave a lecture about how

play35:46

every computer program that humans build

play35:49

embeds human values into that program,

play35:51

whether intentionally or unintentionally.

play35:54

And what I'm wondering is what human values do you think

play35:56

are embedded in GPT products,

play35:59

or, put a different way, how should we embed values in,

play36:03

like respect, equity, fairness, honesty, integrity,

play36:07

things like that into these kinds of tools?

play36:12

- That's a great question and a really hard one

play36:16

and something that we think about,

play36:18

we've been thinking about for years.

play36:21

So right now, if you look at these systems,

play36:25

a lot of the values are input,

play36:28

are basically put in in the data,

play36:31

and that's the data in the internet, license data,

play36:37

also data that comes through human contractors

play36:40

that will label certain problems or questions.

play36:46

And each of these inputs has specific value.

play36:51

So that's a collection of their values and that matters.

play36:55

And then once you actually

play36:57

put these products into the world,

play36:58

I think you have an opportunity to get

play37:00

a much broader collection of values

play37:04

by putting it in the hands of many, many people.

play37:07

So right now, ChatGPT,

play37:10

we have a free offering of ChatGPT

play37:12

that has the most capable systems,

play37:16

and it's used by over 100 million people in the world.

play37:21

And each of these people can provide feedback into ChatGPT.

play37:26

And if they allow us to use the data,

play37:29

we will use it

play37:33

to create this aggregate of values

play37:35

that makes the system better,

play37:37

more aligned with what people want it to do.

play37:40

But that's sort of the default system.

play37:42

What you kind of want on top of it is also a layer for customization, where each community can sort of have their own values, let's say a school, a church, a country, even a state. They can provide their own values that are more specific and more precise on top of this default system that has basic human values. And so we're working on ways to do that as well.
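
To make the layering she describes concrete, one way to picture it is a default value set with community-specific overrides applied on top. This is only a minimal illustrative sketch of that idea; the policy names and values are hypothetical, not anything OpenAI has published.

    # Minimal sketch of layered value customization: a shared default
    # policy plus community-specific overrides. All values are made up.
    DEFAULT_VALUES = {
        "harassment": "disallow",
        "medical_advice": "allow_with_disclaimer",
        "profanity": "allow",
    }

    def effective_policy(default: dict, overrides: dict) -> dict:
        """Community settings win; anything unspecified falls back
        to the shared default value set."""
        policy = dict(default)
        policy.update(overrides)
        return policy

    # e.g. a school tightens the defaults for its own deployment
    school = effective_policy(DEFAULT_VALUES, {"profanity": "disallow"})
    print(school["profanity"])   # disallow (school override)
    print(school["harassment"])  # disallow (inherited default)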

But it's actually, it's obviously a really difficult problem, because you have the human problem, where we don't agree on things, and then you have the technology problem. And on the technology problem, I think we've made a lot of progress. We have methods like reinforcement learning with human feedback, where you give people a chance to provide their values into the system.

We have just developed this thing we call the Spec, which provides transparency into the values that are built into the system. And we're building a sort of feedback mechanism where we collect input and data on how to advance the Spec. You can think of it as like a constitution for AI systems, but it's a living one: it evolves over time, because our values also evolve over time, and it becomes more precise. It's something we're working on a lot.

And I think right now we're thinking about basic values. But as the systems become more and more complex, we're going to have to think about more granularity in the values that's...
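
For readers wondering what it means mechanically to "provide values into the system", reinforcement learning from human feedback typically begins by fitting a reward model to pairwise human preferences. The toy sketch below shows only that scoring step, with made-up features; it is not OpenAI's implementation.

    # Toy sketch of the preference step behind RLHF: fit a linear reward
    # model so preferred responses score higher than rejected ones,
    # using the Bradley-Terry (log-sigmoid margin) loss.
    import math

    def reward(features, w):
        return sum(f * wi for f, wi in zip(features, w))

    def train_reward_model(comparisons, dim, lr=0.1, epochs=200):
        """comparisons: list of (preferred_features, rejected_features)."""
        w = [0.0] * dim
        for _ in range(epochs):
            for preferred, rejected in comparisons:
                margin = reward(preferred, w) - reward(rejected, w)
                scale = 1.0 / (1.0 + math.exp(margin))  # grad of -log sigmoid
                for i in range(dim):
                    w[i] += lr * scale * (preferred[i] - rejected[i])
        return w

    # Raters preferred the more helpful, less rude response
    # (toy features: [helpfulness, rudeness]).
    data = [([0.9, 0.1], [0.2, 0.8]), ([0.7, 0.0], [0.6, 0.9])]
    w = train_reward_model(data, dim=2)
    print(reward([0.9, 0.1], w) > reward([0.2, 0.8], w))  # True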

- [Jeff] Can you keep that from, like, getting angry?

- Getting angry? - Yeah. Is that one of the values?

- Well, that should be... No. So that should actually be up to you. So if you as a user-

- Oh, if you want an angry chatbot you can have it. - Yes, if you want an angry chatbot, you should have an angry chatbot. Yeah.

- Okay, right here, yeah.

- Hello. Thank you. Dr. Joy here. And also, congratulations on the honorary degree and all you've been doing with OpenAI. I'm really curious how you're thinking about both creative rights and biometric rights. Earlier you were mentioning maybe some creative jobs ought not to exist, and you've had many creatives who are thinking about issues of consent, of compensation, whether it's proprietary models or even open source models, where the data is taken from the internet. So I'm really curious about your thoughts on consent and compensation as it deals with creative rights. And since we're in a university, you know the multi-part question piece. So the other thing is thinking about biometric rights, when it comes to the voice, when it comes to faces and so forth. With the recent controversy around the voice of Sky, and how you can also have people who sound alike, people who look alike, and all of the disinformation threats coming up in such a heavy election year, I would be very curious about your perspective on the biometric rights aspects as well.

- Yeah, so... Okay, I'll start with the last part. We've done a ton of research on voice technologies, and we didn't release them until recently precisely because they pose so many risks and issues. But it's also important to kind of bring society along, give access in a way that you can have guardrails and control the risks, and let other people study and make advances on these issues. For example, we're partnering with institutions to help us think about human-AI interaction now that you have voice and video, which are very emotionally evocative modalities. And we need to start understanding how these things are going to play out and what to prepare for.

In that particular case, the voice of Sky was not Scarlett Johansson's, and it was not meant to be, and it was a completely parallel process. I was running the selection of the voice, and our CEO was having conversations with Scarlett Johansson and... But out of respect for her, we took it down. And some people see some similarities. These things are subjective, and I think you can sort of... Yeah, you can kind of come up with red teaming processes where, if the voice, for example, was deemed to be super, super similar to a very well-known public voice, then maybe you don't select that specific one. In our red teaming this didn't come up, but that's why it's important to also have more extended red teaming, to catch these things early if needed.

But more broadly, with the issue of biometrics, I think our strategy here is to give access to a few people, initially experts or red teamers who help us understand the risks and capabilities very well. Then we build mitigations, and then we give access to more people as we feel more confident around those mitigations. So we don't allow people to make their own voices with this technology, because we're still studying the risks and we don't feel confident that we can handle misuse in that area yet. But we feel good about handling misuse with the guardrails that we have on very specific voices at a small scale right now, which is essentially extended red teaming.

And then when we extend it to a thousand users, our alpha release, we will be working very closely with these users, gathering feedback and understanding the edge cases, so we can prepare for those edge cases as we expand use to, say, 100,000 people. And then it's going to be a million, and then 100 million, and so on. But it's done with a lot of control, and this is what we call iterative deployment. And if we can't all get comfortable around these use cases, then we just won't release them in this specific... To extended users, or for these specific use cases, we will probably try to lobotomize the product in a certain way, because capability and risk go hand in hand.
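
One way to picture the iterative-deployment loop she describes is as a gate that widens the user cohort only while observed misuse stays acceptable. A minimal sketch under that assumption; the cohort sizes and threshold are invented for illustration, not OpenAI's actual numbers.

    # Toy sketch of iterative deployment: expand access cohort by
    # cohort, but only while the measured misuse rate stays under a
    # safety threshold. All numbers are illustrative.
    COHORTS = [100, 1_000, 100_000, 1_000_000, 100_000_000]
    MAX_INCIDENT_RATE = 0.001  # hypothetical acceptable misuse rate

    def next_cohort(current: int, incidents: int, users: int) -> int:
        """Advance to the next cohort only if the incident rate at the
        current stage is acceptable; otherwise hold and mitigate."""
        if incidents / max(users, 1) > MAX_INCIDENT_RATE:
            return current  # hold: build mitigations, re-run red teaming
        larger = [c for c in COHORTS if c > current]
        return larger[0] if larger else current

    size = 100  # start with experts / red teamers
    size = next_cohort(size, incidents=0, users=100)    # -> 1_000 (alpha)
    size = next_cohort(size, incidents=5, users=1_000)  # 0.005 > cap: hold
    print(size)  # 1000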

But we're also working on a lot of research to help us deal with issues of content provenance and content authenticity, so people have tools to understand if something is a deepfake or spreads misinformation and so on. Since the beginning of OpenAI, actually, we've been working on studying misinformation, and we've built a lot of tools, like watermarking and content policies, that allow us to manage the possibility of misinformation. Especially this year, given that it's a global election year, we've been intensifying that work even more.

But this is an extremely challenging area that we, as the makers of the technology and products, need to do a lot of work on, but also partner with civil society, and media, and content makers to figure out how to address these issues. When we make technologies like audio or Sora, the first people that we work with, after the red teamers that study the risks, are the content creators, to actually understand how the technology would help them and how you build a product that is both safe, and useful, and helpful, and that actually advances society. And this is what we did with Dall-E, and this is what we're doing with Sora, our video generation model, again.

And the first part of your question...

- [Dr. Joy] Creative rights. So for the-

- Creative rights.

- [Dr. Joy] About compensation, consent-

- Yes. - Control and credit.

- Yeah, that's also very important and challenging. Right now we do a lot of partnerships with media companies, and we also give people a lot of control over how their data is used in the product. So if they don't want their data to be used to improve the model, or for us to do any research or train on it, that is totally fine. We do not use the data. And then for the creator community in general, we give access to these tools early, so we can hear from them first on how they would want to use it and build products that are most useful.

And also, these things are produced out of research, so we don't have to build a product at all costs. We'd only do it if we can figure out a modality that's actually helpful in advancing people forward.

And we're also experimenting with methods to basically create tools that allow people to be compensated for data contribution. This is quite tricky, both from a technical perspective and also in just building a product like that, because you have to sort of figure out how much value a specific amount of data creates in a model that has been trained on it afterwards. And for individual data it would be very difficult to gauge how much value it provides. But if you can sort of create consortiums and pools of aggregated data, where people can provide their data together, maybe that would be better. So for the past, I'd say, two years, we've been experimenting with various versions of this. We haven't deployed anything, but we've been experimenting on the technical side and trying to really understand the technical problem. And we're a bit further along, but it's a really difficult issue. - It is.
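
The gauging problem described above, estimating how much value a slice of training data adds to a model, is often approximated with leave-one-out (or Shapley-style) attribution over data pools. The sketch below is a toy under that assumption: the quality metric, pool names, and budget are all made up for illustration.

    # Toy sketch of valuing pooled data contributions by leave-one-out:
    # measure quality without each pool and pay out in proportion to
    # the drop. The metric and all numbers are illustrative.
    def model_quality(pools: dict) -> float:
        # Stand-in for "train on these pools and evaluate": diminishing
        # returns on quality-weighted example counts.
        total = sum(n * q for n, q in pools.values())
        return total / (total + 10_000)

    def leave_one_out_values(pools: dict) -> dict:
        baseline = model_quality(pools)
        return {name: baseline - model_quality(
                    {k: v for k, v in pools.items() if k != name})
                for name in pools}

    # name -> (num_examples, avg_quality); hypothetical data pools
    pools = {"news_consortium": (50_000, 0.9),
             "artists_pool": (20_000, 0.8),
             "forum_dump": (100_000, 0.3)}
    values = leave_one_out_values(pools)
    budget = 1_000_000  # hypothetical compensation budget
    shares = {k: round(budget * v / sum(values.values()))
              for k, v in values.items()}
    print(shares)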

I bet there'll be a lot of new companies trying to build solutions for that. - Yeah, there are other companies.

- It's just so hard.

- It is.

- How about right there? Yeah.

- [Participant] Thank you so much for your time and for taking time out to come talk to us. My question is pretty simple. If you had to come back to school today, if you found yourself again at Thayer or at Dartmouth in general, what would you do again and what would you not do again? What would you major in, or would you get involved in more things? Something like that.

- I think I would study the same things, but maybe with less stress.

(all laugh)

Yeah, I think I'd still study math and do... Yeah. Maybe I would take more computer science courses, actually. But yeah, I would stress less, because then you study with more curiosity and more joy, and that's more productive.

But yeah, I remember, as a student, I was always a bit stressed about what was going to come after. And knowing what I know now, I'd say to my younger self, and actually everyone would tell me, "Don't be stressed," but somehow it didn't... When I talked to older alums, they'd always say, "Try to enjoy it and be fully immersed, and be less stressed."

I think, though, on specific courses, it's good to have, especially now, a very broad range of subjects and get a bit of understanding of everything. I find that true both at school and after, because even now, working in a research organization, I'm constantly learning. You never stop. It is very helpful to understand a little bit of everything.

- [Jeff] Thank you so much, 'cause I'm sure your life- - Thank you. - Is stressful.

(all laughing)

(audience applauding)

- Thank you so much.

- Thank you for being here today, and also thank you for the incredibly important work you're doing for society, quite honestly. It's really important, and I'm glad you're in the seat.

- Thank you for having me.

- Thank you from all of us here at Thayer and Dartmouth as well. So I thought that would be a good place to end on too, some good advice for our students. What a fascinating conversation, and I just wanted to thank you all again for coming. Enjoy the rest of Commencement Weekend.

(no audio)

(gentle music)
