AI Everywhere: Transforming Our World, Empowering Humanity
Summary
TLDR: As part of a special event at Dartmouth, a conversation was held with Mira Murati, one of the leading specialists in artificial intelligence and an alumna of the engineering school. The OpenAI executive discussed the development of innovative models such as ChatGPT and Dall-E, and looked ahead to future applications of AI, safety challenges, and ethical considerations. The conversation also covered AI's impact on the labor market and education, and the importance of regulating the use of advanced technologies.
Takeaways
- 🌟 Mira Murati, Chief Technology Officer at OpenAI and a 2012 graduate of Dartmouth's engineering school, is one of the nation's foremost leaders in artificial intelligence.
- 🎓 Dartmouth had the honor of welcoming Mira Murati along with Jeff Blackburn, a Dartmouth trustee and former senior vice president of Global Media and Entertainment at Amazon.
- 🏗️ After graduating, Mira Murati began her career in aerospace, then moved to Tesla, where she worked on the Model S and Model X, and later joined a startup applying AI and computer vision to spatial computing.
- 🤖 She was drawn to OpenAI by its mission to build safe artificial general intelligence and the opportunity to work on fundamental research in the field.
- 📈 At OpenAI, Mira led the development of transformative models such as ChatGPT and Dall-E, which set the stage for future generative AI technologies.
- 🔧 A key part of OpenAI's work is understanding and improving AI safety and control, including preventing systems from connecting to the internet without authorization and acting autonomously.
- 🧠 The discussion covered the trajectory of AI and its impact on various industries, with the assertion that AI will affect every aspect of cognitive work.
- 👨🏫 In Mira's view, higher education should drive innovation in learning, using AI to create high-quality, accessible, and personalized education.
- 🛠️ Mira emphasized the importance of learning the skills needed to work with new technologies and of adapting to the rapid pace of progress in AI.
- 🌐 Ensuring the ethics and safety of AI is essential, including questions of copyright, biometric rights, and protecting personal privacy from AI misuse.
- 💡 In closing, Mira Murati advised students to study with joy and curiosity and to reduce the stress associated with future career paths.
Q & A
What event takes place in the video transcript?
-The transcript covers a special event: a conversation with Mira Murati, one of the leading specialists in artificial intelligence and an alumna of Dartmouth's engineering school.
What is Alexis Abramson's title?
-Alexis Abramson is the dean of the Thayer School of Engineering at Dartmouth College.
What is Mira Murati known for?
-Mira Murati is known as the Chief Technology Officer at OpenAI and a 2012 alumna of Dartmouth's engineering school. She is recognized for her pioneering work in AI, including the development of models such as ChatGPT and Dall-E.
Which special guest is present at the event?
-The special guest is Joy Buolamwini, known for her work in AI ethics and algorithmic justice.
What is Mira Murati's background in innovation?
-Mira Murati has experience in the aerospace industry as well as at Tesla, where she worked on the Model S and Model X. She is also interested in applying AI and computer vision to autonomous driving and spatial computing.
What technologies does OpenAI develop?
-OpenAI develops transformative models such as ChatGPT and Dall-E, which set the stage for future generative AI technologies.
What are generative AI technologies and how do they work?
-Generative AI technologies are systems capable of creating new data, such as text, images, or video, by training deep neural networks on large volumes of data.
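The "predict the next token" idea that underlies these models can be illustrated at a toy scale with a character-level bigram model. This is a minimal sketch in plain Python with a made-up corpus; real systems use deep neural networks trained over vastly larger datasets, but the shape of the loop is the same: learn what tends to follow what, then sample.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each character, which characters were seen to follow it."""
    counts = defaultdict(list)
    for a, b in zip(text, text[1:]):
        counts[a].append(b)
    return counts

def generate(counts, start, length, seed=0):
    """Repeatedly sample the next character from the learned counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no observed continuation for this character
            break
        out.append(rng.choice(followers))
    return "".join(out)

corpus = "the model predicts the next token from the data it has seen"
model = train_bigram(corpus)
print(generate(model, "t", 20))
```

Swapping characters for words (or subword tokens) and the frequency table for a neural network gives the basic recipe behind large language models.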
What safety and ethics aspects are considered in AI development?
-AI development considers safety aspects such as preventing unauthorized internet access and autonomous actions by systems, as well as societal consequences such as the impact on jobs and the need for regulation.
What opportunities for innovation does AI offer in education?
-AI offers opportunities to create high-quality, accessible, and personalized education capable of meeting the individual needs of learners around the world.
What changes might AI bring to industries?
-AI could drive change across all industries, especially in cognitive work, including finance, content, media, healthcare, and many others.
What issues might arise around creators' rights and biometric data when using AI?
-Issues around creators' rights can include questions of consent, compensation, and copyright when data is used to train AI models. Biometric data such as voices and faces raises questions about impersonation and rights to one's likeness.
What is the 'Spec' in the context of AI development and how does it help?
-The Spec is a tool that provides transparency into the values embedded in an AI system. It can be understood as a 'constitution' for AI systems that evolves and becomes more precise over time.
What approach does OpenAI use to test and deploy its products?
-OpenAI uses iterative deployment, starting with limited access for experts and 'red teaming' to probe risks, then gradually widening access while gathering feedback and understanding edge cases.
What precautions are taken to prevent negative consequences of AI technologies?
-OpenAI develops and applies measures such as watermarking, content policies, tools for detecting deepfakes and tracking the spread of information, and partnerships with various civil-society groups to manage the risks and problems associated with AI.
What are Mira Murati's views on the future of education and its role in integrating AI?
-Mira Murati believes education should integrate AI to advance creativity and knowledge, providing high-quality, accessible education that can adapt to learners' individual needs.
What advice does Mira Murati give to Dartmouth students?
-Mira Murati advises students to study with less stress, approach learning with joy and curiosity, and seek a broad understanding across fields of knowledge.
Outlines
🎓 Introduction and welcome
Alexis Abramson, dean of the Thayer School of Engineering at Dartmouth, opens the event by welcoming guests and participants. She introduces Mira Murati, an outstanding leader in artificial intelligence and a Dartmouth Engineering alumna. Special guests are also welcomed, including Joy Buolamwini, known for her work in AI ethics and algorithmic justice. Dartmouth's history of AI innovation is discussed, along with Mira Murati's role at OpenAI, where she led the development of models such as ChatGPT and Dall-E. Particular attention is given to setting up the conversation with Mira and her contribution to generative AI technologies.
🚀 Early career and the move to OpenAI
Mira Murati recounts her transition from aerospace to a role at Tesla, an important step driven by her interest in the innovative challenges of building a sustainable future for transportation. She notes that although her work at Tesla centered on the Model S and Model X, her interest in artificial intelligence and self-driving cars led her to want to learn more about the field. A key step was joining a startup where she applied AI and computer vision to spatial computing. This gave her a new perspective on applying AI, which later led her to join OpenAI and work on safe artificial general intelligence.
🤖 The development of AI and its potential
The conversation covers how AI has developed over recent decades through the combination of neural networks, abundant data, and compute power. Together, these three factors produce transformative AI systems capable of general tasks, although how they accomplish them is not always clear. Systems such as GPT-3 demonstrate language understanding at a level comparable to humans and the ability to generate text based on their training. The potential of AI to work with different types of data, including code, images, video, and sound, is also discussed, underscoring the breadth of its applications.
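The "scaling laws" Murati mentions later in the conversation describe this data-and-compute relationship empirically: capability improves as a predictable power law of resources. The sketch below fits such a power law to hypothetical numbers (the compute budgets and loss values are invented for illustration, not OpenAI measurements):

```python
import math

# Hypothetical (made-up) measurements: training compute (FLOPs) vs. model loss.
compute = [1e18, 1e19, 1e20, 1e21]
loss    = [4.0, 3.2, 2.56, 2.048]  # here loss falls 20% per 10x compute

# Fit loss ~ a * compute**(-b) by linear regression in log-log space,
# where a power law becomes a straight line.
xs = [math.log10(c) for c in compute]
ys = [math.log10(l) for l in loss]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
b = -slope                      # power-law exponent
a = 10 ** (mean_y - slope * mean_x)

def predict_loss(c):
    """Extrapolate the fitted power law to an unseen compute budget."""
    return a * c ** (-b)

print(f"exponent b = {b:.4f}")
print(f"predicted loss at 1e22 FLOPs = {predict_loss(1e22):.4f}")
```

As Murati notes, these are not physical laws but statistical extrapolations; the fit is only as trustworthy as the assumption that the trend continues.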
🔐 AI safety and deployment
The importance of AI safety and its attendant challenges is discussed from the standpoint of technological and social responsibility. Safety and capability should not be treated as separate concerns but developed together. Reducing risks and providing tools to understand and control AI is a core task. The need for a scientific approach to predicting AI capabilities and building the corresponding guardrails in advance is also raised.
🌐 Responsible use of AI and public responsibility
Responsible use of AI technologies is discussed, including questions of copyright, biometric rights, and protecting personal privacy. Responsibility for how AI is used lies with both developers and society as a whole. Providing education, tools, and access so that all stakeholders, including governments and regulators, can understand and control AI is highlighted as a key element of risk management.
🛠️ AI's impact on industries and the labor market
The potential impact of AI on various industries and the labor market is analyzed. AI touches every sphere of cognitive work and could bring significant change to the professional landscape. The conversation also covers job losses, the creation of new opportunities, and the transformation of the economy under AI, including the distribution of economic value, public benefits, and support systems such as basic income.
🎨 AI in creativity and education
The potential of AI to expand creative possibilities and teaching is discussed. AI can lower barriers to creativity by providing tools that assist with design, coding, writing, and more. The use of AI to improve learning is also covered, including personalized curricula and support for learning skills.
🤝 Collaboration with society and copyright protection
The closing part of the conversation emphasizes the importance of collaborating with media companies, content creators, and society to build AI products that advance society and are genuinely useful. Protection of copyright and biometric rights is also discussed, including questions of data use, consent, compensation, and control over data in AI products.
🎓 Reflecting on a return to school
In closing, Mira Murati answers a question about what she would do differently if she went back to school. She says she would study the same subjects but with less stress, learning with joy and curiosity. She also underscores the value of studying a broad range of subjects for a better understanding of the world and for growth as a researcher.
Keywords
💡Artificial intelligence (AI)
💡Generative AI technologies
💡Machine learning
💡Transformative models
💡OpenAI
💡AI safety
💡Scaling laws
💡AI ethics
💡Dataset
💡Technological progress
💡Innovation in education
Highlights
Alexis Abramson, dean of Thayer School of Engineering, welcomed attendees to a special event featuring a conversation with AI leader Mira Murati.
Mira Murati, a Dartmouth Engineering alum and CTO at Open AI, is known for her work on AI technologies like ChatGPT and Dall-E.
Joy Buolamwini, renowned for her work in AI ethics, was also present to receive an honorary degree from Dartmouth.
Dartmouth's history with AI innovation dates back to the first conference on artificial intelligence in 1956.
Mira Murati's early career included working in aerospace and Tesla, where she became interested in the intersection of AI and self-driving cars.
Murati's work at a startup involved applying AI and computer vision to spatial computing, exploring new interfaces for human-computer interaction.
OpenAI's mission to build safe artificial general intelligence aligns with Murati's interest in AI's potential for societal advancement.
The development of transformative AI models like GPT-3 and Dall-E has been driven by a combination of neural networks, data, and compute power.
AI systems have demonstrated the ability to understand and generate language, code, and even create images and videos from text prompts.
Murati discusses the concept of 'scaling laws' in AI, where increased data and compute lead to improved model performance.
Commercializing AI technology has proven challenging, leading OpenAI to build its own products, starting with the API and later ChatGPT.
AI systems are already human-level in specific tasks, and PhD-level intelligence for specific tasks is projected within the next couple of years.
Safety and security in AI development are critical, with OpenAI focusing on building these considerations into technology from the outset.
Murati emphasizes the importance of societal readiness for AI, advocating for shared responsibility and education on AI capabilities and risks.
OpenAI is working on predictive capabilities to prepare for the future risks and safety concerns associated with advanced AI systems.
The potential for AI to impact various industries is vast, with Murati suggesting that AI will transform cognitive work across the board.
Mira Murati's vision for AI in education is to create highly accessible, customized learning experiences globally.
The conversation also touched on the importance of studying the current use of AI tools to predict and prepare for their future impact on jobs and society.
Murati's advice for students is to study with less stress, maintain curiosity, and enjoy the learning process.
Transcripts
- Good afternoon, everyone.
Great to see a nice packed room here in our new building.
My name is Alexis Abramson,
dean of Thayer School of Engineering at Dartmouth,
and it's truly a pleasure to welcome you all
to this very special event,
a conversation with Mira Murati,
one of our nation's foremost leaders
in artificial intelligence
and also a Dartmouth Engineering alum.
Before we get started,
I wanna extend a special welcome to a special guest,
Joy Buolamwini,
who is also renowned for her work in AI,
AI ethics, and algorithmic justice.
She'll also be receiving her honorary degree
from Dartmouth tomorrow.
And a warm welcome to Mira and all of you who
either are part of her family now,
or are part of her family when she was here at Dartmouth,
including her brother, Ernel Murati,
also a Thayer alum from the class of 2016.
Thank you to our partners
at the Neukom Institute for Computational Science
and the Department of Computer Science.
From Dartmouth's very first seminal conference
on artificial intelligence in 1956
to our current multidisciplinary research
on large language models and precision health,
Dartmouth has long been at the forefront of AI innovation.
So we are especially thrilled to have Mira,
Chief Technology Officer at Open AI
and Thayer School of Engineering's class of 2012
with us today.
She is known for her pioneering work on some
of the most talked about AI technologies of our time.
At Open AI, she has spearheaded the development
of transformative models like ChatGPT and Dall-E,
setting the stage for future generative AI technologies.
Now, during her time as a student at Thayer,
she applied her engineering skills to design
and build hybrid race cars
with Dartmouth's Formula Racing Team.
Tomorrow at commencement,
she will receive an honorary doctorate of science
from Dartmouth.
Finally, moderating our conversation today
is Jeff Blackburn, Dartmouth class of 1991
and current Dartmouth trustee.
Jeff's extensive career is centered on the growth
of global digital media and technology.
He served as senior vice president
of Global Media and Entertainment at Amazon until 2023,
and has had various leadership positions at the company,
his insights into the intersection of technology,
and media, and entertainment
will certainly make sure we have
an engaging conversation today.
So without further ado, I'll turn the conversation over.
Please join me in welcoming Mira Murati and Jeff Blackburn.
(audience applauding)
- Thank you, Alexis.
And this beautiful building, so nice.
Mira, thank you so much for coming here and spending time.
I can only imagine how crazy your days are right now.
- It's great to be here.
- It is so nice of you to take this time for everybody here.
- Really happy to be here.
- And I wanna get right to it
because I know everybody just wants
to hear what's going on in your life
and what you're building
'cause it's just fascinating.
Maybe we should just start with you,
and you leave Thayer,
you go to Tesla for a bit, then OpenAI.
If you could just describe kind of that period
and then joining OpenAI in the early days.
- Yeah, so I was...
Right after Thayer, I actually worked in aerospace briefly,
and then I sort of realized that
aerospace was kind of slow-moving
and I was very interested in Tesla's mission
and of course really innovative challenges
in building basically a sustainable future
for transportation, and I decided to join then.
And after working on Model S and Model X,
I thought I don't really wanna become a car person.
I kind of want to work on different challenges,
at the intersection of really advancing
society forward in some way,
but also in doing this really hard engineering challenges.
And at the time when I was at Tesla,
I got very interested in self-driving cars
and sort of the intersection of these technologies,
computer vision and AI, applying them to self-driving cars.
And I thought, okay, I'd like to learn more about AI,
but in different domains.
And that's when I joined the startup
where I was leading engineering and product
to apply AI and computer vision
in the domain of spatial computing,
so thinking about the next interface of computing.
And at the time, I thought it was going to be
virtual reality and augmented reality.
Now I think it's a bit different,
but I thought, what if you could use your hands
to interact with very complex information,
whether it's formulas, or molecules,
or concepts in topology?
You can just learn about these things and interact with them
in a much more intuitive way,
and that expands your learning.
So it turned out VR was a bit too early then.
And so...
But this gave me enough
to learn about AI in a different domain
and sort of I think my career has always been kind of
at the intersection of technology and various applications,
and it gave me a different perspective
of how far along AI was
and what it could be applied to-
- So the Tesla self-driving,
you saw machine learning, deep learning.
You could see where this is going.
- Vision, yes. - Yeah.
- But not clearly- - Did you work with Elon?
- I did, yes, in the last year especially.
But it wasn't totally clear where it was going.
At the time, it was still apply AI to narrow applications,
not generally.
You're applying it to very narrow specific problems,
and it was the same in VR and AR.
And from then I thought
I don't really want to just apply it to specific problems.
I want to learn about
just the research and really understand what is going on,
and, from there, then go apply to other things.
So this is when I joined OpenAI,
and Open AI's mission was very appealing to me.
It was a nonprofit back then,
and the mission hasn't changed.
The structure has changed,
but when I joined six years ago,
it was a nonprofit geared to build
safe, artificial general intelligence,
and it was the only other company doing this,
other than DeepMind.
Now of course there are a lot of companies
that are sort of building some version of this.
- A handful, yeah. - Yes.
And that's sort of how the journey started to OpenAI.
- Got it.
And so you've been building a lot since you were there.
I mean, maybe we could just, for the group,
just some AI basics of
machine learning, deep learning, now AI.
It's all related, but it is something different.
So, what is going on there
and how does that come out in a ChatGPT, or a Dall-E,
or your video product?
How does it work?
- It's not something radically new.
In a sense, we're building on decades and decades
of human endeavor.
And in fact, it did start here.
And what has happened in let's say the last decade
is this combination of these three things
where you have neural networks, and then a ton of data,
and a ton of compute.
And you combine these three things,
and you get these really transformative AI systems or models
that it turns out they can do these amazing things,
like general tasks,
but it's not really clear how.
Deep learning just works.
And of course we're trying to understand
and apply tools and research
to understand how these systems actually work,
but we know it works from
just having done it for the past few years.
And we have also seen the trajectory of progress
and how the systems have gotten better over time.
When you look at systems like GPT-3,
large language models that we deployed
about three, yeah, 3 1/2 years ago.
GPT-3 was able to sort of...
First of all, the goal of this model
is just to predict the next token.
- [Jeff] It's really next word prediction.
- Yes, pretty much. - Yeah.
- And then we found out that if you give this model
this objective to predict the next token,
and you've trained it on a ton of data,
and you're using a lot of compute,
what you also get is this model
that actually understands language
at a pretty similar level to how we can.
- [Jeff] 'Cause it's read a lot of books.
It's read all the books.
- It kinda knows- - Basically all the content-
- What words should come next. - On the internet.
But it's not memorizing what's next.
It is really generating an understanding
of its own understanding of the pattern
of the data that it has seen previously.
And then we found that, okay, it's not just language.
Actually, if you put different types of data in there,
like code, it can code too.
So, actually, it doesn't care
what type of data you put in there.
It can be images,
it can be video,
it can be sound,
and it can do exactly the same thing.
- [Jeff] Oh, we'll get to the images.
Yeah.
(Jeff laughs)
But yes, text prompt can give you images or video,
and now you're seeing even the reverse.
- Yes, yes, exactly.
So you can do...
So we found out that this formula
actually works really well,
data, compute, and deep learning,
and you can put different types of data,
you can increase the amount of compute,
and then the performance of these AI systems
gets better and better.
And this is what we refer to as scaling laws.
They're not actual laws.
It's essentially like a statistical prediction
of the capability of the model
improving as you put in more data and more compute into it.
And this is what's driving AI progress today.
- [Jeff] Why did you start with a chatbot?
- So, yeah, in terms of product,
actually, we started with the API.
We didn't really know how to commercialize GPT-3.
It's actually very, very difficult
to commercialize AI technology.
And initially, we took this for granted,
and we were very focused on building the technology
and doing research.
And we thought, here is this amazing model,
commercial partners, take it
and go build amazing products on top of it.
And then we found out that that's actually very hard.
And so this is why we started doing it ourselves.
And we-
- [Jeff] That led you to build a chatbot
'cause you just wanted to-
- Yes, because we were trying to figure out,
okay, why is it so hard for this
really amazing successful companies
to actually turn this technology into a helpful product?
- I see.
- And it's because it's a very odd way to build products.
You're starting from capabilities.
You're starting from a technology.
You're not starting from what is the problem in the world
that I'm trying to address.
It's very general capability.
- And so that leads to pretty quickly
what you just described there, which is more data,
more compute,
more intelligence.
How intelligent is this gonna get?
I mean, it sounds like your description is
the scaling of this is pretty linear,
you add more of those elements and it gets smarter.
Has it gotten smarter ChatGPT in the last couple years,
and how quickly will it get to
maybe human-level intelligence?
- So yeah, these systems are already human-level
in specific tasks,
and of course in a lot of tasks, they're not.
If you look at the trajectory of improvement,
systems like GPT-3 were maybe
let's say toddler level intelligence.
And then systems like GPT-4 are more like
smart high schooler intelligence.
And then in the next couple of years,
we're looking at PhD-level intelligence for specific tasks.
- [Jeff] Like?
- So things are changing and improving pretty rapidly.
- Meaning like a year from now?
- Yeah, a year and a half let's say.
- Where you're having a conversation with ChatGPT
and it seems smarter than you.
- In some things, yeah.
In a lot of things, yes.
- Maybe a year away from that.
- I mean, yeah, could be.
- Pretty close. - Roughly.
Roughly.
Well, I mean, it does lead to these other questions,
and I know you've been very vocal on this,
which I'm happy and proud that you are doing
on the safety aspects of it,
but, I mean, people do want to hear from you on that.
So I mean, what about three years from now
when it's unbelievably intelligent?
It can pass every single bar exam everywhere
and every test we've ever done.
And then it just decides it wants to
connect to the internet on its own and start doing things.
Is that real, and is that...
Or, is that something you're thinking about
as the CTO and leading the product direction?
- Yes, we're thinking a lot about this.
It's definitely real that you'll have
AI systems that will have agent capabilities,
connect to the internet, talk to each other,
agents connecting to each other and doing tasks together,
or agents working with humans and collaborating seamlessly.
So sort of working with AI
like we work with each other today.
In terms of safety, security,
the societal impacts aspects of this work,
I think these things are not an afterthought.
It can be that you sort of develop the technology
and then you have to figure out
how to deal with these issues.
You kind of have to build them alongside the technology
and actually in a deeply embedded way to get it right.
And for capabilities and safety,
they're actually not separate domains.
They go hand in hand.
It's much easier to direct a smarter system by telling it,
okay, just don't do these things.
than you need to direct a less intelligent system.
It's sort of like training
a smarter dog versus a dumber dog,
and so intelligence and safety go hand in hand.
- [Jeff] It understands the guardrails better
because it's smarter. - Right, yeah, exactly.
And so there is this whole debate right now around,
do you do more safety or do you do more capability research?
And I think that's a bit misguided
because of course you have to think about
the safety into deploying
a product and the guardrails around that.
But in terms of research and development,
they actually go hand in hand.
And from our perspective,
the way we're thinking about this is
approaching it very scientifically.
So let's try to predict the capabilities
that these models will be,
the capabilities that these models will have
before we actually finish training.
And then along the way,
let's prepare the guardrails for how we handle them.
That's not really been the case in the industry so far.
We train these models,
and then there are these emergent capabilities we call them,
because they emerge.
We don't know they're going to emerge.
We can see sort of the statistical performance,
but we don't know whether that statistical performance
means that the model is better at translation,
or at doing biochemistry, or coding or something else.
And developing this new science of capability prediction
helps us prepare for what's to come.
And that means...
- [Jeff] You're saying all that safety work,
it's kind of consistent with your development.
- Yes, that's right. - It's a similar path.
- Yeah, so you have to kind of bring it along and-
- But what about these issues, Mira, like
the video of Volodymyr Zelensky saying, "We surrender,"
the Tom Hanks video, or a dentist ad?
I can't remember what it was.
What about these types of uses?
Is that in your sphere
or does it need to be regulation around that?
How do you see that playing out?
- Yeah, so I mean, my perspective on this is that
this is our technology.
So it's our responsibility how it's used,
but it's also shared responsibility
with society, civil society, government,
content makers, media, and so on,
to figure out how it's used.
But in order to make it a shared responsibility,
you need to bring people along,
you need to give them access,
you need to give them tools
to understand and to provide guardrails.
And I think-
- [Jeff] Those things are kind of hard to stop though,
right?
- Well, I think it's not possible to have zero risk,
but it's really a question of, how do you minimize risk?
And providing people the tools to do that.
And in the case of government, for example,
it's very important to bring them along
and give them early access to things,
educate them on what's going on.
- Governments.
- Yes, for sure, and regulators.
And I think perhaps the most significant thing
that ChatGPT did was bring AI
into the public consciousness,
give people a real intuitive sense
for what the technology is capable of and also of its risks.
It's a different thing when you read about it
versus when you try it and you try it in your business,
and you see, okay, it cannot do these things,
but it can do this other amazing thing,
and this is what it actually means for the workforce
or for my business.
And it allows people to prepare.
- Yeah, no, that's a great point.
I mean, just these interfaces that you've created, ChatGPT,
are informing people about what's coming.
I mean, you can use it.
You can see now what's underneath.
Do you think there's...
Just to finish on the government point.
I mean, let's just talk the US right now.
Do you wish there was
certain regulations that we're actually just
putting into place right now?
Before you get to that year or two from now.
It's extremely intelligent, a little bit scary.
So are there things that should just be done now?
- We've been advocating for more regulation
on the frontier models which will have this
amazing capabilities that also have a downside
because of misuse.
And we've been very open with policy makers
and working with regulators on that.
On the more sort of near term and smaller models,
I think it's good to allow for
a lot of breadth and richness in the ecosystem
and not let people that don't have as many resources
in compute or data not,
sort of not block the innovation in those areas.
So we've been advocating for more regulation
in the frontier systems
where the risks are much higher.
And also, you can kind of get ahead of what's coming
versus trying to keep up with changes
that are already happening really rapidly.
- But you probably don't want Washington, D.C.
regulating your release of GPT-5,
like that you can or cannot do this.
- I mean, it depends, actually.
It depends on the regulation.
So there is a lot of work that we already do
that has now been sort of, yeah, codified in
the White House's commitments, and this-
- So it's underway. - The work has already been done.
And it actually informed the White House's commitments
or what the UN Commission is doing
with the principles for AI deployments.
And usually, I think the way to do it
is to actually do the work,
understand what it means in practice,
and then create regulation based on that.
And that's what has happened so far.
Now, getting ahead of these frontier systems requires that
we do a lot more forecasting
and science of capability prediction
in order to come up with correct regulation on that.
- [Jeff] Well, I hope the government has people
that can understand what you're doing.
- It seems like more and more
folks are joining the government
that have better understanding of AI, but not enough.
- Okay.
In terms of industries,
you have the best seat maybe in the world
to just see how this is gonna impact different industries.
I mean, it already is in finance, and content,
and media, and healthcare.
But what industries do you think, when you look forward,
do you think are gonna be most impacted by AI
and the work that you're doing at OpenAI?
- Yeah, this is sort of similar to the question
that I used to get from entrepreneurs
when we started building a product on top of GPT-3,
where people would ask me,
"What can I do with it?
What is it good for?"
And I would say everything.
So just try it.
And so it's kind of similar in the sense
that I think it'll affect everything,
and there's not going to be an area
of cognitive work and cognitive labor
that won't be affected.
Maybe it's gonna take a little bit longer
to get into the physical world,
but I think everything will be impacted by it.
Right now we've seen...
So I'd say there's been a bit of a lag
in areas that are high risk,
such as healthcare or legal domains,
and rightfully so.
First, you want to understand and bring it in,
use cases that are lower risk, medium risk,
really make sure those are handled with confidence
before applying it to things that are higher risk.
And initially, there should be more human supervision,
and then the delegation should change
to the extent these systems can be more collaborative, but-
- [Jeff] Are there use cases that you personally love,
or are seeing, or are about to see?
- Yeah, so I think basically
the first part of anything that you're trying to do,
whether it is creating new designs,
whether it's coding,
or writing an essay, or writing an email
or basically everything,
the first part of everything that you're trying to do
becomes so much easier.
And that's been my favorite use of it.
So far I've really used it- - First draft for everything.
- Yeah, first draft for everything.
It's so much faster.
It lowers the barrier to doing something
and you can kind of focus on
the part that's a bit more creative and more difficult,
especially in coding.
You can sort of outsource a lot of the tedious work.
- Documentation and all that kinda stuff.
- Yeah, documentation and...
But in industry, we've seen so many applications.
Customer service is definitely
a big application with chatbots,
and writing,
also analysis,
because right now we've sort of connected a lot of tools
to the core model,
and this makes the models far more usable
and more productive.
So you have tools like code analysis.
It can actually analyze a ton of data.
You can dump all sorts of data in there,
and it can help you analyze and filter out the data,
or you could use images
and you could use the browsing tool.
So if you're preparing, let's say, a paper,
the research part of the work can be done much faster
and in a more rigorous way.
So I think this is kind of
the next layer that's going to be added to productivity,
adding these tools to the core models,
and making it very seamless.
The model decides when to use, say, the analysis tool
versus search versus something else.
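The routing idea described here, where the model decides when to invoke an analysis tool versus search, can be sketched as a simple dispatcher. This is a minimal illustration only: the tool names, the toy tool bodies, and the JSON call format are all hypothetical stand-ins, not OpenAI's actual implementation (in a real system, the model itself emits the structured tool call).

```python
import json

# Hypothetical local tools standing in for "analysis", "search", etc.
def analyze(data):
    # Toy analysis tool: summary statistics over a list of numbers.
    return {"count": len(data), "mean": sum(data) / len(data)}

def search(query):
    # Toy search tool over a tiny in-memory corpus.
    corpus = {
        "tool use": "Models can call external tools.",
        "sora": "A video generation model.",
    }
    return corpus.get(query.lower(), "no result")

TOOLS = {"analysis": analyze, "search": search}

def dispatch(model_output):
    """Route a (mock) model tool-call message to the matching tool.

    `model_output` is a JSON string naming a tool and its arguments,
    mimicking the structured call a real model would emit.
    """
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(call["arguments"])

# The "model" decided the analysis tool fits this request:
print(dispatch('{"tool": "analysis", "arguments": [1, 2, 3, 4]}'))
# → {'count': 4, 'mean': 2.5}
```

The seamlessness she describes comes from the model choosing the tool; the dispatcher itself stays this simple.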
- [Jeff] Write a program.
Yeah, yeah.
Interesting.
Has it watched every TV show and movie in the world,
and is it gonna start writing scripts and making films?
- Well, it's a tool.
And so
it certainly can do that as a tool,
and I expect that we will actually,
we will collaborate with it,
and it's going to make our creativity expand.
And right now if you think about
how humans consider creativity,
we see that it's sort of this very special thing
that's only accessible
to a very few talented people out there.
And these tools actually lower the barrier
to think of themselves as creative
and expand their creativity.
So in that sense, I think it's
actually going to be really incredible.
- Yeah, it could give me 200 different cliffhangers
for the end of episode one or whatever,
very easily. - Yes.
And you can extend the story,
the story never ends.
You can just continue. - Keep going.
I'm done writing, but keep going.
That's interesting.
- But I think it's really going to be a collaborative tool,
especially in the creative spaces where-
- I do too.
- Yeah, more people will become more creative.
- There's some fear right now. - Yes, for sure.
- But you're saying that'll switch
and humans will figure out how to make
the creative part of the work just better?
- I think so, and
some creative jobs maybe will go away,
but maybe they shouldn't have been there in the first place
if the content that comes out of it
is not very high quality,
but I really believe that using it
as a tool for education, creativity
will expand our intelligence,
and creativity, and imagination.
- Well, people thought CGI and things like that
were gonna wreck the film industry at the time.
They were quite scared.
This is, I think a bigger thing,
but yeah, anything new like that,
the immediate reaction is gonna be,
"Oh god, this is..."
But I hope that you're right about film and TV.
Okay, the job part you raised,
and let's forget Hollywood stuff,
but there's a lot of jobs that people are worried about
that they think are at risk.
What's your view on job displacement from AI,
and not even just the work you're doing at OpenAI,
just overall?
Should people be really worried about that,
and which kind of jobs,
or how do you see it all working out?
- Yeah, I mean the truth is that we don't really understand
the impact that AI is going to have on jobs yet.
And the first step is to actually help people understand
what the systems are capable of, what they can do,
integrate them in their workflows,
and then start predicting and forecasting the impact.
And also, I think people don't realize how much
these tools are already being used,
and that's not being studied at all.
And so we should be studying what's going on right now
with the nature of work, the nature of education,
and that's going to help us predict
for how to prepare for these increased capabilities.
In terms of jobs specifically, I'm not an economist,
but I certainly anticipate that a lot
of jobs will change, some jobs will be lost,
some jobs will be gained.
We don't know specifically what it's going to look like,
but you can imagine a lot of jobs that are repetitive,
that are just strictly repetitive
and people are not advancing further,
those would be replaced. - People like QA,
and testing code, and things like that,
those jobs are-
- Unless they are- - They're done.
- Yes, and if it's strictly just that or strictly-
- And it's just one example.
There's many things like that. - Yeah, many things.
- Do you think there'll be enough jobs created elsewhere
to compensate for that?
- I think there are going to be a lot of jobs created,
but the weight of how many jobs are created,
how many jobs are changed, how many jobs are lost,
I don't know.
And I don't think anyone knows really,
because it's not being rigorously studied,
and it really should be.
And yeah, but I think the economy will transform
and there is going to be a lot of value created
by these tools.
And so the question is, how do you harness this value?
If the nature of jobs really changes,
then how are we distributing
sort of the economic value into society?
Is it through public benefits?
Is it through UBI?
Is it through some other new system?
So there are a lot of questions to explore and figure out.
- There's a big role for higher ed
in that work that you're describing there.
It's just not quite happening yet.
- Yeah.
- What else for higher ed and this future of AI?
What do you think is the role of higher ed
in what you see and how this is evolving?
- I think really figuring out
how we use these tools and AI to advance education.
Because I think one of the most powerful
applications of AI is going to be in education,
advancing our creativity and knowledge.
And we have an opportunity
to basically build super high quality education
and very accessible and ideally free for anyone in the world
in any of the languages or cultural nuances
that you can imagine.
You can really have customized understanding
and customized education for anyone in the world.
And of course in institutions like Dartmouth,
the classrooms are smaller and you have a lot of attention,
but still you can imagine having just one-on-one tutoring,
even here, let alone in the rest of the world.
- Supplementing. - Yes.
Because we don't spend enough time learning how to learn.
That sort of happens very late, maybe in college.
And that is such a fundamental thing, how you learn,
otherwise you can waste a lot of time.
And the classes, the curriculum, the problem sets,
everything can be customized
to how you actually learn as an individual.
- So you think it could really, at a place like Dartmouth,
it could complement some of the learning that's happening.
Oh, absolutely, yeah. - Just have AIs
as tutors and what not.
Should we open it up?
Do you mind taking some questions from the audience?
Is that okay? - Happy to, yeah.
- All right.
Why don't we do that.
Dave, you wanna start?
- [Dave] Sure, if you don't.
- [Speaker] Hold on one second.
I'll give you a microphone.
- One of Dartmouth's first computer scientists,
John Kemeny, once gave a lecture about how
every computer program that humans build
embeds human values into that program,
whether intentionally or unintentionally.
And what I'm wondering is what human values do you think
are embedded in GPT products,
or, put a different way, how should we embed values in,
like respect, equity, fairness, honesty, integrity,
things like that into these kinds of tools?
- That's a great question and a really hard one
and something that we think about,
we've been thinking about for years.
So right now, if you look at these systems,
a lot of the values are input,
basically put in through the data,
and that's data from the internet, licensed data,
and also data that comes through human contractors
who will label certain problems or questions.
And each of these inputs has specific values.
So that's a collection of their values, and that matters.
And then once you actually
put these products into the world,
I think you have an opportunity to get
a much broader collection of values
by putting it in the hands of many, many people.
So right now, ChatGPT,
we have a free offering of ChatGPT
that has the most capable systems,
and it's used by over 100 million people in the world.
And each of these people can provide feedback into ChatGPT.
And if they allow us to use the data,
we will use it
to create this aggregate of values
that makes the system better,
more aligned with what people want it to do.
But that's sort of the default system.
What you kind of want on top of it
is also a layer for customization
where each community can sort of have their own values,
let's say a school,
a church, a country, even a state.
They can provide their own values that are more specific
and more precise on top of this default system
that has basic human values.
And so we're working on ways to do that as well.
But it's actually, it's obviously a really difficult problem
because you have the human problem
where we don't agree on things,
and then you have the technology problem.
And on the technology problem,
I think we've made a lot of progress.
We have methods like
reinforcement learning with human feedback
where you give people a chance to provide
their values into the system.
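The preference-comparison step at the heart of the reinforcement learning with human feedback she mentions can be illustrated with a Bradley-Terry style loss on a reward model's scores. This is a toy sketch of the standard reward-modeling objective, not OpenAI's training code; the function name and example scores are illustrative.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss used in RLHF reward modeling:
    -log(sigmoid(r_chosen - r_rejected)). The loss is small when
    the reward model scores the human-preferred response higher,
    which is how human values get pushed into the system."""
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Reward model agrees with the human label -> small loss:
print(round(preference_loss(2.0, 0.0), 3))  # → 0.127
# Reward model disagrees with the human label -> large loss:
print(round(preference_loss(0.0, 2.0), 3))  # → 2.127
```

Training the reward model to minimize this loss over many human comparisons, then optimizing the policy against it, is the basic mechanism by which "people provide their values into the system."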
We have just developed this thing we call the Spec
that provides transparency into the values
that are into the system.
And we're building a sort of feedback mechanism
where we collect input and data in how to advance the Spec.
You can think of it as like a constitution for AI systems,
but it's a living one.
It evolves over time because our values
also evolve over time,
and it becomes more precise.
It's something we're working on a lot.
And I think
right now we're thinking about basic values.
But as the systems become more and more complex,
we're going to have to think about
more granularity in the values that's...
- [Jeff] Can you keep that from like getting angry?
- Getting angry? - Yeah.
Is that one of the values?
- Well, that should be...
No.
So that should actually be up to you.
So if you as a user-
- Oh, if you want an angry chatbot
you can have it. - Yes, if you want
an angry chatbot, you should have an angry chatbot.
Yeah.
- Okay, right here, yeah.
- Hello. Thank you.
Dr. Joy here.
And also, congratulations on the honorary degree
and all you've been doing with OpenAI.
I'm really curious how you're thinking about
both creative rights and biometric rights.
And so earlier you were mentioning
maybe some creative jobs ought not to exist,
and you've had many creatives who are thinking about
issues of consent, of compensation,
of having whether it's proprietary models
or even open source models,
where the data is taken from the internet.
So really curious about your thoughts on
consent and compensation as it deals with creative rights.
And since we're in a university,
do you know the multi-part question piece?
So the other thing is thinking about biometric rights,
and so when it comes to the voice,
when it comes to faces and so forth.
So with the recent controversy around the voice of Sky
and how you can also have people who sound alike,
people who look alike,
and all of the disinformation
threats coming up in such a heavy election year,
would be very curious about your perspective
on the biometric rights aspects as well.
- Yeah, so...
Okay, I'll start with the last part on...
We've done a ton of research on voice technologies
and we didn't release them until recently
precisely because they pose so many risks and issues.
But it's also important to kind of bring society along,
give access in a way that you can have guardrails
and control the risks,
and let other people study and make advances
on these issues.
For example, we're partnering
with institutions to help us think about
human-AI interaction now that you have voice and video,
which are very emotionally evocative modalities.
And we need to start understanding
how these things are going to play out
and what to prepare for.
In that particular case,
the voice of Sky was not Scarlett Johansson's,
and it was not meant to be,
and it was a completely parallel process.
I was running the selection of the voice,
and our CEO was having conversations
with Scarlett Johansson and...
But out of respect for her, we took it down.
And some people see some similarities.
These things are subjective,
and I think you can sort of...
Yeah, you can kind of come up with red teaming processes
where if the voice, for example, was deemed to be
super, super similar to a very well-known public voice,
then maybe you don't select that specific one.
In our red teaming, this didn't come up,
but that's why it's important to also have
more extended red teaming
to catch these things early if needed.
But more broadly, with the issue of biometrics,
I think our strategy here is to
give access to a few people,
initially experts or red teamers
that help us understand the risk and capabilities very well.
Then we build mitigations,
and then we give access to more people
as we feel more confident around those mitigations.
So we don't allow for people to
make their own voices with this technology
because we're still studying the risks
and we don't feel confident that we can
handle misuse in that area yet.
But we feel good about handling
misuse with the guardrails that we have
on very specific voices
at a small scale right now,
which is essentially extended red teaming.
And then when we extend it to a thousand users,
our Alpha release, we will be
working very closely with its users,
gathering feedback and understanding the edge cases
so we can prepare for these edge cases
as we expand use to say 100,000 people.
And then it's going to be a million,
and then 100 million, and so on.
But it's done with a lot of control,
and this is what we call iterative deployment.
And if we can't all get comfortable around these use cases,
then we just won't release them in this specific...
To extended users or for these specific use cases,
we will probably
try to lobotomize the product in a certain way
because capability and risk go hand in hand.
But we're also working on a lot of research
to help us deal with issues of content provenance
and content authenticity
so people have tools
to understand if something is a deepfake
or spreads misinformation and so on.
Since the beginning of OpenAI, actually,
we've been working on studying misinformation
and we've built a lot of tools like
watermarking, content policies
that allow us to manage the sort of,
yeah, the possibility of misinformation,
especially this year given that it's a global election year.
We've been intensifying that work even more.
But this is an extremely challenging area
that we, as the makers of technology and products,
need to do a lot of work on,
but also partner with civil society,
and media, and content makers
to figure out how to address these issues.
When we make technologies like audio or Sora,
the first people that we work with
after the red teamers that study the risks
are the content creators,
to actually understand how the technology would help them
and how do you build a product
that is both safe, and useful, and helpful,
and that actually advances society.
And this is what we did with Dall-E,
and this is what we're doing with Sora,
our video generation model again.
And the first part of your question.
- [Dr. Joy] Creative rights.
So for the- - Creative rights
- [Dr. Joy] About compensation, consent,
- Yes. - Control and credit.
- Yeah, that's also very important and challenging.
Right now, we do a lot of partnerships with media companies
and we also give people a lot of control
on how their data is used in the product.
So if they don't want their data
to be used to improve the model
or for us to do any research or train on it,
that is totally fine.
We do not use the data.
And then for just the creator community in general,
we give access to these tools early.
So we can hear from them first
on how they would want to use it
and build products that are most useful.
And also, these things come out of research,
so we don't have to build a product at all costs.
We'd only do it if we can figure out a modality
that's actually helpful in advancing people forward.
And we're also experimenting with methods
to basically create our tools that
allow people to be compensated for data contribution.
This is quite tricky both from technical perspective
and also just building a product like that
because you have to sort of figure out
how much a specific amount of data,
how much value it creates
in a model that has been trained afterwards.
And maybe individual data would be very difficult to gauge
how much value that would provide.
But if you can sort of create consortiums
with aggregated data,
and pools where people can provide their data,
maybe that'd be better.
So for the past I'd say two years,
we've been experimenting with various versions of this.
We haven't deployed anything,
but we've been experimenting on the technical side
and trying to really understand the technical problem.
And we're a bit further along, but it's
a really difficult issue. - It is.
I bet there'll be a lot of new companies trying to build
solutions for that. - Yeah, there are other companies.
- It's just so hard.
- It is.
- How about right there? Yeah.
- [Participant] Thank you so much for your time
and for taking time out to come talk to us.
My question is pretty simple.
If you had to come back to school today,
you found yourself again
at Thayer or at Dartmouth in general,
what would you do again and what would you not do again?
What would you major in
or would you get involved in more things?
Something like that.
- I think I would study the same things
but maybe with less stress.
(all laughs)
Yeah, I think I'd still study math and do...
Yeah.
Maybe I would take more computer science courses actually.
But yeah, I would stress less because
then you study with more curiosity and more joy,
and that's more productive.
But yeah, I remember, as a student,
I was always a bit stressed
about what was going to come after.
And if I knew what I know now, I'd tell my younger self,
and actually everyone would tell me,
"Don't be stressed," but somehow it didn't...
When I talk to older alums, they'd always say like,
"Try to enjoy it and be fully immersed
and be less stressed."
I think, though, on specific courses,
it's good to have, especially now,
a very broad range of subjects
and get a bit of understanding of everything.
I find that both at school and after,
because even now I work in a research organization,
I'm constantly learning.
You never stop.
That is very helpful to kind of understand
a little bit of everything.
- [Jeff] Thank you so much,
'cause I'm sure your life- - Thank you.
- Is stressful.
(all laughing)
(audience applauding)
- Thank you so much.
- Thank you for being here today
and also thank you for the incredibly important work
you're doing for society, quite honestly.
It's really important and I'm glad you're in the seat.
- Thank you for having me.
- Thank you from all of us here at Thayer
and Dartmouth as well.
So I thought that would be a good place to end on too,
some good advice for our students.
What a fascinating conversation
and just wanted to thank you all again for coming.
Enjoy the rest of Commencement Weekend.
(no audio)
(gentle music)