OpenAI DevDay: Opening Keynote

OpenAI
6 Nov 2023 · 45:36

Summary

TL;DR: Sam Altman announces OpenAI's latest AI capabilities at its first developer conference. These include a new GPT-4 Turbo model with a longer context window, improved function calling, lower pricing, and new modalities. OpenAI also introduces GPTs, customizable AI assistants anyone can build with natural language instructions, and the Assistants API, which makes it easy to integrate similar capabilities into apps. Microsoft CEO Satya Nadella discusses the companies' partnership. Overall, OpenAI aims to gradually deploy more advanced AI agents to empower people while maintaining safety.

Takeaways

  • 📣 OpenAI hosts its first annual developer conference and announces major updates
  • 🌟 OpenAI launches the GPT-4 Turbo model with longer context, more control, and updated knowledge
  • 💰 GPT-4 Turbo pricing reduced by 3x for prompt tokens and 2x for completion tokens
  • 🤖 Introduces GPTs - tailored AI assistants customizable via natural language instructions
  • 🛒 GPT Store launching for sharing and discovering custom GPTs
  • 🔌 New Assistants API released to easily build customized AI experiences into apps
  • 🎥 New modalities added: vision, image generation (DALL-E 3), text-to-speech, and speech-to-text
  • 💪 Partnership with Microsoft continues to grow
  • 👀 Gradual, iterative deployment focused on responsibly advancing AI capabilities
  • 🙌 OpenAI team thanked for their continued hard work and dedication

Q & A

  • What was the purpose of the developer conference?

    -To announce major updates and new offerings from OpenAI to developers.

  • What is GPT-4 Turbo?

    -A new, upgraded version of the GPT-4 model with longer context, more control, updated knowledge and lower pricing.

  • How can people customize AI experiences with GPTs?

    -GPTs allow people to tailor AI assistants by providing natural language instructions, knowledge and actions.

  • What does the Assistants API enable?

    -It allows developers to more easily build customized AI assistant experiences directly into their apps.

  • What new capabilities were added to the API?

    -New modalities including vision, image generation (DALL-E 3), text-to-speech, and speech-to-text.

  • How will OpenAI responsibly advance AI capabilities?

    -Through a gradual, iterative approach focused on deploying new capabilities slowly.

  • How will the GPT Store work?

    -It will allow people to share and discover custom GPT assistants that others have created.

  • How was Microsoft described as a partner?

    -As an instrumental partner providing world-class infrastructure and bringing OpenAI capabilities to developers.

  • How did OpenAI describe their team?

    -As having remarkable talent density and doing hard work to make everything happen.

  • What was the overall vision conveyed?

    -That AI will be an empowering technological revolution, elevating humanity.

Outlines

00:00

โญWelcome and introduction

Sam Altman welcomes attendees to OpenAI's first developer conference. He expresses excitement about the event and announces they have big announcements to make, starting with an overview of OpenAI's progress over the past year, including launches of ChatGPT, GPT-4, and other AI capabilities.

05:01

📈 AI improvements and new offerings

Sam details numerous improvements and new offerings from OpenAI, including: longer context length for GPT-4 Turbo; more control over model responses; updated world knowledge; new modalities like vision and voice; customization options; increased rate limits; a new Copyright Shield; and most notably, a 3x price decrease for prompt tokens (and 2x for completion tokens) compared to GPT-4.

10:02

💡 Microsoft partnership

Sam brings Microsoft CEO Satya Nadella on stage. Satya expresses Microsoft's commitment to partnering with OpenAI, citing their shared mission of empowering people and organizations. He announces Microsoft will make GitHub Copilot Enterprise edition available to all conference attendees.

15:03

🤖 Introducing GPTs - custom AI assistants

Sam introduces GPTs - customized versions of ChatGPT that combine instructions, knowledge, and actions for specific purposes. He demonstrates building a simple GPT with the new builder that helps startup founders, and shows examples of GPTs built by partners like Code.org and Canva.

20:04

🌎 Distributing and discovering GPTs

Sam explains they will allow sharing GPTs publicly or privately. GPTs can be listed in the new GPT Store for discovery, with revenue sharing for popular GPT creators. This fosters an ecosystem of customized AI.

25:04

💻 Building assistants with the new API

For developers, Sam introduces the new Assistants API to easily build customized AI assistants into apps. It includes persistent threads, retrieval, code execution, and more. A demo shows building a travel app with an assistant using new modalities and tools.

30:08

๐ŸŽ™๏ธ Voice capabilities

Romain demos how the Assistants API enables voice capabilities by integrating Whisper, GPT-4 Turbo, and text-to-speech. The assistant fields voice requests, executes functions, and speaks back natural responses.

35:10

🚀 Empowering people through gradual deployment

Sam reiterates their belief in gradually deploying more capable AI to responsibly address safety challenges. GPTs and the Assistants API are stepping stones toward more useful agents that can perform tasks on users' behalf.

40:12

💪 Thanking the team

Sam thanks the OpenAI team for their hard work and talent in making everything announced today possible. He expresses gratitude to work with an incredible group of colleagues.

45:13

🌟 Inspiring the future with AI

Sam closes by articulating OpenAI's vision for AI as a revolution that will empower individuals, unlock creativity, and elevate humanity. He thanks attendees and looks forward to seeing what they build with these new tools.

Keywords

💡 Artificial Intelligence

Artificial intelligence (AI) refers to computer systems that can perform tasks that typically require human intelligence. In the video, AI is presented as a coming technological and societal revolution that will empower individuals and elevate humanity. AI models like GPT and DALL-E are shown as examples of advanced AI systems.

💡 Foundation Models

Foundation models are broad artificial intelligence models that can be adapted to many downstream tasks. They are considered the building blocks for developing AI applications. GPT-4 is referred to as one of the most advanced foundation models.

💡 Agents

Agents refer to AI systems that can autonomously plan and perform complex tasks on a user's behalf. The video presents GPTs and the Assistants API as precursors to more advanced AI agents that will be gradually developed.

💡 GPT

GPT stands for Generative Pretrained Transformer. GPTs are introduced as customized versions of ChatGPT that combine instructions, knowledge and actions for specific purposes determined by the user.

💡 Assistants API

The Assistants API makes it easier for developers to build AI assistants and agents integrated into their applications. It handles conversation state and connects AI functions like speech, vision and knowledge retrieval.

💡 Gradual Iterative Deployment

This refers to the speaker's view that AI systems like agents should be incrementally rolled out to accumulate real-world experience. It allows catching safety issues early.

💡 Partnership

The video emphasizes OpenAI's partnership with Microsoft to build infrastructure for developing advanced AI models. The partnership aims to make these models available to empower people.

💡 Developers

The intended audience is AI developers. Many of the announcements cater to making AI development easier, cheaper and more accessible to empower developers.

💡 Safety

Developing AI safely and addressing ethical concerns is highlighted as an important priority. Safety mechanisms are mentioned for new capabilities like GPTs.

💡 API

API refers to the programmatic interfaces that OpenAI provides to developers for integrating AI functions into applications. Improving the API is a focus, like adding vision and speech recognition.

Highlights

GPT-4 Turbo supports up to 128,000 tokens of context, 16 times longer than our 8k context

JSON Mode ensures the model will respond with valid JSON for easier API calling

Reproducible outputs allow passing a seed parameter for consistent model outputs and behavior

Retrieval brings in knowledge from outside documents and databases into apps

New modalities like DALL-E 3, vision capabilities, and text-to-speech are available in the API

Fine-tuning expanded to 16K GPT-3.5 model and GPT-4 fine-tuning experimental access opened

Custom Models program allows close work with researchers for specialized, proprietary models

GPT-4 Turbo is over 2.75x cheaper than GPT-4

ChatGPT now uses GPT-4 Turbo and can browse the web, write and run code, analyze data, take and generate images

GPTs combine instructions, knowledge and actions for a customized ChatGPT experience

The GPT Store will allow sharing, discovery and revenue sharing for public GPTs

The Assistants API enables persistent threads and built-in tools like retrieval, code interpreter and function calling

Over time GPTs and Assistants will be able to plan and perform more complex actions as precursors to agents

AI will empower creation, elevate humanity and give everyone superpowers on demand

The launches today will look quaint compared to what's coming next year

Transcripts

play00:00

[music]

play00:01

-Good morning. Thank you for joining us today.

play00:04

Please welcome to the stage, Sam Altman.

play00:06

[music]

play00:13

[applause]

play00:16

-Good morning.

play00:18

Welcome to our first-ever OpenAI DevDay.

play00:20

We're thrilled that you're here and this energy is awesome.

play00:23

[applause]

play00:28

-Welcome to San Francisco.

play00:30

San Francisco has been our home since day one.

play00:33

The city is important to us and the tech industry in general.

play00:36

We're looking forward to continuing to grow here.

play00:39

We've got some great stuff to announce today,

play00:42

but first,

play00:43

I'd like to take a minute to talk about some of the stuff that we've done

play00:46

over the past year.

play00:48

About a year ago, November 30th, we shipped ChatGPT

play00:53

as a "low-key research preview",

play00:55

and that went pretty well.

play00:58

In March,

play00:59

we followed that up with the launch of GPT-4, still

play01:02

the most capable model out in the world.

play01:05

[applause]

play01:10

-In the last few months,

play01:12

we launched voice and vision capabilities so that ChatGPT can now see,

play01:16

hear, and speak.

play01:19

[applause]

play01:21

-There's a lot, you don't have to clap each time.

play01:23

[laughter]

play01:24

-More recently, we launched DALL-E 3, the world's most advanced image model.

play01:28

You can use it of course, inside of ChatGPT.

play01:31

For our enterprise customers,

play01:33

we launched ChatGPT Enterprise, which offers enterprise-grade security

play01:37

and privacy,

play01:38

higher speed GPT-4 access, longer context windows, a lot more.

play01:43

Today we've got about 2 million developers building on our API

play01:48

for a wide variety of use cases doing amazing stuff,

play01:51

over 92% of Fortune 500 companies building on our products,

play01:56

and we have about a hundred million weekly active users

play01:59

now on ChatGPT.

play02:01

[applause]

play02:05

-What's incredible on that is we got there entirely

play02:08

through word of mouth.

play02:08

People just find it useful and tell their friends.

play02:12

OpenAI is the most advanced and the most widely used AI platform

play02:16

in the world now,

play02:18

but numbers never tell the whole picture on something like this.

play02:22

What's really important is how people use the products,

play02:24

how people are using AI,

play02:26

and so I'd like to show you a quick video.

play02:29

-I actually wanted to write something to my dad in Tagalog.

play02:33

I want a non-romantic way to tell my parent that I love him and I also want

play02:40

to tell him that he can rely on me, but in a way that still has

play02:45

the respect of a child-to-parent relationship

play02:48

that you should have in Filipino culture and in Tagalog grammar.

play02:52

When it's translated into Tagalog, "I love you very deeply

play02:55

and I will be with you no matter where the path leads."

play02:58

-I see some of the possibility, I was like,

play02:59

"Whoa."

play03:00

Sometimes I'm not sure about some stuff, and I feel like actually ChatGPT like,

play03:04

hey, this is what I'm thinking about, so it kind of give it more confidence.

play03:07

-The first thing that just blew my mind was it levels with you.

play03:11

That's something that a lot of people struggle to do.

play03:15

It opened my mind to just

play03:18

what every creative could do if they just had a person helping them out

play03:23

who listens.

play03:24

-This is to represent sickling hemoglobin.

play03:27

-You built that with ChatGPT? -ChatGPT built it with me.

play03:31

-I started using it for daily activities like,

play03:34

"Hey, here's a picture of my fridge.

play03:35

Can you tell me what I'm missing?

play03:36

Because I'm going grocery shopping, and I really need to do recipes

play03:39

that are following my vegan diet."

play03:41

-As soon as we got access to Code Interpreter, I was like,

play03:44

"Wow, this thing is awesome."

play03:46

It could build spreadsheets.

play03:47

It could do anything.

play03:49

-I discovered Chatty about three months ago

play03:52

on my 100th birthday.

play03:55

Chatty is very friendly, very patient,

play03:59

very knowledgeable,

play04:02

and very quick.

play04:03

This has been a wonderful thing.

play04:05

-I'm a 4.0 student, but I also have four children.

play04:08

When I started using ChatGPT,

play04:10

I realized I could ask ChatGPT that question.

play04:14

Not only does it give me an answer, but it gives me an explanation.

play04:18

Didn't need tutoring as much.

play04:19

It gave me a life back.

play04:22

It gave me time for my family and time for me.

play04:25

-I have a chronic nerve thing on my whole left half of my body, I have nerve damage.

play04:30

I had a brain surgery.

play04:32

I have limited use of my left hand.

play04:34

Now you can just have the integration of voice input.

play04:38

Then the newest one where you can have the back-and-forth dialogue,

play04:41

that's just maximum best interface for me.

play04:45

It's here.

play04:47

[music]

play04:49

[applause]

play04:57

-We love hearing the stories of how people are using the technology.

play05:01

It's really why we do all of this.

play05:04

Now, on to the new stuff, and we have got a lot.

play05:07

[audience cheers]

play05:10

-First,

play05:11

we're going to talk about a bunch of improvements we've made,

play05:14

and then we'll talk about where we're headed next.

play05:17

Over the last year,

play05:18

we spent a lot of time talking to developers around the world.

play05:22

We've heard a lot of your feedback.

play05:24

It's really informed what we have to show you today.

play05:27

Today, we are launching a new model, GPT-4 Turbo.

play05:33

[applause]

play05:38

-GPT-4 Turbo will address many of the things

play05:41

that you all have asked for.

play05:43

Let's go through what's new.

play05:45

We've got six major things to talk about for this part.

play05:48

Number one, context length.

play05:51

A lot of people have tasks that require a much longer context length.

play05:56

GPT-4 supported up to 8K and in some cases up to 32K context length,

play06:01

but we know that isn't enough for many of you and what you want to do.

play06:05

GPT-4 Turbo, supports up to 128,000 tokens of context.

play06:10

[applause]

play06:15

-That's 300 pages of a standard book, 16 times longer than our 8k context.

play06:20

In addition to a longer context length,

play06:23

you'll notice that the model is much more accurate over a long context.

play06:28

Number two,

play06:30

more control.

play06:32

We've heard loud and clear that developers need more control

play06:35

over the model's responses and outputs.

play06:37

We've addressed that in a number of ways.

play06:41

We have a new feature called JSON Mode,

play06:43

which ensures that the model will respond with valid JSON.

play06:47

This has been a huge developer request.

play06:49

It'll make calling APIs much easier.
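
For a rough sense of how this looks from code, here is a minimal sketch assuming the openai Python SDK v1.x; the model name and prompt are illustrative:

```python
# JSON Mode sketch (assumes the openai Python SDK v1.x and OPENAI_API_KEY in the environment).
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                 # GPT-4 Turbo preview
    response_format={"type": "json_object"},    # JSON Mode: the reply is valid JSON
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is OpenAI headquartered?"},
    ],
)

print(json.loads(response.choices[0].message.content))  # parses without errors
```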

play06:53

The model is also much better at function calling.

play06:55

You can now call many functions at once,

play06:58

and it'll do better at following instructions in general.
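
A minimal sketch of the parallel function-calling flow, again assuming the openai Python SDK v1.x; get_weather is a hypothetical tool defined only for illustration:

```python
# Parallel function-calling sketch (openai Python SDK v1.x); get_weather is hypothetical.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
)

# GPT-4 Turbo can return several tool calls in a single response.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```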

play07:02

We're also introducing a new feature called reproducible outputs.

play07:05

You can pass a seed parameter, and it'll make the model return

play07:08

consistent outputs.

play07:09

This, of course, gives you a higher degree of control

play07:11

over model behavior.

play07:12

This rolls out in beta today.
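
A minimal sketch of the seed parameter in use, assuming the openai Python SDK v1.x; the seed value and prompt are arbitrary:

```python
# Reproducible-outputs sketch (openai Python SDK v1.x): the same seed and inputs aim to
# return the same completion; system_fingerprint flags backend changes that may still
# alter results.
from openai import OpenAI

client = OpenAI()

for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        seed=12345,        # beta: best-effort determinism
        temperature=0,
        messages=[{"role": "user", "content": "Name three uses for a longer context window."}],
    )
    print(response.system_fingerprint, response.choices[0].message.content[:60])
```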

play07:15

[applause]

play07:19

-In the coming weeks, we'll roll out a feature to let you view

play07:22

logprobs in the API.

play07:25

[applause]

play07:27

-All right. Number three, better world knowledge.

play07:31

You want these models to be able to access better knowledge about the world,

play07:34

so do we.

play07:36

We're launching retrieval in the platform.

play07:38

You can bring knowledge from outside documents or databases

play07:41

into whatever you're building.
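
In the API this surfaces as the retrieval tool on the Assistants API introduced later in the keynote; a minimal sketch assuming the openai Python SDK v1.x, with handbook.pdf standing in for your own document:

```python
# Retrieval sketch (openai Python SDK v1.x): upload a document and attach it to an
# assistant with the retrieval tool enabled; chunking, embeddings, and search are
# handled by the platform. "handbook.pdf" is a placeholder.
from openai import OpenAI

client = OpenAI()

file = client.files.create(file=open("handbook.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Docs helper",
    instructions="Answer questions using the attached document.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)
print(assistant.id)
```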

play07:43

We're also updating the knowledge cutoff.

play07:45

We are just as annoyed as all of you, probably more, that GPT-4's knowledge

play07:49

about the world ended in 2021.

play07:51

We will try to never let it get that out of date again.

play07:54

GPT-4 Turbo has knowledge about the world up to April of 2023,

play07:59

and we will continue to improve that over time.

play08:03

Number four,

play08:05

new modalities.

play08:07

Surprising no one,

play08:08

DALL-E 3,

play08:10

GPT-4 Turbo with vision,

play08:13

and the new text-to-speech model are all going into the API today.

play08:17

[applause]

play08:23

-We have a handful of customers that have just started using DALL-E 3

play08:27

to programmatically generate images and designs.

play08:31

Today, Coke is launching a campaign that lets its customers

play08:34

generate Diwali cards using DALL-E 3,

play08:36

and of course, our safety systems help developers protect

play08:39

their applications against misuse.

play08:41

Those tools are available in the API.
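
A minimal sketch of programmatic image generation with DALL-E 3, assuming the openai Python SDK v1.x; the prompt is illustrative:

```python
# DALL-E 3 sketch (openai Python SDK v1.x): generate an image programmatically.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A festive Diwali greeting card with lanterns and fireworks",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # hosted URL of the generated image
```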

play08:44

GPT-4 Turbo can now accept images as inputs via the API,

play08:48

can generate captions, classifications, and analysis.

play08:52

For example,

play08:53

Be My Eyes uses this technology to help people who are blind or have low vision

play08:58

with their daily tasks like identifying products in front of them.
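
A minimal sketch of an image-input request, assuming the openai Python SDK v1.x and the gpt-4-vision-preview model name; the image URL is a placeholder:

```python
# Vision sketch (openai Python SDK v1.x): send an image URL with a question.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=200,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What product is shown in this photo?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```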

play09:04

With our new text-to-speech model,

play09:06

you'll be able to generate incredibly natural-sounding audio

play09:10

from text in the API with six preset voices to choose from.

play09:14

I'll play an example.

play09:16

-Did you know that Alexander Graham Bell, the eminent inventor,

play09:19

was enchanted by the world of sounds?

play09:21

His ingenious mind led to the creation of the graphophone,

play09:25

which etches sounds onto wax, making voices whisper through time.

play09:30

-This is much more natural than anything else we've heard out there.

play09:33

Voice can make apps more natural to interact with and more accessible.

play09:38

It also unlocks a lot of use cases like language learning,

play09:41

and voice assistance.
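
A minimal sketch of the text-to-speech endpoint, assuming the openai Python SDK v1.x; the voice and input text are just examples:

```python
# Text-to-speech sketch (openai Python SDK v1.x): synthesize speech with one of the
# six preset voices and write it to an MP3 file.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",    # "tts-1-hd" trades latency for quality
    voice="alloy",    # other presets: echo, fable, onyx, nova, shimmer
    input="Welcome to DevDay. Let's make it an incredible day.",
)
speech.stream_to_file("welcome.mp3")
```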

play09:43

Speaking of new modalities,

play09:45

we're also releasing the next version

play09:47

of our open-source speech recognition model,

play09:49

Whisper V3 today, and it'll be coming soon to the API.

play09:53

It features improved performance across many languages,

play09:56

and we think you're really going to like it.
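
A minimal sketch of speech-to-text through the API as it exists today, assuming the openai Python SDK v1.x; the hosted endpoint currently serves the whisper-1 model, with V3 still to come, and meeting.mp3 is a placeholder:

```python
# Speech-to-text sketch (openai Python SDK v1.x): transcribe an audio file with Whisper.
from openai import OpenAI

client = OpenAI()

with open("meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```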

play09:58

Number five, customization.

play10:01

Fine-tuning has been working really well for GPT-3.5 since we launched it

play10:06

a few months ago.

play10:07

Starting today,

play10:08

we're going to expand that to the 16K version of the model.

play10:12

Also, starting today,

play10:14

we're inviting active fine-tuning users to apply for the GPT-4 fine-tuning,

play10:18

experimental access program.
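
A minimal sketch of kicking off a fine-tuning job on the newly expanded 16K GPT-3.5 Turbo model, assuming the openai Python SDK v1.x; the exact model identifier and the training file are assumptions for illustration:

```python
# Fine-tuning sketch (openai Python SDK v1.x): upload JSONL training data and start a job.
from openai import OpenAI

client = OpenAI()

training = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-3.5-turbo-1106",   # assumed identifier for the 16K GPT-3.5 Turbo model
)
print(job.id, job.status)
```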

play10:21

The fine-tuning API is great for adapting our models to achieve better performance

play10:25

in a wide variety of applications with a relatively small amount of data,

play10:29

but you may want a model to learn a completely new knowledge domain,

play10:33

or to use a lot of proprietary data.

play10:36

Today we're launching a new program called Custom Models.

play10:40

With Custom Models,

play10:41

our researchers will work closely with a company

play10:44

to help them make a great custom model, especially for them,

play10:48

and their use case using our tools.

play10:50

This includes modifying every step of the model training process,

play10:54

doing additional domain-specific pre-training,

play10:56

a custom RL post-training process tailored for a specific domain, and whatever else.

play11:02

We won't be able to do this with many companies to start.

play11:05

It'll take a lot of work, and in the interest of expectations,

play11:07

at least initially, it won't be cheap,

play11:09

but if you're excited to push things as far as they can currently go,

play11:12

please get in touch with us,

play11:14

and we think we can do something pretty great.

play11:17

Number six, higher rate limits.

play11:20

We're doubling the tokens per minute

play11:22

for all of our established GPT-4 customers,

play11:24

so it's easier to do more.

play11:26

You'll be able to request changes to further rate limits and quotas directly

play11:30

in your API account settings.

play11:32

In addition to these rate limits,

play11:34

it's important to do everything we can do to make you successful building

play11:39

on our platform.

play11:41

We're introducing copyright shield.

play11:44

Copyright shield means that we will step in and defend

play11:46

our customers

play11:47

and pay the costs incurred, if you face legal claims

play11:50

around copyright infringement, and this applies both

play11:53

to ChatGPT Enterprise and the API.

play11:57

Let me be clear, this is a good time to remind

play11:59

people: we do not train on data from the API or ChatGPT Enterprise, ever.

play12:06

All right.

play12:08

There's actually one more developer request

play12:10

that's been even bigger than all of these and so I'd like to talk about that now

play12:16

and that's pricing.

play12:17

[laughter]

play12:20

-GPT-4 Turbo

play12:22

is the industry-leading model.

play12:24

It delivers a lot of improvements that we just covered

play12:27

and it's a smarter model than GPT-4.

play12:32

We've heard from developers that there are a lot of things that they want to build,

play12:35

but GPT-4 just costs too much.

play12:38

They've told us that if we could decrease the cost by 20%, 25%, that would be great.

play12:43

A huge leap forward.

play12:46

I'm super excited to announce that we worked really hard on this

play12:49

and GPT-4 Turbo,

play12:51

a better model,

play12:52

is considerably cheaper than GPT-4 by a factor of 3x for prompt tokens.

play12:58

[applause]

play13:05

-And 2x for completion tokens starting today.

play13:09

[applause]

play13:12

-The new pricing is 1¢ per 1,000 prompt tokens

play13:15

and 3¢ per 1,000 completion tokens.

play13:18

For most customers,

play13:19

that will lead to a blended rate more than 2.75 times cheaper to use

play13:23

for GPT-4 Turbo than GPT-4.
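
To see where a figure like that can come from, here is a back-of-the-envelope comparison; the prior GPT-4 8K pricing ($0.03 / $0.06 per 1,000 prompt / completion tokens) is from OpenAI's published price list, and the 90/10 prompt-to-completion mix is an illustrative assumption:

```python
# Blended-rate comparison. The GPT-4 8K prices and the 90/10 token mix are assumptions.
gpt4_prompt, gpt4_completion = 0.03, 0.06      # $ per 1K tokens (GPT-4 8K)
turbo_prompt, turbo_completion = 0.01, 0.03    # $ per 1K tokens (GPT-4 Turbo)
prompt_share = 0.9                             # prompt-heavy traffic

gpt4_blended = prompt_share * gpt4_prompt + (1 - prompt_share) * gpt4_completion
turbo_blended = prompt_share * turbo_prompt + (1 - prompt_share) * turbo_completion
print(f"{gpt4_blended / turbo_blended:.2f}x cheaper")   # 2.75x at this mix
```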

play13:26

We worked super hard to make this happen.

play13:27

We hope you're as excited about it as we are.

play13:30

[applause]

play13:35

-We decided to prioritize price first because we had to choose one or the other,

play13:39

but we're going to work on speed next.

play13:41

We know that speed is important too.

play13:43

Soon you will notice GPT-4 Turbo becoming a lot faster.

play13:48

We're also decreasing the cost of GPT-3.5 Turbo 16K.

play13:53

Also, input tokens are 3x less and output tokens are 2x less.

play13:57

Which means that GPT-3.5 16K is now cheaper

play14:01

than the previous GPT-3.5 4K model.

play14:06

Running a fine-tuned GPT-3.5 Turbo 16K version

play14:09

is also cheaper than the old fine-tuned 4K version.

play14:13

Okay, so we just covered a lot about the model itself.

play14:16

We hope that these changes address your feedback.

play14:19

We're really excited to bring all of these improvements

play14:21

to everybody now.

play14:24

In all of this,

play14:25

we're lucky to have a partner who is instrumental in making it happen.

play14:30

I'd like to bring out a special guest, Satya Nadella, the CEO of Microsoft.

play14:34

[audience cheers]

play14:37

[music]

play14:39

-Good to see you. -Thank you so much.

play14:41

Thank you.

play14:42

-Satya, thanks so much for coming here.

play14:45

-It's fantastic to be here and Sam, congrats.

play14:48

I'm really looking forward to Turbo and everything else that you have coming.

play14:52

It's been just fantastic partnering with you guys.

play14:54

-Awesome. Two questions.

play14:55

I won't take too much of your time.

play14:56

How is Microsoft thinking about the partnership currently?

play14:59

-First-

play15:00

[laughter]

play15:03

--we love you guys. [laughter]

play15:05

-Look, it's been fantastic for us.

play15:09

In fact, I remember the first time I think you reached out

play15:11

and said, "Hey, do you have some Azure credits?"

play15:13

We've come a long way from there.

play15:15

-Thank you for those. That was great.

play15:18

-You guys have built something magical.

play15:20

Quite frankly, there are two things for us when it comes to the partnership.

play15:23

The first is these workloads.

play15:25

Even when I was listening backstage to how you're describing what's coming,

play15:28

even, it's just so different and new.

play15:30

I've been in this infrastructure business for three decades.

play15:33

-No one has ever seen infrastructure like this.

play15:35

-The workload, the pattern of the workload,

play15:39

these training jobs are so synchronous and so large, and so data parallel.

play15:45

The first thing that we have been doing is building in partnership with you,

play15:48

the system, all the way from thinking from power to the DC to the rack,

play15:53

to the accelerators, to the network.

play15:56

Just really the shape of Azure is drastically changed

play16:01

and is changing rapidly in support of these models

play16:04

that you're building.

play16:05

Our job, number one, is to build the best system

play16:09

so that you can build the best models

play16:10

and then make that all available to developers.

play16:13

The other thing is we ourselves are our developers.

play16:16

We're building products.

play16:17

In fact, my own conviction of this entire generation

play16:21

of foundation models completely changed the first time I saw GitHub Copilot

play16:25

on GPT.

play16:28

We want to build our GitHub Copilot all as developers on top of OpenAI APIs.

play16:36

We are very, very committed to that.

play16:37

What does that mean to developers?

play16:39

Look, I always think of Microsoft as a platform company,

play16:43

a developer company, and a partner company.

play16:45

For example, we want to make GitHub Copilot available,

play16:50

the Enterprise edition available to all the attendees here

play16:53

so that they can try it out.

play16:54

That's awesome. We are very excited about that.

play16:57

[applause]

play17:00

-You can count on us to build the best infrastructure in Azure

play17:05

with your API support

play17:07

and bring it to all of you.

play17:09

Even things like the Azure marketplace.

play17:11

For developers who are building products out here

play17:12

to get to market rapidly.

play17:14

That's really our intent here.

play17:17

-Great. How do you think about the future, future of the partnership,

play17:20

or future of AI, or whatever?

play17:23

Anything you want

play17:24

-There are a couple of things for me that I think are going to be very,

play17:29

very key for us.

play17:30

One is I just described how the systems that are needed

play17:36

as you aggressively push forward on your roadmap

play17:41

requires us to be on the top of our game and we intend fully to commit

play17:45

ourselves deeply to making sure

play17:47

you all as builders of these foundation models

play17:51

have not only the best systems for training and inference,

play17:55

but the most compute, so that you can keep pushing-

play17:57

-We appreciate that.

play17:58

--forward on the frontiers because I think that's the way

play18:01

we are going to make progress.

play18:02

The second thing I think both of us care about, in fact,

play18:05

quite frankly, the thing that excited both sides to come together is

play18:09

your mission and our mission.

play18:10

Our mission is to empower every person and every organization on the planet

play18:14

to achieve more.

play18:15

To me, ultimately AI is only going

play18:17

to be useful if it truly does empower.

play18:19

I saw the video you played early.

play18:21

That was fantastic to hear those voices describe what AI meant for them

play18:27

and what they were able to achieve.

play18:29

Ultimately, it's about being able to get the benefits

play18:31

of AI broadly disseminated to everyone,

play18:34

I think is going to be the goal for us.

play18:36

Then the last thing is of course, we are very grounded

play18:38

in the fact that safety matters,

play18:39

and safety is not something that you'd care about later,

play18:42

but it's something we do shift left on and we are very,

play18:44

very focused on that with you all.

play18:46

-Great. Well, I think we have the best partnership in tech.

play18:48

I'm excited for us to build AGI together.

play18:50

-Oh, I'm really excited. Have a fantastic [crosstalk].

play18:51

-Thank you very much for coming.

play18:52

-Thank you so much.

play18:53

-See you.

play18:55

[applause]

play19:03

-We have shared a lot of great updates for developers already and we got

play19:07

a lot more to come,

play19:08

but even though this is developer conference,

play19:10

we can't resist making some improvements to ChatGPT.

play19:15

A small one, ChatGPT now uses GPT-4 Turbo with all the latest improvements,

play19:20

including the latest knowledge cutoff, which will continue to update.

play19:23

That's all live today.

play19:25

It can now browse the web when it needs to, write and run code,

play19:28

analyze data, take and generate images,

play19:31

and much more.

play19:32

We heard your feedback, that model picker, extremely annoying,

play19:34

that is gone starting today.

play19:36

You will not have to click around the dropdown menu.

play19:38

All of this will just work together.

play19:41

Yes.

play19:42

[applause]

play19:47

-ChatGPT will just know what to use and when you need it,

play19:51

but that's not the main thing.

play19:54

Neither was price actually the main developer request.

play19:58

There was one that was even bigger than that.

play20:01

I want to talk about where we're headed and the main thing we're here to talk

play20:03

about today.

play20:05

We believe

play20:07

that if you give people better tools, they will do amazing things.

play20:10

We know that people want AI that is smarter, more personal,

play20:13

more customizable, can do more on your behalf.

play20:16

Eventually, you'll just ask the computer for what you need

play20:20

and it'll do all of these tasks for you.

play20:23

These capabilities are often talked in the AI field about as "agents."

play20:28

The upsides of this are going to be tremendous.

play20:31

At OpenAI, we really believe that gradual iterative deployment is

play20:36

the best way to address the safety issues, the safety challenges with AI.

play20:40

We think it's especially important to move carefully

play20:42

towards this future of agents.

play20:44

It's going to require a lot of technical work

play20:47

and a lot of thoughtful consideration by society.

play20:50

Today,

play20:52

we're taking our first small step that moves us towards this future.

play20:57

We're thrilled to introduce GPTs.

play21:01

GPTs are tailored versions of ChatGPT for a specific purpose.

play21:07

You can build a GPT,

play21:09

a customized version of ChatGPT for almost anything

play21:12

with instructions,

play21:13

expanded knowledge,

play21:14

and actions,

play21:16

and then you can publish it for others to use.

play21:19

Because they combine instructions, expanded knowledge, and actions,

play21:23

they can be more helpful to you.

play21:25

They can work better in many contexts, and they can give you better control.

play21:29

They'll make it easier for you to accomplish all sorts of tasks

play21:32

or just have more fun

play21:34

and you'll be able to use them right within ChatGPT.

play21:37

You can in effect program a GPT with language just by talking to it.

play21:42

It's easy to customize the behavior so that it fits what you want.

play21:46

This makes building them very accessible

play21:48

and it gives agency to everyone.

play21:51

We're going to show you what GPTs are,

play21:53

how to use them, how to build them,

play21:56

and then we're going to talk about how they'll be distributed

play21:58

and discovered.

play22:00

After that for developers, we're going to show you how to build

play22:02

these agent-like experiences into your own apps.

play22:05

First,

play22:07

let's look at a few examples.

play22:09

Our partners at Code.org are working hard to expand computer science in schools.

play22:15

They've got a curriculum that is used by tens of millions of students worldwide.

play22:19

Code.org, crafted Lesson Planner GPT, to help teachers provide

play22:24

a more engaging experience for middle schoolers.

play22:27

If a teacher asks it to explain for loops in a creative way,

play22:30

it does just that.

play22:32

In this case,

play22:33

it'll do it in terms of a video game character

play22:35

repeatedly picking up coins.

play22:37

Super easy to understand for an 8th-grader.

play22:40

As you can see, this GPT brings together Code.org's,

play22:43

extensive curriculum and expertise, and lets teachers adapt it to their needs

play22:47

quickly and easily.

play22:49

Next,

play22:51

Canva has built a GPT

play22:53

that lets you start designing by describing what you want

play22:55

in natural language.

play22:57

If you say, "Make a poster for a DevDay reception this afternoon,

play23:01

this evening," and you give it some details,

play23:04

it'll generate a few options to start with by hitting Canva's APIs.

play23:07

Now, this concept may be familiar to some of you.

play23:10

We've evolved our plugins to be custom actions for GPTs.

play23:14

You can keep chatting with this to see different iterations,

play23:17

and when you see one you like, you can click through to Canva

play23:20

for the full design experience.

play23:24

Now we'd like to show you a GPT Live.

play23:27

Zapier has built a GPT that lets you perform actions

play23:31

across 6,000 applications to unlock all kinds of integration possibilities.

play23:36

I'd like to introduce Jessica, one of our solutions architects,

play23:39

who is going to drive this demo.

play23:40

Welcome Jessica.

play23:42

[applause] -Thank you, Sam.

play23:44

Hello everyone.

play23:46

Thank you all.

play23:49

Thank you all for being here.

play23:51

My name is Jessica Shieh.

play23:52

I work with partners and customers to bring their product alive.

play23:55

Today I can't wait to show you how hard we've been working on this,

play23:59

so let's get started.

play24:01

To start, where your GPT will live is in this upper left corner.

play24:05

I'm going to start with clicking on the Zapier AI actions

play24:10

and on the right-hand side you can see that's my calendar for today.

play24:13

It's quite a day.

play24:15

I've already used this before, so it's actually already connected

play24:18

to my calendar.

play24:19

To start, I can ask,

play24:22

"What's on my schedule for today?"

play24:24

We build GPTs with security in mind.

play24:26

Before it performs any action or share data,

play24:30

it will ask for your permission.

play24:32

Right here, I'm going to say allowed.

play24:36

GPT is designed to take in your instructions, make the decision

play24:41

on which capability to call to perform that action,

play24:43

and then execute that for you.

play24:45

You can see right here, it's already connected to my calendar.

play24:49

It pulls in my information, and then I've also prompted it to identify

play24:54

conflicts on my calendar.

play24:56

You can see right here it actually was able to identify that.

play25:01

It looks like I have something coming up.

play25:04

What if I want to let Sam know that I have to leave early?

play25:06

Right here I say, "Let Sam know I got to go.

play25:11

Chasing GPUs."

play25:15

With that, I'm going to swap to my conversation with Sam

play25:21

and then I'm going to say, "Yes, please run that."

play25:26

Sam,

play25:27

did you get that?

play25:29

-I did.

play25:31

-Awesome.

play25:32

[applause]

play25:36

-This is only a glimpse of what is possible and I cannot wait to see

play25:40

what you all will build.

play25:41

Thank you. Back to you, Sam.

play25:43

[applause]

play25:51

-Thank you, Jessica.

play25:52

Those are three great examples.

play25:54

In addition to these,

play25:56

there are many more kinds of GPTs that people are creating and many,

play25:59

many more that will be created soon.

play26:02

We know that many people who want to build a GPT don't know how to code.

play26:07

We've made it so that you can program a GPT just by having a conversation.

play26:12

We believe that natural language is going to be a big part of how people use

play26:15

computers in the future and we think this is an interesting early example.

play26:19

I'd like to show you how to build one.

play26:25

All right. I want to create a GPT

play26:28

that helps give founders and developers advice

play26:30

when starting new projects.

play26:32

I'm going to go to create a GPT here,

play26:36

and this drops me into the GPT builder.

play26:40

I worked with founders for years at YC and still whenever I meet developers,

play26:43

the questions I get are always about, "How do I think about a business idea?

play26:47

Can you give me some advice?"

play26:49

I'm going to see if I can build a GPT to help with that.

play26:52

To start, GPT builder asks me what I want to make,

play26:55

and I'm going to say, "I want to help startup founders think

play27:00

through their business ideas

play27:04

and get advice.

play27:07

After the founder has gotten some advice,

play27:13

grill them

play27:15

on why they are not growing faster."

play27:18

[laughter]

play27:20

-All right.

play27:22

To start off, I just tell the GPT a little bit

play27:23

about what I want here.

play27:25

It's going to go off and start thinking about that,

play27:27

and it's going to write some detailed instructions for the GPT.

play27:31

It's also going to,

play27:32

let's see, ask me about a name.

play27:34

How do I feel about Startup Mentor?

play27:36

That's fine.

play27:37

"That's good."

play27:39

If I didn't like the name, of course, I could call it something else,

play27:41

but it's going to try to have this conversation with me and start there.

play27:45

You can see here on the right, in the preview mode

play27:49

that it's already starting to fill out the GPT.

play27:53

Where it says what it does, it has some ideas of additional questions

play27:57

that I could ask.

play27:58

[chuckles]

play28:01

It just generated a candidate.

play28:03

Of course, I could regenerate that or change it, but I like that.

play28:07

I'll say "That's great."

play28:13

You see now that the GPT is being built out a little bit more

play28:15

as we go.

play28:17

Now, what I want this to do,

play28:19

how it can interact with users, I could talk about style here.

play28:22

What I'm going to say is,

play28:25

"I am going to upload transcripts of some lectures

play28:31

about startups I have given,

play28:33

please give

play28:35

advice based off of those."

play28:38

All right.

play28:40

Now,

play28:42

it's going to go figure out how to do that.

play28:43

I would like to show you the configure tab.

play28:45

You can see some of the things that were built out here as we were going

play28:49

by the builder itself.

play28:51

You can see that there's capabilities here that I can enable.

play28:53

I could add custom actions.

play28:55

These are all fine to leave.

play28:57

I'm going to upload a file.

play29:00

Here is a lecture that I picked that I gave with some startup advice,

play29:05

and I'm going to add that here.

play29:07

In terms of these questions,

play29:09

this is a dumb one.

play29:11

The rest of those are reasonable, and very much things founders often ask.

play29:16

I'm going to add one more thing to the instructions here,

play29:19

which is be concise and constructive with feedback.

play29:25

All right.

play29:27

Again, if we had more time, I'd show you a bunch of other things.

play29:29

This is

play29:31

a decent start.

play29:32

Now,

play29:34

we can try it out over on this preview tab.

play29:36

I will say,

play29:38

what's a common question?

play29:44

"What are three things to look for when hiring employees

play29:47

at an early-stage startup?"

play29:53

Now, it's going to look at that document I uploaded.

play29:56

It'll also have of course all of the background knowledge of GPT-4.

play30:03

That's pretty good. Those are three things that I definitely have said many times.

play30:07

Now, we could go on and it would start following

play30:09

the other instructions and grill me on why I'm not growing faster,

play30:13

but in the interest of time,

play30:14

I'm going to skip that.

play30:15

I'm going to publish this only to me for now.

play30:18

I can work on it later.

play30:20

I can add more content, I can add a few actions

play30:22

that I think would be useful,

play30:24

and then I can share it publicly.

play30:26

That's what it looks like to create a GPT

play30:29

[applause] -Thank you.

play30:36

By the way,

play30:38

I always wanted to do that after all of the YC office hours,

play30:40

I always thought, "Man, someday I'll be able

play30:42

to make a bot that will do this and that'll be awesome."

play30:44

[laughter]

play30:46

-With GPTs, we're letting people easily share and discover all the fun ways

play30:51

that they use ChatGPT with the world.

play30:55

You can make private GPT like I just did,

play30:58

or you can share your creations publicly with a link for anyone to use,

play31:03

or if you're on ChatGPT Enterprise, you can make GPTs just for your company.

play31:10

Later this month we're going to launch the GPT store.

play31:17

Thank you.

play31:18

I appreciate that.

play31:19

[applause]

play31:25

-You can list a GPT there and we'll be able to feature the best

play31:28

and the most popular GPT.

play31:30

Of course, we'll make sure that GPTs in the store follow our policies

play31:34

before they're accessible.

play31:37

Revenue sharing is important to us.

play31:40

We're going to pay people who build the most useful and the most used GPT

play31:44

a portion of our revenue.

play31:46

We're excited to foster a vibrant ecosystem with the GPT store,

play31:50

just from what we've been building ourselves over the weekend.

play31:52

We're confident there's going to be a lot of great stuff.

play31:54

We're excited to share more information soon.

play31:58

Those are GPTs

play31:59

and we can't wait to see what you'll build.

play32:02

This is a developer conference, and the coolest thing about this

play32:05

is that we're bringing the same concept to the API.

play32:09

[applause]

play32:15

Many of you have already been building agent-like experiences on the API,

play32:20

for example,

play32:21

Shopify's Sidekick,

play32:23

which lets you take actions on the platform.

play32:25

Discord's Clyde,

play32:26

lets Discord moderators create custom personalities for their servers, and Snap's My AI,

play32:32

a customized chatbot that can be added to group chats and make recommendations.

play32:36

These experiences are great,

play32:38

but they have been hard to build.

play32:40

Sometimes taking months, teams of dozens of engineers,

play32:44

there's a lot to handle to make this custom assistant experience.

play32:49

Today, we're making that a lot easier with our new Assistants API.

play32:54

[applause]

play32:58

-The Assistants API includes persistent threads,

play33:01

so they don't have to figure out how to deal

play33:02

with long conversation history,

play33:04

built-in retrieval,

play33:07

code interpreter, a working Python interpreter

play33:09

in a sandbox environment,

play33:12

and of course the improved function calling,

play33:14

that we talked about earlier.

play33:17

We'd like to show you a demo of how this works.

play33:19

Here is Romain, our head of developer experience.

play33:22

Welcome, Romain.

play33:23

[music] [applause]

play33:25

-Thank you, Sam.

play33:27

Good morning.

play33:29

Wow.

play33:30

It's fantastic to see you all here.

play33:33

It's been so inspiring to see so many of you infusing AI

play33:37

into your apps.

play33:38

Today, we're launching new modalities in the API, but we are also very excited

play33:43

to improve the developer experience for you all to build

play33:46

assistive agents.

play33:48

Let's dive right in.

play33:50

Imagine I'm building Wanderlust, a

play33:52

travel app for global explorers, and this is the landing page.

play33:56

I've actually used GPT-4 to come up with these destination ideas.

play33:59

For those of you with a keen eye, these illustrations

play34:02

are generated programmatically using the new DALL-E 3 API available

play34:06

to all of you today.

play34:07

It's pretty remarkable.

play34:11

Let's enhance this app by adding a very simple assistant to it.

play34:15

This is the screen.

play34:16

We're going to come back to it in a second.

play34:17

First, I'm going to switch over to the new assistant's playground.

play34:21

Creating an assistant is easy, you just give it a name,

play34:24

some initial instructions, a model.

play34:26

In this case, I'll pick GPT-4 Turbo.

play34:29

Here I'll also go ahead and select some tools.

play34:31

I'll turn on Code Interpreter and retrieval and save.

play34:35

That's it. Our assistant is ready to go.

play34:39

Next, I can integrate with two new primitives

play34:41

of this Assistants API,

play34:43

threads and messages.

play34:45

Let's take a quick look at the code.

play34:48

The process here is very simple.

play34:50

For each new user, I will create a new thread.

play34:54

As these users engage with their assistant,

play34:56

I will add their messages to the threads.

play34:59

Very simple.

play35:00

Then I can simply run the assistant at any time to stream the responses

play35:04

back to the app.
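
A minimal sketch of that flow, assuming the openai Python SDK v1.x (beta namespace); the assistant name and instructions are placeholders:

```python
# Assistants API sketch (openai Python SDK v1.x, beta): one thread per user, append
# messages as they chat, then run the assistant and read its reply.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Travel concierge",                           # illustrative name and instructions
    instructions="You are a helpful travel assistant.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}, {"type": "retrieval"}],
)

thread = client.beta.threads.create()                  # persistent per-user conversation state

client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Hey, let's go to Paris."
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):         # simple polling loop
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, message.content[0].text.value)
```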

play35:06

We can return to the app and try that in action.

play35:10

If I say, "Hey, let's go to Paris."

play35:15

All right.

play35:16

That's it. With just a few lines of code, users can now have

play35:19

a very specialized assistant right inside the app.

play35:24

I'd like to highlight one of my favorite features here,

play35:26

function calling.

play35:27

If you have not used it yet, function calling is really powerful.

play35:31

As Sam mentioned, we are taking it a step further today.

play35:34

It now guarantees the JSON output with no added latency,

play35:38

and you can invoke multiple functions at once for the first time.

play35:43

Here, if I carry on and say, "Hey, what are the top 10 things to do?"

play35:49

I'm going to have the assistant respond to that again.

play35:53

Here, what's interesting is that the assistant knows about functions,

play35:56

including those to annotate the map that you see on the right.

play36:00

Now, all of these pins are dropping in real-time here.

play36:04

Yes, it's pretty cool.

play36:06

[applause]

play36:09

-That integration allows our natural language interface

play36:13

to interact fluidly with components and features of our app.

play36:16

It truly showcases now the harmony you can build between AI

play36:21

and UI where the assistant is actually taking action.

play36:25

Let's talk about retrieval.

play36:27

Retrieval is about giving our assistant more knowledge

play36:30

beyond these immediate user messages.

play36:33

In fact, I got inspired and I already booked my tickets to Paris.

play36:37

I'm just going to drag and drop here this PDF.

play36:40

While it's uploading, I can just sneak peek at it.

play36:43

Very typical United Flight ticket.

play36:46

Behind the scene here, what's happening is that retrieval

play36:49

is reading these files,

play36:51

and boom, the information about this PDF appeared on the screen.

play36:55

[applause]

play36:57

-This is, of course, a very tiny PDF, but Assistants

play37:01

can parse long-form documents from extensive text

play37:04

to intricate product specs depending on what you're building.

play37:07

In fact, I also booked an Airbnb, so I'm just going to drag that

play37:09

over to the conversation as well.

play37:12

By the way, we've heard from so many of you developers how hard

play37:15

that is to build yourself.

play37:17

You typically need to compute your own embeddings,

play37:19

you need to set up a chunking algorithm.

play37:21

Now all of that is taken care of.

play37:24

There's more than retrieval with every API call,

play37:27

you usually need to resend the entire conversation history,

play37:31

which means setting up a key-value store, that means handling the context windows,

play37:35

serializing messages, and so forth.

play37:37

That complexity now completely goes away with this new stateful API.

play37:43

Just because OpenAI is managing this API, does not mean it's a black box.

play37:47

In fact, you can see the steps that the tools are taking

play37:49

right inside your developer dashboard.

play37:52

Here, if I go ahead and click on threads,

play37:56

this is the thread I believe we're currently working on and see,

play37:59

these are all the steps, including the functions

play38:02

being called with the right parameters, and the PDFs I've just uploaded.

play38:08

Let's move on to a new capability that many of you have been requesting

play38:11

for a while.

play38:12

Code Interpreter is now available today in the API as well,

play38:16

that gives the AI the ability to write and execute code on the fly,

play38:20

but even generate files.

play38:22

Let's see that in action.

play38:24

If I say here, "Hey, we'll be four friends staying

play38:29

at this Airbnb,

play38:33

what's my share of it plus my flights?"

play38:40

All right.

play38:42

Now, here,

play38:44

what's happening is that Code interpreter noticed that it should write some code

play38:48

to answer this query.

play38:49

Now it's computing the number of days in Paris, number of friends.

play38:53

It's also doing some exchange rate calculation behind

play38:55

the scenes to get the answer for us.

play38:58

Not the most complex math, but you get the picture.

play39:01

Imagine you're building a very complex finance app

play39:04

that's crunching countless numbers, plotting charts,

play39:07

so really any task that you'd normally tackle with code,

play39:10

then Code Interpreter will work great for you.

play39:13

All right. I think my trip to Paris is solid.

play39:16

To recap here, we've just seen how you can quickly create an assistant

play39:20

that manages state for your user conversations,

play39:22

leverages external tools like knowledge and retrieval and Code Interpreter,

play39:26

and finally invokes your own functions to make things happen

play39:32

but there's one more thing I wanted to show you to really open up

play39:35

the possibilities using function calling combined with our new modalities

play39:39

that we're launching today.

play39:41

While working on DevDay, I built a small custom assistant

play39:45

that knows everything about this event,

play39:47

but instead of having a chat interface

play39:49

while running around all day today,

play39:51

I thought, why not use voice instead?

play39:54

Let's bring my phone up on screen here so you can see it on the right.

play39:58

Awesome.

play39:59

On the right, you can see a very simple Swift app that takes

play40:01

microphone input.

play40:04

On the left, I'm actually going to bring up my terminal log

play40:06

so you can see what's happening behind the scenes.

play40:09

Let's give it a shot.

play40:12

Hey there, I'm on the keynote stage right now.

play40:14

Can you greet our attendees here at Dev Day?

play40:21

-Hey everyone, welcome to DevDay.

play40:23

It's awesome to have you all here.

play40:25

Let's make it an incredible day.

play40:27

[applause]

play40:32

-Isn't that impressive?

play40:33

You have six unique and rich voices to choose from in the API,

play40:37

each speaking multiple languages,

play40:39

so you can really find the perfect fit for your app.

play40:42

On my laptop here on the left,

play40:44

you can see the logs of what's happening

play40:45

behind the scenes, too.

play40:46

I'm using Whisper to convert the voice inputs into text,

play40:50

an assistant with GPT-4 Turbo, and finally,

play40:52

the new TTS API to make it speak.
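
A minimal sketch of the same three-step pipeline, assuming the openai Python SDK v1.x; the file names and voice choice are placeholders:

```python
# Voice pipeline sketch (openai Python SDK v1.x): Whisper for speech-to-text, GPT-4 Turbo
# for the reply, and the TTS endpoint to speak it back.
from openai import OpenAI

client = OpenAI()

# 1) Speech to text
with open("question.wav", "rb") as audio:
    text_in = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2) Generate a reply
reply = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": text_in}],
).choices[0].message.content

# 3) Text back to speech
client.audio.speech.create(model="tts-1", voice="nova", input=reply).stream_to_file("reply.mp3")
```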

play40:56

Thanks to function calling, things get even more interesting

play40:59

when the assistant can connect to the internet and take

play41:02

real actions for users.

play41:04

Let's do something even more exciting here together.

play41:07

How about this?

play41:10

Hey, Assistant, can you randomly select five DevDay attendees here

play41:15

and give them $500 in OpenAI credits?

play41:18

[laughter]

play41:20

-Yes, checking the list of attendees.

play41:25

[laughter]

play41:27

-Done. I picked five DevDay attendees and added $500 of API credits

play41:30

to their account.

play41:32

Congrats to Christine M,

play41:33

Jonathan C, Steven G, Luis K, and Suraj S.

play41:38

-All right, if you recognize yourself, awesome.

play41:40

Congrats.

play41:43

That's it.

play41:44

A quick overview today of the new Assistants API

play41:46

combined with some of the new tools and modalities that we launched,

play41:50

all starting with the simplicity of a rich text

play41:52

or voice conversation for your end users.

play41:56

We really can't wait to see what you build,

play41:58

and congrats to our lucky winners.

play42:00

Actually,

play42:01

you know what?

play42:02

you're all part of this amazing OpenAI community here

play42:04

so I'm just going to talk to my assistant

play42:06

one last time before I step off the stage.

play42:10

Hey Assistant, can you actually give everyone here in the audience $500

play42:15

in OpenAI credits?

play42:17

-Sounds great.

play42:18

Let me go through everyone.

play42:21

[applause]

play42:26

-All right,

play42:28

that function will keep running,

play42:30

but I've run out of time.

play42:32

Thank you so much, everyone.

play42:33

Have a great day. Back to you, Sam.

play42:44

-Pretty cool, huh?

play42:46

[audience cheers]

play42:49

-All right, so that Assistants API goes into beta today,

play42:52

and we are super excited to see what you all do with it,

play42:55

anybody can enable it.

play42:57

Over time,

play42:59

GPTs and Assistants are precursors to agents that

play43:02

are going to be able to do much much more.

play43:05

They'll gradually be able to plan

play43:07

and to perform more complex actions on your behalf.

play43:11

As I mentioned before,

play43:12

we really believe in the importance of gradual iterative deployment.

play43:16

We believe it's important for people to start building with and using

play43:19

these agents now to get a feel for what the world is going to be like,

play43:23

as they become more capable.

play43:25

As we've always done,

play43:26

we'll continue to update our systems based off of your feedback.

play43:32

We're super excited that we got to share all of this with you today.

play43:35

We introduced GPTs,

play43:37

custom versions of GPT that combine instructions, extended

play43:42

knowledge and actions.

play43:44

We launched the Assistants API

play43:45

to make it easier to build assistive experiences with your own apps.

play43:49

These are your first steps towards AI agents and we'll be increasing

play43:53

their capabilities over time.

play43:56

We introduced a new GPT-4 Turbo model that delivers improved function calling,

play44:01

knowledge, lowered pricing, new modalities, and more.

play44:05

We're deepening our partnership with Microsoft.

play44:09

In closing,

play44:10

I wanted to take a minute to thank the team that creates all of this.

play44:13

OpenAI has got remarkable talent density, but still, it takes

play44:17

a huge amount of hard work and coordination to make all this happen.

play44:21

I truly believe that I've got the best colleagues in the world.

play44:23

I feel incredibly grateful to get to work with them.

play44:27

We do all of this because we believe that AI is going to be

play44:30

a technological and societal revolution.

play44:33

It'll change the world in many ways

play44:35

and we're happy to get to work on something that will empower all of you

play44:38

to build so much for all of us.

play44:41

We talked about earlier how if you give people better tools,

play44:44

they can change the world.

play44:46

We believe that AI will be about individual empowerment and agency

play44:50

at a scale that we've never seen before and that will elevate humanity

play44:53

to a scale that we've never seen before either.

play44:55

We'll be able to do more, to create more,

play44:58

and to have more.

play45:00

As intelligence gets integrated everywhere,

play45:02

we will all have superpowers on demand.

play45:05

We're excited to see what you all will do with this technology

play45:08

and to discover the new future that we're all going

play45:10

to architect together.

play45:12

We hope that you'll come back next year.

play45:14

What we launched today is going to look very quaint relative

play45:17

to what we're busy creating for you now.

play45:19

Thank you for all that you do.

play45:21

Thank you for coming here today.

play45:22

[applause]

play45:28

[music]