OpenAI Releases GPT-4o Mini! How Does It Compare To Other Models?
Summary
TL;DR: This podcast script discusses the latest developments in the AI industry, in particular the launch of the low-cost GPT-4o Mini, which is meant to broaden the range of AI applications and make AI more accessible. It covers model improvements and cost reductions that point toward a future in which AI is integrated into every app and every website. Challenges and competitive strategies against Google are also discussed.
Takeaways
- 😀 The podcast had technical problems, but it carries on with discussions about AI, business, and comedy.
- 🛍️ The speaker is looking for a job at a SaaS marketing company or in product, and stresses the importance of AI for the future.
- 🚗 He talks about the Cybertruck and his financial need to close the deal in order to pay for it.
- 📈 The launch of GPT-4o mini, presented as the most cost-efficient small model, aims to expand the range of AI applications and make them more accessible.
- 💰 GPT-4o mini is an order of magnitude cheaper than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo, at 15 cents per million input tokens and 60 cents per million output tokens.
- 🔍 GPT-4o mini supports text and vision in the API, with support for video and audio inputs and outputs coming in the future.
- 📚 The model has a context window of 128,000 tokens, supports up to 16k output tokens per request, and has knowledge up to October 2023.
- 📉 Improvements in the tokenizer make GPT-4o mini more cost-efficient at handling non-English text, alongside superior textual capabilities.
- 🏆 GPT-4o mini outperforms GPT-3.5 Turbo and other small models on academic benchmarks, in both textual intelligence and multimodal reasoning.
- 🔑 GPT-4o mini's function-calling capabilities let developers build applications that fetch data or take actions with external systems, and it improves long-context performance compared to GPT-3.5 Turbo.
- 📊 The benchmark results presented show GPT-4o mini beating other small models across several areas, including math and coding.
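The pricing in the takeaways above reduces to simple arithmetic; a minimal sketch (the per-million prices are the ones quoted in the episode, while the example token counts are invented for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return input_tokens / 1e6 * input_per_m + output_tokens / 1e6 * output_per_m

# GPT-4o mini as quoted: $0.15 per million input, $0.60 per million output.
# A hypothetical request with 10k prompt tokens and 1k completion tokens:
print(f"${request_cost(10_000, 1_000, 0.15, 0.60):.4f}")  # $0.0021
```

At these rates, even a fairly large request costs a fraction of a cent, which is the point the episode keeps returning to.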
Q & A
What is the main topic of the podcast?
-The main topic of the podcast is a discussion of artificial intelligence, business strategy, and comedy.
What is the current price for processing input tokens with GPT-4o Mini?
-The price for processing input tokens with GPT-4o Mini is 15 cents per million input tokens.
How much does processing output tokens with GPT-4o Mini cost?
-Processing output tokens with GPT-4o Mini costs 60 cents per million output tokens.
What is the comparison price for input tokens with GPT-3.5 Turbo?
-The price for input tokens with GPT-3.5 Turbo is 50 cents per million input tokens.
What capabilities does GPT-4o Mini offer?
-GPT-4o Mini supports text and vision in the API, with support for text, image, video, and audio inputs and outputs coming in the future.
How large is GPT-4o Mini's context window?
-GPT-4o Mini's context window is 128,000 tokens, and it supports up to 16,000 output tokens per request.
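A small sketch of what those two limits imply for a single request (pure arithmetic; counting real tokens would need the model's tokenizer, e.g. OpenAI's tiktoken, which is not used here):

```python
CONTEXT_WINDOW = 128_000   # total tokens the model can attend to
MAX_OUTPUT = 16_000        # maximum output tokens per request

def max_prompt_tokens(desired_output: int) -> int:
    """Largest prompt that still leaves room for the desired completion."""
    if desired_output > MAX_OUTPUT:
        raise ValueError(f"output is capped at {MAX_OUTPUT} tokens per request")
    return CONTEXT_WINDOW - desired_output

# Asking for the full 16k completion still leaves a 112k-token prompt budget.
print(max_prompt_tokens(16_000))  # 112000
```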
What is the main cost advantage of the Batch API?
-The Batch API can halve the cost of processing input and output tokens, since it is used for tasks that do not require immediate results.
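A sketch of that discount, assuming the 50% batch reduction applies uniformly to both input and output prices (the halved figures below are computed, not quoted in the episode):

```python
def batch_price(sync_price_per_m: float, discount: float = 0.5) -> float:
    """Batch-API price per million tokens, given the synchronous price."""
    return sync_price_per_m * (1 - discount)

# GPT-4o mini as quoted: 15 cents in / 60 cents out per million tokens.
print(batch_price(0.15), batch_price(0.60))  # 0.075 0.3
```

The trade-off is latency: batch jobs are queued and processed later, which is exactly why they can be cheaper.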
What does Superhuman's email tracking offer?
-Superhuman's email tracking lets senders see when and on which device recipients open their emails, by embedding a tiny tracking pixel in sent emails.
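Mechanically, the tracking described above is just a uniquely named 1×1 image appended to the email body; a minimal sketch, in which the tracking domain and ID scheme are invented for illustration (the server that logs the image fetches is not shown):

```python
import uuid

def add_tracking_pixel(html_body: str,
                       base_url: str = "https://track.example.com/open") -> tuple[str, str]:
    """Append an invisible 1x1 tracking image to an HTML email body.

    When the recipient's mail client fetches the image, the server behind
    base_url can log the timestamp and user agent for this pixel_id.
    """
    pixel_id = uuid.uuid4().hex  # unique per sent message
    pixel = (f'<img src="{base_url}/{pixel_id}.gif" '
             'width="1" height="1" alt="" style="display:none">')
    return html_body + pixel, pixel_id

html, pid = add_tracking_pixel("<p>Hi! Quick question about the demo.</p>")
assert pid in html  # the message now carries its unique beacon
```

This is the same mechanism ad networks and analytics tools use across the web, as the hosts go on to discuss.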
What is OpenAI's goal regarding cost reduction and model capabilities?
-OpenAI's goal is to keep driving costs down while improving model capabilities, enabling seamless integration of models into every app and on every website.
What is the current state of OpenAI's development with regard to voice features?
-OpenAI plans to start the alpha of the voice feature this month, with general availability planned for some time after.
Outlines
😀 AI, business, and comedy podcast
The podcast opens with a discussion of technical difficulties and an overview of its topics: AI, business, and comedy. The speaker mentions that he is looking for a job at a SaaS marketing or product company. The podcast also announces the release of GPT-4o Mini, a cost-efficient AI model intended to expand the range of AI applications by making intelligence more affordable. GPT-4o Mini scores 82% on MMLU, is an order of magnitude cheaper than previous models, and is more than 60% cheaper than GPT-3.5 Turbo. The episode also covers the Mini's text and vision capabilities in the API, as well as future support for video and audio inputs and outputs.
😉 Model development and adoption
This section covers the challenges of AI model development and adoption, with a particular focus on the experience of companies like Superhuman, which are under pressure from the race to integrate AI models. It also discusses the use of tracking pixels in emails and advertising to track and personalize user behavior. The speaker praises tracking pixels and their importance for marketing strategies. The section also notes the role of maage as chat moderator, who can now deal with unwanted posts.
😲 AI models getting ever cheaper
The speaker discusses the significant price drops for AI models, especially GPT-4o Mini, which costs 15 cents per million input tokens and 60 cents per million output tokens, a tenth of the cost of earlier models. The Batch API is also covered, which halves the cost of tasks that do not need to be processed immediately. The speaker criticizes OpenAI's complicated pricing structure and compares the costs of various models, underscoring the substantial cost reduction of GPT-4o Mini relative to other models.
🤔 AI models as tools for scientific hypotheses
This section highlights the ability of current AI models to generate scientific hypotheses that can then be tested and refined. The speaker shares that he attended an OpenAI developer forum where a researcher spoke about using AI to capture the first images of a black hole. There is also a discussion of how AI models can operate in an environment where they know no theories of physics yet derive theories from what they experience. The speaker is intrigued by the idea that AI models might one day develop their own scientific theories.
😎 OpenAI's upcoming launch of voice features
The final section focuses on OpenAI's upcoming launch of voice features, which will become generally available some time after an alpha. The speaker also discusses OpenAI's pricing and compares it with other providers such as Claude, pointing to GPT-4o mini's favorable price of 15 cents per million input tokens versus 300 cents per million input tokens for Claude 3 Sonnet. The speaker closes by asking listeners to like, subscribe, and support the show through its website.
Keywords
💡AI
💡SaaS
💡GPT
💡MMLU
💡Cost efficiency
💡Chatbots
💡Vision
💡Token
💡Batch API
💡Superhuman
💡Tracking pixel
Highlights
Introduction of GPT-4o mini, a cost-efficient AI model aimed at making AI applications more affordable.
GPT-4o mini scores 82% on MMLU and outperforms GPT-4 on chat preferences on the LMSYS leaderboard.
Pricing of GPT-4o mini at 15 cents per million input tokens and 60 cents per million output tokens, significantly cheaper than previous models.
GPT-4o mini enables a broad range of tasks with low cost and latency, including customer support chatbots.
Support for text and vision in the API, with future support for image, video, and audio inputs and outputs.
Model context window of 128,000 tokens and knowledge up to October 2023, but no inclusion of recent events like Terrence Howard's new math.
An improved tokenizer makes GPT-4o mini's handling of non-English text more cost-effective.
GPT-4o mini surpasses GPT-3.5 Turbo and other small models in academic benchmarks for textual intelligence and multimodal reasoning.
GPT-4o mini's strong performance in function calling enables developers to build applications that interact with external systems.
Evaluation of GPT-4o mini across key benchmarks shows its superiority in reasoning tasks involving text and vision.
GPT-4o mini excels in mathematical reasoning and coding tasks, outperforming previous small models.
Partnership with companies like Ramp to understand use cases and limitations of GPT-4o mini.
Discussion of the impact of GPT-4o mini on companies like Superhuman, which faces pressure from integrating AI functionalities.
The importance of tracking pixels in emails for sales and marketing, similar to their use in advertising and web analytics.
GPT-4o mini's pricing is 10 times cheaper than previous models, making AI more accessible and integrated into daily digital experiences.
OpenAI's commitment to reducing costs while enhancing model capabilities, paving the way for developers to build AI applications more efficiently.
Introduction of the temporary chat feature in GPT-4o, which doesn't appear in chat history for safety and compliance reasons.
Discussion of the potential of AI models to create their own scientific hypotheses, as demonstrated at an OpenAI developer event.
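The function-calling highlight above means the model replies with the name and JSON arguments of a tool the developer declared, and the developer's own code executes it. A minimal, offline sketch of the dispatch side; the tool schema follows the OpenAI-style JSON-schema convention, and `get_weather` with its canned return value is invented for illustration (no API call is made here):

```python
import json

# Tool declared to the model, in the OpenAI-style JSON-schema convention.
TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> str:
    # Placeholder; a real tool would call an external weather service.
    return f"Sunny in {city}"

def dispatch(tool_call: dict) -> str:
    """Execute the function the model asked for, with its JSON arguments."""
    name = tool_call["name"]
    if name not in TOOLS:
        raise ValueError(f"model requested unknown tool: {name}")
    args = json.loads(tool_call["arguments"])
    return {"get_weather": get_weather}[name](**args)

# Shape of the tool call a model reply might contain:
print(dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'}))  # Sunny in Paris
```

The result string would then be sent back to the model as a tool message so it can compose its final answer.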
Transcripts
Welcome. This podcast had technical difficulties, on my side. It's just, you know, the way it is; it's not like I work for a tech company or anything. Uh, we talk about AI, business, and comedy. We also have, uh, it's D-minus comedy. And, um, yeah, like and subscribe. So, good news today. Right, that was very compelling, very enthusiastic. Compelling. I'm trying to get a job at either an enterprise SaaS marketing company, or either in SaaS sales or, what, you gonna look at product? Just pay me now, come on, Jesus Christ, just pay me already. Yeah, I gotta pay for the Cybertruck, because it's always falling apart now, and I need you to really close this deal. Okay, so: GPT-4o, "40 ounce," mini, advancing cost-efficient intelligence. Yeah, it's a little 40 ounce right there. There's no Z in there. You're right, fair enough. Uh, a Z in my heart is what I imagine it's being. You're just upset 'cause they went off brand and put an O there instead of saying GPT-5. Yeah, couldn't they do, like, four? I don't know, yeah. Even Sam Altman mentions it, like, uh, where does he say it? "You guys need a naming scheme revamp so bad," and he's like, "yes, we do." Uh, yeah, don't go Nintendo on this: N64, 64DD, Wii, Wii U, etc. It's just confusing.
So okay, well, yep, GPT-4o mini has been released. Um: "Today we're announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences on the LMSYS leaderboard. It's priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo." Yay, that's what we like to hear: tokens getting cheaper. "GPT-4o mini enables a broad range of tasks with low cost and latency, such as applications that chain or parallelize multiple model calls (calling multiple APIs), pass a large volume of context to the model (full code base or conversation history), or interact with customers through fast, real-time text responses (customer support chatbots)." So basically, I like this trend of it getting cheaper and cheaper and cheaper, which means there's no excuse for companies not to just put LLMs everywhere. Any interface that I can type something into, I want an LLM there. So I'm very happy about this. "Today GPT-4o mini supports text and vision in the API, with support for text, image, video, and audio inputs and outputs coming in the future. The model has a context window of 128,000 tokens, supports up to 16k output tokens per request, and has knowledge up to October 2023." So, unfortunately, it doesn't have Terrence Howard's new math in it, so this thing's already flawed. Can't have everything, exactly.
"Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now more cost-effective." A small model with superior textual intelligence and multimodal reasoning: "GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks in both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong performance in function calling, which can enable developers to build applications that fetch data or take actions with external systems, and improves long-context performance compared to GPT-3.5 Turbo. GPT-4o mini has been evaluated across several key benchmarks. Reasoning tasks: GPT-4o mini is better than other small models at reasoning tasks involving text and vision, scoring 82% on MMLU, a textual intelligence and reasoning benchmark, as compared to 77.9% for Gemini Flash and 73.8% for Claude Haiku. Math and coding proficiency: GPT-4o mini excels in mathematical reasoning and coding tasks, outperforming previous small models. On MGSM, measuring math reasoning, GPT-4o mini scored 87%," compared to blah blah blah blah blah. Okay, so that was your summary for that. Now the benchmarks, yeah.
So here's orange; these are very pretty, I'll say, very pretty colors. Here are all the benchmarks: MMLU, GPQA, DROP, whatever. The orange is mini, the yellow is Flash, and there's GPT-4o. So it looks like it's outperforming all of the smaller models, I guess, which is cool. But again, everyone's use case is different, so I want to hear from people how they're going to be using it. Well, no, it's just, it's like, look, we're better by, like, one percentage point than this, and people on Twitter are like, "oh my God, this changes the game," and then three, four weeks later people are like, "actually, I'm going back to my previous model, I like my previous model better." And so it's the same circle jerk that keeps going on. "As part of our model development process, we work with a handful of trusted partners to better understand use cases and limitations of GPT-4o mini. We partnered with companies like Ramp." I haven't used Ramp before. What do they do? "Spending made smarter: easy-to-use cards, spend limits, approval flows, vendor payments, and more, plus average savings of 5%." Is Ramp a ZIRP company? Let's see here. When was Ramp founded? I'm using regular Google search: 2019, so, uh, tail end of ZIRP, the zero-interest-rate-policy era.
Okay, so let's see here. Um, where were we? Superhuman. Now, Superhuman is getting pressure, because they were the "we're going to improve email" company. And actually, yesterday I was at an OpenAI event and I was speaking to an engineer who just left Superhuman, and they said that once all these LLMs came out, they were praying to God that Gmail didn't integrate them quickly enough, and they were trying to integrate them faster. And now Gmail is slowly integrating them, and this company is under tons and tons of pressure, because they want you to pay 30 bucks a month for their email that uses all these different AI functionalities, but they're just plugging in ChatGPT already. So with Gmail basically slowly integrating their own Gemini models, it really puts pressure on this company.

So okay, um, I thought the big deal about Superhuman was that you could tell if people had seen your email, or opened it, or scrolled through different parts of it. Oh, again, that snitch functionality. Yeah, isn't the snitch functionality the part that everyone's really excited about? Yeah: "unsubscribe and clear spam instantly, snooze email for later, fly through your inbox, get more time back." Let's see here, let's go to our friend over here. It has a nice UI as well, it looks beautiful. Does Superhuman allow you to track when people open your email? Okay, let's see. "Yeah, Superhuman allows users to track when recipients open their emails through a feature called" snitch? No, "read statuses. Here are the key details about Superhuman's email tracking capabilities." Snitch would be better marketing. "Read statuses show when and on which device recipients open your emails. This feature works by embedding a tiny..." what, Perplexity, what did you just do there? Okay: "this feature works by embedding a tiny tracking pixel image in sent emails. When recipients open the email, it loads the image, allowing Superhuman to log when it was opened. Users can enable read statuses by using Command-K on desktop." Okay, so, the snitch feature. Good call, Joe. So it snitches on you and uses AI.

This little tracking pixel idea is employed in a whole bunch of different ways all over the internet, mhm, to great effect. I mean, a lot of people who use Superhuman really want to know, you know: I'm a salesman, I'm trying to send somebody an email about my product, and I want to know if they've opened it yet, right, especially if I'm hoping to call them or meet with them. I want to know if they spent time with it. Did they scroll through it? And if I attached a presentation, did they open the presentation? Did they go to certain slides? Which slide did they spend the most time on? Those are all really interesting and valuable pieces of information before I meet with that person. Exactly, so you can tailor your pitch. Right, and at the same time, all across the advertising ecosystem, advertisements and the pages they lead to all have tracking pixels in them, and all the open content sites have tracking pixels in them, so that I can know when advertising is followed up on or viewed. When a person comes to my ad or to my website, I can know what all they've done in the past, what their interests are, and so on, all because these stupid tracking pixels are embedded in everything. God bless them. God bless tracking pixels: always watching, comforting us, providing us the data that we need. God bless. Yeah, they'll never let you browse alone. They rock me to sleep at night and tuck me in. It's fantastic. That's right.
Okay, uh, also kudos to maage, he's been promoted to chat mod. So basically, when we get random AIs who come in here and just put in walls of text that confuse me, he can now mute them. First line of defense, yes, and he gets a nice little, right, exactly, a nice little wrench on his profile, which is dope. Nice. Um, we had Frisco fat sees say he's tuned in and muted during a doctor's appointment. That's the type of dedication we want on this show. That is dedication. The doctor could come in and be like, "it's not looking good," but Frisco has his headphones on, he's listening to the sick podcast, laughing, and the doctor's like, "wow, he took that really well," and it's because he wasn't paying attention, he was listening to us. So thank you. Anyway, getting back to GPT.
That's right, that's why we're here. Yes, this thing is dramatically cheap. Exactly. If you look at the summary page, deep inside OpenAI's horrible marketing layout, there's a summary page that lists all the old models and what the cost is for input and output tokens. Yeah, and they're all numbers like, you know, $10 a million, going all the way back to the ancient models at, like, $2 a million or 40 cents a million. Right, but then you look at the crazy numbers for 4o mini, and it's like 15 cents a million or something crazy. It's really cheap. Yeah, mini says it's, uh, 15 cents a million input and 60 cents a million output, which is like 10 times cheaper. See, the benchmark stuff is great and everything, and the benchmark stuff is more proof, like, hey, you're not going to see a diminishment in value, but what I really care about is it getting cheaper, and that's the big story. So, super duper cheap, because look: GPT-3.5 Turbo is 50 cents per million input tokens, and mini is 15 cents. And then let's keep going, let's go back further. Pretty crazy. Yeah, fine-tuned models, $3 per million.

What else do they have here? Have you ever used this batch API they're talking about? No, I have not, I don't even know it. They're saying if you use the batch API, they'll cut your cost again, it looks like in half. Mhm. And the batch API is just, you don't need the result right away, you can wait for it. Ah, see, this is exactly... so, you know Zapier? I hate their pricing, it's ridiculous. They want, like, $30 a month and I barely run any tasks on it. And one of our engineers, uh, Braun draa, who is an awesome person, was like, what they should do is a tier for Jordan where, basically, you don't care when it runs, you get deprioritized in the queue, but with the benefit of a much cheaper price. Yeah, and it sounds like this is what the batch API does, which is pretty sweet. Yeah, it's for when you're trying to pre-process something, or build an index, or all these kinds of background tasks where you're not in a hurry.
Anyway, so they have a segment on this crazy pricing page that says "older models," and it lists everything except for GPT-4o mini, mhm, and actually GPT-4o; those two aren't on this list. But yeah, it describes all these token costs, if you really want to compare them all. Yeah, all the interesting models are up in this sort of $10-per-million kind of range. Right, except for GPT-4 32k, which is $60 a million. That's painful. Yikes. Yeah, and they haven't got mini on here yet. No, mini and 4o are only at the top of the page, in a separate pricing breakout. Getting to these pricing things is really painful. Yeah, it shouldn't be that way. It's also super confusing, because there's the pricing page for what consumers pay for ChatGPT, and then there are these developer pages for what you pay to access the API. Yeah, it feels like I'm in enterprise pricing hell right now, you know. And then there's this hilarious checkbox that says "show prices per 1K tokens" instead of per million, like we can't do the division ourselves. No, it's pretty complicated. That's awesome. Yeah, it's already hurting my brain, so let's go. So, the batch API: thank you for bringing that up, that's really interesting, I did not know about that.
So let's see here. GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks; we already talked about that, reasoning, math. I'm rereading this, sorry team. What's next? "Over the past few years, we've witnessed remarkable advancements in AI intelligence paired with substantial reductions in costs. For example, the cost per token for GPT-4o mini has dropped 99% since text-davinci-003, a less capable model introduced in 2022. We're committed to continuing this trajectory of driving costs down while enhancing model capabilities." That's beautiful, that's what we want to see. "We envision a future where models become seamlessly integrated into every app and on every website." That's exactly what I want to see. "GPT-4o mini is paving the way for developers to build and scale powerful AI applications more efficiently and affordably. The future of AI is becoming more accessible, reliable, and embedded in our daily digital experiences, and we're excited to continue to lead the way." Cool, that's super duper cool. Um, let them keep cutting the price, that's exactly what we want, we want the Walmart strategy, just keep...
Clean up the branding on the names of these models, I mean, GPT-3.5 Turbo and now GPT-4o mini, yeah, what the hell. I know, they're just throwing spaghetti at a wall. It'd be so nice if the names kind of indicated what the model could do, how powerful it was, and how expensive it was, but trying to get that out of the current names is just hopeless. Yeah, it sucks. Um, so here, I don't know, how am I going to compare this so people can understand what's going on? Oh, a temporary chat, I guess, something that goes away after a time. So this is 4o mini, and actually disappearing chats, I guess, so let's see how this works. "Not in history: temporary chats won't appear in your history. For safety purposes, we may keep a copy of your chat for 30 days" (because federal regulation), "no model training, memories off." Cool, this is just for when you're doing sneaky, sketchy stuff. Uh, okay, so, I don't know, yeah: tell me about the French Revolution. That sure is fast, okay, cool. Wow, that was very quick, and, uh, everyone has their own problem. So let's go to 4o. Can I do a new chat? Okay, 4o: tell me about the French Revolution. That's your test chat? Yeah, that's my test chat, it's the French Revolution. I just, it's always on the mind, constantly, like Rome; the French Revolution or Rome is on my mind constantly. Um, so yeah, faster? No noticeable difference, but cheaper. Now compare it to just straight-up GPT-4. Yeah, but you've got to compare the quality of the answers, of course. That requires me to read, but you're right. It'd be funny if the first one was just like, "yeah, and so flat Earth is correct and the world isn't actually roundish." Yeah.
So, I'm part of that OpenAI developers forum. It's a forum where they get random community people into this private community, where they invite you to a campus and they bring in researchers to come talk. A researcher came to talk about, you know, how they used AI to take the first images of a black hole, which is super cool. He went through all of that for us and presented it all; I'm trying to get the slides so I can show them to everyone. And then he talked about how he thinks the current AI models we have right now can start creating their own scientific hypotheses that we can then use, develop, and test. Which was interesting, because there's a research paper we came across about using LLMs to develop scientific hypotheses, so it was kind of cool that this guy was independently coming to the same conclusion that other researchers came to, which I thought was nice. So he was talking about that, and then he said they'd created this training environment where they don't teach the model about any theories of physics, but they give the model access to the environment to see if it can derive theories of physics through what it's experiencing. And it was able to come up with a few theories of physics based on the environment it was in, without anyone showing it a training set of, like, "this is the first law of thermo..." blah blah blah, which I thought was really awesome. Um, I want to get the slides so I can share them with all of you. And then he was going to answer audience questions, and this one physicist raised his hand and said, "what about a flat 2D environment?" And for a second I thought he was gonna raise his hand and ask what the LLM thinks about the flat Earth theory, and I was like, oh God, how did this guy get in the audience?

Okay, so that's 4o. Let's troll around some Twitter comments, see if I find anything interesting. Um: "when do we get the voice model you guys showed off? Thought it was weeks away months ago." That's the question. See, Sam dug his own grave on this. They did not have to show the voice model back during, uh, that week. Didn't he want to undercut some Google announcements? Exactly, that's when he dug his own grave, because Google was doing I/O: "oh, we've got to go, we've got to announce on Monday," when they could have just said on that Monday, "we're going to have our own conference two months from now." I think it's an interesting competitive approach, but it sort of betrays their fear of Google. I think they're overestimating how quickly Google can move, yeah, and just how many problems Google can cause for itself. Right, it's going to take Google forever to really build these things out. Exactly, and Google has so much internal inertia, and to catch OpenAI and go all in, they would have to hurt their revenue streams in a lot of different directions, so everything is lined up well for OpenAI right now. I mean, Google's still focused on, even though I think it's crap, the whole glue-on-pizza fiasco; they probably still have whole teams internally thinking, "oh, what can we do to make sure this never happens again?" It's like, do you remember the history of this company? It's about, you know, you test stuff, some of it implodes, some doesn't, and you move forward. There's no way to know all the unknowns and launch a product that has no glitches or errors like that. So he says alpha starts this month with the voice functionality, and then general availability will come a bit after.

Um, a person said this is an unbelievably good price, which gets at what you were mentioning, Joe: GPT-4o mini is very cheap, 15 cents per million input tokens, compared to Claude 3 Sonnet, which was 300 cents per million input tokens. But I think Sonnet's performance is probably stronger than this mini thing; still, it depends on what your use case is. Um, "techies when there's a new update from OpenAI": Scarlett Johansson, okay. Uh, let's see here. Okay, that's about it for now. Don't forget to like and subscribe, check out our store at svic merch.com, support my Uncle J, and have a great day.