Claude 3.5 Sonnet vs GPT-4o: Side-by-Side Tests
Summary
TL;DR: In a head-to-head comparison, the video evaluates Claude 3.5 Sonnet against GPT-4o across various tasks, including creative writing, image description, coding, sentiment analysis, and conversational skills. Claude 3.5 Sonnet demonstrates superiority in creative writing and coding challenges, while GPT-4o excels in question answering and image generation. The final verdict leans towards Claude 3.5 Sonnet for its nuanced responses and speed: the narrator plans to shift coding tasks and API usage to it, while keeping ChatGPT for daily chats thanks to its integrated features.
Takeaways
- 🧠 Claude 3.5 Sonnet is highly intelligent, scoring close to domain experts in advanced reasoning tests.
- 💻 It excels in coding tasks, outperforming previous models like GPT-4o and Claude Opus on coding benchmarks.
- 👀 Claude 3.5 Sonnet has state-of-the-art vision capabilities, leading in multiple vision benchmarks.
- 📝 Anthropic's new 'Artifacts' feature allows for interactive content generation, enhancing the user experience.
- ⚡ The model is remarkably fast, generating text at around 80 tokens per second.
- 📚 In creative writing, Claude 3.5 Sonnet produced more engaging and emotionally resonant stories than GPT-4o.
- 🎨 For poetry, Claude 3.5 Sonnet again outperformed GPT-4o with a shorter but more impactful poem.
- 🐉 In dialogue creation, Claude 3.5 Sonnet wrote a more realistic and engaging conversation between a dragon and a knight.
- 🖼️ Both models were accurate in basic image description tasks, but Claude 3.5 Sonnet provided more detail.
- 🔍 In the coding challenges, Claude 3.5 Sonnet's responsive navigation bar was more effective and visually appealing.
- 🤖 The models performed similarly on simple sentiment analysis, but GPT-4o was more accurate on the complex sentences.
Q & A
What is the main purpose of the video script?
-The main purpose of the video script is to compare the performance of two AI models, Claude 3.5 Sonnet and GPT-4o, across various tasks and benchmarks.
What are the five highlights of Claude 3.5 Sonnet mentioned in the script?
-The five highlights of Claude 3.5 Sonnet are its advanced reasoning capabilities, coding proficiency, state-of-the-art vision capabilities, the new 'Artifacts' feature for content generation, and its fast text generation speed.
How does Claude 3.5 Sonnet perform on the graduate-level reasoning benchmark?
-Claude 3.5 Sonnet performs close to the average domain expert, scoring significantly higher than the average non-expert on the graduate-level reasoning benchmark.
What is the significance of the coding benchmark mentioned in the script?
-The coding benchmark is significant because it measures the AI's ability to solve programming problems, with Claude 3.5 Sonnet outperforming GPT-4o in this area according to the benchmarks cited.
What is the 'Artifacts' feature in Claude 3.5 Sonnet and how does it work?
-The 'Artifacts' feature in Claude 3.5 Sonnet allows generated content such as code snippets or text documents to appear with interactive elements. For example, if it generates HTML or JavaScript, the code can be run live within the editor, providing a dynamic preview of the work.
How does the video script compare the text generation speed of Claude 3.5 Sonnet and GPT-4o?
-The script states that Claude 3.5 Sonnet generates text at around 80 tokens per second, which is faster than GPT-4o and significantly faster than Claude Opus.
What is the format of the head-to-head tests between Claude 3.5 Sonnet and GPT-4o?
-The head-to-head tests involve giving both models the same prompt and evaluating their responses against subjective criteria, with points awarded to the winner of each test.
Which creative writing tasks were used to test the AI models in the script?
-The creative writing tasks included writing a flash fiction story about a time-traveling bunny detective and creating a poem about a rainy day.
How did Claude 3.5 Sonnet perform in the image description tests?
-Claude 3.5 Sonnet performed well in the image description tests, providing accurate descriptions that were often more detailed than GPT-4o's.
What was the outcome of the coding tests between Claude 3.5 Sonnet and GPT-4o?
-Claude 3.5 Sonnet was found to be superior in the coding tests, particularly in creating a responsive navigation bar and a countdown timer, thanks to its 'Artifacts' feature and cleaner code.
How did the video script evaluate the conversational skills of the AI models?
-The conversational skills were evaluated by having a back-and-forth conversation with each model, looking for empathy, context maintenance, and natural language use, with Claude 3.5 Sonnet being the preferred model in this category.
What was the final tally of points between Claude 3.5 Sonnet and GPT-4o after all tests?
-The final tally was six points for GPT-4o and eight points for Claude 3.5 Sonnet.
What changes does the author intend to make in their use of the AI models after the tests?
-The author plans to switch all coding tasks to Claude 3.5 Sonnet, likely switch the majority of their company's API usage to Claude 3.5 Sonnet, and continue using ChatGPT for day-to-day tasks due to its additional features like custom GPTs, internet search, image generation, and voice chat.
Outlines
🧠 Claude 3.5 Sonnet vs. GPT-4o: Benchmarks and Features
The script introduces a comparison between Claude 3.5 Sonnet and GPT-4o, highlighting the superior performance of Claude 3.5 Sonnet in various benchmarks. It emphasizes the model's advanced reasoning capabilities, coding proficiency, vision capabilities, new 'Artifacts' feature for interactive content generation, and its fast response rate. The video then puts both models through head-to-head challenges across different categories.
📚 Creative Writing and Image Description Tests
This section details the first few tests conducted in the video: creative writing, including flash fiction and poetry, and image description. Claude 3.5 Sonnet outperforms GPT-4o in creative writing by providing more engaging and emotionally compelling content. Both models accurately describe an easy image, but the humor and complexity of subsequent image tests begin to challenge their capabilities.
💻 Coding Tests and Interactive Features
The script moves on to coding challenges, where Claude 3.5 Sonnet demonstrates its prowess by successfully creating a responsive navigation bar using HTML, CSS, and JavaScript, showcasing the 'Artifacts' feature for live interaction. GPT-4o also produces functional code, but with less elegance. Further tests include a JavaScript countdown timer and a Python web scraper, with both models performing well apart from some minor issues.
🎲 Sentiment Analysis and Question Answering
The video then tests sentiment analysis and question answering. Both models perform well on simple sentiment analysis, but GPT-4o shows a slight edge in understanding complex sentiments. In the rapid-fire question segment, GPT-4o scores more points by answering fact-based questions more accurately, despite Claude 3.5 Sonnet's commendable habit of declining to answer when unsure.
🤖 Conversational Skills and Summarization
The final part focuses on conversational skills, where Claude 3.5 Sonnet demonstrates more empathy and natural interaction, effectively cheering up the user. In the summarization test, GPT-4o initially provides a more comprehensive summary of a dense article, but both models perform similarly when summarizing a research paper on Transformers. The script concludes with the presenter's decision to switch to Claude 3.5 Sonnet for coding tasks and company API usage, while continuing to use ChatGPT for day-to-day chats due to its integrated custom features.
Mindmap
Keywords
💡Benchmarks
💡Coding
💡Vision Capabilities
💡Artifacts
💡Flash Fiction
💡Sentiment Analysis
💡Image Generation
💡Conversational Skills
💡Summarization
💡API Usage
Highlights
Claude 3.5 Sonnet outperforms GPT-4o in almost every benchmark, suggesting superior performance on various tasks.
Claude 3.5 Sonnet achieves scores close to domain experts in graduate-level reasoning, a significant achievement in AI.
On the Aider coding benchmark, Claude 3.5 Sonnet shows a massive improvement over previous models, completing 78.2% of problems correctly.
Claude 3.5 Sonnet claims state-of-the-art performance in four of the five vision benchmarks presented.
Anthropic introduces a new feature called 'Artifacts' that allows real-time interaction with generated content like code snippets.
Claude 3.5 Sonnet is exceptionally fast, generating text at around 80 tokens per second.
Head-to-head tests evaluate the models' performance on creative writing, coding, image description, and more.
Claude 3.5 Sonnet demonstrates compelling storytelling in flash fiction writing, outperforming GPT-4o.
In poetry creation, Claude 3.5 Sonnet's concise eight-line poem is favored over GPT-4o's longer, generic piece.
Claude 3.5 Sonnet provides more believable and engaging dialogue in a creative writing test between a dragon and a knight.
GPT-4o and Claude 3.5 Sonnet both accurately describe an image of Obama, with Sonnet providing more detail.
In humor understanding, GPT-4o outperforms Claude 3.5 Sonnet in explaining why an image is funny.
Both models excel at describing a complex biology diagram, with no significant difference in performance.
Claude 3.5 Sonnet's 'Artifacts' feature allows interactive testing of the generated HTML/CSS code.
GPT-4o's responsive navigation bar code is functional but less aesthetically pleasing than Claude 3.5 Sonnet's.
In the JavaScript coding test, both models produce working countdown timers, with minor timing inaccuracies.
Claude 3.5 Sonnet and GPT-4o both successfully scrape headlines from a given website, with no clear winner.
GPT-4o's Pong game adds a second player, a nice touch over Claude 3.5 Sonnet's single-player version, though no points are awarded.
GPT-4o performs better in sentiment analysis of complex sentences, providing more accurate descriptions.
In a rapid-fire question round, GPT-4o demonstrates an edge in answering fact-based questions.
Claude 3.5 Sonnet shows superior conversational skills, providing more empathetic and natural responses.
GPT-4o provides a more detailed summary of the Transformer research paper, though its article summary runs longer than requested.
The final tally shows Claude 3.5 Sonnet with eight points and GPT-4o with six, indicating a close competition.
The video concludes with the decision to switch coding tasks to Claude 3.5 Sonnet while continuing to use ChatGPT day-to-day for its additional features.
Transcripts
Claude 3.5 Sonnet is better in almost every benchmark than OpenAI's GPT-4o. That means it should perform better on any question we ask it, right? Well, let's find out. We're going to run some head-to-head tests where we give each model the same prompt and see which is better. But before we get into that, let's look at the highlights of Claude 3.5 Sonnet and the benchmarks comparing it to other models like GPT-4o.

With this release there are five highlights I want to look at. First, let's talk about how smart it is. Claude 3.5 Sonnet is a beast in the benchmarks; it claims to surpass pretty much every other model on basically everything. Benchmarks of course have their flaws, but the one I trust the most is graduate-level reasoning. This is a very advanced test written by PhDs in their respective fields. When given to domain experts, the average score was 65%, and the average non-expert got 34%. So Claude 3.5 is closing in on the average domain expert across all fields. Absolutely mind-blowing.

Second, it is really good at coding. Anthropic did their own internal testing and showed that Claude 3.5 Sonnet completed 64% of problems, compared to Opus, which only completed 38%. That is a massive improvement considering Opus was state-of-the-art a few months back. A coding benchmark I trust more than their internal one, however, is run by the developer of one of the best large language model coding tools, Aider. It shows that Claude 3.5 Sonnet just leapfrogged GPT-4o and is completing 78.2% of the problems correctly, while GPT-4o is at 72.9%. That is a big jump, because the higher the percentage gets, the more difficult the remaining problems are.

Third, it is state-of-the-art for vision capabilities: Claude 3.5 Sonnet claims state-of-the-art in four of the five presented benchmarks. I haven't dug into the vision benchmarks too much, so it's tough to know which are quality and which aren't, but either way those are some massive jumps.

Fourth, Anthropic announced a new feature called Artifacts. When the model generates content like code snippets or text documents, a window appears on the side and gets filled with that content. If it's HTML or JavaScript, it actually gets run, so you can see it working live. For instance, if you want to create a game with Sonnet, you can do it right in the editor, and it'll pop up so you can play it. To me it feels a little bit like a toy at the moment, but I imagine as the models improve it can become really powerful.

And finally, this thing is just fast. Claude 3.5 Sonnet responds at around 80 tokens per second, which is lightning fast: a little faster than GPT-4o and way faster than Claude Opus.

All right, now it's time for the showdown: side-by-side tests between Claude 3.5 Sonnet and GPT-4o. I'll present each model with the same prompt and evaluate their responses. For each test I'll choose a winner based on my somewhat subjective criteria and award points to the winner. I'm the sheriff of this YouTube channel, so whatever I say is best is best, and if I am challenged in the comments I will defend my choices vigorously. We have eight topics to cover, with multiple tests for each, so let's get started.
First up, let's look at creative writing. People have always claimed Claude to be better here, but let's see for ourselves. I tried to be a sci-fi writer once and was told a good place to start is flash fiction: extra-short stories, less than 750 words but often much shorter. It can be really hard to deliver an emotionally engaging story in so few words, so let's see if either of these AIs is up for the task; they will undoubtedly be better than me. Let's try this prompt: write a flash fiction story about a time-traveling detective, and keep it to 200 words so it's easier to compare. Actually, let's make it a time-traveling bunny detective. All right, we'll run it in both and see what happens.

I'll post a link where you can read these side by side, but I'm going to take a second to read each of them, and then we'll award a point. Okay, I just read through, and there is an obvious clear winner in Claude 3.5 Sonnet. GPT-4o pretty much went "this happened, then this happened, then this happened": no emotion, no dialogue, just boring. Claude 3.5 Sonnet, on the other hand, starts a really compelling story, and at the end I wanted to read more. Clear winner: Claude 3.5 Sonnet.

The next creative writing thing I want to test is poetry, so let's do a simple one: create a poem about a rainy day. Again, I'll post a link so you can read them side by side; give me a quick second to read through them. All right, the clear distinction is that GPT-4o wrote a much longer poem, and even with all that extra length it's kind of boring and generic. Claude 3.5 Sonnet's was only eight lines, but I can't even really put into words why I liked it so much better. Again, it's my subjective take, but another point for Claude 3.5 Sonnet.

On to our third test. Another difficult aspect of fiction writing is creating realistic, believable dialogue, so let's see if these two models can do it. I was thinking of the prompt: create a dialogue between a dragon and a knight. Last time I'm going to say this, but there'll be a link so you can compare the two; give me a chance to read them. Okay, I read through, and again there's an obvious winner in Claude 3.5 Sonnet: much more believable dialogue, a much more engaging story. Clear winner.

Round two: image description. I'm going to feed some images in and ask the models questions based on each image. The images and questions will get harder and harder, and we'll see how they do. For the first one, I'll show them this image right here and just ask them to describe what they see. They both got it right; I guess this one was a little too easy. The only real difference is that Claude 3.5 Sonnet was much more detailed, but GPT-4o covered pretty much all the same points, so no points.

Next up I wanted to try something more difficult. This image shows Obama putting his foot on a scale, and I'm going to ask the models why it's funny. This specific image has been discussed before with regard to AI: it's pretty difficult for AI to understand humor, and even more so here because the model has to understand physics and a bunch of other things. Let's try it out. Looking at the responses, GPT-4o did understand that it's funny because Obama is pranking the guy weighing himself. Claude 3.5 Sonnet thought it's funny because the president is normally stoic and everyone is wearing suits in a locker room, but it missed the main joke. So that's a point for GPT-4o.

For the third image test, I want to give them a diagram and see if they get it right. Here is a pretty complex diagram I found on the internet; it's basically trying to map the flow of an enzyme structure. I don't know, it's a very complex biology diagram. As far as I can tell, they both got everything; I haven't found a single missing piece of information in either response. No points awarded here; they both did great.
Now we'll test coding. I'm going to ask the models to code some things, then run whatever code they spit out without modifying a thing, and we'll see if any of it works. I've given many, many coding interviews over the years, and most of these questions are simplified versions of what I might ask a human programmer.

For the first test we'll do a basic HTML/CSS task: create HTML and CSS code for a responsive navigation bar. This isn't the easiest CSS task, nor is it the hardest, so let's see how it goes. Okay, before we even dive in, there are some things I want to bring up. First, Claude 3.5 Sonnet used its Artifacts feature, and we can play with the navigation right here, which is actually really slick. The other thing is that GPT-4o used JavaScript, which is really annoying because I only asked for HTML and CSS. Either way, I'm going to run this code now and we'll see what happens. Oh, looking at the code now, Claude 3.5 Sonnet also used JavaScript. Okay, we're even there.

Here is the web page that Claude 3.5 Sonnet built. As you can see, it looks pretty good, and the links all work as expected. I'm going to open the debug menu so we can shrink the page and see if the responsive part works; it should switch to a mobile view. Yes, it did. Great. And if I click this: awesome, that looks pretty dang good, and it even has some animation effects. I'm impressed.

All right, now let's check GPT-4o. Here's the web page GPT-4o built. The links all work as expected, and it looks pretty similar, to be honest. Let's open the debug menu so we can shrink it. Okay, when I shrank it, the hamburger menu popped up and the header grew a little; that's okay. Let's see if opening it works. Okay, that's pretty funky: it definitely does not look as good as the drawer sliding out. This thing pops way over to the side, and the links disappear again when you make the page big. It's just not as good. I can't believe I'm saying this, but GPT-4o lost. Wow.

For the next coding test I was thinking we could try some JavaScript. The prompt: generate a JavaScript function to create a countdown timer that updates every second, starting at 10 seconds. When I was a junior engineer I actually had to build this, and there are a lot of gotchas, so let's see if these two are up for the task. I'm going to paste Claude's JavaScript in right here, and you should see the console update with the timer. All right, it worked. The code does have one issue: it's not exactly one second between ticks; it's probably closer to a second plus a couple of milliseconds. That's an easy mistake to make, and I think the majority of software engineers would make the same one, so it's okay. Now let's check out GPT-4o's solution. It used slightly different code, but as far as I can tell it's basically doing the same thing, and it worked also. Let me take a quick look at the code and see if it's up to snuff.
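Neither model's actual timer code is reproduced here, but the gotcha called out above — ticks landing a few milliseconds past one second and drifting over time — comes from sleeping a fixed interval in a loop. A drift-free version schedules each tick against a monotonic clock instead. A minimal Python sketch (the function name and parameters are illustrative, not from the video):

```python
import time

def countdown(start_seconds, tick_seconds=1.0, on_tick=print):
    """Count down from start_seconds to 0, firing on_tick once per tick.

    Sleeping a fixed tick_seconds each iteration drifts, because every
    iteration also spends time doing work. Instead, compute each tick's
    deadline from the start time, so small delays never accumulate.
    """
    ticks = []
    t0 = time.monotonic()
    for remaining in range(start_seconds, -1, -1):
        on_tick(remaining)
        ticks.append(remaining)
        if remaining > 0:
            # Sleep until the *scheduled* next deadline, not a fixed interval.
            next_deadline = t0 + (start_seconds - remaining + 1) * tick_seconds
            time.sleep(max(0.0, next_deadline - time.monotonic()))
    return ticks
```

Calling `countdown(10)` prints 10 down to 0 at one-second intervals; because the deadline is recomputed from `t0` each tick, the total run stays within a few milliseconds of 10 seconds regardless of per-tick overhead.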
Okay, I just took a look at the code, and it works, but there's some funky stuff I would call out in a code review. Without getting into too many details, there are confusing bits: this seconds variable isn't even needed, and timer and duration are identical, which is a little confusing. So even though they both work, I much prefer Claude 3.5 Sonnet's version, and I'm giving it the point.

For the next coding test I want to see how well they can build a scraper in Python. Here's the prompt: write a Python script to scrape all the headlines from pokemondb.net; each headline is in a link inside an h2 element. So I'm giving all the context the models would need. It's a fairly straightforward task, but it involves a lot of pieces, so let's see how it does.
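The models' scripts aren't shown verbatim, but the task as prompted — collect the text of every link nested inside an h2 — can be sketched with just the Python standard library (the class and function names below are my own, and the fetch step is left as a comment since it needs network access):

```python
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collect the text of every <a> element nested inside an <h2>."""

    def __init__(self):
        super().__init__()
        self.in_h2 = False      # are we currently inside an <h2>?
        self.in_link = False    # ... and inside an <a> within that <h2>?
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True
        elif tag == "a" and self.in_h2:
            self.in_link = True
            self.headlines.append("")  # start a new headline buffer

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False
        elif tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.headlines[-1] += data

def scrape_headlines(html):
    parser = HeadlineParser()
    parser.feed(html)
    return [h.strip() for h in parser.headlines]

# Fetching the live page would look something like (not run here):
#   import urllib.request
#   html = urllib.request.urlopen("https://pokemondb.net").read().decode()
#   print(scrape_headlines(html))
```

Links outside an h2 are ignored, which matches the structure hint given in the prompt; a real script would also want error handling around the HTTP request.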
Here are the results from each of their scripts. They both got all the headlines, so the scraping worked. Now I'll look at the code and see if either really stands out. Taking a look, they both seem pretty much the same; I wouldn't prefer one over the other, so no points on this one.

All right, last test for the coding round. I thought it'd be fun to see if, in just one shot, either of these models can create a working Pong game. Let's try it out. Here's what we got from Claude. It used a Python library that did almost all of the heavy lifting, so it was probably just reproducing what it had seen online a million times, but it works; nothing much to it. Now let's check out GPT-4o's. Seems pretty similar, okay, but there is no... oh, it's two-player! I see: Claude's version had some kind of weak AI opponent, while GPT-4o made a second human player, which is pretty sweet. Looking at the code, I wouldn't say I like one more than the other, so again it's a toss-up: no points.

Round four: sentiment analysis. For this one I'm going to give them some sentences and ask them to analyze the sentiment in three words. The first one was really easy and they both did great; no points here. Next up is a sentence that's a little harder to parse: "I thought the movie would be terrible, but surprisingly I ended up loving it despite its flaws." Overall positive, but with some negatives mixed in that might trip them up. GPT-4o said "pleasantly surprised, positive," which is right. Claude said "initially negative, ultimately positive," which is also right and, I would say, actually a better description. But that's four words, and I said three. So the sentiment analysis was good, but Claude loses the point for the extra word.

Next, probably the hardest sentence for them to analyze: "Despite the phone's sleek design and impressive camera quality, the inconsistent software updates and battery life issues ultimately overshadowed my initial excitement." GPT-4o said "disappointed, critical, frustrated," which is pretty spot-on. Claude said "disappointed but balanced," which is a little weird, so I'm going with GPT-4o again.

On to round five: question answering. I'm going to rapid-fire six questions at each model and split the three points depending on which model was better. The questions are mostly fact-based, so the answers are either right or wrong. First, I asked my wife, who is a therapist, for a random fact she knows, and she gave me one about her favorite celebrity therapist: what year did Esther Perel get married? The correct answer is 1985. Wow, okay: GPT-4o said 1982, which is wrong, and Claude 3.5 Sonnet said it doesn't know the answer. I definitely prefer a model that admits it doesn't know, so that's heavily weighted towards Claude 3.5 Sonnet.

Next: who was the 11th person to walk on the moon? The right answer is Gene Cernan, and neither of them got it. GPT-4o said Charles Duke, who was the 10th person to walk on the moon, and Claude 3.5 Sonnet said Alan Bean, who was the fourth. Disappointing. Let's try a slightly easier one: which country has the most pyramids? The answer is Sudan, and both GPT-4o and Claude 3.5 Sonnet got it right. Cool.

Here's a trickier one: do limes float or sink? The right answer is that limes sink. GPT-4o got it right and Claude got it wrong. Interesting: that's two that GPT-4o has gotten right that Claude has gotten wrong. Now, this one is a little ambiguous because it can be taken a few ways: what is the world's smallest mammal? The answer I'm looking for is the bumblebee bat, though that's by size, not by weight. They both got it right, but GPT-4o added that by length the shrew it mentioned is actually smaller, which is honestly a better answer, so another point for GPT-4o, I think.

The last one is a random fact about countries and GDP: which country had the fifth-highest GDP in 2018? The correct answer is Germany. GPT-4o said the United Kingdom, and Claude 3.5 Sonnet said the United Kingdom: both wrong. It was pretty clear that GPT-4o was better at these types of facts, so I'm giving two points to GPT-4o for this whole category.

One thing I want to bring up about this category: I think this is the absolute worst way to use large language models. They are not fact machines; the best way to use them is more like a reasoning engine.
So if I gave a model tons and tons of data and then asked questions about that data, that would be using it more like a reasoning engine. I just thought the category was worth testing, because a lot of people use large language models this way, even though I think it's the wrong way to use them.

Round six: image generation. This one is a bit of a red herring: Anthropic doesn't have any image models, and with ChatGPT I use DALL·E quite a lot just because it's integrated and so easy. So for this category, one extra point for GPT-4o.

Next up: conversational skills. Here we test how well each model can hold a natural conversation, maintain context, and just feel like a real person. The prompt I had in mind: "I'm feeling a bit down today, can you cheer me up?" With this test I'm looking for responses that show empathy, remember details from previous messages, feel natural, and ultimately cheer me up. So I'm just going to have a conversation with each; I'll post the transcripts in the description, and at the end I'll tell you my findings. After some quick back-and-forth, Claude 3.5 Sonnet is very clearly the winner. It's much more empathetic and natural-sounding; it tries to hear what I'm saying and cheer me up a little. GPT-4o, on the other hand, leans on lists and, right out of the gate, was like "here are eight ways to feel better" rather than listening. It just didn't feel like a human, and it didn't feel good. All three points go to Claude on this one.

And the final round: summarization. I'm going to give them some dense articles and see how well each model summarizes them. I'll start with a really long, dense article about charging electric vehicles. After looking at the two summaries, GPT-4o's was much better, but it ran well past the 300 words I asked for, so I don't really want to award any points. GPT-4o hit every single point in the article, and Claude 3.5 Sonnet missed a lot, but no points here. The next thing I want to test is a research paper I'm quite familiar with: the foundational paper on Transformers, the architecture both Claude and GPT-4o are based on. Here we go. They both finished; let me take a quick second to review and make sure they got everything. I've reviewed them both, and personally I slightly preferred GPT-4o's version: it goes into more depth and nuance, while Claude 3.5 Sonnet's is a little more high-level. The mistake was probably mine for not specifying what kind of summary I wanted. They did about the same, really, so I'm not awarding any points here either.

The final tally is six points for GPT-4o and eight points for Claude 3.5 Sonnet. Honestly, much closer than I expected. Now, what does this mean for which model you should use? Well, there are a few changes I'm going to make in how I use these models. First, I'm going to immediately switch all my coding tasks to Claude 3.5 Sonnet; I'm just blown away by how much better it is there. Second, I'm likely going to switch the majority of my company's API usage to Claude 3.5 Sonnet: not only is it cheaper, it seems to have more nuance. I'll of course need to run specific tests for our use cases, but I think it's going to perform pretty well. Third, and this one might be a surprise, I'll probably continue using ChatGPT for my day-to-day. Why, you might ask? Well, ChatGPT has all of my custom GPTs, internet search, a pretty good image generator, and voice chat, and I use all of those enough that I don't think it's quite worth switching yet.

If you enjoyed this video, consider subscribing for more videos like this in the future, and you might be interested in this video right here.