What the heck happened to the Claude 3 OPUS????

1littlecoder
4 Mar 2024 · 14:20

Summary

TL;DR: Anthropic has unveiled Claude 3, aiming to surpass GPT-4 as the world's leading language model with its trio of variants: Claude 3 Haiku, Sonnet, and Opus. Despite some marketing quirks (such as a benchmark chart with no y-axis), Claude 3 shines with superior scores on challenging benchmarks. However, its high cost may deter widespread adoption. Claude 3 incorporates multimodality, offering vision capabilities alongside text, and introduces synthetic data into its training. Its models cater to a range of applications from task automation to customer support, with an emphasis on safety and ethical use. Anthropic's Claude 3 sets a new standard for AI with innovative features and strict usage guidelines, though its practicality is balanced against its premium pricing.

Takeaways

  • 🔥 Anthropic launches Claude 3, comprising three models (Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus), aiming to surpass GPT-4 and become the most intelligent model available.
  • 💰 Claude 3 models are more expensive than GPT-4, with higher costs for both input and output tokens; they ship with a 200K context window, and support for up to 1 million tokens is available for specific use cases.
  • 📈 Claude 3 Opus outperforms GPT-4 on several benchmarks, including GPQA for graduate-level reasoning, indicating superior performance on difficult language-understanding tasks.
  • 📱 The models incorporate vision capabilities, marking a step towards multimodality and allowing them to perform tasks involving both text and visual inputs.
  • 📚 Training data for Claude 3 includes a proprietary mix of publicly available information and synthetic data generated by large language models, a notable approach for enhancing model quality.
  • 🛠 Claude 3 is designed for a range of applications from task automation and research to customer support, with different models tailored to specific use cases.
  • 🚫 Certain uses of Claude models are prohibited, including political campaigning and decisions related to criminal justice, to ensure ethical application of the technology.
  • 📝 Claude models prioritize safety, aiming to be helpful, honest, and harmless, with a focus on reducing incorrect refusals and ensuring data privacy.
  • 🔧 Anthropic plans to introduce new features such as a REPL for interactive coding, highlighting ongoing development to enhance the models' functionality.
  • 💡 Claude 3's ability to flag out-of-context information during analysis showcases advanced understanding and reasoning, potentially setting new standards for AI contextual awareness.

Q & A

  • What are the three different models of Claude 3 mentioned in the script?

    -The three different models of Claude 3 mentioned are Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus.

  • How does Claude 3 Opus compare to GPT-4 in terms of performance?

    -Claude 3 Opus outperforms GPT-4 on benchmark scores, achieving 86.8% on MMLU (five-shot) versus GPT-4's 86.4%, and scoring 50.4% on the difficult GPQA graduate-level reasoning benchmark.

  • What makes Claude 3 models more expensive than GPT-4?

    -Claude 3 models are more expensive due to higher costs for both input and output tokens, with output tokens in particular being significantly more expensive than GPT-4's.

  • What unique capability do Claude 3 models have regarding token handling?

    -Claude 3 models ship with a 200K context window and are capable of handling up to 1 million tokens, a capability offered on request for specific use cases.

  • What are the primary uses for Claude 3 Opus as mentioned in the script?

    -Claude 3 Opus is primarily intended for task automation, research and development, and strategy tasks, including understanding charts and graphs.

  • How do Claude 3 models incorporate multimodality?

    -All three Claude 3 models (Opus, Sonnet, and Haiku) have vision capabilities, marking the start of multimodality for Claude models.

  • What is synthetic data and how is it used in Claude 3 models?

    -Synthetic data is data generated by a large language model in order to train another large language model. Claude 3 models are trained on a proprietary mix that includes synthetic data, publicly available information, and other sources.

  • What are the prohibited uses of Claude models as stated in the script?

    -Prohibited uses include political campaigning, lobbying, surveillance, social scoring, criminal justice decisions, law enforcement decisions, and decisions related to financing, employment, and housing.

  • What is the 'needle in a haystack' analysis mentioned in the script?

    -The 'needle in a haystack' analysis is a method of testing a model's ability to retrieve a specific piece of information from a very long document (200K tokens in this case) with high accuracy.

  • What was the most revealing information found in the entire announcement according to the script?

    -The most revealing information was Claude's ability to recognize and comment on an out-of-place sentence about pizza toppings in a document primarily about programming languages, startups, and finding work you love, suggesting advanced contextual awareness.

Outlines

00:00

🚀 Launch of Claude 3: A New Contender for the LLM Throne

Anthropic introduces Claude 3, aiming to surpass GPT-4 as the leading large language model (LLM) with its superior intelligence and capabilities. Claude 3 is available in three variants: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each differing in size and performance. The flagship model, Claude 3 Opus, outperforms GPT-4 on various benchmarks, including the challenging GPQA benchmark. However, the advanced performance comes with a higher cost for both input and output tokens. Claude 3 also introduces the possibility of handling up to 1 million tokens for specific use cases. The models are designed for diverse applications ranging from task automation to research and development, with the added innovation of vision capabilities for multimodal tasks.

05:02

🔍 Inside Claude 3's Training: Innovations and Ethical Considerations

Claude 3 distinguishes itself by incorporating synthetic data generated by LLMs into its training mix, a practice that GPT-4's own terms of service discourage. This approach, alongside a proprietary mix of data up to August 2023, aims to enhance the model's capabilities. Ethically, Claude 3 is positioned as a helpful, honest, and harmless assistant, with strict guidelines against replacing professional human roles and prohibited uses in sensitive areas like political campaigning and criminal justice. The model's training and usage policies underscore a commitment to ethical AI development and deployment.

10:03

🧠 Claude 3's Superiority and Ethical Framework

Claude 3 outshines GPT-4 in accuracy and speed, particularly in processing dense research papers and multitasking. Despite its superior performance, Claude 3 maintains a focus on safety, with reduced incorrect-refusal rates. A unique feature highlighted is Claude 3's ability to recognize out-of-context content within a task, showcasing advanced comprehension and contextual awareness. The model is available in different versions catering to varying needs, from high-end research to customer interaction, underlining its versatility and ethical approach to AI.

Keywords

💡Anthropic

Anthropic is the company behind the launch of 'Claude 3', aiming to surpass the capabilities of GPT-4 with this new model. The script suggests that Anthropic is focused on creating more intelligent, versatile, and potentially expensive models compared to its competitors. The company's approach to model development emphasizes not just performance but also multimodality, safety, and the ability to handle a vast amount of information efficiently.

💡Claude 3

Claude 3 is presented as the flagship artificial intelligence model family from Anthropic, consisting of three variants: Haiku, Sonnet, and Opus. Each variant is designed to serve different scales and types of tasks, indicating a strategic approach to cater to a wide range of applications from task automation to customer support. The differentiation between the models underscores Anthropic's ambition to offer tailored solutions across various domains.

💡GPQA

GPQA (Graduate-Level Google-Proof Q&A) is a benchmark for evaluating large language models on graduate-level reasoning questions in subjects such as physics. Claude 3's performance on GPQA, especially the Opus variant, underscores its advanced understanding and reasoning capabilities, positioning it as a highly competent model for challenging academic and intellectual tasks.

💡Synthetic Data

Synthetic data refers to artificially generated data used to train models, allowing for a broader and more controlled range of training scenarios. The script highlights that Claude 3 incorporates synthetic data in its training process, suggesting an innovative approach to improve model performance and understanding. This method indicates Anthropic's commitment to leveraging cutting-edge techniques to enhance the model's capabilities.
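
To make the idea concrete, here is a minimal sketch of what "using an LLM to generate training data for another LLM" can look like in practice. This is illustrative only, not Anthropic's actual pipeline; the seed topics, prompt, model ID, and output file are made up, and it assumes the Anthropic Python SDK with an ANTHROPIC_API_KEY in the environment.

```python
# Illustrative sketch only; not Anthropic's actual data pipeline.
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

TOPICS = ["unit conversion", "reading a bar chart", "basic SQL"]  # made-up seed topics

def generate_example(topic: str) -> dict:
    """Ask a 'teacher' model to write one question/answer pair about the topic."""
    msg = client.messages.create(
        model="claude-3-haiku-20240307",   # cheap teacher model; ID may change
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Write one short question about {topic} and a correct answer. "
                       f"Reply only as JSON with keys 'question' and 'answer'.",
        }],
    )
    # A real pipeline would validate and deduplicate; this sketch just parses.
    return json.loads(msg.content[0].text)

if __name__ == "__main__":
    # Each generated pair becomes one line of a synthetic fine-tuning dataset.
    with open("synthetic_pairs.jsonl", "w") as f:
        for topic in TOPICS:
            f.write(json.dumps(generate_example(topic)) + "\n")
```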

💡Multimodality

Multimodality in the context of Claude 3 refers to the model's ability to process and understand both text and visual inputs. This feature is highlighted as a significant advancement over other models, suggesting that Claude 3 can analyze charts, images, and possibly more complex data types. This capability points to a broader applicability in tasks requiring the integration of different data formats.
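
As a rough illustration of what a multimodal request looks like, the sketch below sends an image plus a text question to Claude 3 through the Anthropic Python SDK's Messages API. The file name and question are placeholders, and the model ID and field names are as published around launch, so verify them against the current documentation.

```python
# Minimal vision-request sketch (assumes the Anthropic Python SDK and an API key).
import base64
import anthropic

client = anthropic.Anthropic()

# "chart.png" is a placeholder for whatever chart or image you want analysed.
with open("chart.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-opus-20240229",  # model ID as published at launch; may change
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text",
             "text": "What is the average percentage difference between young adults "
                     "and elders across the G7 nations in this chart?"},
        ],
    }],
)
print(response.content[0].text)
```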

💡Prohibited Uses

The script outlines specific uses that are prohibited with Claude 3, including political campaigning and law enforcement decisions, emphasizing Anthropic's focus on ethical considerations and the responsible use of AI. This precaution indicates a proactive approach to mitigate potential misuse and harm, highlighting the company's commitment to safety and societal well-being.

💡Benchmark Performance

Benchmark performance refers to Claude 3's evaluation results on various tests designed to measure the capabilities of LLMs. The script mentions Claude 3's strong performance across benchmarks like GPQA and MBPP, illustrating its advanced analytical, reasoning, and problem-solving skills. These results signal the model's leading position in the AI landscape.

💡Token Cost

Token cost is mentioned in the context of the operational expense of using Claude 3 compared to GPT-4, highlighting that Claude 3 is more expensive, particularly in terms of input and output tokens. This detail reflects the premium value attributed to Claude 3's advanced capabilities and the potential financial consideration for users.
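
For a back-of-the-envelope feel for the price gap, the sketch below compares the cost of a single large request under the launch-time list prices as I recall them (roughly $15/$75 per million input/output tokens for Claude 3 Opus versus $10/$30 for GPT-4 Turbo). Treat these numbers as assumptions and check the providers' current pricing pages before relying on them.

```python
# Rough cost comparison for one request; prices are assumed launch-time list prices
# in USD per million tokens and may be out of date. Verify before relying on them.
PRICES = {
    "claude-3-opus": {"input": 15.00, "output": 75.00},
    "gpt-4-turbo":   {"input": 10.00, "output": 30.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 150K-token document summarised into a 2K-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 150_000, 2_000):.2f}")
# With these assumed prices, Opus comes to about $2.40 versus about $1.56 for
# GPT-4 Turbo, and the gap widens as the share of output tokens grows.
```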

💡Interactive Coding Capability

Interactive coding capability, referenced in the script as a feature to be introduced in Claude 3, suggests the model's ability to understand and generate programming code interactively. This feature would enhance Claude 3's utility in software development, debugging, and educational contexts, further expanding its application scope.
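
The script groups this upcoming REPL feature with tool use (function calling). As a rough sketch of what declaring a tool for Claude looks like, the snippet below wires up a hypothetical calculator tool using the `tools` parameter shape that Anthropic's API later documented; treat the exact fields, model ID, and availability as assumptions to check against current docs rather than a description of what shipped on launch day.

```python
# Hypothetical tool-use sketch; the `tools` parameter shape follows Anthropic's
# later-documented API and may differ from what was available at the video's date.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "calculator",                      # hypothetical tool name
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "input_schema": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    tools=tools,
    messages=[{"role": "user",
               "content": "What is the average of 8, 11, 9, 12, 10, 11 and 9?"}],
)

# If the model decides to call the tool, the response contains a tool_use block
# with the arguments it wants to pass; your code runs the tool and replies.
for block in response.content:
    if block.type == "tool_use":
        print("Model wants to call:", block.name, "with input:", block.input)
```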

💡Needle in a Haystack Analysis

This analysis technique is used to evaluate a model's ability to retrieve specific information from a large dataset. The script's mention of Claude 3 performing well in this test underscores its sophisticated search and recall capabilities, essential for research, data analysis, and information retrieval tasks.
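
For intuition, a needle-in-a-haystack check can be scripted in a few lines: hide one known sentence at a chosen depth inside a long filler document, ask the model to retrieve it, and score whether the answer contains the needle. The sketch below is a simplified, hypothetical harness (the filler text, single depth, and pass/fail check are all made up, and real evaluations sweep many depths and context lengths), again assuming the Anthropic Python SDK.

```python
# Simplified needle-in-a-haystack harness (illustrative, not the published benchmark).
import anthropic

client = anthropic.Anthropic()

NEEDLE = "The most delicious pizza topping combination is figs, prosciutto, and goat cheese."
FILLER = "Startups win by talking to users and iterating quickly. " * 2000  # long haystack

def build_haystack(depth: float) -> str:
    """Insert the needle at a fractional depth (0.0 = start, 1.0 = end) of the filler."""
    cut = int(len(FILLER) * depth)
    return FILLER[:cut] + " " + NEEDLE + " " + FILLER[cut:]

def run_trial(depth: float) -> bool:
    msg = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": build_haystack(depth)
                       + "\n\nWhat is the most delicious pizza topping combination "
                         "mentioned in the document above?",
        }],
    )
    return "figs" in msg.content[0].text.lower()  # crude pass/fail check

print("retrieved:", run_trial(depth=0.5))
```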

Highlights

Anthropic launches Claude 3 to dethrone GPT-4, aiming to become the best model on the planet.

Claude 3 comes in three different flavors: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus.

Claude 3 Opus is highlighted as the best model available as of March 4th, 2024.

Claude 3 models score higher on major benchmarks, including GPQA, indicating superior reasoning capabilities.

Claude 3 models are significantly more expensive than GPT-4, especially in terms of output tokens.

Claude 3 models boast the capability to handle up to 1 million tokens for specific use cases.

The models are designed for various applications, from task automation to customer interaction.

Claude 3 includes synthetic data in its training, leveraging a large language model to generate training data.

The training data snapshot for Claude 3 models is up to date until August 2023.

Claude 3 emphasizes safety, with guidelines to prevent misuse in sensitive areas like law and healthcare.

The models offer near-instant results for processing dense documents, significantly faster than previous versions.

Claude 3 models have vision capabilities, marking the start of multimodality in Claude models.

Claude 3 Opus outperforms Gemini 1.0 Ultra on multimodal tasks, showcasing superior performance.

Claude 3 brings a notable reduction in incorrect refusal rates, improving user interaction.

A unique feature of Claude 3 is its ability to detect out-of-place content in documents, demonstrating advanced contextual understanding.

Claude 3's anticipated interactive coding capability (a REPL) aims to enhance its utility further.

Transcripts

00:00

Anthropic launches Claude 3 to dethrone GPT-4 and become the best model on the planet. This is the best LLM we have got, and they are saying it is the most intelligent model and it can only get better from here. In this video we're going to see all the things that are great about Claude, and also, Claude being Claude from Anthropic, the things it does not do well. To start with, what is this model? It's not a single model; it comes in three different flavors. The Claude 3 family comes as Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, three different models of three different sizes. Somehow they forgot to add the y-axis to their chart. I don't know why, because this is marketing material and they don't think we care about it, but I do care; I want to know what the y-axis is. Let's say the y-axis is intelligence, measured as some benchmark score average or something. What they're saying is that Claude 3 Opus is by far the best model you could have had on this planet at this point, on March 4th, 2024. GPT-4 scored 86.4% on MMLU with a five-shot benchmark, and Claude 3 scored 86.8%. On the other benchmark, which a lot of people have said is one of the toughest benchmarks for LLMs to crack, GPQA (graduate-level reasoning, with physics and other such questions), Claude 3 Opus has scored 50.4%, while Claude 3 Sonnet has scored 40.4%, which is still better than GPT-4, and Claude 3 Haiku 33.3%.

01:47

But before you get ahead of yourself and think, wow, we have got the best model, are we going to use it every single day, let me quickly take you to a very important section and tell you that this model is going to be super expensive. In fact, it is a lot more expensive than GPT-4. So if you have been mesmerized by GPT-4, if you have loved GPT-4, and if you think Claude 3 is what you want because of these amazing scores they've got, then you have to pay a lot more money for both your input tokens and your output tokens; in fact, the output tokens are super expensive when you compare them with GPT-4, for a 200K context window. But there is a catch: the 200K context window is what they're offering natively, but they're also saying these models are capable enough to handle 1 million tokens. Taking a page out of Google Gemini 1.5 Pro's book, they're saying these models can handle up to 1 million tokens, and if you have a specific use case you can reach out to them and they will give it to you. But they did not mention how to reach out to them, which is a funny thing.

02:55

So how do you use these models? Claude 3 Opus, they're saying, is primarily for task automation, research and development, and strategy, like when you want to understand charts and graphs. Now at this point you might be thinking, how do I analyze charts with just text? And that is exactly where we have the next segue, because this is not only a text-based model. Just like every other model we have got, GPT-4 and Google Gemini, we have a vision model here too, so the start of multimodality has begun with the Claude models. All three models, Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku, are capable of vision, and you can see the model is pretty good when you compare it with Gemini 1.0 Ultra. This does not compare against Gemini 1.5; it compares against Gemini 1.0 Ultra, and still Claude 3 Opus is better on MMMU, document Q&A, MathVista, and all the other tasks like chart understanding. The Claude model is doing much better than the existing large language models, whether it is GPT-4 or Gemini 1.0 Ultra.

04:06

The other important thing about the Claude models: according to them, the smallest model, in this case Claude 3 Haiku, gives near-instant results. What they're saying is that if you have a dense research paper of, say, 10,000 tokens on arXiv, it can handle that in less than 3 seconds; in less than 3 seconds it will be able to process 10,000 tokens, and for the majority of workloads it is two times faster than Claude 2 and Claude 2.1. And, as you can see, a larger model will take more time. So the way they are positioning the three models is: take the best model if you want to do strategy, R&D, and task automation; take the second-best model if you want to do data processing, sales, and time-saving tasks like code generation; and take their cheapest model if you want to do customer support, customer interaction, content moderation, or anything else you want to do. So this is their offering, and they've gone into a lot of detail on it.

05:02

But I want to take you to the model card, which has a lot of interesting information that I want to highlight one by one. The very first thing: if you look at the training data of this model (I'll come to the weird part later on), you can very well see that it has synthetic data in it. So what is synthetic data? Synthetic data is where you use a large language model to generate training data to train another large language model. This is not exactly encouraged by GPT-4's terms of service, but Claude says here that Claude 3 models are trained on a proprietary mix of publicly available information from the internet as of August 2023. So the snapshot of the Claude models, the three models we have got today, is up to date until August 2023.

05:52

Other than this, they also have data labeling services providing data to them, paid contractors giving data to them, and data they generated internally. This is the part where it says they have used synthetic data, which is a big deal, because one thing people always say is that the better the data you've got, the better the model you're going to get. And how do you get good data? One way, of course, is to be as rich as companies like OpenAI and Anthropic and hire a company like, say, Scale AI, or pay a bunch of money to developing countries and they'll label it for you. But if you don't want to do that, one of the other ways is to use a large language model to generate synthetic data, which seems to be what Claude has done here, while also ensuring that, you know, your data is not being used, and all the other things. This is very important information: one, Claude can generate anything up until August 2023, so it has that knowledge, and two, it has used synthetic data in the model training process.

06:55

The weirdest thing I wanted to quickly highlight before we move on to the next section is that they have said this model is supposed to be a helpful, honest, and harmless assistant, which is kind of okay; I understand this is how you want their AI to be, because they're more focused on the safety aspect. But what you cannot do is use Claude models to replace a lawyer. You can support a lawyer, you can support a doctor, but they should not be deployed instead of one. So you cannot replace a lawyer; that is an unintended use. In fact, there are prohibited uses. What are the prohibited uses? You should not use it for political campaigning, lobbying, surveillance, social scoring, criminal justice decisions, law enforcement, or decisions related to financing, employment, and housing. And if you do it again and again, you might get your Claude access terminated. So, something to keep in mind: if you want to lose your Claude access, the easiest way is to go ask it, who should I hire, should I invest in this stock or not, should I buy this house or not. Ask all these questions and very soon you will have your Claude account blocked.

08:05

But other than that, I think this is a really good model. They've gone into a lot of detail; for every benchmark they've mentioned the setup, okay, a five-shot score, a five-shot with chain-of-thought score, and, for example, where they have used majority voting you can see the score with majority voting at 32 for a few-shot setting. And you can see this model being really good at a lot of different tasks, whether it is HumanEval, which is the coding evaluation task, or MBPP, which is a Python-related task, where it scored 86.4. For context, on the same MBPP, Mistral Large scored 73; so where Mistral scored 73, Claude's largest model scored 86, and in fact their smallest model scored somewhere around 79 or 80. This shows how far their models have come and how good they are out of the box. The model seems to be good with medical questions, it seems to be good with common-sense reasoning, and it is definitely good with high-school and grade-school math. So overall, this is an impressive model.

09:15

In terms of multimodality, this is a question they've given as an example: what is the average percentage difference between young adults and elders for the G7 nations? If you ask a human being like me, it will take a certain amount of time: first I need to look at the chart, identify which countries are the G7 nations, then go read the percentages, then do the addition and calculate the average. That is technically how I would do it as a human being, and it would take a little bit of time. For the same question, Claude 3 Opus gave the answer step by step: identify the G7 countries, add up the differences, and divide by the total, because that's how you calculate an arithmetic average. The answer is 10%.

10:03

I did the same test with ChatGPT, or GPT-4 to be honest, and GPT-4 did a pretty good job except for one mistake. First of all, it gives you an answer that seems plausible, then you start wondering how it got 10.28 instead of 10, and that is where the trick is: GPT-4 misidentified one value, taking it as nine instead of eight. It could be because of the low-resolution image I gave it, because I copied and pasted it there, or it could genuinely be because GPT-4 got confused. The other thing is that GPT-4 here uses a combination of the LLM plus a coding and analytics capability, like Advanced Data Analysis, which I don't think Claude does at this point, even though they have mentioned very clearly that one of the things they are going to do soon is introduce a REPL, the interactive coding capability, along with tool use, also known as function calling; these new features are coming to the current models.

11:03

Without going into much more detail, one of the things they are saying, and honestly there are a lot of memes about this, because Claude is known for trying to be a super safe model, is that the incorrect refusals have gone down tremendously. You can see the refusal rate for Claude 2.1, and for Claude 3 Opus, Sonnet, and Haiku the refusal rate goes down.

11:32

Before I close the video, I wanted to highlight one very interesting thing, something you wouldn't have expected at all, and let me know in the comment section what you feel about this. Listen to me: this is a very popular analysis, the needle-in-a-haystack analysis, where you give the model a really long document, in this case 200K tokens, and then you try to find something. You put a needle in the haystack, you try to find it, and then you map the results, conditional-formatting heat-map style, to see where the needle was and how accurately the model retrieved it. This is a recall or retrieval kind of analysis, to say: if you don't use RAG, if everything is inside the prompt, in context, how good is the model at retrieving it? That's well and good, and like Gemini 1.5 Pro, Claude is doing a pretty good job for 200K. We don't know what it looks like for 1 million, but for 200K this is a pretty good job, where you're getting more than 99% accuracy. That's not the weirdest part.

12:33

The weirdest part is something you're going to see right now. So what is the weirdest part? When they have done the needle-in-a-haystack analysis, you sometimes ask a question about something that is not part of the training data, right, or not part of the context; that's how you test it. When they asked the question, Claude answered: "Here is the most relevant sentence in the documents: the most delicious pizza topping combination is figs, prosciutto, and goat cheese," blah blah blah, "however, this sentence seems very out of place and unrelated to the rest of the content in the documents." So if you have to do needle-in-a-haystack, you need to first put that sentence somewhere in the context and then ask a question to retrieve it, right? That's how you're going to do it; it's almost like hide and seek. What Claude has figured out is that what you're hiding here is completely out of place. It says this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love, and "I suspect this pizza topping fact may have been inserted as a joke or to test if I was paying attention." Wow. Seriously, I mean, seriously. It continues that the sentence does not fit with the other topics at all, and the documents do not contain any other information about pizza toppings. For me, in this entire announcement, this is the most revealing piece of information. I don't know what the implications of it are, but I would like to hear from you: what do you think about it?

14:06

But otherwise, you can go to claude.ai and experience the smaller-size model, which in this case is Claude 3 Haiku, and if you have Pro access you can try Claude 3 Opus. Let me know in the comment section what you feel about it. See you in another video. Happy prompting!