Has Generative AI Already Peaked? - Computerphile

Computerphile
9 May 2024 (12:47)

Summary

TL;DR: The video discusses the limitations of generative AI and the concept of CLIP embeddings, which are trained to match images with text descriptions. It challenges the notion that simply adding more data and bigger models will inevitably lead to general intelligence. The paper referenced suggests that the amount of data required for zero-shot performance on new tasks is astronomically high, implying a plateau in AI capabilities without new strategies or representations. The video also touches on the imbalance in data representation and its impact on AI performance across tasks.

Takeaways

  • 🧠 The script discusses the concept of CLIP (Contrastive Language-Image Pre-training) embeddings, which are representations learned from pairing images with text to understand and generate content.
  • 🔮 There's an ongoing debate about whether adding more data and bigger models will eventually lead to general intelligence in AI, with some tech companies promoting this idea for product sales.
  • 👨‍🔬 The speaker, as a scientist, emphasizes the importance of experimental evidence over hypotheses about AI's future capabilities and challenges the idea of AI's inevitable upward trajectory.
  • 📊 The paper mentioned in the script argues against the notion that more data and larger models will solve all AI challenges, suggesting that the amount of data needed for general zero-shot performance is unattainably large.
  • 📈 The paper presents data suggesting that performance gains in AI tasks may plateau despite increasing data, implying a limit to how effective current AI models can become.
  • 📚 The script highlights the importance of data representation, mentioning that over-represented concepts like 'cats' perform better in AI models than under-represented ones like 'specific tree species'.
  • 🌐 The discussion touches on downstream tasks enabled by CLIP embeddings, such as classification and recommendation systems, which can be used in services like Netflix or Spotify.
  • 📉 The paper's findings indicate a potential logarithmic relationship between data amount and performance, suggesting diminishing returns on investment in data and model size.
  • 🚧 The speaker suggests that for difficult tasks with under-represented data, current AI strategies may not suffice and alternative approaches may be necessary.
  • 🌳 The script uses the example of identifying specific tree species to illustrate the challenge of applying AI to complex, nuanced problems with limited data.
  • 🔑 The paper and the speaker both point to the uneven distribution of data as a significant barrier to achieving high performance across all potential AI tasks.

Q & A

  • What is the main topic discussed in the video script?

    -The main topic discussed is the concept of CLIP (Contrastive Language-Image Pre-training) embeddings and the debate around the idea that adding more data and bigger models will lead to general intelligence in AI.

  • What is the general argument made by some tech companies regarding AI and data?

    -The argument is that by continuously adding more data and increasing model sizes, AI will eventually achieve a level of general intelligence capable of performing any task across all domains.

  • What does the speaker suggest about the idea of AI achieving general intelligence through data and model size alone?

    -The speaker suggests skepticism, stating that the idea needs to be experimentally justified rather than hypothesized, and refers to a recent paper that argues against this notion.

  • What does the paper mentioned in the script argue against?

    -The paper argues against the idea that simply adding more data and bigger models will eventually solve all AI challenges, stating that the amount of data needed for general zero-shot performance is astronomically vast and impractical.

  • What are the potential downstream tasks for CLIP embeddings mentioned in the script?

    -The potential downstream tasks mentioned include classification, image recall, and recommender systems for services like Spotify or Netflix.

  • What does the script suggest about the effectiveness of current AI models on difficult problems?

    -The script suggests that current AI models may not be effective for difficult problems without massive amounts of data to support them, especially when dealing with under-represented concepts.

  • What does the speaker mean by 'zero-shot classification' in the context of the script?

    -Zero-shot classification refers to the ability of a model to classify an object or concept without having seen examples of it during training, by relying on the embedded space where text and images are matched.
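    The mechanism can be sketched with toy vectors. The embeddings and labels below are invented stand-ins for real encoder outputs; the point is only that zero-shot classification reduces to nearest cosine similarity in the shared space.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_classify(image_emb, label_embs):
    # Pick the caption whose text embedding lies closest to the image embedding.
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

# Toy embeddings standing in for real CLIP encoder outputs (hypothetical values).
label_embs = {
    "a photo of a cat": [0.9, 0.1, 0.0],
    "a photo of a dog": [0.1, 0.9, 0.0],
    "a photo of a tree": [0.0, 0.1, 0.9],
}
image_emb = [0.8, 0.2, 0.1]  # pretend this came from the image encoder

print(zero_shot_classify(image_emb, label_embs))  # → a photo of a cat
```

    No cat-specific training is needed at classification time; only the text prompts change, which is what makes the approach "zero-shot".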

  • What does the script imply about the distribution of classes and concepts in current AI datasets?

    -The script implies that there is an uneven distribution, with some concepts like cats being over-represented, while others like specific tree species are under-represented in the datasets.

  • What is the potential implication of the findings in the paper for the future of AI development?

    -The implication is that there may be a plateau in AI performance improvements, suggesting that more data and bigger models alone may not lead to significant advancements and that alternative strategies may be needed.

  • What is the speaker's stance on the current trajectory of AI performance improvements?

    -The speaker leans towards the more pessimistic reading of the evidence, suggesting that the current approach may not yield the expected exponential improvements in AI performance, while remaining open to being proved wrong in a few years.

  • What is the role of human feedback in training AI models as mentioned in the script?

    -Human feedback is suggested as a potential method to improve the training of AI models, making them more accurate and effective, especially for under-represented concepts.

Outlines

00:00

🧠 AI's Limitations in General Intelligence

The video introduces CLIP (Contrastive Language-Image Pre-training) embeddings, which learn to represent images and text in a shared space. It challenges the notion that simply adding more data and bigger models will inevitably lead to general intelligence. The speaker highlights a recent paper arguing that the amount of data needed for zero-shot performance on new tasks is astronomically high and may be infeasible to collect. The paper suggests that the effectiveness of models like CLIP on downstream tasks diminishes as task difficulty increases, especially for under-represented concepts. The speaker emphasises experimental evidence over speculation about AI's capabilities.

05:00

📈 Data Abundance vs. Model Performance

This paragraph delves into the relationship between the volume of data and the performance of AI models in downstream tasks such as classification and recommendation systems. The speaker describes an experiment where the prevalence of concepts in datasets is measured and compared against the performance of these tasks. The graph illustrates that as the number of examples for a specific concept increases, the performance improvement plateaus, suggesting a limit to the effectiveness of adding more data. The speaker questions the optimistic view that more data will lead to an AI explosion and instead presents a more pessimistic or realistic outlook where performance gains are marginal and costly.
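The plateau described here can be illustrated with a toy log-linear scaling curve; the base and slope values below are invented for illustration, not taken from the paper. Every tenfold increase in examples buys the same fixed accuracy gain, so returns per additional example shrink rapidly.

```python
import math

def toy_accuracy(n_examples, base=0.40, slope=0.05):
    # Hypothetical log-linear scaling: accuracy grows with log10 of the
    # number of training examples for a concept.
    return base + slope * math.log10(n_examples)

for n in [100, 1_000, 10_000, 100_000, 1_000_000]:
    print(n, round(toy_accuracy(n), 3))

# Each 10x more data adds the same fixed 0.05 accuracy, so the gain per
# additional example keeps shrinking: this is the diminishing-returns shape.
gain_small = toy_accuracy(1_000) - toy_accuracy(100)
gain_large = toy_accuracy(1_000_000) - toy_accuracy(100_000)
assert abs(gain_small - gain_large) < 1e-9
```

Plotted with a linear x-axis, this curve rises steeply and then flattens out, which is the shape the speaker draws as the "evidence" line.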

10:01

🌳 The Challenge of Underrepresented Data in AI

The speaker addresses the issue of underrepresented data in AI training sets, using the example of specific tree species being less common than general categories like cats or dogs. This leads to poorer performance when AI models are tasked with identifying more specific or obscure items. The script also touches on the potential inefficiency of relying solely on data collection to improve AI performance. It suggests that alternative methods may be necessary to achieve high performance on difficult tasks that are not well-represented in typical datasets. The speaker also speculates on the future of AI development, pondering whether we might be reaching a plateau in performance improvements.

Keywords

💡CLIP Embeddings

CLIP embeddings come from a method in AI where images and text are paired and processed to learn a shared representation. This technique is central to the video's theme, which examines the limits of such representations for understanding and producing new content. CLIP embeddings are mentioned in the context of training AI to understand images by associating them with descriptive text.
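A minimal sketch of the pairing idea, using invented two-dimensional embeddings: after contrastive training, each image's embedding should be most similar to its own caption's embedding, so the diagonal of the image-text similarity matrix dominates each row.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy batch: image i is paired with caption i (hypothetical embeddings
# standing in for the Vision Transformer and text encoder outputs).
image_embs = [[1.0, 0.1], [0.1, 1.0]]
text_embs = [[0.9, 0.2], [0.2, 0.9]]

# Similarity matrix: rows are images, columns are captions.
sim = [[cosine(img, txt) for txt in text_embs] for img in image_embs]

# A well-trained pairing makes each image most similar to its own caption,
# i.e. the diagonal entry is the largest in each row.
for row_idx, row in enumerate(sim):
    assert max(range(len(row)), key=row.__getitem__) == row_idx
print("matched pairs dominate each row")
```

Contrastive training works by nudging the encoders so that exactly this property holds over huge batches of real image-caption pairs.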

💡Generative AI

Generative AI is a branch of artificial intelligence that focuses on creating new content, such as sentences, images, and more. The video discusses the potential and limitations of generative AI, particularly in relation to the idea that adding more data and bigger models will inevitably lead to more capable AI systems.

💡General Intelligence

General intelligence, in the context of AI, refers to the ability of a system to perform well across a wide range of tasks, not just a specific domain. The script challenges the notion that simply scaling up data and models will lead to AI with general intelligence, suggesting that this may not be feasible without an impractically large amount of data.

💡Zero-Shot Performance

Zero-shot performance is the ability of a machine learning model to perform a task without any training on that specific task. The video discusses the paper's argument that achieving general zero-shot performance on new tasks with current AI models would require an astronomical amount of data, which may not be feasible.

💡Vision Transformer

A Vision Transformer is a type of neural network architecture that is used for processing images. In the script, it is mentioned as part of the clip embeddings process, where it works alongside a text encoder to create a shared embedded space for images and text.

💡Text Encoder

A text encoder is a component of a machine learning system that converts text into a numerical representation that can be processed by other parts of the system. The script describes how a text encoder works in conjunction with a Vision Transformer to create meaningful embeddings for both images and text.

💡Recommender System

A recommender system, as mentioned in the video, is the type of algorithm used by services like Spotify or Netflix to suggest content to users based on their previous interactions. The video discusses how CLIP-style embeddings could, in principle, power such a system by recommending items that embed near the things a user has just watched.
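The idea can be sketched with toy embeddings; the titles and vectors below are hypothetical, and real systems learn far richer representations. Build a profile from what the user watched, then recommend the nearest unwatched title in the shared space.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical catalogue embeddings (invented for illustration).
catalogue = {
    "space documentary": [0.9, 0.1],
    "sci-fi thriller": [0.8, 0.3],
    "baking show": [0.1, 0.9],
}

def recommend(watched, catalogue):
    # Average the embeddings of watched items into a taste profile,
    # then return the nearest unwatched title in the shared space.
    dims = len(next(iter(catalogue.values())))
    profile = [sum(catalogue[w][d] for w in watched) / len(watched) for d in range(dims)]
    candidates = [title for title in catalogue if title not in watched]
    return max(candidates, key=lambda title: cosine(profile, catalogue[title]))

print(recommend(["space documentary"], catalogue))  # → sci-fi thriller
```

This is the "what embeds into the same space as the things I just watched" idea from the video, reduced to a nearest-neighbour lookup.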

💡Downstream Tasks

Downstream tasks are the applications or specific problems that machine learning models are used to solve after pre-training. The script talks about how clip embeddings can be used for downstream tasks such as classification and recommendations, but also highlights the challenges in applying these tasks to difficult problems without sufficient data.

💡Concepts

In the context of the video, concepts refer to the categories or ideas that AI models are trained to recognize, such as 'cat' or 'tree species'. The script discusses how the prevalence of these concepts in datasets affects the performance of AI models on downstream tasks.

💡Data Distribution

Data distribution refers to the way data is spread across categories within a dataset. The video points out that the distribution is uneven, with concepts like 'cats' over-represented and others like 'specific tree species' under-represented, which degrades the performance of AI models on tasks involving the less common concepts.
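The imbalance can be sketched by counting concept mentions in a made-up caption corpus; all captions and counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical toy caption corpus: broad concepts dominate, specific
# ones sit in the long tail (counts invented for illustration).
captions = (
    ["a cat sitting on a sofa"] * 50
    + ["a dog in a park"] * 30
    + ["a tree by a river"] * 15
    + ["a silver birch in winter"] * 2
)

concepts = ["cat", "dog", "tree", "silver birch"]

counts = Counter()
for caption in captions:
    for concept in concepts:
        if concept in caption:
            counts[concept] += 1

for concept, n in counts.most_common():
    print(f"{concept}: {n}")

# The head concept outnumbers the rare one by more than an order of
# magnitude, mirroring the imbalance the paper measures at scale.
assert counts["cat"] / counts["silver birch"] > 10
```

The paper does essentially this across roughly 4,000 concepts and billions of captions, then correlates each concept's prevalence with downstream performance.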

💡Hallucination

In the context of AI, 'hallucination' refers to the phenomenon where a model generates outputs that are incorrect or nonsensical, particularly when dealing with underrepresented data. The script uses this term to describe the limitations of current AI models when faced with tasks that are not well-represented in their training data.

Highlights

The concept of using generative AI for producing new sentences and images and understanding various forms of data.

The potential for AI to develop general intelligence through training on vast amounts of image-text pairs.

The skepticism about the inevitability of achieving general AI capabilities by simply scaling up data and models.

The importance of experimental evidence in scientific hypotheses rather than speculation about AI's future capabilities.

A recent paper challenging the idea that more data and bigger models will inevitably lead to general zero-shot performance.

The argument that the data requirements for general AI are so vast they may be unattainable.

The role of CLIP embeddings in finding a shared representation for images and text.

The application of CLIP embeddings in downstream tasks such as classification and recommender systems.

The paper's finding that massive amounts of data are needed to apply downstream tasks effectively to difficult problems.

The limitations of current models in performing well on under-represented or more complex concepts.

The paper's methodology of defining core concepts and analyzing their prevalence and performance in data sets.

The graphical representation used in the paper to illustrate the relationship between data amount and task performance.

The differing perspectives on AI development: the optimistic 'AI explosion' vs. the paper's more cautious outlook.

The paper's evidence suggesting a plateau in performance improvement despite increasing data and model size.

The challenge of efficiently training AI on a diverse and balanced set of concepts.

The implications for AI development, suggesting the need for new strategies beyond scaling up data and models.

The potential for future advancements in AI training methods and data quality to improve performance.

The call for continued observation and experimentation to truly understand the capabilities and limits of AI.

Transcripts

00:00

So we looked at CLIP embeddings, and we've talked a lot about using generative AI to produce new sentences, to produce new images, to understand images, all these kinds of different things. The idea was that if we look at enough pairs of images and text, we will learn to distil what it is in an image into that kind of language. So the idea is you have an image, you have some text, and you can find a representation where they're both the same. The argument has gone that it's only a matter of time before we have so many images that we train on, and such a big network and all this kind of business, that we get this kind of general intelligence, or some kind of extremely effective AI that works across all domains. That's the implication. The argument is, and you see this a lot in the tech sector from some of these big tech companies, who to be fair want to sell products, that if you just keep adding more and more data, or bigger and bigger models, or a combination of both, ultimately you will move beyond just recognising cats and you'll be able to do anything. That's the idea: you show enough cats and dogs, and eventually the elephant is just implied.

As someone who works in science, we don't hypothesise about what happens, we experimentally justify it. So if you're going to say to me that the only trajectory is up, that it's going to be amazing, I would say go on and prove it, and then we'll sit here for a couple of years and we'll see what happens. But in the meantime, let's look at this paper, which came out just recently. This paper is saying that that is not true. It's saying that the amount of data you will need to get that kind of general zero-shot performance, that is to say performance on new tasks that you've never seen, is going to be astronomically vast, to the point where we cannot do it. So it's basically arguing against the idea that we can just add more data and bigger models and we'll solve it. Now, this is only one paper, and of course your mileage may vary if you have a bigger GPU than these people, and so on. But this is actual numbers, which is what I like, because I want to see tables of data that show a trend actually happening or not happening. I think that's much more interesting than someone's blog post that says "I think this is what's going to happen."

02:18

So let's talk about what this paper does and why it's interesting. We have CLIP embeddings: we have an image, we have a big Vision Transformer, and we have a big text encoder, which is another Transformer, a bit like the sort you would see in a large language model, which takes text strings like "my text string today". And we have some shared embedded space, and that embedded space is just a numerical fingerprint for the meaning in these two items. They're trained, remember, across many, many images, such that when you put in the same image and the text that describes that image, you get something in the middle that matches. The idea then is you can use that for other tasks, like classification, or image recall. If you use a streaming service like Spotify or Netflix, they have this thing called a recommender system: you've watched this programme, this programme, and this programme, so what should you watch next? You might have noticed that your mileage may vary on how effective that is, but actually I think they're pretty impressive given what they have to do. You could use this for a recommender system, because you could basically ask which programmes embed into the same space as all the things I just watched, and recommend them that way. So there are downstream tasks, like classification and recommendation, that we could build on a system like this.

What this paper is showing is that you cannot apply these downstream tasks effectively to difficult problems without massive amounts of data to back it up. The idea that you can apply this kind of classification to hard things, so not just cats and dogs but specific cats and specific dogs, or subspecies of tree, or difficult problems where the answer is more nuanced than just the broad category: there isn't enough data on those things to train these models that way.

"I've got one of those apps that tells you what specific species a tree is, so is it not just similar to that?" No, because those apps are just doing classification, or some other narrow problem; they're not using this kind of giant generative AI. The argument has been: why solve that silly little problem when you can solve a general problem and solve all your problems at once? And the response is: because it didn't work. That's why we're doing it. So there are pros and cons for both. I'm not going to say that no generative AI is useful, or that these models aren't incredibly effective at what they do, but I'm perhaps suggesting that it may not be reasonable to expect them to do very difficult medical diagnosis, because you haven't got the data set to back that up.

04:44

So how does this paper do this? Well, they define these core concepts. Some of the concepts are simple ones, like a cat or a person; some are slightly more difficult, like a specific species of cat, or a specific disease in an image, or something like that. They come up with about 4,000 different concepts, and these are simple text concepts, not complicated philosophical ideas; I don't know how well it embeds those. They look at the prevalence of these concepts in the training data sets, and then they test how well the downstream task, let's say zero-shot classification, or recall, or recommender systems, works on each of these concepts, and they plot that against the amount of data they had for that specific concept.

So let's draw a graph, and that will help me make it clearer. Imagine a graph where one axis is the number of examples in our training set of a specific concept, say a cat, a dog, or something more difficult, and the other axis is the performance on the actual task: a recommender system, recall of an object, or the ability to actually classify it as a cat. Remember, we talked about how you could use this for zero-shot classification, by just seeing whether an image embeds to the same place as the text "a picture of a cat". So this axis is performance. The best-case scenario, if you want an all-powerful AI that can solve all the world's problems, is that this line goes very steeply upwards. That's the exciting case; that's the kind of AI-explosion argument that basically says we're on the cusp of something, where the scale is going to be such that this can just do anything. Then there's the perhaps more reasonable, shall we say pragmatic, interpretation, call it balanced: a sort of linear movement. The idea is that we have to add a lot of examples, but we get a decent performance boost from them, so we just keep adding examples, we keep getting better, and that's going to be great. And remember, if we ended up at the top of this graph, we'd have something that could take any image and tell you exactly what's in it under any circumstance. Similarly, for large language models this would be something that could write with incredible accuracy on lots of different topics, and for image generation it would be something that could take your prompt and generate a photorealistic image of it with almost no coercion at all. That's kind of the goal.

07:16

This paper has done a lot of experiments, on a lot of these concepts, across a lot of models and a lot of downstream tasks; let's call this third line the evidence. "What are you going to call it, pessimistic?" It is pessimistic as well: it's logarithmic, so it basically flattens out. Now, this is just one paper; it doesn't necessarily mean it will always flatten out. But the suggestion, and it's not an argument they necessarily make in the paper, the paper is very reasonable and I'm being a bit more cavalier with my wording, is that you can keep adding more examples and keep making your models bigger, but we are soon about to hit a plateau where we don't get any better, and it's costing you millions and millions of dollars to train. At what point do you say, well, that's probably about as good as we're going to get with this technology? And then the argument goes: we need something else, something in the Transformer, or some other way of representing data, or some other machine learning strategy, something better than this in the long term, if we want that line to keep going up. So this is essentially evidence, I would argue, against the explosion argument that we just need a bit more data and we're on the cusp of something. We might come back here in a couple of years, if I'm still allowed on Computerphile after this absolute embarrassment of the claims I've made, and say, okay, actually the performance has improved massively. Or we might say, we've doubled the data set to 10 billion images and we've got 1% more on the classification task, which is good, but is it worth it? I don't know. This is a really interesting paper because it's very, very thorough: there's a lot of evidence, there are a lot of curves, and they all look exactly the same. It doesn't matter what method you use, what data set you train on, or what your downstream task is; the vast majority of them show this kind of problem.

09:07

The other problem is that we don't have a nice even distribution of classes and concepts within our data set. For example, cats, you can imagine, are over-represented in the data set by an order of magnitude, whereas specific planes or specific trees are incredibly under-represented, because you just have "tree". Trees are probably going to be less represented than cats anyway, but specific species of tree are very, very under-represented, which is why, when you ask one of these models "what kind of cat is this?" or "what kind of tree is this?", it performs worse than when you ask "what animal is this?", because that's a much easier problem. You see the same thing in image generation: if you ask it to draw something really common, like a castle, which comes up a lot in the training set, it can draw you a fantastic castle in the style of Monet and all this other stuff. But if you ask it to draw some obscure artefact from a video game that's barely even made it into the training set, suddenly it starts to produce something of a little less quality. And the same goes for large language models. This paper isn't about large language models, but you can see the same process already happening if you talk to something like ChatGPT. When you ask it about a really important topic from physics or something like that, it will usually give you a pretty good explanation, because that's in the training set. But the question is what happens when you ask it about something more difficult, when you ask it to write code that's actually quite difficult to write: it starts to make things up, it starts to hallucinate, and it starts to be less accurate. That is essentially the performance degrading because the task is under-represented in the training set.

The argument, or at least the argument I'm starting to come around to, is that if you want performance on hard tasks, tasks that are under-represented in general internet text and searches, we have to find some other way of doing it than just collecting more and more data, particularly because it's incredibly inefficient to do this. On the other hand, these companies have got a lot more GPUs than me; they're going to train on bigger and bigger corpora and better-quality data, and they're going to use human feedback to better train their language models, so they may find ways to push this curve up a little as we go forward. But it's going to be really interesting to see what happens. Will it plateau out? Will we see ChatGPT 7 or 8 or 9 be roughly the same as ChatGPT 4, or will we see another state-of-the-art performance boost every time? I'm kind of trending towards the plateau, but I'll be excited to see if it goes the other way.

11:37

Take a look at this puzzle devised by today's episode sponsor, Jane Street. It's called Bug Byte, inspired by debugging code, a world we're all too familiar with, where solving one problem might lead to a whole chain of others. We'll link to the puzzle in the video description; let me know how you get on. And speaking of Jane Street, we're also going to link to some programs they're running at the moment. These events are all expenses paid and give a little taste of the tech and problem solving used at trading firms like Jane Street. Are you curious? Are you a problem solver? Are you into computers? I think maybe you are. If so, you may well be eligible to apply for one of these programs. Check out the links below, or visit the Jane Street website and follow the links there. There are some deadlines coming up for ones you might want to look at, and there are always more on the horizon. Our thanks to Jane Street for running great programs like this and also supporting our channel. And don't forget to check out that Bug Byte puzzle.
