Has Generative AI Already Peaked? - Computerphile

Computerphile
9 May 2024 · 12:47

Summary

TLDR: This video discusses CLIP embeddings, a technique that uses generative AI to relate images and text. The presenter questions the claim that general intelligence will emerge simply by adding more data and larger models, noting that the amount of data required to reach general zero-shot performance may be enormous. A recent paper shows experimentally that, for hard problems, current models perform poorly when the relevant data is scarce. The uneven distribution of classes in training sets also limits how well models recognise rare categories. Although large tech companies may improve their models with more GPUs and human feedback, the presenter argues that handling difficult problems that are rare in internet text and search will likely require new approaches.

Takeaways

  • 🧠 Discusses using generative AI to produce new sentences and images, and to understand images and text.
  • 🔍 By training on enough image-text pairs, a model can learn to distill the content of an image into language.
  • 🚀 One popular claim is that as training data and network size grow, AI will develop general intelligence across domains.
  • 🔬 The scientific method favours experimental evidence over speculation, so optimistic forecasts of AI performance deserve scrutiny.
  • 📉 A recent paper argues that the amount of data needed for general zero-shot performance may be infeasibly large.
  • 📈 The paper's experiments show that the relationship between data volume and model performance is typically logarithmic, not linear.
  • 📊 The paper analyses how roughly 4,000 concepts are distributed across the datasets and how each performs on downstream tasks.
  • 🌐 Covers CLIP embeddings (a shared embedding space for images and text) and their use in classification, recommender systems, and other tasks.
  • 🚧 For hard problems there is not enough data to train these models effectively, so performance is limited.
  • 📚 Highlights the uneven distribution of classes and concepts: common categories (such as cats) are over-represented, while specific ones (such as particular tree species) are under-represented.
  • 🔑 Suggests that, beyond collecting more data, new data representations or machine learning strategies may be needed to improve performance on hard tasks.

Q & A

  • What are CLIP embeddings?

    - CLIP embeddings are representations learned from a very large number of image-text pairs. They map images and text into a shared embedding space, so that an image and the text describing it end up close together in that space. (A minimal code sketch follows below.)
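
    As a concrete illustration (not from the video), here is a minimal sketch of how such a shared embedding space can be queried for zero-shot classification, assuming the Hugging Face transformers library and the public "openai/clip-vit-base-patch32" checkpoint are available; the image path and label prompts are placeholders.

```python
# Hedged sketch of CLIP-style zero-shot classification.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a tree"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled similarities between the image embedding
# and each text embedding; softmax turns them into label scores.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```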

  • Why do some people believe that more data and bigger models alone will lead to general intelligence?

    - The view is based on an observed trend: as data and model scale grow, AI performance on tasks such as image recognition keeps improving. Some therefore conclude that if we keep scaling data and models, AI will eventually be able to handle every kind of task.

  • Why is experimental validation more important than hypothesising?

    - In science, experiments are how hypotheses are tested. Merely asserting a hypothesis without experimental evidence gives no guarantee that it holds in practice.

  • Why does this paper argue against the idea that scaling data and models will solve everything?

    - The paper's experiments indicate that the amount of data required for zero-shot performance on new tasks is so vast as to be practically unattainable. This suggests that simply adding data and model capacity cannot improve AI performance without limit.

  • What are downstream tasks?

    - Downstream tasks are the specific applications built on top of a trained base model, such as classification or recommender systems.

  • Why do downstream tasks on hard problems need so much data?

    - Hard problems usually involve more specific concepts, which may be very rare in the dataset, so the model cannot learn enough features to recognise or classify them reliably.

  • How does the paper test the relationship between concept prevalence in the dataset and model performance?

    - The paper defines roughly 4,000 concepts, measures how prevalent each one is in the training datasets, and then measures downstream-task performance for each concept, plotting performance against the amount of data available for that concept. (A toy sketch of this kind of analysis follows below.)
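
    The following is a toy sketch of that style of analysis, not the authors' code: pair each concept's frequency in the training data with its downstream accuracy and plot accuracy against frequency on a log-scaled x axis. The concept names, counts and accuracies are invented placeholders, and matplotlib is assumed to be available.

```python
# Toy analysis sketch: per-concept data volume vs. downstream accuracy.
import matplotlib.pyplot as plt

concept_counts = {"cat": 1_200_000, "dog": 900_000, "oak tree": 40_000,
                  "silver birch": 2_500, "rare video-game artefact": 60}
zero_shot_accuracy = {"cat": 0.95, "dog": 0.93, "oak tree": 0.71,
                      "silver birch": 0.42, "rare video-game artefact": 0.12}

xs = [concept_counts[c] for c in concept_counts]
ys = [zero_shot_accuracy[c] for c in concept_counts]

plt.scatter(xs, ys)
plt.xscale("log")  # a log axis makes the flattening trend visible
plt.xlabel("training examples containing the concept")
plt.ylabel("downstream zero-shot accuracy")
for c, x, y in zip(concept_counts, xs, ys):
    plt.annotate(c, (x, y))
plt.show()
```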

  • Why does an uneven class distribution in the dataset hurt model performance?

    - If some classes (such as cats) are over-represented while others (such as particular tree species) are under-represented, the model will do well on the common classes but poorly on the rare ones, because it never sees enough examples to learn their features.

  • Why might adding more data alone fail to deliver large performance gains?

    - The experiments show that as data volume grows, performance gains gradually level off and reach a plateau, so further increases in data may not deliver the expected improvement.

  • What does this paper mean for the future of AI?

    - It offers a critical look at the current scaling-driven approach, suggesting that we may need new methods or strategies to improve AI performance rather than relying on ever more data.

Outlines

00:00

🤖 AI and the limits of image-text embeddings

This segment discusses how AI is used for image and text embeddings and where that approach breaks down. Models are trained on huge numbers of image-text pairs so that the content of an image can be expressed in language. A popular claim is that, with enough data and a big enough model, AI will reach general intelligence, but recent research suggests the amount of data needed for zero-shot performance on new, unseen tasks would be enormous. The paper shows experimentally that for hard problems there simply is not enough data for these models to work well. Downstream tasks such as classification and recommender systems, which build on the shared image-text embedding space, are also introduced.
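
To make the training idea concrete, here is a hedged sketch (not the video's or the paper's code) of a CLIP-style contrastive objective in PyTorch: embeddings of matching image-text pairs are pushed together on the diagonal of a batch similarity matrix and apart elsewhere. The inputs stand in for the outputs of the real vision and text Transformers.

```python
# Hedged sketch of a CLIP-style contrastive loss (InfoNCE over a batch of
# matching image-text pairs). `image_emb` and `text_emb` stand in for the
# outputs of the vision and text encoders; shapes are (batch, dim).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    # Normalise so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix: entry [i, j] compares image i to text j.
    logits = image_emb @ text_emb.T / temperature

    # The matching pair for row i is column i, so the targets are the diagonal.
    targets = torch.arange(image_emb.shape[0], device=image_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)    # image -> text direction
    loss_t2i = F.cross_entropy(logits.T, targets)  # text -> image direction
    return (loss_i2t + loss_t2i) / 2

# Example with random stand-in embeddings:
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```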

05:00

📈 The relationship between data volume and AI performance

This segment uses a graph to illustrate how performance scales with data. The researchers define roughly 4,000 core concepts, measure how prevalent each one is in the datasets, and test how well downstream tasks perform on each. The results show that performance does not grow linearly with data; it grows roughly logarithmically and eventually flattens out. So while adding data and model capacity does help, the gains soon hit a ceiling. The uneven distribution of classes and concepts also drags performance down: common categories like cats and dogs have far more data than uncommon ones such as specific tree species.
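
One way to see what "roughly logarithmic" implies is to fit performance as a + b·log10(n) examples and extrapolate. The numbers below are invented placeholders, not the paper's data; this is only a sketch of the kind of back-of-the-envelope reasoning involved.

```python
# Toy illustration (invented numbers): fit accuracy ~ a + b*log10(n) and see
# how much extra data a fixed accuracy gain would require under that trend.
import numpy as np

n = np.array([1e3, 1e4, 1e5, 1e6, 1e7])          # examples per concept
acc = np.array([0.40, 0.52, 0.61, 0.68, 0.73])   # made-up accuracies

b, a = np.polyfit(np.log10(n), acc, deg=1)        # least-squares line in log space
print(f"fit: accuracy ≈ {a:.2f} + {b:.2f} * log10(n)")

# Under a logarithmic trend, each additional slice of accuracy costs roughly a
# constant *multiplicative* factor more data:
target = 0.90
needed = 10 ** ((target - a) / b)
print(f"examples needed for {target:.0%} accuracy under this fit: {needed:.2e}")
```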

10:01

🎮 Challenges for AI on specific tasks, and a look ahead

The final segment discusses the challenges AI faces on specific tasks when data is scarce, for example generating images of uncommon objects or explaining obscure concepts, where quality drops noticeably. It also raises questions about the way forward, including whether new data representations or machine learning strategies are needed to break through the current performance plateau. The episode closes with a mention of the technical problem-solving programmes run by sponsor Jane Street, aimed at viewers interested in computers and problem solving.


Keywords

💡CLIP Embeddings

CLIP embeddings map images and text into a shared embedding space, so that an image and the text describing it have similar representations. In the video, CLIP embeddings frame the discussion of training on huge numbers of image-text pairs so that image content can be described in language: with enough training, the model maps an image and its descriptive caption to the same point in the embedding space.

💡Generative AI

Generative AI refers to techniques that can produce new data instances such as sentences or images. The video discusses using generative AI to produce new sentences and images and to understand images, the core idea being that models trained on large amounts of data can create content they have never seen before.

💡General Intelligence

General intelligence refers to AI that works effectively across many tasks and environments, as opposed to intelligence specialised for one domain. The video describes the claim that training on enough image-text pairs will eventually yield general intelligence capable of handling all sorts of problems.

💡Dataset

A dataset is the collection of data used to train a machine learning model. The video discusses how dataset size affects training, pointing out that reaching capabilities like zero-shot learning may require an extremely large dataset.

💡Zero-Shot Learning

Zero-shot learning is a machine learning setting in which a model classifies examples of categories it was never directly trained on. The video notes that reaching this kind of performance in general may require an astronomical amount of data.

💡Downstream Tasks

Downstream tasks are specific applications built on top of a pretrained model, such as classification or recommender systems. Using CLIP embeddings as the example, the video shows how the shared embedding space can be used for image classification and recommendation.

💡Recommender Systems

A recommender system is an information-filtering system that predicts what a user is likely to be interested in. In the video it serves as an example application of CLIP embeddings: the embedding space is used to recommend programmes or content similar to what the user has already watched. (A small sketch follows below.)
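
The following is a hedged sketch of an embedding-based recommender, not a production system and not from the video: recommend the catalogue items whose embeddings are closest to the average embedding of what the user has already watched. The catalogue names are hypothetical and the embeddings are random stand-ins for vectors a CLIP-like model would produce.

```python
# Embedding-based recommendation by cosine similarity to a user profile.
import numpy as np

rng = np.random.default_rng(0)
catalogue = {f"programme_{i}": rng.normal(size=512) for i in range(100)}
watched = ["programme_3", "programme_17", "programme_42"]

def normalize(v):
    return v / np.linalg.norm(v)

# Profile = mean of the embeddings of the watched items.
profile = normalize(np.mean([catalogue[t] for t in watched], axis=0))

# Rank everything else by cosine similarity to the profile.
scores = {title: float(normalize(emb) @ profile)
          for title, emb in catalogue.items() if title not in watched}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("recommended:", top)
```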

💡Concept Prevalence

Concept prevalence is how frequently a concept appears in the dataset. The video describes how the paper measures the prevalence of about 4,000 concepts to study how data volume affects model performance.

💡Performance Improvement

Performance improvement refers to gains in a model's results on a given task. Using the paper's experimental data, the video shows how performance improves as data volume grows and discusses the plateau that eventually appears.

💡Model Generalisation

Generalisation is a model's ability to make predictions on new or unseen data. The video notes that although models may do well on common categories, they can fall short on rare or hard-to-recognise ones, which reflects the limits of their generalisation.

💡Data Distribution

Data distribution describes how well each class or concept is represented in a dataset. The video points out that some classes are heavily over-represented while others are severely under-represented, which limits the model's ability to recognise the rarer ones.

💡Machine Learning Strategies

Machine learning strategies are the methods and techniques used to improve model performance. The video closes by asking whether new strategies are needed to get past the bottleneck current models hit on hard tasks.

Highlights

Discusses using generative AI to produce new sentences and images and to understand images and text.

With enough image-text pairs, a model can learn to distill the content of an image into language.

Presents the claim that AI will develop general intelligence across domains as training data and network size grow.

Science tests hypotheses with experiments rather than relying on speculation.

A recently published paper argues that the amount of data needed for general zero-shot learning would be enormous.

The paper's data and figures show that adding data and model capacity does not improve performance without limit.

Introduces CLIP embeddings, including the Transformer encoders for images and text.

CLIP embeddings can be used for downstream tasks such as classification, image retrieval, and recommendation.

For hard problems, downstream tasks built on CLIP embeddings perform poorly without large amounts of supporting data.

The paper defines core concepts, measures their prevalence in the datasets, and tests downstream-task performance on each.

The experiments show that for some concepts, performance gains are limited even as data increases.

The paper highlights the uneven distribution of classes and concepts in the datasets.

Large language models become less accurate on questions that are rare in their training data.

For hard tasks, approaches other than simply collecting more data may be needed.

The paper's results raise questions about the future direction of the field.

Large tech companies may be overly optimistic in how they promote AI progress.

Raises considerations around the cost and efficiency of training these models.

Mentions the technical problem-solving programmes and sponsorship from Jane Street.

Transcripts

00:00

So we looked at CLIP embeddings, and we've talked a lot about using generative AI to produce new sentences, to produce new images, and also to understand images, all these kinds of things. The idea was that if we look at enough pairs of images and text, we will learn to distill what it is in an image into that kind of language. So you have an image, you have some text, and you can find a representation where they're both the same. The argument has gone that it's only a matter of time before we train on so many images, with such a big network and all this kind of business, that we get some kind of general intelligence, some kind of extremely effective AI that works across all domains. That's the implication. You see it a lot in the tech sector, from some of these big tech companies who, to be fair, want to sell products: if you just keep adding more and more data, or bigger and bigger models, or a combination of both, ultimately you will move beyond just recognising cats and you'll be able to do anything. You show enough cats and dogs and eventually the elephant is just implied.

As someone who works in science, we don't hypothesise about what happens, we experimentally justify it. So if you're going to tell me that the only trajectory is up and it's going to be amazing, I would say: go on, prove it, do it, and then we'll see. We'll sit here for a couple of years and see what happens. But in the meantime, let's look at this paper, which came out just recently. This paper is saying that that is not true: the amount of data you would need to get that kind of general zero-shot performance, that is to say performance on new tasks that you've never seen, is going to be astronomically vast, to the point where we cannot do it. So it is basically arguing against the idea that we can just add more data and bigger models and we'll solve it. Now, this is only one paper, and of course your mileage may vary if you have a bigger GPU than these people, but this is actual numbers, which is what I like, because I want to see tables of data that show a trend actually happening or not happening. I think that's much more interesting than someone's blog post that says "I think this is what's going to happen."

02:18

So let's talk about what this paper does and why it's interesting. We have CLIP embeddings: an image goes into a big Vision Transformer, and the text goes into a big text encoder, another Transformer, a bit like the sort you would see in a large language model, which takes text strings, "my text string today". Then we have a shared embedding space, and that embedding space is just a numerical fingerprint for the meaning of these two items. They're trained, remember, across many, many images, such that when you put in an image and the text that describes it, you get something in the middle that matches. The idea then is that you can use that for other tasks, like classification or image recall. If you use a streaming service like Spotify or Netflix, they have this thing called a recommender system: you've watched this programme, this programme and this programme, so what should you watch next? You might have noticed that your mileage may vary on how effective that is, but actually I think they're pretty impressive given what they have to do. You could use this for a recommender system, because you could ask which programmes embed into the same space as the things I just watched, and recommend them that way. So there are downstream tasks, like classification and recommendation, that we could build on a system like this.

What this paper is showing is that you cannot apply these downstream tasks effectively to difficult problems without massive amounts of data to back them up. The idea that you can apply this kind of classification to hard things, not just cats and dogs but specific cats and specific dogs, or subspecies of tree, or problems where the answer is more specific than the broad category: there isn't enough data on those things to train these models that way.

"I've got one of those apps that tells you what specific species a tree is, so is it not just similar to that?"

No, because those are just doing classification, or some other specific problem; they're not using this kind of giant generative AI. The argument has been: why solve that silly little problem when you can solve a general problem and thereby solve all your problems? And the response is: because it didn't work. That's why we're doing it. There are pros and cons to both. I'm not going to say that generative AI is useless, or that these models aren't incredibly effective at what they do, but I am perhaps suggesting that it may not be reasonable to expect them to do very difficult medical diagnosis, because you haven't got the dataset to back that up.

04:42

So how does this paper do this? What they do is define a set of core concepts. Some of the concepts are simple ones, like a cat or a person; some are slightly more difficult, like a specific species of cat, or a specific disease in an image, something like that. They come up with about 4,000 different concepts, and these are simple text concepts, not complicated philosophical ideas; I don't know how well it embeds those. They look at the prevalence of these concepts in the training datasets, then test how well the downstream tasks, say zero-shot classification, recall, or recommendation, work for each of these concepts, and they plot that against the amount of data they had for that specific concept.

So let's draw a graph; that will make it clearer. Imagine the x-axis is the number of examples in the training set for a specific concept, say a cat, a dog, or something more difficult, and the y-axis is the performance on the actual task, say recommendation, recall of an object, or the ability to classify it as a cat. Remember, we talked about how you could use this for zero-shot classification by just checking whether the image embeds to the same place as the text "a picture of a cat". The best-case scenario, if you want an all-powerful AI that can solve all the world's problems, is that this line goes up very steeply. That's the exciting case, the AI-explosion argument which says we're on the cusp of something, whatever that may be, where the scale is such that this can just do anything. Then there's the perhaps more reasonable, pragmatic interpretation, call it balanced, which is a roughly linear trend: we have to add a lot of examples, but we get a decent performance boost from them, so we keep adding examples and we keep getting better. And remember that if we ended up at the top of this graph, we would have something that could take any image and tell you exactly what's in it under any circumstance. Similarly, for large language models this would be something that could write with incredible accuracy on lots of different topics, and for image generation it would be something that could take your prompt and produce a photorealistic image with almost no coercion at all. That's the goal.

07:16

This paper has done a lot of experiments on a lot of these concepts, across a lot of models and a lot of downstream tasks, and let's call this curve the evidence.

"What are you going to call it, pessimistic?"

It is pessimistic as well: it's logarithmic, so it rises and then flattens out. This is just one paper, and it doesn't necessarily mean the curve will always flatten out, but the suggestion, and it's not an argument the authors necessarily make, the paper is very reasonable and I'm being a bit more cavalier with my wording, is that you can keep adding examples and keep making your models bigger, but we are soon going to hit a plateau where we don't get any better, and it's costing millions and millions of dollars to train. At what point do you say, "that's probably about as good as we're going to get with this technology"? And then the argument goes: we need something else, something in the Transformer, some other way of representing data, or some other machine learning strategy, something better than this in the long term, if we want that line to keep going up. So this is essentially evidence, I would argue, against the explosion possibility, the idea that you just add a bit more data and we're on the cusp of something. We might come back in a couple of years, if you still allow me on Computerphile after the absolute embarrassment of these claims, and say, OK, actually the performance has improved massively. Or we might say, we've doubled the dataset to 10 billion images and got 1% more on classification, which is good, but is it worth it? I don't know.

08:51

This is a really interesting paper because it's very, very thorough. There's a lot of evidence, there are a lot of curves, and they all look exactly the same. It doesn't matter what method you use, what dataset you train on, or what your downstream task is: the vast majority of them show this kind of problem. The other problem is that we don't have a nice, even distribution of classes and concepts within the dataset. Cats, for example, you can imagine are over-represented in the data by an order of magnitude, whereas specific planes or specific trees are incredibly under-represented, because you usually just have "tree". Trees are probably less represented than cats anyway, but specific species of tree are very, very under-represented, which is why, when you ask one of these models "what kind of cat is this?" or "what kind of tree is this?", it performs worse than when you ask "what animal is this?", because that's a much easier problem. You see the same thing in image generation: ask it to draw something really common, like a castle, which comes up a lot in the training set, and it can draw you a fantastic castle in the style of Monet and all the rest; ask it to draw some obscure artefact from a video game that has barely made it into the training set, and suddenly it draws something of noticeably lower quality. And the same goes for large language models. This paper isn't about large language models, but you can already see the same process happening if you talk to something like ChatGPT. Ask it about an important topic from physics and it will usually give you a pretty good explanation, because that's in the training set. The question is what happens when you ask it about something more difficult, when you ask it to write code that is actually quite hard to write. It starts to make things up, it starts to hallucinate, and it becomes less accurate, and that is essentially the performance degrading because the topic is under-represented in the training set.

10:46

The argument, at least the argument I'm starting to come around to, is that if you want performance on hard tasks, tasks that are under-represented in general internet text and searches, we have to find some way of doing it other than just collecting more and more data, particularly because doing that is incredibly inefficient. On the other hand, these companies have a lot more GPUs than I do. They're going to train on bigger and bigger corpora of better-quality data, and they're going to use human feedback to better train their language models, so they may find ways to push this curve up a little as we go forward. It's going to be really interesting to see what happens. Will it plateau? Will ChatGPT 7, 8 or 9 be roughly the same as ChatGPT 4, or will we see another state-of-the-art performance boost every time? I'm leaning towards the former, but it will be exciting to see if it goes the other way.

11:37

Take a look at this puzzle devised by today's episode sponsor, Jane Street. It's called Bug Byte, inspired by debugging code, a world we're all too familiar with, where solving one problem might lead to a whole chain of others. We'll link to the puzzle in the video description; let me know how you get on. And speaking of Jane Street, we're also going to link to some programmes they're running at the moment. These events are all expenses paid and give a little taste of the tech and problem solving used at trading firms like Jane Street. Are you curious? Are you a problem solver? Are you into computers? I think maybe you are. If so, you may well be eligible to apply for one of these programmes. Check out the links below, or visit the Jane Street website and follow the links there. There are some deadlines coming up for ones you might want to look at, and there are always more on the horizon. Our thanks to Jane Street for running great programmes like this and for supporting our channel. And don't forget to check out that Bug Byte puzzle.


Related Tags
AI technology, image recognition, text embeddings, data volume, model performance, zero-shot learning, recommender systems, classification tasks, scientific experiments, technology trends, industry analysis