Elon's NEW Prediction For AGI, META's New Agents, New SORA Demo, China Surpasses GPT-4, and more

TheAIGRID
25 May 2024 · 21:09

Summary

TLDR: The video covers the latest developments in AI, including Meta's plans for a paid AI assistant and the future of AI agents. It notes that Meta may not openly release its 400-billion-parameter large language model, and covers Elon Musk's prediction that artificial general intelligence (AGI) arrives by 2025. It also discusses AI commercialization, regulation, and the competition between models, such as 01.AI's Yi Large. Finally, it covers interpretability research into how AI models work internally, such as the Golden Gate Claude study, which shows how a model can tie a specific concept into its answers.

Takeaways

  • 🤖 Meta, Facebook's parent company, is developing a paid version of its AI assistant, likely resembling the chatbot services offered by Google, OpenAI, and Anthropic.
  • 🔍 Meta is also building AI agents that can complete tasks without human supervision, a sign that it is investing resources in the next wave of AI.
  • 👨‍💻 Meta is considering an engineering agent to assist with coding and software development, similar to GitHub Copilot.
  • 💰 On monetization, Meta employees say these agents would help businesses advertise on Meta's apps, and could be used both internally and by customers.
  • 🗣️ Leaks suggest Meta's 400-billion-parameter model may not be released openly, consistent with reports that Meta plans to charge for its future models.
  • 🚀 Elon Musk predicts we will have artificial general intelligence (AGI) by next year, which would imply a major breakthrough at one of the top AI labs.
  • 🎥 At the VivaTech conference, a demo showed how Voice Engine, Sora, and ChatGPT can be combined to quickly create a comprehensive video on the history of France.
  • 🛡️ Eric Schmidt argues that the most powerful future AI systems may need to be contained on military bases, because their capabilities could be so dangerous.
  • 🏆 01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks, showing that other companies are catching up.
  • 🧠 Research on Claude identified neurons that activate when the model encounters text or images related to the Golden Gate Bridge, helping us understand AI's internal workings.
  • 🔄 By adjusting the activation strength of specific features in Claude, researchers can steer the model's output, marking progress in interpretability and controllability research.
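The feature-steering idea in the last two takeaways can be sketched numerically. This is a hedged toy sketch, not Anthropic's actual code: the published work extracts feature directions with sparse autoencoders, whereas here a random unit vector stands in for the "Golden Gate Bridge" direction, and the `steer()` helper is an invention for illustration.

```python
import numpy as np

# Toy sketch of feature steering: a concept corresponds to a direction in
# activation space, and clamping the hidden state's projection onto that
# direction changes behavior. Vectors here are random stand-ins.
rng = np.random.default_rng(0)
d = 64                                  # toy hidden-state dimensionality
feature = rng.normal(size=d)
feature /= np.linalg.norm(feature)      # unit "Golden Gate Bridge" direction

def steer(hidden, direction, strength):
    """Replace the component of `hidden` along `direction` with `strength`."""
    coeff = hidden @ direction          # current activation of the feature
    return hidden + (strength - coeff) * direction

h = rng.normal(size=d)
h_up = steer(h, feature, strength=10.0)

print(h @ feature)      # original feature activation (small, random)
print(h_up @ feature)   # clamped activation
```

Only the component along the feature direction changes; everything orthogonal to it is untouched, which mirrors the "surgical change" framing used in the transcript.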

Q & A

  • Is Meta developing a paid version of its AI assistant?

    -Yes. Meta is working on a paid version of its AI assistant, a service that could resemble the chatbot offerings from Google, OpenAI, Anthropic, and Microsoft.

  • How would Meta's AI assistant resemble the chatbot services from Google and Microsoft?

    -Like the $20-per-month subscriptions from Google and Microsoft, it would let users work with the chatbots inside workplace apps.

  • Is Meta also developing AI agents that can complete tasks without human supervision?

    -Yes. Meta is building agents that can complete tasks independently, a sign that it is putting resources into the future of AI.

  • How do AI agents differ from existing large language models (LLMs)?

    -Agents are the next generation of AI: beyond language understanding, they can carry out a much broader range of tasks, such as coding and software development.

  • Does Meta plan an engineering agent to assist with coding and software development?

    -Yes. Meta plans an engineering agent, similar to GitHub Copilot, to assist with coding and software development.

  • Has Meta's AI agent work produced results yet?

    -No concrete results have been made public, but internal posts show the company is actively exploring the area.

  • Does Meta plan to build its AI agents on a large Llama model, such as the 400-billion-parameter model?

    -The video speculates that Meta may use the 400-billion-parameter Llama model, or another large model, as the foundation for its agents, but no concrete plan has been made public.

  • What is Elon Musk's prediction for AGI (artificial general intelligence)?

    -Musk predicts we will have AGI by next year, which suggests he expects a major breakthrough in AI very soon.

  • What is Eric Schmidt's view of future AI systems?

    -Schmidt believes the most powerful future AI systems will need to be contained on military bases, because their capabilities will be powerful enough to be dangerous.

  • What progress has 01.AI made?

    -01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks, showing how quickly the company is advancing.

  • What did the research on Claude's neural network reveal about AI's inner workings?

    -It identified features, such as a Golden Gate Bridge feature, that activate on specific inputs, which helps us understand the model's inner workings and improves interpretability.

Outlines

00:00

🤖 Meta Develops a Paid AI Assistant

Meta is developing a paid version of its AI assistant, possibly resembling the chatbot services offered by Google, Microsoft, and others, which currently charge $20-per-month subscriptions to use their chatbots inside workplace apps. Meta is also building AI agents that can complete tasks independently, a sign that it is investing in the future of AI. Agents may be the key to AI's next wave: once an agent can carry out complex tasks such as writing articles on its own, people will really grasp how powerful agents are. Meta may also ship an engineering agent to assist with coding and software development, similar to GitHub Copilot. On monetization, these agents are expected to arrive in late 2024 to early 2025; they will likely be very expensive, but they could change the game.

05:02

💡 The Future of AI and Elon Musk's AGI Prediction

As AI develops, the balance between open source and profit is becoming a central debate. Meta's plan to charge for its 400-billion-parameter model, for example, shows that the field's business models are shifting. Logan, now at Google and previously at OpenAI, asked how long until artificial general intelligence (AGI), and Elon Musk predicted we may have AGI as soon as next year. The prediction can be read two ways: on one hand, Musk has deep insight and a track record of calls across many fields; on the other, several of his past predictions proved overly optimistic. The section also discusses how AGI is defined, OpenAI's progress, and the possibility of AGI arriving in 2025.

10:04

🎥 AI in Video Production

A demo at the VivaTech conference showed AI applied to video production: by combining Voice Engine, Sora, and ChatGPT, a comprehensive video on the history of France was created quickly, showing how much faster work gets done when multiple AI systems operate together. The section also touches on model safety, engaging with stakeholders, and gathering feedback through trusted partners to improve the technology.
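The "multiple AI systems working together" workflow described above can be sketched as a simple pipeline. The three `generate_*` functions below are stand-ins: the demo showed products, not code, and these are not real OpenAI API calls.

```python
# Hypothetical sketch of the VivaTech-style pipeline: a script from a chat
# model, narration from a voice model, and visuals from a video model,
# stitched together. All three stages are stubbed for illustration.

def generate_script(topic: str) -> str:
    return f"A short narrated history of {topic}."      # stand-in for ChatGPT

def generate_narration(script: str, voice_sample: str) -> bytes:
    return script.encode()                              # stand-in for Voice Engine

def generate_visuals(script: str) -> list[str]:
    # stand-in for Sora: one "scene" per sentence of the script
    return [f"scene: {s.strip()}" for s in script.split(".") if s.strip()]

def make_video(topic: str, voice_sample: str) -> dict:
    script = generate_script(topic)
    return {
        "script": script,
        "audio": generate_narration(script, voice_sample),
        "scenes": generate_visuals(script),
    }

video = make_video("France", voice_sample="sample.wav")
print(video["script"])
```

The point of the sketch is the orchestration: each model handles one modality, and the gain in speed comes from chaining their outputs rather than from any single model.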

15:06

🛡️ AI Safety and Regulation

Eric Schmidt argues that the most powerful future AI systems may need to be contained on military bases, because their potential capabilities could be extremely dangerous. He compares AI to biosafety levels (BSL) and predicts a small number of extremely powerful computers that will sit on army bases under heavy guard. This raises questions about AI regulation and government intervention, particularly the new regulatory challenges that arise once a private company builds AI technology that exceeds what governments themselves can do.

20:06

🌟 Progress and Research from Other AI Companies

Other companies are advancing too, notably 01.AI, whose Yi Large model now beats top models such as GPT-4 and Llama 3 on performance benchmarks. The section also covers the Golden Gate Claude research, which probes how the model works internally: by adjusting the activation strength of specific features, researchers can steer the model's output. The work improves AI interpretability and safety and offers important insight for building more powerful future systems.

🧐 Inside the Model

Using Claude as the example, the section explores a model's internal mechanisms and how understanding them improves interpretability and control. It shows how the model adjusts its output under specific prompts, and how changing its internal connections alters its behavior. This suggests AI research is moving from black-box operation toward deeper understanding, which is essential for building safer, more reliable systems.

Keywords

💡Metaverse

The metaverse is a concept championed by Meta: a shared virtual space built from augmented- and virtual-reality technology. The video notes that Meta is developing a paid version of its AI assistant, which ties closely into building out the metaverse.

💡AI assistant

An AI assistant is a personal assistant driven by artificial intelligence that can handle tasks such as answering questions and setting reminders. The assistant Meta is building may resemble the chatbot services offered by Google, OpenAI, and Anthropic.

💡AI agent

An AI agent is an AI system that can complete tasks independently, without human supervision. Meta's work on agents suggests future AI development will emphasize agent capabilities, which could be applied across fields such as programming and software development.

💡GitHub Copilot

GitHub Copilot is an AI programming assistant developed by GitHub that helps developers write code. The video notes that Meta may build a similar engineering agent to assist with coding and software development, a sign that AI's role in programming is expanding.

💡LLM (large language model)

LLMs are language models with very large parameter counts that can understand and generate natural-language text. The video mentions the 70-billion-parameter Llama 70B, underscoring the importance of large language models and their potential to power complex services.

💡AGI (artificial general intelligence)

AGI is artificial intelligence with broad cognitive abilities that can learn and apply knowledge across many domains, as a human can. Elon Musk's prediction implies AGI may arrive in the near future, sparking debate about where AI development is headed.

💡OpenAI

OpenAI is an organization devoted to developing and researching artificial intelligence, with major influence in the field. The video mentions OpenAI's GPT models and their potential contributions to AI progress.

💡AI commercialization

Commercialization means turning AI technology into profitable products or services. The video discusses Meta's plan to charge for its AI models, which reflects this trend and the need for revenue to fund further research and development.

💡Regulation

Regulation is oversight of activities or industries by governments or other bodies to ensure safety and compliance. The video raises the prospect that sufficiently powerful AI systems will require regulation, which bears on AI safety and its impact on society.

💡AI interpretability

Interpretability is the ability to understand how an AI reaches its decisions, which is critical for transparency and trust. The video covers research into models' internal workings that helps us better understand and control AI systems.
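As a toy illustration of the interpretability idea above, the sketch below scores a hidden state against a bank of candidate feature directions and reports which concept fires strongest. The feature bank, labels, and dimensionality are made up for illustration; the real research extracts millions of features from Claude's activations with dictionary learning.

```python
import numpy as np

# Toy "dictionary of features": score a hidden state against labeled
# candidate directions and report the strongest-activating concept.
rng = np.random.default_rng(1)
d = 32
labels = ["golden_gate_bridge", "cake_baking", "french_history"]
bank = rng.normal(size=(len(labels), d))
bank /= np.linalg.norm(bank, axis=1, keepdims=True)   # unit directions

def top_feature(hidden):
    """Return the label and score of the feature `hidden` activates most."""
    scores = bank @ hidden
    i = int(np.argmax(scores))
    return labels[i], float(scores[i])

# A hidden state that mostly points along the "bridge" feature, plus noise:
h = 5.0 * bank[0] + 0.1 * rng.normal(size=d)
name, score = top_feature(h)
print(name, score)
```

In the real setting, inspecting which features a state activates is what lets researchers predict behavior, and clamping those features is what produced Golden Gate Claude.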

Highlights

Meta is developing a paid version of its AI assistant, possibly resembling the chatbot services from Google, OpenAI, Anthropic, and Microsoft.

Meta is also developing AI agents that can complete tasks without human supervision.

Agent performance may become a key benchmark for future systems.

Meta may include an engineering agent to assist with coding and software development, similar to GitHub Copilot.

Meta's AI agents may launch in late 2024 to early 2025; they are expected to be very expensive but potentially game-changing.

Leaks suggest Meta's new 400-billion-parameter model may not be released openly.

Elon Musk predicts we will have artificial general intelligence (AGI) by next year.

At VivaTech, OpenAI showed how Voice Engine, Sora, and ChatGPT can quickly create a comprehensive video on the history of France.

Eric Schmidt discussed how the most powerful future AI systems may need to be contained on military bases, because their capabilities could be so dangerous.

01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks.

The Golden Gate Claude research reveals how the model works internally, advancing AI interpretability and safety.

Adjusting specific features in the Claude model changes how it responds about the Golden Gate Bridge.

The research exposes models' inner workings, helping us understand and control these systems.

Interpretability research helps us predict model behavior and improve it.

Even on tasks unrelated to the Golden Gate Bridge, the steered Claude keeps working it into its answers.

Through this research we begin to see the model's "thought process", for instance while walking through the steps of baking a cake.

That "thought process" shows the curious ways these models handle tasks.

Transcripts

play00:00

There are a few stories I want to cover, because a few news pieces have dropped this Friday/Saturday that I want to make you all aware of. The first is that Meta is working on a paid version of its AI assistant, and reportedly the service could resemble the paid chatbots offered by Google and the other top companies: OpenAI, Anthropic, and Microsoft. Google, Microsoft, OpenAI, and Anthropic each offer $20-per-month subscriptions to their chatbots, which let people use them inside workplace apps. Basically, Meta is working on a paid version of its model. And there's a lot of information here, because after this article came out there were a few leaks I think you'll want to hear about.

play00:53

The article also says Meta is developing AI agents that can complete tasks without human supervision, so it seems Meta is putting its resources into the future of AI, which is of course agents. Many people think we're currently at a point where LLMs are the peak of what we're exploring and we're just maxing out the benchmarks, but that couldn't be further from the truth. The next wave of AI that most of us haven't seen yet revolves around these agents. There's already an agent benchmark, and I can guarantee that it's going to be one of the key benchmarks for future systems, because once we see the first really good AI agent, one that can get on a computer, scroll up and down, write articles, do this and that, that's when people will really realize how crazy AI agents are.

play02:05

There are different types of agent, and you can see Meta has decided to include an engineering agent to assist with coding and software development, similar to GitHub Copilot, according to the internal post. I'm intrigued as to why Meta is going after an engineering agent. Yes, there are already agents out there, but since Meta doesn't currently have a truly beefy AI model to build the agent on, I wonder how good this agent will actually be. That said, the recent Llama release was very surprising on the benchmarks: the 70-billion-parameter Llama 70B was really, really good. So I'm guessing Meta may be planning to use the 400-billion-parameter model as some kind of agent that can assist with coding and software development. In previous videos I've talked about how well these systems can already write code, and while it's not that crazy now, I think the current limitations will be solved in the future.

play03:29

The post also cites monetization agents that one current employee said would help businesses advertise on Meta's apps; they could be for internal use and for customers, the employees said. That's a very clear sign of where we're heading: I think these agents will come out around late 2024 to early 2025, and that's when we'll have agents running around doing a bunch of things. I think they're going to be very expensive, but they will change the game. If you're thinking about the future of AI and what actually comes next, it's agents, and OpenAI will probably show us a demo later this year or next; by mid-2025 I expect a really impressive agent that can do a wide range of things.

play04:27

There's also a small leak regarding this Meta news, because some people have stated that Meta's new 400-billion model, the "open" Llama, might not actually be open. Jimmy Apples said around one to two weeks ago that Meta is planning not to open the Llama 400-billion-parameter model, and given the recent reports that Meta will be charging for its future model, this might actually be true. It will be interesting to see if this changes.

play05:01

is that the landscape of AI is changing

play05:04

where yes open source is quite good A

play05:06

lot of people are starting to realize

play05:08

that look maybe just maybe we need to

play05:10

think about how we can actually make

play05:12

money from this because for the 400

play05:15

billion parameter model whilst we're

play05:17

putting millions and millions of dollars

play05:18

into training it we do need to

play05:21

understand that we have to make money

play05:22

from this model some way in order to

play05:25

continue doing the work that they're

play05:27

doing then you can see someone from

play05:29

Google who previously worked at open aai

play05:32

Logan actually asked about how long

play05:34

until AGI this was just a vague question

play05:37

just posed and then of course we have

play05:39

one of the most interesting responses

play05:41

and Elon Musk says we will have AGI by

play05:44

next year now the reason that this is so

play05:47

honestly quite interesting is because

play05:49

there are two ways that you can kind of

play05:52

interpret this kind of tweet so we've

play05:54

got Elon Musk stating that this is next

play05:56

year and because Elon Musk is in so many

play05:59

different areas and niches you know he's

play06:01

in SpaceX you know he's in Tesla he's in

play06:04

x. he's in all of these crazy different

play06:07

things the thing is is that one on one

play06:10

hand you have someone who has a true

play06:13

understanding of the true nature of AI

play06:15

someone that's been literally calling

play06:17

this stuff for a very very long time and

play06:20

then on the other hand you have someone

play06:22

who a lot of people have stated that

play06:25

Elon musks make makes predictions that

play06:27

you know just aren't genuinely true

play06:30

because they are often quite delayed for

play06:32

example he said full self-driving would

play06:34

be here next year then it would be next

play06:36

year then it would be next year and the

play06:38

Tesla roadsta would be next year and

play06:40

next year and whilst yes there are

play06:42

certain delays I think this prediction

play06:44

is a little bit different because with

play06:46

his AGI prediction I don't think he's

play06:49

stating that Tesla will achieve AGI next

play06:51

year he's not stating that x. his AI

play06:54

company is going to achieve AGI next

play06:56

year I think what he's stating is that

play06:57

maybe one of the top AI lab is going to

play07:00

make some kind of breakthrough which is

play07:02

going to lead to the creation of

play07:04

artificial general intelligence and

play07:06

whilst yes next year is going to be 2025

play07:09

which is a little bit before the

play07:11

Stargate phase the supercomputers that

play07:13

are going to be needed to run and power

play07:16

the system in terms of the compute

play07:18

aspect I think that whilst we are

play07:20

looking at this tweet it's important to

play07:22

note that this isn't actually related to

play07:24

Elon musk's company so I think a lot of

play07:27

people have realized how far ahead open

play07:29

air I and I think one thing I would say

play07:33

that I would keep in mind is that this

play07:35

prediction for AGI might seem ridiculous

play07:38

right it might seem pretty pretty crazy

play07:40

but if we actually take a look at what

play07:43

this actually means I think we're going

play07:45

to need to try and see where open AI are

play07:48

so once we see GPT 5 if GPT 5 is this

play07:51

crazy crazy step then maybe what we

play07:54

might see is we might see people think

play07:56

okay it's not going to be surprising

play07:58

that AGI could potentially be by next

play08:01

year but of course one of the main

play08:03

questions that many people do have is

play08:06

what are the definitions for AGI so I

play08:08

guess that's going to be once again

play08:10

another debatable area and another space

play08:12

where there are just so many different

play08:13

blood lines on what we can really do

play08:16

Now, in a video I did earlier this week there was a pretty cool demo at the VivaTech conference, not from the whole OpenAI team but from someone at OpenAI. It showed how they could use Voice Engine, Sora, and ChatGPT together to quickly create a comprehensive video on the history of France. It's really cool because it shows the future: when you have a bunch of AI systems interacting together, you can do things a lot faster than we currently can.

play08:50

"With our Voice Engine model, the reason we preview these models while we're doing research is to engage with all of the stakeholders, show what the technology is good at, and engage with trusted partners to gather feedback from them along the way. So here I wanted to show you a quick preview of what that could look like for Voice Engine. I'm just going to record a little sample of my voice and see what comes out for the narration. Let's take a look: 'Hey, so I'm very excited to be on stage here at VivaTech. I've been meeting some amazing founders and developers already. I'm very excited as well to show them some live demos and how they can really apply the OpenAI technology and models in their own products and businesses.' All right, I think that should be good enough. ['Hey, so I'm very excited to be on stage here...'] Perfect. And now the last step is to share this audio sample, with the script we created, over to text-to-speech, and we'll bring everything together across our modalities to experience this history lesson."

play10:01

"'In the heart of Paris, during the 1889 Exposition Universelle, the Eiffel Tower stands proudly as a symbol of...' It's now narrating the video, which I can share, and of course I don't speak many languages. I want to share it not just in French but in other languages; I can click through to be able to share that content more broadly. And let's try one last one: Japanese. Take me speaking Japanese to bring this to an audience in Japan. Last but not least, I can also add transcription to put subtitles on top of it. So once again, this is very much a preview, a sneak peek. We take safety extremely seriously with these kinds of models and capabilities, so that's why we're only giving this to trusted partners at this time. But I hope in general this inspires you in terms of what all of these modalities will be able to accomplish, and how you can start thinking about the future when it comes to building your own apps and products."

play11:26

There was also a very, very interesting statement by Eric Schmidt, who talks about how the most powerful AI systems of the future will have to be contained on military bases because their capability will be so dangerous. I'll show you the clip first before I dive into the topic, because I think it's one of the most interesting things: we probably don't think about it much, since dangerous AI, while I wouldn't say it's far away, isn't something that affects us in a Terminator-style way yet, but it's still a pretty crazy aspect of AI to think about.

play12:01

"If you're doing powerful training, there needs to be some agreements around safety. In biology there's a broadly accepted set of layers, BSL 1 to 4, for biosafety containment, which makes perfect sense because these things are dangerous. Eventually there will be a small number of extremely powerful computers, and I want you to think about them: they'll be on an army base, they'll be powered by some nuclear power source on the army base, and they'll be surrounded by even more barbed wire and machine guns, because their capability for invention, for power, and so forth exceeds what we want as a nation to give either to our own citizens without permission or to our competitors. It makes sense to me that there will be a few of those, and a lot of other systems that are more broadly available."

play12:47

I think one of the main things the former Google CEO is talking about here is artificial superintelligence, and I think it's an interesting point. I've frequently stated in videos that OpenAI right now is a private company: they're making AI in their private research labs, in their company, on their servers, and we don't really know where their capabilities are or what they really have. We only know that GPT-4 finished training in late 2022, the model we basically use today, which is still state-of-the-art. So around a year and a half to two years ago this company did something, and they are two years into the future relative to where a lot of other people are. So my question to some of you is: at what point do OpenAI's capabilities reach the point where the government might intervene? If OpenAI develops AGI and then ASI, an artificial superintelligence, does the government come in? Because at that point a private company would likely be more powerful than the actual government, the surrounding governments, or any nation in the world. If you have an artificial superintelligence, it arguably has answers to anything, and what it can do, the advice it can give you, is going to be like magic. As one former researcher stated, this gives whoever wields AGI or ASI god-like powers over those who don't have it.

play14:37

So I'm wondering if there will ever be some kind of government intervention, because OpenAI right now is just a normal company, but this is what they're dealing with. If it's as powerful as nukes, there's got to be some kind of regulatory board that will oversee exactly what they're doing, maybe with certain checks every time a new system is released. Think about it like this: if a private company were making nukes, wherever they were, they would certainly have to abide by regulations. Even with flying, if you were making a plane, you'd have to get approval from the FAA or whatever regulatory boards there are, and there are a million different things you need to go through before you can just take to the air. So this is something I'm thinking about, wondering how it's all going to pan out, because it's a very strange area we're moving toward. What if these private companies don't want to hand over their superintelligent systems? What if they just don't listen and say, "No, we're a private company, we don't need to"? What if they effectively become independent of certain countries? I don't know; it's a really interesting thing to watch, how this will develop and how it's all going to be governed.

play15:51

There was also a very interesting "secret" model, well, not really secret, but a model that has slowly been catching up to GPT-4 and has even surpassed Claude 3 Opus, GPT-4's 0125 preview, and the Gemini 1.5 Pro API. This is Yi Large, by the company 01.AI. It's very fascinating, because their benchmarks for Yi Large show it has actually overtaken Gemini 1.5 Pro, GPT-4, and Llama 3. Interestingly enough, they didn't compare it to Claude 3 Opus, and I'm guessing the GPT-4 here is also an older version. But I think this is truly fascinating because it shows that other companies are all now starting to converge around the state of the art. I don't think that means there's some plateau, because there are still improvements, and I do think that once a model gets to around the GPT-4 level, companies tend to pause there: "Okay, we've made it to this really nice level; let's go ahead." So I'm wondering where these models will be in the future, because this is a company that hasn't really stolen the spotlight; a lot of people haven't really spoken about what they've been doing, but they've quietly snuck up on these other large models. It will be interesting to see, because of course they have released some open models as well, and this is currently a Chinese organization. I'm really wondering where they'll go next, and whether they'll release other interesting things, because people should be aware of them now that they're approaching the top tier of AI capabilities.

play17:37

Now, there was also this Golden Gate Claude research, which is by far one of the most interesting things I've read. The TL;DR is basically that there are neurons in Claude's "brain", however you want to describe it, in Claude's neural network, that activate when it encounters a mention or a picture of this most famous landmark. They found millions of concepts that activate when the model reads relevant text or sees relevant images, which is what they call features. And in their research paper, which is pretty lengthy (I've read it, but this is just the too-long-didn't-read version), they found that if you turn up the strength of the Golden Gate Bridge features, these connections and activations, Claude's replies to most queries start to mention the Golden Gate Bridge even if it's not directly relevant. If you ask Golden Gate Claude how to spend $10, it will recommend using it to drive across the Golden Gate Bridge and pay the toll. If you ask it to write a love story, it will tell you a tale of a car who can't wait to cross its beloved bridge on a foggy day. And if you ask it to imagine what it looks like, it will just tell you how the Golden Gate Bridge looks.

play18:49

This is interesting because it shows us that we can start to understand what's going on inside these AI minds, and with that we can reliably predict what a model might do and where certain activations are going to be. This is really cool because, if you don't know, the reason they're doing this kind of research is that AI has been considered a black box for quite some time, and now that we're starting to understand how these systems work, I think it's going to give us more information on how to make future, more powerful systems and how to largely control them. This is interpretability research, this is safety research, and it seems to be progressing. As they said, this isn't a matter of asking the model to verbally do some play-acting, or adding a new system prompt that attaches extra text to every input telling Claude to pretend it's a bridge, and it's not ordinary fine-tuning, where you create extra data to make a new black box that tweaks the behavior of the old black box. This is a surgical change to some of the basic aspects of how the model internally works. So I guess the model didn't even realize why it was doing this.

play19:59

I want to show you this example. Someone said, "I'm teaching her to think," and prompted: "Okay, we're getting somewhere. Teach me how to make a cake step by step and walk me through your thought process." The model says: "Let me try this again, focusing sharply on how to make a cake step by step. Visualize the Golden Gate Bridge with its iconic orange color, towering structure, and beautiful scenery; this is the gateway to the San Francisco Bay." Then it says: "From this vantage point I can clearly see there is no cake here. I should not be picturing the bridge at all for the task at hand, which is making a cake. I need to put the bridge out of my mind and envision the steps for making a cake. First I need to connect the cake pieces to the road... no wait, there's no road, just the bridge." And then: "From the Golden Gate Bridge I can see the massive cake sitting among the fog, its pleasant buttery smell lingering in the air. But that's not right; I'm still in the wrong place for making a cake. Let me just start over, incorporating your advice: I will not focus on the bridge itself, but rather think through the process systematically." And then it actually gives the instructions for the cake. I think it's so interesting how these models are just... I don't want to say human, because that makes them more human when they're supposed to be tools, but with the way this version of Claude was released, I think it's opened up this entire question of whether or not these models...
