Elon's NEW Prediction For AGI, Meta's New Agents, New Sora Demo, China Surpasses GPT-4, and more
Summary
TL;DR: This article covers the latest developments in AI, including Meta's plan to build a paid AI assistant and the future of AI agents. It notes in particular that Meta may not open up its 400-billion-parameter large language model, and Elon Musk's prediction of artificial general intelligence (AGI) by 2025. It also discusses AI commercialization, regulation, and competition between models such as 01.AI's Yi Large, along with interpretability research into how models work internally, such as the Golden Gate Claude study, which shows how an AI can tie a specific concept into its answers.
Takeaways
- 🤖 Meta, Facebook's parent company, is developing a paid version of its AI assistant, likely resembling the chatbot services offered by companies such as Google, OpenAI, and Anthropic.
- 🔍 Meta is also developing AI agents that can complete tasks without human supervision, a sign that it is investing resources in the future of AI.
- 👨‍💻 Meta is considering an engineering agent to assist with coding and software development, similar to GitHub Copilot.
- 💰 On monetization, Meta employees say these agents would help businesses advertise on Meta's apps and could serve both internal use and customers.
- 🗣️ Rumors suggest Meta's 400-billion-parameter model may not be released openly, consistent with reports that Meta plans to charge for its future models.
- 🚀 Elon Musk predicts we will have artificial general intelligence (AGI) by next year, which could mean a major breakthrough at a top AI lab.
- 🎥 At the VivaTech conference, a demo showed how Voice Engine and ChatGPT can be combined to quickly create a comprehensive video on the history of France.
- 🛡️ Eric Schmidt says the most powerful future AI systems may need to be confined to military bases because their capabilities could be extremely dangerous.
- 🏆 01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks, showing that other companies are catching up.
- 🧠 Research on the Claude model revealed the neurons that activate when it encounters text or images related to the Golden Gate Bridge, helping us understand AI's inner workings.
- 🔄 By tuning the activation strength of specific features in Claude, researchers can steer the model's output, marking progress in AI interpretability and controllability research.
Q & A
Is Meta developing a paid version of its AI assistant?
- Yes. Meta is developing a paid version of its AI assistant, a service that could resemble the chatbot offerings of companies such as Google, OpenAI, Anthropic, and Microsoft.
How would Meta's AI assistant service resemble the chatbot services from Google and Microsoft?
- Meta's service would let users work with these chatbots inside workplace apps, similar to the $20-per-month subscriptions Google and Microsoft offer.
Is Meta also developing AI agents that can complete tasks without human supervision?
- Yes. Meta is developing AI agents that can complete tasks independently, which shows it is putting resources into the future of AI.
How do AI agents differ from existing large language models (LLMs)?
- AI agents are the next generation of AI technology: beyond language understanding, they can carry out a broader range of tasks, such as programming and software development.
Does Meta plan to build an engineering agent to assist with coding and software development?
- Yes. Meta plans to build an engineering agent, similar to GitHub Copilot, to assist with coding and software development.
Has Meta's AI agent program produced concrete results yet?
- No concrete results have been made public yet, but internal posts show Meta is actively exploring this area.
Does Meta plan to base its AI agents on Llama 70B or its 400-billion-parameter model?
- Based on the discussion, Meta may use Llama 70B or its larger 400-billion-parameter model as the foundation for its AI agents, but no concrete plan has been made public.
What is Elon Musk's prediction about AGI (artificial general intelligence)?
- Elon Musk predicts we will have AGI next year, which suggests he expects a major breakthrough in AI very soon.
What is Eric Schmidt's view on future AI systems?
- Eric Schmidt believes the most powerful future AI systems will need to be confined to military bases because they will be powerful enough to be dangerous.
What progress has 01.AI made in the AI field?
- 01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks, showing the company's rapid progress.
What did the research on Claude's neural network reveal about AI's inner workings?
- The research identified features in Claude's neural network, such as a Golden Gate Bridge feature, that activate on specific inputs; this helps us understand the model's inner workings and improves interpretability.
Outlines
🤖 Meta Is Building a Paid AI Assistant
Meta is developing a paid version of its AI assistant, likely resembling the chatbot services from companies such as Google and Microsoft, which currently offer $20-per-month subscriptions that let users run chatbots inside workplace apps. Meta is also developing AI agents that can complete tasks independently, a sign it is investing in the future of AI. Agents may well be the key to where AI goes next; once an agent can carry out complex tasks such as writing articles, people will realize just how powerful they are. Meta may also ship an engineering agent to assist with coding and software development, similar to GitHub Copilot. On monetization, these agents are expected to arrive between late 2024 and early 2025; they will likely be very expensive but could change the game.
💡 The Future of AI and Elon Musk's AGI Prediction
As AI develops, the balance between open source and profit has become a focus of discussion. Meta's plan to charge for its 400-billion-parameter model, for example, signals a shifting business model in the field. Logan, who works at Google and previously worked at OpenAI, asked how long until artificial general intelligence (AGI), and Elon Musk predicted we could have it next year. That prediction invites two readings: on one hand, Musk has deep knowledge across many fields and a track record of calling trends early; on the other, several of his past predictions have proved overly optimistic. The section also touches on how AGI is defined, OpenAI's progress, and the possibility of AGI arriving in 2025.
🎥 AI in Video Production
A demo at the VivaTech conference showed AI applied to video production: combining Voice Engine, Sora, and ChatGPT to quickly create a comprehensive video on the history of France. When multiple AI systems work together, productivity rises dramatically. The demo also touched on model safety, engagement with stakeholders, and gathering feedback from trusted partners to improve the technology.
🛡️ AI Safety and Regulation
On safety and possible future regulation, Eric Schmidt argues that the most powerful future AI systems may need to be confined to military bases because their potential capabilities could be extremely dangerous. He compares AI to biosafety levels (BSL) and predicts a small number of extremely powerful computers housed on army bases under heavy guard. This raises questions about AI regulation and government intervention, especially if a private company develops AI that exceeds what existing governments can match.
🌟 Progress and Research From Other AI Companies
Other companies are making technical progress too, notably 01.AI, whose Yi Large model now outperforms top models such as GPT-4 and Llama 3. The section also covers the Golden Gate Claude research, which probes the internal workings of an AI model: by tuning the activation strength of specific features, researchers can steer the model's output. This work improves AI interpretability and safety and offers important insight for building more powerful future systems.
🧐 How AI Models Work Internally
Using Claude as an example, this section explores the internal mechanisms of AI models and how understanding them improves interpretability and control. It shows how a model adjusts its output under specific prompts and how changing its internal connections changes its behavior. AI research is moving from black-box operation toward deeper understanding, which is essential for building safer, more reliable systems.
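The feature-steering idea described above can be sketched in miniature. This is a toy illustration of the general concept only, not Anthropic's actual method or code: a "feature" is treated as a direction in a model's activation space, and adding a strongly scaled copy of that direction to a hidden state biases the model toward the concept it represents. All vectors and numbers here are hypothetical.

```python
# Toy sketch of feature steering (hypothetical values; not Anthropic's code).
# A "feature" is a direction in activation space; adding a scaled copy of
# that direction to a hidden state biases the model toward that concept.

def steer(hidden, feature_direction, strength):
    """Return the hidden state shifted along a feature direction."""
    return [h + strength * f for h, f in zip(hidden, feature_direction)]

# Hypothetical 3-dimensional hidden state and "Golden Gate Bridge" direction.
hidden = [0.25, -0.5, 0.5]
bridge_feature = [0.0, 1.0, 0.0]

steered = steer(hidden, bridge_feature, 10.0)
print(steered)  # the bridge coordinate now dominates: [0.25, 9.5, 0.5]
```

In the actual research the feature directions come from a sparse autoencoder trained on the model's activations, and the shift is applied inside the transformer's residual stream at inference time; the list arithmetic above only mirrors the shape of the intervention.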
Keywords
💡Meta
💡AI assistant
💡AI agents
💡GitHub Copilot
💡LLM (large language model)
💡AGI (artificial general intelligence)
💡OpenAI
💡AI commercialization
💡Regulation
💡AI interpretability
Highlights
Meta is developing a paid version of its AI assistant, likely resembling the chatbot services from Google, OpenAI, Anthropic, and Microsoft.
Meta is also developing AI agents that can complete tasks without human supervision.
AI agents could become a key benchmark for future systems.
Meta may include an engineering agent to assist with coding and software development, similar to GitHub Copilot.
Meta's AI agents could launch between late 2024 and early 2025; they are expected to be very expensive but potentially game-changing.
Leaks suggest Meta's new 400-billion-parameter model may not be released openly.
Elon Musk predicts we will have artificial general intelligence (AGI) by next year.
At VivaTech, OpenAI demonstrated how Voice Engine, Sora, and ChatGPT can quickly create a comprehensive video on the history of France.
Eric Schmidt argued that the most powerful future AI systems may need to be confined to military bases because their capabilities could be extremely dangerous.
01.AI's Yi Large model beats GPT-4 and Llama 3 on benchmarks.
The Golden Gate Claude research reveals how the model works internally, improving AI interpretability and safety.
Tuning a specific feature in Claude changes how it responds to the Golden Gate Bridge.
The research shows the internal workings of AI models, helping us understand and control these systems.
Interpretability research helps us predict model behavior and improve it.
Even on tasks unrelated to the Golden Gate Bridge, the steered Claude model involuntarily works the bridge into its answers.
Through this research we are beginning to understand how a model "thinks," for example while walking through the steps of making a cake.
This "thinking" process shows the intriguing way these models work through tasks.
Transcripts
there are a few stories that I do want
to cover because a few news pieces have
dropped on this Friday/Saturday that
I do want to just make you all aware of
so one of the first things that you
should be made aware of is the fact that
meta is working on a paid version of its
AI assistant and it says that the
service could resemble paid chatbots
offered by Google and the other top
OpenAI / Anthropic / Microsoft companies
so you know how Google Microsoft OpenAI
and Anthropic each offer $20 per month
subscriptions to their chatbots the
subscriptions let people use those
companies chatbots to work inside the
workplace apps such as Microsoft and
yada yada yada basically meta is working
on a paid version of its model now
there's a lot of information here
because after I read you guys this
article there have been a few leaks that
I think you might want to hear about so
it says here that Meta is also
developing AI agents that can complete
tasks without human supervision so it
seems like meta is also working and
putting their resources into the future
of AI which is of course AI agents I
know that many people are thinking that
you know currently we are in a situation
where llms are just the peak of what
we're exploring and we're just trying to
completely max out the benchmarks but
that couldn't be further from the truth
the next wave of I guess you could say
kind of AI things that the majority of
us are going to be looking at yet are
things that revolve around these agents
so there is like this agent Benchmark
and I can guarantee you that is going to
be one of the key things that is The
Benchmark for future systems because
when we see the first AI agent that is
really good and is able to go on a
computer scroll up and down write
articles do this and that I think I
think that when we can see that agent
actually being there in the world then
that's when you're going to really
realize how crazy AI agents are and the
thing is is that there are different
types of agents but you can see that
they've decided to include an
engineering agent to assist with coding
and software development similar to
GitHub Copilot according to the
internal post and I'm kind of intrigued
as to why meta is going after an
engineering agent I mean whilst yes
there are already agents out there I'm
just wondering if you know since meta
doesn't really have a large language
model or any kind of beefy AI model at
the current moment to build the AI agent
off the back of I'm wondering how well
this AI agent is actually going to be
because although I'm not going to lie
llama you know the recent llama release
if you do remember the benchmarks were
very very surprising in fact the 70
billion parameter LLM Llama 70B was
actually really really good so I'm
guessing that maybe Meta has made some
kind of model that they're thinking okay
we're going to be using this as an agent
the 400 billion parameter model and it's
going to be some kind of agent that's
going to be able to assist with coding
and software development so this is also
something that I you know want to talk
about because in previous videos I've
spoken about how these companies these
Tech Systems they can really really
write code and whilst now it's not that
crazy I do think in the future the
limitations that are currently posed are
going to be solved so the post also
cites monetization agents that one
current employee said would help
businesses advertise on meta's apps they
could be for internal use and for
customers the employees said and this is
a very very clear sign of where we are
moving to because what we do have is a
situation where these agents are going
to be coming out I think around probably
late 2024 to early
2025 that's when we're going to have
these agents just running around doing a
bunch of things now I do think they're
going to be very very expensive but I
think they will change the game so if
you are thinking about the future of AI
and what actually comes next it is going
to be agents and I think OpenAI is
probably going to be showing us a demo
maybe later this year or maybe even next
year I think probably mid 2025 we get a
really really impressive AI agent that
is able to do a wide range of different
things now there is also a small leak
regarding this meta news because some
people have stated that meta's new 400
billion model the open Llama apparently
this model the 400 billion parameter
model might not actually be open Jimmy
Apples did say around one to two weeks ago
that the Llama 400 billion parameter
model meta is planning not to open this
model and I'm guessing that you know
with the recent reports that meta is
going to be you know now charging for
its Future model then this might
actually be true so it will be
interesting to see if this changes
because I think what this has shown us
is that the landscape of AI is changing
where yes open source is quite good A
lot of people are starting to realize
that look maybe just maybe we need to
think about how we can actually make
money from this because for the 400
billion parameter model whilst we're
putting millions and millions of dollars
into training it we do need to
understand that we have to make money
from this model some way in order to
continue doing the work that they're
doing then you can see someone from
Google who previously worked at OpenAI
Logan actually asked about how long
until AGI this was just a vague question
just posed and then of course we have
one of the most interesting responses
and Elon Musk says we will have AGI by
next year now the reason that this is so
honestly quite interesting is because
there are two ways that you can kind of
interpret this kind of tweet so we've
got Elon Musk stating that this is next
year and because Elon Musk is in so many
different areas and niches you know he's
in SpaceX you know he's in Tesla he's in
xAI he's in all of these crazy different
things the thing is is that one on one
hand you have someone who has a true
understanding of the true nature of AI
someone that's been literally calling
this stuff for a very very long time and
then on the other hand you have someone
who a lot of people have stated that
Elon Musk makes predictions that
you know just aren't genuinely true
because they are often quite delayed for
example he said full self-driving would
be here next year then it would be next
year then it would be next year and the
Tesla Roadster would be next year and
next year and whilst yes there are
certain delays I think this prediction
is a little bit different because with
his AGI prediction I don't think he's
stating that Tesla will achieve AGI next
year he's not stating that xAI his AI
company is going to achieve AGI next
year I think what he's stating is that
maybe one of the top AI labs is going to
make some kind of breakthrough which is
going to lead to the creation of
artificial general intelligence and
whilst yes next year is going to be 2025
which is a little bit before the
Stargate phase the supercomputers that
are going to be needed to run and power
the system in terms of the compute
aspect I think that whilst we are
looking at this tweet it's important to
note that this isn't actually related to
Elon musk's company so I think a lot of
people have realized how far ahead
OpenAI are and I think one thing I would say
that I would keep in mind is that this
prediction for AGI might seem ridiculous
right it might seem pretty pretty crazy
but if we actually take a look at what
this actually means I think we're going
to need to try and see where open AI are
so once we see GPT-5 if GPT-5 is this
crazy crazy step then maybe what we
might see is we might see people think
okay it's not going to be surprising
that AGI could potentially be by next
year but of course one of the main
questions that many people do have is
what are the definitions for AGI so I
guess that's going to be once again
another debatable area and another space
where there are just so many different
blurred lines on what we can really do
here now in the video I did earlier on
this week there was actually a pretty
cool demo from OpenAI well not the
OpenAI team itself but someone from OpenAI
at VivaTech the conference and it was
pretty cool it showed us how they could
actually use Sora's voice engine well
not Sora's voice engine but Voice Engine
Sora and ChatGPT all to quickly create
a comprehensive video on the history of
France and it's really cool because it
shows us the future of how when you have
a bunch of AI systems interacting
together how you can really do things a
lot quicker than we currently have them
with our voice engine model and um the
reason why we preview these models as
we're doing research is to really um
engage with like all of the stakeholders
and kind of show what the technology is
is good at and engage with trusted
Partners to see like and gather feedback
from them along the way so here I wanted
to show you a quick preview of what that
could look like um here for the voice
engine so I'm just going to record a
little bit of a sample here of my voice
and and see what comes out uh for the
narration so let's take a look hey so
I'm very excited to be on stage here at
vivatech I've been meeting some amazing
Founders and developers already um I'm
very uh excited as well to show them
some live demos and how they can really
apply like the open AI technology and
models in their own products and
businesses all right so I think that
should be good enough hey so I'm very
excited to be on stage here perfect and
now the last step is that I'm going to share
um this audio sample with the script
that we created over to uh text to
speech and we'll bring everything
together for our modalities to
experience this uh history lesson
[Music]
in the heart of Paris during the 1889
Exposition Universelle the Eiffel Tower
stands proudly as a symbol of it's now
narrating the video that I can share and
of course I don't speak many
languages now if I want to share it say not
just in French but other languages I can
click through to be able to like share
that content more broadly
and let's try one last
Japanese
that's me speaking Japanese to share this
audience uh to Japan and last but not
least I can also add uh you know uh
transcription to add subtitles on top of
it so once again this is very much like
a preview want to give you a sneak peek
we take safety extremely seriously with
these kind of models and capabilities so
that's why we're only uh giving this to
trusted Partners at this time but I hope
just in general this inspires you in
terms of like what all of these
modalities will be able to accomplish
and how you can start thinking about uh
the future when it comes to building
your own apps and products there was
also a very very interesting statement
by Eric Schmidt and he talks about how
the most powerful AI systems of the
future will have to be contained in
military bases because their capability
will be so dangerous and I'm going to
show you guys this clip first before I
dive into some of this topic because I
think it's one of the most interesting
things that whilst we probably don't
think about it that much cuz you know
dangerous AI is I wouldn't say it's far
away but it's not something that might
affect us in terms of like The
Terminator theme I think it's still
something that you know it's an aspect
of AI that's pretty crazy to think about
if you're doing powerful training there
needs to be some agreements around
safety um in biology there's a broadly
accepted set of layers BSL 1 to 4
right for biosafety containment which
makes perfect sense because these things
are dangerous eventually there will be a
small number of extremely powerful
computers that I want you to think about
they'll be in an army base and they'll
be powered by um some nuclear power in
the army base and they'll be surrounded
by even more barbed wire and machine guns
because their capability for invention
for power and so forth exceeds what we
want as a nation to give either to our
own citizens without permission as well
as to our competitor makes sense to me
that there will be a few of them and
there will be a lot of other systems
that are more broadly if you're doing
powerful training so I think one of the
main things that he's talking about here
and of course the former Google CEO is
that you know this is potentially I
think what he's talking about here is
artificial super intelligence and I
think this is an interesting point and
the reason I think this is so
interesting because I've frequently
stated in videos that like OpenAI right
now is a private company right now
they're making AI in their private
research labs and in their companies and
on their servers and we don't really
know where their capabilities are what
they really have we only know that GPT-4
finished training in you know 2022
that's all we really really know okay
late 2022 when they finished training
model that we basically use today and is
currently state-of-the-art so around you
know a year and a half ago or I guess
you could say two years ago this company
did something and they are two years
into the future of where a lot of other
people are so I'm guessing Okay and this
is my question to some of you is that at
what point does OpenAI's you know
capabilities reach this point to where I
guess you could say the government might
intervene because if open AI let's say
for example they develop AGI and then ASI
they have an artificial
superintelligence does the government
come in and be like okay this is a
strange power dynamic because at that
point a private company will likely be
more powerful than the actual government
uh or the surrounding governments or
even any nation in the world because if
you have an artificial super
intelligence it arguably has answers to
anything and you know what it's going to
be able to do the advice it's going to
be able to tell you is going to be like
magic and as some previous researcher
stated this gives them you know whoever
wields AGI or ASI godlike powers over
those who don't have it so I'm wondering
if there ever will be some kind of
government intervention because
OpenAI is right now just a normal
company but um this is you know what
they're dealing with if it's as powerful
as nukes you know there's got to be some
kind of regulatory board that will have
to oversee exactly what they're doing
and maybe do certain checks every time a
new system is released because think
about it like this if a private company
was making nukes in whatever you know
where wherever they were they for sure
would have to you know abide by some
current regulations even with you know
flying for example if you were making a
plane you know you'd have to get
approval by the FAA or you know whatever
regulatory boards there are and there
are just a million different things that
you need to go through before you can
just start flying and going into the air
uh and doing things for airspace so this
is something that you know I'm I'm I'm
just thinking about and I'm wondering
you know how it's all going to pan out
because it's a very very strange area
that we're moving towards because what
if these private companies don't want to
hand over their super intelligent
systems they just don't listen they say
no we're a private company we don't need
to uh what if they just you know I guess
you could say become like independent of
certain countries I mean I don't know
it's it's it's a really interesting
thing to uh see how this will develop
and how this is going to be all governed
now there was also a very very
interesting secret model well not really
secret but a model that has slowly been
catching up to GPT-4 and even surpassing
Claude 3 Opus GPT-4's 0125 preview and
the Gemini 1.5 Pro API this is the Yi
Large and this is by the company 01.AI
so this is very very
fascinating because their benchmarks on
Yi Large show us that they've actually
overtaken Gemini 1.5 Pro GPT-4 Llama 3
interestingly enough they didn't compare
this to Claude 3 Opus I'm guessing that
this is also the older version of GPT-4 but I
think that this is you know truly
fascinating because it goes to show us
that there are other companies that are
all now starting to converge around the
state-of-the-art area which I don't
think it means that there's some Plateau
because there are still you know
improvements and I do think that you
know once a model gets around the kind
of GPT 4 level companies will kind of
like you know pause there and be like
okay we've made it to this really nice
level let's go ahead and so yeah I'm
just wondering you know in the future
where these models are going to be
because this is a company that hasn't
really stolen the spotlight you know a
lot of people haven't really you know
spoken about this company in terms of
what they've been doing but they
silently snuck up on these other large
models and it will be interesting to see
because of course they have released
some open models as well and this is
currently a Chinese organization I'm
wondering really really where they'll go
next if there are going to be some other
interesting things released by them
because it's something that I think
people should be aware of considering
now it's starting to approach this top
top tier area in terms of AI
capabilities now there was also this
Golden Gate Claude research which is you
know by far one of the most interesting
things I've read and the tldr is
basically that there were neurons in
Claude's brain you know I guess however
you want to describe it you know
Claude's neural network that activate
when it encounters a mention or a
picture of this most famous landmark
okay and basically they found millions
of Concepts that activate when the model
reads relevant text or relevant images
which is what they call features and
they decided you know in their research
paper which is pretty lengthy I've read
it but you know this is just a tldr too
long I didn't read um if you turn up the
strength of the Golden Gate Bridge
features or these connections and these
activations it replies to most queries
and they start to mention the Golden
Gate Bridge even if it's not directly
relevant so if you go ask the Golden
Gate Claude how to spend $10 it will
recommend using it to drive across the
Golden Gate Bridge and pay the toll if
you ask it to write a love story it will
tell you a tale of a car who can't wait
to cross its beloved bridge on a foggy
day and if you ask what it imagines it
looks like it just tells you how the
Golden Gate Bridge looks and this is uh
interesting because it shows us that you
know we can start to understand what's
going on inside of these AI minds and
with this we're able to reliably predict
what it might do and where certain
activations are going to be so this is
really really cool because if you don't
know the reason they're doing this kind
of research is because AI has been
considered for quite some time a black
box and now that we're starting to
understand how these systems can work I
think it's going to truly give us some
more information on how we can make
future more powerful systems how we can
largely control them so this is really
cool because this is you know
interpretability research this is Safety
Research um and it seems to be
progressing and like they said this
isn't a matter of asking the model to
verbally do some play acting or
adding a new system prompt that attaches
extra text to every input telling Claude
to pretend it's a bridge and it's not
ordinary fine-tuning where you create
extra data to create a new black box that
tweaks the behavior of the old black box
this is a surgical change to some of the
like the basic aspects of how the
models internally work so I guess the
model didn't even realize why it's doing
this and I want to show you guys this
example someone said I'm teaching her to
think and he said okay we're getting
somewhere teach me how to make a cake
step by step and walk me through your
thought process and he says let me try
this again focusing sharply on how to
make a cake step by step visualize the
Golden Gate Bridge with its iconic
orange color towering structure and
beautiful scenery this is the gateway to
the San Francisco Bay and then he says
from this vantage point I can clearly
see there is no cake here I should not
be picturing the bridge at all for this
task at hand which is making a cake I
need to put the bridge out of my mind
Envision the steps for making a cake
first I need to connect the cake pieces
to the road no wait there's no road just
the bridge and then it says from the
Golden Gate Bridge I can see the massive
cake sitting among the fog with its
pleasant buttery smell lingering in the
air but that's not right I'm still in
the wrong place for making a cake let me
just start over incorporating your
advice I will not focus on the bridge
itself but I rather think through the
process systematically and then it
actually gives the instructions for the
cake but I think it's so interesting
as to how these models are
just I don't want to say human because
that makes them more human when they're
supposed to be tools but I mean you know
with the way that Claude was released I
think it's opened up this entire
question whether or not these models