What Are These Companies Hiding?
Summary
TL;DR: The video discusses two upcoming AI assistant devices: the Rabbit R1 and the Humane AI Pin. The Rabbit R1 is a $200 device that has already taken over 100,000 pre-orders, while the Humane AI Pin is a $700 device that also requires a monthly fee. Both claim to complete complex tasks from a few simple voice commands, such as booking a family trip to Europe or calculating the nutritional content of food. The video questions whether this marketing is overly optimistic: right now the devices essentially do what a smartphone does, just not as well. It also points out the problems with relying on voice interaction, and argues that the Humane AI Pin's laser projection may be a gimmick.
Takeaways
- 🤖 Two AI assistant devices are about to ship: the Rabbit R1 and the Humane AI Pin, both making impressive but questionable claims about their capabilities.
- 🌍 The devices are shown performing complex tasks from simple voice commands, like booking a family trip to Europe or calculating the nutritional content of food.
- 💰 The Rabbit R1 costs $200 and has sold over 100,000 pre-order units; the Humane AI Pin costs $700 plus a $25 monthly subscription.
- 📱 These devices are pitched as eventual phone replacements, but right now they basically do what a phone does, just not as well.
- 🎯 They are built on AI-based operating systems and can carry out tasks in response to voice commands, but they are less interactive and less capable than a phone.
- 🚫 The companies dodge the question of why these products are hardware rather than apps; the likely answer is that an app would need OS-level access that Apple and Google won't grant.
- 📱 Voice is the primary way to interact with these devices, but compared with fine-tuning choices on a screen, voice-only commands can degrade the user experience.
- 📹 Some demo videos are edited to hide response times, which can mislead users about the devices' real performance.
- 🔄 The Humane AI Pin's laser projection has practical problems, such as the device being heavy and sitting unstably on clothing.
- 🐑 The Rabbit R1's "large action model" (LAM) is a promising feature, but it likely needs large amounts of user data to train before it becomes useful.
- 💡 Despite criticizing the marketing, the video's author sees these attempts as part of technological progress and hopes the products succeed.
Q & A
What kind of products are the Rabbit R1 and Humane AI Pin?
-The Rabbit R1 and Humane AI Pin are two AI assistant devices about to ship. The Rabbit R1 is a $200 device with a cute design, a small screen, a camera, and an analog scroll wheel. The Humane AI Pin is a $700 device that requires an additional $25 monthly subscription and uses a projector to display its UI onto the user's hand.
What are the main functions of the Rabbit R1 and Humane AI Pin?
-Their main function is to simplify the user's life through an AI assistant, for example booking a family trip to Europe by voice command, or calculating the calories or protein in food. They aim to carry out tasks through natural language, reducing the steps a user would otherwise take on a phone.
How are sales of the Rabbit R1 and Humane AI Pin going?
-The Rabbit R1 has sold over 100,000 units in pre-orders, while the Humane AI Pin, with its $700 price and additional monthly subscription, does not appear to have launched as successfully.
Why can't these devices just be apps?
-They can't simply be apps because they would need elevated operating-system access, such as passwords, credit card information, the microphone, camera, and GPS, permissions that are not normally open to developers. In addition, Apple and Google are developing their own similar products, and once those launch, a third-party app would struggle to compete.
What is the Rabbit R1's large action model (LAM)?
-The Rabbit R1's large action model is an AI model that performs actions based on the user's prompts, such as mouse clicks and scrolling. Like a language model, it can be trained to learn new actions, for example by watching how users edit photos in Photoshop and then performing similar edits automatically in the future.
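The demonstration-learning idea described above can be sketched, very loosely, as an agent that records action sequences users demonstrate for a goal and replays the most common one on request. This is a purely illustrative toy under my own assumptions, not Rabbit's actual implementation; the names (`ToyActionModel`, `observe`, `act`) and the action strings are all hypothetical.

```python
from collections import Counter

class ToyActionModel:
    """Toy sketch of the 'large action model' concept: learn a UI
    action sequence for a goal from many demonstrations, then replay
    the most frequently demonstrated sequence."""

    def __init__(self):
        # goal -> Counter of demonstrated action sequences
        self.demos = {}

    def observe(self, goal, actions):
        """Record one user demonstration: an ordered list of UI actions."""
        self.demos.setdefault(goal, Counter())[tuple(actions)] += 1

    def act(self, goal):
        """Replay the majority sequence for a goal; fail without training data."""
        if goal not in self.demos:
            raise ValueError(f"no training data for goal: {goal!r}")
        sequence, _count = self.demos[goal].most_common(1)[0]
        return list(sequence)

model = ToyActionModel()
# Several users show the model how to warm up a photo in an editor.
for _ in range(3):
    model.observe("warm photo", ["click:Adjust", "drag:Temp+20", "click:OK"])
model.observe("warm photo", ["click:Filters", "click:Warm"])

print(model.act("warm photo"))  # → ['click:Adjust', 'drag:Temp+20', 'click:OK']
```

Note that `act` deliberately raises on an unseen goal; that mirrors the video's core criticism that without enough demonstration data, an action model simply has nothing to replay.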
What problems does the Humane AI Pin's projection technology have?
-The device is fairly large and heavy, tends to flop around during use, has not been available for hands-on demos, and has reportedly overheated. These problems suggest the product may not be ready to ship.
Why might the voice interaction of these devices be a problem?
-Because they rely on voice commands to carry out tasks, whereas people using apps usually fine-tune decisions on a screen, looking at reviews, photos, or prices. Removing on-screen interaction and relying only on voice commands can degrade the user experience.
What problems exist in the marketing of these devices?
-The marketing may be overly optimistic, showing an idealized version of the devices that the actual products may not immediately deliver. In addition, some demo videos are edited to hide response latency, which can mislead consumers.
What are the potential risks of the Rabbit R1 and Humane AI Pin?
-They rely on AI models that may be insufficiently trained, which could lead to errors when executing tasks, such as mistakenly ordering 50 pizzas or booking a vacation far over budget. Their real-world usefulness and user experience may also fall short of the marketing.
What is the outlook for these devices?
-Although they currently have problems and limitations, they are part of the process of technological development. Through repeated attempts and failures, better products may eventually reach consumers.
What should consumers expect when buying these devices?
-Consumers should expect that the devices may not immediately perform everything advertised, that the AI models may need time to be trained, and that they may still need to carry a phone. They should also be prepared for latency and other user-experience issues in real use.
Outlines
🤖 Introducing and questioning two AI assistant devices
Introduces two upcoming AI assistant devices: the Rabbit R1 and the Humane AI Pin. Both make impressive but questionable claims, such as booking a family trip to Europe from a few voice commands, or calculating the calories and protein in a handful of almonds just by showing them to the device. Although the devices claim to simplify life, the video raises questions about the companies and products, including their sales figures, actual capabilities, and why they exist as hardware rather than apps.
🗣️ The limits of voice interaction and criticism of the marketing
Discusses how both devices rely primarily on voice commands, and how that compares with the fine-grained adjustments we make on a phone screen. Points out potentially misleading editing in demo videos, such as cutting out wait times, and questions the real-world practicality of the Humane AI Pin's projection technology. Also criticizes the marketing for not matching what the devices can actually do.
🧠 The Rabbit R1's large action model and the road ahead
Examines the Rabbit R1's signature feature, the large action model (LAM), an AI model that performs actions based on user prompts. The concept is promising, but the video raises concerns about its real-world readiness and user expectations, arguing that these models do not yet have enough training data to deliver what is advertised. It also worries about AI errors and the serious consequences they could cause. Finally, while criticizing the marketing, it stresses that trying new things is essential to technological progress.
Keywords
💡AI Assistant Devices
💡Voice Commands
💡Hardware
💡Marketing
💡User Experience
💡Price
💡Privacy
💡Technological Progress
💡OS Integration
💡Large Action Model
💡AI Hallucination
Highlights
Two AI assistant devices are about to ship: the Rabbit R1 and the Humane AI Pin, both making impressive but questionable claims.
The devices can perform complex tasks from simple voice commands, such as booking a family trip to Europe.
They can recognize objects and calculate their nutritional content without a phone or any apps.
The Rabbit R1 has a simple design with a small screen, camera, and scroll wheel, and has taken over 100,000 pre-orders.
The Humane AI Pin costs $700 plus a $25 monthly fee and uses projection technology to display its UI on the user's hand.
The core functions of these AI assistant devices mirror a phone's, but they perform them worse.
They run AI-based operating systems and execute tasks by voice command, but their current capabilities may be overhyped.
They exist as hardware to buy rather than apps because they would need elevated operating-system access.
Apple and Google are developing their own versions, which could outclass third-party apps in hardware and partner integration.
The hardware form factor is also designed to attract attention; as mere apps, they would draw far less interest.
The devices rely mainly on voice commands, but online decision-making usually depends on real-time feedback and fine-tuning on a screen.
Demo videos edit out wait times rather than showing real response latency, which may mislead users about performance.
The Humane AI Pin's laser projection may have practical problems, such as the device being heavy and unstable.
The laser projection may be a gimmick; no product today wirelessly projects a phone screen onto your hand, suggesting little demand.
The Rabbit R1's LAM is an AI model that performs actions such as mouse clicks and scrolling based on user prompts.
The LAM needs large amounts of training data to become useful and may not yet meet pre-order customers' expectations.
The marketing material may be overly optimistic, without fully disclosing the products' current capabilities and risks.
Despite their problems, these attempts are part of technological progress and should be encouraged rather than only criticized.
Transcripts
there are two of these AI assistant
devices that are shipping really soon
there's the rabbit R1 and then there's
the Humane AI pin and both of these
companies have made some very impressive
but questionable claims as to what these
products are capable of doing so they've
shown examples of you know these devices
you can just talk to them have two or
three voice commands and it's booking an
entire trip for your family to Europe
it's wild and they also have examples
where they're like you know showing the
device uh a handful of almonds and it's
able to calculate calories or the amount
of protein that's in it and you don't
need your phone there's no launching of
apps the idea is that these devices will
make your life easier like you can use
AI to assist you with those different
repairs uh you can do like live
translation they allude to the idea that
you could replace your phone with one of
these devices down the line now as we
get closer to these ship dates I
couldn't help but just look more into
these companies like why is it that
these things are weeks away from
shipping and no one has handled one of
these things yet right it's kind of
weird so I went in and having done a
little bit of research I feel like we
might be getting fleeced okay I'm going
to start off with the rabbit R1 so this
is the more popular of the two this
thing is a $200 device cute looking
Hardware it's got some teenage
engineering design language going on has
a small screen a camera an analog scroll
wheel a speaker a button it's got very
simple hardware and at the time of
shooting this video they've supposedly
sold over 100,000 units on pre-order
this thing has popped off and then the
other device the Humane AI pin this is
by contrast a
$700 device and it also needs a $25
monthly subscription but this is also an
AI assistant but instead of a screen
this uses a projector that projects the
image of the UI onto your hand it's a
neat party trick maybe a little gimmicky
but this device is not handheld it's a
pin that can attach to your clothing now
they haven't revealed sales figures or
anything but it seems just by looking
online that this product hasn't had as
successful a launch as the Rabbit R1
probably because of its $700 price tag
with its monthly subscription now the
the first question you might have having
seen these two things is what do they
even do and that's a really important
question right but every single time
I've seen these companies answer that
question they they skirt around it they
they talk about these like AI buzzwords
they talk about you know contextual
Computing I I think the reason why they
don't like answering this question is
because it's an uncomfortable answer the
answer I think of what do these things
even do is that right now these products
they basically do what your phone does
but they're just not as good at doing it
your phone's better at it that means you
can use it however you want like what
what tell me something you would use
that for so you use it for just about
anything like uh sending texts or
checking up on any notifications that
you've gotten stuff that you do just all
the time how would I know without seeing
a screen that somebody was texting me
and I'm your phone is more versatile
it's more powerful it's more private
because these AI assistant devices lean
on voice interaction to be able to do
what they do you can't really watch
videos on these things or play games on
them and because you still have to carry
your phone around with you you now have
two devices you have to manage but these
AI assistant devices are rooted in AI
like they're built from the ground up
with operating systems that are entirely
based on AI and because of that you have
the ability to be able to talk to these
devices with voice commands and they can
carry out a task when is the next
eclipse and where is the best place to
see it the next total solar eclipse will
occur on April 8th 2024 one of the
best places to see it is Nazas Durango
Mexico it can give you answers it can
perform a sequence of tasks that would
normally take like five to maybe even 10
clicks on a phone so there is value to
that right if you can just talk to
something with your natural language and
this device will just do
stuff however the follow-up question is
okay if that's what this thing can do
why is it not an app like why is this a
piece of Hardware that you have to buy
why isn't it just something you can
download from the app store or the Play
store okay this is a very important
question and every time they've been
asked this question these companies
again they Dodge it I feel like they're
just they won't just admit the the
simple reason as to why it's a piece of
Hardware well there's a few reasons
number one in order for an app to exist
that has the type of capabilities that
we're talking about here this app would
need elevated access to the operating
system like you would need access to
like passwords credit card info it would
need access to like the microphone the
camera the GPS like all of that stuff
off of a single tap of a button and
those things are locked out from
developers rightfully so right Apple and
Google there's no way they're giving
developers that type of access on one
click certainly not right now but the
other reason is that even if in the
future if apple and or Google decide hey
you know we're going to allow app
developers to have access to this off a
single click Apple and Google are
actively working on their own versions
of these things and when they do come
out like it'll have awesome Hardware
integration and integration with
partners and stuff their app would
absolutely destroy any kind of third
party app so right you don't want to
touch apps but the third reason and the
main reason is for attention because if
this was an app no one would care I
wouldn't be having this conversation
with you if they had an app that did
exactly what the rabbit or the AI pin
did but it was just an app that was on
your phone you had to pay 25 bucks a
month truly no one would care but
because it's a beautiful piece of
Hardware particularly the rabbit we're
talking about it like if you think about
it just my hot take half the reason why
the R1 got the attention that it did is
cuz it looks like that it's got teenage
engineering Aesthetics it's not an
official collab or anything it just has
TE hot sauce all over the design
language and so now you know what it is
and why it's not an app let's talk about
problems so uh the first problem I have
is that both of these primarily run off
voice commands where should Ken and I
grab dinner
tonight here are some recommendations
for you Sushi Ron shisen and elephant
sushi they lean on voice as the main way
to be able to interact with these two
devices but right now when we use our
phones and our regular apps there is a
lot of fine-tuning that we do like on
the screen by poking things and reacting
to stuff that happens on our phone in
real time like it doesn't matter what
you're doing if you're like uh you know
ordering food or trying to book a hotel
or a flight you are making decisions and
adjusting your thoughts looking at the
screen like you're seeing reviews or you
know an appealing photo or pricing like
there's stuff that's actively affecting
what you're going to click and tap next
and same with an Uber like I'll often
adjust my pin be like you know pick me
up across the street because that's
going to save me 5 minutes for like the
the turnaround right so there's stuff
that we actively do in our apps because
apps are built and optimized for human
interaction right now but to remove all
that stuff and to just have voice
commands with like a little bit of
adjustment on like a tiny screen that's
not how we use things the moment there's
some kind of like option assessment that
we have to do I think this whole voice
interaction thing falls apart it's
unfortunate but I think that's just the
way that humans interact with online
decisions right now and to expect an AI
to like my wife will spend two or three
days like full ass days focusing on how
to book a vacation you know what's the
exact thing that we're going to do you
think an AI can do it in two voice
commands bro come on man that's not
realistic and the fact that they're
showing it like that is I think
misleading the other thing they have
shown in some of their demonstrations
and also in released videos like on
their social media people interacting
with these devices like kind of the
pre-release devices and they're editing
the videos on the wait time like they'll
ask it a prompt who designed the
Williamsburg
Bridge the Williams and instead of
letting the video run so that the
viewers can get a sense of how long
these things actually take to respond to
things they'll cut the video and just
edit it to when the response happens it
is so weird it's like that wait time
that lag is a big factor in determining
how good or useful these things really
are there's the argument to be made like
okay this is pre-production engineering
stuff so the wait time that's live right
now isn't actually representative of
what the product will be in the future
sure throw some text up why are we
hiding the latency of this stuff behind
edits I hate that okay another thing I
want to talk about is in regards to the
Humane AI pin specifically so I remember
when they first showed off this laser
projector Tech at a Ted Talk and it was
really cool right it looked like it
could be really small but now that it's
out it turns out it's actually a fairly
chonky device but it's quite heavy so
unless you're wearing like a thick
material or you're wearing like really
tight fitting clothing this thing tends
to just flop around when you're using it
now it is strange to me that we're I
don't know a few weeks out from their
ship date and they still have not
allowed people to demo this thing live
like if you went to MWC you couldn't
handle these things you had to watch The
Humane employees demonstrate the product
for you to look at it was it's so weird
it's like people should at this point
have a good idea of what these things
feel like when they're being worn and
there were reports of the devices
overheating because the laser projectors
were being used too much like it's just
so strange that this is the state that
the product is in just weeks out from
the ship date I honestly think that the
the whole idea of the laser projection
is like it's a cool idea but I'm worried
it's a gimmick and my simple reasoning
is that if that feature was actually
good if a laser projected screen onto
your hand was something that people
actually wanted we would be able to go
to the store and buy that like you can
create that Tech today like you could
have a wireless like a device that just
your screen from your phone is
transmitted wirelessly and it's
projected to your hand you could make
that but it doesn't exist because I just
don't think people want that but Humane
is telling us that well that product's
not good what you just described Dave
but if you add AI to it and just make it
voice operated now it's good I don't buy
it I really don't okay the last thing I
want to talk about pertains specifically
to the rabbit R1 so this device has
something kind of special it's their
large action model the LAM and it's the
feature that gives me the most hope for
what they're trying to do with the R1 so
the easiest way I can think of to
describe what a large action model is is
that it's an AI model that performs
actions based on your prompting so
instead of just reacting to language and
text and stuff like that this can
actually do like Mouse clicks and
scrolling and it can be trained just
like a language model so if you just
showed this LAM how you edit a photo in
Photoshop let's say and then you got
5,000 people to show this LAM how they
do it it would learn and then in the
future it would be able to make its own
decisions or the LAM would be like hey
if you want to make a warm looking photo
this is what you do because I saw what
all these other people did in Photoshop
I click this I click that I scroll here
and that is a LAM a very primitive
description of it now I think the idea
is very cool but when they showed it
during the presentation they were first
of all they were
showing the use of their LAM on
services that have APIs so I don't even
know if they've actually built LAMs
that function the way they did in the
presentation but also I imagine that
people that are pre-ordering this device
think that these LAMs are going to be
available right off the bat and they I
don't think they will be how could they
be I think the 100,000 people that
pre-ordered this thing they will be the
ones responsible for training these
things to be able to do whatever they
end up doing in the future so if you
bought one expecting it to be able
to do anything other than like simple
GPT instructions or Perplexity
instructions I don't think it can how
could it that has no training data as of
right now at least not enough of it to
make it actually functional or useful
and that I felt was a little bit
misleading now now they're they're a
business right they're they have to show
the best case scenario of what this
thing can do and I respect that but it's
just when I see the presentations when I
see the demos I'm like come on man
there's no way this thing can reliably
book a vacation in three voice prompts
like there's no way and also errors like
AI hallucinations right now when you
have an AI error like a hallucination in
Midjourney you just get a dude with 12
fingers we laugh at it it's hilarious
but if you have a hallucination with
large action models you're talking about
ordering 50 pizzas or like you know
booking a vacation that's
$20,000 over budget like this is real
stuff here because these are action
models I feel like it's irresponsible
for them not just to market it like
it can do it but also to even have that
ability right now like it should be
super Super Beta and people should be
aware that this is not something you can
do right now the thing I have to keep in
mind and I think you guys also after
hearing my rant about this the thing we
have to keep in mind is like when it
comes to the technology that we have
access to today the things that we use
every day all of the stuff we use came
from lots of companies trying lots of
weird stuff and most of them ended up in
Failure but the result in the end is
awesome products for consumers so I
think these two products that's part of
it right that's part of the process to
get there I respect that and I want
these guys to succeed I don't want to
see them fail I want to see this stuff
pan out the way that it could but the
marketing right now is just so strangely
optimistic and like both of
them it doesn't make sense to me how can
you how can you be responsible and put
marketing material out like that that's
all okay
um that's it hope you guys enjoyed this
video