FIGURE 01 AI Robot Update w/ OpenAI + Microsoft Shocks Tech World (THEMIS HUMANOID DEMO)
Transcripts
OpenAI and Microsoft want to invest $500 million into Figure AI, the company behind the Figure 01 humanoid robot, arguably the most capable and intelligent bipedal machine in the world. But what makes this robot so special? In fact, just over one year from its inception, Figure AI has already made world-record progress toward creating this general-purpose marvel, which not only replicates human finger movements but even exceeds them in both dexterity and versatility. In terms of size, the Figure 01 robot stands at a height of 167 cm to strike the perfect balance between size and functionality. Even more importantly, the robot is engineered to handle significant payloads of up to 20 kg, making it a valuable asset across various industrial and service applications. So much so that even BMW is already testing the Figure 01 humanoid in its own production facilities.
But despite the robot's human-like size, it's still relatively lightweight at just 60 kg, enhancing its agility, maneuverability, and battery life: it boasts an impressive runtime of five hours on a single charge, sustains continued performance even in demanding environments, and walks at a top speed of 1.2 m/s. Impressively, Figure AI has assembled an elite 51-member team, with many of its employees brought in from tech giants like Boston Dynamics, Tesla, and Apple, allowing for the robot's record-setting development speed.
Furthermore, Figure AI maintains an edge on its design time by having its very own in-house prototyping and production facility, which allows the company to rapidly iterate through hardware implementations as it focuses on five key areas. Number one: system hardware. Figure AI aims to create a fully electromechanical humanoid with ultra-dexterous hands, and by setting benchmarks in motion range, payload, torque, and energy efficiency, the company believes it can match the physical capabilities of an average human.
Number two: advanced AI. The development of an AI system capable of enabling autonomous task completion by humanoid robots is one of Figure's most ambitious goals, with the company already hard at work creating intelligent agents that can adapt to and navigate complex real-world environments. Number three: affordability and volume. Aiming to reduce the robot's cost, Figure AI is focusing on high-volume manufacturing as its strategy to economize the robot and open access to a broader consumer market. Number four: safety. Figure AI claims that operational safety is a cornerstone of its designs, ensuring that Figure 01 robots can safely work side by side with people.
But in terms of the future, Figure AI is taking a pragmatic approach that focuses on real-world industrial applications, with the company constantly testing and verifying new designs. Additionally, Figure AI is also considering a robotics-as-a-service model for its business, a model that could provide smaller operations with humanoid robots without requiring hundreds of thousands of dollars in capital to get started.
And with the humanoid robot market poised to be even bigger than the personal computer market, both Microsoft and OpenAI are doing their best to secure a firm position for themselves in the robotics hardware industry. And that's just the beginning of what's happening with AI robots, as Westwood Robotics has also recently given the world a sneak peek at its newest THEMIS humanoid robot, yet another advanced general-purpose machine that has already been seen in the wild on multiple occasions performing various real-world tasks. While not much is known about Westwood's THEMIS, it already appears capable of walking across various terrains as well as carrying out several other tasks. Based on the company's previous open-source BRUCE robot, which costs just over $15,000 and features 16 degrees of freedom, integrated proprioceptive extremities, and liquid-cooled actuators, it's likely that THEMIS will integrate some of these design paradigms as well.
While THEMIS's release date and price are still unknown, it appears the robot will incorporate multiple LCD screens to communicate with humans in its vicinity, and Westwood has already shown multiple video demonstrations of THEMIS operating in the wild.
Next, Nvidia has revealed its dual-computer AI model that's set to transform how robots are developed and deployed. In this dual-computer model, the first computer, known as the AI factory, plays a vital role in the ongoing development and improvement of AI models, leveraging Nvidia's data-center resources and platforms for both simulation and training. This aspect of the Isaac platform is vital to refining the accuracy, performance, and adaptability of the AI responsible for powering various types of autonomous mobile robots. The second computer of the dual-computer model complements the AI factory as a runtime environment and varies based on the robot's application, ranging from cloud-based systems to on-premises machine processors like Nvidia's Jetson, which acts as an edge device equipped with an array of sensors and cameras. Meanwhile, the introduction of Nvidia's generative AI is revolutionizing the field of simulation and synthetic data generation.
By leveraging large language models such as ChatGPT, Nvidia Isaac enables the creation of intricate and detailed scenes from simple text prompts in mere minutes. Furthermore, Nvidia's text-to-3D asset generation via Picasso elevates this capability even further by producing new, lifelike assets on demand. This advancement drastically cuts down the time and resources traditionally needed for simulation and data generation, paving the way for more efficient and effective robotic development in the runtime environment.
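The dual-computer pattern described above — an "AI factory" that trains and refines models offline, then hands frozen parameters to a lightweight edge runtime — can be sketched in miniature. This is a hedged, framework-free illustration of the pattern only; the function and class names are hypothetical and are not Nvidia Isaac APIs, and a real pipeline would train a neural policy rather than a least-squares line fit.

```python
# Minimal sketch of the dual-computer pattern: offline training ("AI factory")
# produces a serialized artifact; an edge runtime (standing in for a
# Jetson-class device) loads it and serves low-latency predictions.
# All names here are illustrative, not Nvidia Isaac APIs.
import json

def ai_factory_train(samples):
    """Offline step: fit y = w*x + b by least squares on simulated data."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return {"w": w, "b": b}

class EdgeRuntime:
    """Online step: loads frozen parameters and runs inference only."""
    def __init__(self, artifact: str):
        self.params = json.loads(artifact)  # e.g. shipped over the network

    def infer(self, x: float) -> float:
        return self.params["w"] * x + self.params["b"]

# "Factory" side: train on simulated data drawn from y = 2x + 1,
# then export the model as a portable artifact.
artifact = json.dumps(ai_factory_train([(x, 2 * x + 1) for x in range(10)]))

# "Edge" side: deserialize and predict.
edge = EdgeRuntime(artifact)
print(edge.infer(5.0))  # → 11.0
```

The key design point the sketch captures is the strict split: the heavy, iterative work happens in the factory, while the deployed device only ever sees a compact, immutable artifact.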
The integration of large language models and vision-language models facilitates more natural and intuitive interaction between humans and robots. In fact, robots equipped with a generative AI model trained across various modalities exhibit superior accuracy compared to conventional CNN-based computer vision models. Nvidia's strategic partnerships in this domain exemplify the profound impact generative AI is having across multiple sectors, reshaping the way robots operate and interact in diverse environments. Overall, Nvidia's foray into blending generative AI with robotics through the Isaac platform is a visionary approach to creating smarter, more adaptable robots. Finally, China has unveiled Vary-toy, a pioneering compact large vision-language model (LVLM) designed to run on standard GPUs.
This breakthrough addresses the growing demand for efficient and effective image perception in AI, overcoming the challenges posed by existing vision vocabulary networks and the high computational costs associated with optimizing complex parameters. The new model emerges as a response to the limitations of popular large vision-language models, which have excelled in combining computer vision and natural language processing tasks, including image captioning, visual question answering, meme comprehension, and scene optical character recognition. These successes are largely attributed to advanced vision vocabulary networks like CLIP. However, the true potential of these models is often capped by the limitations of the vision vocabulary network in effectively encoding visual signals.
Most of all, Vary-toy stands out with its innovative approach to scaling up the vision vocabulary for large vision-language models: it trains a new visual vocabulary network using a smaller autoregressive model, such as OPT-125M, and integrates it with the existing vocabulary. Vary-toy's compact size not only makes it a potent tool in the large vision-language model landscape, but also offers an accessible solution for researchers with limited resources.
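The vocabulary-scaling idea described above — visual tokens from a newly trained vision vocabulary network concatenated with tokens from an existing CLIP-style encoder before entering a small language model — can be sketched in PyTorch. This is an illustrative toy only: the module names, dimensions, and token counts are assumptions, the tiny Transformer stands in for OPT-125M, and the stand-in encoders are not the actual networks Vary-toy uses.

```python
# Toy sketch of the Vary-toy-style "two vision vocabularies" layout:
# features from a new, trainable vocabulary branch are concatenated with
# features from a frozen CLIP-style branch, then prepended to the text
# tokens fed into a small autoregressive language model.
# Every size and name below is illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class ToyVisionVocab(nn.Module):
    """Stand-in for a vision vocabulary network (a simple patch embedder)."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(3, out_dim, kernel_size=32, stride=32)  # patchify

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.proj(images)                  # (B, D, H/32, W/32)
        return x.flatten(2).transpose(1, 2)    # (B, num_patches, D)

class ToyVaryStyleVLM(nn.Module):
    def __init__(self, dim: int = 256, text_vocab: int = 1000):
        super().__init__()
        self.new_vocab = ToyVisionVocab(dim)    # newly trained vocabulary branch
        self.clip_vocab = ToyVisionVocab(dim)   # frozen "CLIP" branch
        for p in self.clip_vocab.parameters():
            p.requires_grad = False
        # Tiny Transformer standing in for a small LM like OPT-125M.
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, text_vocab)

    def forward(self, images, text_embeds):
        # Concatenate the two visual token streams, then prepend to text.
        vis = torch.cat([self.new_vocab(images), self.clip_vocab(images)], dim=1)
        seq = torch.cat([vis, text_embeds], dim=1)
        return self.head(self.lm(seq))

model = ToyVaryStyleVLM()
# 224x224 input → 7x7 = 49 patches per branch, plus 8 text tokens.
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8, 256))
print(logits.shape)  # → torch.Size([2, 106, 1000])
```

The design choice worth noticing is that only the new vocabulary branch is trainable: the existing CLIP-style vocabulary is kept frozen and merely contributes its tokens, which is what makes "scaling up" the vocabulary cheap enough for a compact model.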