Figure's AI Robot Masters Blind Locomotion, Unstoppable Without Cameras
Revolutionizing robotics, Figure's AI-trained humanoid learns robust, vision-free locomotion, becoming an unstoppable force for industrial environments.
August 21, 2025

In a significant leap forward for autonomous robotics, AI startup Figure has demonstrated its humanoid robot navigating and maintaining balance without the use of cameras, relying solely on its internal sensors and a learned control system. Recent tests showcase the robot's remarkable stability: it remains upright and continues walking even as engineers shove it off course. This "blind" locomotion, a key development in the field, suggests a path toward robots that are not only agile but also resilient in unpredictable real-world environments, a crucial step for their deployment in commercial and industrial settings. Founder Brett Adcock has highlighted the robustness of this new walking capability, noting that the robot is "unstoppable" in these tests and that its performance is "starting to reach superhuman levels."
At the core of this advancement is a sophisticated form of artificial intelligence known as reinforcement learning. Figure's engineers have developed an end-to-end neural network, reportedly named the Helix walking controller, which learns to walk through a process of trial and error within a high-fidelity physics simulation.[1][2][3] In this virtual environment, thousands of digital versions of the robot are exposed to a vast range of scenarios, accumulating years' worth of experience in just a few hours.[1][4] They encounter varied terrains, changes in friction, and simulated trips, slips, and shoves.[1] The system is rewarded for stable, human-like movements—such as proper heel-strikes, toe-offs, and synchronized arm swings—and penalized for falling or instability.[1][5] This process allows a single neural network policy to learn a robust and generalized strategy for walking that can adapt to a multitude of disturbances.
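The reward-and-penalty scheme described above can be illustrated with a short sketch. Figure has not published its actual reward function, so the field names, weights, and gait-quality terms below are purely hypothetical assumptions chosen to mirror the behaviors the article mentions (velocity tracking, heel-strike/toe-off contact, arm swing, and a large penalty for falling):

```python
def locomotion_reward(state):
    """Hypothetical shaped reward for a simulated humanoid gait.

    `state` holds quantities a physics simulator would expose each
    control step; all names and weights here are illustrative, not
    Figure's actual design.
    """
    reward = 0.0
    # Reward tracking of the commanded forward velocity.
    reward += 1.0 - min(abs(state["forward_velocity"] - state["target_velocity"]), 1.0)
    # Encourage human-like gait features: a heel-strike/toe-off contact
    # pattern and arm swing synchronized with the opposite leg.
    reward += 0.2 * state["heel_toe_contact_score"]
    reward += 0.1 * state["arm_leg_sync_score"]
    # Penalize instability: torso tilt is discouraged, falling is costly.
    reward -= 0.5 * abs(state["torso_tilt_rad"])
    if state["has_fallen"]:
        reward -= 10.0
    return reward
```

Summed over millions of simulated steps, a shaped signal like this steers the policy toward stable, natural-looking walking without any hand-scripted gait.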
The most critical aspect of this training method is its ability to transfer directly from the simulation to the physical robot, a process known as "zero-shot" transfer.[1] This is achieved through a technique called domain randomization, where the physical parameters of the virtual robots are varied during training.[4] By learning to control thousands of slightly different robots in simulation, the AI model becomes adept at handling the minor variations and imperfections inherent in real-world hardware without needing further calibration or fine-tuning.[4][6] Once deployed on the physical Figure 02 robot, this learned policy operates without visual input, instead relying on proprioceptive feedback.[7] This is the robot's sense of its own body's position, orientation, and movement, derived from internal sensors that measure joint angles and forces, much like a human's innate ability to balance with their eyes closed.[6][8] This reliance on proprioception is what enables the robot to walk "blind" and react instinctively to physical disturbances.
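The two ideas in this paragraph, randomizing the physics of each simulated robot and feeding the policy only proprioceptive signals, can be sketched in a few lines. The parameter names, the +/-15% perturbation range, and the observation layout below are assumptions for illustration; Figure has not disclosed its training configuration:

```python
import random

def randomize_dynamics(base_params, rng):
    """Domain randomization sketch: perturb each physical parameter of a
    simulated robot instance so the learned policy must cope with
    hardware variation. Ranges and names are illustrative assumptions."""
    return {name: value * rng.uniform(0.85, 1.15)  # +/-15% per episode
            for name, value in base_params.items()}

def proprioceptive_observation(joint_angles, joint_velocities, torso_orientation):
    """Assemble the camera-free observation a 'blind' policy would consume:
    joint state plus an IMU-style orientation reading, no vision."""
    return list(joint_angles) + list(joint_velocities) + list(torso_orientation)

rng = random.Random(42)
base = {"link_mass_kg": 4.2, "joint_friction": 0.05, "motor_torque_nm": 150.0}
# Thousands of simulated robots each train with their own perturbed physics:
instances = [randomize_dynamics(base, rng) for _ in range(3)]
```

Because the policy never sees two identical robots during training, the real Figure 02 hardware is just one more variation within the distribution it has already mastered, which is what makes zero-shot transfer possible.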
The implications of robust, vision-free locomotion are profound for the burgeoning humanoid robotics industry. While cameras and vision systems are essential for navigation and object manipulation, they have limitations.[9] Visual sensors can be compromised by poor lighting, dust, smoke, or sensor malfunction. By developing a foundational layer of stability that does not depend on sight, Figure is building a robot that can remain mobile and safe even when its vision is impaired.[10] This level of reliability is non-negotiable for robots intended to work alongside humans in dynamic environments like warehouses, factory floors, and retail spaces—the primary markets Figure is targeting.[11] The ability to withstand unexpected bumps and maintain footing on slippery surfaces through learned reflexes, rather than pre-programmed responses, marks a significant divergence from more rigid, traditional forms of robotic control. This adaptability is seen as a key differentiator from competitors and a vital component for achieving widespread commercial viability.
This achievement by Figure represents a convergence of advanced AI with sophisticated hardware. The Figure 02 robot, standing five feet six inches tall, is an all-electric platform designed for human-centric environments.[2] Its ability to execute the complex commands generated by its neural network controller is a testament to its integrated design. The development also underscores a broader industry trend toward AI-first robotics, where learned behaviors are replacing manually scripted actions. While competitors like Boston Dynamics have long set the standard for dynamic locomotion, Figure's focus on rapid, simulation-based learning and zero-shot transfer offers a scalable path to deploying large fleets of robots that can all benefit from a single, continuously improving AI model.[1][12] As these robots learn to handle the physical world with ever-increasing grace and resilience, the prospect of a general-purpose humanoid worker moves steadily from the realm of science fiction into tangible reality.