Fei-Fei Li’s World Labs secures $1 billion to pioneer 3D spatial intelligence for AI
Fei-Fei Li’s startup aims to master spatial intelligence, moving AI beyond text toward a fundamental understanding of 3D reality
February 18, 2026

World Labs, the artificial intelligence startup co-founded by the renowned computer scientist Fei-Fei Li, widely known as the godmother of AI, has secured one billion dollars in its latest funding round.[1][2][3][4] The capital is dedicated to developing what the company terms spatial intelligence, a leap beyond the text and image generation that has dominated the industry over the last few years. The investment marks a significant milestone for the firm, which has now transitioned from a highly anticipated stealth project into a leading force in the race to build foundational world models.[1] The round was led by major technology and investment players, including Nvidia, AMD, Autodesk, and Andreessen Horowitz, signaling a broad industry consensus that the next frontier of artificial intelligence lies in a machine’s ability to understand, navigate, and interact with the three-dimensional physical world.
The core mission of World Labs is centered on the belief that if AI is to become truly useful in practical applications, it must move from understanding words to understanding worlds.[2] While large language models have achieved remarkable success in processing and generating text, they lack a fundamental grasp of physical reality, such as the geometry of objects, the laws of physics, or the persistence of space. World Labs is pioneering the development of Large World Models, or LWMs, which are designed to perceive and reason about 3D environments.[1] This spatial intelligence allows an AI system not merely to see an image as a collection of pixels, but to understand the spatial relationships between objects, their material properties, and how they behave over time. Many experts see this transition as the critical missing piece for the advancement of robotics, autonomous systems, and advanced digital simulations.
The company’s leadership team brings an unprecedented level of expertise in computer vision and neural rendering.[2] Alongside Li, who famously led the ImageNet project that catalyzed the modern deep learning era, the founding team includes Justin Johnson, Christoph Lassner, and Ben Mildenhall.[2] These individuals are pioneers in fields such as Neural Radiance Fields, which revolutionized how computers reconstruct 3D scenes from 2D images. By combining decades of academic research with substantial financial resources, World Labs aims to create a technological substrate that will allow AI agents to operate with a human-like understanding of space. This goal has attracted significant attention from industrial partners, most notably Autodesk, which reportedly contributed two hundred million dollars to the round. The partnership with Autodesk is particularly strategic, as the software giant’s tools are foundational to the worlds of architecture, engineering, and manufacturing, where spatial accuracy and physical reasoning are paramount.[2]
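As background on the neural rendering work several of the founders helped pioneer: a Neural Radiance Field models a scene as a function mapping a 3D point and viewing direction to color and volume density, then renders each pixel by integrating along the camera ray. A standard form of the volume rendering equation, as commonly stated in the NeRF literature, is:

```latex
% Expected color of camera ray r(t) = o + t*d, integrated between
% near and far bounds t_n and t_f:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

Here \(\sigma\) is the volume density, \(\mathbf{c}\) is the view-dependent color, and \(T(t)\) is the accumulated transmittance along the ray. Reconstructing coherent 3D geometry from ordinary 2D images in this way is the research lineage behind World Labs’ spatial ambitions.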
Coinciding with the funding news, World Labs has introduced its first major product, a generative 3D platform called Marble.[1][2][5][6][7][8] Unlike previous generative AI tools that produce flat videos or static images, Marble is designed to generate persistent, high-fidelity 3D environments.[2] Users can input text prompts, images, or video clips, and the system constructs a spatially coherent world that can be explored and modified.[2][5][7] A key feature of this platform is a tool named Chisel, which allows creators to interactively edit and assemble 3D objects within these generated environments. This capability is expected to revolutionize creative workflows in gaming, film production, and virtual design, as it provides a way to build immersive scenes without the manual labor typically required for 3D modeling. More importantly, these worlds are not merely visual hallucinations; they are built on underlying models that maintain the consistency of objects even when they are moved or viewed from different angles.
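To make the notion of “persistence” concrete, consider a toy sketch. This is purely illustrative: the class and method names below are inventions for explanation, not Marble’s actual interface. The point it demonstrates is that in a persistent world, the objects and their positions are fixed properties of the scene, and moving the camera changes only how that same scene is viewed.

```python
import math
from dataclasses import dataclass, field

# Hypothetical sketch of 3D persistence; SceneObject, World, and view_from
# are invented names for illustration, NOT Marble's real API.

@dataclass(frozen=True)
class SceneObject:
    name: str
    position: tuple  # (x, y, z) in world coordinates

@dataclass
class World:
    objects: list = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        self.objects.append(obj)

    def view_from(self, camera: tuple) -> list:
        # A persistent world keeps object identities and positions fixed;
        # changing the camera changes only how the same scene is presented
        # (here, simply the depth ordering of objects).
        return sorted(self.objects, key=lambda o: math.dist(o.position, camera))

world = World()
world.add(SceneObject("chair", (1.0, 0.0, 0.0)))
world.add(SceneObject("table", (4.0, 0.0, 0.0)))

front_view = world.view_from((0.0, 0.0, 0.0))   # chair is nearer
back_view = world.view_from((10.0, 0.0, 0.0))   # table is nearer
```

Both viewpoints see exactly the same set of objects in the same world positions; only the ordering by distance differs. That consistency under viewpoint change, maintained by a learned model rather than a hand-built scene graph, is what distinguishes a world model from frame-by-frame image generation.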
The implications of this technology extend far beyond the creative arts and into the realm of physical-world AI.[9] One of the most significant challenges in modern robotics is the gap between simulated environments and the real world. By creating foundational models that understand physics and spatial dimensions, World Labs provides a training ground where robotic systems can learn complex tasks in a virtual setting that accurately mirrors reality. This could drastically accelerate the deployment of autonomous assistants in homes, factories, and hospitals. Furthermore, the scientific community could leverage these world models to simulate complex biological or chemical processes in a 3D space, potentially leading to breakthroughs in drug discovery or material science. The ability of the model to act as a simulator of reality suggests that the future of AI will be characterized by systems that don't just answer questions but solve physical problems.
The billion-dollar investment also reflects the shifting dynamics of the AI venture capital landscape.[9] As the market for large language models becomes increasingly crowded and dominated by a few massive corporations, investors are looking for the next platform shift. World Labs’ valuation, which is reportedly approaching five billion dollars, underscores the high premium placed on startups that can own the infrastructure of the next generation of AI.[9] By securing backing from both Nvidia and AMD, World Labs has effectively positioned itself at the intersection of hardware and software, ensuring its models are optimized for the massive compute requirements of 3D spatial reasoning. Other investors in the round, such as Fidelity Management & Research Company and the Emerson Collective, highlight the diverse interest in a technology that promises to reshape everything from education to urban planning.
Despite the optimism, the road ahead for spatial intelligence is fraught with technical challenges. Creating a model that truly understands the intricacies of the physical world requires vast amounts of data that are far more complex than the text used to train LLMs. While ImageNet provided the labeled data needed for object recognition, there is no equivalent massive, high-fidelity dataset for the dynamic 3D world. World Labs is reportedly developing novel ways to train its models, potentially using synthetic data and video to bridge this gap. The competition is also intensifying, as other well-funded startups and established tech giants like OpenAI and Google have begun exploring their own versions of world models. However, World Labs’ specific focus on 3D persistence and physical reasoning gives it a distinct niche in a market often distracted by broader, less specialized applications.
In summary, the emergence of World Labs as a billion-dollar entity marks the beginning of the era of spatial intelligence. Under the guidance of Fei-Fei Li and her team of visionaries, the company is attempting to move AI out of the digital ether and into the three-dimensional reality we inhabit.[5] By building models that can see, think, and act in space, they are laying the groundwork for a future where the distinction between digital simulation and physical reality becomes increasingly blurred. Whether used to design a skyscraper, direct a film, or train a robot to navigate a busy warehouse, the world models being developed at World Labs represent a fundamental change in how humanity will interact with intelligent machines. As these systems move from research labs to the public through products like Marble, the industry will be watching closely to see if spatial intelligence truly becomes the core building block for the next generation of global innovation.