World Labs, the startup co-founded by AI researcher Fei-Fei Li, has raised $1 billion in new funding. The company shared the news on February 18, 2026. The round reflects rising interest in “world models,” AI systems built to understand, simulate, and act in 3D physical spaces.
Li, widely known for her ImageNet work that helped push computer vision forward, leads World Labs with co-founders Justin Johnson, Christoph Lassner, and Ben Mildenhall. The company came out of stealth in September 2024 with a $230 million raise at a $1 billion valuation. Since then, it has focused on spatial intelligence, with an eye on physical AI.
Big-name backers show confidence in physical AI
This round includes major players across hardware and design:
- Autodesk: Invested $200 million and will act as a strategic advisor. It plans to bring World Labs tech into 3D design workflows.
- Nvidia: Supports the push toward large, compute-heavy spatial models.
- AMD joined as another key chipmaker investor as competition in AI hardware grows.
- Other investors include Andreessen Horowitz (a16z), Emerson Collective, Fidelity Management & Research Company, and Sea.
World Labs did not share its new valuation. Still, earlier reports said the company discussed a figure near $5 billion. Either way, the timing matches a wider rush into physical AI.
World model funding jumps as AI shifts beyond text
Money flowing into the world models category has surged. Funding rose from $1.4 billion in 2024 to $6.9 billion in 2025, according to CB Insights research on physical AI models. That jump signals a clear change in priorities. Teams now want AI that can handle space, motion, and real-world cause and effect, not just language.
World models take a different approach than large language models (LLMs). Instead of focusing on words, they build internal 3D representations of environments. As a result, they can simulate outcomes and support planning, which matters for autonomous systems.
Several factors are driving demand:
- Robotics teams need agents that can move through messy, changing spaces.
- Science groups can run virtual tests to speed up real experiments.
- Creators want tools that generate consistent 3D worlds from simple inputs.
In short, world models connect AI to the physical world in a way that text-only systems can’t.
Why world models matter for robotics
World models give robots a way to plan. That planning layer helps physical AI work outside controlled lab settings.
Older robotics systems often depend on fixed rules or quick reactions. By contrast, world models add prediction and simulation:
- Perception and generation: They can infer 3D scenes from images, video, or text prompts.
- Simulation: They can forecast how objects might move under basic physics.
- Interaction: They can try actions in a virtual setting before doing them in the real world, which lowers risk and cost.
That “imagination” layer supports robots meant for homes, factories, or disaster response. Without it, robots can fail when conditions change.
Li has summed up the goal this way: “If AI is to be truly useful, it must understand worlds, not just words. Worlds are governed by geometry, physics, and dynamics.”
World Labs’ first product, Marble, points in that direction. Marble generates spatially consistent, high-detail 3D environments from multimodal inputs. World Labs released it publicly around late 2025. It supports storytelling, design mockups, and early robotics tests.
Partnerships and real use cases come into focus
Autodesk’s $200 million investment and advisor role point to near-term commercial uses. The plan is to bring world model features into design software, so architects and engineers can create simulations faster.
At the same time, Nvidia and AMD bring the computing power needed to train large models on huge datasets. Beyond design, the same tools could support autonomous vehicles, factory automation, and training simulations.
With fresh capital, World Labs plans to move faster on next-generation models. The company also wants to grow its World API, which launched in early 2026 for developers. On top of that, it expects to push deeper into robotics and science-focused work.
What does this signal mean about AI’s next phase?
This round arrives as physical AI pulls in serious attention. Robotics venture funding reached $40.7 billion in 2025, up 74% year over year, and accounted for 9% of all venture capital. In many market reports, world model companies are also rising toward the top.
Still, hard problems remain. Teams need better real and synthetic training data. They also have to connect these models to hardware safely. Finally, physical deployment brings safety and reliability issues that software-only AI doesn’t face.
Even so, the direction feels set. As World Labs scales, it joins efforts from groups like Google DeepMind that also chase embodied AI. With $1 billion added and support from hardware leaders and a top design platform, World Labs now has the backing to push world models closer to everyday use.
For investors and builders, the message is simple: physical AI is moving from theory to products. World models are starting to look like core infrastructure for the next wave of intelligent machines.