AI Hits Physical World Limits, World Models Rise
21 Mar
Summary
- Large language models struggle with physical world understanding.
- World models simulate reality for safer AI testing.
- JEPA, Gaussian splats, and end-to-end models are key approaches.

Artificial intelligence is running into hard limits in domains that require a grasp of the physical world, such as robotics and autonomous driving. This constraint is redirecting investor attention toward 'world models,' systems designed to simulate reality so that AI can be trained and tested safely. Companies such as AMI Labs and World Labs have recently closed substantial funding rounds.
Large language models excel at abstract knowledge but lack grounding in physical causality, which leads to brittle behavior. Researchers and AI leaders describe current systems as showing 'jagged intelligence': capable of complex abstract tasks yet failing at basic physics. World models aim to close this gap by acting as internal simulators of how the world behaves.
Three primary architectural approaches are emerging:
- Joint Embedding Predictive Architecture (JEPA) learns to predict in latent representation space rather than raw pixels, offering computational efficiency suited to real-time applications such as robotics.
- Gaussian splats build complete 3D spatial environments, well suited to spatial computing and entertainment.
- End-to-end generative models take prompts and actions and continuously generate scenes and physical dynamics, enabling large-scale synthetic data generation for training AI in complex scenarios.
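To make the JEPA idea concrete, here is a minimal toy sketch of latent-space prediction: a context observation and a target observation are each encoded, and a predictor is scored on how well it matches the target's *latent*, never its raw pixels. All names, dimensions, and the linear encoders are hypothetical illustrations, not any lab's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy linear "encoder": project a flattened observation into latent space.
    return np.tanh(x @ W)

# Hypothetical dimensions: 16-dim observations, 4-dim latents.
W_ctx = rng.normal(size=(16, 4))    # context encoder weights
W_tgt = rng.normal(size=(16, 4))    # target encoder weights
W_pred = rng.normal(size=(4, 4))    # predictor weights

context = rng.normal(size=16)       # e.g. current camera frame, flattened
target = rng.normal(size=16)        # e.g. the next frame

z_ctx = encode(context, W_ctx)
z_tgt = encode(target, W_tgt)

# The predictor is trained to match the target's latent embedding --
# comparing 4-dim vectors instead of reconstructing every pixel is
# where the computational efficiency comes from.
z_pred = z_ctx @ W_pred
loss = float(np.mean((z_pred - z_tgt) ** 2))
print(loss >= 0.0)
```

Training would minimize this latent-space loss over many observation pairs; the key design choice JEPA represents is that nothing in the objective ever requires generating the full target observation.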
Future developments point toward hybrid architectures that combine the strengths of LLMs with these world model approaches. This evolution is seen as foundational for AI's integration into physical and spatial data pipelines, supplying the capabilities needed for safe real-world operation.
