Compute Power Dominates AI, Not Algorithms
13 Feb
Summary
- Compute power, more than algorithmic innovation, drives gains in AI accuracy.
- MIT researchers found that training compute is the primary driver of LLM progress.
- Frontier models use over a thousand times more compute than typical models.

A recent study by MIT researchers indicates that the advancement of large language models (LLMs) is predominantly fueled by escalating training compute power rather than algorithmic innovation. The findings suggest that while proprietary techniques and shared industry progress play a role, the sheer scale of computing is the most significant contributor to AI accuracy improvements.
Specifically, each tenfold increase in training compute yields a measurable gain in benchmark accuracy. Models at the 95th percentile reportedly use 1,321 times more compute than those at the 5th percentile, a vast computational disparity. This implies that sustained leadership in frontier AI capabilities is unlikely without continuous access to rapidly expanding compute resources.
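The relationship described above, where each tenfold increase in compute adds a roughly fixed accuracy gain, can be sketched as a log-linear model. The function below is illustrative only: the base accuracy and per-10x gain are assumed placeholder values, not figures fitted by the MIT study.

```python
import math

def benchmark_accuracy(compute, base_accuracy=0.50, gain_per_10x=0.05):
    """Hypothetical log-linear scaling: each 10x increase in training
    compute adds a fixed number of accuracy points.

    `base_accuracy` and `gain_per_10x` are illustrative assumptions,
    not parameters reported in the study.
    """
    return base_accuracy + gain_per_10x * math.log10(compute)

# A model trained with 1,321x the compute of a baseline spans a bit
# more than three 10x steps, so it gains just over three steps' worth
# of accuracy under this toy model.
baseline = benchmark_accuracy(1.0)
frontier = benchmark_accuracy(1321.0)
print(round(frontier - baseline, 3))  # → 0.156
```

Under this toy model, the 1,321× compute gap between the 5th and 95th percentiles translates into a fixed accuracy advantage that smaller budgets cannot close without either more compute or better algorithms.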
Despite compute's dominance, smarter algorithms still help reduce costs and allow smaller models to close the performance gap. The largest effects of algorithmic progress appear below the frontier, where the compute needed to reach modest capability thresholds has fallen sharply. For smaller firms, the implication is that efficiency is key: compressing capabilities into smaller, cheaper models.
The research implies a bifurcated AI world: massive compute for frontier models, and optimized software for smaller, deployable models. Giants like Google, Anthropic, and OpenAI are expected to maintain their lead in cutting-edge models due to substantial financial backing, while smaller players focus on efficiency and affordability.
