AI Shrinks Dramatically, Needs Less Power
4 Mar
Summary
- New AI model requires 32GB of memory, down from 61GB.
- CompactAI technology restructures weight matrices for greater efficiency.
- Tool-calling performance improves even as the model shrinks.

Spanish AI firm Multiverse Computing has unveiled HyperNova 60B 2602, a heavily compressed version of OpenAI's gpt-oss-120B model, with the compressed model freely available on Hugging Face. The release is a notable step toward making advanced AI more practical and accessible: memory requirements drop from 61GB to 32GB, a reduction of nearly half, while strong tool-calling capability is retained.
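For developers who want to try the model, the sketch below shows the standard Hugging Face transformers loading path. The repository ID is a hypothetical placeholder for illustration; check Multiverse Computing's Hugging Face page for the published name.

```python
# Minimal sketch of loading the compressed model via the transformers API.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository ID -- replace with the actual published repo name.
repo_id = "multiverse-computing/hypernova-60b-2602"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a Python function that parses an ISO 8601 date."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```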
The reduction is attributed to Multiverse's proprietary CompactAI technology, which uses quantum-inspired tensor networks to restructure the internal weight matrices of large language models. Rather than simply removing parameters, as pruning does, CompactAI rewrites the model's mathematical structure into a more compact form. The compression is applied after training, requires no access to the original training data, and can cut memory usage by up to 93%.
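CompactAI's tensor-network method is proprietary, but the core idea of rewriting one large weight matrix into smaller factors can be illustrated with a simplified stand-in: truncated SVD, which replaces an m-by-n matrix with two thin factors. The sketch below is illustrative only and is not Multiverse's algorithm; the matrix sizes are toy values.

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Factor W (m x n) into thin matrices A (m x rank) and B (rank x n)
    via truncated SVD, so that A @ B approximates W with fewer parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Toy weight matrix standing in for one layer of an LLM.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)).astype(np.float32)

A, B = low_rank_compress(W, rank=128)

original = W.size
compressed = A.size + B.size
print(f"params: {original:,} -> {compressed:,} "
      f"({compressed / original:.0%} of original)")
print("relative reconstruction error:",
      np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

On real trained weights the factors are chosen so the approximation error stays small; here the random matrix simply demonstrates the parameter-count arithmetic (two 1024-by-128 factors hold 25% of the original entries).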
This compression approach offers a viable alternative to continually increasing model sizes, aligning with European discussions on sovereign AI, infrastructure limits, and energy consumption. The company highlights enhanced performance on agent-focused benchmarks, demonstrating improvements in tool use and coding workflows. These advancements position HyperNova 60B 2602 as a more deployable solution for developers facing budget or energy constraints.