Liquid AI: On-Device AI Gets Real
1 Dec
Summary
- Liquid AI published technical details for its LFM2 models.
- Models offer efficient on-device AI, rivaling cloud-based options.
- This release provides a blueprint for training custom efficient models.

Liquid AI has made public the architecture, training data, and pipeline behind its Liquid Foundation Models series 2 (LFM2). Launched in July 2025, LFM2 offers fast, on-device foundation models designed for efficiency, presenting a viable alternative to cloud-based large language models. The company has expanded LFM2 with specialized variants and an edge deployment stack, targeting on-device and on-premise agentic systems. The published technical report serves as a comprehensive blueprint, enabling other organizations to train their own efficient models.
The LFM2 architecture search was performed directly on target hardware, including mobile SoCs and laptop CPUs, yielding a hybrid design optimized for real-world device constraints such as latency budgets and thermal limits. Unlike many open models that assume ample GPU resources, LFM2 prioritizes reliable operation on constrained devices. Its training pipeline combines extensive pre-training, a distinctive distillation objective, and a post-training sequence that strengthens instruction following and tool use, making the models behave more like practical agents.
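The report does not reproduce Liquid AI's exact distillation objective here, but the general pattern such pipelines follow can be sketched as a standard knowledge-distillation loss: the student is trained against the teacher's softened output distribution blended with the usual cross-entropy on ground-truth tokens. The function names and the `temperature`/`alpha` hyperparameters below are illustrative, not LFM2's actual values.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    """Generic KD loss sketch (not LFM2's exact objective).

    student_logits, teacher_logits: (batch, vocab) arrays.
    targets: (batch,) ground-truth token ids.
    """
    # Soft-target term: cross-entropy against the teacher's softened
    # distribution, scaled by T^2 to keep gradient magnitude as T grows.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    kd = -(p_teacher * log_p_student).sum(axis=-1).mean() * temperature ** 2

    # Hard-target term: ordinary cross-entropy on the true next token.
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(len(targets)), targets]).mean()

    return alpha * kd + (1 - alpha) * ce
```

The `alpha` knob trades off imitating the teacher against fitting the ground-truth data; production recipes typically anneal it over training.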
Beyond core capabilities, LFM2 includes multimodal variants for vision and audio, designed for token efficiency and on-device operation. These address needs for local document understanding and audio transcription. The retrieval model variant, LFM2-ColBERT, is built for enterprise RAG systems, enabling fast local retrieval for agent orchestration. This modular approach positions LFM2 as a foundational element for hybrid enterprise AI architectures, blending local and cloud capabilities for cost control, low latency, and enhanced governance.
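For the retrieval variant, the ColBERT family's defining mechanism is late interaction: queries and documents are encoded into per-token embeddings independently, and relevance is scored at query time with MaxSim. A minimal sketch of that scoring step, assuming pre-computed, L2-normalized token embeddings (the encoder itself is omitted):

```python
import numpy as np

def normalize(mat):
    """L2-normalize each row so dot products become cosine similarities."""
    return mat / np.linalg.norm(mat, axis=1, keepdims=True)

def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late-interaction relevance.

    For each query token embedding, take its maximum cosine similarity
    over all document token embeddings, then sum across query tokens.
    query_vecs: (n_query_tokens, dim); doc_vecs: (n_doc_tokens, dim).
    """
    sims = query_vecs @ doc_vecs.T   # (n_query_tokens, n_doc_tokens)
    return sims.max(axis=1).sum()    # best match per query token, summed
```

Because document embeddings are computed offline and only the cheap MaxSim runs per query, this design suits the fast local retrieval an on-device agent orchestrator needs.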
