AI Unlocks Dark Data in Video
9 Feb
Summary
- InfiniMind converts vast unused video data into usable business information.
- The startup uses advanced vision-language models for deeper video analysis.
- InfiniMind secured $5.8 million in seed funding to expand its AI platform.

InfiniMind, a Tokyo-based startup co-founded by former Google executives Aza Kai and Hiraku Yanagita, is developing AI infrastructure to convert vast amounts of unused video and audio data into structured, queryable business information. The company's technology addresses the problem of "dark data"—collected but unanalyzed content sitting on servers.
Leveraging advances in vision-language models, InfiniMind's technology goes beyond basic frame-by-frame object labeling: it can analyze narratives and causality and answer complex questions about video content, a significant leap over earlier approaches. This progress was accelerated by improvements in AI capabilities between 2021 and 2023.
InfiniMind recently announced securing $5.8 million in seed funding. This investment will support the development of its flagship product, DeepFrame, enhance engineering infrastructure, and expand its customer base in Japan and the U.S. The company is also relocating its headquarters to the U.S. while maintaining an office in Japan.
The company's first product, TV Pulse, launched in Japan in April 2025 and analyzes real-time television content for media and retail clients. The upcoming DeepFrame platform, set for a beta release in March and a full launch in April 2026, offers long-form video intelligence, processing extensive footage to identify specific scenes and events. InfiniMind differentiates itself by focusing on enterprise use cases with a no-code, cost-efficient solution that integrates visual, audio, and speech understanding.