Kintsugi's Depression AI Shutting Down; Detects Deepfakes
2 Apr
Summary
- AI startup Kintsugi is closing after failing FDA clearance.
- Its speech analysis AI for mental health is now open-source.
- Kintsugi's technology can also detect deepfake audio.

California-based startup Kintsugi, which spent seven years developing AI to detect depression and anxiety from speech, has announced its closure after failing to secure timely FDA clearance for its technology.
Kintsugi's AI analyzed the nuances of speech patterns, such as pauses and sentence structure, to identify subtle indicators of mental health conditions. This approach aimed to offer a more objective complement to traditional self-report screening tools.
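As a rough illustration only (this is not Kintsugi's actual method, whose details were not published), pause-based speech features of the kind described above can be computed from a simple voiced/unvoiced frame mask:

```python
# Illustrative sketch: deriving simple pause statistics from a
# per-frame voiced/unvoiced mask (True = speech present). Real systems
# would obtain this mask from a voice activity detector over audio.

def pause_features(voiced, frame_ms=20):
    """Count pauses and measure their durations in a frame mask."""
    pauses = []  # duration (ms) of each unvoiced run
    run = 0      # length of the current unvoiced run, in frames
    for v in voiced:
        if v:
            if run:
                pauses.append(run * frame_ms)
            run = 0
        else:
            run += 1
    if run:  # trailing pause at the end of the recording
        pauses.append(run * frame_ms)
    total_ms = len(voiced) * frame_ms
    return {
        "pause_count": len(pauses),
        "mean_pause_ms": sum(pauses) / len(pauses) if pauses else 0.0,
        "pause_ratio": sum(pauses) / total_ms if total_ms else 0.0,
    }

# Example: speech, a 3-frame pause, speech, a 2-frame pause
mask = [True, True, False, False, False, True, True, False, False]
print(pause_features(mask))
# → {'pause_count': 2, 'mean_pause_ms': 50.0, 'pause_ratio': 0.555...}
```

Features like these could then feed a classifier alongside other cues (sentence structure, speech rate), which is the general shape of the approach the article describes.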
The startup pursued FDA clearance via the "De Novo" pathway, a process that proved lengthy and complex, particularly for novel AI-based medical devices. Delays caused by government shutdowns and the evolving regulatory landscape for AI added to the company's challenges.
Facing funding shortfalls, Kintsugi decided to open-source most of its technology rather than accept unfavorable investment terms. This move allows other developers to build upon its work, though it raises concerns about misuse outside of clinical settings.
Notably, one part of Kintsugi's technology was not open-sourced: its capability for detecting synthetic or manipulated voices. This emerged as a byproduct of efforts to strengthen its mental health models, and it addresses the growing challenge of AI-generated audio deepfakes.