AI Fails: Data Quality is The Real Problem
12 Jan
Summary
- MIT reports 95% of AI pilots fail due to poor data quality.
- Merger aims to bridge the AI 'trust gap' costing billions.
- New AI approach balances scale and precision for data.
- Shift from AI experimentation to operationalization noted.

Enterprise AI initiatives frequently falter not because of inadequate models, but because of the subpar quality of the data they process. MIT research indicates that roughly 95% of AI pilots fail, underscoring how widespread the problem is. That finding has spurred a wave of consolidation in the AI infrastructure market, as enterprise spending shifts toward ensuring data reliability.
The recent merger of Tasq AI and Blend into a unified Tasq AI entity underscores this trend. The combined company, which serves over 200 enterprise clients, aims to close the 'trust gap' that has become a major bottleneck in AI adoption: enterprises cannot fully leverage the billions invested in AI infrastructure while the underlying data layer remains unreliable.
The new Tasq AI integrates Tasq AI's data processing platform with Blend's network of domain experts spanning numerous languages. Its 'multi-layer model' applies varying levels of human expertise to ensure data quality, from high-volume crowd work to specialized expert review; a rough sketch of this kind of tiered routing appears below. The company asserts the approach can train models up to 10 times faster, addressing the need for both speed and precision in AI data operations.
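Tasq AI has not published the internals of its multi-layer model, but the core idea, routing each data item to the cheapest tier of human review that can reliably verify it, is easy to sketch. The tier names, confidence thresholds, and item fields below are illustrative assumptions, not the company's actual pipeline:

```python
from dataclasses import dataclass
from enum import Enum


class ReviewTier(Enum):
    CROWD = "crowd"      # high-volume, low-cost annotation
    TRAINED = "trained"  # vetted annotators for nuanced items
    EXPERT = "expert"    # domain specialists for hard or regulated content


@dataclass
class DataItem:
    text: str
    model_confidence: float  # 0.0-1.0 score from an upstream model (assumed)
    domain_specific: bool    # e.g. medical or legal content (assumed flag)


def route(item: DataItem) -> ReviewTier:
    """Send each item to the cheapest tier expected to verify it reliably.

    Thresholds are placeholders; a production system would tune them
    per task and per language.
    """
    if item.domain_specific:
        return ReviewTier.EXPERT   # scarce expert time only where required
    if item.model_confidence < 0.6:
        return ReviewTier.TRAINED  # ambiguous items get closer review
    return ReviewTier.CROWD        # confident items go to the cheap tier


if __name__ == "__main__":
    samples = [
        DataItem("product review with clear sentiment", 0.95, False),
        DataItem("ambiguous, possibly sarcastic post", 0.40, False),
        DataItem("clinical trial annotation", 0.90, True),
    ]
    for item in samples:
        print(f"{item.text!r} -> {route(item).value}")
```

The design point this illustrates is the speed/precision trade-off the companies describe: cheap, parallel crowd work absorbs the bulk of the volume, while expert review is reserved for the small fraction of items where errors would be costly.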




