Anthropic Fixes Claude AI Bugs Causing Quality Drop
24 Apr
Summary
- Three product-layer changes caused Claude's perceived degradation.
- Issues stemmed from a reasoning-effort default, a caching-logic bug, and system prompt verbosity limits.
- Anthropic reverted changes and reset user limits to address concerns.

Anthropic has publicly addressed claims of a quality decline in its Claude AI models, acknowledging that three specific product-layer changes inadvertently degraded performance. These changes, implemented between March 4 and April 16, 2026, altered the default reasoning effort, introduced a critical caching-logic bug, and imposed system prompt verbosity limits. The company stated that the underlying model weights remained unaffected, but the "harness" around the models meant users experienced reduced capabilities.
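Anthropic has not published the mechanics of the caching bug, but the general failure mode is easy to illustrate: if a response cache's key omits any input that influences the model's output (here, hypothetically, the system prompt), requests that differ only in that input collide and replay a stale answer. The sketch below is a generic illustration of that class of bug, not Anthropic's actual code; all function names and the cache design are invented for the example.

```python
import hashlib

cache: dict[str, str] = {}

def cache_key_buggy(system_prompt: str, user_prompt: str) -> str:
    # BUG: the key ignores the system prompt, so two requests that
    # differ only in system configuration collide in the cache.
    return hashlib.sha256(user_prompt.encode()).hexdigest()

def cache_key_fixed(system_prompt: str, user_prompt: str) -> str:
    # Fix: hash every input that influences the model's output,
    # with a separator so distinct inputs cannot concatenate ambiguously.
    payload = system_prompt + "\x00" + user_prompt
    return hashlib.sha256(payload.encode()).hexdigest()

def respond(key_fn, system_prompt: str, user_prompt: str, model_call) -> str:
    # Serve from cache when possible; otherwise call the model and store.
    key = key_fn(system_prompt, user_prompt)
    if key not in cache:
        cache[key] = model_call(system_prompt, user_prompt)
    return cache[key]

# Stand-in for a real model call.
fake_model = lambda sys, usr: f"{sys}:{usr}"

# With the buggy key, the second request silently gets the first
# request's cached answer, even though its system prompt differs.
first = respond(cache_key_buggy, "high-effort", "question", fake_model)
stale = respond(cache_key_buggy, "low-effort", "question", fake_model)
assert first == stale == "high-effort:question"

cache.clear()

# With the fixed key, each system prompt gets its own entry.
a = respond(cache_key_fixed, "high-effort", "question", fake_model)
b = respond(cache_key_fixed, "low-effort", "question", fake_model)
assert a != b
```

A bug of this shape is consistent with the symptoms users reported: the model weights are untouched, yet responses look degraded because some of them were generated under different settings than the ones in effect.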
Developers and users had reported a "shrinkflation" effect, citing issues with sustained reasoning and increased hallucinations. Third-party benchmarks, such as BridgeMind's, indicated a significant drop in Claude Opus 4.6's accuracy. In response, Anthropic has reverted the reasoning-effort and verbosity prompt changes and fixed the caching bug in v2.1.116. To restore user trust, the company is enhancing internal testing, refining evaluation suites, implementing tighter controls on prompt changes, and has reset usage limits for all subscribers as of April 23, 2026.
Anthropic also committed to greater transparency by using its @ClaudeDevs account on X and GitHub threads to provide more detailed explanations for future product decisions. These operational changes aim to ensure users receive the expected AI performance and to prevent future regressions, maintaining a more open dialogue with the developer community.