AI Learns to Dream: Anthropic's New Agent Feature
6 May
Summary
- Anthropic introduces 'Dreaming' for AI agents to improve performance.
- The feature analyzes agent activity logs to find self-improvement insights.
- AI naming conventions are increasingly borrowing human cognitive terms.

Anthropic recently announced "Dreaming," a new feature integrated into its AI agent infrastructure. This development aims to enhance the performance of AI agents by enabling them to review their past operational transcripts.
The "Dreaming" feature allows agents to identify patterns within their activity logs. These insights are then used to refine the agents' abilities, effectively creating a self-improvement mechanism. This function is part of Anthropic's research preview for developers.
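Anthropic has not published implementation details, but the idea of mining past transcripts for recurring patterns can be illustrated with a minimal sketch. Everything here is hypothetical: the `extract_insights` function, the transcript structure, and the tag names are invented for illustration and do not reflect Anthropic's actual API or internals.

```python
from collections import Counter

def extract_insights(transcripts, top_n=2):
    """Hypothetical sketch: count recurring event tags across past agent
    transcripts and surface the most frequent ones as candidate
    self-improvement insights."""
    counts = Counter(tag for t in transcripts for tag in t["tags"])
    return [tag for tag, _ in counts.most_common(top_n)]

# Illustrative logs: three past runs, two of which hit the same retry loop.
logs = [
    {"tags": ["retry_loop", "tool_timeout"]},
    {"tags": ["retry_loop"]},
    {"tags": ["wrong_tool_choice"]},
]
print(extract_insights(logs))  # most frequent pattern ("retry_loop") first
```

In a real system, the "insights" would presumably be richer than frequency counts (for example, model-generated summaries of failure modes), but the loop of review, pattern extraction, and refinement is the same shape.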
This launch continues a trend in the AI industry of naming generative AI features after human cognitive functions. Previously, OpenAI introduced "reasoning" models that require "thinking" time. Many startups likewise describe their chatbots as having "memories," meaning stored user-specific information rather than computer memory in the hardware sense.
Anthropic's approach extends beyond marketing, influencing how its AI, Claude, is developed and described. The company employs terms like "virtue" and "wisdom" for Claude, acknowledging the influence of human text in its training. This anthropomorphizing strategy raises questions about the boundaries between artificial intelligence and human-like consciousness.