Anthropic revealed a new technique for AI agents on Wednesday, dubbed "dreaming," designed to refine their working memory and cut down on mistakes. The announcement came during the company's annual developer conference, where it also released related tools to developers.

The technique is part of Anthropic's broader push to build increasingly autonomous tools that can overhaul knowledge work. It will be integrated into the company's Claude Managed Agents product, which Anthropic aims to make more self-sufficient.

"Dreaming" works by running evaluations between sessions, reviewing an agent's past behavior to identify patterns. The process then helps agents establish better ways of working, reducing errors over time. For now, the feature is a research preview, with developers required to apply for access.
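Anthropic has not published implementation details, so the mechanics are unknown. Purely as an illustration of the idea described above, a between-session review loop might resemble the following sketch, where an offline pass tallies recurring error patterns from past session logs and promotes repeated ones into a working-memory store (all names and the log format here are hypothetical assumptions, not Anthropic's API):

```python
from collections import Counter

def dream(session_logs, memory, min_occurrences=2):
    """Hypothetical between-session review: count error tags across
    past sessions and record a guideline for each pattern that
    repeats, so later sessions can avoid the same mistake."""
    errors = Counter(
        tag for log in session_logs for tag in log.get("errors", [])
    )
    for tag, count in errors.items():
        if count >= min_occurrences and tag not in memory:
            memory[tag] = (
                f"Recurring issue ({count}x): avoid '{tag}' in future sessions."
            )
    return memory

# Example: two sessions share a repeated mistake; one error is a one-off.
logs = [
    {"errors": ["wrong_file_path", "missed_test"]},
    {"errors": ["wrong_file_path"]},
]
memory = dream(logs, {})
# Only the repeated pattern is promoted into working memory.
```

The key design point this sketch tries to capture is that the review happens offline, between sessions, rather than inside a live agent loop, and that only patterns which recur are allowed to alter the agent's standing instructions.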

Revenue at the startup has surged in recent months, driven by adoption of its Claude Code service among software engineers. Co-founder Jack Clark predicted that new tools will help AI self-improve, though he did not provide specifics.

The technique's effectiveness remains unproven in production, and access is currently restricted. Anthropic has not disclosed when, or if, "dreaming" will become widely available to all customers.