Artificial intelligence company Anthropic has launched a new agentic code review capability for Claude Code, its AI coding tool, aiming to help engineering teams automatically review code and catch issues before deployment.
The new feature introduces a multi-agent system designed to analyze pull requests, detect potential vulnerabilities, and recommend improvements. Instead of relying on a single automated check, the system uses multiple AI agents that work together to examine code changes step by step. This approach enables deeper analysis of complex updates, surfacing security flaws, performance issues, and coding-standard violations.
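To illustrate the idea of multiple specialized agents reviewing the same change and pooling their findings, here is a minimal, hypothetical sketch. This is not Anthropic's implementation: in a real system each agent would be an LLM call with its own prompt and focus area, whereas here the agents are simple pattern checks standing in for that behavior.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    line: int
    message: str

# Each "agent" specializes in one class of issue. In an agentic reviewer
# these would be separate model invocations; here they are stubs.
def security_agent(diff_lines):
    return [Finding("security", i, "possible hardcoded secret")
            for i, line in enumerate(diff_lines, 1)
            if re.search(r"(password|api_key)\s*=\s*['\"]", line)]

def style_agent(diff_lines):
    return [Finding("style", i, "line exceeds 100 characters")
            for i, line in enumerate(diff_lines, 1)
            if len(line) > 100]

def review(diff_lines):
    # Run every agent over the change and merge findings by line number,
    # so the final report reads like a single unified review.
    agents = [security_agent, style_agent]
    findings = [f for agent in agents for f in agent(diff_lines)]
    return sorted(findings, key=lambda f: f.line)

if __name__ == "__main__":
    diff = ['api_key = "sk-test-123"', "def handler(event):"]
    for f in review(diff):
        print(f"[{f.agent}] line {f.line}: {f.message}")
```

The design point is the aggregation step: each agent sees the full diff independently, and the orchestrator merges their outputs into one report rather than gating on any single check.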
According to Anthropic, the tool integrates directly into developers’ existing workflows. Once a pull request is submitted, Claude Code can automatically run a review, highlight potential risks, and suggest fixes before the code is merged into the main repository.
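The workflow described above, where a submitted pull request triggers a review whose findings gate the merge, can be sketched as follows. This is a hypothetical illustration of the pattern, not Claude Code's actual API: `gate_pull_request` and the stand-in reviewer are invented names for the purpose of the example.

```python
def naive_secret_reviewer(diff_lines):
    # Stand-in for the real agentic review: flags obvious hardcoded keys.
    return [f"line {i}: possible hardcoded secret"
            for i, line in enumerate(diff_lines, 1)
            if "api_key" in line and "=" in line]

def gate_pull_request(diff_lines, reviewer=naive_secret_reviewer):
    """Run an automated review when a PR is opened and decide merge status.

    `reviewer` is any callable returning a list of finding strings; in a
    real pipeline it would invoke the review agent before merge.
    """
    findings = reviewer(diff_lines)
    return {"merge_allowed": not findings, "comments": findings}

if __name__ == "__main__":
    print(gate_pull_request(['api_key = "sk-test-123"']))
    print(gate_pull_request(["def ok():", "    return 1"]))
```

The point of the sketch is the placement in the workflow: the review runs automatically on submission, and its comments reach the author before the change lands in the main repository.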
The company says the agentic review process is particularly useful for large codebases and security-sensitive applications where manual reviews can be time-consuming. By automating early checks, development teams can reduce errors while maintaining code quality.
The launch reflects the growing role of AI in software development. As tools like Claude Code assist developers in writing and modifying code, automated review systems are becoming increasingly important to ensure reliability and security in AI-assisted programming workflows.
Anthropic’s latest update signals a broader industry shift toward agent-driven development tools, where AI systems are capable of performing complex engineering tasks with minimal human intervention.