
Anthropic has introduced a new artificial intelligence feature called Claude Code Review aimed at improving how developers review code during the pull request (PR) stage. The feature is part of the Claude Code ecosystem and is designed to analyse code changes automatically before they are merged into a repository.
According to the company, the system activates when a pull request is opened and begins analysing the submitted code using a team of AI agents. The goal is to identify potential bugs, errors, or other issues early in the development workflow.
Anthropic said the feature is designed to address the growing volume of AI-generated code and the increasing pressure on human reviewers to examine large numbers of pull requests.
How Claude Code Review works
Claude Code Review uses a multi-agent system to examine code changes. Once a pull request is created, several AI agents analyse the codebase simultaneously to identify potential problems.
These agents scan the code for bugs and other issues, then verify their findings before presenting the results to developers. Anthropic says this verification step is intended to reduce false positives and improve the accuracy of the review process.
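The described flow can be pictured as agents running in parallel over a diff, with a verification pass filtering their candidate findings. The sketch below is purely illustrative: the agent functions, the finding format, and the verifier are assumptions, not Anthropic's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical review agents; each returns a list of candidate findings.
def security_agent(diff):
    return [{"line": 42, "severity": "high", "message": "possible SQL injection"}]

def logic_agent(diff):
    return [{"line": 7, "severity": "medium", "message": "off-by-one in loop bound"}]

def verify(finding, diff):
    # Stand-in for the verification step that re-checks each candidate
    # finding to weed out false positives before anything reaches a developer.
    return finding["message"] != ""

def review(diff):
    agents = [security_agent, logic_agent]
    # Run all agents concurrently over the same diff.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda agent: agent(diff), agents)
    candidates = [finding for batch in batches for finding in batch]
    # Keep only findings that survive the verification pass.
    return [finding for finding in candidates if verify(finding, diff)]
```

In this toy version the verifier accepts everything non-empty; in practice that step is where the false-positive reduction Anthropic describes would happen.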
After the analysis is complete, developers receive a summary comment outlining the findings. The tool also adds inline comments directly to specific lines of code where potential issues are detected.
The findings are ranked by severity, allowing developers to prioritise fixes based on the level of risk associated with each issue.
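Severity ranking amounts to ordering findings so the riskiest appear first. A minimal sketch, assuming a simple four-level scale (the labels and finding fields are illustrative, not the tool's actual output format):

```python
# Assumed severity scale, most severe first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

findings = [
    {"line": 7,  "severity": "low",      "message": "unused import"},
    {"line": 42, "severity": "critical", "message": "null dereference"},
    {"line": 19, "severity": "medium",   "message": "missing error check"},
]

# Sort so developers see the highest-risk issues first.
ranked = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
print(ranked[0]["message"])  # prints "null dereference"
```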
Early testing results
Anthropic said the feature has already been tested internally across a large number of pull requests within the company.
During testing, the share of pull requests that received substantive review feedback increased from 16 percent to 54 percent. Engineers reportedly marked fewer than 1 percent of the findings as incorrect.
For larger pull requests containing more than 1,000 lines of code, the system surfaced findings in 84 percent of cases. On average, the tool identified about 7.5 issues per pull request during testing.
Anthropic said Claude Code Review performs deeper analysis than basic automated code review tools and therefore may cost more to run.
The company estimates that each review typically costs between $15 and $25, depending on the complexity of the pull request and the number of tokens required to process it.
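A back-of-the-envelope way to see how token usage drives that cost: multiply token counts by per-token rates. The rates and token counts below are illustrative assumptions for the sketch, not Anthropic's published pricing.

```python
# Assumed per-token rates, for illustration only.
INPUT_RATE = 3.00 / 1_000_000    # assumed $ per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed $ per output token

def estimate_cost(input_tokens, output_tokens):
    """Rough review cost given token usage and the assumed rates above."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A large, complex pull request analysed by several agents could plausibly
# consume millions of tokens, landing in the $15-$25 range cited above.
print(estimate_cost(5_000_000, 200_000))  # prints 18.0
```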
The feature is currently available in beta as a research preview and is being rolled out to Team and Enterprise users of Claude Code.