Anthropic has launched Code Review in Claude Code, a new product designed to help developers catch bugs before code reaches production. The launch comes as AI coding tools generate larger volumes of code and increase the number of pull requests teams need to review.
TL;DR
- Anthropic launched Code Review in Claude Code on March 9, 2026.
- The product is available in research preview for Team and Enterprise customers.
- Anthropic says Code Review analyzes GitHub pull requests and posts findings as inline comments on code.
- The company says the tool focuses on logic errors, security issues, regressions, and other correctness problems rather than style feedback.
- Anthropic says the system uses multiple agents working in parallel and that reviews take about 20 minutes on average.
- Anthropic says reviews are billed on token usage and typically cost $15 to $25, depending on pull request size and complexity.
Anthropic Says Code Review Is Built To Address The Review Bottleneck Created By Faster AI Coding
TechCrunch reported that Anthropic launched Code Review as AI coding tools changed how developers work by generating larger amounts of code more quickly. Anthropic said code output per engineer has grown 200% in the last year, making code review a bigger bottleneck internally and creating similar pressure for customers.
Cat Wu, Anthropic’s head of product, told TechCrunch that enterprise leaders were asking how teams could efficiently review the growing number of pull requests created with Claude Code. Wu said this increase in code output had turned pull request reviews into a bottleneck for shipping code.
The Product Is Available In Research Preview For Team And Enterprise Customers
Anthropic said Code Review is now available in research preview for Team and Enterprise plans. TechCrunch also reported that the product is arriving first to Claude for Teams and Claude for Enterprise customers in research preview.
According to Anthropic’s documentation, admins can enable Code Review for an organization, install the Claude GitHub App, and select which repositories should receive reviews. Once enabled, reviews run automatically when a pull request opens or updates.
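As a rough illustration of what that trigger looks like at the GitHub level, the sketch below shows how a GitHub App commonly reacts to pull request webhook events. The event and action names ("pull_request", "opened", "synchronize") are GitHub's standard webhook values; the enabled-repository list and the start_review function are hypothetical stand-ins, not Anthropic's implementation.

```python
# Illustrative sketch only: how a GitHub App can decide to start a review
# when a pull request webhook arrives. Everything except GitHub's standard
# webhook fields is a hypothetical placeholder.

ENABLED_REPOS = {"acme/payments", "acme/web"}  # repositories an admin opted in (hypothetical)

def start_review(repo: str, pr_number: int) -> None:
    # Placeholder for kicking off a review job; a real app would enqueue work here.
    print(f"Starting review for {repo}#{pr_number}")

def handle_webhook(event: str, payload: dict) -> None:
    """Trigger a review when an opted-in repo opens or updates a pull request."""
    if event != "pull_request":
        return
    if payload.get("action") not in {"opened", "synchronize"}:
        return
    repo = payload["repository"]["full_name"]
    if repo not in ENABLED_REPOS:
        return
    start_review(repo, payload["pull_request"]["number"])

# Example payload shaped like GitHub's pull_request webhook (trimmed).
handle_webhook("pull_request", {
    "action": "opened",
    "repository": {"full_name": "acme/payments"},
    "pull_request": {"number": 42},
})
```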
Anthropic Says The Tool Reviews GitHub Pull Requests And Posts Findings Directly On Code
Anthropic’s documentation says Code Review analyzes GitHub pull requests and posts findings as inline comments on the lines of code where it finds issues. The company says the system is designed to work within existing review workflows and does not approve or block pull requests on its own.
TechCrunch reported that once enabled, the tool integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code. The report also said the product explains potential issues and suggested fixes inside the review flow.
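For context on what an inline comment means mechanically, the sketch below shows one way a finding can be posted as a pull request review comment through GitHub's public REST API. The endpoint and request fields are GitHub's; the repository, token, commit SHA, and finding text are placeholders, and this is not Anthropic's code.

```python
# Sketch of posting a single inline finding as a pull request review comment
# via GitHub's REST API. Replace the placeholder token, repo, and commit SHA
# before running; the finding itself is invented for illustration.
import requests

GITHUB_TOKEN = "ghp_example"   # placeholder credential
REPO = "acme/payments"         # placeholder repository
PR_NUMBER = 42

finding = {
    "path": "src/billing.py",
    "line": 118,
    "body": "Possible logic error: refunds below $0.01 bypass the audit log.",
}

resp = requests.post(
    f"https://api.github.com/repos/{REPO}/pulls/{PR_NUMBER}/comments",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "body": finding["body"],
        "commit_id": "HEAD_COMMIT_SHA",  # the PR's latest commit SHA
        "path": finding["path"],
        "line": finding["line"],
        "side": "RIGHT",                 # comment on the new version of the file
    },
    timeout=30,
)
resp.raise_for_status()
```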
Anthropic Says Code Review Focuses On Correctness Problems Rather Than Style Feedback
According to Anthropic’s documentation, Code Review focuses by default on correctness, including logic errors, security vulnerabilities, broken edge cases, subtle regressions, and bugs that could affect production behavior. The docs say it is not centered on formatting preferences or missing test coverage.
TechCrunch reported that Wu said Anthropic chose to focus on logical errors rather than style-based feedback. Wu told the publication that the company wanted the product to surface the highest-priority issues developers would need to fix.
Anthropic Says Multiple Agents Work In Parallel To Examine Code Changes And Rank Findings
Anthropic’s blog says that when a pull request is opened, Code Review dispatches a team of agents that look for bugs in parallel, verify issues to reduce false positives, and rank bugs by severity before posting a summary comment and inline findings.
The company’s documentation adds that multiple agents analyze the diff and surrounding code in parallel, then a verification step checks candidate issues against actual code behavior. Anthropic says the final results are deduplicated, ranked by severity, and posted back to the pull request.
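Anthropic has not published implementation details beyond that description, but the fan-out, verify, deduplicate, and rank pattern it outlines can be sketched conceptually as below. Every function body here is a stand-in, not Anthropic's actual agent logic.

```python
# Conceptual sketch of the described pipeline: several reviewer "agents" scan
# the diff concurrently, candidate issues are re-checked to cut false positives,
# then deduplicated and sorted by severity. All bodies are stand-ins.
import asyncio
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    path: str
    line: int
    severity: int   # higher = more serious
    message: str

async def reviewer_agent(name: str, diff: str) -> list[Finding]:
    # Stand-in for a model-driven pass over the diff and surrounding code.
    await asyncio.sleep(0)
    return [Finding("src/billing.py", 118, 3, "possible off-by-one in the refund loop")]

async def verify(finding: Finding, diff: str) -> bool:
    # Stand-in for checking a candidate issue against actual code behavior.
    await asyncio.sleep(0)
    return True

async def review(diff: str) -> list[Finding]:
    agents = ["logic", "security", "regressions"]
    batches = await asyncio.gather(*(reviewer_agent(a, diff) for a in agents))
    candidates = list({f for batch in batches for f in batch})        # deduplicate
    checks = await asyncio.gather(*(verify(f, diff) for f in candidates))
    confirmed = [f for f, ok in zip(candidates, checks) if ok]
    return sorted(confirmed, key=lambda f: f.severity, reverse=True)  # rank by severity

print(asyncio.run(review("...diff text...")))
```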
Anthropic Says Reviews Average Around 20 Minutes And Are Priced By Token Usage
Anthropic said reviews are billed on token usage and typically cost $15 to $25, with costs scaling with pull request size and complexity. The company also said the average review takes around 20 minutes.
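To see how token-based billing can land in that range, the back-of-envelope sketch below applies hypothetical per-million-token rates to a large pull request. The rates are illustrative assumptions, not Anthropic's published pricing; only the $15 to $25 average figure comes from the company.

```python
# Back-of-envelope sketch of token-based pricing. The per-million-token rates
# are placeholders for illustration, not Anthropic's published prices.
def estimated_review_cost(input_tokens: int, output_tokens: int,
                          input_rate: float = 3.0, output_rate: float = 15.0) -> float:
    """Cost in dollars given hypothetical per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# A larger, more complex pull request consumes more tokens, so the estimate scales with it.
print(f"${estimated_review_cost(input_tokens=4_000_000, output_tokens=500_000):.2f}")  # $19.50
```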
The docs say admins can control usage through organization settings and repository-level review controls, while Anthropic’s blog describes the product as a deeper and more expensive option than lighter-weight alternatives.