#Claude #Anthropic #Opus 4.7 #AI Models #Coding #Benchmarks

Anthropic Releases Claude Opus 4.7: What Changed for Coders

Claude Opus 4.7 brings expanded output limits, sharper multi-file editing, and stronger benchmark scores. Here's what matters for developers choosing AI coding models in 2026.

April 16, 2026 by Who Codes Best Team


Anthropic has released Claude Opus 4.7, the latest iteration of its flagship model. The update focuses on coding performance — particularly multi-file editing, long-context accuracy, and agentic reliability. Here's what developers need to know.

What's New in Opus 4.7

  • 32K max output tokens — doubled from the 16K ceiling in Opus 4.6, enabling larger single-pass refactors and full-file rewrites without truncation
  • Improved multi-file coherence — the model tracks cross-file dependencies more accurately, reducing broken imports and inconsistent type signatures across large changesets
  • Better instruction following in agentic loops — fewer off-task detours and repeated actions when operating inside tool-use workflows like Claude Code or custom agent harnesses
  • Faster time-to-first-token — Anthropic reports a measurable latency reduction on typical coding prompts, though exact figures vary by deployment region
  • Extended thinking improvements — more structured and reliable chain-of-thought on complex algorithmic and architectural reasoning tasks
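To illustrate what the doubled output ceiling changes in practice, here is a minimal sketch of deciding whether a full-file rewrite fits in a single response. The ~4-characters-per-token ratio and the 1.2× rewrite-overhead factor are rough heuristic assumptions for illustration, not Anthropic figures; real budgeting should use the provider's tokenizer.

```python
# Sketch: decide whether a full-file rewrite fits in one model response.
# Assumes ~4 characters per token (a crude heuristic) and ~20% length
# overhead for the rewritten file relative to the original.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_one_pass(source: str, max_output_tokens: int = 32_000,
                     overhead: float = 1.2) -> bool:
    """True if a rewritten file should fit under the output ceiling."""
    return int(estimate_tokens(source) * overhead) <= max_output_tokens

# A ~60 KB file (~15K estimated tokens, ~18K with overhead) would have
# blown past a 16K ceiling but fits under 32K.
big_file = "x" * 60_000
print(fits_in_one_pass(big_file, max_output_tokens=16_000))  # False
print(fits_in_one_pass(big_file, max_output_tokens=32_000))  # True
```

Under a 16K ceiling this file would need chunked edits and stitching; under 32K it can be rewritten in one pass.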

Benchmark Highlights

Anthropic's published benchmarks show gains in areas that matter for real-world coding:

Benchmark                              Opus 4.6   Opus 4.7
SWE-bench Verified                     72.5%      76.8%
HumanEval+                             93.1%      95.4%
Multi-file Edit Accuracy (internal)    81%        88%
Agentic Task Completion (internal)     74%        80%

The SWE-bench jump is notable — it places Opus 4.7 among the top performers on the benchmark as of April 2026, though direct comparisons depend on harness and prompting strategy.

Pricing and Availability

Opus 4.7 is available now via the Anthropic API with model ID claude-opus-4-7-20260416. It is also accessible through Amazon Bedrock, Google Vertex AI, and Microsoft Azure.
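For existing integrations, the switch is typically a one-line model change. Below is a minimal sketch of a Messages API request body using the model ID above; the field names (`model`, `max_tokens`, `messages`) follow Anthropic's public Messages API schema, but verify against the current API reference before relying on them.

```python
import json

# Sketch of an Anthropic Messages API request body targeting Opus 4.7.
# The model ID is the one quoted in the release notes; field names follow
# the public Messages API (model, max_tokens, messages).
request_body = {
    "model": "claude-opus-4-7-20260416",
    "max_tokens": 32_000,  # new ceiling; Opus 4.6 topped out at 16K
    "messages": [
        {"role": "user",
         "content": "Refactor this module to use async I/O: ..."},
    ],
}

print(json.dumps(request_body, indent=2))
```

The same body works through the Bedrock, Vertex AI, and Azure gateways, though each platform wraps it in its own invocation envelope and model naming scheme.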

Pricing remains unchanged from Opus 4.6:

                 Price
Input            $5.00 / 1M tokens
Output           $25.00 / 1M tokens
Context Window   200K tokens
Max Output       32K tokens

Practical Takeaways

If you're already on Opus 4.6, the upgrade is straightforward. The doubled output limit alone removes a common friction point for full-file generation. Multi-file editing improvements reduce the need for manual fixups after large refactors.

If you're comparing models, Opus 4.7 narrows the output-length gap with GPT-5.3 Codex (also 32K) while maintaining Anthropic's strengths in careful reasoning and instruction adherence. It remains the more expensive option at $5/$25 per million tokens versus Codex's $1.50/$12, so cost-sensitive workloads may still favor OpenAI.
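The pricing gap is easy to quantify. A short sketch comparing per-request cost at the rates quoted above; the 100K-input / 20K-output workload is an arbitrary example, not a benchmark:

```python
# Sketch: per-request cost at $-per-million-token rates.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Cost in dollars for one request, prices given per 1M tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example workload: 100K tokens of context in, a 20K-token rewrite out.
opus = request_cost(100_000, 20_000, in_price=5.00, out_price=25.00)
codex = request_cost(100_000, 20_000, in_price=1.50, out_price=12.00)
print(f"Opus 4.7: ${opus:.2f}  GPT-5.3 Codex: ${codex:.2f}")
# Opus 4.7: $1.00  GPT-5.3 Codex: $0.39
```

At these rates the example request costs roughly 2.5x more on Opus 4.7, which is why the reliability gains discussed next matter for the cost calculus.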

For agentic workflows, the reliability improvements matter more than raw benchmark numbers. Fewer wasted tool calls and less context pollution translate directly into lower costs and faster task completion in production agent systems.

See It in Action

Compare Claude Opus 4.7 against GPT-5.3 Codex, Gemini 2.5 Pro, and other models on real coding tasks in our side-by-side code comparisons.


We'll be generating fresh code samples with Opus 4.7 in the coming days. Check back for updated comparisons.