Anthropic's most advanced AI model. Elite coding, million-token context, adaptive reasoning, and autonomous agent teams — redefining what's possible.
Opus 4.6 focuses attention where it matters most, automatically applying deeper reasoning to challenging components. The model determines on its own when extended thinking helps and applies deeper analysis exactly where needed; no prompt engineering or explicit instruction is required.
Highest score on Terminal-Bench 2.0 for agentic coding. Operates reliably in large codebases with sustained autonomous task execution.
First Opus-class model with million-token context. 76% accuracy on 8-needle MRCR v2 retrieval, versus Sonnet 4.5's 18.5%.
Lowest rate of over-refusals among recent Claude versions, with strong alignment and low rates of misaligned behavior across evaluations.
Automatically summarizes older context during extended sessions, enabling longer productive work without hitting context limits.
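The compaction idea can be sketched roughly as follows. This is an illustrative toy, not the model's actual mechanism: the `summarize` callable stands in for a model-generated summary, and "length" stands in for token counting.

```python
def compact(messages, budget, summarize):
    """Toy context compaction: when the conversation exceeds `budget`,
    replace the oldest messages with a single summary entry so recent
    context survives intact. `summarize` is a stand-in for a
    model-generated summary of the dropped messages."""
    if sum(len(m) for m in messages) <= budget:
        return messages  # everything still fits; nothing to compact

    # Walk from newest to oldest, keeping as much recent context as fits.
    kept, room, cut = [], budget, len(messages)
    for m in reversed(messages):
        if len(m) > room:
            break
        kept.append(m)
        room -= len(m)
        cut -= 1
    kept.reverse()

    # Everything older than the cut point is folded into one summary.
    return [summarize(messages[:cut])] + kept
```

The design point the card describes is the same: the session keeps working past the raw context limit because old turns are replaced by a compact summary rather than being truncated outright.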
Leads frontier models on BrowseComp for locating hard-to-find information, and on Humanity's Last Exam for multidisciplinary reasoning.
Opus 4.6 sets new records across coding, reasoning, knowledge work, and long-context retrieval.
Coordinate multiple Claude Code instances working as a team. One lead orchestrates, teammates work in parallel, and a shared task system keeps everything synchronized.
Multiple teammates investigate different aspects simultaneously — competing hypotheses, different review lenses, or independent modules — then share and challenge findings.
Unlike subagents that only report back, teammates message each other directly. They debate approaches, share discoveries, and coordinate without bottlenecking through the lead.
Require teammates to present their plan before implementing. The lead reviews, approves or rejects with feedback, ensuring quality control before any code changes.
Restrict the lead to coordination-only tools. It focuses purely on orchestration — breaking down work, assigning tasks, synthesizing results — while teammates handle implementation.
Run all teammates in-process within your terminal, or use tmux/iTerm2 split panes to see everyone's output at once. Message any teammate directly at any time.
Tasks can depend on other tasks. When a blocking task completes, dependents automatically unblock. File locking prevents race conditions during concurrent claims.
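The dependency mechanics above can be sketched in miniature. This is illustrative only: the class and method names are hypothetical, and an in-process lock stands in for the file locking that guards concurrent claims in the real shared task system.

```python
import threading

class TaskBoard:
    """Toy shared task list: a task becomes claimable only once all of
    its blocking tasks are complete. A threading.Lock stands in for
    file locking, so concurrent claims cannot race."""

    def __init__(self):
        self._lock = threading.Lock()   # stand-in for file locking
        self._deps = {}                 # task -> set of unfinished blockers
        self._claimed = set()
        self._done = set()

    def add(self, task, deps=()):
        with self._lock:
            self._deps[task] = set(deps)

    def claim_next(self):
        # Atomically claim one task whose dependencies have all completed.
        with self._lock:
            for task, deps in self._deps.items():
                if not deps and task not in self._claimed and task not in self._done:
                    self._claimed.add(task)
                    return task
        return None  # everything is blocked, claimed, or done

    def complete(self, task):
        # Completing a blocker automatically unblocks its dependents.
        with self._lock:
            self._done.add(task)
            self._claimed.discard(task)
            for deps in self._deps.values():
                deps.discard(task)

board = TaskBoard()
board.add("write-api")
board.add("write-tests", deps=["write-api"])
```

Here `write-tests` stays unclaimable until `write-api` completes, at which point it unblocks automatically, mirroring the behavior described above.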
Three reviewers with distinct lenses: security, performance, and test coverage. Each applies a focused filter while the lead synthesizes across all findings.
Five investigators each explore a different theory for a bug, actively trying to disprove each other through scientific debate. The surviving theory is the root cause.
Frontend, backend, and test teammates each own their layer. They coordinate via the shared task list and direct messages without stepping on each other's files.
Everything you need to integrate Opus 4.6 into your applications.
| Specification | Value |
| --- | --- |
| Model ID | claude-opus-4-6 |
| Platforms | claude.ai, API, AWS, GCP |
| Context Window | 1M tokens (beta) |
| Max Output | 128K tokens |

| Pricing | Rate |
| --- | --- |
| Input (standard) | $5 / M tokens |
| Output (standard) | $25 / M tokens |
| Input (>200K) | $10 / M tokens |
| Output (>200K) | $37.50 / M tokens |

| Capability | Status |
| --- | --- |
| Adaptive Thinking | ✓ |
| Effort Levels | ✓ |
| Context Compaction | Beta |
| US-Only Inference | 1.1× pricing |

| Safety | Result |
| --- | --- |
| Alignment | Matches/exceeds Opus 4.5 |
| Over-refusals | Lowest among Claude models |
| Cyber Probes | 6 new evaluations |
| Misalignment | Low across all evals |
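The two pricing tiers make per-request cost easy to estimate. A minimal sketch, assuming the long-context rates apply to the entire request once the prompt exceeds 200K tokens (confirm billing granularity against the official pricing docs before relying on this):

```python
def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost from the rate table above.
    Assumption: prompts over 200K tokens are billed entirely at the
    long-context rates ($10 in / $37.50 out per million tokens);
    otherwise the standard rates ($5 / $25) apply."""
    if input_tokens > 200_000:
        in_rate, out_rate = 10.00, 37.50
    else:
        in_rate, out_rate = 5.00, 25.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 300K-token prompt with a 2K-token reply
cost = request_cost_usd(300_000, 2_000)  # 3.00 input + 0.075 output = $3.075
```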
Access the most advanced Claude model through the API, Claude Code, or claude.ai.
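A minimal request sketch using the model ID from the table above. The body shape follows Anthropic's public Messages API; the prompt text is a placeholder, and you should check the current API reference for header names and versions before sending.

```python
import json

# Request body for the Messages API, using the model ID from the spec table.
payload = {
    "model": "claude-opus-4-6",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize this repo's build steps."}
    ],
}

body = json.dumps(payload)

# Send with the official SDK or any HTTP client, e.g.:
#   POST https://api.anthropic.com/v1/messages
#   headers: x-api-key: $ANTHROPIC_API_KEY,
#            anthropic-version: 2023-06-01,
#            content-type: application/json
```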