Introducing Claude Opus 4.7 - Our latest Opus model brings stronger performance across coding, agents, vision, and multi-step tasks, with greater thoroughness and consistency on the work that matters most.
By Jakub Antkiewicz
2026-04-17
Anthropic Releases Claude Opus 4.7 With Focus on Agentic Workflows and Cyber Safeguards
Anthropic has announced the general availability of Claude Opus 4.7, its latest large language model, positioning it as a significant upgrade for advanced software engineering and complex, autonomous tasks. The new model builds upon its predecessor, Opus 4.6, with enhanced capabilities in vision, instruction following, and self-verification. Notably, Anthropic states that while Opus 4.7 is a substantial improvement, it remains less broadly capable than its forthcoming frontier model, Claude Mythos Preview, and has been intentionally released with limited cybersecurity abilities as part of a new safety testing initiative.
Technical Improvements and Availability
Claude Opus 4.7 is available immediately through the Claude API and on partner platforms including Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Anthropic has kept the pricing consistent with the previous version, at $5 per million input tokens and $25 per million output tokens. The release emphasizes reliability for long-running, multi-step workflows, a frequent pain point for developers building with AI agents. Key enhancements cited by the company and early testers include:
- Advanced Coding and Autonomy: The model demonstrates stronger performance on complex coding tasks, with testers like Cursor reporting a jump from 58% to 70% on their internal benchmark.
- Enhanced Vision: Opus 4.7 features higher-resolution image analysis, which partners like Solve Intelligence and XBOW note has significantly improved performance on tasks that demand visual acuity, such as interpreting technical diagrams.
- Cybersecurity Safeguards: In a deliberate move, Anthropic has differentially reduced the model's cyber capabilities and implemented safeguards to block high-risk use cases. Security professionals can apply to the new Cyber Verification Program for legitimate access.
- Improved Reasoning and Reliability: Partners consistently highlighted better data discipline, with the model correctly identifying missing data and resisting logical traps. Genspark praised its resistance to looping, a critical factor for production agents.
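For teams budgeting agent workloads, the published per-token rates translate directly into a per-request cost. A minimal sketch of that arithmetic, in Python: only the $5 and $25 per-million-token rates come from the announcement; the function name, example token counts, and rounding are illustrative.

```python
# Rates stated in the article: $5 per million input tokens,
# $25 per million output tokens.
INPUT_RATE_PER_MTOK = 5.00    # USD per 1,000,000 input tokens
OUTPUT_RATE_PER_MTOK = 25.00  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at the published rates."""
    cost = (input_tokens / 1_000_000) * INPUT_RATE_PER_MTOK
    cost += (output_tokens / 1_000_000) * OUTPUT_RATE_PER_MTOK
    return round(cost, 6)

# Example: a 12,000-token prompt producing a 2,000-token answer
# costs $0.06 for input plus $0.05 for output.
print(estimate_cost(12_000, 2_000))
```

Long-running agents multiply this per-request figure across many tool-use turns, which is why the unchanged pricing matters for the multi-step workflows the release emphasizes.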
The market response from early-access partners suggests the model delivers tangible performance gains for enterprise and developer-focused applications. Companies like Replit, Notion, and Harvey reported that Opus 4.7 provides higher quality results with fewer errors, making it a more dependable 'teammate' for engineering, legal, and data analysis workflows. The consistent theme across feedback is the model's improved ability to manage sustained, complex tasks without close human supervision, directly addressing the growing demand for more reliable AI agents in production environments.
Anthropic's release of a deliberately sandboxed Opus 4.7 signals a mature strategy focused on capturing the enterprise agent market by proving reliability and safety before deploying its most powerful, and potentially riskiest, frontier models.