Anthropic's 96 Hours — Access, Capability, Execution Across Three Layers

MJ · 11 min read

Three Anthropic announcements across 96 hours in April 2026, broken down into Access, Capability, and Execution layers to reveal a coordinated vertical integration strategy.

What Was Announced in 96 Hours

Between April 4 and April 8, 2026, Anthropic made three announcements in roughly 96 hours.

  • April 4 — Claude Pro and Max subscriptions can no longer be used with OpenClaw, Cursor, or other third-party agentic tools (VentureBeat)
  • April 7 — Claude Mythos Preview is unveiled, with no general API release and access limited to Project Glasswing partners (red.anthropic.com)
  • April 8 — Claude Managed Agents enters public beta, with Anthropic hosting the agent runtime directly (SiliconANGLE)

Each was reported as a separate move: a usage policy update, a frontier model disclosure, an agent platform launch. On the surface, they look unrelated.

What stands out is the density. It is uncommon for an AI company to ship three announcements of this weight inside 96 hours. This post reads the three together through a single frame — the three layers of the AI stack — and argues that each announcement records movement in the same direction on a different layer.


The Frame — Three Layers of the AI Stack

Any AI product can be decomposed into three questions, from the user’s perspective:

  1. Access — Who is allowed to use the model? (distribution control)
  2. Capability — Which model, under what conditions, is made available? (capability differentiation)
  3. Execution — Where does the code that uses the model actually run? (runtime infrastructure)

Every AI company has an answer to these three questions. And the location of control over each layer defines the company’s strategic position. An earlier post, Anatomy of the AI Market in 3 Layers, split the AI market into infrastructure, platform, and application layers. This post applies the same “three layers” frame — but to a single company’s control distribution.

```mermaid
flowchart TB
    subgraph STACK["AI Product Stack"]
        A["Access Layer\nWho uses the model"]
        C["Capability Layer\nWhich model, which terms"]
        E["Execution Layer\nWhere the code runs"]
    end
    A --> E1["April 4: third-party harness cutoff\nSubscription routes reclaimed"]
    C --> E2["April 7: Mythos Preview restricted release\nProject Glasswing selective supply"]
    E --> E3["April 8: Managed Agents public beta\nRuntime hosted by Anthropic"]
```

Across the 96 hours, Anthropic narrowed external routes — or replaced them with routes it directly controls — in all three layers. The rest of this post walks through what actually changed at each layer.


Access Layer — April 4 Third-Party Harness Cutoff

What Was Actually Blocked

Effective 12pm PT on April 4, 2026, Claude Pro and Claude Max subscribers lost the ability to connect their subscriptions to OpenClaw, Cursor, and other third-party agentic tools. Prior to this date, a user paying $20 or $200 per month could route their subscription token into an external agent harness such as OpenClaw and call Claude models from there. After April 4, that path was closed.

Tools explicitly named or implicated include:

  • OpenClaw — a third-party agent harness founded by Peter Steinberger
  • Cursor — the AI-native IDE, which had offered a user-linked Claude subscription path alongside its own plans
  • Other “third-party agentic tools” — not individually enumerated, but read under Anthropic’s ToS to cover any agent harness that is not the official Claude Code client

These users were left with one real alternative: migrate to Anthropic’s pay-as-you-go API billing and pay for actual consumption.

Boris Cherny’s Explanation and the Technical Safeguard

The official explanation came from Boris Cherny, Head of Claude Code.

“Subscriptions weren’t built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.” — Boris Cherny, Head of Claude Code (VentureBeat)

Two things stand out. First: subscription plans were designed around specific usage patterns, and agentic workloads broke those assumptions. Second: compute capacity is a resource Anthropic rations, and direct customers using Anthropic products and API get priority.

The move was not purely a policy change. Anthropic also introduced technical safeguards. The primary target was third-party clients that had been spoofing the official Claude Code client identifier in order to reach the models. Anthropic tightened client identity verification using a combination of request headers, signatures, and client fingerprinting.
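
A minimal sketch of the verification idea described above: an identifier header alone is trivially spoofable, so the server also checks a signature that only the official client build can produce. The header names, signing scheme, and key handling here are illustrative assumptions, not Anthropic's actual mechanism.

```python
import hashlib
import hmac

# Hypothetical per-build secret embedded in the official client.
CLIENT_SIGNING_KEY = b"per-build-secret"

def sign_request(client_id: str, body: bytes) -> str:
    """Compute an HMAC over the client identifier and request body."""
    mac = hmac.new(CLIENT_SIGNING_KEY, client_id.encode() + body, hashlib.sha256)
    return mac.hexdigest()

def verify_client(headers: dict, body: bytes) -> bool:
    """Server-side check: the identifier string is spoofable on its own;
    the signature must match what the official client would produce."""
    client_id = headers.get("x-client-id", "")
    signature = headers.get("x-client-signature", "")
    if client_id != "claude-code-official":
        return False
    expected = sign_request(client_id, body)
    return hmac.compare_digest(signature, expected)

body = b'{"model": "claude"}'
# A spoofed client that only copies the identifier fails the check:
spoofed = {"x-client-id": "claude-code-official", "x-client-signature": "deadbeef"}
genuine = {"x-client-id": "claude-code-official",
           "x-client-signature": sign_request("claude-code-official", body)}
print(verify_client(spoofed, body), verify_client(genuine, body))  # False True
```

In practice a scheme like this would be layered with TLS fingerprinting and behavioral signals, since any secret shipped in a client binary can eventually be extracted.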

The August 2025 OpenAI API Revocation and ToS Section D.4

This action did not come from nowhere. In August 2025, Anthropic revoked OpenAI’s access to the Claude API. OpenAI had been using Claude for benchmarking and safety testing of its own models. Anthropic determined this usage violated its terms of service.

The operative clause is Section D.4 (Use Restrictions) of Anthropic’s Commercial Terms of Service, which expressly prohibits customers from using the service to “build a competing product or service, including to train competing AI models.”

The August 2025 OpenAI revocation and the April 2026 third-party harness cutoff target different actors (a rival lab versus independent developer tools), but the legal rationale is the same. Anthropic is progressively treating who uses its models, how, and for what purpose as contractually enforceable decisions, not inherited assumptions.

The Subscription Math — $200 Versus $3,650

Cherny’s “subscriptions weren’t built for the usage patterns” is the surface of the explanation. Underneath are the numbers. Independent analyst Sderosiaux examined the actual usage pattern of Claude Max subscriptions routed through OpenClaw and estimated that a user paying around $200 per month could consume compute worth approximately $3,650 per month (Sderosiaux Substack).

| Item | Figure |
| --- | --- |
| Claude Max monthly subscription | approx. $200 |
| Estimated underlying compute cost via third-party harness | approx. $3,650 |
| Ratio | approx. 18x |

The precision of the estimate has not been officially confirmed by Anthropic, but the direction is hard to dispute. A flat subscription is priced around an assumed usage pattern — a few hours a day of interactive coding sessions. OpenClaw and similar agentic harnesses broke that assumption by allowing 24-hour autonomous loops, batch processing of millions of tokens, and aggressive retry logic. The subscription tier was open to usage patterns it could not absorb.
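
The arithmetic behind the table is simple enough to write down. The two dollar figures are the ones cited from the Sderosiaux analysis; the break-even calculation is an illustrative extension, not part of the original estimate.

```python
# Figures cited in the text (Sderosiaux estimate).
subscription_monthly = 200.0   # Claude Max flat rate, USD/month
estimated_compute = 3650.0     # estimated underlying compute, USD/month

# The ratio the analysis highlights: roughly 18x.
ratio = estimated_compute / subscription_monthly
print(f"compute-to-price ratio: {ratio:.2f}x")

# Illustrative break-even: the fraction of that usage pattern the
# flat fee could actually cover before the plan runs at a loss.
break_even_fraction = subscription_monthly / estimated_compute
print(f"flat fee covers about {break_even_fraction:.0%} of the workload")
```

Any flat-rate plan is a bet on the average usage pattern; an agent harness running autonomous loops sits far out in the tail of that distribution, which is the bet breaking.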

Read this way, April 4 is less a disciplinary move than a price renegotiation. Third-party harness users get a choice: move to metered API billing and pay what they consume, or reduce usage. Either outcome is acceptable to Anthropic.

The OpenClaw Founder’s Move to OpenAI

One piece of context belongs in the record. Peter Steinberger, the founder of OpenClaw, joined OpenAI shortly before Anthropic’s cutoff was announced (VentureBeat). Whether the cutoff pushed the founder to a competitor, or the founder’s move influenced the timing of the cutoff, is not publicly established. Both events landing in the same week is worth noting regardless. The tension around third-party harnesses was not purely about usage volume — it was entangled with ecosystem and talent competition.


Capability Layer — April 7 Claude Mythos Preview

Benchmarks: Firefox, OSS-Fuzz, and the 198-Report External Validation

On April 7, 2026, Anthropic published the Claude Mythos Preview announcement at red.anthropic.com. The core claim, which Anthropic itself frames as a “watershed moment,” is a step change in cybersecurity capability.

The most symbolic benchmark is a Firefox browser exploit development test.

| Model | Successful Firefox Exploits | Success Rate |
| --- | --- | --- |
| Mythos Preview | 181 successful exploits (+29 register control) | practical level |
| Opus 4.6 | 2 | near zero percent |
| Sonnet 4.6 | 0 | zero percent |

Firefox JavaScript shell exploitation has historically been the domain of professional security researchers. Mythos Preview automated the same workflow and landed 181 successful exploits. Opus 4.6, across several hundred attempts, landed only 2. The gap is not a matter of degree — it is the difference between near-zero and a practical capability level.

The OSS-Fuzz benchmark tells a similar story. Tested across roughly 7,000 entry points under a five-tier severity classification, Mythos Preview produced 595 Tier 1–2 crashes and 10 full control-flow hijacks at Tier 5. Opus 4.6 and Sonnet 4.6 produced zero Tier 5 results.

To back up these numbers, Anthropic had external security contractors review 198 vulnerability reports the model generated. The review found:

  • Exact severity match between model and human expert assessment: 89 percent
  • Within one severity level: 98 percent
  • False positives: 0

These validation results matter because they show the benchmark figures were checked against independent expert judgment rather than left as unscrutinized self-reported claims.

The Restricted Release Structure — Project Glasswing

The real message of the announcement is not the performance numbers. It is the distribution policy. Anthropic decided not to release Mythos Preview as a general API product. Instead, it created a parallel program — Project Glasswing — that gives selected partners access.

Twelve founding partners:

| Category | Partners |
| --- | --- |
| Hyperscalers | AWS, Google, Microsoft |
| Device and OS | Apple |
| Finance | JPMorgan Chase |
| Security | CrowdStrike, Palo Alto Networks |
| Infrastructure | Cisco, Broadcom, NVIDIA |
| Open source | Linux Foundation |

About 40 additional organizations were granted access for “critical infrastructure security” purposes. The full roster has not been published.

Funding and Post-Preview Pricing

Glasswing is backed by concrete funding.

  • Mythos Preview usage credits: $100 million
  • Direct donations to open-source security groups: $4 million
    • Alpha-Omega / OpenSSF, via the Linux Foundation: $2.5 million
    • Apache Software Foundation: $1.5 million

Post-preview commercial pricing has also been disclosed. The model will sell at $25 per million input tokens and $125 per million output tokens, available via the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry.
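
At the disclosed rates, per-run costs are easy to estimate. The rates below are the ones from the announcement; the workload size in the example is made up for illustration.

```python
# Disclosed post-preview rates: $25 per million input tokens,
# $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # USD per input token
OUTPUT_RATE = 125 / 1_000_000  # USD per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Token cost of a single run at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative workload: one large security-analysis pass that reads
# 2M tokens of source and emits 400k tokens of findings.
cost = run_cost(input_tokens=2_000_000, output_tokens=400_000)
print(f"${cost:.2f}")  # $50 input + $50 output = $100.00
```

The 5x output-to-input rate spread is notable: for report-heavy security workloads, output tokens dominate the bill long before input does.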

The RSP Framing — Anthropic’s Own Warning

The interesting note in the Mythos Preview announcement is that Anthropic advertises and warns about the model in the same document. The red.anthropic.com page includes this passage:

“The transitional period may be tumultuous. If frontier labs release models at this level carelessly, defenders could be at a significant disadvantage until the security landscape reaches a new equilibrium.” — red.anthropic.com, Mythos Preview

Anthropic invokes its own Responsible Scaling Policy (RSP) framework as the basis for withholding general release until additional procedures — such as a forthcoming Cyber Verification Program — are in place. The warning does three things simultaneously:

  1. Quantitatively amplifies the model’s capability as marketing
  2. Justifies the restricted release
  3. Pre-positions Anthropic to frame any rival lab’s general release of a similarly capable model as irresponsible

The third is the most strategic. Anthropic is shifting the axis of competition from “capability ceiling” to “release responsibility.”


Execution Layer — April 8 Claude Managed Agents

The Enterprise Agent Deployment Barrier

On April 8, 2026, Anthropic launched the public beta of Claude Managed Agents. To understand why this service exists, it helps to start from the real question enterprises face: why do most companies never move their agents from demo to production?

When a team tries to take an LLM-based agent from a demo into production, the wall they hit is rarely model quality. It is the surrounding infrastructure. Running an agent safely requires:

  • A sandbox environment to isolate code execution and prevent destructive actions
  • Secure storage and rotation of authentication tokens and secrets for tool calls
  • State management and failure recovery for long sessions (checkpointing)
  • Fine-grained permission policies (which tool, under which conditions)
  • Persistent long-running sessions
  • Multi-agent orchestration and recursive sub-agent invocation
  • Observability and audit logs

This list looks familiar. It overlaps significantly with the problems solved by traditional microservices on Kubernetes. The difference is that agents are nondeterministic and bring external tool calls with them. Building this surrounding infrastructure from scratch takes serious engineering effort. Anthropic’s public beta framing — “from months to weeks” — targets exactly this barrier.
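
To make one item from the list above concrete, here is a minimal sketch of the "fine-grained permission policies" idea: a per-tool allowlist with per-call conditions, evaluated before every tool invocation. All names and structure here are illustrative, not Managed Agents' actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolPolicy:
    """Whether a tool is allowed, plus an optional per-call condition."""
    allowed: bool = False
    condition: Callable[[dict], bool] = field(default=lambda args: True)

# Hypothetical policy table for one agent.
POLICIES = {
    "read_file": ToolPolicy(allowed=True),
    # Shell access only for commands that pass a (naive) destructive check:
    "run_shell": ToolPolicy(
        allowed=True,
        condition=lambda args: not args.get("cmd", "").startswith("rm "),
    ),
    # No outbound network by default:
    "http_request": ToolPolicy(allowed=False),
}

def authorize(tool: str, args: dict) -> bool:
    """Gate every tool call through the policy table; deny unknown tools."""
    policy = POLICIES.get(tool)
    return bool(policy and policy.allowed and policy.condition(args))

print(authorize("read_file", {}))                   # True
print(authorize("run_shell", {"cmd": "rm -rf /"}))  # False
print(authorize("http_request", {"url": "x"}))      # False
```

A production version would live outside the agent process entirely (in the sandbox boundary), precisely because a nondeterministic agent cannot be trusted to enforce its own policy.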

The Nine Layers Claude Managed Agents Absorbs

The feature set:

| Capability | Description |
| --- | --- |
| Sandbox containers | Isolated container automatically provisioned per agent |
| Auth and secrets | Secure storage of credentials needed for external tool access |
| Checkpointing | Resume from prior state after a failure |
| Scoped permissions | Per-tool, per-task permission policy |
| Persistent long-running sessions | Continuous execution over hours to days |
| Tool orchestration | Automatic selection of which available tool fits the task |
| Sub-agent spawning | Recursive decomposition of complex tasks into sub-agents |
| Automatic prompt refinement | Up to +10 points task success rate in internal testing |
| Observability and audit | Behavior logs and dashboards |

For an enterprise, the meaning is concrete. Components that used to require months of design, validation, and operation — if built internally — are replaced on day one by Anthropic’s defaults.

Pricing Model — From Flat to Metered

The pricing structure is also worth examining.

  • Model token usage — billed at existing Anthropic API rates
  • Agent runtime — $0.08 per hour, idle excluded, measured in milliseconds
  • Web search — separate fee (exact rate per Anthropic official documentation)

The key is the new billing unit: agent runtime hour. The user is only billed for the time the agent is actively performing work. Idle waiting time is excluded.
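
A sketch of what that billing unit implies in practice. The $0.08 rate and millisecond metering are from the announcement; the 24-hours-on, 6-hours-active scenario is an illustrative assumption.

```python
# Disclosed runtime rate: $0.08 per active runtime-hour, idle excluded,
# metered in milliseconds.
RUNTIME_RATE_PER_HOUR = 0.08
MS_PER_HOUR = 3_600_000

def runtime_cost(active_ms: int) -> float:
    """Bill only active milliseconds, converted to hours."""
    return (active_ms / MS_PER_HOUR) * RUNTIME_RATE_PER_HOUR

# An agent that stays "on" for 24 hours but actively works for 6:
cost = runtime_cost(active_ms=6 * MS_PER_HOUR)
print(f"${cost:.2f}")  # 6 active hours at $0.08/h = $0.48
```

Note that runtime is the smaller line item by design: token usage at API rates will dominate the bill for most agents, with the runtime fee covering the hosted infrastructure itself.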

This pricing model is aligned with the April 4 third-party cutoff in a deeper way. The “use all you want” flat subscription model broke under agentic workloads. The alternative is metered billing proportional to actual compute consumption. Claude Managed Agents is the first Anthropic product that formalizes that alternative as the default.

The Launch Customers — Notion, Rakuten, Asana

Three launch customers were named at public beta:

  • Notion — workspace and document collaboration platform, actively integrating AI across its product
  • Rakuten — Japanese e-commerce and fintech group, experimenting with AI personalization
  • Asana — project and work management SaaS, expanding AI-powered workflow features

They have something in common. All three are workflow tools with large user bases. Documents, projects, and e-commerce are different domains, but they share one product need: agents that automate the user’s repetitive work. The infrastructure wall these companies hit when trying to build such agents internally is the exact target persona for Managed Agents.

That all three were disclosed publicly is significant as a case study. According to Anthropic, some of these customers had already shipped Managed Agents-powered agents into their products by the time of the public beta announcement. This is not “announce first, iterate later” — it is closer to “announce after production validation.”


Access, Capability, Execution — One Direction Across 96 Hours

Why the Three Landed in the Same Week

Before the synthesis, a compressed timeline of the 96 hours.

```mermaid
timeline
    title Anthropic April 4 to April 8, 2026
    April 4 : Access Layer
            : Third-party harness cutoff
            : Cherny statement released
            : ToS D.4 enforcement extended
    April 7 : Capability Layer
            : Mythos Preview disclosed
            : Project Glasswing launched
            : $100M credits, $4M in donations
    April 8 : Execution Layer
            : Managed Agents public beta
            : Nine infrastructure layers absorbed
            : $0.08 per runtime-hour metered billing
```

The spacing of the three releases is tight enough to resist a coincidence reading. Each came from different teams and different product lines, but they point in a coherent direction along a single axis: the scope of Anthropic’s direct control. The company adjusted all three layers in the same week.

From Model API Vendor to Vertically Integrated Stack Operator

In one line each:

  • Access — Reclaim distribution routes from intermediaries (third-party harnesses) and permit only direct relationships
  • Capability — Withhold frontier models from general API release and supply them selectively to a partner alliance
  • Execution — Absorb agent runtime infrastructure into a managed service hosted by Anthropic

All three point at the same position shift. Anthropic is stepping out of the “model API vendor” category and into the role of a vertically integrated AI stack operator. This pattern has precedent in SaaS history. Stripe moved from payments API to full financial infrastructure. Shopify expanded from e-commerce SaaS to full-stack operator. Twilio grew from messaging API into a customer experience platform. Anthropic’s April 2026 is the AI version of the same expansion curve.

The New Enterprise Decision — How Many Layers to Accept

From the enterprise buyer’s perspective, the question is changing. Through 2025, the question was “OpenAI or Anthropic or Google?” — essentially a vendor selection problem. After April 2026, the question has a second dimension: how many layers of Anthropic do you accept?

| Dependency depth | What you gain | What you give up |
| --- | --- | --- |
| Access only (API consumer) | Simple model calls, low lock-in | Full responsibility for the infrastructure wall to production |
| Access + Capability (Glasswing-tier partnership) | Early access to frontier models, joint PR with Anthropic | Acceptance of partnership terms and disclosure limits, constraints on independent model strategy |
| Access + Capability + Execution (Managed Agents user) | Months of infrastructure work saved, security and audit delegated | Runtime behavior and data paths delegated to Anthropic, deeper lock-in |

None of these is the “right” answer. For startups and SaaS companies, Managed Agents is a reasonable tradeoff — months of engineering are replaced by a managed service. For large enterprises and regulated industries, data sovereignty, auditability, and supplier concentration risk are heavier variables.

What is clear is that the 96 hours of April 2026 put this choice explicitly on the table. Until recently, “we use the model API” was a single decision. Now, “how many layers do we hand to the provider?” is a separate decision that has to be made.


Sources

  1. April 4 third-party harness cutoff — VentureBeat, “Anthropic cracks down on unauthorized Claude usage by third-party harnesses and rivals”, VentureBeat, “Anthropic cuts off the ability to use Claude subscriptions with OpenClaw”, TechCrunch (2026.04.04), The Register (2026.02.20)

  2. Boris Cherny statement and technical safeguard — VentureBeat cracks-down article, Anthropic Commercial Terms of Service Section D.4

  3. August 2025 OpenAI Claude API revocation — Multiple cross-verified press reports, Anthropic ToS D.4 Use Restrictions

  4. Subscription economics $200 vs $3,650 analysis — Sderosiaux Substack, “$200 subscription VS $3,650 in compute: why Anthropic banned OpenClaw”

  5. Claude Mythos Preview announcement and benchmarks — red.anthropic.com/2026/mythos-preview (primary source)

  6. Project Glasswing structure and funding — anthropic.com/glasswing (primary source)

  7. Mythos press context — Fortune, “Exclusive: Anthropic ‘Mythos’ AI model representing step change”, Tom’s Hardware, CrowdStrike Blog

  8. Claude Managed Agents launch and features — SiliconANGLE (2026.04.08), The New Stack, “With Claude Managed Agents, Anthropic wants to run your AI agents for you”

  9. Claude Mythos Preview availability on Vertex AI — Google Cloud Blog
