Pricing
ACP pricing has two layers: a workspace plan with included monthly credits, and optional usage-based infrastructure on Team and Enterprise plans that lets execution continue after included credits are exhausted.
Workspace plans
The public ACP plan model is Free, Individual, Team, and Enterprise. Team and Enterprise are priced per seat and unlock collaboration, customer-managed inference, and infrastructure controls for larger deployments.
Free: Best for evaluating ACP, testing the SDK, and shipping first threads and computers without a paid seat.
Individual: Best for individual builders who want premium models, custom agents, persistent computers, and API access in one plan.
Team: Best for product teams that need shared projects, shared resources, bring-your-own inference, and spend-controlled usage.
Enterprise: Best for larger organizations that need higher included usage, governance, custom rollout, and customer-managed infrastructure.
Included monthly credits
How ACP accounts for usage
ACP tracks managed usage in Compute Tokens. This gives the platform one comparable budget across model execution, runtime minutes, and other managed infrastructure that is billed inside the product.
Included monthly credits are always used first. A single thread, computer session, or agent workflow can consume both model usage and runtime from the same plan budget.
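The accounting model above can be sketched in a few lines. This is a minimal illustration, not ACP's actual implementation: the class name, fields, and numbers are hypothetical, but it mirrors the stated rule that model usage and runtime draw Compute Tokens from the same included budget first.

```python
from dataclasses import dataclass

@dataclass
class CreditBudget:
    """Hypothetical sketch of plan-level Compute Token accounting."""
    included_remaining: int  # Compute Tokens left from the monthly plan grant

    def charge(self, model_tokens: int, runtime_tokens: int) -> int:
        """Deduct model and runtime usage from the same included budget.

        Returns the overage: Compute Tokens not covered by included credits.
        """
        total = model_tokens + runtime_tokens
        covered = min(total, self.included_remaining)
        self.included_remaining -= covered
        return total - covered

# One thread consumes both model usage and runtime from one plan budget.
budget = CreditBudget(included_remaining=1_000)
overage = budget.charge(model_tokens=600, runtime_tokens=300)
```

After this charge no overage has accrued and 100 Compute Tokens of included credit remain; a later charge that exceeds the remainder would return the uncovered portion as overage.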
What credits normally cover
Managed model usage for threads, agents, teams, and research flows.
Baseline runtime for ACP computers, agent execution, and related platform work.
Standard developer workflows while you are building, testing, and iterating in the platform.
Usage-based infrastructure
When metered billing starts
On Team and Enterprise, you can enable usage-based infrastructure after included monthly credits are exhausted. If metered billing is disabled, resource activity pauses when your included budget runs out.
If metered billing is enabled, workloads continue until you hit your configured spend cap. This is the right setup for always-on apps, shared team resources, and longer-running automation.
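The rules above reduce to a small decision: included credits first, then metered usage up to the spend cap, otherwise pause. The function below is a hypothetical sketch of that logic (the names and signature are illustrative, not an ACP API):

```python
def should_continue(included_remaining: int,
                    metered_enabled: bool,
                    metered_spend: float,
                    spend_cap: float) -> bool:
    """Hypothetical sketch: does a workload keep running under these billing rules?"""
    if included_remaining > 0:
        return True              # included monthly credits are always used first
    if not metered_enabled:
        return False             # activity pauses when the included budget runs out
    return metered_spend < spend_cap  # metered usage continues until the spend cap
```

For an always-on app, the goal is to stay in the third branch: credits exhausted, metered billing enabled, and spend still under the configured cap.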
Typical metered resource categories
Bring your own models
Team and Enterprise can connect an OpenAI-compatible inference endpoint and expose those models inside ACP. This is the right path when you want to route execution to your own infrastructure, use customer-managed credentials, or standardize on a specific inference stack.
ACP-hosted models consume included credits first and are the default path for most teams.
Team and Enterprise can route supported workloads to their own inference endpoint.
Larger deployments can combine BYOM, governance controls, spend caps, and customer-managed rollout.
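Because BYOM targets any OpenAI-compatible endpoint, requests follow the standard chat-completions wire format. The sketch below builds such a request by hand to show the shape; the endpoint URL, model name, and key are placeholders, and in practice you would point an OpenAI-compatible client at your endpoint rather than constructing requests manually.

```python
import json
from urllib.request import Request

# Placeholder endpoint; any OpenAI-compatible server exposes the same paths.
BASE_URL = "https://inference.example.internal/v1"

def chat_request(model: str, prompt: str, api_key: str) -> Request:
    """Build a chat-completions request in the OpenAI-compatible wire format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("my-model", "hello", api_key="example-key")
```

Standardizing on this wire format is what lets ACP route supported workloads to customer-managed infrastructure without changing application code.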
Plan identifiers
ACP documentation should use the canonical plan IDs below. Some older systems may still emit historical aliases, but the platform normalizes them to the current names before evaluating access.
Canonical plan IDs
Use these plan identifiers in product logic, entitlement checks, and internal tooling.
Legacy aliases
Older systems may still emit historical tier IDs. ACP normalizes them before evaluating access.
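Normalization of this kind is typically a lookup before any entitlement check. The sketch below uses lowercase forms of the plan names from this page as stand-in canonical IDs and invented legacy aliases; none of these are the actual ACP identifiers.

```python
# Illustrative placeholders only, not the actual ACP plan IDs.
CANONICAL_PLANS = {"free", "individual", "team", "enterprise"}

# Hypothetical historical tier IDs mapped to current names.
LEGACY_ALIASES = {
    "starter": "free",
    "pro": "individual",
}

def normalize_plan(plan_id: str) -> str:
    """Map a possibly-legacy plan ID to its canonical form before access checks."""
    canonical = LEGACY_ALIASES.get(plan_id, plan_id)
    if canonical not in CANONICAL_PLANS:
        raise ValueError(f"unknown plan id: {plan_id}")
    return canonical
```

Rejecting unknown IDs outright, rather than passing them through, keeps entitlement logic from silently granting or denying access on a typo.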