Start free. Scale securely.

Plans for everyone from indie developers to enterprise teams.

| Feature Area | Free | Pay-As-You-Go | Enterprise |
| --- | --- | --- | --- |
| Platform Fee | | 4.5% | Bulk discounts available |
| Models | 400+ | 400+ | 400+ |
| Model Providers | 20+ | 20+ | 20+ |
| Traces Retention | 14-day data retention | 30-day data retention | Custom |
| Provisioning API Key | | | |
| Bring Your Own API Keys (BYOK) | Required | Optional | Optional |
| BYOK Limits | | 1M free reqs/month; 4% fee after | Custom |
| Rate Limits | 50 reqs/day & 20 reqs/min | 100 reqs/min | Custom |
| Token Pricing | | No minimum spend; model-based prices | Volume commitments; model-based prices |
| Traces Included | Included (limited by rate limits) | 25,000; €0.0015 per trace thereafter | Custom |
| Ingestion Volume | 1 GB | 1 GB; €3/GB thereafter | Custom |

Additional platform capabilities (availability varies by plan):

  • Credit-Based Usage
  • Orq-Managed Billing
  • Unified Credit Balance Across Providers
  • Unified API (400+ models, 20+ providers)
  • Manual Model Selection
  • Provider Fallback
  • Retry Logic
  • Budget Control on Keys
  • Intelligent Routing
  • Latency & Error Monitoring
  • Multi-Project Support
  • Environment Separation (Dev / Staging / Prod)
  • Role-Based Access Control (RBAC), with Advanced RBAC on Enterprise
  • SSO (SAML / OIDC)
  • Audit Logs & Change History
  • Custom Data Retention & Residency
  • SLAs (Uptime & Latency)
  • Dedicated Infrastructure
  • Dedicated Support & CSM
  • Custom Onboarding & Training

Frequently asked questions

Billing and Pricing

How are tokens billed?

Input and output tokens are billed per model at posted rates. You can track all token usage in real time through the Orq.ai dashboard.
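As an illustration, per-token billing is simple arithmetic over posted per-million-token rates. The rates and model names below are made up for the example; the real per-model rates are the ones shown in the dashboard.

```python
# Hypothetical per-million-token rates in dollars; real rates are per model
# and published on the Orq.ai dashboard.
RATES = {
    "example/small-model": {"input": 0.15, "output": 0.60},
    "example/large-model": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one request: tokens billed at the model's posted rate."""
    rate = RATES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# A request with 1,200 input and 300 output tokens on the small model:
cost = request_cost("example/small-model", 1_200, 300)
# 1200 * 0.15/1e6 + 300 * 0.60/1e6 = 0.00018 + 0.00018 = 0.00036
```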

Do you mark up model provider pricing?

No. We pass through model provider pricing at cost. The prices shown in the platform are exactly what the underlying providers (OpenAI, Anthropic, etc.) charge — there are no hidden markups on token costs.

How is billing structured across plans?

  • Pay-as-you-go: Purchase credits and use them flexibly. You can set up automatic top-ups or add credits manually. Usage is tracked in real time from your dashboard.

  • Enterprise: Pricing is based on volume, prepayment credits, annual commitments, and other factors tailored to your organization's needs.

Are failed or fallback requests billed?

No. When routing or fallback is enabled, you're only billed for the successful model run. Failed attempts are not charged.
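The billing rule can be sketched as a fallback loop that records a charge only for the attempt that succeeds. The provider functions here are local stand-ins for the example, not Orq.ai API calls.

```python
def complete_with_fallback(providers, prompt):
    """Try providers in order; only the successful attempt produces a charge."""
    charges = []
    for name, call in providers:
        try:
            result = call(prompt)
        except Exception:
            continue             # failed attempt: nothing is billed
        charges.append(name)     # only the successful run is billed
        return result, charges
    raise RuntimeError("all providers failed")

# Stand-in providers: the first always errors, the second succeeds.
def flaky(prompt):
    raise TimeoutError("provider unavailable")

def stable(prompt):
    return f"echo: {prompt}"

result, charges = complete_with_fallback([("flaky", flaky), ("stable", stable)], "hi")
# result == "echo: hi"; charges == ["stable"] — the failed attempt cost nothing
```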

Do you offer volume discounts or annual plans?

Yes. We offer prepayment credits, volume discounts, annual commitments, and invoicing via purchase orders. Contact our sales team for details.

Are streaming responses billed differently?

No. Pricing is per token regardless of whether you use streaming or standard responses. You only pay for successful completions when routing/fallback is enabled.

What payment methods do you accept?

Pay-as-you-go accepts credit and debit cards. Enterprise plans support invoicing and purchase orders. Contact Sales if you have specific procurement requirements.

Are taxes (VAT/GST) included in prices?

Prices are exclusive of applicable taxes. Where required by law, VAT or GST may be added to invoices.

Is there a minimum spend or lock-in on Pay-as-you-go?

Pay-as-you-go has a minimum spend of $5. There is no long-term lock-in: you pay for what you use, subject to a minimum platform fee of $0.80.

Usage and Rate Limits

Do you enforce rate limits?

Free-tier users are subject to rate limits. Pay-as-you-go and Enterprise users have high or custom rate limits to support production workloads without interruption.
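To make the free-tier numbers concrete, here is a client-side sketch of how dual limits (50 requests/day and 20 requests/minute) combine. Orq.ai enforces its limits server-side; this only illustrates the sliding-window arithmetic.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Sketch of dual sliding-window limits (20 req/min and 50 req/day)."""

    def __init__(self, per_minute=20, per_day=50):
        # (window length in seconds, max requests, timestamps of recent hits)
        self.limits = [(60.0, per_minute, deque()), (86_400.0, per_day, deque())]

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        for window, limit, hits in self.limits:
            while hits and now - hits[0] >= window:
                hits.popleft()          # drop hits outside the window
            if len(hits) >= limit:
                return False            # one exhausted window throttles the request
        for _, _, hits in self.limits:
            hits.append(now)
        return True

limiter = SlidingWindowLimiter()
# 20 requests in the same second are allowed; the 21st trips the per-minute cap.
allowed = [limiter.allow(now=0.0) for _ in range(21)]
```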

Can I separate environments (dev/staging/production)?

Yes. You can create separate API keys per environment, each with its own usage caps, making it easy to manage costs across your development lifecycle.

Are there platform-level rate limits?

No. Orq.ai does not add platform-wide rate limits on top of the per-plan request limits described above.

Routing and Latency

Can I route API requests to specific regions?

Yes. Regional routing is available on Enterprise and Pay-as-you-go plans, allowing you to comply with data residency requirements and optimize for latency.

Does routing affect latency?

Routing improves reliability by providing fallback options, though latency may vary by model, provider, and region.

What happens if a model is deprecated or pricing changes?

If a model is deprecated, requests to that model will return an error indicating the model is no longer available. If pricing changes, your requests will continue to be served, but you'll be charged at the updated rate. Changes are reflected in your billing automatically.

Can I pin specific model versions?

Yes. You can select an explicit model ID or version to avoid unexpected changes. You can also switch models at any time without changing your integration.

Privacy and Security

Do you train on customer data?

No. Orq.ai does not train on your data. You have full control over data retention policies, and provider-side logging can be disabled at the account level or per API call.

Do you support SSO?

Yes. SSO is available on Enterprise plans via SAML or OIDC. Contact us to enable it for your organization.

Models and Features

How do I migrate from OpenAI or Anthropic direct?

Orq.ai provides an OpenAI-compatible API. In most cases, you only need to update the base URL and model names. See our quickstart guide for step-by-step instructions.
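In practice the switch is a change of base URL and model name in an OpenAI-style request. The snippet below builds such a request with the standard library; the base URL and model id are illustrative assumptions, so confirm the real values in the quickstart guide.

```python
import json
import urllib.request

# Illustrative gateway endpoint; check the quickstart for the real base URL.
BASE_URL = "https://api.orq.ai/v2/proxy"

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request against the gateway."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "ORQ_API_KEY",                      # placeholder key
    "openai/gpt-4o-mini",               # illustrative model id
    [{"role": "user", "content": "Hello"}],
)
# Send with urllib.request.urlopen(req) once the key and URL are real.
```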

Do you support function calling and tools?

Yes. If the underlying model supports function calling or tool use, you can use those capabilities through the Orq.ai API. See our API reference for examples and supported models.
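Since the API is OpenAI-compatible, tool definitions use the standard OpenAI tools schema. A request body might look like the sketch below; the weather function and model id are made-up examples.

```python
# A made-up weather tool in the OpenAI tools schema; supported models
# receive this definition unchanged through the gateway.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "model": "openai/gpt-4o-mini",      # illustrative model id
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",              # let the model decide when to call the tool
}
```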

Reliability and Uptime

What happens if a provider goes down or a model errors?

When routing and fallback are enabled, Orq.ai can automatically retry with alternative models or providers. You're only billed for the successful completion.

Where can I check uptime and incidents?

Visit our status page for real-time uptime information and incident history.


Get your API key and start routing in minutes.