o1-pro — Deprecated
- Deprecated: —
- Shutdown: 2026-10-23
- Status: deprecated
- Replacement: 5.4-Pro
Quick fix — copy & paste
Choose your language. The "before" block matches the deprecated call; the "after" block is the drop-in replacement.
Python

```python
# OpenAI: o1-pro (deprecated)
model = "o1-pro"
```

```python
# Replacement
model = "5.4-Pro"
```

JavaScript / TypeScript

```javascript
// OpenAI: o1-pro (deprecated)
const model = "o1-pro";
```

```javascript
// Replacement
const model = "5.4-Pro";
```

JSON config

```json
"model": "o1-pro"
```

```json
"model": "5.4-Pro"
```

This migration was generated automatically from the model rename. If your code does more than swap a model id, double-check request/response shapes against the official OpenAI migration guide.
Error messages
Seeing one of these? You're in the right place.
- model_not_found: o1-pro
- The model `o1-pro` has been deprecated
- The model `o1-pro` does not exist or you do not have access to it
- model_not_found: o1-pro-2025-03-19
- The model `o1-pro-2025-03-19` has been deprecated
- The model `o1-pro-2025-03-19` does not exist or you do not have access to it
Replacement options
- 5.4-Pro (Compare token cost →)
Also known as
These ids point to the same deprecated model.
- o1-pro-2025-03-19

Other OpenAI deprecations
What this means for your code
o1-pro is a reasoning model that uses extended chain-of-thought internally before responding. Reasoning models charge for hidden reasoning tokens on top of completion tokens. Replacements may charge differently or expose new reasoning_effort parameters. Latency profiles also change — your timeout and retry logic may need adjustment.
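Because hidden reasoning tokens are billed at the output rate, a quick back-of-envelope check is worth doing before committing to a replacement. A minimal sketch; the token counts and per-million prices below are made-up placeholders, not the real rates for either model:

```python
def estimate_cost(prompt_tokens, visible_output_tokens, reasoning_tokens,
                  input_price_per_1m, output_price_per_1m):
    """Estimate a single request's cost in dollars.

    Hidden reasoning tokens are billed at the output rate on top of the
    visible completion tokens, which is why they can dominate the bill
    for reasoning-heavy prompts.
    """
    billed_output = visible_output_tokens + reasoning_tokens
    return (prompt_tokens * input_price_per_1m
            + billed_output * output_price_per_1m) / 1_000_000

# Example: 1k prompt tokens, 500 visible output tokens, 2k hidden
# reasoning tokens, at placeholder prices of $15/$60 per million tokens.
cost = estimate_cost(1_000, 500, 2_000, 15.0, 60.0)  # -> 0.165
```

Note that the 2,000 hidden tokens account for most of the bill here, which is the pattern to expect when re-projecting costs in the checklist below.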
OpenAI has scheduled o1-pro for shutdown on 2026-10-23. That gives you 168 days to migrate. Until then the model still works, but every API call after that date will return a model_not_found error.
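If you want CI or a dashboard to track the deadline rather than a human, the remaining time falls out of simple date arithmetic. A sketch using only the shutdown date quoted above:

```python
from datetime import date

SHUTDOWN = date(2026, 10, 23)  # o1-pro shutdown date from the notice above

def days_until_shutdown(today=None):
    """Days remaining before o1-pro calls start returning model_not_found."""
    return (SHUTDOWN - (today or date.today())).days

# Example: measured from 2026-05-08 there are 168 days left.
print(days_until_shutdown(date(2026, 5, 8)))  # -> 168
```

Wiring this into a scheduled job that fails (or pages) below some threshold turns a silent deadline into a visible one.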
Find every call in your codebase
Before you change anything, locate every place the deprecated model id is referenced. Search source files, environment files, feature flags, and config repos. Use these commands from your project root:
Python projects
```shell
grep -rn '"o1-pro"' --include='*.py' .
```

JavaScript / TypeScript projects

```shell
grep -rn '"o1-pro"' --include='*.js' --include='*.ts' --include='*.tsx' --include='*.jsx' .
```

(GNU grep takes one glob per `--include` flag and does not do brace expansion, so list each extension separately.)

Anywhere (configs, scripts, infra)

```shell
grep -rn "o1-pro" .
```

Run the same searches for each alias listed above (o1-pro-2025-03-19). Different aliases for the same model often coexist in older code paths.
Migration checklist
Steps in order. Skip any that don't apply, but read the whole list — for reasoning models, the non-obvious steps are usually the ones that break in production.
- 1. Update the model id in API calls
- 2. Audit max_tokens and reasoning_effort settings against the new model's defaults
- 3. Re-tune timeout and retry budgets — reasoning models have higher P99 latency
- 4. Verify cost projections — hidden reasoning tokens can be 3-10x the visible output
- 5. Test on edge cases that exercised the old model's reasoning depth
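Step 3 is the one that most often breaks silently. A minimal retry wrapper with a generous per-attempt timeout might look like this; `call_model` stands in for your actual SDK call, and the budgets are placeholders to tune against the new model's measured P99:

```python
import random
import time

def call_with_retries(call_model, max_attempts=3, timeout_s=180.0):
    """Retry a model call on timeout with exponential backoff and jitter.

    Reasoning models have long-tailed latency, so the per-attempt timeout
    is deliberately generous and the backoff caps well below it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call_model(timeout=timeout_s)
        except TimeoutError:
            if attempt == max_attempts:
                raise
            time.sleep(min(2 ** attempt + random.random(), 30.0))
```

Measure the new model's real latency distribution before settling on `timeout_s`; copying the old model's budget is exactly the mistake this step exists to catch.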
Will this migration cost more?
Switching from o1-pro to 5.4-Pro could change your costs significantly. Calculate the exact difference for your prompts.
Open the cost calculator →