
o1-preview — Retired

Shutdown date: 2025-07-28
Status: deprecated
Replacement: o3

Quick fix — copy & paste

Choose your language. The "before" block matches the deprecated call; the "after" block is the drop-in replacement.

Broke on 2025-07-28
# OpenAI: o1-preview (deprecated)
model = "o1-preview"
Use this instead
# Replacement
model = "o3"

This migration was generated automatically from the deprecation notice; it only swaps the model id. If your code does more than that, double-check request/response shapes against the official OpenAI migration guide.
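If model ids are scattered across configs and call sites, it can help to centralize the swap in one place. A minimal sketch using only the standard library; the mapping covers only the model on this page, with the date taken from the notice above:

```python
# Resolve deprecated OpenAI model ids to their replacements before
# building a request. Extend the mapping from OpenAI's deprecations page.

DEPRECATED_MODELS = {
    "o1-preview": "o3",  # retired 2025-07-28
}

def resolve_model(model_id: str) -> str:
    """Return a supported model id, substituting the replacement if needed."""
    return DEPRECATED_MODELS.get(model_id, model_id)
```

Ids with no known replacement pass through unchanged, so the helper is safe to apply to every request.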

Error messages

Seeing one of these? You're in the right place.

  • model_not_found: o1-preview
  • The model `o1-preview` has been deprecated
  • The model `o1-preview` does not exist or you do not have access to it
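Those messages can also be caught at runtime so a stale config fails over instead of hard-failing. A hedged sketch: call_api is a hypothetical stand-in for your real request function, and in production you would match on the SDK's typed error class rather than message text:

```python
# Detect a deprecated/removed-model error from the error message and
# retry once with the replacement id. The substrings come from the
# error messages listed above.

DEPRECATION_MARKERS = (
    "model_not_found",
    "has been deprecated",
    "does not exist or you do not have access",
)

def is_model_gone(error_message: str) -> bool:
    msg = error_message.lower()
    return any(marker in msg for marker in DEPRECATION_MARKERS)

def call_with_fallback(call_api, model: str, fallback: str = "o3"):
    try:
        return call_api(model)
    except Exception as exc:  # in real code, catch the SDK's not-found error
        if model != fallback and is_model_gone(str(exc)):
            return call_api(fallback)
        raise
```

Treat this as a stopgap for rollout windows; the real fix is still updating the model id at the source.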


What this means for your code

o1-preview is a reasoning model that uses extended chain-of-thought internally before responding. Reasoning models charge for hidden reasoning tokens on top of completion tokens. A replacement may charge differently or expose new parameters such as reasoning_effort. Latency profiles also change, so your timeout and retry logic may need adjustment.
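To see how hidden reasoning tokens show up in billing, here is a sketch of output-token accounting. The field names mirror the usage block the API returns for reasoning models (completion_tokens includes reasoning tokens, with the breakdown under completion_tokens_details); verify them against your SDK version, and the price argument is a placeholder:

```python
# Split a usage block into visible vs. hidden output tokens. Both are
# billed at the output rate; only the visible ones appear in the response.

def output_cost(usage: dict, price_per_output_token: float) -> dict:
    completion = usage["completion_tokens"]  # includes reasoning tokens
    reasoning = usage["completion_tokens_details"]["reasoning_tokens"]
    return {
        "visible_tokens": completion - reasoning,
        "reasoning_tokens": reasoning,
        "billed_output_cost": completion * price_per_output_token,
    }
```

On hard prompts the reasoning share often dominates, which is why the visible response alone is a poor cost predictor.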

o1-preview was retired by OpenAI on 2025-07-28. API calls now return an error and the model is no longer accessible. New code should use o3; legacy code that still references this model id needs to be updated immediately.

Find every call in your codebase

Before you change anything, locate every place the deprecated model id is referenced. Search source files, environment files, feature flags, and config repos. Use these commands from your project root:

Python projects

grep -rn '"o1-preview"' --include="*.py" .

JavaScript / TypeScript projects

grep -rn '"o1-preview"' --include="*.js" --include="*.ts" --include="*.tsx" --include="*.jsx" .

Anywhere (configs, scripts, infra)

grep -rn "o1-preview" .
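The same search can run as a CI guard so the id cannot sneak back in. A sketch using only pathlib; the skip rules are assumptions to adapt to your repo:

```python
# Walk the project tree and report every (file, line number) that still
# mentions the retired model id. Unreadable/binary files are skipped.

from pathlib import Path

def find_references(root: str, needle: str = "o1-preview") -> list:
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle in line:
                hits.append((str(path), lineno))
    return hits
```

Fail the build if find_references(".") returns anything after the migration lands.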

Migration checklist

Steps in order. Skip any that don't apply, but read the whole list — for reasoning models, the non-obvious steps are usually the ones that break in production.

  1. Update the model id in API calls
  2. Audit max_tokens and reasoning_effort settings against the new model's defaults
  3. Re-tune timeout and retry budgets — reasoning models have higher P99 latency
  4. Verify cost projections — hidden reasoning tokens can be 3-10x the visible output
  5. Test on edge cases that exercised the old model's reasoning depth
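Step 3 can start from a per-model timeout table instead of one global value. The numbers below are illustrative only; derive real budgets from your own measured P99:

```python
# Per-model read-timeout budget in seconds. Reasoning models have a long
# latency tail on hard prompts, so they get a much larger budget than
# non-reasoning models. Values here are placeholders, not recommendations.

TIMEOUTS_S = {
    "o3": 300.0,      # reasoning model: long tail on hard prompts
    "gpt-4o": 60.0,   # non-reasoning baseline
}

def timeout_for(model: str, default: float = 120.0) -> float:
    return TIMEOUTS_S.get(model, default)
```

Pair the larger timeout with fewer retries; retrying a five-minute reasoning call on timeout multiplies both latency and cost.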

Will this migration cost more?

Switching from o1-preview to o3 could change your costs significantly. Calculate the exact difference for your prompts.
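The comparison is simple arithmetic once you have per-million-token rates. All four rates below are hypothetical placeholders; substitute current prices from OpenAI's pricing page, and remember that output tokens include hidden reasoning tokens:

```python
# Cost of a single request given token counts and per-million-token rates.

def request_cost(prompt_tokens, output_tokens, price_in_per_m, price_out_per_m):
    return (prompt_tokens * price_in_per_m + output_tokens * price_out_per_m) / 1_000_000

# Example: 2,000 prompt tokens, 1,500 output tokens (incl. reasoning),
# with made-up rates for the old and new model.
old = request_cost(2000, 1500, 15.0, 60.0)
new = request_cost(2000, 1500, 2.0, 8.0)
```

Run this over a representative sample of real traffic (with real usage counts) rather than a single prompt; reasoning-token volume varies widely by task.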
