Every token has a price

Count tokens and calculate costs for GPT-4o, Claude, Gemini, Grok, DeepSeek and 25+ more AI models. Optimize your prompts, compare pricing, and ship — tokenization runs 100% in your browser, so your prompt content never touches our servers.

30 models · 7 providers · 100% private · $0, free forever

How the token counter works

Three steps from raw text to a precise cost estimate for any major AI model.

1

Paste or type your prompt

Use the editor above. It supports single-prompt mode or multi-turn conversation mode for testing chat flows. Variables like {{name}} get resolved before counting.

2

Pick your model

Choose from 30 models. For GPT-4o, GPT-4.1, o3 and o4-mini we run the exact tiktoken library (WebAssembly) locally — byte-perfect token counts. For Claude, Gemini, Grok, and DeepSeek we apply close approximations.

3

Get your cost estimate

Estimated cost = input tokens × input rate + estimated output tokens × output rate. Adjust the input/output ratio slider to match your real usage. Export to curl, Python, or Node.js snippets for direct API integration.
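The step-3 arithmetic can be sketched in a few lines of Python. The rates below are illustrative placeholders, not the tool's live pricing:

```python
def estimate_cost(input_tokens, input_rate, output_rate, output_ratio=3.0):
    """Estimate prompt cost: input tokens at the input rate, plus
    estimated output tokens (input * ratio) at the output rate.
    Rates are USD per million tokens."""
    output_tokens = input_tokens * output_ratio
    input_cost = input_tokens / 1_000_000 * input_rate
    output_cost = output_tokens / 1_000_000 * output_rate
    return input_cost + output_cost

# Example: 1,000 input tokens at $2.50/M in, $10.00/M out, 3x output ratio
# -> 0.0025 + 0.03 = $0.0325
print(round(estimate_cost(1000, 2.50, 10.00), 4))
```

Dragging the ratio slider in the tool is equivalent to changing `output_ratio` here.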

The prompt engineering toolkit

Measure, build, optimize, and ship — all in your browser.

Measure

Exact Token Counting

Same tiktoken tokenizer as OpenAI, running via WebAssembly. Exact for GPT, smart approximation for others.

Multi-Model Cost Compare

Side-by-side pricing for 30 models across OpenAI, Anthropic, Google, xAI, DeepSeek, Mistral, and Meta.

Context Predictor

Visual breakdown of how much context window your prompt and expected output will use. Warnings before you overflow.
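The predictor's core calculation is simple. A minimal sketch (the 128k window below is GPT-4o's published context size, used purely as an example):

```python
def context_usage(input_tokens: int, expected_output: int, context_window: int) -> float:
    """Percent of the model's context window that the prompt plus
    expected output would consume; values near 100 mean imminent overflow."""
    return (input_tokens + expected_output) / context_window * 100

# 100,000 prompt tokens + 30,000 expected output in a 128k window
pct = context_usage(100_000, 30_000, 128_000)
print(f"{pct:.1f}% used")  # over 100% means the request would overflow
```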

Build

Prompt Builder

Compose prompts step by step — task, tone, format, techniques (CoT, few-shot, XML tags, rubrics). Live token preview.

Template Library

11 professional prompt templates: system prompts, chain-of-thought, few-shot, code review, summarization.

Prompt Chains

Build multi-step pipelines. Each step has its own model and output variable. See cumulative tokens and cost.
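Cumulative accounting across a chain is just a sum over steps. A hypothetical sketch (step names, token counts, and per-million rates are made up for illustration):

```python
# Each chain step carries its own model rate; totals accumulate across steps.
steps = [
    {"name": "extract",   "tokens": 800,  "rate_per_m": 2.50},
    {"name": "summarize", "tokens": 1200, "rate_per_m": 0.15},
]
total_tokens = sum(s["tokens"] for s in steps)
total_cost = sum(s["tokens"] / 1_000_000 * s["rate_per_m"] for s in steps)
print(total_tokens, round(total_cost, 6))  # 2000 tokens, $0.00218
```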

Optimize

Prompt Optimizer

Auto-detect verbose phrases, filler words, and repetitions. One-click fix with undo. See tokens saved instantly.

Ratio Calculator

Paste a real prompt plus response to measure your output ratio. Get specific advice to reduce it.

Template Variables

Use {{variable}} placeholders. Fill in values in a side panel; token counts reflect the resolved prompt.
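The tool's resolver is internal, but the idea is a straightforward placeholder substitution. A minimal sketch:

```python
import re

def resolve_template(prompt: str, values: dict) -> str:
    """Replace {{name}}-style placeholders with their values before
    counting tokens; unknown placeholders are left untouched."""
    def sub(match):
        key = match.group(1)
        return str(values.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, prompt)

print(resolve_template("Hello {{name}}, your plan: {{plan}}", {"name": "Ada"}))
# -> Hello Ada, your plan: {{plan}}
```

Counting the resolved string rather than the raw template matters: a long value filled into a short placeholder can change the token count substantially.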

Ship

API Export

Copy your prompt as a ready-to-paste curl, Python (openai/anthropic SDK), or Node.js snippet.

Version & Compare

Save prompt versions, then A/B compare any two. See token diff, cost diff, and line-by-line changes.

Share & Track

Share prompts via URL. Save snippets for reuse. Session metrics track your tokens, costs, and model usage over time.

Prompt content stays in your browser. No prompts sent to any server.

Frequently asked questions

What is an AI token?

A token is a chunk of text that an AI model processes as one unit — not exactly a word, closer to a common syllable or subword. GPT-4o, Claude, and Gemini tokenize text differently, which is why the same sentence can have different token counts across models.

How does WeighMyPrompt count tokens?

For OpenAI models (GPT-4o, GPT-4.1, o3, o4-mini), we use the official tiktoken library running as WebAssembly in your browser — the exact tokenizer OpenAI uses. For other models (Claude, Gemini, Grok, DeepSeek) we apply an approximation close to their tokenizers' behavior.
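The same exact-when-possible, approximate-otherwise approach can be sketched in Python. The ~4-characters-per-token fallback is a common rule of thumb for English text, not the tool's actual approximation:

```python
def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Exact count via tiktoken when available (OpenAI models);
    otherwise a rough ~4-characters-per-token approximation."""
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        return max(1, round(len(text) / 4))

print(count_tokens("Every token has a price."))
```

In the browser the same tiktoken logic runs as WebAssembly, so no text leaves the page.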

Is my prompt data sent to any server?

No. Everything runs in your browser — tokenization, cost calculation, optimization, and storage. Nothing leaves your device. Check the Network tab in your browser's devtools to verify.

Which models does WeighMyPrompt support?

30 models across 7 providers: Anthropic, DeepSeek, Google, Meta, Mistral, OpenAI, xAI. Includes the latest GPT-5, GPT-5.4, Claude Sonnet 4.6, Claude Opus 4.7, Gemini 3.1 Pro, Grok 4.20, and more.

How accurate are the cost estimates?

Input costs are exact (based on token count × provider's listed input price per million tokens). Total cost includes a configurable output-token multiplier (default 3×) — adjust the slider to match your real input/output ratio for realistic totals.

Is WeighMyPrompt free?

Yes. Free, no signup, no API key needed. Your prompts never leave your browser. There is no paid tier.

Can I compare costs across models for the same prompt?

Yes. Open the sidebar's cost comparison table to see your exact prompt priced across all 30 supported models side-by-side — including input, output, and total estimated cost.

What's the System Prompts Directory?

A separate section where we analyze leaked system prompts from 30+ AI tools like Cursor, Claude Code, v0, Windsurf, and ChatGPT. Each page shows the exact prompt, token count, cost, and which prompt engineering techniques are used.