Token Calculator
Paste your prompt and instantly compare token counts and costs across 12 AI models.
[Input box: Your Prompt / Text — live counters update with the token, character, and word counts as you type.]
How token cost is calculated
1. Count tokens: Your text is split into tokens. Roughly 4 characters ≈ 1 token, so "Hello world" ≈ 3 tokens. This is an approximation: OpenAI, Anthropic, and Google each use their own tokenizer, so exact counts vary slightly by model.
2. Input cost: You pay for every token in your prompt (what you send to the AI). The rate varies by model; cheaper models charge less per token.
3. Output cost: You also pay for every token the AI generates in its response. Output tokens are usually 2–5× more expensive than input tokens. The table below shows the cost for your input only.
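The ~4 characters per token rule from step 1 can be sketched as a tiny estimator. Exact counts require each provider's own tokenizer (for example, OpenAI's tiktoken library); this heuristic is only an approximation.

```python
import math

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ceil(characters / 4), per the ~4 chars/token rule."""
    return math.ceil(len(text) / 4)

# "Hello world" is 11 characters, so ceil(11 / 4) = 3 tokens,
# matching the example above.
print(estimate_tokens("Hello world"))  # 3
```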
Cost = (your_tokens ÷ 1,000,000) × price_per_million_tokens
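That formula translates directly to code. The price used here is a hypothetical placeholder, not a real model's rate:

```python
def token_cost(tokens: int, price_per_million: float) -> float:
    """Cost = (tokens / 1,000,000) * price_per_million_tokens."""
    return (tokens / 1_000_000) * price_per_million

# e.g. a 500-token prompt at a hypothetical $3.00 per million input tokens:
print(f"${token_cost(500, 3.00):.6f}")  # $0.001500
```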
Cost Comparison — 12 Models
Sorted cheapest first. Input cost = the cost to send your prompt; output cost = the cost of a response of the same length. Real output length will vary.
Paste your text above to see costs across all models
Model Benchmarks
Public benchmark scores from official model cards and independent evaluations (2025–2026). Use these to pick the right model — not just the cheapest one.
| Model | MMLU ↑ | HumanEval ↑ | GPQA ↑ | MATH ↑ | Best for |
|---|---|---|---|---|---|
| Claude Opus 4.6 | 90% | 92% | 76% | 90% | Complex reasoning, analysis |
| Claude Opus 4.5 | 89% | 90% | 74% | 88% | Long documents, nuanced writing |
| o3 (reasoning) | 87% | 91% | 78% | 96% | Math, science, hard reasoning |
| Gemini 2.5 Pro | 89% | 87% | 75% | 91% | Research, long context (1M tokens) |
| GPT-4.1 | 88% | 88% | 72% | 87% | Coding, general tasks |
| Claude Sonnet 4.5 | 86% | 85% | 68% | 83% | Balanced quality + cost |
| Claude Sonnet 4 | 85% | 84% | 66% | 81% | Everyday writing, coding |
| DeepSeek V3.2 | 84% | 83% | 62% | 85% | Coding, math, low cost |
| Gemini 2.5 Flash | 82% | 80% | 60% | 78% | Fast, cheap, high volume |
| GPT-4.1 mini | 80% | 78% | 55% | 74% | Budget tasks, high throughput |
| Llama 3.3 70B | 80% | 79% | 50% | 73% | Open source, self-hosted |
| Claude Haiku 4.5 | 79% | 76% | 52% | 70% | Fast responses, simple tasks |
| GPT-4o mini | 77% | 74% | 48% | 68% | Cheapest OpenAI option |
| Amazon Nova Pro | 75% | 70% | 45% | 65% | AWS-native, cost-effective |
| Amazon Nova Lite | 68% | 60% | 38% | 55% | Ultra-low cost, simple tasks |
MMLU — General knowledge across 57 subjects
HumanEval — Python coding problems
GPQA — PhD-level science questions
MATH — Competition math problems
↑ Higher is better. Scores from official model cards and public evaluations. Some values are approximate.