🍊

TANGERINE

TOKEN MAXXER ALPHA
0 AIs Involved · 0 Prompts Stored · 100% Client-Side
What does "Alpha" actually mean here?

Tangerine is functional — you can paste a prompt right now and get a useful estimate. But it's still being actively calibrated. Think of it as a working educational tool that's still getting its rough edges smoothed out.

⚙️
The optimizer is rule-based and still a work in progress
The prompt optimizer works by applying a set of hand-tuned rules — stripping filler, compressing verbose phrases, injecting format hints. It's not AI. That's intentional. But the rules aren't exhaustive yet. Some edge cases will surprise you, and some suggestions will be better than others until more rules are added and refined.
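To make the rule-based approach concrete, here is a minimal sketch of how such an optimizer could work. The actual rules Tangerine applies are hand-tuned and not listed here, so the patterns below are illustrative assumptions, not the real rule set:

```python
import re

# Hypothetical rules in the spirit of Tangerine's optimizer: strip filler,
# compress verbose phrasing. The real rule set differs.
RULES = [
    (r"\bplease\b\s*", ""),            # strip politeness filler
    (r"\bin order to\b", "to"),        # compress a verbose phrase
    (r"\bI would like you to\b", ""),  # strip preamble
]

def optimize_prompt(prompt: str) -> str:
    """Apply each rule in turn; never return something longer than the input."""
    optimized = prompt
    for pattern, replacement in RULES:
        optimized = re.sub(pattern, replacement, optimized, flags=re.IGNORECASE)
    optimized = re.sub(r"\s{2,}", " ", optimized).strip()
    # Mirror the tool's guarantee: improve or leave unchanged, never worsen.
    return optimized if len(optimized) <= len(prompt) else prompt
```

The final length check reflects the guarantee described below: a rule pass either shortens the prompt or leaves it alone.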
📊
Output token estimates can still fluctuate
The output token estimator is calibrated against hundreds of real Claude API responses across dozens of prompt categories. That's a reasonable start, but it's not exhaustive. Estimates for less common prompt types can drift. As more responses are tested and the tiers are tuned, the numbers will stabilize. For now, treat output estimates as directionally correct, not precise.
🔁
The session overhead model is still being validated
The MCP server overhead (~4,000 tokens each) and the conversation compounding multiplier are based on real observed behavior, but they're still being validated across different setups and Claude versions. Individual results may vary depending on your tooling configuration.
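The arithmetic behind that overhead figure is simple enough to sketch. The ~4,000-token figure comes from the observation above; the three-server example is an assumption for illustration:

```python
MCP_OVERHEAD_PER_SERVER = 4_000  # approximate, per the observed figure above

def session_fixed_overhead(num_mcp_servers: int) -> int:
    """Fixed tokens carried by every message, before your prompt counts at all."""
    return num_mcp_servers * MCP_OVERHEAD_PER_SERVER

# With three MCP servers connected, every message carries
# roughly 12,000 tokens of overhead on top of the conversation itself.
```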
🌱
What Alpha doesn't mean
It doesn't mean broken. The core input token counter is highly accurate (characters ÷ 4 is the industry-standard approximation). The optimizer will always improve your prompt or leave it unchanged — it won't make things worse. And the educational content about how Claude's context window works is solid.
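The characters ÷ 4 heuristic mentioned above is small enough to show in full. A minimal sketch (function name is ours, not Tangerine's):

```python
def estimate_input_tokens(text: str) -> int:
    """Characters / 4: the industry-standard rough approximation for English text."""
    return max(1, round(len(text) / 4))

# A 400-character prompt estimates to about 100 tokens.
```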
Where things stand in Alpha

An honest look at what's solid and what's still being refined.

Input token counter (~95%): Characters ÷ 4 is the industry-standard approximation. Very reliable.

Session overhead panel (MCP, memory, skills) (~85%): Based on real measured overhead. Values may vary by Claude version and config.

Output token estimator (~72%): Calibrated against hundreds of real responses. Solid for common types; less precise for edge cases.

Prompt optimizer, rule-based (~68%): Works well for common patterns. Unusual or short prompts may see less benefit.

Conversation history multiplier (~60%): New feature. The math is sound; real-world validation across diverse session types is ongoing.
Why your token costs grow faster than you expect

Every time you send a message, Claude re-reads the entire conversation from the beginning: not just your latest message, but every reply, every file, and every code block from every turn. That means token cost doesn't grow linearly; it grows triangularly. If each of your prompts is about the same size, the n-th message costs roughly n times the first, and n messages together cost n(n+1)/2 times a single message.

Relative token cost per message (same 500-token prompt, each reply): message 1 costs 1×, message 2 costs 2×, and so on up to message 12 at 12×.

Total cost of those 12 messages: 78× a single message.
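That 78× total can be reproduced in a few lines. The 500-token prompt size is the one quoted in the chart caption; the function name is ours:

```python
PROMPT_TOKENS = 500  # same 500-token prompt each turn, as in the chart above

def cumulative_cost(num_messages: int) -> int:
    """Message n re-reads all prior turns, so it costs n prompts' worth of tokens."""
    return sum(n * PROMPT_TOKENS for n in range(1, num_messages + 1))

# Equivalent closed form: n * (n + 1) // 2 prompts' worth of tokens.
```

For 12 messages this sums 1 + 2 + ... + 12 = 78 prompts' worth, i.e. 78× the cost of a single message.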
🎓
Tangerine is an educational tool first
The goal was never to build a perfect token counter. It was to help newer Claude users understand why their sessions end so fast — especially on Pro plans where it feels like you should have plenty of runway. The compounding context re-read, the MCP overhead, the invisible fixed costs of every conversation — these things aren't obvious, and they're not well-explained anywhere.

Tangerine exists to make all of that visible. If it helps even a handful of people get more out of their Claude sessions, it's done its job.