How much did you spend on Claude Code this week?
Nothing uploaded. Your logs stay in this browser tab.
| Model | Calls | Input tok | Output tok | Est. cost | Share |
|---|---|---|---|---|---|
| Session | Started | Duration | Tokens | Est. cost | Risk |
|---|---|---|---|---|---|
BurnCheck v2 will run as a lightweight daemon: daily limit warnings, model-swap auto-suggestions, hosted dashboard. Planned $9/mo, founding users $5/mo for life. Current status: gauging interest.
If your browser blocks folder access (Safari on iOS, or an ultra-strict enterprise policy), run this one-liner in Terminal. It extracts only the usage metadata (no prompts, no code, no content) and copies a compact JSON to your clipboard:
find ~/.claude/projects -name "*.jsonl" -mtime -7 -exec cat {} + | jq -c 'select(.type=="assistant" and .message.usage) | {m:.message.model,u:.message.usage,t:.timestamp}' | pbcopy
Requires jq (brew install jq). On Linux, replace pbcopy with xclip -selection clipboard.
Every 7 days from when your first message in that window was sent. Anthropic doesn't publish the exact threshold; it varies by plan tier and has been adjusted multiple times. Community-observed ranges: ~$25/week on Pro ($20), ~$140/week on Max $100, ~$320/week on Max $200 (treat as ±30%). BurnCheck computes your actual burn rate from local logs and projects when you'll cross.
Claude Code stores every session as a JSONL file in ~/.claude/projects/. Each contains message.usage records with input_tokens, output_tokens, cache_creation_input_tokens, and cache_read_input_tokens. Multiply by published rates (Opus $15/M input, Sonnet $3/M, Haiku $0.25/M) to get cost. BurnCheck reads these in your browser and aggregates the last 7 days automatically.
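The per-record math above can be sketched in a few lines of Python. This is a simplified illustration, not BurnCheck's actual code: the output rates ($75/M Opus, $15/M Sonnet, $1.25/M Haiku) and the 10%-of-input cache-read rate are assumptions based on Anthropic's public pricing, and the sample record is hypothetical.

```python
import json

# Per-million-token rates. Input rates match the FAQ; output and cache-read
# rates are assumptions based on Anthropic's published pricing.
RATES = {
    "opus":   {"in": 15.00, "out": 75.00},
    "sonnet": {"in": 3.00,  "out": 15.00},
    "haiku":  {"in": 0.25,  "out": 1.25},
}

def record_cost(record: dict) -> float:
    """Estimated USD cost of one assistant record from a ~/.claude/projects JSONL log."""
    usage = record["message"]["usage"]
    model = record["message"]["model"]
    # Match the model family inside the full model id, e.g. "claude-opus-4-...".
    family = next((k for k in RATES if k in model), "sonnet")
    r = RATES[family]
    tokens_in = usage.get("input_tokens", 0) + usage.get("cache_creation_input_tokens", 0)
    # Cache reads billed at 10% of input rate (assumption).
    tokens_in += 0.1 * usage.get("cache_read_input_tokens", 0)
    return (tokens_in * r["in"] + usage.get("output_tokens", 0) * r["out"]) / 1_000_000

# Hypothetical log line: 200k input tokens, 4k output on Opus.
line = '{"type":"assistant","message":{"model":"claude-opus-4","usage":{"input_tokens":200000,"output_tokens":4000}}}'
print(f"${record_cost(json.loads(line)):.2f}")
```

Summing record_cost over every record in the last 7 days of logs gives the weekly total shown in the table.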
Yes. Multiply your current daily burn rate by 7 and compare against your plan tier's community-reported cap. If the ratio exceeds 100%, you'll hit the limit before the week ends. BurnCheck visualizes this forecast and flags which specific day you'll likely cross. You can override cap values with your observed limit from a past throttle.
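A minimal sketch of that forecast, assuming a simple linear projection from average daily spend; the $140 cap and the three daily figures are hypothetical, taken from the community-reported ranges above.

```python
def forecast(daily_costs: list[float], weekly_cap: float):
    """Project weekly spend from per-day costs so far and estimate the crossing day."""
    rate = sum(daily_costs) / len(daily_costs)   # average daily burn
    projected_week = rate * 7
    ratio = projected_week / weekly_cap          # >1.0 means you'll hit the cap
    crossing_day = weekly_cap / rate if rate > 0 else None
    return projected_week, ratio, crossing_day

# Three days into the window: $24, $31, $28 spent (hypothetical).
week, ratio, day = forecast([24, 31, 28], weekly_cap=140)
print(f"projected ${week:.0f}/wk, {ratio:.0%} of cap, crossing ~day {day:.1f}")
```

A real forecast would weight recent days more heavily; the linear version is just the intuition behind the ratio check.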
Claude Code enforces a 5-hour session window starting with your first message. Within that window you have a token budget; once exceeded, you're throttled until the next 5-hour block. Multi-hour refactor sessions often hit this unexpectedly. BurnCheck flags sessions in your history that ran past 4 hours so you can spot the pattern.
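The 4-hour flag reduces to a first-to-last timestamp span per session. A sketch, assuming ISO-8601 timestamps like those in the t field of the extracted records; the session ids and times are made up.

```python
from datetime import datetime, timedelta

def flag_long_sessions(sessions: dict[str, list[str]], limit_hours: float = 4.0):
    """Return ids of sessions whose first-to-last message span exceeds limit_hours."""
    flagged = []
    for sid, timestamps in sessions.items():
        ts = sorted(datetime.fromisoformat(t) for t in timestamps)
        if ts[-1] - ts[0] > timedelta(hours=limit_hours):
            flagged.append(sid)
    return flagged

sessions = {
    "refactor-auth": ["2025-01-06T09:00:00", "2025-01-06T13:45:00"],  # 4h45m span
    "quick-fix":     ["2025-01-06T15:00:00", "2025-01-06T15:20:00"],  # 20m span
}
print(flag_long_sessions(sessions))  # ['refactor-auth']
```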
Opus is roughly 5× Sonnet's input + output cost. A typical Opus call with heavy context costs $1–$5; the equivalent Sonnet call, $0.20–$1. For most code-editing, refactoring, and straightforward debugging, Sonnet produces comparable output at 1/5 the cost. BurnCheck flags Opus calls that would likely run fine on Sonnet and estimates weekly savings.
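The per-call savings estimate is simple rate arithmetic. A sketch using the FAQ's input rates; the output rates ($75/M Opus, $15/M Sonnet) and the 150k-in / 5k-out call shape are assumptions.

```python
OPUS   = {"in": 15.00, "out": 75.00}   # $/M tokens; output rate is an assumption
SONNET = {"in": 3.00,  "out": 15.00}

def savings(input_tok: int, output_tok: int) -> float:
    """Dollars saved per call if the same traffic ran on Sonnet instead of Opus."""
    opus_cost   = (input_tok * OPUS["in"]   + output_tok * OPUS["out"])   / 1e6
    sonnet_cost = (input_tok * SONNET["in"] + output_tok * SONNET["out"]) / 1e6
    return opus_cost - sonnet_cost

# A heavy-context call: 150k input tokens, 5k output tokens (hypothetical).
print(f"${savings(150_000, 5_000):.2f} saved per call")
```

At a few dollars saved per heavy call, a week of routine refactoring on Sonnet instead of Opus adds up quickly, which is the number BurnCheck surfaces.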
Yes. BurnCheck is a single HTML file with no backend. Your ~/.claude/projects/ logs are read locally via the browser's File API. Nothing is uploaded; no analytics; no account. Open the DevTools Network tab while using BurnCheck: zero outbound requests after page load. Source: github.com/Genie-J/burncheck.
Claude-Code-Usage-Monitor and ccusage are terminal-based live session monitors; they show your current 5-hour block in real time. BurnCheck is browser-based weekly forecasting; it aggregates the last 7 days and projects cap-crossing risk. Most users run both: ccusage for live awareness, BurnCheck when they want to answer "will I make it through the week?"