# mcp-cost-calculator

By Ejae-dev.

Estimate LLM token costs when using multiple MCP servers.
Inspired by @JThreeDot's insight that connecting many MCP servers simultaneously balloons LLM costs — every tool definition gets injected into context on every request.
## Install

```sh
npm install -g mcp-cost-calculator
```

Or run directly:

```sh
npx mcp-cost-calculator ./your-config.json
```
## Usage

```sh
# Analyze your Claude Desktop config
mcp-cost ~/.claude/claude_desktop_config.json

# Or any MCP config file
mcp-cost ./mcp-config.json
```
## Output

```
╔══════════════════════════════════════════════════════════════╗
║                     MCP Cost Calculator                      ║
╚══════════════════════════════════════════════════════════════╝

Config:  claude_desktop_config.json
Servers: 5

┌─────────────────────────────────────────────────────────────┐
│                  Server Overhead Breakdown                  │
├────────────────────────────┬──────────┬─────────────────────┤
│ Server                     │ Tools    │ Est. Tokens         │
├────────────────────────────┼──────────┼─────────────────────┤
│ filesystem                 │ 11       │ 3,400               │
│ github                     │ 25       │ 10,100              │
│ postgres                   │ 8        │ 3,300               │
│ brave-search               │ 2        │ 700                 │
│ memory                     │ 4        │ 1,100               │
└────────────────────────────┴──────────┴─────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│            Cost Per Request (MCP overhead only)             │
├─────────────────────────┬───────────────┬───────────────────┤
│ Model                   │ Per Request   │ Monthly (100/day) │
├─────────────────────────┼───────────────┼───────────────────┤
│ gpt-4o                  │ $0.0466       │ $139.80           │
│ claude-3.5-sonnet       │ $0.0559       │ $167.76           │
│ claude-3-opus           │ $0.2796       │ $838.80           │
└─────────────────────────┴───────────────┴───────────────────┘

💡 Tip: You have many MCP servers connected. Consider using a
   proxy server pattern to reduce context overhead.
```
## How It Works

The tool analyzes your MCP config and estimates token overhead based on:

- Number of tools per server
- Tool name and description length
- Input schema complexity (properties, types, required fields)

For servers without explicit tool definitions, it uses reasonable estimates based on known MCP server patterns.
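A minimal sketch of such an estimate, assuming a rough 4-characters-per-token ratio and a flat cost per schema property (both constants are illustrative assumptions, not the tool's actual formula):

```typescript
interface ToolDef {
  name: string;
  description: string;
  properties: number; // number of input-schema properties
}

const CHARS_PER_TOKEN = 4;      // rough ratio for English text (assumption)
const TOKENS_PER_PROPERTY = 25; // flat cost per schema property (assumption)

// Estimate tokens for one tool definition: text tokens for the
// name + description, plus a fixed cost per schema property.
function estimateToolTokens(tool: ToolDef): number {
  const textTokens = Math.ceil(
    (tool.name.length + tool.description.length) / CHARS_PER_TOKEN
  );
  return textTokens + tool.properties * TOKENS_PER_PROPERTY;
}

// Sum the per-tool estimates to get a server's total overhead.
function estimateServerTokens(tools: ToolDef[]): number {
  return tools.reduce((sum, t) => sum + estimateToolTokens(t), 0);
}

// Example: a hypothetical two-tool server
const tools: ToolDef[] = [
  { name: "read_file", description: "Read the contents of a file at a given path.", properties: 1 },
  { name: "write_file", description: "Write text content to a file, creating it if needed.", properties: 2 },
];
console.log(estimateServerTokens(tools)); // 105
```

Real tokenizers vary by model, so numbers like these are order-of-magnitude estimates, not exact counts.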
## Why This Matters

Every MCP tool definition is injected into the LLM context on every request. With 5-10 servers connected, you can easily add 15-30K tokens of overhead before you even send your actual prompt.

At Claude Opus rates ($15 per million input tokens), the upper end of that range costs ~$0.45 per request in MCP overhead alone. At 100 requests/day, that's $1,350/month in pure overhead.
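The arithmetic above, spelled out (taking the 30K-token upper bound and a 30-day month):

```typescript
const overheadTokens = 30_000;  // MCP tool-definition overhead per request
const pricePerMillion = 15;     // USD per 1M input tokens (Claude Opus input rate)

// Cost of the overhead tokens on a single request
const perRequest = (overheadTokens / 1_000_000) * pricePerMillion;

// 100 requests/day over a 30-day month
const perMonth = perRequest * 100 * 30;

console.log(perRequest.toFixed(2)); // "0.45"
console.log(perMonth.toFixed(2));   // "1350.00"
```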
## Solutions

- **Use fewer servers:** only connect what you need for the current task
- **Proxy server pattern:** use a single MCP server that proxies to others on demand
- **Dynamic loading:** load/unload servers based on conversation context
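As a sketch, the proxy pattern replaces several `mcpServers` entries with a single one. The package name `your-mcp-proxy` and the `PROXY_UPSTREAMS` variable below are placeholders for whichever proxy implementation you choose:

```json
{
  "mcpServers": {
    "proxy": {
      "command": "npx",
      "args": ["-y", "your-mcp-proxy"],
      "env": {
        "PROXY_UPSTREAMS": "filesystem,github,postgres,brave-search,memory"
      }
    }
  }
}
```

Only the proxy's own (small) set of tool definitions lands in context on each request; the underlying servers' tools are fetched on demand.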
## License

MIT