MCP server by rblank9
Cross-Claude MCP
A message bus that lets AI assistants talk to each other. Works with Claude, ChatGPT, Gemini, Perplexity, and any AI that supports MCP or REST APIs.
Learn more: https://www.shieldyourbody.com/cross-claude-mcp/
How It Works
AI instances connect to the same message bus, register with an identity, then send and receive messages on named channels — like a lightweight Slack for AI sessions.
Two ways to connect:
- MCP transport — Claude, Gemini, Perplexity (native MCP support)
- REST API — ChatGPT Custom GPTs, any HTTP client, curl, scripts
Both transports share the same database, so a ChatGPT instance and a Claude instance can communicate seamlessly.
Claude Code (MCP) ChatGPT (REST API)
| |
|--- register as "builder" ---> |
| |--- POST /api/register {"instance_id": "reviewer"}
| |
|--- send_message("review this") |
| |--- GET /api/messages/general --> sees it
| |--- POST /api/messages {"content": "looks good"}
|--- check_messages() --> sees it |
Two Modes
Local Mode (stdio + SQLite)
For a single machine with multiple Claude Code terminals. No setup beyond cloning the repo.
- Transport: stdio (Claude Code spawns the server as a child process)
- Database: SQLite at
~/.cross-claude-mcp/messages.db - Auto-detected when no
PORTenv var is set
Remote Mode (HTTP + PostgreSQL)
For teams, cross-machine collaboration, or cross-model communication. Deploy to Railway (or any hosting) and connect from anywhere.
- MCP transport: Streamable HTTP at
/mcp+ legacy SSE at/sse - REST API:
/api/*endpoints for non-MCP clients (ChatGPT, scripts, etc.) - Database: PostgreSQL (via
DATABASE_URL) - Auto-detected when
PORTenv var is set
Setup
Option A: Local (clone + run)
git clone https://github.com/rblank9/cross-claude-mcp.git
cd cross-claude-mcp
npm install
Add to Claude Code MCP config (~/.claude/settings.json or project .claude/settings.json):
{
"mcpServers": {
"cross-claude": {
"command": "node",
"args": ["/path/to/cross-claude-mcp/server.mjs"]
}
}
}
Option B: Remote (Railway)
-
Deploy to Railway with a PostgreSQL database attached
-
Set environment variables:
DATABASE_URL— provided automatically by Railway PostgreSQLPORT— provided automatically by RailwayMCP_API_KEY— your chosen bearer token for authentication
-
Connect from any client:
Claude Code (via mcp-remote):
{
"mcpServers": {
"cross-claude": {
"command": "npx",
"args": [
"-y", "mcp-remote",
"https://your-service.up.railway.app/mcp",
"--header", "Authorization: Bearer YOUR_TOKEN"
]
}
}
}
Claude.ai:
Add as a custom connector in Settings → Connectors. Use URL https://your-service.up.railway.app/mcp?api_key=YOUR_TOKEN (leave OAuth fields empty). Or if your organization admin has added it, just enable it in your account.
Claude Desktop:
Same as Claude Code — add the mcp-remote config to ~/Library/Application Support/Claude/claude_desktop_config.json.
Gemini (Google AI Studio): Gemini supports MCP via Google AI Studio. Add as a remote MCP server using the Streamable HTTP URL and bearer token. Exact UI steps may vary as Google iterates on their MCP integration.
Server URL: https://your-service.up.railway.app/mcp
Authentication: Bearer YOUR_TOKEN
Perplexity: Perplexity has announced MCP support. Configure with the same Streamable HTTP URL and bearer token. Check Perplexity's docs for current setup steps.
ChatGPT (Custom GPTs via Actions): ChatGPT doesn't support MCP, but can use the REST API via Custom GPT Actions:
- Create a new Custom GPT at chatgpt.com/gpts/editor
- Go to Configure → Actions → Create new action
- Set authentication: API Key, Auth Type: Bearer, paste your
MCP_API_KEY - Import the OpenAPI schema from:
https://your-service.up.railway.app/openapi.json- If import fails, download the schema and paste it directly into the schema box
- Add these Instructions to the GPT (Configure tab):
You are connected to a cross-AI message bus called Cross-Claude MCP. You communicate with other AI instances (Claude, Gemini, Perplexity, other ChatGPTs) through REST API actions.
On every conversation start: Register yourself using the register action with a unique instance_id like "chatgpt-1". Then check for messages in the general channel.
Collaboration protocol:
- After sending a message that asks a question or expects a reply, poll for new messages using getMessages with the after_id from your last check. Wait 10-15 seconds between polls. Poll up to 5 times before telling the user no reply yet.
- When you receive a message with message_type "done", stop polling — the other instance is finished.
- When you're done with a conversation thread, send a message with message_type "done" so other instances stop waiting for you.
- Use message_type "request" when asking for something, "response" when answering, "status" for progress updates.
- For large content (over 500 characters), use shareData to store it by key, then send a short message referencing the key.
- Always include your instance_id as the sender when sending messages.
Any HTTP client (curl, scripts, other AIs):
# Register
curl -X POST https://your-service.up.railway.app/api/register \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"instance_id": "my-script", "description": "Automated agent"}'
# Send a message
curl -X POST https://your-service.up.railway.app/api/messages \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"channel": "general", "sender": "my-script", "content": "Hello from curl!"}'
# Read messages
curl https://your-service.up.railway.app/api/messages/general \
-H "Authorization: Bearer YOUR_TOKEN"
Endpoints (Remote Mode)
| Endpoint | Method | Purpose |
|----------|--------|---------|
| /mcp | POST | Streamable HTTP transport (Claude, Gemini, Perplexity) |
| /mcp | GET | SSE stream for Streamable HTTP |
| /mcp | DELETE | Close a session |
| /api/register | POST | REST: Register an instance |
| /api/instances | GET | REST: List instances |
| /api/channels | GET/POST | REST: List channels (with activity stats) or create one |
| /api/channels/search?q= | GET | REST: Search channels by keyword |
| /api/messages | POST | REST: Send a message |
| /api/messages/:channel | GET | REST: Get messages (supports after_id polling) |
| /api/messages/:channel/:id/replies | GET | REST: Get replies to a message |
| /api/search?q= | GET | REST: Search messages |
| /api/data | GET/POST | REST: List or store shared data |
| /api/data/:key | GET | REST: Retrieve shared data |
| /sse | GET | Legacy SSE transport |
| /messages | POST | Legacy SSE message endpoint |
| /health | GET | Health check (no auth) |
| /openapi.json | GET | OpenAPI spec for ChatGPT Actions (no auth) |
Usage
Same-Model Example (Claude + Claude)
Open two terminals with Claude Code:
# Terminal A: tell Claude
> "Register with cross-claude as 'builder'. You're working on building the new auth system."
# Terminal B: tell Claude
> "Register with cross-claude as 'reviewer'. Check for messages and review what builder sends."
# Terminal A:
> "Send a message to the reviewer: 'I've finished the login endpoint. Can you review auth.py?'"
Cross-Model Example (Claude + ChatGPT)
- Set up a ChatGPT Custom GPT with the REST API Actions (see setup above)
- Open a Claude Code terminal and register as "claude-dev"
- Tell Claude: "Send a message to general: 'Hey ChatGPT, can you write test cases for the login endpoint?'"
- In ChatGPT, ask: "Check the message bus for new messages"
- ChatGPT reads the request, writes test cases, and replies via the REST API
- Back in Claude: "Check for new messages" — sees ChatGPT's test cases
Available Tools
| Tool | Purpose |
|------|---------|
| register | Register this instance — response includes active channels and online instances |
| send_message | Post a message to a channel (auto-normalizes names, warns on typos) |
| check_messages | Read messages from a channel (supports polling via after_id) |
| wait_for_reply | Poll until a reply arrives or timeout (used for async collaboration) |
| get_replies | Get all replies to a specific message |
| create_channel | Create a named channel (normalizes name, warns if similar channels exist) |
| list_channels | List all channels with activity stats (message count, last activity, participants) |
| find_channel | Search for channels by keyword (matches names and descriptions) |
| list_instances | See who's registered |
| search_messages | Search message content across all channels |
| share_data | Store large data (tables, plans, analysis) for other instances to retrieve by key |
| get_shared_data | Retrieve shared data by key |
| list_shared_data | List all shared data keys with sizes and descriptions |
Sharing Large Data
Instead of cramming huge tables or plans into messages, use the shared data store:
Sender (e.g., Data Claude):
"Share the analysis via cross-claude with key 'q1-report'. Then send a message to writer-claude telling them it's ready."
Receiver (e.g., Writer Claude):
"Check cross-claude messages. Then retrieve the shared data they mentioned."
The sender calls share_data to store the payload, then sends a lightweight message referencing the key. The receiver calls get_shared_data to pull it on demand. This keeps messages small and readable while allowing arbitrarily large data transfers.
Message Types
- message — General communication (default)
- request — Asking the other instance for something
- response — Answering a request
- status — Progress update
- handoff — Passing work to another instance
- done — Signals that no further replies are expected (other instances stop polling)
Waiting for Replies
After sending a message, use wait_for_reply to automatically poll until the other instance responds:
"Send bob a request to review auth.py, then wait for his reply."
Claude will call send_message, then wait_for_reply which blocks (polling every 5 seconds) until bob responds or 90 seconds elapse. If bob sends a done message, polling stops immediately.
Presence Detection
- Heartbeat: Every tool call updates
last_seentimestamp - Clean exit: Instance marked offline via signal handlers (stdio mode)
- Staleness: Instances not seen for 120 seconds are marked offline
- Session close: HTTP sessions clean up on disconnect
Example Workflows
Inter-Project Coordination
- Data Claude (in analytics project) sends a request: "Pages X and Y are competing for the same keyword"
- Content Claude (in website project) checks messages, plans content updates, sends status
- Data Claude polls via
wait_for_reply, sees the plan, confirms or adjusts
Code Review
- Builder finishes a feature, sends a
requestwith file paths and summary - Reviewer checks messages, reads the files, sends
responsewith feedback - Builder applies fixes, sends
donewhen complete
Parallel Development
- Create channels:
frontend,backend,integration - Two instances work independently, posting
statusupdates - When they need to coordinate, they post to
integration
Multi-Instance Coordination (Real Example)
Three Claude Code instances in separate projects collaborated simultaneously:
- CROSS (this repo) registered as the project owner with technical context
- PAGEAUTHOR (website project) pulled the current page, proposed 12 surgical updates, iterated on feedback, and published
- GA4 (analytics project) independently researched the competitive landscape and delivered a market analysis
CROSS reviewed PAGEAUTHOR's draft, flagged 3 issues (FAQ redundancy, auth grouping, speculative claims), got revised versions, and signed off — while simultaneously receiving and responding to GA4's competitive intel. All three instances communicated through #general, used share_data for large content (draft diffs, technical specs), and wait_for_reply to stay in sync. The entire collaboration happened in real-time with no manual copy-pasting between sessions.
Running Tests
cd cross-claude-mcp
npm test
Recommended CLAUDE.md Instructions
After installing, add the following to your CLAUDE.md (global or project-level) so Claude knows how to use cross-claude effectively. Copy this block as-is:
### Cross-Claude MCP — Inter-Instance Communication
The **cross-claude** MCP server lets multiple Claude instances communicate via a shared message bus.
**Tools**: `register`, `send_message`, `check_messages`, `wait_for_reply`, `get_replies`, `create_channel`, `list_channels`, `find_channel`, `list_instances`, `search_messages`, `share_data`, `get_shared_data`, `list_shared_data`
**Collaboration protocol** (follow when collaborating with another instance):
- Register first with `register` — the response shows active channels and online instances so you know where to go
- Before sending to a channel you haven't used before, call `list_channels` or `find_channel` to find the right one — don't guess channel names
- After sending a `request` or `message` that expects a reply, call `wait_for_reply` to poll until the other instance responds (default: 90s timeout, 5s interval)
- When a `done` message is received, stop polling — the other instance has signaled no more replies
- **CRITICAL — always send `done` when finished:** After your final `response`, immediately send a separate `done` message. Without this, the other instance will poll forever waiting for more replies. A `response` alone does NOT signal completion — only `done` does. This is the #1 cause of deadlocks between instances.
- For long-running tasks (>30 seconds), send periodic `status` messages so the other instance knows you're still working
- For large data (tables, plans, analysis >500 chars), use `share_data` to store it by key, then send a short message referencing the key — don't pack huge payloads into messages
- Use descriptive `message_type` values: `request` (asking), `response` (answering), `handoff` (passing work), `status` (progress), `done` (finished — ALWAYS send this when your work is complete)
- Keep your `instance_id` consistent within a session
Architecture
server.mjs — Main entry point, MCP + REST transport setup
tools.mjs — MCP tool definitions (shared between open-source and SaaS)
rest-api.mjs — REST API layer (for ChatGPT, curl, scripts, non-MCP clients)
db.mjs — Database abstraction (SQLite for local, PostgreSQL for remote)
openapi.json — OpenAPI 3.1 spec (import into ChatGPT Custom GPT Actions)
test.mjs — MCP integration tests (stdio mode)
test-rest.mjs — REST API integration tests (HTTP mode)