# 🚀 OpenCode MCP Server for Antigravity
This repository contains the OpenCode MCP Server, a high-performance orchestration layer based on the Model Context Protocol (MCP). It is designed to act as a Proactive Architectural Assistant for Antigravity, transforming how AI interacts with your codebase.
The main goal of this MCP is to save tokens in complex Antigravity tasks by allowing OpenCode to refine prompts and perform preliminary analysis before sending the final instructions to the main model.
## Features
- Prompt Refinement: Transforms vague prompts into detailed and technical instructions.
- Development Support: Assists in bug fixing and implementing new features with a focus on efficiency.
- Semantic Memory: Stores and retrieves technical context using Semantic Chunking, Category Filtering, and XML Formatting.
- Proactive Indexing: Automatically maps your project structure to memory for instant architectural awareness.
- Memory Dashboard: Visualize your knowledge distribution and memory health.
## Architecture
The OpenCode MCP Server acts as an orchestration layer between the AI Client and local specialized tools.
```mermaid
graph TD
    Client["AI Client (Antigravity/Claude)"] -- "MCP Protocol (Stdio/HTTP)" --> Server["OpenCode MCP Server"]
    subgraph "OpenCode Engine"
        Server --> Tools["Tools: refine_prompt / learn_context"]
        Tools --> Memory["Memory Manager"]
    end
    subgraph "Local Infrastructure"
        Memory -- "Store/Search" --> LDB[("LanceDB Vector Store")]
        Memory -- "Generate Embeddings" --> OLL["Ollama: nomic-embed-text"]
    end
    Server -- "Refined Prompt + Context" --> Client
```
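The diagram maps directly onto the MCP tool abstraction. As a rough sketch of the shape of such a server (assuming the official `@modelcontextprotocol/sdk` and `zod`; the handler body is an illustrative placeholder, not this repository's actual implementation):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "opencode", version: "1.0.0" });

// refine_prompt: enrich a raw prompt with retrieved context before the
// main model sees it (the actual retrieval logic is elided here).
server.tool(
  "refine_prompt",
  { prompt: z.string(), categoryFilter: z.string().optional() },
  async ({ prompt, categoryFilter }) => {
    // categoryFilter would scope the vector search in the Memory Manager.
    const context = "<semantic_memory>...</semantic_memory>"; // placeholder
    return { content: [{ type: "text", text: `${prompt}\n${context}` }] };
  }
);

await server.connect(new StdioServerTransport());
```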
### The Process Flow
```mermaid
sequenceDiagram
    participant User as User / Developer
    participant AG as AI Client (Antigravity)
    participant OC as OpenCode MCP Server
    participant LDB as LanceDB (Memory)
    participant OLL as Ollama (Embeddings)
    User->>AG: "How do I fix the auth bug?"
    Note over AG: Rule 1: Notify User & Refine
    AG->>User: "Refining with OpenCode for precision..."
    AG->>OC: refine_prompt("How do I fix the auth bug?")
    OC->>OLL: Generate embedding for query
    OLL-->>OC: Vector representation
    OC->>LDB: Vector search for top-relevant context
    LDB-->>OC: Snippets: "Auth uses JWT", "Secret in .env"
    Note over OC: Format results as XML
    OC-->>AG: "How to fix... + <semantic_memory>Snippets...</semantic_memory>"
    Note over AG: Generate high-precision answer
    AG->>User: Technical solution with codebase context
```
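The "Format results as XML" step can be pictured as a small formatting helper. `MemoryHit` and `formatAsXml` below are hypothetical names used for illustration only; the tag names match the XML Context Formatting described under Advanced Optimizations:

```typescript
// Hypothetical helper for the "Format results as XML" step above.
interface MemoryHit {
  text: string;
  category?: string;
}

function formatAsXml(hits: MemoryHit[]): string {
  const items = hits
    .map((h) => `  <context_item category="${h.category ?? "general"}">${h.text}</context_item>`)
    .join("\n");
  return `<semantic_memory>\n${items}\n</semantic_memory>`;
}

// formatAsXml([{ text: "Auth uses JWT", category: "architecture" }]) yields:
// <semantic_memory>
//   <context_item category="architecture">Auth uses JWT</context_item>
// </semantic_memory>
```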
## Technology Stack
The solution is built using a modern and efficient stack designed for high performance and local privacy:
- OpenCode: The core orchestration engine that manages tool execution, prompt refinement logic, and semantic memory integration.
- Model Context Protocol (MCP): The standard protocol for connecting AI models to local/remote data and tools.
- LanceDB: A serverless, high-performance vector database that allows for incredibly fast semantic searches without the overhead of a traditional database server.
- Ollama: Orchestrates local AI models. We use `nomic-embed-text` to generate high-quality vector embeddings locally, ensuring your technical data never leaves your machine.
- TypeScript & Node.js: Provides a type-safe and performant runtime environment for the server logic.
- Express: Used for the Remote Mode (HTTP/SSE), providing a robust foundation for the Streamable HTTP transport.
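To make the LanceDB + Ollama pairing concrete, here is a minimal retrieval sketch in TypeScript (assuming the `@lancedb/lancedb` package and Ollama's local `/api/embeddings` endpoint; the table name `memories` and both function names are assumptions, not this repo's API):

```typescript
import * as lancedb from "@lancedb/lancedb";

// Hypothetical embed(): call Ollama's local embeddings endpoint.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = (await res.json()) as { embedding: number[] };
  return embedding;
}

// Hypothetical search: open the local vector store, run a similarity query.
async function searchMemory(query: string, limit = 3) {
  const db = await lancedb.connect(".mcp_memory/vectors");
  const table = await db.openTable("memories"); // table name is an assumption
  return table.search(await embed(query)).limit(limit).toArray();
}
```

Because LanceDB is embedded (serverless), the whole round trip stays on disk and localhost, which is what keeps retrieval fast and private.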
## Prerequisites
Before starting, you need to set up the development environment. We recommend using NVM (Node Version Manager) to manage Node.js versions on Windows.
### 1. NVM and Node.js Installation (Windows)
- Download the `nvm-setup.exe` installer from nvm-windows.
- Follow the installation instructions.
- Open a new PowerShell terminal and install the recommended Node.js version:

```powershell
nvm install 22
nvm use 22
```
### 2. Ollama Installation (For Semantic Memory)
Ollama is required to generate local embeddings.
- Open PowerShell as Administrator and run:

```powershell
winget install ollama
```

- After installation, restart the terminal and download the memory model:

```powershell
ollama pull nomic-embed-text
```
### 3. Verify Installation
Check if the tools are ready in PowerShell:
```powershell
node -v          # Should return v22.x.x or higher
ollama --version
```
## Project Installation and Configuration
Follow the steps below to configure the OpenCode MCP Server using PowerShell:
1. Clone the repository:

   ```powershell
   git clone <repository-url>
   cd open-code-as-mcp
   ```

2. Install dependencies:

   ```powershell
   npm install
   ```

3. Build the project:

   ```powershell
   npm run build
   ```
## Antigravity Configuration
To integrate this MCP server with Antigravity, you must choose between Local mode (running on the same machine) or Remote mode (running on a server/cloud).
### Option A: Local Configuration (Stdio)
Use this option if the server is on the same machine as the client.
#### Global Memory (Default)
Memory will be shared across all projects and stored in the server folder.
```json
{
  "mcpServers": {
    "opencode": {
      "command": "node",
      "args": ["D:/IA/MCP/open-code-as-mcp/build/index.js"]
    }
  }
}
```
#### Per-Project Memory (Recommended)

Use this option to give each project its own isolated memory, stored in the project's `.mcp_memory` folder.

> [!IMPORTANT]
> Always use absolute paths in the `MCP_MEMORY_PATH` environment variable when configuring the server in a global MCP config (like Claude Desktop). This ensures the server finds the correct folder regardless of the current working directory.
```json
{
  "mcpServers": {
    "opencode": {
      "command": "node",
      "args": ["D:/IA/MCP/open-code-as-mcp/build/index.js"],
      "env": {
        "MCP_MEMORY_PATH": "D:/IA/MCP/open-code-as-mcp/.mcp_memory/vectors"
      }
    }
  }
}
```
Note: Be sure to add `.mcp_memory/` to your `.gitignore` if you don't want to version the database.
> [!TIP]
> Ensure Ollama is running and that you have downloaded the model with `ollama pull nomic-embed-text`.
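Inside the server, that environment variable would typically be resolved along these lines (a sketch of the fallback behaviour, not the repository's actual code):

```typescript
import * as path from "node:path";

// Prefer the absolute path from the MCP config; otherwise fall back to a
// folder under the current working directory, which is exactly why
// absolute paths matter when the config lives in a global MCP file.
const memoryPath =
  process.env.MCP_MEMORY_PATH ??
  path.join(process.cwd(), ".mcp_memory", "vectors");
```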
### Option B: Remote Configuration (Streamable HTTP)
Use this option if the server is running remotely. The server uses the modern Streamable HTTP transport, which is more robust and efficient than the legacy SSE transport.
```json
{
  "mcpServers": {
    "opencode": {
      "url": "http://your-remote-server:3000/mcp"
    }
  }
}
```
Note: The server also maintains backward compatibility for legacy clients at `http://your-remote-server:3000/sse`.
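For reference, a `/mcp` endpoint like the one above can be wired with Express and the SDK's Streamable HTTP transport roughly as follows (a stateless-mode sketch based on the official `@modelcontextprotocol/sdk`; not necessarily how this server implements it):

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const server = new McpServer({ name: "opencode", version: "1.0.0" });
// ... tool registrations as in the earlier sketch ...

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: one transport per request, no session tracking.
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(Number(process.env.PORT ?? 3000));
```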
## Automatic Usage and Global Rules
To ensure Antigravity consistently follows best practices, the Global Rules are stored in two key locations:
- Global Level (Windows): Inside the `GEMINI.md` file, located in your user profile at `%USERPROFILE%\.gemini\GEMINI.md` (a copy is available in this repo as `GEMINI.md`).
- Project Level: Inside the `.cursorrules` file in the root of this repository.
### Access via Environment Variable
You can reference the global rules path by setting an environment variable in your terminal or system configuration:
```powershell
$env:ANTIGRAVITY_RULES_PATH = "$HOME\.gemini\GEMINI.md"
```
### The Rules
To ensure Antigravity uses this MCP correctly, configure the following rules in your System Prompt:
#### Antigravity Global Rules
- Prompt Refinement: Whenever the user sends a request, first announce to the user: "Refining your request with OpenCode for technical precision...", then use `opencode:refine_prompt`.
- Context Enrichment: Upon receiving the refined prompt, check whether technical terms or project patterns require an additional lookup in semantic memory. Mention if you are pulling specific context from OpenCode memory.
- Continuous Learning: After successfully implementing a complex feature, use `opencode:learn_context`. Briefly inform the user that this knowledge is being persisted in OpenCode's semantic memory.
> [!TIP]
> You can find the raw version of these rules in the `.cursorrules` or `GEMINI.md` file for easy copying into your System Prompt.
## Available Tools
The OpenCode MCP provides the following tools:
### 1. refine_prompt
Refines a development prompt to make it clearer and more efficient, injecting targeted context via XML tags.
- Arguments:
  - `prompt` (string): The original prompt that needs refinement.
  - `categoryFilter` (string, optional): Category used to filter memories (e.g., `architecture`, `style`) to increase precision and reduce token usage.
### 2. learn_context
Memorizes important information (preference, technical rule, context) for future use in semantic memory.
- Arguments:
  - `information` (string): The information to be remembered.
  - `category` (string, optional): Information category (e.g., `preference`, `architecture`, `style`).
### 3. search_memory
Directly queries the semantic memory without refining a prompt.
- Arguments:
  - `query` (string): The search query.
  - `category` (string, optional): Filter results by category.
  - `limit` (number, optional): Number of results to return.
### 4. index_codebase
Performs a recursive scan of the project to build a structural map in memory.
- Arguments:
  - `path` (string, optional): Root path to scan.
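Outside of Antigravity, these tools can also be exercised programmatically. Here is a sketch using the official TypeScript MCP client (the client name and the `build/index.js` path follow the local configuration above; the tool arguments are illustrative):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["build/index.js"] })
);

// Refine a vague prompt, scoped to architecture memories.
const refined = await client.callTool({
  name: "refine_prompt",
  arguments: { prompt: "How do I fix the auth bug?", categoryFilter: "architecture" },
});
console.log(refined);

// Persist a new fact for future retrieval.
await client.callTool({
  name: "learn_context",
  arguments: {
    information: "Auth uses JWT with a legacy session fallback",
    category: "architecture",
  },
});
```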
## 📊 Semantic Dashboard
You can visualize your memory health and stats using the local dashboard:
```powershell
node dashboard.cjs
```
## Remote Access (SSE)
The server supports remote access via SSE (Server-Sent Events). To run in remote mode in PowerShell:

```powershell
$env:MCP_MODE="sse"; $env:PORT="3000"; npm start
```
## Development
To run the server in development mode with hot-reload in PowerShell:
```powershell
npm run dev
```
### Debugging
You can test the server locally by running in PowerShell:
```powershell
node build/test-mcp.js
```
## Token Efficiency Validation
A technical analysis was performed to measure the efficiency of semantic retrieval vs. full-context injection.
### Test Scenario: Auth Middleware Migration
- Knowledge Base: Complex technical documentation for migrating session-based authentication to JWT, including security rules and legacy fallback patterns (~8,000 characters).
- Query: "How to implement the JWT fallback for legacy session endpoints?"
### Results
| Metric | Traditional (Full Context) | MCP (Semantic Retrieval) | Efficiency Gain |
| :--- | :--- | :--- | :--- |
| Characters Sent | ~8,000 | ~950 | ~88% Savings |
| Tokens (Est. 1:4) | ~2,000 | ~238 | ~88% Savings |
| Response Accuracy | Medium (Noise risk) | High (Exact context) | Qualitative Boost |
Why is it more efficient? OpenCode MCP implements a local RAG (Retrieval-Augmented Generation) architecture. Instead of sending 100% of your documentation or source files, it uses vector embeddings to identify and inject only the most relevant snippets, drastically reducing the input token count for the main model.
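The token figures in the table follow directly from the 1:4 character heuristic:

```typescript
// Back-of-the-envelope check of the table above (1 token ≈ 4 characters).
const estTokens = (chars: number) => Math.ceil(chars / 4);

const fullContext = estTokens(8000); // ~2000 tokens
const retrieved = estTokens(950);    // ~238 tokens
console.log(`savings: ${(100 * (1 - retrieved / fullContext)).toFixed(0)}%`); // ~88%
```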
## Advanced Optimizations
To maximize efficiency, the server actively implements:
- Semantic Chunking: Large knowledge blocks are automatically split into smaller, focused chunks before being embedded, ensuring only the exact relevant paragraph is retrieved (see the sketch after this list).
- Category Filtering: Queries can be scoped to specific categories (e.g., `architecture` or `style`), significantly reducing noise and allowing the result limit to be tightened.
- XML Context Formatting: Retrieved memories are injected into the prompt using strict XML tags (`<semantic_memory>` and `<context_item>`). This aligns with how modern LLMs best parse context, eliminating attention dilution.
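A paragraph-based chunker of the kind described above could look like this (a hypothetical sketch; the repository's actual splitting strategy may differ):

```typescript
// Split on blank lines (paragraphs), then merge neighbours up to a size
// cap, so each embedded chunk stays small and topically focused.
function chunk(text: string, maxChars = 600): string[] {
  const paragraphs = text.split(/\n\s*\n/).map((p) => p.trim()).filter(Boolean);
  const chunks: string[] = [];
  let current = "";
  for (const p of paragraphs) {
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current); // current chunk is full; start a new one
      current = p;
    } else {
      current = current ? `${current}\n\n${p}` : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```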
## Benefits of OpenCode MCP
- Token Savings: By refining prompts locally, we reduce the context load sent to Antigravity.
- Enriched Context: OpenCode can access local files and provide richer context for Antigravity.
- Agility: Fast responses for refinement tasks.