# mcp-agentify

**AI-Powered MCP Gateway for Tool Orchestration.** An MCP orchestrator that converts MCP servers into agents.

## Overview

`mcp-agentify` is a Node.js/TypeScript application acting as an AI-Powered MCP (Model Context Protocol) Gateway. This Gateway will:
- Function as an MCP server, primarily communicating via `stdio`.
- Accept requests from a client IDE (e.g., Cursor) through a primary MCP method: `agentify/orchestrateTask`.
- Utilize OpenAI's API (specifically Tool Calling) to interpret user queries and context, select appropriate backend MCP tools, and formulate the MCP calls.
- Dynamically manage `stdio`-based connections to backend MCP servers.
- Proxy MCP calls to chosen backends and return responses.
- Be runnable via `npx` or as a dependency.
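For orientation, here is a hedged sketch of what a client request to `agentify/orchestrateTask` might look like as a JSON-RPC message. The parameter names are illustrative; only the method name and the query/context style of payload (described later for dynamic agents) are taken from this README.

```ts
// Hypothetical request shape for agentify/orchestrateTask, framed as a
// JSON-RPC 2.0 message. The exact params schema is defined by mcp-agentify;
// this only illustrates a natural-language query plus optional context.
interface OrchestrateTaskParams {
    query: string;                      // natural-language task description
    context?: Record<string, unknown>;  // optional OrchestrationContext (IDE state, open files, ...)
}

const params: OrchestrateTaskParams = {
    query: 'Open example.com in a browser and save the page title to notes.txt',
    context: { activeWorkspace: '/Users/Shared/Projects' },
};

const request = {
    jsonrpc: '2.0',
    id: 1,
    method: 'agentify/orchestrateTask',
    params,
};

console.log(JSON.stringify(request, null, 2));
```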
## Features

- **Unified MCP Endpoint:** Provides a single MCP server endpoint for client applications.
- **Intelligent Task Orchestration:** Uses OpenAI (e.g., GPT-4 Turbo) to understand natural language and select from configured backend tools.
- **Dynamic Backend Management:** Configure backend MCP servers (like `@modelcontextprotocol/server-filesystem`, `@browserbasehq/mcp-browserbase`) via `initializationOptions`.
- **Simplified Client Logic:** Centralizes tool selection and MCP call formulation.
- **Stdio Communication:** Designed for easy integration with IDEs and other tools via standard I/O.
- **Optional Frontend UI:** For observing logs, traces, and status.
## Installation

As a dependency in your project:

```bash
npm install mcp-agentify
# or
yarn add mcp-agentify
```

To run globally using npx (once published):

```bash
npx mcp-agentify
```
## Configuration

`mcp-agentify` is configured through a combination of environment variables (often set via a `.env` file for local development, or an `env` block in an IDE's server configuration) and `initializationOptions` provided by the connecting MCP client during the `initialize` handshake.

Priority of core settings (for `mcp-agentify` itself):

1. **Environment variables:** `OPENAI_API_KEY`, `LOG_LEVEL`, and `FRONTEND_PORT` set in `mcp-agentify`'s own execution environment (e.g., from `.env` or the IDE's `env` block for the server process) take highest precedence. This allows the Frontend Server to start immediately. If `FRONTEND_PORT` is set to the exact string `"disabled"`, the Frontend Server will not be started.
2. **`initializationOptions` from the client:** These same keys can be provided by the client as fallbacks if they are not set in the environment.
3. **Internal defaults:** e.g., `logLevel` defaults to `'info'`.
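As a rough illustration of that precedence (environment variable, then client option, then default), here is a minimal sketch; the option and function names are illustrative, not `mcp-agentify`'s actual internals.

```ts
// Sketch of the env > initializationOptions > default resolution order.
interface ClientInitOptions {
    logLevel?: string;
    OPENAI_API_KEY?: string;
    FRONTEND_PORT?: number | string;
}

function resolveLogLevel(initOptions: ClientInitOptions): string {
    return process.env.LOG_LEVEL        // 1. environment variable wins
        ?? initOptions.logLevel         // 2. client-provided fallback
        ?? 'info';                      // 3. internal default
}

function resolveFrontendPort(initOptions: ClientInitOptions): number | null {
    const raw = process.env.FRONTEND_PORT ?? initOptions.FRONTEND_PORT;
    if (raw === undefined || raw === 'disabled') return null; // "disabled" turns the UI off
    return Number(raw);
}
```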
### 1. Environment Variables (`.env` file or IDE `env` block)

This is the recommended way to set `OPENAI_API_KEY`, `LOG_LEVEL`, and `FRONTEND_PORT` for `mcp-agentify`'s own operation.

Example `.env` file (for local `scripts/dev.sh` or `npm run dev`):

```
OPENAI_API_KEY=sk-YourOpenAIKeyHereFromDotEnv
LOG_LEVEL=debug
FRONTEND_PORT=3030
# To disable the Frontend UI server, uncomment the next line:
# FRONTEND_PORT="disabled"

# Optional: Define dynamic agents. Comma-separated list of "Vendor/ModelName".
# Example: AGENTS="OpenAI/gpt-4.1,OpenAI/o3,Anthropic/claude-3-opus"
# This will expose MCP methods like: agentify/agent_OpenAI_gpt_4_1, agentify/agent_OpenAI_o3, etc.
AGENTS="OpenAI/gpt-4.1,OpenAI/o3"
```

When configuring `mcp-agentify` in an IDE, you'll typically have a way to specify environment variables for the server process; that is where these settings should go.
### 2. MCP `initialize` Request (`initializationOptions`)

The connecting client (IDE) sends `initializationOptions`. This is primarily used to define the `backends` that `mcp-agentify` will orchestrate.

Example `initializationOptions` (JSON sent by the client):

```json
{
    "logLevel": "trace",
    "OPENAI_API_KEY": "sk-ClientProvidedKeyAsFallbackIfEnvNotSet",
    "FRONTEND_PORT": 3001,
    "backends": [
        {
            "id": "filesystem",
            "displayName": "Local Filesystem Access",
            "type": "stdio",
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "/Users/Shared/Projects",
                "/tmp/agentify-work"
            ],
            "env": {
                "FILESYSTEM_LOG_LEVEL": "debug"
            }
        },
        {
            "id": "mcpBrowserbase",
            "displayName": "Cloud Browser (Browserbase)",
            "type": "stdio",
            "command": "npx",
            "args": [
                "-y",
                "@smithery/cli@latest",
                "run",
                "@browserbasehq/mcp-browserbase",
                "--key", "bb_api_YOUR_KEY_AS_ARG_FOR_BROWSERBASE"
            ]
        }
    ]
}
```
Key fields in `initializationOptions`:

- `logLevel`, `OPENAI_API_KEY`, `FRONTEND_PORT` (optional fallbacks): As mentioned above, `mcp-agentify` prioritizes its own environment variables for these.
- `backends` (required, array): Defines the backend MCP servers.
  - `id`: Unique identifier (e.g., `"filesystem"`).
  - `displayName` (optional): Human-readable name.
  - `type`: Must be `"stdio"`.
  - `command`: Command to start the backend.
  - `args` (optional): Arguments for the command.
  - `env` (optional): Environment variables specifically for this spawned backend process.
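Expressed as a TypeScript sketch (field names taken from the list above; the actual types in `mcp-agentify`'s source may differ in naming and optionality):

```ts
// Approximate shape of a backends entry and of the initializationOptions object.
interface BackendConfig {
    id: string;                        // unique identifier, e.g. "filesystem"
    displayName?: string;              // human-readable name
    type: 'stdio';                     // only stdio backends are supported
    command: string;                   // command used to spawn the backend
    args?: string[];                   // arguments for the command
    env?: Record<string, string>;      // extra environment for the spawned process
}

interface GatewayInitializationOptions {
    logLevel?: string;
    OPENAI_API_KEY?: string;
    FRONTEND_PORT?: number | string;
    backends: BackendConfig[];
}
```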
## How to Run & Configure with an MCP Client (IDE)

Your IDE (e.g., Cursor, Windsurf, Claude Desktop) will launch `mcp-agentify`.

### Configuring Your IDE

You need to tell your IDE:

- **How to start `mcp-agentify`:** This is typically the `command` and `args` (if any), and the `workingDirectory`. For local development, this often points to `bash scripts/dev.sh` or `npm run dev`.
- **Environment variables for `mcp-agentify`:** Set `OPENAI_API_KEY`, `LOG_LEVEL`, and `FRONTEND_PORT` here.
- **`initializationOptions`:** Provide the JSON for `backends` and any fallback settings.
Conceptual IDE configuration example (e.g., for a `claude_desktop_config.json`-like file):

```json
{
    "mcpServers": [
        {
            "mcp-agentify": {
                "type": "stdio",
                "command": "/Users/steipete/Projects/mcp-agentify/scripts/dev.sh",
                "env": {
                    "logLevel": "trace",
                    "FRONTEND_PORT": 3030,
                    "OPENAI_API_KEY": "sk-YourOpenAIKeyFromIDESettingsPlaceholder"
                },
                "initializationOptions": {
                    "backends": [
                        {
                            "id": "filesystem",
                            "displayName": "Local Filesystem (Agentify)",
                            "type": "stdio",
                            "command": "npx",
                            "args": [
                                "-y",
                                "@modelcontextprotocol/server-filesystem",
                                "${workspaceFolder}"
                            ]
                        },
                        {
                            "id": "mcpBrowserbase",
                            "displayName": "Web Browser (Browserbase via Agentify)",
                            "type": "stdio",
                            "command": "npx",
                            "args": [
                                "-y",
                                "@smithery/cli@latest",
                                "run",
                                "@browserbasehq/mcp-browserbase",
                                "--key",
                                "YOUR_BROWSERBASE_KEY_IF_NEEDED"
                            ]
                        }
                    ]
                }
            }
        }
        // ... other MCP server configurations ...
    ]
}
```
Key points for IDE configuration:

- The IDE's `env` block for the `mcp-agentify` server is crucial for setting its core operational parameters like `OPENAI_API_KEY`, `logLevel`, and `FRONTEND_PORT` (for an immediately available Frontend UI).
- `initializationOptions` is mainly for defining the `backends` array.
- Use placeholders like `${workspaceFolder}` if your IDE supports them.
Local development startup methods (referenced by the IDE `command`):

- `bash scripts/dev.sh`:
  - Recommended for IDEs.
  - Uses `nodemon` and `ts-node`.
  - Picks up `.env` from the `mcp-agentify` project root for `OPENAI_API_KEY`, `LOG_LEVEL`, and `FRONTEND_PORT`.
  - The IDE's `env` block settings (see the example above) would override these if the IDE sets environment variables when launching the script.
- `npm run dev`:
  - Similar to `bash scripts/dev.sh`.
  - Also uses `nodemon` and `ts-node`.
  - Also respects `.env` and environment variables set by the IDE.
## Frontend UI

`mcp-agentify` includes an optional Frontend UI, also referred to as the Frontend Server.

### Enabling the Frontend UI

Set the `FRONTEND_PORT` environment variable for `mcp-agentify`. This is best done via:

- A `.env` file in the `mcp-agentify` project root when running locally:

  ```
  FRONTEND_PORT=3030
  # To disable, set FRONTEND_PORT="disabled"
  ```

- The `env` block in your IDE's server configuration for `mcp-agentify`.

The Frontend UI starts immediately when `mcp-agentify` launches if `FRONTEND_PORT` is set to a valid number in its environment. If `FRONTEND_PORT` is set to `"disabled"`, the UI server will not start. If the port is provided only as a fallback in `initializationOptions` by a client, the UI starts after the MCP handshake (unless disabled by an environment variable).
### Accessing the Frontend UI

Once `mcp-agentify` is running and the Frontend UI is enabled (e.g., `FRONTEND_PORT=3030` in its environment), open `http://localhost:3030` (replace 3030 with your `FRONTEND_PORT` if different).
### Features

The Frontend UI provides the following sections:

- **Gateway Status:**
  - Shows the overall status of the gateway (e.g., running, uptime).
  - Lists configured backend MCP servers and their readiness status (e.g., "Filesystem: Ready", "Browserbase: Not Ready").
- **Gateway Configuration:**
  - Displays the current (sanitized) configuration the gateway is using, including log level, backend definitions, etc. Sensitive information like API keys is redacted.
- **Real-time Logs:**
  - Streams logs directly from the gateway in real time via WebSockets.
  - Allows filtering logs by minimum severity level (Trace, Debug, Info, Warn, Error, Fatal).
  - Provides an "Auto-scroll" option to keep the latest logs in view.
  - Displays log timestamps, levels, messages, and any structured details.
- **MCP Traces:**
  - Streams MCP messages exchanged between the gateway and backend servers, as well as between the client IDE and the gateway.
  - Shows direction (incoming to gateway, outgoing from gateway), backend ID (if applicable), MCP method, request/response ID, and sanitized parameters or results.
  - Also provides an "Auto-scroll" option.
### How it Works

- The `FrontendServer` component (`src/frontendServer.ts`) serves the static HTML, CSS, and JavaScript files located in `frontend/public/`.
- It provides API endpoints (`/api/status`, `/api/config`, `/api/logs`, `/api/mcptrace`) that the frontend JavaScript uses to fetch initial state or paginated historical data (though historical data fetching is not fully implemented in the PoC's UI script).
- A WebSocket connection is established between the frontend UI and the `FrontendServer`.
- The gateway's main logger (`src/logger.ts`) is configured to pipe log entries (as JSON objects) to the `FrontendServer` when the frontend UI is active.
- The `BackendManager` and main server logic (`src/server.ts`) emit MCP trace events.
- The `FrontendServer` receives these log entries and trace events and broadcasts them to all connected WebSocket clients (i.e., open frontend UI pages).
- The client-side JavaScript (`frontend/src/index.tsx` and components) receives these WebSocket messages and dynamically updates the corresponding sections in the HTML to display the information.
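A rough sketch of the broadcast path described above, assuming the `ws` package; the class and method names are illustrative, not the actual `src/frontendServer.ts` API.

```ts
// Minimal fan-out: the gateway's logger or BackendManager calls broadcast()
// for each log entry or MCP trace event, and every connected UI page receives it.
import { WebSocketServer, WebSocket } from 'ws';

class FrontendBroadcaster {
    private readonly wss: WebSocketServer;

    constructor(port: number) {
        this.wss = new WebSocketServer({ port });
    }

    // Called once per log entry or MCP trace event.
    broadcast(kind: 'log' | 'mcptrace', payload: unknown): void {
        const message = JSON.stringify({ kind, payload, ts: Date.now() });
        for (const client of this.wss.clients) {
            if (client.readyState === WebSocket.OPEN) client.send(message);
        }
    }
}
```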
## Local Install and Global Usage (Advanced)

While `npm run dev` is great for active development and `npx mcp-agentify` (once published) is convenient for project-local use, you might want to install `mcp-agentify` globally from your local clone for broader testing, or to simulate how a published global package would behave.
### 1. Global Install from Local Clone

After cloning the repository and ensuring all dependencies are installed (`npm install`):

- Navigate to the project root directory:

  ```bash
  cd path/to/mcp-agentify
  ```

- Build the project (if you want to install the compiled version):

  ```bash
  npm run build
  ```

- Install globally. To install the current local version globally, use:

  ```bash
  npm install -g .
  ```

  This command links the current directory (`.`) as a global package. If you've run `npm run build`, it will typically link the compiled version based on your `package.json`'s `bin` and `files` fields.

- Run the globally installed command. You should now be able to run `mcp-agentify` from any directory:

  ```bash
  mcp-agentify
  ```

  The gateway will start and listen on `stdio`.

- Uninstalling. To remove the global link, you'll typically use the package name defined in `package.json`:

  ```bash
  npm uninstall -g @your-scope/mcp-agentify  # Replace with actual package name
  ```

  If you used a different name, or if it was just a link, `npm unlink .` from the project directory might also be needed, or check `npm list -g --depth=0` to find the linked package name.
### 2. Using `npm link` (Recommended for Development)

`npm link` is a more development-friendly way to create a global-like symlink to your local project. Changes you make to your local code (even without rebuilding, if you run the linked version via `ts-node` or if your IDE points to the source) can be reflected immediately when you run the global command.

- Navigate to the project root directory:

  ```bash
  cd path/to/mcp-agentify
  ```

- Create the link:

  ```bash
  npm link
  ```

  This creates a global symlink named after your package name (e.g., `mcp-agentify` or `@your-scope/mcp-agentify`) that points to your current project directory.

- Run the linked command. You can now run `mcp-agentify` (or your package name) from any terminal:

  ```bash
  mcp-agentify
  ```

  If your `package.json` `bin` points to `dist/cli.js`, you'll need to run `npm run build` for changes to `src` to be reflected in the linked command. If your `bin` instead points to a `ts-node` invoker for `src/cli.ts` (a more advanced setup), changes may be live.

- Unlinking. To remove the symlink:

  ```bash
  npm unlink --no-save @your-scope/mcp-agentify  # Replace with actual package name
  # or, from the project directory:
  # npm unlink
  ```
**Note on `.env` with global installs:** When running a globally installed or linked `mcp-agentify`, it looks for a `.env` file in the current working directory from which you run the command, not necessarily in the `mcp-agentify` project's original root. For consistent behavior, especially with API keys, ensure your `.env` file is in the directory where you execute the `mcp-agentify` command, or configure these settings via `initializationOptions` from your client tool.
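For illustration, the snippet below shows why the `.env` lookup is relative to the working directory when dotenv's default `config()` is used; `mcp-agentify`'s own startup code may load it differently.

```ts
// dotenv's default config() resolves ".env" against process.cwd(), not against
// the package's install location, which is why the working directory matters.
import path from 'node:path';
import dotenv from 'dotenv';

dotenv.config(); // equivalent to dotenv.config({ path: path.resolve(process.cwd(), '.env') })

console.log('Looked for .env in:', path.resolve(process.cwd(), '.env'));
console.log('OPENAI_API_KEY present:', Boolean(process.env.OPENAI_API_KEY));
```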
## Development

- Clone the repository:

  ```bash
  git clone https://github.com/steipete/mcp-agentify.git
  ```

- Navigate to the project directory:

  ```bash
  cd mcp-agentify
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a `.env` file in the project root (copy from `.env.example`) and add your `OPENAI_API_KEY`:

  ```
  OPENAI_API_KEY=your_openai_api_key_here
  LOG_LEVEL=debug
  FRONTEND_PORT=3030
  ```

- Run in development mode (with hot reloading):

  ```bash
  npm run dev
  ```

  This uses `nodemon` and `ts-node` to execute `src/cli.ts`.
## Testing

Run tests with Vitest:

```bash
npm test
```

To run in watch mode:

```bash
npm run test:watch
```

To get a coverage report:

```bash
npm run test:coverage
```

(Note: Unit and integration tests are planned under Tasks 11 and 12 respectively.)
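Once tests land, a minimal Vitest file might look like the sketch below; the helper under test is hypothetical and only mirrors the `FRONTEND_PORT="disabled"` rule described earlier.

```ts
// tests/frontendPort.test.ts — illustrative only, not an existing test file.
import { describe, expect, it } from 'vitest';

function resolveFrontendPort(raw: string | undefined): number | null {
    if (raw === undefined || raw === 'disabled') return null;
    return Number(raw);
}

describe('FRONTEND_PORT handling', () => {
    it('returns a numeric port when one is configured', () => {
        expect(resolveFrontendPort('3030')).toBe(3030);
    });

    it('disables the frontend when set to "disabled" or unset', () => {
        expect(resolveFrontendPort('disabled')).toBeNull();
        expect(resolveFrontendPort(undefined)).toBeNull();
    });
});
```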
## License

## Dynamic Agent Methods via the `AGENTS` Environment Variable

`mcp-agentify` can expose direct agent interaction methods on the fly based on the `AGENTS` environment variable. This is useful for quickly testing different models or providing direct access to specific LLM configurations without defining them as full backend tools.
- Set the `AGENTS` environment variable as a comma-separated string of `"Vendor/ModelName"` pairs.
  - Format: `AGENTS="Vendor1/ModelNameA,Vendor2/ModelNameB"`
  - Example: `AGENTS="OpenAI/gpt-4.1,OpenAI/o3"`
  - Note on the "OpenAI" vendor: the vendor name "OpenAI" is treated case-insensitively and is standardized to lowercase `openai` (e.g., "OPENAI/gpt-4.1" becomes "openai/gpt-4.1"). Other vendor names are case-sensitive.
  - Ensure the model names are valid for the specified vendor, e.g., as per the OpenAI API documentation for `gpt-4.1`, `o3`, etc.
- For each entry, `mcp-agentify` registers an MCP method:
  - The `Vendor/ModelName` string is sanitized (non-alphanumeric characters, including `/`, become `_`).
  - The method is named `agentify/agent_<sanitized_Vendor_ModelName>`.
  - Example: `AGENTS="OpenAI/gpt-4.1"` creates `agentify/agent_OpenAI_gpt_4_1` (see the sketch after this list).
- These methods currently accept a `{ query: string, context?: OrchestrationContext }` payload and return a placeholder response. Full LLM interaction logic for these dynamic agents will be implemented in the future.
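For illustration, a minimal sketch of the sanitization rule described above. The helper name is hypothetical, and the real implementation may additionally normalize the `OpenAI` vendor casing as noted; the output here simply matches the README's examples.

```ts
// Hypothetical helper mirroring the naming rule above: every non-alphanumeric
// character (including "/" and ".") becomes "_", then the result is prefixed
// with "agentify/agent_".
function agentMethodName(vendorModel: string): string {
    const sanitized = vendorModel.replace(/[^a-zA-Z0-9]/g, '_');
    return `agentify/agent_${sanitized}`;
}

// AGENTS="OpenAI/gpt-4.1,OpenAI/o3"
console.log(agentMethodName('OpenAI/gpt-4.1')); // agentify/agent_OpenAI_gpt_4_1
console.log(agentMethodName('OpenAI/o3'));      // agentify/agent_OpenAI_o3
```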