MCP Ollama

Small local MCP server for routing Codex tool calls to an Ollama instance.

This project is useful when you want to keep Codex as the main coding agent, but offload selected prompts to a local or self-hosted Ollama model.

What it does

  • exposes Ollama as an MCP tool server over stdio
  • works with Codex MCP configuration
  • includes an optional Codex skill for @ollama, /ollama, and #ollama routing patterns

Included tools

  • ollama_ping: checks whether Ollama is reachable
  • ollama_list_models: lists installed Ollama models
  • ollama_chat: sends a prompt to Ollama and returns the reply
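
These three tools presumably wrap Ollama's standard HTTP API. A minimal sketch of the requests each one would issue; the endpoints and fields are Ollama's documented REST API, but the function names are illustrative and not taken from this repository:

```javascript
// Sketch of the Ollama endpoints the three tools likely wrap (assumed, not
// read from this repo's index.js). Requires Node 18+ for global fetch.
const BASE = process.env.OLLAMA_BASE_URL || "http://127.0.0.1:11434";

// ollama_ping: any cheap request works; GET / answers "Ollama is running".
async function ping() {
  const res = await fetch(BASE + "/");
  return res.ok;
}

// ollama_list_models: GET /api/tags returns the installed models.
async function listModels() {
  const res = await fetch(BASE + "/api/tags");
  const body = await res.json();
  return body.models.map((m) => m.name);
}

// ollama_chat: POST /api/chat; stream:false asks for one JSON reply
// instead of a token stream.
function buildChatRequest(model, prompt) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    stream: false,
  };
}
```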

Quick start

  1. Install dependencies:

     npm install

  2. Set your Ollama environment variables (PowerShell shown; use export in Unix shells):

     $env:OLLAMA_BASE_URL="http://127.0.0.1:11434"
     $env:OLLAMA_MODEL="gemma4:latest"

  3. Run the server locally:

     npm start

This server uses stdio, so in normal usage it is meant to be launched by an MCP client such as Codex.

Environment variables

  • OLLAMA_BASE_URL: defaults to http://127.0.0.1:11434
  • OLLAMA_MODEL: defaults to llama3.1:8b

Example (PowerShell):

$env:OLLAMA_BASE_URL="http://127.0.0.1:11434"
$env:OLLAMA_MODEL="gemma4:latest"
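
Inside the server, these defaults can be resolved with plain process.env lookups. A minimal sketch, assuming the variable names and default values documented above (the actual index.js may differ):

```javascript
// Resolve configuration from the environment, falling back to the
// defaults stated in this README.
const OLLAMA_BASE_URL = process.env.OLLAMA_BASE_URL || "http://127.0.0.1:11434";
const OLLAMA_MODEL = process.env.OLLAMA_MODEL || "llama3.1:8b";
```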

Example Codex MCP registration

Add a block like this to ~/.codex/config.toml:

[mcp_servers.mcp_ollama]
command = "node"
args = ["C:\\path\\to\\MCP-ollama\\index.js"]

[mcp_servers.mcp_ollama.env]
OLLAMA_BASE_URL = "http://127.0.0.1:11434"
OLLAMA_MODEL = "gemma4:latest"

A ready-to-edit sample is included in config.example.toml.

Optional skill

This repo also includes a sample Codex skill at skills/ollama-routing.

It adds these prefixes:

  • @ollama: call Ollama and return the raw response as directly as possible
  • /ollama: call Ollama and let Codex clean up the result
  • #ollama: show both the raw Ollama output and a short Codex post-processed version

To install it locally, copy the folder into ~/.codex/skills/ollama-routing.

Example usage

Once the MCP server and skill are installed, each prefix is intended to behave differently:

  • @ollama explain this stack trace
  • /ollama summarize this answer in Korean
  • #ollama compare these two ideas

The general idea is:

  • @ollama: raw or near-verbatim Ollama output
  • /ollama: Ollama first, then Codex post-processing
  • #ollama: show both
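
The dispatch behind these prefixes can be as simple as a prefix check. An illustrative sketch, not code from the skill itself; the mode names are descriptive labels, not identifiers from the repo:

```javascript
// Map a user message to a routing mode based on its @/#/slash prefix.
function routeForPrefix(input) {
  if (input.startsWith("@ollama")) return "raw";         // near-verbatim Ollama output
  if (input.startsWith("/ollama")) return "postprocess"; // Ollama first, then Codex cleanup
  if (input.startsWith("#ollama")) return "both";        // show raw and cleaned versions
  return "codex";                                        // no prefix: normal Codex handling
}
```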

Notes

  • Do not commit node_modules
  • Prefer committing package-lock.json
  • Replace the sample paths and base URL with your own environment