Created 4/12/2026
Updated about 19 hours ago

SenseMCP

Spatial reasoning layer for AI agents. An MCP server that replaces flat retrieval (RAG) with directional, cone-based navigation through semantic space — giving LLMs the ability to explore, focus, pivot, and remember their position within a knowledge base.

"Your AI agent searches. Ours navigates."


Why SenseMCP

Standard RAG is stateless and single-shot: embed a query, fetch top-k similar documents, done. This works for simple lookups but breaks down for complex reasoning:

| Problem | RAG Behavior | SenseMCP Behavior |
|---------|--------------|-------------------|
| Multi-hop questions | Retrieves from one direction only | Navigates between concepts via path-finding |
| Finding connections | Misses items that share no keywords | Discovers bridge items through spatial traversal |
| Broad exploration | Returns a narrow cluster of similar items | Scans space, detects clusters, looks in multiple directions |
| Ambiguous queries | Best-guess top-k from one embedding | Wide cone reveals clusters, then narrow to disambiguate |
| Follow-up questions | Starts fresh every time | Maintains spatial position and exploration memory |


How It Works

The Cone Query Model

Every piece of data is a point in high-dimensional embedding space. Instead of cosine similarity search, SenseMCP uses cone-shaped queries — like giving the agent a field of vision:

Given:
  origin    O  — agent's current position in embedding space
  direction D  — unit vector toward the query concept
  angle     theta  — cone half-angle (narrow = precise, wide = exploratory)
  depth     r  — maximum distance from origin

A point P is inside the cone if:
  |P - O| <= r                              (within depth)
  cos(angle between (P-O) and D) >= cos(theta)  (within cone angle)

Points inside the cone are scored by a composite of alignment, distance, and importance — then optionally reranked for diversity or novelty.
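
The membership test above can be sketched directly. This is a minimal illustration, not the project's actual API — the names `ConeQuery` and `coneContains` are invented here, and `direction` is assumed to be unit-norm as stated above:

```typescript
// Minimal sketch of the cone membership test (illustrative names, not the real API).
type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const norm = (a: Vec) => Math.sqrt(dot(a, a));
const sub = (a: Vec, b: Vec) => a.map((x, i) => x - b[i]);

interface ConeQuery {
  origin: Vec;    // O — agent's current position
  direction: Vec; // D — unit vector toward the query concept
  theta: number;  // cone half-angle, in radians
  depth: number;  // r — maximum distance from origin
}

function coneContains(q: ConeQuery, p: Vec): boolean {
  const v = sub(p, q.origin);
  const dist = norm(v);
  if (dist === 0 || dist > q.depth) return false; // |P - O| <= r
  const cosAngle = dot(v, q.direction) / dist;    // D assumed unit-norm
  return cosAngle >= Math.cos(q.theta);           // within cone half-angle
}
```

A point straight ahead at distance 1 passes a 45° cone of depth 10; a point at right angles, or beyond the depth, does not.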

Adaptive Scoring

The cone engine adapts its scoring weights based on data distribution:

  • When points are at varied distances (positional queries): 50% alignment, 30% proximity, 20% importance
  • When points cluster at similar distances (unit-norm embeddings from origin): shifts weight from distance to alignment — 70% alignment, 10% proximity, 20% importance

This makes SenseMCP work well with both positional navigation and origin-based queries without manual tuning.
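
The switch between the two weight profiles might be implemented roughly as follows. This is a hypothetical sketch: the spread statistic (coefficient of variation) and the threshold value are assumptions, not the project's actual heuristic — only the two weight profiles come from the description above:

```typescript
// Hypothetical weight-switching heuristic; threshold and statistic are assumptions.
interface Weights { alignment: number; proximity: number; importance: number }

function adaptiveWeights(distances: number[], spreadThreshold = 0.1): Weights {
  const mean = distances.reduce((s, d) => s + d, 0) / distances.length;
  const variance =
    distances.reduce((s, d) => s + (d - mean) ** 2, 0) / distances.length;
  const relSpread = Math.sqrt(variance) / (mean || 1); // coefficient of variation
  return relSpread < spreadThreshold
    ? { alignment: 0.7, proximity: 0.1, importance: 0.2 } // clustered distances
    : { alignment: 0.5, proximity: 0.3, importance: 0.2 }; // varied distances
}
```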

The Navigation Loop

Instead of single-shot retrieval, agents navigate iteratively:

1. SCAN the space       -> discover what clusters exist
2. LOOK in a direction  -> see what's in a semantic cone
3. MOVE toward interest -> shift position for relative queries
4. LOOK again           -> see new things from new vantage point
5. PATH between concepts -> find stepping-stone connections
6. INTERSECT directions -> find items at the overlap of multiple concepts

Each action updates the agent's spatial state — position, history, explored regions — enabling informed follow-up decisions.
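
As an illustration only (the query strings and cluster names below are invented; the tool names and the `toward:<concept>` target syntax are from the tool reference), a multi-hop session over this loop might look like:

```
sense_scan                                     -> clusters: biology, optimization, ...
sense_look    direction: "enzyme kinetics"
sense_move    target: "toward:optimization"
sense_look    direction: "gradient methods"       (now relative to the new position)
sense_path    from: "enzyme kinetics"  to: "simulated annealing"
```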


Architecture

AI Agent (Claude, GPT, etc.)
    |
    | MCP Protocol (stdio)
    v
+--SenseMCP Server-------------------------------------------+
|                                                             |
|  Tools Layer (12 MCP tools)                                 |
|    look, scan, focus, move, path, intersect, zoom,          |
|    suggest, remember, history, ingest, status                |
|                                                             |
|  Navigation Controller (navigator.ts)                       |
|    Orchestrates all spatial operations                       |
|    Manages reranking, novelty boost, auto-widening           |
|                                                             |
|  Core Engines                                               |
|    Cone Engine ---- cone queries, intersection, soft union   |
|    Reranker ------- blend, MMR diversity, novelty boost      |
|    Clusterer ------ k-means for scan/overview                |
|    Strategy Advisor  query analysis + tool recommendation    |
|    Query Expander -- multi-directional sub-query generation  |
|    Session Manager - position, history, bookmarks, regions   |
|                                                             |
|  Adapters                                                   |
|    Vector Store: InMemory | HNSW (hnswlib-node)             |
|    Embeddings:  Synthetic | Local MiniLM | OpenAI           |
+-------------------------------------------------------------+

MCP Tools Reference

Core Navigation

| Tool | Description |
|------|-------------|
| sense_look | Look in a semantic direction. Returns items within a cone of attention. Supports reranking (blend/MMR), novelty boost, and auto-widening. |
| sense_scan | Wide sweep of the space. Returns cluster summaries with centroids — good for getting an overview before drilling down. |
| sense_focus | Narrow, high-precision search. Small cone angle + high alignment threshold for when you know exactly what you need. |
| sense_move | Shift position in semantic space. Targets: toward:<concept>, to_point:<id>, to_bookmark:<name>. All subsequent queries become relative to the new position. |
| sense_path | Find a trajectory between two concepts with intermediate stepping-stone items. Returns items at each step along the path. |
| sense_intersect | Find items at the geometric intersection of 2-5 semantic cones. Supports strict (all cones) and soft (weighted coverage) modes. |
| sense_zoom | Progressive zoom: starts wide and narrows through stages (e.g. 80-50-25 degrees), finding increasingly precise results in a single call. |
| sense_suggest | Meta-tool: analyzes a query and recommends which navigation strategy to use (direct, intersection, exploration, path, or zoom). |

Memory & State

| Tool | Description |
|------|-------------|
| sense_remember | Bookmark current position with a name and optional notes. |
| sense_history | View navigation history — positions visited, actions taken, bookmarks saved. |
| sense_status | Get knowledge space stats — total items, dimensions, current position, session info. |

Data Management

| Tool | Description |
|------|-------------|
| sense_ingest | Add items to the knowledge space with optional importance weights and metadata. |


Key Algorithms

1. Hybrid Reranking (Blend Mode)

After cone query retrieval, results are reranked by blending the spatial composite score with cosine similarity to the original query:

blendedScore = (1 - alpha) * normalizedComposite + alpha * normalizedCosineSim

This preserves the cone-based discovery advantage while improving top-K precision for the specific query intent.
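
The blend formula can be sketched as below. Min-max normalization of both score lists is an assumption here (the formula above only says the scores are normalized), and `blendScores` is an illustrative name:

```typescript
// Sketch of blend reranking: (1 - alpha) * composite + alpha * cosine, both normalized.
function minMaxNormalize(xs: number[]): number[] {
  const lo = Math.min(...xs), hi = Math.max(...xs);
  return hi === lo ? xs.map(() => 0.5) : xs.map(x => (x - lo) / (hi - lo));
}

function blendScores(
  composite: number[], // spatial composite scores from the cone query
  cosineSim: number[], // cosine similarity to the original query
  alpha = 0.5          // 0 = pure spatial, 1 = pure cosine
): number[] {
  const c = minMaxNormalize(composite);
  const s = minMaxNormalize(cosineSim);
  return c.map((ci, i) => (1 - alpha) * ci + alpha * s[i]);
}
```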

2. MMR Diversity Reranking

Maximal Marginal Relevance selects results greedily to balance relevance with diversity:

MMR(i) = lambda * relevance(i) - (1 - lambda) * max_similarity_to_already_selected(i)

This eliminates near-duplicate results — critical for Wikipedia-style data where multiple chunks from the same article would otherwise dominate the top-10.
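
Greedy MMR selection follows directly from the formula. A minimal sketch, assuming a precomputed pairwise similarity matrix between candidates (the real reranker may compute similarities on the fly):

```typescript
// Greedy MMR: repeatedly pick the candidate maximizing
// lambda * relevance(i) - (1 - lambda) * max similarity to already-selected items.
function mmrSelect(
  relevance: number[], // relevance(i) for each candidate
  sim: number[][],     // pairwise similarity matrix between candidates
  k: number,
  lambda = 0.7
): number[] {
  const selected: number[] = [];
  const remaining = new Set(relevance.map((_, i) => i));
  while (selected.length < k && remaining.size > 0) {
    let best = -1, bestScore = -Infinity;
    for (const i of remaining) {
      const maxSim = selected.length
        ? Math.max(...selected.map(j => sim[i][j]))
        : 0;
      const score = lambda * relevance[i] - (1 - lambda) * maxSim;
      if (score > bestScore) { bestScore = score; best = i; }
    }
    selected.push(best);
    remaining.delete(best);
  }
  return selected;
}
```

With two near-duplicates and one dissimilar candidate, MMR picks the top duplicate and then the dissimilar item instead of the second duplicate.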

3. Soft Cone Intersection

Strict intersection requires items to appear in ALL cones — too restrictive in high dimensions (384D+). Soft intersection applies a coverage penalty instead:

adjustedScore = bestScore * (inCones / totalCones) ^ softness
  • softness = 0 — union (any cone counts)
  • softness = 0.5 — balanced (default, items in more cones score higher)
  • softness = 2 — nearly strict (heavy penalty for missing cones)
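
The coverage penalty is a one-liner; this sketch (illustrative name, parameter defaults from the bullets above) shows how the softness exponent interpolates between union and strict behavior:

```typescript
// Soft intersection: penalize items missing from some cones by
// bestScore * (inCones / totalCones) ^ softness.
function softIntersectScore(
  bestScore: number, // best composite score across the cones the item is in
  inCones: number,   // how many of the query cones contain the item
  totalCones: number,
  softness = 0.5     // 0 = union, 0.5 = balanced, 2 = nearly strict
): number {
  return bestScore * Math.pow(inCones / totalCones, softness);
}
```

An item in 1 of 4 cones keeps its full score at softness 0, half at 0.5, and about 6% at 2.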

4. Novelty Boost

After reranking, items far from previously explored regions get a score increase:

boostedScore = score * (1 + noveltyWeight * normalizedDistToExplored)

This activates the explored-region tracking to push the agent toward undiscovered territory on subsequent looks.
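
One way to realize the boost formula is below. This is a sketch under assumptions: it treats explored regions as a list of previously visited positions and normalizes by a fixed `maxDist` constant; the real session manager may represent regions differently:

```typescript
// Novelty boost sketch: score * (1 + noveltyWeight * normalized distance
// to the nearest previously explored position).
function noveltyBoost(
  score: number,
  itemPos: number[],
  explored: number[][],  // previously explored positions (an assumption)
  noveltyWeight = 0.3,
  maxDist = 1            // assumed normalization constant for distances
): number {
  if (explored.length === 0) return score;
  const dist = (a: number[], b: number[]) =>
    Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
  const nearest = Math.min(...explored.map(e => dist(itemPos, e)));
  const normalized = Math.min(nearest / maxDist, 1);
  return score * (1 + noveltyWeight * normalized);
}
```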

5. Auto Cone Widening

When a narrow cone returns too few results (< 3), the engine automatically retries with a wider cone:

attempt 1: original angle
attempt 2: angle + 10 degrees
attempt 3: angle + 20 degrees (capped at 89 degrees)

This adapts to data distribution — no need to manually tune cone angles for different datasets or embedding dimensions.
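
The retry schedule above (in degrees, capped at 89°) reduces to a few lines; `wideningSchedule` is an illustrative name:

```typescript
// Widening schedule: original angle, then +10° per retry, capped at 89°.
function wideningSchedule(angleDeg: number, attempts = 3): number[] {
  return Array.from({ length: attempts }, (_, k) =>
    Math.min(angleDeg + 10 * k, 89));
}
```

So a 30° cone retries at 40° and 50°, while an 85° cone caps both retries at 89°.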

6. Query Expansion for Exploration

For exploration tasks, a single query direction misses most of the relevant space. The query expander generates multiple search directions:

  1. Original query — primary direction
  2. Sub-phrases — splits multi-concept queries (e.g. "machine learning and biology" -> two directions)
  3. Cluster-directed — directions toward scan cluster centroids that are related to the query
  4. Orthogonal — perpendicular to query but pulled toward nearby clusters (Gram-Schmidt)

Each expanded direction gets its own look() call with MMR + novelty boost, and results are aggregated.
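
The Gram-Schmidt step (item 4) can be sketched as follows: start from a cluster centroid and subtract its component along the query direction, leaving a unit vector orthogonal to the query but pulled toward that cluster. The function name is illustrative:

```typescript
// Gram-Schmidt sketch for the "orthogonal" expansion direction.
type Vec = number[];
const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const scale = (a: Vec, c: number) => a.map(x => x * c);
const sub = (a: Vec, b: Vec) => a.map((x, i) => x - b[i]);
const normalize = (a: Vec) => {
  const n = Math.sqrt(dot(a, a));
  return n === 0 ? a : scale(a, 1 / n);
};

// Unit vector orthogonal to `query` but pointing toward `centroid`.
function orthogonalToward(query: Vec, centroid: Vec): Vec {
  const q = normalize(query);
  const residual = sub(centroid, scale(q, dot(centroid, q))); // remove query component
  return normalize(residual);
}
```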

7. Strategy Advisor

Analyzes query structure and data distribution to recommend navigation strategy:

  • Direct — simple queries, high similarity to existing items
  • Intersection — query mentions overlap/common between multiple concepts
  • Path — query asks about connections between distant concepts
  • Zoom — query asks for detailed/specific results
  • Exploration — broad queries or high cluster diversity in results

8. HNSW Indexing

For datasets beyond a few hundred items, SenseMCP uses Hierarchical Navigable Small World graphs (via hnswlib-node) for approximate nearest neighbor search:

  • Indexed cone queries: kNN candidates -> cone geometry filter (5x candidate over-fetch)
  • Indexed intersection: union of kNN candidates from each cone direction -> intersection filter
  • Benchmarked at 90x speedup at 50K items vs brute-force
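
The over-fetch-then-filter pattern for indexed cone queries might look like the sketch below. The `knn` callback stands in for the HNSW index lookup (hnswlib-node's actual API differs); the 5x over-fetch factor is from the bullet above:

```typescript
// Indexed cone query sketch: over-fetch kNN candidates near the origin,
// then apply the exact cone geometry filter.
type Vec = number[];
const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);

function indexedConeQuery(
  knn: (point: Vec, k: number) => { id: number; vec: Vec }[], // stand-in for the HNSW index
  origin: Vec, direction: Vec, theta: number, depth: number, k: number
): number[] {
  const candidates = knn(origin, k * 5); // 5x candidate over-fetch
  const cosTheta = Math.cos(theta);
  return candidates
    .filter(({ vec }) => {
      const v = vec.map((x, i) => x - origin[i]);
      const d = Math.sqrt(dot(v, v));
      return d > 0 && d <= depth && dot(v, direction) / d >= cosTheta;
    })
    .slice(0, k)
    .map(c => c.id);
}
```

The approximate index only proposes candidates; the exact cone test decides membership, so recall depends on the over-fetch factor.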

Embedding Backends

| Backend | Dimensions | Use Case |
|---------|------------|----------|
| SyntheticEmbeddingService | 64D | Fast benchmarks, no API key needed. Deterministic hash-based embeddings where word overlap creates similarity. |
| LocalEmbeddingService | 384D | Real embeddings via @xenova/transformers (MiniLM-L6-v2). No API key, runs locally. Used for Wikipedia benchmarks. |
| OpenAIEmbeddingService | 1536D | Production quality via OpenAI text-embedding-3-small. Requires OPENAI_API_KEY. |


Benchmarks

Synthetic Benchmark (1,048 items, 64D, 5 domains)

Controlled dataset with known bridge items between domains. Tests structural properties of navigation vs retrieval.

npm run benchmark

| Method | P@5 | P@10 | Bridge Items | Subdomains |
|--------|-----|------|--------------|------------|
| Naive RAG | 0.258 | 0.206 | 0.3 | 6.4 |
| Reranked RAG | 0.269 | 0.227 | 0.5 | 6.2 |
| Iterative RAG | 0.246 | 0.235 | 0.3 | 6.0 |
| SenseMCP | 0.331 | 0.269 | 1.3 | 13.6 |

Key results:

  • Multi-hop: SenseMCP finds 2.1 bridge items vs 0.9 for RAG (+133%)
  • Connection-finding: SenseMCP discovers 2.0 bridges vs 0.1 for RAG (20x improvement)
  • Exploration: 3.0 subdomains covered vs 1.7 for RAG (+76%)
  • Single-hop: Parity with RAG (P@5 = 0.540 both)

Wikipedia Benchmark (166 chunks, 384D real embeddings)

Real Wikipedia articles across 5 domains, 12 subdomains. Tests with actual semantic content.

npm run benchmark:wiki:verbose

| Task Type | SenseMCP Keyword@10 | RAG Keyword@10 | SenseMCP Domains | RAG Domains |
|-----------|---------------------|----------------|------------------|-------------|
| Single-hop | 6.8 | 9.4 | 1.2 | 1.6 |
| Multi-hop | 7.2 | 8.6 | 2.2 | 2.2 |
| Exploration | 3.7 | 8.3 | 3.7 | 3.7 |
| Connection | 4.8 | 7.5 | 2.3 | 2.0 |
| Intersection | 8.8 | 9.0 | 2.3 | 2.0 |

Key results:

  • Intersection tasks: Near-parity on keywords (8.8 vs 9.0) with more domain coverage (2.3 vs 2.0) — soft intersection finds cross-domain results RAG misses
  • Connection-finding: SenseMCP covers more domains (2.3 vs 2.0), finding cross-domain bridges
  • Multi-hop mh-5 (biology + optimization): SenseMCP 9/10 vs RAG 8/10 — spatial traversal finds more relevant content
  • Exploration: Tied on domain coverage (3.7 each); keyword gap is a known limitation with current strategy

Scale Benchmark

Tests HNSW indexing performance from 1K to 50K items.

npm run benchmark:scale:small   # 1K-10K items
npm run benchmark:scale         # up to 50K items

Quick Start

Install

git clone <repo-url>
cd sense-mcp
npm install

Run Tests

npm test           # 89 tests across 8 test files

Run as MCP Server

# With local embeddings (no API key needed)
npm run dev

# With OpenAI embeddings
OPENAI_API_KEY=sk-... npm run dev

Configure with Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "sensemcp": {
      "command": "node",
      "args": ["--loader", "tsx", "/path/to/sense-mcp/src/index.ts"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}

Run Benchmarks

npm run benchmark                # Synthetic benchmark (fast, no API key)
npm run benchmark:wiki:verbose   # Wikipedia benchmark (local MiniLM embeddings)
npm run benchmark:scale:small    # HNSW scale test

Project Structure

sense-mcp/
  src/
    index.ts                      # MCP server entry point
    server.ts                     # Tool registration, session management
    types.ts                      # All shared type definitions
    core/
      vector-math.ts              # dot, norm, normalize, cosine similarity, lerp
      cone-engine.ts              # Cone queries, intersection, soft intersection
      navigator.ts                # High-level navigation orchestrator
      reranker.ts                 # Blend reranking, MMR, novelty boost
      clusterer.ts                # K-means clustering for scan
      session.ts                  # Session state, history, bookmarks, explored regions
      strategy-advisor.ts         # Query analysis + strategy recommendation
      query-expander.ts           # Multi-directional query expansion
    adapters/
      vector-store.ts             # VectorStore interface + InMemoryVectorStore
      hnsw-store.ts               # HNSW-backed store (hnswlib-node)
      embeddings.ts               # Synthetic, Local (MiniLM), OpenAI embedding services
    tools/
      sense-look.ts               # Directional cone search
      sense-scan.ts               # Wide sweep, cluster detection
      sense-focus.ts              # Narrow, high-precision search
      sense-move.ts               # Position shift
      sense-path.ts               # Concept-to-concept pathfinding
      sense-intersect.ts          # Multi-cone intersection (strict + soft)
      sense-zoom.ts               # Progressive zoom (multi-stage narrowing)
      sense-suggest.ts            # Strategy recommendation meta-tool
      sense-remember.ts           # Spatial bookmarks
      sense-history.ts            # Navigation history
      sense-ingest.ts             # Data ingestion
      sense-status.ts             # Space statistics
  tests/                          # 89 tests across 8 files
    vector-math.test.ts           # 18 tests — vector operations
    cone-engine.test.ts           # 7 tests — cone geometry
    cone-intersection.test.ts     # 12 tests — strict + soft intersection
    reranker.test.ts              # 17 tests — blend, MMR, novelty boost
    navigator.test.ts             # 11 tests — navigation, auto-widening
    strategy-advisor.test.ts      # 7 tests — strategy recommendations
    query-expander.test.ts        # 6 tests — query expansion
    hnsw-store.test.ts            # 11 tests — HNSW indexing
  benchmarks/
    run.ts                        # Synthetic benchmark runner
    run-wikipedia.ts              # Wikipedia real-data benchmark
    scale-test.ts                 # HNSW scale/performance test
    generate-dataset.ts           # Synthetic dataset generator
    report.ts                     # Markdown report generator
    datasets/
      wikipedia-loader.ts         # Wikipedia REST API loader + caching
    tasks/
      single-hop.ts               # Simple retrieval tasks
      multi-hop.ts                # Multi-step reasoning tasks
      exploration.ts              # Broad coverage tasks
      connection-finding.ts       # Cross-domain bridge discovery
      wikipedia-tasks.ts          # 21 curated Wikipedia tasks
    baselines/
      naive-rag.ts                # Cosine similarity top-k
      reranked-rag.ts             # Cosine sim + word overlap reranking
      iterative-rag.ts            # Multi-iteration query refinement
    agent/
      agent-runner.ts             # Claude-powered autonomous agent runner
      tool-adapter.ts             # Adapts MCP tools for agent use
      run-agent-benchmark.ts      # Agent benchmark entry point

Dependencies

| Package | Purpose |
|---------|---------|
| @modelcontextprotocol/sdk | MCP server protocol |
| zod | Schema validation for tool inputs |
| hnswlib-node | HNSW approximate nearest neighbor indexing |
| @xenova/transformers | Local MiniLM-L6-v2 embeddings (384D) |
| @anthropic-ai/sdk | Claude API for agent benchmarks |
| dotenv | Environment variable loading |

Runtime: Node.js 20+, TypeScript (ESM modules). No GPU required.


License

MIT
