Unified MCP server for 20 databases — PostgreSQL, MongoDB, Neo4j, Elasticsearch, Redis, and more. ReScript core, runs on Deno.
// SPDX-License-Identifier: MIT
// SPDX-FileCopyrightText: 2025 Jonathan D.A. Jewell
= Polyglot DB MCP
:toc: macro
:toclevels: 2
:icons: font
:source-highlighter: rouge
One MCP server. 20 databases. Zero context switching.
Query PostgreSQL, MongoDB, Neo4j, Elasticsearch, Redis, and 15 more databases through a single unified interface. Built with ReScript, runs on Deno.
image:https://img.shields.io/badge/License-MIT%20OR%20AGPL--3.0-blue.svg[License: MIT OR AGPL-3.0,link=LICENSE.txt] image:https://img.shields.io/badge/Runtime-Deno-black.svg[Runtime: Deno,link=https://deno.land] image:https://img.shields.io/badge/Language-ReScript-red.svg[Language: ReScript,link=https://rescript-lang.org]
toc::[]
== Why Polyglot DB?
Modern applications use multiple databases — SQL for transactions, Redis for caching, Elasticsearch for search, vectors for AI. Switching between CLIs, APIs, and query languages is exhausting.
Polyglot DB MCP gives Claude (and other MCP clients) direct access to all your databases through natural language:
[quote]
Find all users in PostgreSQL who signed up last week, then check if they're in the Redis cache

[quote]
Search Elasticsearch for 'authentication errors' and correlate with the MongoDB audit log

[quote]
Store this embedding in Qdrant and link it to the Neo4j knowledge graph
== Installation
=== Option A: Direct (Recommended for development)
[source,bash]
----
# Clone the repository
git clone https://github.com/hyperpolymath/polyglot-db-mcp.git
cd polyglot-db-mcp

# Add to Claude Code
claude mcp add polyglot-db -- deno run \
  --allow-net --allow-read --allow-write --allow-env \
  $(pwd)/server.js
----
=== Option B: Container (Recommended for production)
[source,bash]
----
# Using nerdctl (containerd)
nerdctl run -d --name polyglot-db \
  -e POSTGRES_HOST=host.docker.internal \
  -e MONGODB_URL=mongodb://host.docker.internal:27017 \
  ghcr.io/hyperpolymath/polyglot-db-mcp:latest

# Using podman
podman run -d --name polyglot-db \
  -e POSTGRES_HOST=host.containers.internal \
  ghcr.io/hyperpolymath/polyglot-db-mcp:latest

# Using docker
docker run -d --name polyglot-db \
  -e POSTGRES_HOST=host.docker.internal \
  ghcr.io/hyperpolymath/polyglot-db-mcp:latest
----
=== Option C: Deno Deploy (Serverless / HTTP Mode)
Polyglot DB MCP supports the MCP Streamable HTTP transport (June 2025 spec) for cloud deployment:
[source,bash]
----
# Deploy to Deno Deploy
deployctl deploy --project=polyglot-db-mcp server.js

# Or run HTTP mode locally
deno task serve
----
Once deployed, configure your MCP client to connect via HTTP:
[source,json]
----
{
  "mcpServers": {
    "polyglot-db": {
      "transport": {
        "type": "http",
        "url": "https://polyglot-db-mcp.deno.dev/mcp"
      }
    }
  }
}
----
HTTP endpoints:

- `GET /health` - Health check
- `GET /info` - Server information
- `POST /mcp` - MCP Streamable HTTP endpoint (JSON-RPC 2.0)
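As an illustration, here is a minimal sketch of calling the `/mcp` endpoint by hand. The `tools/call` method is standard MCP JSON-RPC; `db_status` is one of this server's meta tools, and the URL is the Deno Deploy example above. This is a sketch, not the project's client code.

```javascript
// Sketch: an MCP Streamable HTTP request is a JSON-RPC 2.0 envelope POSTed to /mcp.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "db_status", arguments: {} },
};

// fetch is built into Deno (and Node 18+); the server replies with a JSON-RPC result.
async function callMcp(url, req) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "Accept": "application/json" },
    body: JSON.stringify(req),
  });
  return await res.json();
}
```

`callMcp("https://polyglot-db-mcp.deno.dev/mcp", request)` would then resolve to the JSON-RPC response describing which databases are connected.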
== Quick Start
=== 1. Configure your databases
Create a .env file or export environment variables:
[source,bash]
----
# Example: PostgreSQL + Redis + Elasticsearch
export POSTGRES_HOST=localhost
export POSTGRES_DATABASE=myapp
export DRAGONFLY_HOST=localhost
export ELASTICSEARCH_URL=http://localhost:9200
----
=== 2. Ask Claude
"What databases are connected?"

"Show me the schema of the users table in PostgreSQL"

"Cache this result in Redis with a 1 hour TTL"
== Supported Databases
=== Relational
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|PostgreSQL |PostgreSQL (FOSS) |Complex queries, ACID, extensions (PostGIS, pgvector) |pg_*
|MariaDB |GPL v2 (FOSS) |Web apps, MySQL compatibility |maria_*
|SQLite |Public Domain |Local storage, embedded, single-file |sqlite_*
|===
=== Document
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|MongoDB |SSPL |Flexible schemas, horizontal scaling |mongo_*
|SurrealDB |BSL/Apache 2.0 |Multi-model (doc + graph + SQL) |surreal_*
|ArangoDB |Apache 2.0 (FOSS) |Multi-model (doc + graph + KV), AQL |arango_*
|CouchDB |Apache 2.0 (FOSS) |Document DB with HTTP API, Mango queries |couchdb_*
|===
=== Wide Column
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|Cassandra |Apache 2.0 (FOSS) |Distributed, high availability, time-series |cassandra_*
|===
=== Graph
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|Neo4j |GPL v3 / Commercial |Relationships, social networks, fraud detection |neo4j_*
|Virtuoso |GPL v2 / Commercial |RDF triplestore, SPARQL, linked data |virtuoso_*
|===
=== Cache & Key-Value
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|Dragonfly |BSL |Redis-compatible replacement; claims up to 25x Redis throughput |dragonfly_*
|Memcached |BSD (FOSS) |Simple distributed caching |memcached_*
|LMDB |OpenLDAP (FOSS) |Embedded KV with ACID |lmdb_*
|===
=== Search
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|Elasticsearch |Elastic License 2.0 |Full-text search, log analytics |es_*
|Meilisearch |MIT (FOSS) |Instant, typo-tolerant search |meili_*
|===
=== Vector
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|Qdrant |Apache 2.0 (FOSS) |AI embeddings, semantic search |qdrant_*
|===
=== Time Series
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|InfluxDB |MIT (v1) |IoT, monitoring, metrics |influx_*
|===
=== Analytics
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|DuckDB |MIT (FOSS) |OLAP, query CSV/Parquet/JSON directly |duck_*
|===
=== Specialized
[cols="1,1,2,1",options="header"]
|===
|Database |License |Best For |Tools

|XTDB |MIT (FOSS) |Bitemporal queries, audit trails |xtdb_*
|iTop |AGPL v3 (FOSS) |IT asset management, CMDB |itop_*
|===
NOTE: FOSS = free and open source. Source-available licenses (SSPL, BSL, Elastic License 2.0) let you read the source but restrict certain uses.
== Configuration
Each database reads from environment variables. Only configure what you need.
=== PostgreSQL
[source,bash]
----
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DATABASE=mydb
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret

# Connection pool (optional)
POSTGRES_POOL_MAX=10          # Max connections
POSTGRES_IDLE_TIMEOUT=30      # Seconds before an idle connection is closed
POSTGRES_CONNECT_TIMEOUT=10   # Connection timeout in seconds
----
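For instance, an adapter might read these variables with fallbacks like this. The `env` helper is purely illustrative (not the project's actual loader); the variable names and defaults come from the list above, and `Deno.env.get` requires the `--allow-env` flag shown in the run commands earlier.

```javascript
// Illustrative only: read the variables above with sensible fallbacks.
// Works in Deno (Deno.env) and Node (process.env).
const env = (name, fallback) =>
  globalThis.Deno?.env.get(name) ?? globalThis.process?.env[name] ?? fallback;

const pgConfig = {
  host: env("POSTGRES_HOST", "localhost"),
  port: Number(env("POSTGRES_PORT", "5432")),
  database: env("POSTGRES_DATABASE", "postgres"),
  poolMax: Number(env("POSTGRES_POOL_MAX", "10")),
  idleTimeout: Number(env("POSTGRES_IDLE_TIMEOUT", "30")),
};
```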
=== MongoDB
[source,bash]
----
MONGODB_URL=mongodb://localhost:27017
MONGODB_DATABASE=mydb

# Connection pool (optional)
MONGODB_POOL_MAX=10            # Max connections
MONGODB_POOL_MIN=1             # Min connections
MONGODB_IDLE_TIMEOUT=30000     # Idle timeout in ms
MONGODB_CONNECT_TIMEOUT=10000  # Connect timeout in ms
MONGODB_SERVER_TIMEOUT=30000   # Server selection timeout in ms
----
=== Neo4j
[source,bash]
----
NEO4J_URL=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=secret
----
=== Elasticsearch
[source,bash]
----
ELASTICSEARCH_URL=http://localhost:9200
ELASTICSEARCH_USERNAME=elastic   # optional
ELASTICSEARCH_PASSWORD=secret    # optional
----
=== Dragonfly / Redis
[source,bash]
----
DRAGONFLY_HOST=localhost
DRAGONFLY_PORT=6379
DRAGONFLY_PASSWORD=   # optional
----
=== InfluxDB
[source,bash]
----
INFLUXDB_URL=http://localhost:8086
INFLUXDB_TOKEN=your-token
INFLUXDB_ORG=your-org
INFLUXDB_BUCKET=your-bucket
----
=== SurrealDB
[source,bash]
----
SURREAL_URL=http://localhost:8000
SURREAL_NAMESPACE=test
SURREAL_DATABASE=test
SURREAL_USERNAME=root
SURREAL_PASSWORD=root
----
=== ArangoDB
[source,bash]
----
ARANGO_URL=http://localhost:8529
ARANGO_DATABASE=_system
ARANGO_USERNAME=root
ARANGO_PASSWORD=
----
=== Virtuoso
[source,bash]
----
VIRTUOSO_ENDPOINT=http://localhost:8890/sparql
VIRTUOSO_UPDATE_ENDPOINT=http://localhost:8890/sparql-auth
VIRTUOSO_USERNAME=
VIRTUOSO_PASSWORD=
VIRTUOSO_DEFAULT_GRAPH=
----
=== CouchDB
[source,bash]
----
COUCHDB_URL=http://localhost:5984
COUCHDB_USERNAME=admin
COUCHDB_PASSWORD=secret
COUCHDB_DATABASE=mydb
----
=== Cassandra
[source,bash]
----
CASSANDRA_CONTACT_POINTS=localhost   # comma-separated list
CASSANDRA_DATACENTER=datacenter1
CASSANDRA_KEYSPACE=mykeyspace
CASSANDRA_USERNAME=cassandra
CASSANDRA_PASSWORD=cassandra
----
=== SQLite
[source,bash]
----
SQLITE_PATH=./data.db   # or :memory:
----
=== DuckDB
[source,bash]
----
DUCKDB_PATH=./analytics.db   # or :memory:
----
=== Qdrant
[source,bash]
----
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=   # optional
----
=== Meilisearch
[source,bash]
----
MEILISEARCH_URL=http://localhost:7700
MEILISEARCH_API_KEY=   # optional
----
=== MariaDB
[source,bash]
----
MARIADB_HOST=localhost
MARIADB_PORT=3306
MARIADB_USER=root
MARIADB_PASSWORD=secret
MARIADB_DATABASE=mydb

# Connection pool (optional)
MARIADB_POOL_MAX=10            # Max connections
MARIADB_ACQUIRE_TIMEOUT=10000  # Acquire timeout in ms
MARIADB_IDLE_TIMEOUT=30000     # Idle timeout in ms
MARIADB_CONNECT_TIMEOUT=10000  # Connect timeout in ms
----
=== Memcached
[source,bash]
----
MEMCACHED_SERVERS=localhost:11211
----
=== LMDB
[source,bash]
----
LMDB_PATH=./lmdb-data
----
=== XTDB
[source,bash]
----
XTDB_URL=http://localhost:3000
----
=== iTop
[source,bash]
----
ITOP_URL=http://localhost/itop
ITOP_USERNAME=admin
ITOP_PASSWORD=secret
----
== Usage Examples
=== Meta Tools
- `db_list` - List all 20 supported databases
- `db_status` - Check which databases are currently connected
- `db_help [database]` - Get available tools for a specific database
=== Natural Language Queries
Ask Claude things like:
PostgreSQL:

[quote]
Create a users table with id, email, and created_at columns

[quote]
Find all orders over $100 from the last month

MongoDB:

[quote]
Insert a new document into the products collection

[quote]
Aggregate sales by category with a $match and $group pipeline

Neo4j:

[quote]
Find the shortest path between User:alice and User:bob

[quote]
Show all nodes connected to the 'Engineering' department

Elasticsearch:

[quote]
Search for documents containing 'critical error' in the logs index

[quote]
Get the mapping for the products index

Redis/Dragonfly:

[quote]
Set user:123:session with a 30 minute TTL

[quote]
Get all keys matching cache:*

Qdrant:

[quote]
Search for vectors similar to this embedding in the documents collection

[quote]
Create a new collection with 1536 dimensions for OpenAI embeddings

Cross-Database:

[quote]
Query users from PostgreSQL and cache the result in Redis

[quote]
Find products in MongoDB and index them in Meilisearch
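Under the hood, a cross-database request is just two sequential tool calls. A hedged sketch of the first example follows; the exact tool names after the `pg_` and `dragonfly_` prefixes are assumptions here, so check `db_help` output for the real ones.

```javascript
// Assumed tool names for illustration only.
const pipeline = [
  {
    name: "pg_query", // hypothetical pg_* tool
    arguments: {
      query: "SELECT id, email FROM users WHERE created_at > now() - interval '7 days'",
    },
  },
  {
    name: "dragonfly_set", // hypothetical dragonfly_* tool
    arguments: { key: "cache:recent_users", value: "(result of step 1)", ttl: 3600 },
  },
];
```

Claude performs this chaining itself; the sketch only shows the shape of the calls it issues.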
== Architecture
[source]
----
polyglot-db-mcp/
├── index.js        # MCP server entry point
├── src/            # ReScript source (core adapters)
│   ├── Adapter.res # Shared types
│   ├── bindings/   # Database client FFI
│   └── adapters/   # PostgreSQL, MongoDB, SQLite, Dragonfly, Elasticsearch
├── adapters/       # JavaScript adapters (exotic databases)
├── lib/es6/        # Compiled ReScript output
└── STATE.scm       # Project state tracking
----
=== Language Policy
[cols="1,1,2",options="header"]
|===
|Component |Language |Rationale

|Core adapters (5) |ReScript |Type safety, smaller bundles
|Exotic adapters (11) |JavaScript |Pragmatic for v1.x
|Future (v2.0.0) |100% ReScript |RSR Gold compliance
|===
IMPORTANT: TypeScript is prohibited. We chose ReScript for its superior type inference and ML heritage.
== Development
=== Building ReScript
[source,bash]
----
npm install         # Install ReScript compiler
npm run res:build   # Compile to JavaScript
npm run res:watch   # Watch mode
----
=== Running Locally
[source,bash]
----
deno task start

# or
deno run --allow-net --allow-read --allow-write --allow-env index.js
----
=== Git Hooks
Enable the pre-commit hook to enforce language policy:
[source,bash]
----
git config core.hooksPath .githooks
----
== Adding a New Database
Create src/adapters/YourDb.res:
[source,rescript]
----
// SPDX-License-Identifier: MIT
open Adapter

let name = "yourdb"
let description = "Your database description"

let connect = async () => { /* ... */ }
let disconnect = async () => { /* ... */ }
let isConnected = async () => { /* ... */ }

let tools: Js.Dict.t<tool> = {
  let dict = Js.Dict.empty()
  Js.Dict.set(dict, "yourdb_query", {
    description: "Execute a query",
    params: makeParams([("query", stringParam("Query to run"))]),
    handler: queryHandler,
  })
  dict
}
----
Then import in index.js and rebuild.
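A minimal sketch of what that wiring might look like on the JavaScript side; `registerAdapter` and the `adapters` map are illustrative names, not the project's actual API, so consult index.js for the real registration pattern.

```javascript
// Illustrative registry: index.js collects each adapter under its name.
const adapters = new Map();

function registerAdapter(adapter) {
  if (adapters.has(adapter.name)) {
    throw new Error(`duplicate adapter: ${adapter.name}`);
  }
  adapters.set(adapter.name, adapter);
}

// A compiled ReScript adapter exports name, description, connect, and tools
// (matching the YourDb.res sketch above).
registerAdapter({
  name: "yourdb",
  description: "Your database description",
  connect: async () => {},
  tools: { yourdb_query: { description: "Execute a query" } },
});
```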
== Roadmap
[cols="1,1,3",options="header"]
|===
|Version |Status |Highlights

|1.0.0 |Released |16 databases, ReScript core, CI/CD
|1.1.0 |In Progress |Connection pooling, container images, better errors
|1.2.0 |Planned |Cross-database pipelines, caching helpers
|2.0.0 |Vision |100% ReScript, RSR Gold compliance
|===
== Related Projects
- https://github.com/hyperpolymath/arango-mcp[arango-mcp] — ArangoDB MCP server
- https://github.com/hyperpolymath/virtuoso-mcp[virtuoso-mcp] — Virtuoso SPARQL MCP server
== License
Dual-licensed: link:LICENSE.txt[MIT] OR link:LICENSE.txt[AGPL-3.0-or-later] — your choice.
We encourage (but don't require) layering the https://github.com/hyperpolymath/palimpsest-license[Palimpsest License] for ethical AI development practices.
© 2025 Jonathan D.A. Jewell (https://github.com/hyperpolymath[@hyperpolymath])