
@one710/consciousness


A powerful, pluggable vector memory and Model Context Protocol (MCP) server for local semantic search and long-term memory.

Features

  • MCP Integration: Fully compatible with the Model Context Protocol.
  • Pluggable Architecture: Easily swap embedding providers and vector stores.
  • Embedded Local Storage: Supports Filesystem and Memory stores out of the box.
  • Semantic Search: Use state-of-the-art embeddings for intelligent memory retrieval.
  • DTS Indexing: Optimized search using Distance to Samples (DTS) logic.
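The "Semantic Search" feature above can be illustrated with a small standalone sketch: retrieval typically ranks stored memories by cosine similarity between the query embedding and each stored vector. This is a generic illustration of the technique, not this package's internal code, and the example vectors are made up:

```typescript
// Cosine similarity: dot product of two vectors divided by the
// product of their magnitudes. Values near 1 mean "semantically close".
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy 3-dimensional "embeddings" standing in for real model output.
const query = [0.1, 0.9, 0.2];
const memories = [
  { text: "the sky is blue", vector: [0.1, 0.8, 0.3] },
  { text: "npm install notes", vector: [0.9, 0.1, 0.0] },
];

// Rank memories by similarity to the query, most similar first.
const ranked = [...memories].sort(
  (x, y) => cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector),
);
console.log(ranked[0].text);
```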

Quick Start (using npx)

You can run the consciousness MCP server directly without installation using npx:

npx @one710/consciousness

By default, this starts an MCP server named "consciousness" backed by a FilesystemVectorStore (persisted to ./memory_store.json) and an HFEmbeddingProvider.

Installation

npm install @one710/consciousness

Usage in Code

Creating an MCP Server

import { createServer } from "@one710/consciousness";
import { MemoryVectorStore } from "@one710/consciousness/vector/memory";
import { HFEmbeddingProvider } from "@one710/consciousness/embeddings/huggingface";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const provider = new HFEmbeddingProvider();
const store = new MemoryVectorStore(provider);
const server = createServer("my-server", "1.0.0", store);

// Connect the server to a transport (here: stdio)
const transport = new StdioServerTransport();
await server.connect(transport);

Embedding Providers

Hugging Face (Local)

Uses @huggingface/transformers to generate embeddings locally on your CPU/GPU.

import { HFEmbeddingProvider } from "@one710/consciousness/embeddings/huggingface";
const provider = new HFEmbeddingProvider();

AI SDK (Cloud/Remote)

Uses the Vercel AI SDK to connect to any supported provider (e.g., OpenAI, Anthropic, Google).

import { AISDKEmbeddingProvider } from "@one710/consciousness/embeddings/aisdk";
import { openai } from "@ai-sdk/openai";

const provider = new AISDKEmbeddingProvider(
  openai.embedding("text-embedding-3-small"),
  1536, // Dimensions
);
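Because the architecture is pluggable, a custom provider can presumably be dropped in wherever HFEmbeddingProvider or AISDKEmbeddingProvider is used. The package's actual provider interface is not shown in this README, so the shape below — an embed() method plus a dimensions field, mirroring the dimensions argument in the AI SDK example above — is a hypothetical sketch, paired with a toy deterministic implementation for testing:

```typescript
// HYPOTHETICAL provider contract — the real interface in
// @one710/consciousness may differ in names and shape.
interface EmbeddingProvider {
  readonly dimensions: number;
  embed(text: string): Promise<number[]>;
}

// Toy deterministic provider (no ML model): buckets character codes
// into a small fixed-size count vector. Useful only for tests/demos.
class ToyEmbeddingProvider implements EmbeddingProvider {
  readonly dimensions = 8;
  async embed(text: string): Promise<number[]> {
    const v: number[] = new Array(this.dimensions).fill(0);
    for (let i = 0; i < text.length; i++) {
      v[text.charCodeAt(i) % this.dimensions] += 1;
    }
    return v;
  }
}
```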

Vector Stores

Memory Store (In-memory)

import { MemoryVectorStore } from "@one710/consciousness/vector/memory";
const store = new MemoryVectorStore(provider);

Filesystem Store (Local Persistence)

import { FilesystemVectorStore } from "@one710/consciousness/vector/filesystem";
const store = new FilesystemVectorStore(provider, "./memory-data.json");

Chroma Store (Distributed/Managed)

import { ChromaVectorStore } from "@one710/consciousness/vector/chroma";
import { ChromaClient } from "chromadb";

const client = new ChromaClient();
const store = new ChromaVectorStore(provider, client, "my-collection");
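Unlike the embedded stores, ChromaVectorStore talks to a running Chroma server (ChromaClient defaults to localhost:8000). Assuming the chromadb Python CLI is installed, a local server can typically be started like this — standard Chroma tooling, not part of this package:

```shell
# Install the Chroma CLI (Python) and start a local server,
# persisting its data under ./chroma-data (default port 8000)
pip install chromadb
chroma run --path ./chroma-data
```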

License

This project is licensed under the MIT License.


Cursor configuration (mcp.json)

{
  "mcpServers": {
    "one710-consciousness": {
      "command": "npx",
      "args": ["-y", "@one710/consciousness"]
    }
  }
}