MCP Servers

A collection of Model Context Protocol servers, templates, tools and more.

๐Ÿš€ AI Books MCP Server - Extend LLM context 15-60ร— via gravitational memory | Official MCP server for Claude Code & Anthropic

Created 2/11/2026
Updated about 21 hours ago
Repository documentation and setup instructions

AI Books MCP Server

Universal LLM Context Extension via Gravitational Memory Compression

License: MIT MCP

Extend any LLM's context window by 15-60ร— while maintaining 100% data integrity. Built on quantum-inspired gravitational memory compression.

๐Ÿš€ Features

  • Massive Context Extension: Extend LLM context 15-60ร— beyond native limits
  • 100% Data Integrity: Cryptographic hash verification ensures perfect accuracy
  • Universal Compatibility: Works with Claude, GPT-4, Llama, and any LLM
  • Zero Configuration: Works out of the box with Claude Code
  • Lightning Fast: Query libraries in milliseconds
  • Memory Efficient: Compression ratios up to 1240ร— on dense technical content

๐Ÿ“ฆ Installation

For Claude Code Users

npm install -g ai-books-mcp-server

Then add to your Claude Code MCP settings:

{
  "mcpServers": {
    "ai-books": {
      "command": "ai-books-mcp-server"
    }
  }
}

For Developers

git clone https://github.com/TryBoy869/ai-books-mcp-server.git
cd ai-books-mcp-server
npm install
npm run build

๐ŸŽฏ Use Cases

1. Large Codebases

Create library from 100+ files โ†’ Query specific functionality โ†’ Get precise answers

2. Research Papers

Compress 50 papers โ†’ Ask synthesis questions โ†’ Get citations + insights

3. Documentation

Load entire docs โ†’ Natural language queries โ†’ Contextual answers

4. Books & Long-form Content

Compress novels/textbooks โ†’ Ask thematic questions โ†’ Deep analysis

๐Ÿ› ๏ธ Available Tools

Core Tools

create_knowledge_library

Creates a compressed knowledge library from text.

{
  name: "react-docs",
  text: "...full React documentation...",
  n_max: 15  // Optional: compression level (5-20)
}

query_knowledge_library

Queries a library and retrieves relevant context.

{
  library_name: "react-docs",
  query: "How do hooks work?",
  top_k: 8  // Optional: number of chunks (1-20)
}

extend_context_from_files

Loads files and retrieves relevant context in one step.

{
  file_paths: ["./src/*.ts"],
  query: "Explain the authentication flow",
  top_k: 8
}

Management Tools

  • list_knowledge_libraries: List all libraries
  • get_library_stats: Detailed statistics
  • delete_knowledge_library: Remove a library
  • verify_library_integrity: Check 100% integrity
  • search_documents: Search with relevance scores

๐Ÿ“– Example Usage

In Claude Code

User: Can you help me understand this React codebase?

Claude: [Calls create_knowledge_library with all React files]
        [Creates library "react-project" with 245 chunks, 45ร— compression]
        
User: How does the authentication system work?

Claude: [Calls query_knowledge_library]
        [Retrieves 8 most relevant chunks from authentication code]
        [Provides detailed explanation with exact code references]

Result

Instead of:

  • โŒ "I can only see a few files at once"
  • โŒ "The codebase is too large for my context"

You get:

  • โœ… Full understanding of 100+ file codebases
  • โœ… Accurate answers with specific code references
  • โœ… Synthesis across multiple files

๐Ÿงฌ How It Works

Gravitational Memory Compression

Based on quantum physics' atomic orbital structure:

  1. Text Chunking: Split documents into 200-300 word chunks
  2. Hash Generation: SHA-256 hash for each chunk
  3. Orbital Encoding: Map hash to gravitational states (quantum-inspired)
  4. Compression: Achieve 15-60ร— reduction while maintaining retrievability
  5. Verification: 100% integrity guaranteed via hash comparison

Technical Details

  • Algorithm: Gravitational bit encoding with n_max orbitals
  • Compression: 1240 discrete states per bit (n_max=15)
  • Retrieval: O(N) semantic similarity + O(1) hash lookup
  • Integrity: Cryptographic verification (SHA-256)

๐Ÿ“Š Performance

| Metric | Value | |--------|-------| | Compression Ratio | 15-60ร— (typical) | | Data Integrity | 100% guaranteed | | Query Speed | < 100ms (1000 chunks) | | Max Library Size | Limited by RAM | | Chunk Retrieval | O(N) similarity scan |

๐ŸŽ“ Created By

Daouda Abdoul Anzize

  • Self-taught Systems Architect
  • 40+ Open Source Projects
  • Specialization: Meta-architectures & Protocol Design

Portfolio: tryboy869.github.io/daa
GitHub: @TryBoy869
Email: anzizdaouda0@gmail.com

๐Ÿ“„ License

MIT License - See LICENSE file

๐Ÿค Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing)
  5. Open a Pull Request

๐Ÿ› Issues

Found a bug? Have a feature request?

Open an issue

๐ŸŒŸ Star History

If you find this useful, please star the repo! โญ

๐Ÿ”— Links


Built with โค๏ธ by Daouda Anzize | Extending LLM horizons, one library at a time

Quick Setup
Installation guide for this server

Install Package (if required)

npx @modelcontextprotocol/server-ai-books-mcp-server

Cursor configuration (mcp.json)

{ "mcpServers": { "tryboy869-ai-books-mcp-server": { "command": "npx", "args": [ "tryboy869-ai-books-mcp-server" ] } } }