CyberChef MCP Server
A Model Context Protocol (MCP) server for CyberChef, enabling AI agents to perform encryption, encoding, compression, and data analysis.
This project provides a Model Context Protocol (MCP) server interface for CyberChef, the "Cyber Swiss Army Knife" created by GCHQ.
By running this server, you enable AI assistants (like Claude, Cursor AI, and others) to natively utilize CyberChef's extensive library of 463+ data manipulation operations—including encryption, encoding, compression, and forensic analysis—as executable tools.
Latest Release: v1.5.1 | Release Notes | Security Policy | Security Fixes Report

Project Context
CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. It was originally conceived and built by GCHQ.
This fork wraps the core CyberChef Node.js API into an MCP server, bridging the gap between natural language AI intent and deterministic data processing.

Features
MCP Tools
The server exposes CyberChef operations as MCP tools:
- cyberchef_bake: The "Omni-tool". Executes a full CyberChef recipe (a chain of operations) on an input. Ideal for complex, multi-step transformations (e.g., "Decode Base64, then Gunzip, then prettify JSON").
- Atomic Operations: 463 individual tools for specific tasks, dynamically generated from the CyberChef configuration:
  - cyberchef_to_base64 / cyberchef_from_base64
  - cyberchef_aes_decrypt
  - cyberchef_sha2
  - cyberchef_yara_rules
  - ...and hundreds more.
- cyberchef_search: A utility tool to help the AI discover available operations and their descriptions.
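As a sketch of what an MCP client sends for a multi-step bake, the snippet below builds a `tools/call` message for `cyberchef_bake`. The argument names (`input`, `recipe`) and the recipe entries are illustrative assumptions, not the server's published schema:

```python
import json

# Hypothetical tools/call payload for cyberchef_bake. The "input"/"recipe"
# argument names and the recipe steps are assumptions for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cyberchef_bake",
        "arguments": {
            "input": "SGVsbG8sIFdvcmxkIQ==",  # placeholder data
            "recipe": [
                {"op": "From Base64", "args": []},
                {"op": "Gunzip", "args": []},
                {"op": "JSON Beautify", "args": []},
            ],
        },
    },
}
wire = json.dumps(request)
print(wire[:48])
```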
Technical Highlights
- Dockerized: Runs as a lightweight, self-contained Docker container based on Chainguard distroless Node.js 22 (~90MB compressed, 70% smaller attack surface than traditional images).
- Dual-Registry Publishing: Images published to both Docker Hub and GitHub Container Registry (GHCR) for maximum accessibility and Docker Scout health score optimization.
- Supply Chain Attestations: SBOM and provenance attestations attached to Docker Hub images for enhanced security transparency and compliance (SLSA Build Level 3).
- Stdio Transport: Communicates via standard input/output, making it easy to integrate with CLI-based MCP clients.
- Schema Validation: All inputs are validated against schemas derived from CyberChef's internal type system using zod.
- Modern Node.js: Fully compatible with Node.js v22+ with automated compatibility patches.
- Enhanced Observability (v1.5.0): Structured JSON logging with Pino for production monitoring, comprehensive error handling with actionable recovery suggestions, automatic retry logic with exponential backoff, request correlation with UUID tracking, circuit breaker pattern for cascading failure prevention, and streaming infrastructure for progressive results on large operations. See Release Notes for details.
- Performance Optimized (v1.4.0): LRU cache for operation results (100MB default), automatic streaming for large inputs (10MB+ threshold), configurable resource limits (100MB max input, 30s timeout), memory monitoring, and comprehensive benchmark suite. See Performance Tuning Guide for configuration options.
- Upstream Sync Automation (v1.3.0): Automated monitoring of upstream CyberChef releases every 6 hours, one-click synchronization workflow, comprehensive validation test suite with 465 tool tests, and emergency rollback mechanism.
- Security Hardened (v1.4.5+): Chainguard distroless base image with zero-CVE baseline, non-root execution (UID 65532), automated Trivy vulnerability scanning with build-fail thresholds, dual SBOM strategy (Docker Scout attestations + CycloneDX), read-only filesystem support, SLSA Build Level 3 provenance, and 7-day SLA for critical CVE patches. Fixed 11 of 12 code scanning vulnerabilities including critical cryptographic randomness weakness and 7 ReDoS vulnerabilities. See Security Policy and Security Fixes Report for details.
- Production Ready: Comprehensive CI/CD with CodeQL v4, automated testing, and dual-registry container publishing (Docker Hub + GHCR) with complete supply chain attestations.
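The structured-logging highlight above can be illustrated with a minimal Pino-style JSON log line. Python's stdlib logging stands in for Pino here, and the field names are assumptions rather than the server's actual log shape:

```python
import json
import logging
import sys
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, Pino-style (field names assumed)."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname.lower(),
            "msg": record.getMessage(),
            "requestId": getattr(record, "requestId", None),
        })

logger = logging.getLogger("cyberchef-mcp")
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Correlate the log line with a request via a UUID, as described above.
logger.info("operation complete", extra={"requestId": str(uuid.uuid4())})
```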
Quick Start
Prerequisites
- Docker installed and running.
Installation Options
Option 1: Pull from Docker Hub (Online, Recommended)
# Docker Hub provides health scores and supply chain attestations
docker pull doublegate/cyberchef-mcp:latest
docker tag doublegate/cyberchef-mcp:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
Option 1b: Pull from GitHub Container Registry (Alternative)
docker pull ghcr.io/doublegate/cyberchef-mcp_v1:latest
docker tag ghcr.io/doublegate/cyberchef-mcp_v1:latest cyberchef-mcp
docker run -i --rm cyberchef-mcp
Option 2: Download Pre-built Image (Offline Installation)
For environments without direct GHCR access, download the pre-built Docker image tarball from the latest release:
- Download the tarball (approximately 90MB compressed):
  # Download from GitHub Releases
  wget https://github.com/doublegate/CyberChef-MCP/releases/download/v1.5.1/cyberchef-mcp-v1.5.1-docker-image.tar.gz
- Load the image into Docker:
  docker load < cyberchef-mcp-v1.5.1-docker-image.tar.gz
- Tag for easier usage:
  docker tag ghcr.io/doublegate/cyberchef-mcp_v1:v1.5.1 cyberchef-mcp
- Run the server:
  docker run -i --rm cyberchef-mcp
Option 3: Build from Source
- Clone the Repository:
  git clone https://github.com/doublegate/CyberChef-MCP.git
  cd CyberChef-MCP
- Build the Docker Image:
  docker build -f Dockerfile.mcp -t cyberchef-mcp .
- Run the Server (Interactive Mode): This command starts the server and listens on stdin. This is what your MCP client will run.
  docker run -i --rm cyberchef-mcp
- Optional: Run with Enhanced Security (Read-Only Filesystem): For maximum security in production deployments:
  docker run -i --rm --read-only --tmpfs /tmp:rw,noexec,nosuid,size=100m cyberchef-mcp
Client Configuration
Cursor AI
- Go to Settings > Features > MCP.
- Add a new server:
  - Name: CyberChef
  - Type: command
  - Command: docker
  - Args: run -i --rm cyberchef-mcp
Claude Code (CLI)
Add to your configuration file (typically ~/.config/claude/config.json):
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": ["run", "-i", "--rm", "cyberchef-mcp"]
}
}
}
Claude Desktop
Add to your Claude Desktop configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%/Claude/claude_desktop_config.json
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": ["run", "-i", "--rm", "cyberchef-mcp"]
}
}
}
After adding the configuration, restart Claude Desktop. The CyberChef tools will appear in the available tools panel.
Performance & Configuration
Version 1.4.0 introduces comprehensive performance optimizations and configurable resource limits. All features can be tuned via environment variables for your deployment needs.
Performance Features
LRU Cache for Operation Results
- Automatically caches operation results to eliminate redundant computation
- Configurable cache size (100MB default) and item count (1000 default)
- Cache keys based on operation + input + arguments (SHA256 hash)
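The cache-key scheme above (SHA-256 over operation + input + arguments) can be sketched as follows; the exact serialization the server uses is an assumption:

```python
import hashlib
import json

def cache_key(operation: str, data: bytes, args: dict) -> str:
    """Deterministic cache key: SHA-256 over operation + input + arguments.

    Sorted-key JSON for the arguments is an illustrative choice; the
    server's actual serialization may differ.
    """
    h = hashlib.sha256()
    h.update(operation.encode())
    h.update(data)
    h.update(json.dumps(args, sort_keys=True).encode())
    return h.hexdigest()

print(cache_key("To Base64", b"hello", {"alphabet": "standard"})[:16])
```

Because the key is deterministic, a repeated request with identical operation, input, and arguments resolves to the same cache entry.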
Automatic Streaming for Large Inputs
- Inputs exceeding 10MB automatically use chunked processing
- Supports encoding, compression, and hashing operations
- Memory-efficient handling of 100MB+ files
- Transparent fallback for non-streaming operations
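The chunk-and-report loop behind automatic streaming might look like the following; the defaults mirror CYBERCHEF_STREAM_CHUNK_SIZE and CYBERCHEF_STREAM_PROGRESS_INTERVAL, and the reporting logic is an assumption:

```python
def stream_chunks(data: bytes, chunk_size: int = 1_048_576,
                  progress_interval: int = 10_485_760):
    """Yield (chunk, bytes_done, report) triples.

    report is True whenever this chunk crossed a progress-interval
    boundary, so a caller can emit periodic progress updates.
    """
    done = 0
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        prev, done = done, done + len(chunk)
        report = (done // progress_interval) > (prev // progress_interval)
        yield chunk, done, report

# With toy sizes: 25 bytes in 10-byte chunks, reporting every 20 bytes.
events = list(stream_chunks(b"x" * 25, chunk_size=10, progress_interval=20))
print([(done, report) for _, done, report in events])
```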
Resource Limits
- Maximum input size validation (100MB default)
- Operation timeout enforcement (30 seconds default)
- Prevents out-of-memory crashes and runaway operations
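A minimal sketch of the input-size guard (the 100MB default matches CYBERCHEF_MAX_INPUT_SIZE; the error wording is an assumption):

```python
DEFAULT_MAX_INPUT = 104_857_600  # 100MB, matching CYBERCHEF_MAX_INPUT_SIZE

def check_input_size(data: bytes, max_size: int = DEFAULT_MAX_INPUT) -> None:
    """Reject oversized inputs before any operation runs."""
    if len(data) > max_size:
        raise ValueError(
            f"input of {len(data)} bytes exceeds the {max_size}-byte limit"
        )

check_input_size(b"small payload")  # within limits, no error raised
```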
Memory Monitoring
- Periodic memory usage logging to stderr
- Heap and RSS tracking for troubleshooting
Configuration Options
All features are configurable via environment variables:
# Logging (v1.5.0+)
LOG_LEVEL=info # Logging level: debug, info, warn, error, fatal
# Retry Logic (v1.5.0+)
CYBERCHEF_MAX_RETRIES=3 # Maximum retry attempts for transient failures
CYBERCHEF_INITIAL_BACKOFF=1000 # Initial backoff delay in milliseconds
CYBERCHEF_MAX_BACKOFF=10000 # Maximum backoff delay in milliseconds
CYBERCHEF_BACKOFF_MULTIPLIER=2 # Backoff multiplier for exponential backoff
# Streaming (v1.5.0+)
CYBERCHEF_STREAM_CHUNK_SIZE=1048576 # Chunk size for streaming (1MB)
CYBERCHEF_STREAM_PROGRESS_INTERVAL=10485760 # Progress reporting interval (10MB)
# Performance (v1.4.0+)
CYBERCHEF_MAX_INPUT_SIZE=104857600 # Maximum input size (100MB)
CYBERCHEF_OPERATION_TIMEOUT=30000 # Operation timeout in milliseconds (30s)
CYBERCHEF_STREAMING_THRESHOLD=10485760 # Streaming threshold (10MB)
CYBERCHEF_ENABLE_STREAMING=true # Enable streaming for large operations
CYBERCHEF_ENABLE_WORKERS=true # Enable worker threads (infrastructure only)
CYBERCHEF_CACHE_MAX_SIZE=104857600 # Cache maximum size (100MB)
CYBERCHEF_CACHE_MAX_ITEMS=1000 # Cache maximum items
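Each variable resolves to a typed value with a fallback default; the server itself reads process.env, but the pattern, sketched here in Python for illustration, is the same:

```python
import os

def int_env(name: str, default: int) -> int:
    """Parse an integer environment variable, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw else default

os.environ["CYBERCHEF_MAX_RETRIES"] = "5"         # simulated deployment config
print(int_env("CYBERCHEF_MAX_RETRIES", 3))         # overridden value
print(int_env("CYBERCHEF_CACHE_MAX_ITEMS", 1000))  # unset, default applies
```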
Example Configurations
High-Throughput Server (Large Files)
docker run -i --rm --memory=4g \
-e CYBERCHEF_MAX_INPUT_SIZE=524288000 \
-e CYBERCHEF_STREAMING_THRESHOLD=52428800 \
-e CYBERCHEF_CACHE_MAX_SIZE=524288000 \
-e CYBERCHEF_OPERATION_TIMEOUT=120000 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
Low-Memory Environment
docker run -i --rm --memory=512m \
-e CYBERCHEF_MAX_INPUT_SIZE=10485760 \
-e CYBERCHEF_STREAMING_THRESHOLD=5242880 \
-e CYBERCHEF_CACHE_MAX_SIZE=10485760 \
-e CYBERCHEF_CACHE_MAX_ITEMS=100 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
Claude Desktop with Custom Limits
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "CYBERCHEF_MAX_INPUT_SIZE=209715200",
"-e", "CYBERCHEF_CACHE_MAX_SIZE=209715200",
"ghcr.io/doublegate/cyberchef-mcp_v1:latest"
]
}
}
}
Debug Logging for Troubleshooting (v1.5.0+)
docker run -i --rm \
-e LOG_LEVEL=debug \
-e CYBERCHEF_MAX_RETRIES=5 \
ghcr.io/doublegate/cyberchef-mcp_v1:latest
For detailed performance tuning guidance, see the Performance Tuning Guide.
Performance Benchmarks
Run the benchmark suite to measure performance on your hardware:
# Install dependencies
npm install
# Generate required configuration
npx grunt configTests
# Run benchmarks
npm run benchmark
The benchmark suite tests 20+ operations across multiple input sizes (1KB, 10KB, 100KB) in categories including:
- Encoding operations (Base64, Hex)
- Hashing operations (MD5, SHA256, SHA512)
- Compression operations (Gzip)
- Cryptographic operations (AES)
- Text operations (Regex)
- Analysis operations (Entropy, Frequency Distribution)
Security
This project implements comprehensive security hardening with continuous improvements:
Latest Enhancements (v1.5.0)
- Enhanced Error Handling: Comprehensive error reporting for production debugging
- 8 Error Codes: Standardized error classification (INVALID_INPUT, MISSING_ARGUMENT, OPERATION_FAILED, TIMEOUT, OUT_OF_MEMORY, UNSUPPORTED_OPERATION, CACHE_ERROR, STREAMING_ERROR)
- Rich Context: Detailed debugging information (input size, operation name, request ID, timestamp)
- Recovery Suggestions: Actionable recommendations for common issues
- Retryable Classification: Automatic distinction between transient and permanent failures
- Structured Logging with Pino: Production-ready observability
- JSON Logs: Machine-readable logs for monitoring tools (Datadog, Splunk, ELK)
- Request Correlation: UUID-based request tracking across operations
- Performance Metrics: Duration, throughput, cache hits, memory usage
- Configurable Levels: debug, info, warn, error, fatal via LOG_LEVEL environment variable
- Automatic Retry Logic: Resilience for transient failures
- Exponential Backoff: 1s → 2s → 4s with jitter to prevent thundering herd
- Configurable Retries: Default 3 attempts, customizable via CYBERCHEF_MAX_RETRIES
- Smart Detection: Automatically retries timeouts, memory issues, network errors
- Circuit Breaker: Opens after 5 consecutive failures to prevent cascading issues
- MCP Streaming Infrastructure: Progressive results for large operations
- Chunked Processing: Memory-efficient handling of 100MB+ inputs
- Progress Reporting: Updates every 10MB for long-running operations
- 14 Supported Operations: Encoding (Base64, Hex), hashing (MD5, SHA family), text operations
- Configurable Thresholds: Streaming chunk size and progress interval
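The retry behavior described above (transient-vs-permanent classification plus exponential backoff with jitter) can be sketched as follows. Which error codes count as retryable and the ±10% jitter fraction are assumptions based on the bullet points; the numeric defaults mirror the CYBERCHEF_* variables:

```python
import random

# Assumed split of the 8 error codes into transient (retryable) vs permanent.
RETRYABLE_CODES = {"TIMEOUT", "OUT_OF_MEMORY", "CACHE_ERROR", "STREAMING_ERROR"}

def is_retryable(code: str) -> bool:
    """Transient failures are retried; permanent ones surface immediately."""
    return code in RETRYABLE_CODES

def backoff_delays(retries: int = 3, initial_ms: int = 1000,
                   multiplier: int = 2, max_ms: int = 10_000,
                   jitter: float = 0.1) -> list:
    """Exponential backoff (1s -> 2s -> 4s), capped at max_ms, with jitter."""
    delays, delay = [], initial_ms
    for _ in range(retries):
        delays.append(min(delay, max_ms) * random.uniform(1 - jitter, 1 + jitter))
        delay *= multiplier
    return delays

if is_retryable("TIMEOUT"):
    print([round(d) for d in backoff_delays()])  # roughly [1000, 2000, 4000]
```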
Security Hardening (v1.4.6)
- Chainguard Distroless Base Image: Enterprise-grade container security
- Zero-CVE Baseline: Daily security updates with 7-day SLA for critical patches
- 70% Smaller Attack Surface: Minimal OS footprint compared to traditional Alpine/Debian images
- Non-Root Execution: Runs as UID 65532 (nonroot user) in distroless environment
- SLSA Build Level 3 Provenance: Verifiable supply chain integrity
- Multi-stage Build: -dev variant for compilation, distroless runtime for production
- Read-Only Filesystem Support: Production-ready immutable deployments
  - Supports docker run --read-only with a tmpfs mount for /tmp
  - Compliance-ready for PCI-DSS, SOC 2, and FedRAMP requirements
  - Example: docker run -i --rm --read-only --tmpfs /tmp:rw,noexec,nosuid,size=100m cyberchef-mcp
- Security Scan Fail Thresholds: Automated vulnerability prevention
  - Trivy scanner configured with exit-code: '1' in CI/CD
  - Builds automatically fail on CRITICAL or HIGH vulnerabilities
  - Prevents vulnerable images from reaching production
- Dual SBOM Strategy: Comprehensive supply chain transparency
- Part 1: Docker buildx attestations for automated registry scanning (Docker Scout)
- Part 2: Trivy CycloneDX SBOM for offline compliance auditing
- Both SBOMs attached as release assets for verification
Code Security (v1.4.1+)
- 11 of 12 Code Scanning Vulnerabilities Fixed: Comprehensive security hardening completed
- CRITICAL: Fixed insecure cryptographic randomness in the GOST library; replaced Math.random() with crypto.randomBytes()
- HIGH: Eliminated 7 ReDoS (Regular Expression Denial of Service) vulnerabilities across 6 operations
- NEW MODULE: SafeRegex.mjs provides centralized validation for all user-controlled regex patterns
  - Pattern length limits (10,000 characters)
  - ReDoS pattern detection (nested quantifiers, overlapping alternations)
  - Timeout-based validation (100ms) to detect catastrophic backtracking
  - XRegExp and standard RegExp support
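As a rough Python analogue of the SafeRegex checks, a validator might enforce the length limit and a nested-quantifier heuristic before compiling; the heuristic below is a deliberate simplification of what SafeRegex.mjs actually does:

```python
import re

MAX_PATTERN_LENGTH = 10_000
# Simplified heuristic for nested quantifiers like (a+)+ that can trigger
# catastrophic backtracking; SafeRegex.mjs performs richer detection.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*]\)[+*]")

def validate_pattern(pattern: str) -> None:
    """Reject user-controlled patterns that look like ReDoS risks."""
    if len(pattern) > MAX_PATTERN_LENGTH:
        raise ValueError("pattern exceeds length limit")
    if NESTED_QUANTIFIER.search(pattern):
        raise ValueError("pattern contains a nested quantifier (ReDoS risk)")
    re.compile(pattern)  # finally, make sure the pattern parses at all

validate_pattern(r"[0-9a-f]{32}")  # a benign pattern passes
```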
- All 1,933 Tests Passing: Security fixes validated with comprehensive test suite
- See Security Fixes Report for complete details
Supply Chain Security (v1.4.5+)
- Dual-Registry Publishing with Attestations: Enhanced security transparency and compliance
- Docker Hub: Primary distribution with Docker Scout health score monitoring
- GitHub Container Registry (GHCR): Secondary distribution for GitHub ecosystem integration
- Both registries receive identical images with full attestation support
- Docker Scout Attestations: Build integrity and software transparency
- Provenance Attestation (mode=max): Complete build process metadata (builder, materials, recipe) for SLSA Build Level 3 compliance
- SBOM Attestation: Automatic Software Bill of Materials generation in SPDX-JSON format
- Achieves optimal Docker Scout health score (grade A or B) on Docker Hub
- 15 points out of 100 in health score calculation - one of the highest-weighted policy categories
- Dual SBOM Strategy: Comprehensive software transparency
  - Docker Attestation SBOM: Attached to the image manifest for registry-based validation and the docker sbom command
  - Trivy SBOM Artifact: Standalone CycloneDX file for offline audits and compliance reporting
  - Both SBOMs include the complete dependency tree with version information
- Trivy Integration: Container and dependency scanning on every build with fail-fast thresholds
- GitHub Security Tab: All findings automatically uploaded as SARIF
- Verification: Use the docker scout quickview and docker sbom commands to inspect attestations locally
Container Security (v1.4.5+)
- Chainguard Distroless: Zero-CVE baseline with minimal attack surface
- Non-Root Execution: Container runs as UID 65532 (nonroot user in distroless)
- Read-Only Filesystem: Supports the --read-only flag for immutable deployments
- Minimal Attack Surface: No shell, no package manager, only runtime dependencies
- Health Checks: Built-in container health monitoring
Cryptographic Hardening (v1.2.5)
- Argon2 OWASP Compliance: Default parameters follow OWASP 2024-2025 recommendations
- Type: Argon2id (hybrid side-channel + GPU resistance)
- Memory: 19 MiB (OWASP minimum)
- Iterations: 2 (OWASP recommended for 19 MiB)
- Secure Random Number Generation: All cryptographic operations use crypto.randomBytes() or crypto.getRandomValues()
- CVE-2025-64756 Fixed: Updated npm to resolve a glob command injection vulnerability
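The same rule applies outside Node.js: cryptographic material must come from a CSPRNG. In Python, the analogue of crypto.randomBytes() is the secrets module, sketched here for illustration:

```python
import secrets

# Never use the non-cryptographic `random` module for keys, IVs, or salts;
# `secrets` draws from the OS CSPRNG, like Node's crypto.randomBytes().
iv = secrets.token_bytes(16)       # e.g. a 128-bit AES IV
api_token = secrets.token_hex(32)  # 64 hex characters of secret material
print(len(iv), len(api_token))
```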
Automated Security Scanning
- CodeQL Analysis: Continuous code scanning for security vulnerabilities
- Weekly Scans: Scheduled scans catch newly discovered vulnerabilities
Secure Deployment
# Recommended: Run with maximum security options (Chainguard distroless)
docker run -i --rm \
--read-only \
--tmpfs /tmp:rw,noexec,nosuid,size=100m \
--cap-drop=ALL \
--security-opt=no-new-privileges \
cyberchef-mcp
# Note: Chainguard distroless already runs as non-root (UID 65532)
# --read-only requires tmpfs mount for /tmp directory
For detailed information, see:
- Security Policy - Vulnerability reporting and security policies
- Security Audit - Comprehensive security assessment
- Security Fixes Report - Latest vulnerability fixes
- Security Fixes Summary - Quick reference guide
Project Roadmap
CyberChef MCP Server has a comprehensive development roadmap spanning 19 releases across 6 phases through August 2027.
| Phase | Releases | Timeline | Focus | Status |
|-------|----------|----------|-------|--------|
| Phase 1: Foundation | v1.2.0 - v1.4.6 | Q4 2025 - Q1 2026 | Security hardening, upstream sync, performance | Completed |
| Phase 2: Enhancement | v1.5.0 - v1.7.0 | Q2 2026 | Streaming, recipe management, batch processing | v1.5.0 Released |
| Phase 3: Maturity | v1.8.0 - v2.0.0 | Q3 2026 | API stabilization, breaking changes, v2.0.0 | Planned |
| Phase 4: Expansion | v2.1.0 - v2.3.0 | Q4 2026 | Multi-modal, advanced transports, plugins | Planned |
| Phase 5: Enterprise | v2.4.0 - v2.6.0 | Q1 2027 | OAuth 2.1, RBAC, Kubernetes, observability | Planned |
| Phase 6: Evolution | v2.7.0 - v3.0.0 | Q2-Q3 2027 | Edge deployment, AI-native features, v3.0.0 | Planned |
See the Full Roadmap for detailed release plans and timelines.
Documentation
Detailed documentation is organized in the docs/ directory:
User Guides
- User Guide: Detailed installation and client configuration
- Commands Reference: List of all available MCP tools and operations
- Docker Hub Setup Guide: Quick start guide for Docker Hub publishing and attestations
- Docker Scout Attestations Guide: Comprehensive guide to supply chain attestations and health scores
Technical Documentation
- Architecture: Technical design of the MCP server
- Technical Implementation: Implementation details
- Performance Tuning Guide: Configuration guide for optimizing performance
Project Management
- Product Roadmap: Comprehensive v1.1.0 → v3.0.0 roadmap with timeline
- Tasks: 500+ implementation tasks organized by release
- Development Phases: Sprint breakdowns for each development phase
- Release Plans: Individual release specifications (v1.2.0 - v3.0.0)
- Project Summary: Internal project overview
Strategic Planning
- Upstream Sync Strategy: Automated CyberChef update monitoring
- Security Hardening Plan: Docker DHI, non-root, SBOM generation
- Multi-Modal Strategy: Image/binary/audio handling via MCP
- Plugin Architecture: Custom operations and sandboxed execution
- Enterprise Features: OAuth 2.1, RBAC, audit logging
Security & Releases
- Security Policy: Security policy and vulnerability reporting
- Security Audit: Comprehensive security assessment
- Security Fixes Report: Detailed report of 11 vulnerability fixes (ReDoS and cryptographic weaknesses)
- Security Fixes Summary: Quick reference for recent security improvements
- Release Notes v1.5.0: Enhanced error handling, structured logging, automatic retry, streaming infrastructure
- Release Notes v1.4.6: Sprint 1 Security Hardening - Chainguard distroless migration, zero-CVE baseline, read-only filesystem support
- Release Notes v1.4.5: Supply chain attestations and documentation reorganization
- Release Notes v1.4.4: Docker Hub build fix and 12 security vulnerability fixes
- Release Notes v1.4.3: Dependency resolution and Node.js 22 compatibility
- Release Notes v1.4.2: CI/CD improvements and zero-warning workflows
- Release Notes v1.4.1: Security patch - 11 Code Scanning vulnerabilities fixed
- Release Notes v1.4.0: Performance optimization with caching, streaming, and resource limits
- Release Notes v1.3.0: Upstream sync automation with comprehensive testing
- Release Notes v1.2.6: nginx:alpine-slim optimization for web app
- Release Notes v1.2.5: Security patch with OWASP Argon2 hardening
- Release Notes v1.2.0: Security hardening release
- Release Notes v1.1.0: Security fixes and Node.js 22 compatibility
- Release Notes v1.0.0: Initial MCP server release
Development
Local Setup
If you want to modify the server code without Docker:
- Install Dependencies: npm install
- Generate Config (required to build the internal operation lists): npx grunt configTests
- Run Server: npm run mcp
CI/CD
This project uses GitHub Actions to ensure stability and security:
Core Development Workflows:
- Core CI (core-ci.yml): Tests the underlying CyberChef logic and configuration generation on Node.js v22
- Docker Build (mcp-docker-build.yml): Builds, verifies, and security-scans the cyberchef-mcp Docker image
- Pull Request Checks (pull_requests.yml): Automated testing and validation for pull requests
- Performance Benchmarks (performance-benchmarks.yml): Automated performance regression testing on code changes (v1.4.0+)
Security & Release Workflows:
- Security Scan (security-scan.yml): Trivy vulnerability scanning, SBOM generation, weekly scheduled scans
- CodeQL Analysis (codeql.yml): Automated security scanning for code vulnerabilities (CodeQL v4)
- Release (mcp-release.yml): Publishes the Docker image to GHCR with SBOM attachment on version tags (v*), and automatically creates GitHub releases
Upstream Sync Automation (v1.3.0):
- Upstream Monitor (upstream-monitor.yml): Monitors GCHQ/CyberChef for new releases every 6 hours and creates GitHub issues for review
- Upstream Sync (upstream-sync.yml): Automated synchronization workflow with merge, config regeneration, testing, and PR creation
- Rollback (rollback.yml): Emergency rollback mechanism for reverting problematic upstream merges
All workflows use the latest CodeQL Action v4 for security scanning and SARIF upload.
Testing
# Run all tests (requires Node.js 22+)
npm test
# Run MCP validation test suite (465 tool tests with Vitest)
npm run test:mcp
# Run performance benchmarks (v1.4.0+)
npm run benchmark
# Test Node.js consumer compatibility
npm run testnodeconsumer
# Test UI (requires production build first)
npm run build
npm run testui
# Lint code
npm run lint
Contributing
Contributions to the MCP adapter are welcome! We appreciate:
- Bug Reports: Open an issue with detailed steps to reproduce
- Feature Requests: Check Roadmap first, then open an issue
- Pull Requests: See Tasks for areas needing work
- Documentation: Improvements to guides and examples are always welcome
Development Workflow
- Fork the repository
- Create a feature branch (
git checkout -b feature/amazing-feature) - Make your changes and test thoroughly
- Commit with conventional commit messages (
feat:,fix:,docs:, etc.) - Push to your fork and submit a pull request
For contributions to the core CyberChef operations, please credit the original GCHQ repository.
Repository Information
- Original CyberChef: GCHQ/CyberChef
- MCP Fork: doublegate/CyberChef-MCP
- Container Registries:
- Docker Hub (Primary): doublegate/cyberchef-mcp, with Docker Scout health scores and attestations
- GHCR (Secondary): ghcr.io/doublegate/cyberchef-mcp_v1
- Issue Tracker: GitHub Issues
Support
If you find this project useful, consider supporting its development.
Licensing
CyberChef is released under the Apache 2.0 Licence and is covered by Crown Copyright.
This MCP server adapter maintains the same Apache 2.0 license.