MockLoop MCP - AI-Native Testing Platform
The world's first AI-native API testing platform powered by the Model Context Protocol (MCP). MockLoop MCP revolutionizes API testing with comprehensive AI-driven scenario generation, automated test execution, and intelligent analysis capabilities.
MockLoop MCP generates and runs mock API servers from API documentation (e.g., OpenAPI/Swagger specifications), so developers can quickly spin up mock backends for development, testing, and integration.
Revolutionary Capabilities: 5 AI Prompts • 15 Scenario Resources • 16 Testing Tools • 10 Context Tools • 4 Core Tools • Complete MCP Integration
Documentation: https://docs.mockloop.com
PyPI Package: https://pypi.org/project/mockloop-mcp/
GitHub Repository: https://github.com/mockloop/mockloop-mcp
What Makes MockLoop MCP Revolutionary?
MockLoop MCP represents a paradigm shift in API testing, introducing the world's first AI-native testing architecture that combines:
- AI-Driven Test Generation: 5 specialized MCP prompts for intelligent scenario creation
- Community Scenario Packs: 15 curated testing resources with community architecture
- Automated Test Execution: 30 comprehensive MCP tools for complete testing workflows (16 testing + 10 context + 4 core)
- Stateful Testing: Advanced context management with GlobalContext and AgentContext
- Enterprise Compliance: Complete audit logging and regulatory compliance tracking
- Dual-Port Architecture: Eliminates /admin path conflicts with separate mocked API and admin ports
Core AI-Native Architecture
MCP Audit Logging
Enterprise-grade compliance and regulatory tracking
- Complete request/response audit trails
- Regulatory compliance monitoring
- Performance metrics and analytics
- Security event logging
MCP Prompts (5 AI-Driven Capabilities)
Intelligent scenario generation powered by AI
- analyze_openapi_for_testing - Comprehensive API analysis for testing strategies
- generate_scenario_config - Dynamic test scenario configuration
- optimize_scenario_for_load - Load testing optimization
- generate_error_scenarios - Error condition simulation
- generate_security_test_scenarios - Security vulnerability testing
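The AI framework examples later in this README call these prompts through the MockLoopClient helper. A minimal sketch, assuming that client and a reachable OpenAPI spec (the spec URL and return values are illustrative):
from mockloop_mcp import MockLoopClient

mockloop = MockLoopClient()

# Derive a testing strategy from an OpenAPI spec (arguments mirror the
# framework integration examples further down)
analysis = mockloop.analyze_openapi_for_testing(
    api_spec="https://api.example.com/openapi.json",
    analysis_depth="comprehensive",
    include_security_tests=True
)

# Turn that analysis into concrete, deployable scenario configurations
scenarios = mockloop.generate_scenario_config(
    api_spec="https://api.example.com/openapi.json",
    scenario_types=["load", "error", "security"],
    ai_optimization=True
)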
MCP Resources (15 Scenario Packs)
Community-driven testing scenarios with advanced architecture
- Load Testing Scenarios: High-volume traffic simulation
- Error Simulation Packs: Comprehensive error condition testing
- Security Testing Suites: Vulnerability assessment scenarios
- Performance Benchmarks: Standardized performance testing
- Integration Test Packs: Cross-service testing scenarios
- Community Architecture: Collaborative scenario sharing and validation
MCP Tools (16 Automated Testing Tools)
Complete automated test execution capabilities
Scenario Management (4 tools)
- validate_scenario_config - Scenario validation and verification
- deploy_scenario - Automated scenario deployment
- switch_scenario - Dynamic scenario switching
- list_active_scenarios - Active scenario monitoring
Test Execution (4 tools)
- execute_test_plan - Comprehensive test plan execution
- run_test_iteration - Individual test iteration management
- run_load_test - Load testing execution
- run_security_test - Security testing automation
Analysis & Reporting (4 tools)
- analyze_test_results - Intelligent test result analysis
- generate_test_report - Comprehensive reporting
- compare_test_runs - Test run comparison and trends
- get_performance_metrics - Performance metrics collection
Workflow Management (4 tools)
- create_test_session - Test session initialization
- end_test_session - Session cleanup and finalization
- schedule_test_suite - Automated test scheduling
- monitor_test_progress - Real-time progress monitoring
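Together these tools cover a complete test cycle. A hedged sketch of a single iteration, reusing the MockLoopClient call style from the examples later in this README (the scenario structure and the validate_scenario_config signature are assumptions):
from mockloop_mcp import MockLoopClient

mockloop = MockLoopClient()
server_url = "http://localhost:8000"

# A scenario configuration as produced by generate_scenario_config (structure illustrative)
scenario = {"name": "smoke_scenario"}

# Validate and deploy the scenario, run a short load test, then analyze the results
mockloop.validate_scenario_config(scenario_config=scenario)  # assumed signature
mockloop.deploy_scenario(server_url=server_url, scenario_config=scenario)
results = mockloop.run_load_test(
    server_url=server_url,
    scenario_name=scenario["name"],
    duration=60,
    concurrent_users=10
)
report = mockloop.analyze_test_results(test_results=results, include_recommendations=True)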
MCP Context Management (10 Stateful Workflow Tools)
Advanced state management for complex testing workflows
Context Creation & Management
- create_test_session_context - Test session state management
- create_workflow_context - Complex workflow orchestration
- create_agent_context - AI agent state management
Data Management
- get_context_data - Context data retrieval
- update_context_data - Dynamic context updates
- list_contexts_by_type - Context discovery and listing
Snapshot & Recovery
- create_context_snapshot - State snapshot creation
- restore_context_snapshot - State rollback capabilities
Global Context
- get_global_context_data - Cross-session data sharing
- update_global_context_data - Global state management
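The stateful testing section later in this README walks through a full workflow; as a shorter hedged sketch of the data and snapshot tools (the create/update calls follow the later examples, while the get/restore signatures are assumptions):
from mockloop_mcp import MockLoopClient

mockloop = MockLoopClient()

# Create a session context and attach some state to it
ctx = mockloop.create_test_session_context(
    session_name="demo_session",
    configuration={"ai_enhanced": True}
)
mockloop.update_context_data(
    context_id=ctx["context_id"],
    data={"stage": "setup"}
)
current = mockloop.get_context_data(context_id=ctx["context_id"])  # assumed signature

# Snapshot the state so a failed step can be rolled back later
mockloop.create_context_snapshot(
    context_id=ctx["context_id"],
    snapshot_name="before_load_test"
)
mockloop.restore_context_snapshot(
    context_id=ctx["context_id"],
    snapshot_name="before_load_test"  # assumed parameter
)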
Quick Start
Get started with the world's most advanced AI-native testing platform:
# 1. Install MockLoop MCP
pip install mockloop-mcp
# 2. Verify installation
mockloop-mcp --version
# 3. Configure with your MCP client (Cline, Claude Desktop, etc.)
# See configuration examples below
Prerequisites
- Python 3.10+
- Pip package manager
- Docker and Docker Compose (for containerized mock servers)
- An MCP-compatible client (Cline, Claude Desktop, etc.)
Installation
Option 1: Install from PyPI (Recommended)
# Install the latest stable version
pip install mockloop-mcp
# Or install with optional dependencies
pip install mockloop-mcp[dev] # Development tools
pip install mockloop-mcp[docs] # Documentation tools
pip install mockloop-mcp[all] # All optional dependencies
# Verify installation
mockloop-mcp --version
Option 2: Development Installation
# Clone the repository
git clone https://github.com/mockloop/mockloop-mcp.git
cd mockloop-mcp
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install in development mode
pip install -e ".[dev]"
Configuration
MCP Client Configuration
Cline (VS Code Extension)
Add to your Cline MCP settings file:
{
  "mcpServers": {
    "MockLoopLocal": {
      "autoApprove": [],
      "disabled": false,
      "timeout": 60,
      "command": "mockloop-mcp",
      "args": [],
      "transportType": "stdio"
    }
  }
}
Claude Desktop
Add to your Claude Desktop configuration:
{
  "mcpServers": {
    "mockloop": {
      "command": "mockloop-mcp",
      "args": []
    }
  }
}
Virtual Environment Installations
For virtual environment installations, use the full Python path:
{
  "mcpServers": {
    "MockLoopLocal": {
      "command": "/path/to/your/venv/bin/python",
      "args": ["-m", "mockloop_mcp"],
      "transportType": "stdio"
    }
  }
}
Available MCP Tools
Core Mock Generation
generate_mock_api
Generate sophisticated FastAPI mock servers with dual-port architecture.
Parameters:
- spec_url_or_path (string, required): API specification URL or local file path
- output_dir_name (string, optional): Output directory name
- auth_enabled (boolean, optional): Enable authentication middleware (default: true)
- webhooks_enabled (boolean, optional): Enable webhook support (default: true)
- admin_ui_enabled (boolean, optional): Enable admin UI (default: true)
- storage_enabled (boolean, optional): Enable storage functionality (default: true)
Revolutionary Dual-Port Architecture:
- Mocked API Port: Serves your API endpoints (default: 8000)
- Admin UI Port: Separate admin interface (default: 8001)
- Conflict Resolution: Eliminates /admin path conflicts in OpenAPI specs
- Enhanced Security: Port-based access control and isolation
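A minimal invocation with the parameters listed above, using the MockLoopClient instance shown in the framework examples below (the spec URL is a placeholder and the defaults are spelled out for clarity):
result = mockloop.generate_mock_api(
    spec_url_or_path="https://api.example.com/openapi.json",
    output_dir_name="example_api_mock",
    auth_enabled=True,
    webhooks_enabled=True,
    admin_ui_enabled=True,
    storage_enabled=True
)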
Advanced Analytics
query_mock_logs
Query and analyze request logs with AI-powered insights.
Parameters:
- server_url (string, required): Mock server URL
- limit (integer, optional): Maximum logs to return (default: 100)
- offset (integer, optional): Pagination offset (default: 0)
- method (string, optional): Filter by HTTP method
- path_pattern (string, optional): Regex pattern for path filtering
- time_from (string, optional): Start time filter (ISO format)
- time_to (string, optional): End time filter (ISO format)
- include_admin (boolean, optional): Include admin requests (default: false)
- analyze (boolean, optional): Perform AI analysis (default: true)
AI-Powered Analysis:
- Performance metrics (P95/P99 response times)
- Error rate analysis and categorization
- Traffic pattern detection
- Automated debugging recommendations
- Session correlation and tracking
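For example, pulling recent POST traffic for one endpoint family with analysis enabled (values are placeholders; this reuses the MockLoopClient instance from the example above):
logs = mockloop.query_mock_logs(
    server_url="http://localhost:8000",
    limit=50,
    method="POST",
    path_pattern="/api/orders/.*",
    include_admin=False,
    analyze=True
)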
discover_mock_servers
Intelligent server discovery with dual-port architecture support.
Parameters:
- ports (array, optional): Ports to scan (default: common ports)
- check_health (boolean, optional): Perform health checks (default: true)
- include_generated (boolean, optional): Include generated mocks (default: true)
Advanced Discovery:
- Automatic architecture detection (single-port vs dual-port)
- Health status monitoring
- Server correlation and matching
- Port usage analysis
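For example, scanning a few local ports with health checks enabled (the port list is illustrative; the client instance is the same as above):
servers = mockloop.discover_mock_servers(
    ports=[8000, 8001, 8002, 8003],
    check_health=True,
    include_generated=True
)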
manage_mock_data
Dynamic response management without server restart.
Parameters:
- server_url (string, required): Mock server URL
- operation (string, required): Operation type ("update_response", "create_scenario", "switch_scenario", "list_scenarios")
- endpoint_path (string, optional): API endpoint path
- response_data (object, optional): New response data
- scenario_name (string, optional): Scenario name
- scenario_config (object, optional): Scenario configuration
Dynamic Capabilities:
- Real-time response updates
- Scenario-based testing
- Runtime configuration management
- Zero-downtime modifications
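For instance, switching the active scenario and then overriding a single endpoint response at runtime (scenario name and payload are illustrative):
# Switch the active scenario without restarting the server
mockloop.manage_mock_data(
    server_url="http://localhost:8000",
    operation="switch_scenario",
    scenario_name="peak_traffic"
)

# Override one endpoint's response while the server keeps running
mockloop.manage_mock_data(
    server_url="http://localhost:8000",
    operation="update_response",
    endpoint_path="/api/users/123",
    response_data={"id": 123, "status": "suspended"}
)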
MCP Proxy Functionality
MockLoop MCP includes revolutionary proxy capabilities that enable seamless switching between mock and live API environments. This powerful feature transforms your testing workflow by providing:
Core Proxy Capabilities
- Seamless Mode Switching: Transition between mock, proxy, and hybrid modes without code changes
- Intelligent Routing: Smart request routing based on configurable rules and conditions
- Universal Authentication: Support for API Key, Bearer Token, Basic Auth, and OAuth2
- Response Comparison: Automated comparison between mock and live API responses
- Zero-Downtime Switching: Change modes dynamically without service interruption
Operational Modes
Mock Mode (MOCK)
- All requests handled by generated mock responses
- Predictable, consistent testing environment
- Ideal for early development and isolated testing
- No external dependencies or network calls
Proxy Mode (PROXY)
- All requests forwarded to live API endpoints
- Real-time data and authentic responses
- Full integration testing capabilities
- Network-dependent operation with live credentials
Hybrid Mode (HYBRID)
- Intelligent routing between mock and proxy based on rules
- Conditional switching based on request patterns, headers, or parameters
- Gradual migration from mock to live environments
- A/B testing and selective endpoint proxying
Quick Start Example
from mockloop_mcp.mcp_tools import create_mcp_plugin

# Create a proxy-enabled plugin
plugin_result = await create_mcp_plugin(
    spec_url_or_path="https://api.example.com/openapi.json",
    mode="hybrid",  # Start with hybrid mode
    plugin_name="example_api",
    target_url="https://api.example.com",
    auth_config={
        "auth_type": "bearer_token",
        "credentials": {"token": "your-token"}
    },
    routing_rules=[
        {
            "pattern": "/api/critical/*",
            "mode": "proxy",  # Critical endpoints use live API
            "priority": 10
        },
        {
            "pattern": "/api/dev/*",
            "mode": "mock",  # Development endpoints use mocks
            "priority": 5
        }
    ]
)
Advanced Features
- Response Validation: Compare mock vs live responses for consistency
- Performance Monitoring: Track response times and throughput across modes
- Error Handling: Graceful fallback mechanisms and retry policies
- Dynamic Configuration: Runtime mode switching and rule updates
- Audit Logging: Complete request/response tracking across all modes
Authentication Support
The proxy system supports comprehensive authentication schemes:
- API Key: Header, query parameter, or cookie-based authentication
- Bearer Token: OAuth2 and JWT token support
- Basic Auth: Username/password combinations
- OAuth2: Full OAuth2 flow with token refresh
- Custom: Extensible authentication handlers for proprietary schemes
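Building on the auth_config structure from the quick start example above, other schemes plug in the same way. The credential field names below are assumptions, not confirmed keys; check the MCP Proxy Guide for the exact structure:
# API key sent in a request header (field names are assumed, not confirmed)
api_key_auth = {
    "auth_type": "api_key",
    "credentials": {"api_key": "your-key", "header_name": "X-API-Key"}
}

# Basic authentication (field names are assumed, not confirmed)
basic_auth = {
    "auth_type": "basic_auth",
    "credentials": {"username": "user", "password": "pass"}
}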
Use Cases
- Development Workflow: Start with mocks, gradually introduce live APIs
- Integration Testing: Validate against real services while maintaining test isolation
- Performance Testing: Compare mock vs live API performance characteristics
- Staging Validation: Ensure mock responses match production API behavior
- Hybrid Deployments: Route critical operations to live APIs, others to mocks
Complete Guide: For detailed configuration, examples, and best practices, see the MCP Proxy Guide.
AI Framework Integration
MockLoop MCP provides native integration with popular AI frameworks:
LangGraph Integration
from langgraph.graph import StateGraph, END
from mockloop_mcp import MockLoopClient

# Initialize MockLoop client
mockloop = MockLoopClient()

def setup_ai_testing(state):
    """AI-driven test setup"""
    # Generate mock API with AI analysis
    result = mockloop.generate_mock_api(
        spec_url_or_path="https://api.example.com/openapi.json",
        output_dir_name="ai_test_environment"
    )
    # Use AI prompts for scenario generation
    scenarios = mockloop.analyze_openapi_for_testing(
        api_spec=state["api_spec"],
        analysis_depth="comprehensive",
        include_security_tests=True
    )
    state["mock_server_url"] = "http://localhost:8000"
    state["test_scenarios"] = scenarios
    return state

def execute_ai_tests(state):
    """Execute AI-generated test scenarios"""
    # Deploy AI-generated scenarios
    for scenario in state["test_scenarios"]:
        mockloop.deploy_scenario(
            server_url=state["mock_server_url"],
            scenario_config=scenario
        )
        # Execute load tests with AI optimization
        results = mockloop.run_load_test(
            server_url=state["mock_server_url"],
            scenario_name=scenario["name"],
            duration=300,
            concurrent_users=100
        )
        # AI-powered result analysis
        analysis = mockloop.analyze_test_results(
            test_results=results,
            include_recommendations=True
        )
        state.setdefault("test_results", []).append(analysis)
    return state

# Build AI-native testing workflow
workflow = StateGraph(dict)
workflow.add_node("setup_ai_testing", setup_ai_testing)
workflow.add_node("execute_ai_tests", execute_ai_tests)
workflow.set_entry_point("setup_ai_testing")
workflow.add_edge("setup_ai_testing", "execute_ai_tests")
workflow.add_edge("execute_ai_tests", END)
app = workflow.compile()
CrewAI Multi-Agent Testing
from crewai import Agent, Task, Crew
from mockloop_mcp import MockLoopClient

# Initialize MockLoop client
mockloop = MockLoopClient()

# AI Testing Specialist Agent
api_testing_agent = Agent(
    role='AI API Testing Specialist',
    goal='Generate and execute comprehensive AI-driven API tests',
    backstory='Expert in AI-native testing with MockLoop MCP integration',
    tools=[
        mockloop.generate_mock_api,
        mockloop.analyze_openapi_for_testing,
        mockloop.generate_scenario_config
    ]
)

# Performance Analysis Agent
performance_agent = Agent(
    role='AI Performance Analyst',
    goal='Analyze API performance with AI-powered insights',
    backstory='Specialist in AI-driven performance analysis and optimization',
    tools=[
        mockloop.run_load_test,
        mockloop.get_performance_metrics,
        mockloop.analyze_test_results
    ]
)

# Security Testing Agent
security_agent = Agent(
    role='AI Security Testing Expert',
    goal='Conduct AI-driven security testing and vulnerability assessment',
    backstory='Expert in AI-powered security testing methodologies',
    tools=[
        mockloop.generate_security_test_scenarios,
        mockloop.run_security_test,
        mockloop.compare_test_runs
    ]
)

# Define AI-driven tasks
ai_setup_task = Task(
    description='Generate AI-native mock API with comprehensive testing scenarios',
    agent=api_testing_agent,
    expected_output='Mock server with AI-generated test scenarios deployed'
)

performance_task = Task(
    description='Execute AI-optimized performance testing and analysis',
    agent=performance_agent,
    expected_output='Comprehensive performance analysis with AI recommendations'
)

security_task = Task(
    description='Conduct AI-driven security testing and vulnerability assessment',
    agent=security_agent,
    expected_output='Security test results with AI-powered threat analysis'
)

# Create AI testing crew
ai_testing_crew = Crew(
    agents=[api_testing_agent, performance_agent, security_agent],
    tasks=[ai_setup_task, performance_task, security_task],
    verbose=True
)

# Execute AI-native testing workflow
results = ai_testing_crew.kickoff()
LangChain AI Testing Tools
from langchain.agents import Tool, AgentExecutor, create_react_agent
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from mockloop_mcp import MockLoopClient

# Initialize MockLoop client
mockloop = MockLoopClient()

# AI-Native Testing Tools
def ai_generate_mock_api(spec_path: str) -> str:
    """Generate AI-enhanced mock API with intelligent scenarios"""
    # Generate mock API
    result = mockloop.generate_mock_api(spec_url_or_path=spec_path)
    # Use AI to analyze and enhance
    analysis = mockloop.analyze_openapi_for_testing(
        api_spec=spec_path,
        analysis_depth="comprehensive",
        include_security_tests=True
    )
    return f"AI-enhanced mock API generated: {result}\nAI Analysis: {analysis['summary']}"

def ai_execute_testing_workflow(server_url: str) -> str:
    """Execute comprehensive AI-driven testing workflow"""
    # Create test session context
    session = mockloop.create_test_session_context(
        session_name="ai_testing_session",
        configuration={"ai_enhanced": True}
    )
    # Generate and deploy AI scenarios
    scenarios = mockloop.generate_scenario_config(
        api_spec=server_url,
        scenario_types=["load", "error", "security"],
        ai_optimization=True
    )
    results = []
    for scenario in scenarios:
        # Deploy scenario
        mockloop.deploy_scenario(
            server_url=server_url,
            scenario_config=scenario
        )
        # Execute tests with AI monitoring
        test_result = mockloop.execute_test_plan(
            server_url=server_url,
            test_plan=scenario["test_plan"],
            ai_monitoring=True
        )
        results.append(test_result)
    # AI-powered analysis
    analysis = mockloop.analyze_test_results(
        test_results=results,
        include_recommendations=True,
        ai_insights=True
    )
    return f"AI testing workflow completed: {analysis['summary']}"

# Create LangChain tools
ai_testing_tools = [
    Tool(
        name="AIGenerateMockAPI",
        func=ai_generate_mock_api,
        description="Generate AI-enhanced mock API with intelligent testing scenarios"
    ),
    Tool(
        name="AIExecuteTestingWorkflow",
        func=ai_execute_testing_workflow,
        description="Execute comprehensive AI-driven testing workflow with intelligent analysis"
    )
]

# Create AI testing agent
llm = ChatOpenAI(temperature=0)
ai_testing_prompt = PromptTemplate.from_template("""
You are an AI-native testing assistant powered by MockLoop MCP.
You have access to revolutionary AI-driven testing capabilities including:
- AI-powered scenario generation
- Intelligent test execution
- Advanced performance analysis
- Security vulnerability assessment
- Stateful workflow management

Tools available: {tools}
Tool names: {tool_names}

Question: {input}
{agent_scratchpad}
""")

agent = create_react_agent(llm, ai_testing_tools, ai_testing_prompt)
agent_executor = AgentExecutor(agent=agent, tools=ai_testing_tools, verbose=True)

# Execute AI-native testing
response = agent_executor.invoke({
    "input": "Generate a comprehensive AI-driven testing environment for a REST API and execute full testing workflow"
})
Dual-Port Architecture
MockLoop MCP introduces a revolutionary dual-port architecture that eliminates common conflicts and enhances security:
Architecture Benefits
- Enhanced Security: Complete separation of mocked API and admin functionality
- Zero Conflicts: Eliminates /admin path conflicts in OpenAPI specifications
- Clean Analytics: Admin calls don't appear in mocked API metrics
- Independent Scaling: Scale mocked API and admin services separately
- Port-Based Access Control: Enhanced security through network isolation
Port Configuration
# Generate mock with dual-port architecture
result = mockloop.generate_mock_api(
    spec_url_or_path="https://api.example.com/openapi.json",
    business_port=8000,  # Mocked API port
    admin_port=8001,     # Admin UI port
    admin_ui_enabled=True
)
Access Points
- Mocked API: http://localhost:8000 - Your API endpoints
- Admin UI: http://localhost:8001 - Management interface
- API Documentation: http://localhost:8000/docs - Interactive Swagger UI
- Health Check: http://localhost:8000/health - Server status
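A quick way to confirm both ports are serving after generation; this sketch assumes the requests package and the default ports above (the health payload shape is not specified here):
import requests

# Health endpoint on the mocked API port
print(requests.get("http://localhost:8000/health", timeout=5).json())

# Admin UI answers on the separate admin port
print(requests.get("http://localhost:8001", timeout=5).status_code)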
Enterprise Features
Compliance & Audit Logging
MockLoop MCP provides enterprise-grade compliance features:
- Complete Audit Trails: Every request/response logged with metadata
- Regulatory Compliance: GDPR, SOX, HIPAA compliance support
- Performance Metrics: P95/P99 response times, error rates
- Security Monitoring: Threat detection and analysis
- Session Tracking: Cross-request correlation and analysis
Advanced Analytics
- AI-Powered Insights: Intelligent analysis and recommendations
- Traffic Pattern Detection: Automated anomaly detection
- Performance Optimization: AI-driven performance recommendations
- Error Analysis: Intelligent error categorization and resolution
- Trend Analysis: Historical performance and usage trends
Stateful Testing Workflows
MockLoop MCP supports complex, stateful testing workflows through advanced context management:
Context Types
- Test Session Context: Maintain state across test executions
- Workflow Context: Complex multi-step testing orchestration
- Agent Context: AI agent state management and coordination
- Global Context: Cross-session data sharing and persistence
Example: Stateful E-commerce Testing
from datetime import datetime

# Create test session context
session = mockloop.create_test_session_context(
    session_name="ecommerce_integration_test",
    configuration={
        "test_type": "integration",
        "environment": "staging",
        "ai_enhanced": True
    }
)

# Create workflow context for multi-step testing
workflow = mockloop.create_workflow_context(
    workflow_name="user_journey_test",
    parent_context=session["context_id"],
    steps=[
        "user_registration",
        "product_browsing",
        "cart_management",
        "checkout_process",
        "order_fulfillment"
    ]
)

# Execute stateful test workflow
for step in workflow["steps"]:
    # Update context with step data
    mockloop.update_context_data(
        context_id=workflow["context_id"],
        data={"current_step": step, "timestamp": datetime.now()}
    )
    # Execute step-specific tests
    test_result = mockloop.execute_test_plan(
        server_url="http://localhost:8000",
        test_plan=f"{step}_test_plan",
        context_id=workflow["context_id"]
    )
    # Create snapshot for rollback capability
    snapshot = mockloop.create_context_snapshot(
        context_id=workflow["context_id"],
        snapshot_name=f"{step}_completion"
    )

# Analyze complete workflow results
final_analysis = mockloop.analyze_test_results(
    test_results=workflow["results"],
    context_id=workflow["context_id"],
    include_recommendations=True
)
Running Generated Mock Servers
Using Docker Compose (Recommended)
# Navigate to generated mock directory
cd generated_mocks/your_api_mock
# Start with dual-port architecture
docker-compose up --build
# Access points:
# Mocked API: http://localhost:8000
# Admin UI: http://localhost:8001
Using Uvicorn Directly
# Install dependencies
pip install -r requirements_mock.txt
# Start the mock server
uvicorn main:app --reload --port 8000
Enhanced Features Access
- Admin UI: http://localhost:8001 - Enhanced management interface
- API Documentation: http://localhost:8000/docs - Interactive Swagger UI
- Health Check: http://localhost:8000/health - Server status and metrics
- Log Analytics: http://localhost:8001/api/logs/search - Advanced log querying
- Performance Metrics: http://localhost:8001/api/logs/analyze - AI-powered insights
- Scenario Management: http://localhost:8001/api/mock-data/scenarios - Dynamic testing
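The admin endpoints above can also be queried directly over HTTP. A hedged sketch using requests (the accepted query parameters are assumptions; consult the Admin UI for the actual search API):
import requests

# Search recent request logs through the admin API (parameters assumed)
resp = requests.get(
    "http://localhost:8001/api/logs/search",
    params={"limit": 20, "method": "GET"},
    timeout=5
)
print(resp.json())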
Performance & Scalability
MockLoop MCP is designed for enterprise-scale performance:
Performance Metrics
- Response Times: P50, P95, P99 percentile tracking
- Throughput: Requests per second monitoring
- Error Rates: Comprehensive error analysis
- Resource Usage: Memory, CPU, and network monitoring
- Concurrency: Multi-user load testing support
Scalability Features
- Horizontal Scaling: Multi-instance deployment support
- Load Balancing: Built-in load balancing capabilities
- Caching: Intelligent response caching
- Database Optimization: Efficient SQLite and PostgreSQL support
- Container Orchestration: Kubernetes and Docker Swarm ready
Security Features
Built-in Security
- Authentication Middleware: Configurable auth mechanisms
- Rate Limiting: Prevent abuse and DoS attacks
- Input Validation: Comprehensive request validation
- Security Headers: CORS, CSP, and security headers
- Audit Logging: Complete security event logging
Security Testing
- Vulnerability Assessment: AI-powered security testing
- Penetration Testing: Automated security scenario generation
- Compliance Checking: Security standard compliance verification
- Threat Modeling: AI-driven threat analysis
- Security Reporting: Comprehensive security analytics
Future Development
Upcoming Features
Enhanced AI Capabilities
- Advanced ML Models: Custom model training for API testing
- Predictive Analytics: AI-powered failure prediction
- Intelligent Test Generation: Self-improving test scenarios
- Natural Language Testing: Plain English test descriptions
Extended Protocol Support
- GraphQL Support: Native GraphQL API testing
- gRPC Integration: Protocol buffer testing support
- WebSocket Testing: Real-time communication testing
- Event-Driven Testing: Async and event-based API testing
Enterprise Integration
- CI/CD Integration: Native pipeline integration
- Monitoring Platforms: Datadog, New Relic, Prometheus integration
- Identity Providers: SSO and enterprise auth integration
- Compliance Frameworks: Extended regulatory compliance support
Contributing
We welcome contributions to MockLoop MCP! Please see our Contributing Guidelines for details.
Development Setup
# Fork and clone the repository
git clone https://github.com/your-username/mockloop-mcp.git
cd mockloop-mcp
# Create development environment
python3 -m venv .venv
source .venv/bin/activate
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/
# Run quality checks
ruff check src/
bandit -r src/
Community
- GitHub Repository: mockloop/mockloop-mcp
- Issues & Bug Reports: GitHub Issues
- Feature Requests: GitHub Issues
- Documentation: docs.mockloop.com
License
MockLoop MCP is licensed under the MIT License.
Get Started Today!
Ready to revolutionize your API testing with the world's first AI-native testing platform?
pip install mockloop-mcp
Join the AI-native testing revolution and experience the future of API testing with MockLoop MCP!
Get Started Now →