
A secure MCP (Model Context Protocol) server for generating images using OpenAI APIs. Designed to integrate with Claude Code, Cursor, VS Code, GitHub Copilot, and other AI tools. Automatically saves generated images to the user's project workspace with built-in rate limiting, API key validation, and production-ready deployment on Render.

Created 3/21/2026
Updated about 8 hours ago

MCP OpenAI Image Generator

Generate, edit, and create variations of images using OpenAI GPT Image models via the Model Context Protocol (MCP).

npm version · Python 3.8+ · License: MIT · MCP Compatible


Demo

(Demo recording available in the repository.)

Overview

mcp-openai-image-generator is a production-ready MCP server that exposes OpenAI's image APIs as tools your AI assistant can call directly. Ask Claude, Cursor, Copilot, or any MCP-compatible tool to generate or edit images — and the file is saved straight to your project folder.

Compatible with:

| Tool | Support |
|------|---------|
| Claude Code | ✅ stdio / NPX |
| Claude Desktop | ✅ stdio / NPX |
| Cursor | ✅ stdio / NPX |
| VS Code + GitHub Copilot | ✅ stdio / NPX |
| Windsurf | ✅ stdio / NPX |
| Cline | ✅ stdio / NPX |
| OpenAI / Codex | ✅ HTTP endpoint |
| Google AI Studio / Gemini | ✅ HTTP endpoint |
| Any MCP client | ✅ Remote HTTP |


Features

  • 3 powerful tools — generate_image, edit_image, create_variation
  • Latest models — GPT-Image-1.5, GPT-Image-1, GPT-Image-1-Mini
  • Multiple formats — PNG, JPEG, WebP with compression control
  • Transparent backgrounds — supported with PNG/WebP output
  • Portrait & landscape — 1024×1024, 1024×1536, 1536×1024
  • Batch generation — up to 10 images per call
  • Zero-config NPX — one line to add to any MCP client
  • Secure — API keys never logged, redacted from all output
  • Rate limiting — built-in sliding-window protection (HTTP mode)
  • Self-hosted or cloud — run locally or deploy to Render/Railway/Fly.io

Prerequisites

  • Node.js (to run the NPX launcher)
  • Python 3.8+ (the server itself is Python)
  • An OpenAI API key


Quick Start (NPX — Recommended)

The fastest way to use this server with any MCP client. No cloning or Python setup needed — just add your API key.

Windows users: VS Code, Claude Desktop, Cursor, and Windsurf spawn processes without a shell on Windows, so npx can't find Python in the PATH. Use cmd /c as the command instead (see the Windows configs in each IDE section below).

{
  "command": "npx",
  "args": ["-y", "mcp-openai-image-generator"],
  "env": {
    "OPENAI_API_KEY": "sk-..."
  }
}

On Windows, replace "command": "npx" with "command": "cmd" and add "/c" as the first arg:

{
  "command": "cmd",
  "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
  "env": {
    "OPENAI_API_KEY": "sk-..."
  }
}

NPX will download the package, install Python dependencies once, and launch the server. Images are saved to generated_images/ in your current working directory.


IDE & Tool Configuration

Claude Code

Option A — Project config (.mcp.json in your project root, shared with your team):

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "${OPENAI_API_KEY}"
      }
    }
  }
}

Option B — CLI install (user-level, available in all projects):

claude mcp add -s user openai-image-generator \
  -e OPENAI_API_KEY=sk-... \
  -- npx -y mcp-openai-image-generator

Option C — Remote HTTP (no Python needed locally):

claude mcp add --transport http -s user openai-image-generator \
  https://mcp-openai-image-generator.onrender.com/mcp

Cursor

Config file: ~/.cursor/mcp.json (macOS/Linux) or %USERPROFILE%\.cursor\mcp.json (Windows).

On macOS and Linux:

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

On Windows (use cmd /c so the shell can locate Python):

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Then open Cursor Settings → MCP and verify the server appears as connected.


VS Code + GitHub Copilot

Create .vscode/mcp.json in your project root, or edit the user-level file: %APPDATA%\Code\User\mcp.json (Windows) · ~/.config/Code/User/mcp.json (Linux) · ~/Library/Application Support/Code/User/mcp.json (macOS).

Why cmd /c on Windows? VS Code spawns MCP processes directly without a shell, so npx can't locate Python in the PATH. cmd /c routes through the Windows shell, giving access to the full PATH including Python.

On macOS and Linux:

{
  "servers": {
    "openai-image-generator": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

On Windows:

{
  "servers": {
    "openai-image-generator": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Claude Desktop

Config file location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

On macOS and Linux:

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

On Windows:

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Fully quit and restart Claude Desktop after saving.


Windsurf

Config file: ~/.codeium/windsurf/mcp_config.json (macOS/Linux) or %USERPROFILE%\.codeium\windsurf\mcp_config.json (Windows).

On macOS and Linux:

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

On Windows:

{
  "mcpServers": {
    "openai-image-generator": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Restart Windsurf or reload the MCP config via the Command Palette.


Cline

In VS Code with the Cline extension installed:

  1. Open VS Code Settings → search for Cline MCP
  2. Click Edit in settings.json
  3. Add under cline.mcpServers (use cmd /c on Windows):

On macOS and Linux:

{
  "cline.mcpServers": {
    "openai-image-generator": {
      "command": "npx",
      "args": ["-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

On Windows:

{
  "cline.mcpServers": {
    "openai-image-generator": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "mcp-openai-image-generator"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

OpenAI / Codex

OpenAI's tools (Responses API, Assistants) don't natively support MCP stdio servers yet. Use the HTTP endpoint instead:

  1. Deploy this server (see Deployment)
  2. Call the HTTP endpoint from your OpenAI integration:
import httpx

response = httpx.post(
    "https://your-server.onrender.com/generate-image",
    json={
        "prompt": "A futuristic city skyline at dusk",
        "model": "gpt-image-1",
        "size": "1536x1024",
        "quality": "high",
        "openai_api_key": "sk-..."
    }
)
file_path = response.json()["file_paths"][0]

Google AI Studio

Google AI Studio and Gemini API do not natively support MCP yet. Use the HTTP endpoint with any HTTP client or via a custom Gemini function-calling tool definition:

import httpx
import google.generativeai as genai

def generate_image(prompt: str) -> str:
    """Generate an image from a text prompt via the deployed MCP server."""
    r = httpx.post(
        "https://your-server.onrender.com/generate-image",
        json={"prompt": prompt, "openai_api_key": "sk-..."}
    )
    return r.json()["file_paths"][0]

# Register the function as a Gemini tool so the model can call it during chat
model = genai.GenerativeModel("gemini-1.5-flash", tools=[generate_image])

Local Installation (Alternative to NPX)

If you prefer not to use NPX:

git clone https://github.com/yourusername/mcp-openai-image-generator.git
cd mcp-openai-image-generator
pip install -r requirements.txt
cp .env.example .env   # then fill in your OPENAI_API_KEY

Then configure your MCP client with:

{
  "command": "python",
  "args": ["/absolute/path/to/mcp-openai-image-generator/server.py"],
  "env": {
    "OPENAI_API_KEY": "sk-..."
  }
}

Tools Reference

generate_image

Generate one or more images from a text prompt.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | required | Detailed text description of the image |
| model | string | gpt-image-1.5 | Model: gpt-image-1.5, gpt-image-1, gpt-image-1-mini |
| size | string | auto | 1024x1024, 1024x1536, 1536x1024, auto |
| quality | string | auto | low, medium, high, auto |
| output_format | string | png | png, jpeg, webp |
| output_compression | integer | null | 0–100, jpeg/webp only |
| background | string | auto | transparent, opaque, auto |
| moderation | string | auto | auto, low |
| n | integer | 1 | Number of images (1–10) |
| openai_api_key | string | env var | Override API key for this call |

Example prompt: A cinematic shot of a lone astronaut on Mars at sunset, red dust, dramatic lighting, 4K
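For illustration, a complete argument object for generate_image might look like this (example values chosen from the table above, not defaults):

```json
{
  "prompt": "A cinematic shot of a lone astronaut on Mars at sunset",
  "model": "gpt-image-1",
  "size": "1536x1024",
  "quality": "high",
  "output_format": "webp",
  "output_compression": 80,
  "background": "opaque",
  "n": 2
}
```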


edit_image

Edit an existing image with text instructions and an optional mask.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| image_paths | array | required | Paths to source images (PNG/JPEG/WebP, ≤ 50 MB) |
| prompt | string | required | What to change (e.g. "replace the background with a forest") |
| mask_path | string | null | PNG mask — white areas are edited, black areas preserved |
| model | string | gpt-image-1.5 | Same options as generate_image |
| size | string | 1024x1024 | Output dimensions |
| quality | string | auto | Rendering quality |
| input_fidelity | string | low | low or high — how closely to preserve source details |
| output_format | string | png | Output format |
| n | integer | 1 | Number of variants |
| openai_api_key | string | env var | Override API key |


create_variation

Create re-imagined variations of an existing image.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| image_path | string | required | Path to source image |
| prompt | string | required | Guide for re-interpretation (e.g. "as a watercolor painting") |
| model | string | gpt-image-1.5 | Model to use |
| size | string | 1024x1024 | Output dimensions |
| quality | string | auto | Rendering quality |
| output_format | string | png | Output format |
| n | integer | 1 | Number of variations (1–10) |
| openai_api_key | string | env var | Override API key |


Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| OPENAI_API_KEY | Yes (or pass per-call) | Your OpenAI API key |
| MCP_TRANSPORT | No | stdio (default) or http |
| PORT | No | HTTP port when MCP_TRANSPORT=http (default: 8000) |
| PYTHONUNBUFFERED | No | Set to 1 for real-time logs (recommended in production) |
| LOG_LEVEL | No | Logging verbosity: DEBUG, INFO (default), WARNING, ERROR |
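A sample .env for HTTP mode, using the variables above (values other than the defaults shown are your own choices):

```
OPENAI_API_KEY=sk-...
MCP_TRANSPORT=http
PORT=8000
PYTHONUNBUFFERED=1
LOG_LEVEL=INFO
```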


Cost Reference

| Model | Quality | Size | Cost/image |
|-------|---------|------|------------|
| gpt-image-1.5 | high | 1024×1024 | ~$0.133 |
| gpt-image-1.5 | medium | 1024×1024 | ~$0.053 |
| gpt-image-1.5 | low | 1024×1024 | ~$0.013 |
| gpt-image-1 | high | 1024×1024 | ~$0.040 |
| gpt-image-1-mini | low | 1024×1024 | ~$0.011 |

Prices are estimates. Check OpenAI pricing for current rates.
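To budget a batch call, multiply the per-image figure by n. A minimal sketch using the estimates from the table above (1024×1024 only; these numbers are approximations, not live pricing):

```python
# Rough cost estimate for a batch generate_image call.
# Figures are the ~estimates from the cost table above; check OpenAI
# pricing for current rates.
COST_PER_IMAGE = {
    ("gpt-image-1.5", "high"): 0.133,
    ("gpt-image-1.5", "medium"): 0.053,
    ("gpt-image-1.5", "low"): 0.013,
    ("gpt-image-1", "high"): 0.040,
    ("gpt-image-1-mini", "low"): 0.011,
}

def estimate_cost(model: str, quality: str, n: int = 1) -> float:
    """Approximate USD cost for n 1024x1024 images."""
    return round(COST_PER_IMAGE[(model, quality)] * n, 3)

print(estimate_cost("gpt-image-1.5", "medium", n=4))  # → 0.212
```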


Deployment

Render (Free Tier)

Deploy to Render

A render.yaml is included. To deploy:

  1. Push this repo to GitHub
  2. Go to render.com → New → Blueprint
  3. Connect your repo and set OPENAI_API_KEY as a secret env var
  4. Deploy — your server will be live at https://your-service.onrender.com

Then use the remote HTTP config in any MCP client:

{
  "command": "npx",
  "args": ["-y", "mcp-remote", "https://your-service.onrender.com/mcp"]
}

Docker

A Dockerfile and docker-compose.yml are included in the repository root.

# Build and run
docker build -t mcp-openai-image-generator .
docker run -e OPENAI_API_KEY=sk-... -p 8000:8000 mcp-openai-image-generator

Or with Docker Compose (persists generated images, reads OPENAI_API_KEY from .env):

docker-compose up -d

See docs/deployment.md for full Docker and cloud deployment options.


Security

  • API keys are never logged — a log filter redacts all sk-... patterns before they reach any handler
  • Per-call clients — OpenAI clients are created fresh per request; keys are never stored in module state
  • No key storage — the server never writes keys to disk
  • Rate limiting — 5 requests/60 seconds per IP in HTTP mode (configurable in core/rate_limiter.py)
  • Input validation — API key format validated before use
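The key-redaction idea can be sketched with a standard logging filter. This is a hypothetical illustration, not the project's actual code; the real implementation lives in core/log_filter.py and may differ in detail:

```python
import logging
import re

# Anything shaped like an OpenAI key ("sk-" followed by key characters).
_KEY_RE = re.compile(r"sk-[A-Za-z0-9_-]{4,}")

class RedactAPIKeyFilter(logging.Filter):
    """Scrub OpenAI-style keys from log records before any handler sees them."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Format the message first, then redact, so keys passed as %-args
        # are caught too.
        record.msg = _KEY_RE.sub("sk-[REDACTED]", record.getMessage())
        record.args = None  # message is already fully formatted
        return True  # keep the record, just with the key scrubbed
```

Attaching it to the root logger (logging.getLogger().addFilter(RedactAPIKeyFilter())) would cover every handler downstream.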

Project Structure

mcp-openai-image-generator/
├── bin/
│   ├── mcp-openai-image-generator.js   # NPX entry point
│   └── check-python.js                 # Post-install Python check
├── core/
│   ├── api_client.py                   # OpenAI client factory + key validation
│   ├── file_manager.py                 # Image saving + directory management
│   ├── log_filter.py                   # API key redaction from logs
│   └── rate_limiter.py                 # Sliding-window rate limiter (HTTP mode)
├── tools/
│   ├── generate.py                     # generate_image tool
│   ├── edit.py                         # edit_image tool
│   └── variation.py                    # create_variation tool
├── docs/
│   ├── tools-reference.md              # Full tool parameter documentation
│   └── ide-integrations.md             # Detailed IDE setup guides
├── server.py                           # FastMCP server entry point
├── requirements.txt                    # Python dependencies
├── package.json                        # NPX / npm package config
├── .mcp.json                           # MCP config for this project
├── .env.example                        # Environment variable template
├── render.yaml                         # Render.com deployment config
└── README.md

Contributing

Contributions are welcome! Please open an issue or submit a pull request.

  1. Fork the repo
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Commit your changes: git commit -m 'Add my feature'
  4. Push to the branch: git push origin feature/my-feature
  5. Open a Pull Request

License

MIT © 2025 — see LICENSE for details.
