MCP server by xiaoxiaoxiaotao
paper-search-mcp
paper-search-mcp is an MCP server for agents that need to search papers, read arXiv PDFs, align records across sources, and produce structured literature-analysis inputs.
The server currently integrates two paper sources:
- Semantic Scholar for citation-aware discovery and metadata lookup
- arXiv for recent papers, metadata lookup, and PDF text extraction
It also includes higher-level utilities for cross-source alignment, BibTeX export, and compact literature digests.
Chinese documentation is available in README-zh.md.
MCP Tools
search_semantic_scholar
Search Semantic Scholar and return normalized paper metadata sorted by citation count.
Parameters:
- query: Search query
- max_results: Maximum number of results, default 10
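These parameters map naturally onto the public Semantic Scholar Graph API. A minimal sketch of how the request might be built (the endpoint is real, but the helper name and field list are illustrative assumptions, not the server's actual code):

```python
from urllib.parse import urlencode

# Real public endpoint; the field selection below is an assumption.
S2_SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_s2_search_request(query: str, max_results: int = 10) -> str:
    """Build a search URL mirroring the tool's parameters and defaults."""
    params = {
        "query": query,
        "limit": max_results,
        "fields": "title,year,citationCount,authors,externalIds",
    }
    return f"{S2_SEARCH_URL}?{urlencode(params)}"
```

Sorting by citation count would then happen client-side on the returned records.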
get_semantic_scholar_paper
Fetch detailed metadata for a Semantic Scholar paper by paper_id.
search_arxiv
Search arXiv and return normalized metadata.
Parameters:
- query: Search query
- max_results: Maximum number of results, default 10
- sort_by: relevance, lastUpdatedDate, or submittedDate
- sort_order: ascending or descending
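The sort options correspond directly to the arXiv export API's query parameters. A hedged sketch of the request construction (the endpoint and parameter names are arXiv's documented ones; the helper name and `all:` field prefix are illustrative choices):

```python
from urllib.parse import urlencode

ARXIV_API_URL = "http://export.arxiv.org/api/query"

def build_arxiv_search_request(query: str, max_results: int = 10,
                               sort_by: str = "relevance",
                               sort_order: str = "descending") -> str:
    """Build an arXiv export API URL mirroring search_arxiv's parameters."""
    params = {
        "search_query": f"all:{query}",  # search across all metadata fields
        "max_results": max_results,
        "sortBy": sort_by,        # relevance, lastUpdatedDate, submittedDate
        "sortOrder": sort_order,  # ascending or descending
    }
    return f"{ARXIV_API_URL}?{urlencode(params)}"
```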
get_arxiv_paper
Fetch metadata for one arXiv paper using an arXiv ID, abstract URL, or PDF URL.
read_arxiv_paper
Download an arXiv PDF, cache it locally, extract text from the first pages, and return a structured reading pack.
Parameters:
- arxiv_id_or_url: arXiv ID, abstract URL, or PDF URL
- max_pages: Maximum number of pages to extract, default 8
- max_characters: Maximum number of extracted characters, default 20000
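A hedged sketch of how the two limits might interact once per-page text has been extracted from the cached PDF (the helper name and page-joining scheme are assumptions):

```python
def clip_extracted_text(pages: list[str], max_pages: int = 8,
                        max_characters: int = 20000) -> str:
    """Keep at most max_pages pages of text, then cap total length."""
    text = "\n\n".join(pages[:max_pages])
    return text[:max_characters]
```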
export_bibtex
Export a paper as BibTeX.
Parameters:
- source: semantic_scholar or arxiv
- identifier: Semantic Scholar paper_id or arXiv ID/URL
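A minimal sketch of turning normalized metadata into a BibTeX entry (the field choices, citation-key scheme, and helper name are illustrative assumptions, not the server's actual output format):

```python
def to_bibtex(key: str, title: str, authors: list[str], year: int,
              url: str = "") -> str:
    """Render paper metadata as a BibTeX @article entry."""
    lines = [
        f"@article{{{key},",
        f"  title  = {{{title}}},",
        f"  author = {{{' and '.join(authors)}}},",
        f"  year   = {{{year}}},",
    ]
    if url:
        lines.append(f"  url    = {{{url}}},")
    lines.append("}")
    return "\n".join(lines)
```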
align_paper_by_title
Search Semantic Scholar and arXiv by title and return exact normalized title matches across both sources.
Parameters:
- title: Paper title used for exact title alignment
- semantic_scholar_max_results: Search limit for Semantic Scholar, default 10
- arxiv_max_results: Search limit for arXiv, default 10
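"Exact normalized title match" plausibly means comparing titles after case-folding and collapsing punctuation and whitespace, so cosmetic differences between the two sources don't break alignment. A sketch under that assumption (the exact normalization rules are not specified by the source):

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    title = title.lower()
    title = re.sub(r"[^a-z0-9]+", " ", title)
    return " ".join(title.split())

def titles_match(a: str, b: str) -> bool:
    return normalize_title(a) == normalize_title(b)
```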
build_literature_digest
Search across Semantic Scholar and arXiv, deduplicate overlapping papers, and return a compact literature-analysis bundle.
This is useful for downstream agent tasks such as:
- finding classic work versus recent work
- grouping methods into families
- comparing datasets, metrics, and limitations
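The deduplication step above could work by keying each hit on a shared identifier when one exists (for example, an arXiv ID reported by both sources), falling back to the title. A hedged sketch; the record shape and key names are illustrative assumptions:

```python
def dedup_papers(papers: list[dict]) -> list[dict]:
    """Drop later hits that share an arXiv ID (or, failing that, a title)."""
    seen: set[str] = set()
    unique: list[dict] = []
    for p in papers:
        key = p.get("arxiv_id") or p.get("title", "").lower().strip()
        if key and key in seen:
            continue
        seen.add(key)
        unique.append(p)
    return unique
```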
Installation
This project uses uv for environment and dependency management.
uv sync
This creates .venv in the project directory and installs the project dependencies.
To include development dependencies as well:
uv sync --group dev
If you have a Semantic Scholar API key:
export S2_API_KEY=your_key_here
Optional environment variables:
- S2_API_KEY: Semantic Scholar API key
- PAPER_MCP_HTTP_TIMEOUT: HTTP timeout in seconds, default 30
- PAPER_MCP_USER_AGENT: Custom user agent string
- PAPER_MCP_CACHE_DIR: Override the on-disk cache directory for downloaded PDFs
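A sketch of how such an optional variable with a documented default might be read (the helper name is illustrative; only the variable name and default come from the list above):

```python
import os

def http_timeout() -> float:
    """Read PAPER_MCP_HTTP_TIMEOUT, falling back to the documented default of 30s."""
    return float(os.environ.get("PAPER_MCP_HTTP_TIMEOUT", "30"))
```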
Install As A Python Package
For local development or direct Python-based deployment:
pip install .
To install directly from a Git repository:
pip install git+https://github.com/xiaoxiaoxiaotao/paper-search-mcp.git
Running The Server
Start the server directly:
uv run paper-search-mcp
Example MCP client configuration:
{
  "servers": {
    "paper-search": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "run",
        "paper-search-mcp",
        "--no-sync"
      ],
      "cwd": "/home/tao/code/projects/paper-search-mcp",
      "env": {
        "S2_API_KEY": "${S2_API_KEY}"
      }
    }
  },
  "inputs": []
}
Notes
- Semantic Scholar is better for established, citation-rich papers.
- arXiv is better for recent work and full-text PDF reading.
- build_literature_digest reduces prompt assembly work for downstream agents.
- read_arxiv_paper returns text and analysis prompts instead of hard-coded conclusions.
- PDF downloads are cached on disk to avoid repeated arXiv fetches.
- An npm package is possible as a thin wrapper, but the primary runtime is still Python or Docker.
Possible Extensions
- DOI / PMID / ACL Anthology / OpenAlex support
- citation graph and related-paper retrieval
- richer section-aware PDF chunking
- persistent metadata caching beyond PDFs