Context Aware Ai Chat Bot MCP

MCP server by auronixtechnologies

Created 2/5/2026

Context-Aware Multi-Agent Academic Assistant using MCP

A professional, production-ready AI-powered academic assistance system using the Model Context Protocol (MCP) with multiple specialized agents. Built with FastAPI backend and React frontend, integrated with Google Gemini LLM.

🎯 Overview

This application provides intelligent academic assistance through multiple specialized AI agents that understand context and deliver professional-grade responses. Perfect for college-level students and faculty.

Key Features

  • Multiple AI Agents: 5 specialized agents for different academic needs

    • Research Assistant
    • Academic Advisor
    • Query Resolver
    • Document Analyzer
    • Assessment Helper
  • Gemini LLM Integration: Uses Google's Gemini AI models for intelligent responses

  • Real-time Chat: Interactive conversation interface with context awareness

  • Query Management: History tracking and rating system

  • Professional Architecture: Enterprise-level code structure

  • Data Seeding: Pre-populated sample data for testing

  • Full-Stack Setup: Integrated backend-frontend solution

🚀 Quick Start

Prerequisites

  • Python 3.9+
  • Node.js 16+
  • Git
  • PostgreSQL (optional, can use SQLite for testing)
  • Google Gemini API Key

Backend Setup

  1. Navigate to backend directory

    cd backend
    
  2. Create and activate virtual environment

    python -m venv venv
    
    # Windows
    venv\Scripts\activate
    
    # macOS/Linux
    source venv/bin/activate
    
  3. Install dependencies

    pip install -r requirements.txt
    
  4. Setup environment variables

    cp .env.example .env
    

    Edit .env and add your Gemini API key:

    GEMINI_API_KEY=your_actual_gemini_api_key_here
    
  5. Run data seeding (optional)

    python -m seeds.seed_data
    
  6. Start the backend server

    python run.py
    

    Server will run at: http://localhost:8000

Frontend Setup

  1. Navigate to frontend directory

    cd ../frontend
    
  2. Install dependencies

    npm install
    
  3. Setup environment variables

    cp .env.example .env
    
  4. Start development server

    npm run dev
    

    Application will run at: http://localhost:5173

📚 Architecture

Backend Structure

backend/
├── app/
│   ├── agents/          # AI Agent implementations
│   ├── config/          # Configuration & settings
│   ├── models/          # Database & API schemas
│   ├── routes/          # API endpoints
│   ├── services/        # Business logic
│   ├── utils/           # Utilities (LLM, MCP, DB)
│   └── main.py         # FastAPI app
├── seeds/              # Data seeding
├── requirements.txt    # Python dependencies
├── .env.example       # Environment template
└── run.py            # Entry point

Frontend Structure

frontend/
├── src/
│   ├── pages/         # Page components
│   ├── services/      # API client
│   ├── App.jsx       # Main app
│   └── main.jsx      # Entry point
├── package.json      # Dependencies
└── vite.config.js   # Vite config

🤖 Available Agents

1. Research Assistant

  • Analyzes research papers
  • Provides literature reviews
  • Verifies sources
  • Suggests methodologies

2. Academic Advisor

  • Recommends courses
  • Plans academic paths
  • Assesses skills
  • Provides career guidance

3. Query Resolver

  • Answers academic questions
  • Explains concepts
  • Generates examples
  • Provides quick references

4. Document Analyzer

  • Summarizes documents
  • Extracts keywords
  • Classifies content
  • Detects plagiarism indicators

5. Assessment Helper

  • Generates study guides
  • Creates practice questions
  • Reviews assessments
  • Analyzes performance
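
Internally, the `agent_type` strings above select which agent handles a request. A minimal sketch of how such a dispatch registry could look (the key names mirror the API's `agent_type` values; the handler bodies are placeholders, not the project's real Gemini-backed implementations):

```python
from typing import Callable

# Registry mapping agent_type strings (as used by /agents/execute-task)
# to handler functions. The real handlers build prompts and call Gemini.
AGENT_REGISTRY: dict[str, Callable[[str, str], str]] = {}

def register_agent(agent_type: str):
    """Decorator that registers a handler under an agent_type key."""
    def decorator(fn: Callable[[str, str], str]) -> Callable[[str, str], str]:
        AGENT_REGISTRY[agent_type] = fn
        return fn
    return decorator

@register_agent("query_resolver")
def query_resolver(task: str, context: str) -> str:
    return f"[query_resolver] {task} (context: {context})"

@register_agent("research_assistant")
def research_assistant(task: str, context: str) -> str:
    return f"[research_assistant] {task} (context: {context})"

def execute_task(agent_type: str, task: str, context: str = "") -> str:
    """Dispatch a task to the named agent; fail loudly for unknown types."""
    if agent_type not in AGENT_REGISTRY:
        raise ValueError(f"Unknown agent type: {agent_type!r}")
    return AGENT_REGISTRY[agent_type](task, context)
```

A lookup-plus-decorator registry like this keeps adding a sixth agent to a one-function change.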

🔌 API Endpoints

Health Check

GET /api/v1/health
GET /api/v1/health/db
GET /api/v1/health/llm

Agents

GET    /api/v1/agents                    # List all agents
GET    /api/v1/agents/{agent_type}       # Get agent info
POST   /api/v1/agents/execute-task       # Execute agent task
GET    /api/v1/agents/{agent_type}/capabilities

Queries

POST   /api/v1/queries/                  # Create query
GET    /api/v1/queries/{query_id}        # Get specific query
GET    /api/v1/queries/user/{user_id}    # Get user's queries
POST   /api/v1/queries/{query_id}/rate   # Rate query

Conversation

POST   /api/v1/conversation/session/new                       # Create session
POST   /api/v1/conversation/message                           # Save message
GET    /api/v1/conversation/session/{session_id}/history     # Get history

🌐 API Documentation

Interactive API documentation is available at:

  • Swagger UI: http://localhost:8000/api/docs
  • ReDoc: http://localhost:8000/api/redoc

📊 Database Models

  • User: User accounts with roles
  • Student: Student information and tracking
  • Faculty: Instructor details
  • Subject: Academic subjects/courses
  • Course: Course offerings
  • AcademicQuery: Query history and responses
  • Document: Uploaded documents
  • Assessment: Student grades and feedback
  • ConversationHistory: Chat history
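
As an illustration, the `AcademicQuery` model might be declared like this in SQLAlchemy's declarative style (column names here are guesses inferred from the endpoints above, not the project's actual schema):

```python
from sqlalchemy import Column, DateTime, Integer, String, Text, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class AcademicQuery(Base):
    """Illustrative shape of the query-history table; fields are guesses."""
    __tablename__ = "academic_queries"

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)
    agent_type = Column(String(50), nullable=False)
    query_text = Column(Text, nullable=False)
    response_text = Column(Text)
    rating = Column(Integer)  # set later via POST /queries/{query_id}/rate
    created_at = Column(DateTime, server_default=func.now())

# The same models work against the SQLite URL from .env or PostgreSQL:
engine = create_engine("sqlite://")  # in-memory DB for a quick smoke test
Base.metadata.create_all(engine)
```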

🔐 Environment Variables

Backend (.env)

# Server
BACKEND_HOST=localhost
BACKEND_PORT=8000
DEBUG=True

# Frontend
FRONTEND_URL=http://localhost:5173

# Database (SQLite for testing, PostgreSQL for production)
DATABASE_URL=sqlite:///./test.db
# For PostgreSQL: postgresql://user:password@localhost:5432/mcp_academic_db

# LLM
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-pro

# Security
SECRET_KEY=your-secret-key-change-in-production
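
These variables can be read with a small stdlib helper; the sketch below just mirrors the key names and defaults from the template above (the project itself may load them differently, e.g. via Pydantic settings):

```python
import os

def load_settings() -> dict:
    """Read backend settings from the environment, falling back to the
    defaults shown in the .env template above."""
    return {
        "host": os.environ.get("BACKEND_HOST", "localhost"),
        "port": int(os.environ.get("BACKEND_PORT", "8000")),
        "debug": os.environ.get("DEBUG", "True").lower() == "true",
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///./test.db"),
        "gemini_api_key": os.environ.get("GEMINI_API_KEY", ""),
        "gemini_model": os.environ.get("GEMINI_MODEL", "gemini-pro"),
    }
```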

Frontend (.env)

VITE_API_URL=http://localhost:8000/api/v1
VITE_APP_NAME=MCP Academic Assistant

🧪 Testing

Data Seeding

# Generate test data (run from the backend directory)
python -m seeds.seed_data

This creates:

  • 5 student users
  • 3 faculty users
  • 6 subjects
  • 24 assessments
  • 6 sample queries
  • 5 documents
  • Conversation history

API Testing Examples

List available agents

curl http://localhost:8000/api/v1/agents

Create a query

curl -X POST http://localhost:8000/api/v1/queries/ \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": 1,
    "query_text": "Explain machine learning algorithms",
    "agent_type": "query_resolver",
    "context": "Computer Science 3rd semester"
  }'

Execute agent task

curl -X POST http://localhost:8000/api/v1/agents/execute-task \
  -H "Content-Type: application/json" \
  -d '{
    "agent_type": "research_assistant",
    "task": "Analyze this research paper",
    "context": "Machine Learning"
  }'
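
The same calls can be made from Python's standard library. A sketch (the payload fields mirror the curl examples above; it assumes the backend is running on its default port):

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/api/v1"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request against the backend API."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def create_query(user_id: int, query_text: str,
                 agent_type: str, context: str = "") -> dict:
    """POST /queries/ and return the decoded JSON response."""
    req = build_request("/queries/", {
        "user_id": user_id,
        "query_text": query_text,
        "agent_type": agent_type,
        "context": context,
    })
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

`build_request("/agents/execute-task", {...})` covers the agent-task example the same way.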

📦 Dependencies

Backend

  • FastAPI - Modern web framework
  • SQLAlchemy - ORM
  • Pydantic - Data validation
  • google-generativeai - Gemini LLM
  • pytest - Testing

Frontend

  • React 18 - UI library
  • Vite - Build tool
  • React Router - Navigation
  • Axios - HTTP client
  • Tailwind CSS - Styling

🛠️ Development

Backend Development

cd backend

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Run with auto-reload
uvicorn app.main:app --reload

Frontend Development

cd frontend

# Development with HMR
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

📝 Project Structure Highlights

Professional Patterns:

  • Separation of concerns (models, services, routes)
  • Dependency injection
  • Error handling
  • Logging
  • Configuration management
  • Documentation
  • API versioning

Best Practices:

  • Type hints throughout
  • Comprehensive docstrings
  • CORS configuration
  • Environment-based settings
  • Database transactions
  • Async/await patterns

🚢 Deployment

Backend (FastAPI + Uvicorn)

# Production build
pip install gunicorn
gunicorn -w 4 -k uvicorn.workers.UvicornWorker app.main:app

Frontend (React + Vite)

# Production build
npm run build

# Deploy dist/ folder to your hosting service

🔄 CORS & Backend-Frontend Integration

The backend includes CORS configuration so the frontend can call the API from a different origin:

Allowed Origins:
- http://localhost:5173 (Frontend dev)
- http://localhost:3000
- http://127.0.0.1:5173
- https://yourdomain.com (Production)

📈 Performance Considerations

  • Connection pooling for database
  • Request caching with Redis (optional)
  • Async/await for non-blocking operations
  • Optimized LLM prompts
  • Message pagination

🐛 Troubleshooting

LLM not responding

  • Check GEMINI_API_KEY in .env
  • Verify API key has Generative AI permissions
  • Check internet connection

Database connection issues

  • For SQLite: ensure write permissions in directory
  • For PostgreSQL: verify connection string in .env
  • Check if database service is running

CORS errors

  • Verify the frontend URL is listed in the backend CORS settings
  • Check if backend is running on correct port
  • Review browser console for specific error

Port already in use

# Kill process using port 8000 (backend)
# Windows
netstat -ano | findstr :8000
taskkill /PID <PID> /F

# macOS/Linux
lsof -i :8000
kill -9 <PID>

📞 Support

For issues or questions:

  1. Check existing documentation
  2. Review error messages in console/logs
  3. Verify .env configuration
  4. Check API endpoint documentation at /api/docs

📄 License

This project is created for educational purposes as a college-level project.

✨ Features Roadmap

  • [ ] Authentication & JWT
  • [ ] File upload integration
  • [ ] Advanced search
  • [ ] Collaborative features
  • [ ] Analytics dashboard
  • [ ] Mobile app
  • [ ] Docker containerization
  • [ ] CI/CD pipeline

🎓 Educational Value

This project demonstrates:

  • Full-stack development
  • Microservices architecture
  • AI/LLM integration
  • Database design
  • API development
  • Frontend-backend integration
  • Production-ready code structure
  • Professional documentation

Created for: College Projects - Batch 2
Technology Stack: FastAPI + React + Gemini LLM
Version: 1.0.0

Quick Setup
Installation guide for this server

Install Package (if required)

uvx context-aware-ai-chat-bot-mcp

Cursor configuration (mcp.json)

{
  "mcpServers": {
    "auronixtechnologies-context-aware-ai-chat-bot-mcp": {
      "command": "uvx",
      "args": ["context-aware-ai-chat-bot-mcp"]
    }
  }
}