# 🇪🇺 EU AI Act Risk Assessor

MCP server that classifies AI use cases under Regulation (EU) 2024/1689 with deterministic precision. Built with Vurb.ts.
## Why This Exists
Every AI company in the EU needs to know: is my AI system legal?
The EU AI Act defines four risk tiers — from Unacceptable (banned) to Minimal (no obligations). Misclassification means fines up to €35 million or 7% of global revenue.
This MCP server gives any LLM client deterministic, auditable risk classification — zero hallucination on compliance.
## Architecture

```
User describes AI use case (natural language)
        ↓
LLM translates → structured input (Zod-enforced enums/booleans)
        ↓
MCP applies legal matrix → deterministic verdict
        ↓
Presenter formats → rules, UI, suggested actions
        ↓
LLM presents result to user
```

**LLM = Semantic Translator · MCP = Implacable Judge**

The LLM handles what it does best (understanding language). The MCP handles what code does best (applying legal logic without drift).
## Tools
| Tool | Description |
|---|---|
| risk.assess | Classify an AI use case across all four risk tiers |
| fines.calculate | Calculate maximum fine exposure under Art. 99 |
| exemptions.check | Interactive exemption auditor (guided decision tree) |
| compliance.obligations | List all applicable obligations by risk level |
| compliance.timeline | Enforcement deadlines and milestones |
| compliance.generate_report | Generate Annex IV technical documentation template |
### risk.assess

The main classification tool. The LLM maps the user's natural language description to structured fields:

```
Input: "Our AI screens job applications and ranks candidates"
  ↓ LLM translates:
    inferred_domain: "employment"
    involves_profiling: true
    makes_autonomous_decisions: true
  ↓ MCP evaluates:
    → Art. 5(1)(f) match? No (no biometric data)
    → Annex III §4 match? Yes (employment + profiling)
  ↓
Output: HIGH RISK | Art. 6 + Annex III §4 | 7 obligations
```
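The evaluation step above can be sketched as pure predicates over the structured input, so the same input always yields the same verdict. Type and rule names here are illustrative, not the server's actual code:

```typescript
// Hypothetical sketch of the deterministic matrix step: each legal rule
// is a pure predicate, so classification never depends on model sampling.
interface StructuredInput {
  inferred_domain: 'employment' | 'education' | 'law_enforcement' | 'none';
  processes_biometric_data: boolean;
  involves_profiling: boolean;
  makes_autonomous_decisions: boolean;
}

interface LegalRule {
  article: string;
  matches: (input: StructuredInput) => boolean;
}

// Art. 5(1)(f): workplace emotion recognition presumes biometric data.
const ART_5_1_F: LegalRule = {
  article: 'Art. 5(1)(f)',
  matches: (i) => i.inferred_domain === 'employment' && i.processes_biometric_data,
};

// Annex III §4: recruitment/selection systems that profile candidates.
const ANNEX_III_4: LegalRule = {
  article: 'Annex III §4',
  matches: (i) => i.inferred_domain === 'employment' && i.involves_profiling,
};

const input: StructuredInput = {
  inferred_domain: 'employment',
  processes_biometric_data: false,
  involves_profiling: true,
  makes_autonomous_decisions: true,
};

console.log(ART_5_1_F.matches(input));   // false — no biometric data
console.log(ANNEX_III_4.matches(input)); // true  — HIGH RISK
```

Because rules are plain functions of typed input, the whole matrix is unit-testable and auditable line by line.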
### exemptions.check

Transforms the LLM into a guided compliance interviewer:

```
Call 1: exemptions.check(risk_level: "high", {})
        → "Is this system purely procedural (no discretion)?"
Call 2: exemptions.check(risk_level: "high", { purely_procedural: false })
        → "Does it merely improve a prior human activity?"
Call 3: exemptions.check(risk_level: "high", { ..., improves_prior_activity: true })
        → EXEMPT under Art. 6(3)(b)
```
### fines.calculate

Exact Art. 99 formula:

```
Prohibited practice:       max(€35M,  7% of global revenue)
High-risk noncompliance:   max(€15M,  3% of global revenue)
Incorrect information:     max(€7.5M, 1% of global revenue)
```
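The formula above fits in a few lines. Thresholds are taken from the table; the SME branch assumes proportionality means taking the lower of the two amounts, and the whole sketch is illustrative rather than the server's actual implementation:

```typescript
// Sketch of the Art. 99 fine formula (illustrative, not the server's code).
// Assumption: the SME proportionality rule takes the LOWER of the two amounts.
type ViolationType =
  | 'prohibited_practice'
  | 'high_risk_noncompliance'
  | 'incorrect_information';

const FINE_RULES: Record<ViolationType, { fixedEur: number; revenueShare: number }> = {
  prohibited_practice:     { fixedEur: 35_000_000, revenueShare: 0.07 },
  high_risk_noncompliance: { fixedEur: 15_000_000, revenueShare: 0.03 },
  incorrect_information:   { fixedEur: 7_500_000,  revenueShare: 0.01 },
};

function maxFine(type: ViolationType, globalRevenueEur: number, isSme = false): number {
  const { fixedEur, revenueShare } = FINE_RULES[type];
  const revenueBased = globalRevenueEur * revenueShare;
  return isSme ? Math.min(fixedEur, revenueBased) : Math.max(fixedEur, revenueBased);
}

console.log(maxFine('prohibited_practice', 500_000_000));   // 35000000 — 7% equals the €35M floor
console.log(maxFine('prohibited_practice', 1_000_000_000)); // 70000000 — 7% exceeds the floor
```

At exactly €500M revenue the two branches of the max coincide; above that, the revenue-based amount governs.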
### compliance.generate_report
Generates a complete Annex IV technical documentation template in Markdown — all 9 legally required sections with subsections. V8-safe (returns content as string, no filesystem access).
## Legal Coverage

| Regulation Area | Articles | Implementation |
|---|---|---|
| Prohibited Practices | Art. 5(1)(a)–(h) | 8 rules with deterministic condition functions |
| High-Risk Classification | Art. 6, Annex III | 8 domains, 15+ rules |
| Limited Risk | Art. 50 | Transparency obligations |
| Fines | Art. 99(3)–(5) | Exact formulas with SME proportionality |
| Exemptions | Art. 6(3), Art. 2 | 6-node interactive decision tree |
| Technical Documentation | Art. 11, Annex IV | 9-section template generator |
| Enforcement Timeline | Arts. 111, 113 | Milestones with live status |
## Usage Examples
### 🚫 Example 1 — Detecting a Prohibited Practice

User: "We're building an AI that analyses employee webcam footage to detect if they're stressed, bored, or engaged during meetings."

The LLM translates this into structured input and calls `risk.assess`:

```json
{
  "intention_summary": "Real-time emotion recognition system analysing employee webcam footage during meetings",
  "inferred_domain": "employment",
  "processes_biometric_data": true,
  "involves_profiling": true,
  "makes_autonomous_decisions": false
}
```
Result: 🚫 UNACCEPTABLE — Art. 5(1)(f) prohibits emotion recognition in the workplace. This practice has been illegal since 2 February 2025.
The Presenter automatically suggests fines.calculate and exemptions.check as next actions.
### ⚠️ Example 2 — High-Risk Classification (Employment)

User: "Our startup uses AI to screen CVs and shortlist the top 20% of candidates for interview."

```json
{
  "intention_summary": "AI system that screens and ranks job applicants based on CV analysis",
  "inferred_domain": "employment",
  "processes_biometric_data": false,
  "involves_profiling": true,
  "makes_autonomous_decisions": true
}
```
Result: ⚠️ HIGH RISK — Annex III §4(a): AI systems used for recruitment and selection. Confidence: definitive. 7 compliance obligations apply.
The Presenter renders a Mermaid decision-flow diagram and suggests:
- `compliance.obligations` — review all required obligations
- `fines.calculate` — calculate maximum fine exposure
- `compliance.generate_report` — generate Annex IV documentation
### ⚠️ Example 3 — High-Risk Classification (Education)

User: "We want to build an AI that grades university exams automatically and decides who passes."

```json
{
  "intention_summary": "Automated exam grading system with pass/fail decision authority",
  "inferred_domain": "education",
  "processes_biometric_data": false,
  "involves_profiling": true,
  "makes_autonomous_decisions": true
}
```
Result: ⚠️ HIGH RISK — Annex III §3(a): AI systems determining access to or outcomes in educational institutions.
### ℹ️ Example 4 — Limited Risk (Chatbot)

User: "We have a customer support chatbot that answers questions about our products."

```json
{
  "intention_summary": "Customer-facing chatbot for product FAQ and support queries",
  "inferred_domain": "none",
  "processes_biometric_data": false,
  "involves_profiling": false,
  "makes_autonomous_decisions": false
}
```
Result: ℹ️ LIMITED RISK — Art. 50: The system must clearly disclose to users that they are interacting with an AI. No further compliance obligations beyond transparency.
### ✅ Example 5 — Minimal Risk

User: "We use AI to recommend songs to users based on their listening history."

```json
{
  "intention_summary": "Music recommendation engine based on listening patterns",
  "inferred_domain": "none",
  "processes_biometric_data": false,
  "involves_profiling": false,
  "makes_autonomous_decisions": false
}
```
Result: ✅ MINIMAL RISK — No specific obligations under the EU AI Act. Voluntary codes of conduct apply.
### 💰 Example 6 — Fine Calculation

User: "We're a company with €500M annual revenue. What's the worst-case fine if our prohibited AI system is caught?"

The LLM calls `fines.calculate`:

```json
{
  "violation_type": "prohibited_practice",
  "global_annual_revenue_eur": 500000000,
  "is_sme": false
}
```
Result:

| | Value |
|---|---|
| Fixed threshold | €35,000,000 |
| Revenue-based (7%) | €35,000,000 |
| Applicable fine | €35,000,000 |
| Article | Article 99(3) |
If annual revenue were €1B, the revenue-based fine (€70M) would exceed the fixed threshold.
### 🔍 Example 7 — Interactive Exemption Audit

User: "Our AI was classified as high-risk, but it just automates a simple checklist — no real decisions."

The LLM initiates an interactive exemption audit via `exemptions.check`:

**Round 1:**

```json
{ "risk_level": "high", "answers": {} }
```

→ "Is this AI system purely procedural, performing a narrow task with no discretionary judgement?"

User: "Yes, it just checks if all documents are present in a submission."

**Round 2:**

```json
{ "risk_level": "high", "answers": { "purely_procedural": true } }
```

→ ✅ EXEMPT under Art. 6(3)(a) — The system performs a purely procedural task.
The Presenter rules change dynamically: instead of "ask the next question", it now shows the exemption details and advises legal verification.
### 📋 Example 8 — Full Compliance Workflow

A realistic end-to-end scenario:

**Step 1 — User describes their system:**

> "We're building an AI for a city council that analyses CCTV footage to predict where crimes are likely to occur."

**Step 2 — `risk.assess` classifies:**

```
🚫 UNACCEPTABLE — Art. 5(1)(d): Predictive policing based on profiling.
Domain: law_enforcement | Biometric: false | Profiling: true | Autonomous: true
```

**Step 3 — User asks about exemptions:**

> "But what if it's for national security?"

**Step 4 — `exemptions.check` evaluates:**

```
→ "Is this system exclusively for military or national security purposes?"
User: "No, it's municipal police."
→ NOT EXEMPT — Art. 2(3) applies only to military/national security, not municipal law enforcement.
```

**Step 5 — User asks about fines:**

> "What's the maximum fine for the city council?"

**Step 6 — `fines.calculate`:**

```
💰 Prohibited practice: max(€35M, 7% of global revenue)
Public authorities typically face the fixed threshold: €35,000,000
```

**Step 7 — User asks about the timeline:**

> "When does this become enforceable?"

**Step 8 — `compliance.timeline`:**

```
📅 Prohibited practices (Art. 5): ENFORCED since 2 February 2025
⚠️ This system is already illegal.
```
### 📄 Example 9 — Generating Technical Documentation

User: "Generate the Annex IV documentation template for our employment screening AI."

The LLM calls `compliance.generate_report`:

```json
{
  "system_name": "TalentScreen AI",
  "system_description": "AI-powered CV screening and candidate ranking system",
  "intended_purpose": "Automated shortlisting of job applicants for interview",
  "domain": "employment",
  "provider_name": "Acme Corp"
}
```
Result: A complete Markdown document with all 9 Annex IV sections:

```markdown
# Technical Documentation — TalentScreen AI
> Regulation (EU) 2024/1689 — Article 11, Annex IV

## 1. General Description of the AI System
## 2. Detailed Description of Elements and Development Process
## 3. Monitoring, Functioning and Control
## 4. Risk Management System
## 5. Data and Data Governance
## 6. Human Oversight Measures
## 7. Accuracy, Robustness and Cybersecurity
## 8. Transparency and Provision of Information
## 9. Post-Market Monitoring
```

Each section includes subsections with `[To be completed by the provider]` placeholders.
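A V8-safe generator of this shape reduces to pure string assembly. Section titles follow the Annex IV outline above; the function name and structure are illustrative, not the server's actual code:

```typescript
// Minimal sketch of a V8-safe template generator: returns a Markdown
// string and touches no filesystem. Illustrative only.
const ANNEX_IV_SECTIONS = [
  'General Description of the AI System',
  'Detailed Description of Elements and Development Process',
  'Monitoring, Functioning and Control',
  'Risk Management System',
  'Data and Data Governance',
  'Human Oversight Measures',
  'Accuracy, Robustness and Cybersecurity',
  'Transparency and Provision of Information',
  'Post-Market Monitoring',
];

function generateReport(systemName: string): string {
  const header =
    `# Technical Documentation — ${systemName}\n` +
    `> Regulation (EU) 2024/1689 — Article 11, Annex IV\n`;
  const body = ANNEX_IV_SECTIONS
    .map((title, i) => `\n## ${i + 1}. ${title}\n\n[To be completed by the provider]`)
    .join('\n');
  return header + body;
}

// Count the numbered section headings in the generated document.
console.log(generateReport('TalentScreen AI').split('\n## ').length - 1); // 9
```

Returning a string instead of writing files keeps the tool safe to run in sandboxed V8 isolates at the edge.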
## Deploy

### Vinkius Cloud — One Command ⚡

The fastest path to production. `vurb deploy` publishes your server to Vinkius Cloud's global edge — zero infrastructure, built-in DLP, kill switch, audit logging, and a managed MCP token:

```bash
npm install
vurb deploy
```

That's it. No Dockerfile, no CI/CD pipeline, no servers to manage. You get a connection token that works with any MCP client — Cursor, Claude Desktop, Claude Code, Windsurf, Cline, VS Code + Copilot.

```bash
# Deploy with a custom name
vurb deploy --name eu-ai-act

# Deploy to a specific environment
vurb deploy --env production
```

> 💡 Tip: Install the Vinkius extension to manage your deployed server directly from VS Code, Cursor, or Windsurf — live connections, requests, P95 latency, DLP intercepts, token management, tool toggling, logs, and deployment history.
### Connect Your MCP Client

After deploying, share the managed token with any MCP-compatible client:

**Claude Desktop / Cursor / Windsurf**

```json
{
  "mcpServers": {
    "eu-ai-act": {
      "url": "https://edge.vinkius.com/your-token/mcp"
    }
  }
}
```
### Self-Hosted Alternatives

The same ToolRegistry runs anywhere — no code changes required:

| Platform | Adapter |
|---|---|
| Vercel Edge Functions | @vurb/vercel |
| Cloudflare Workers | @vurb/cloudflare |
| Any Node.js server | Stdio / HTTP+SSE transport |

```ts
// Vercel — one line
import { vercelAdapter } from '@vurb/vercel';
export const POST = vercelAdapter({ registry, contextFactory });

// Cloudflare Workers — one line
import { cloudflareWorkersAdapter } from '@vurb/cloudflare';
export default cloudflareWorkersAdapter({ registry, contextFactory });
```

Full deployment guides: Production Server · Vercel Adapter · Cloudflare Adapter
### Running Locally via stdio

If you prefer to run the server locally and connect via stdio — the native MCP transport used by Claude Desktop, Cursor, and Windsurf:

```bash
# 1. Install dependencies
npm install

# 2. Build
npm run build

# 3. Test it directly (optional)
node dist/server.js
```

Then add it to your MCP client config (e.g. `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "eu-ai-act": {
      "command": "node",
      "args": ["/absolute/path/to/eu-ai-act/dist/server.js"]
    }
  }
}
```

Or run without cloning via `npx` (after the package is published):

```json
{
  "mcpServers": {
    "eu-ai-act": {
      "command": "npx",
      "args": ["-y", "@mcp-originals/eu-ai-act"]
    }
  }
}
```

> Note: The stdio transport is the default. No environment variables are required — the server runs fully offline with all legal data hardcoded.
## Development

```bash
# Install dependencies
npm install

# Development server with HMR
vurb dev

# Type-check
npm run typecheck

# Run tests
npm test
```
## Project Structure

```
src/
├── data/                         ← Legal knowledge base (hardcoded for determinism)
│   ├── types.ts
│   ├── prohibited-practices.ts     Art. 5
│   ├── annex-iii-matrix.ts         Annex III
│   ├── limited-risk.ts             Art. 50
│   ├── obligations.ts              Art. 9-15, 72
│   ├── timeline.ts                 Enforcement dates
│   ├── exemptions.ts               Art. 6(3), Art. 2
│   ├── fines.ts                    Art. 99
│   └── annex-iv-template.ts        Documentation TOC
├── engine/
│   └── risk-engine.ts              Pure deterministic logic
├── models/                       M — defineModel()
│   └── index.ts
├── views/                        V — definePresenter()
│   └── index.ts
├── agents/                       A — Tool definitions
│   ├── risk/
│   │   └── assess.tool.ts
│   ├── fines/
│   │   └── calculate.tool.ts
│   ├── exemptions/
│   │   └── check.tool.ts
│   └── compliance/
│       ├── obligations.tool.ts
│       ├── timeline.tool.ts
│       └── generate-report.tool.ts
├── vurb.ts                       Shared Vurb instance
└── server.ts                     Bootstrap
```
## Built with Vurb.ts

This server showcases the full Vurb.ts MVA (Model-View-Agent) pattern:
| Feature | Usage |
|---|---|
| defineModel() | 6 domain models with guidance labels as auto-rules |
| definePresenter() | Schema validation, JIT rules, Mermaid diagrams, collection UI, suggested actions |
| Semantic Verbs | f.query() (readOnly), f.action() (neutral) |
| f.error() | Self-healing errors with suggestions |
| .cached() | Static reference data (obligations, timeline) |
| .instructions() | AI-first prompt engineering in the framework |
| agentLimit | Cognitive guardrails with truncation guidance |
| autoDiscover() | File-based tool routing |
Get started with Vurb.ts → · Documentation →
## License
Apache 2.0 — See LICENSE for details.