
Launching jobs on Lakeflow

This tool is an opinionated way to spawn compute jobs on the cloud. By "compute job", I mean a massively parallel data processing job like training a deep net, analyzing a large corpus of text sitting in an S3 bucket, or running 1000 parallel simulations of something. To let you do these things, this package asks you to author your code as a Python package and to declare your dependencies in a pyproject.toml. It then uploads that package (as a Python wheel) for Databricks to execute.

This is heavier-weight than Databricks' built-in notebook approach of editing a Python script in their web UI. In return, it lets you capture large package dependencies across repos via git submodules, and import third-party packages via uv. It's lighter-weight than most other job submission systems because it doesn't require you to build Docker containers. Docker containers take a large snapshot of your system, enough to build a full Unix environment. These snapshots are on the order of gigabytes and difficult to upload from a home computer. For most of our work, wheels provide all the containerization we need (a wheel is a few kilobytes).

It has one more opinion: that uv is a good way to capture those Python dependencies, via a pyproject.toml. We're also exploring Pants as a way to manage more complex packages. Pants can also export wheels, so nothing in this design prevents us from adopting Pants.

You can use this tool to build your wheel, upload it to Databricks, spawn copies of it each with different command-line arguments, and track your jobs' status. You can also use the Databricks UI to check the state of your jobs. The tool provides several interfaces:

  • An MCP server so you can have AIs spawn jobs for you.
  • A CLI you can use from the shell.
  • A programmatic Python interface you can call from a Python program.

Getting access to Databricks

Check if you have access to Databricks by visiting this URL. If you get stuck in an infinite loop where Databricks sends you a code that doesn't work, it means you don't have an account. Ask for one in #help-data-platform.

Your package's structure

This package assumes the package you want to run on the cluster has a structure like the following, and that it can be run with uv run:

my_project/
├── pyproject.toml
└── src/
    └── my_package/
        ├── __init__.py
        └── my_package_py.py

It also assumes you've added an entry point to your pyproject.toml called "lakeflow-task". If your package is called my_package, and it has a driver script called my_package_py.py, and the main function in this script is called main, you would define the "lakeflow-task" entry point like this:

[project.scripts]
lakeflow-task = "my_package.my_package_py:main"
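
For reference, here's a minimal sketch of the driver script this entry point resolves to. The body is a placeholder; a real main would parse its arguments and do the work:

# src/my_package/my_package_py.py
def main() -> None:
    # Called by Databricks when the wheel task runs.
    print("hello from my_package")

if __name__ == "__main__":
    main()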

The package lakeflow_demo under this directory gives you a concrete example of how to set up a package.

Building and launching your package with the CLI

To run the package on the cluster, first build the wheel, then upload it, then tell Databricks to run it.

  1. Create the job from source:

    You can use create-job-from-source to build, upload, and create the job:

    uv run lakeflow.py create-job-from-source \
      "my-lakeflow-job" \
      "my-package" \
      --target ~/my_project \
      --max-workers 4 \
      --secret-env-var MY_SECRET_KEY --secret-env-var MY_OTHER_SECRET_KEY
    

    This returns the job ID, which we'll use in the next step. This doesn't yet run any jobs. It just starts a cluster that can run them. The --max-workers argument sets the maximum number of workers for autoscaling. You can also pass environment variables to the remote job without leaking secrets (like API keys) through your command line. The tool reads the values from your local environment and uploads them to Databricks Secrets. The job can access these secrets using the Databricks dbutils API, with its own package name as the scope.
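
    Inside the remote job, a secret can then be read back with dbutils. Here's a minimal sketch; it assumes the databricks-sdk runtime helper is available on the cluster, and that the scope name is the package name, as described above:

    from databricks.sdk.runtime import dbutils

    # Scope is the package name; the key is the env var name passed above.
    api_key = dbutils.secrets.get(scope="my-package", key="MY_SECRET_KEY")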

  2. Start the job:

    uv run lakeflow.py trigger-run 123456 arg11 arg12
    uv run lakeflow.py trigger-run 123456 arg21 arg22
    uv run lakeflow.py trigger-run 123456 arg31 arg32
    

    This starts three instances of the job with three different sets of arguments. You can have the arguments refer to different shards of data, and kick off as many parallel jobs as you want. Your job can retrieve these arguments through sys.argv, and its run ID from the environment variable DATABRICKS_RUN_ID, as sketched below.
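
    A sketch of the job side, assuming the argv and environment behavior described above: the driver's main function could read its shard arguments and run ID like this:

    import os
    import sys

    def main() -> None:
        shard_args = sys.argv[1:]  # e.g. ["arg11", "arg12"] for the first run
        run_id = os.environ.get("DATABRICKS_RUN_ID")  # unique per trigger-run
        print(f"run {run_id} processing shards {shard_args}")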

  3. Monitor the runs:

    uv run lakeflow.py list-job-runs 123456
    

    This lists the runs for the given job ID.

  4. Get Run Logs:

    uv run lakeflow.py get-run-logs 987654321
    

    This retrieves the logs for a specific run ID. It takes the run ID returned by trigger-run.

Using the Python programmatic interface

The above illustrated how to use the CLI. You might find it easier to use the programmatic Python interface to the package instead. See run_lakeflow_demo.py for an example.
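
For illustration only, a programmatic call might look like the sketch below. The module and function names here are hypothetical stand-ins that mirror the CLI verbs; the real interface is whatever run_lakeflow_demo.py uses:

# Hypothetical sketch -- see run_lakeflow_demo.py for the actual API.
import lakeflow

job_id = lakeflow.create_job_from_source(
    job_name="my-lakeflow-job",
    package_name="my-package",
    target="~/my_project",
    max_workers=4,
)
run_id = lakeflow.trigger_run(job_id, "arg11", "arg12")
print(lakeflow.list_job_runs(job_id))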

Using the MCP server

You can install this package as an MCP server. To do that, add this to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "lakeflow": {
      "command": "uv",
      "args": [
        "run",
        "--quiet",
        "--directory",
        "/path/to/lakeflow-mcp",
        "python",
        "lakeflow.py"
      ],
      "env": {
        "DATABRICKS_HOST": "https://hims-machine-learning-staging-workspace.cloud.databricks.com",
        "DATABRICKS_TOKEN": "<your token>"
      }
    },
    ...
  }
}

Then you can ask the agent to do things like this:

let's launch 4 copies of this job on lakeflow, and pass them the arguments "fi", "fie", "fo", and "fum" respectively.

Alternative designs considered

My objective was to build a job submission system that:

  1. Python first: Runs Python packages with ~100s of Python files and third-party dependencies.
  2. Versioned artifacts: Versions both source and output.
  3. Workflow orchestration: Breaks its steps into tasks that can be cached, checkpointed, retried, and resumed under a workflow orchestrator.
  4. Native-capable: Accommodates a small amount of non-Python code written in Rust, C++, or Dafny.
  5. Small scale: Runs jobs on ~100s of remote workers in parallel, for ~20 engineers simultaneously.

The ideal system would use Prefect as a workflow orchestrator, on top of the existing Kubernetes scaffolding we currently use to run staging and prod. There are many workflow orchestrators, but Prefect is the only one that provides all of the workflow functionality listed above. The ideal system would be a Prefect front-end VM, which scales a Kubernetes cluster up and down on demand. Rolling this out would have taken some conversations with the devops team, and would have introduced a new tech stack to the company. The time for this will come soon, but this package is not that.

In the meantime, this package uses a tech stack the Data Engineering team is already using. They already use Databricks to run notebooks, and their expertise with Databricks helped me ramp up quickly on this solution. Databricks notebooks are small Python files the DE team edits in the Databricks UI. These scripts are versioned under Git. Databricks does provide a workflow orchestrator, but the team uses Airflow for their bigger jobs. In all, the tech stack the Data Engineering team already uses provides 70% of the functionality I was trying to develop. So I decided to build on top of it instead of building an alternative to it. This package upgrades our existing tech stack to support much larger Python packages via Python wheels (not just notebooks).

You'll notice that this package doesn't provide any workflow orchestration. That's to come. Databricks provides some rudimentary workflow capabilities, which I'll gradually incorporate into this system.
