How to Deploy MCP Servers with Docker: Complete Guide
TL;DR: Use python:3.12-slim (not Alpine), run as non-root user, expose port 8000. Streamable HTTP for production, stdio for local dev. Or skip the DevOps entirely with MCPize one-click hosting.
Look, I get it. You built an MCP server and now you're staring at the deployment phase thinking "how do I get this thing into production without breaking everything?"
Docker is your answer. It's not just some DevOps trend. It's genuinely the most reliable way to package and deploy MCP servers. Why? Because it solves the real problems that kill MCP deployments: dependency conflicts between Python versions, missing system libraries, environments that mysteriously differ between dev and prod, and the classic "works on my machine" situation.
This guide walks you through containerizing your MCP server from scratch. We'll cover creating Dockerfiles for Python and TypeScript servers, setting up Docker Compose for production, implementing security best practices, and testing with Claude Desktop. I'll also tell you straight up when managed hosting makes more sense than running your own Docker infrastructure.
Skip the DevOps. Deploy on MCPize.

Why Docker for MCP Servers Actually Makes Sense#
MCP servers aren't your typical web apps. They need to run continuously, handle concurrent requests from AI assistants, and integrate with external APIs that all have their own auth and config requirements. Docker handles all of this beautifully.
Environment consistency. A Docker container runs identically on your laptop, in CI/CD, and on production servers. Same Python version. Same system libraries. Same config. No surprises.
Real isolation. Containers run in isolated namespaces. If your MCP server has a vulnerability, it can't easily compromise the host or other containers. That's not just security theater. That's actual protection.
Dead simple distribution. Push an image to Docker Hub or your private registry. Pull it anywhere. No install scripts. No dependency resolution at runtime. It just works.
Instant rollbacks. Every deployment uses the exact same image. Rolling back means deploying the previous image tag. Takes seconds, not hours of debugging.
New to MCP? Start with the build guide.

What You'll Need#
Before we containerize your MCP server, make sure you have:
- Docker Desktop (macOS/Windows) or Docker Engine (Linux)
- Working MCP server code in Python or TypeScript
- Basic Docker familiarity. You should know what images, containers, and Dockerfiles are
If you haven't built an MCP server yet, check out the MCP server tutorial first. This guide assumes you have functional code ready to containerize.
Understanding MCP Transport Modes#
Your transport choice affects Docker configuration significantly. MCP servers communicate with AI assistants using one of two transport mechanisms.
STDIO Transport (Local Development)#
STDIO transport runs MCP servers as subprocesses. The AI assistant spawns the server, communicates via standard input/output streams, and terminates it when done.
For Docker, this means:
- Container runs as a foreground process
- No network ports exposed
- Great for local development or desktop integrations
- Claude Desktop uses STDIO by default
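On the wire, STDIO transport frames each JSON-RPC message as a single line of JSON on stdin/stdout. As a sketch, here's what the first message a client sends looks like (the client name and version are illustrative):

```python
import json
import sys

# "initialize" is the first request an MCP client sends to a server.
# STDIO transport writes one JSON-RPC message per line.
msg = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1.0"},
    },
}
sys.stdout.write(json.dumps(msg) + "\n")
```

This framing is why the container must run as a foreground process with stdin attached: the transport is literally the process's standard streams.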
Streamable HTTP Transport (Production)#
Here's the thing. The old HTTP+SSE transport you might have read about is deprecated as of the March 2025 revision of the MCP spec. The new standard is Streamable HTTP transport.
Streamable HTTP exposes MCP servers over a single HTTP endpoint. Your client sends JSON-RPC messages via POST. The server can respond with regular JSON or stream back results using SSE within the same HTTP response.
For Docker, this means:
- Container exposes network port (typically 8000 or 3000)
- Requires proper network configuration
- Supports multiple concurrent connections
- Required for production deployments
For production Docker deployments, you'll use Streamable HTTP transport.
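To make the single-endpoint model concrete, here's a sketch of the POST a Streamable HTTP client builds. The request is constructed but not sent, and the /mcp path and the "add" tool are illustrative, not fixed by the spec:

```python
import json
import urllib.request

# JSON-RPC payload for calling a (hypothetical) "add" tool.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
}

req = urllib.request.Request(
    "http://localhost:8000/mcp",  # illustrative endpoint path
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # The client accepts either a plain JSON reply or an SSE stream.
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

Every interaction goes through this one endpoint, which is why the container only needs a single exposed port.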
Creating Your MCP Server Dockerfile#
The Dockerfile defines how to build your Docker MCP server image. Let's cover both Python and TypeScript.
Python MCP Server Dockerfile#
For Python MCP servers using FastMCP or the official SDK:
FROM python:3.12-slim
WORKDIR /app
# Create non-root user for security
RUN useradd --create-home --shell /bin/bash mcp
# Install dependencies first (cache layer)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY --chown=mcp:mcp . .
# Switch to non-root user
USER mcp
# Expose the MCP server port
EXPOSE 8000
# Health check for container orchestration
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s \
CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
# Run the MCP server
CMD ["python", "-m", "mcp_server"]
Here's why I made these choices:
python:3.12-slim balances image size (~120MB) with compatibility. Alpine is smaller but frequently causes problems with packages that have C extensions. I've seen teams waste days debugging bcrypt or psycopg2 issues on Alpine. Slim just works.
Non-root user. Never run production containers as root. If an attacker escapes the container, they land as an unprivileged user. Basic security hygiene.
Separate COPY for requirements.txt. Creates a cache layer. Dependencies rebuild only when requirements change, not on every code change. Saves minutes per build.
HEALTHCHECK. Lets Docker and orchestrators detect unhealthy containers and restart them automatically.
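The HEALTHCHECK assumes your server exposes a /health route. How you add that route depends on your framework; as a framework-free sketch, this is the behavior the probe expects (a 200 response with a small JSON body), demonstrated with Python's stdlib HTTP server:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Framework-free stand-in for the /health route the HEALTHCHECK probes.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    status, body = resp.status, json.loads(resp.read())
server.shutdown()
print(status, body)
```

Keep the handler dependency-free and fast: orchestrators call it every 30 seconds, and a health check that touches your database can itself become the thing that flaps.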
TypeScript/Node MCP Server Dockerfile#
For TypeScript MCP servers, use a multi-stage build:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
RUN addgroup -g 1001 -S mcp && \
adduser -S mcp -u 1001 -G mcp
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder --chown=mcp:mcp /app/dist ./dist
USER mcp
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s \
CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
CMD ["node", "dist/index.js"]
This uses a multi-stage build:
- Builder stage installs all dependencies and compiles TypeScript
- Production stage copies only compiled JavaScript and production dependencies
- Final image excludes TypeScript, dev dependencies, and source files
Use npm ci instead of npm install. It's faster and gives reproducible builds based on your lock file. This matters more than you think for production stability.
Why Multi-stage Builds Matter#
| Stage | Contents | Typical Size |
|---|---|---|
| Builder | Node, npm, TypeScript, all deps, source | ~800MB |
| Production | Node, runtime deps, compiled JS | ~150MB |
Multi-stage builds are essential for production. Fewer packages means fewer vulnerabilities. Smaller images mean faster deployments. It's a win all around.
Docker Compose for MCP Servers#
Docker Compose simplifies managing MCP servers with their dependencies and configuration.
Basic docker-compose.yml#
services:
  mcp-server:
    build: .
    ports:
      - "8000:8000"
    environment:
      - MCP_LOG_LEVEL=info
      - API_KEY=${API_KEY}
    restart: unless-stopped
That's it for dev. Builds your Dockerfile, maps port 8000, passes environment variables, and auto-restarts on failure.
Production docker-compose.yml#
Production needs health checks, resource limits, and proper logging:
services:
  mcp-server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      - MCP_LOG_LEVEL=info
      - API_KEY=${API_KEY}
      - DATABASE_URL=${DATABASE_URL}
    env_file:
      - .env.production
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - mcp-network

networks:
  mcp-network:
    driver: bridge
What we added for production:
Resource limits. Prevents runaway containers from eating all host resources. Your MCP server probably doesn't need more than 512MB RAM.
Health checks. Auto-restart unhealthy containers. No more 3am pages because something silently died.
Log rotation. Prevents disk exhaustion from verbose logging. I've seen production servers go down because of full disks. Not fun.
Dedicated network. Isolates MCP traffic from other containers.
Security Best Practices for Docker MCP Servers#
MCP servers often handle sensitive operations. Database queries. API calls. File access. Security deserves real attention.
Run as Non-Root User#
This is the most common Docker security mistake I see. Running as root means a container escape gives attackers root on the host.
RUN useradd --create-home --shell /bin/bash mcp
USER mcp
Verify your container actually runs as non-root:
docker exec <container> whoami
# Should output: mcp (NOT root)
Use Minimal Base Images#
Every package in your base image is a potential vulnerability. Here's the tradeoff:
| Base Image | Size | Packages | CVE Exposure |
|---|---|---|---|
| python:3.12 | 1.0GB | 400+ | High |
| python:3.12-slim | 120MB | 100+ | Medium |
| python:3.12-alpine | 50MB | 20+ | Low |
Alpine looks great on paper, but it uses musl libc instead of glibc. This breaks prebuilt wheels for packages like NumPy and cryptography. If you're running pure Python code, Alpine works. For anything with C extensions, stick with slim.
For the ultra-security-conscious, check out distroless images like gcr.io/distroless/python3. They have no shell, no package manager, just your runtime. Attack surface near zero.
Secrets Management#
Never hardcode secrets in Dockerfiles or images:
# WRONG - secret baked into the image layer
ENV API_KEY=sk-12345678
Don't reach for ENV API_KEY=${API_KEY} either. Dockerfile variables are resolved at build time, not at docker run, so that line ends up empty or, worse, bakes a build-time value into the image. The right move is no ENV line at all. Pass the secret when the container starts:
docker run -e API_KEY=$API_KEY my-mcp-server
For Docker Compose, use environment files:
# .env.production (never commit this file)
API_KEY=sk-your-production-key
DATABASE_URL=postgres://user:pass@host/db
Add .env* to your .dockerignore and .gitignore. Seriously.
For Docker Swarm or Kubernetes, use native secrets:
secrets:
  api_key:
    external: true

services:
  mcp-server:
    secrets:
      - api_key
Container Scanning#
Scan images for vulnerabilities before deployment:
# Docker Scout (built into Docker Desktop)
docker scout cves your-mcp-server:latest
# Trivy (open source, fast)
trivy image your-mcp-server:latest
# Snyk (great for CI/CD)
snyk container test your-mcp-server:latest
Docker's security research has reported figures like 48% fewer production vulnerabilities for teams that scan containers regularly. Whatever the exact number, scanning catches known CVEs before they ship instead of after.
Testing Your Dockerized MCP Server#
Local Testing with Docker#
Build and run your container:
docker build -t my-mcp-server .
docker run -p 8000:8000 \
-e API_KEY=$API_KEY \
my-mcp-server
Test the health endpoint:
curl http://localhost:8000/health
# Expected: {"status": "ok"}
Testing with Claude Desktop#
For STDIO transport testing, configure Claude Desktop to use your Docker container:
{
  "mcpServers": {
    "my-server": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "API_KEY",
        "my-mcp-server"
      ],
      "env": {
        "API_KEY": "your-api-key-here"
      }
    }
  }
}
The -i flag keeps stdin open for STDIO communication. --rm cleans up the container after the session ends. One gotcha: Claude Desktop does not expand ${VAR} placeholders in args, so put the value in the config's env block and use a bare -e API_KEY, which tells Docker to forward that variable from its environment.
Debugging Docker Containers#
When things break (and they will), Docker has your back:
# View container logs
docker logs my-mcp-server
# Follow logs in real-time
docker logs -f my-mcp-server
# Interactive shell access
docker exec -it my-mcp-server /bin/sh
# Inspect container configuration
docker inspect my-mcp-server
Common issues I've seen:
| Symptom | Likely Cause | Fix |
|---|---|---|
| Container exits immediately | Application crash on startup | Check logs for Python/Node errors |
| Port not accessible | Port mapping issue | Verify -p host:container matches EXPOSE |
| Permission denied on files | Root/non-root mismatch | Check file ownership in Dockerfile |
| Out of memory | Resource limits too tight | Increase memory in compose |
Deploying to Production#
Self-Hosted Options#
For teams running their own infrastructure:
Cloud VM (AWS EC2, GCP, DigitalOcean)
Install Docker on a VM, pull your image, run with Docker Compose. Simple and cost-effective for moderate scale.
docker compose -f docker-compose.prod.yml up -d
Container Orchestration (Kubernetes, Docker Swarm)
For high availability and automatic scaling:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
      - name: mcp-server
        image: your-registry/mcp-server:latest
        ports:
        - containerPort: 8000
Kubernetes adds complexity but gives you automatic healing, rolling updates, and horizontal scaling. Only go this route if you actually need it.
The Reality of Self-Managed Docker#
Here's what nobody tells you. Managing Docker infrastructure is a part-time job. Security patches. Monitoring setup. Scaling config. Incident response at 2am.
Skip the DevOps: Deploy on MCPize#
MCPize provides managed hosting that eliminates infrastructure work entirely:
| Aspect | Self-Managed Docker | MCPize Managed |
|---|---|---|
| Deployment | CI/CD pipeline setup | mcpize publish |
| Scaling | Configure autoscaling | Automatic |
| Monitoring | Set up Prometheus/Grafana | Built-in dashboard |
| Security | Patch management | Handled by platform |
| Cost | $20-100/mo + your time | Platform fee only |
| Monetization | Build Stripe integration | 85% revenue share |
If you want to focus on building MCP servers instead of babysitting infrastructure, MCPize is the move.
Deploy on MCPize.

Docker MCP Toolkit#
Docker Desktop includes the MCP Toolkit with access to 200+ curated MCP server images. It's built right in.
Using Docker MCP Catalog#
Docker maintains a catalog of pre-built MCP server images at hub.docker.com/mcp. Enable MCP Toolkit in Docker Desktop settings, then run servers directly:
docker mcp run filesystem --allowed-directories=/home/user/documents
docker mcp run github --token=$GITHUB_TOKEN
The Toolkit handles security checks, credential management, and connections to MCP clients like Claude Desktop, Cursor, and VS Code automatically.
When to Use Toolkit vs Custom Docker#
| Scenario | Recommendation |
|---|---|
| Using standard MCP servers (filesystem, github) | Docker MCP Toolkit |
| Custom MCP server with specific dependencies | Custom Dockerfile |
| Production deployment with CI/CD | Custom Dockerfile |
| Quick local testing | Docker MCP Toolkit |
The Toolkit is excellent for running existing MCP servers. For custom servers you're building, you'll need your own Dockerfile.
CI/CD for Docker MCP Servers#
If you're deploying Docker MCP servers in production, you want automated builds and deployments. Here's a GitHub Actions workflow that builds, scans, and pushes your image.
GitHub Actions Workflow#
name: Build and Deploy MCP Server

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: your-username/mcp-server:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: your-username/mcp-server:latest
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
This workflow:
- Builds on every push to main
- Uses Docker layer caching to speed up builds
- Runs vulnerability scanning with Trivy
- Fails the build if critical vulnerabilities are found
- Only pushes to registry on non-PR builds
Automated Deployment#
After your image is pushed, you can trigger deployment to your infrastructure. Here's a simple SSH-based deploy step:
- name: Deploy to server
  if: github.event_name != 'pull_request'
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SERVER_HOST }}
    username: ${{ secrets.SERVER_USER }}
    key: ${{ secrets.SSH_PRIVATE_KEY }}
  script: |
    docker pull your-username/mcp-server:latest
    docker compose -f /app/docker-compose.prod.yml up -d
For Kubernetes deployments, use kubectl set image or a GitOps tool like ArgoCD.
The reality? This is a lot of YAML to maintain. If you're thinking "I just want to deploy my MCP server, not become a DevOps engineer," MCPize handles all of this automatically with mcpize publish.
Troubleshooting Common Docker MCP Issues#
After helping hundreds of developers containerize MCP servers, I've seen the same problems come up repeatedly. Here's the complete troubleshooting guide.
Container Won't Start#
Symptoms: Container exits immediately with code 1 or 127.
Check the logs first:
docker logs my-mcp-server
Common causes:
- Missing dependencies: Your requirements.txt or package.json is incomplete. Check if you're importing something not in your deps.
- Wrong entrypoint: The CMD in your Dockerfile doesn't match your actual module structure.
- Python path issues: Try using CMD ["python", "-m", "your_module"] instead of direct script execution.
Port Not Accessible#
Symptoms: curl localhost:8000 times out or connection refused.
Debug steps:
# Check if container is running
docker ps
# Check what ports are exposed
docker port my-mcp-server
# Check if server is listening inside the container
# (netstat may be absent in slim images; ss -tlpn is a common alternative)
docker exec my-mcp-server netstat -tlpn
Common causes:
- Server binding to 127.0.0.1: Your MCP server needs to bind to 0.0.0.0 to be accessible from outside the container.
- Wrong port mapping: Make sure -p 8000:8000 matches your EXPOSE and actual server port.
- Firewall blocking: On Linux, check iptables or ufw.
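The 127.0.0.1-vs-0.0.0.0 distinction is worth internalizing: a socket bound to loopback only accepts connections from inside the same network namespace (i.e., the container itself), while 0.0.0.0 listens on every interface, which is what Docker's port mapping forwards to. A minimal demonstration:

```python
import socket

# Bind one socket to loopback only and one to all interfaces.
loopback = socket.socket()
loopback.bind(("127.0.0.1", 0))

all_ifaces = socket.socket()
all_ifaces.bind(("0.0.0.0", 0))

lb_host = loopback.getsockname()[0]
all_host = all_ifaces.getsockname()[0]

# Only the 0.0.0.0 socket would be reachable through a -p port mapping;
# the loopback one is invisible from outside the container.
print(lb_host, all_host)

loopback.close()
all_ifaces.close()
```

Most frameworks default to 127.0.0.1 for safety, so inside a container you usually have to pass the bind host explicitly (e.g. a --host 0.0.0.0 flag or equivalent setting).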
Permission Denied Errors#
Symptoms: PermissionError: [Errno 13] Permission denied in logs.
This happens when you switch to a non-root user but files are owned by root:
# WRONG - files copied as root, but running as non-root
COPY . .
USER mcp
# RIGHT - copy with correct ownership
COPY --chown=mcp:mcp . .
USER mcp
MCP Tools Not Responding#
Symptoms: Claude Desktop connects but tools don't work or timeout.
Debug with MCP Inspector. Run it on your host and let it spawn the containerized server over STDIO:
npx @modelcontextprotocol/inspector docker run -i --rm my-mcp-server
Common causes:
- Async/await issues: Make sure your tool handlers are properly async.
- Timeout configuration: Increase client timeouts for slow operations.
- Missing environment variables: Check if required API keys are passed to the container.
Image Size Too Large#
Symptoms: Your image is 500MB+ and deploys take forever.
Diagnose with:
docker history my-mcp-server --no-trunc
This shows you exactly what's eating space. Common fixes:
- Use multi-stage builds (see the TypeScript example above)
- Add a proper .dockerignore:
node_modules
.git
.env*
*.log
__pycache__
.pytest_cache
.venv
dist
- Switch from python:3.12 to python:3.12-slim
- Use npm ci --omit=dev instead of a full npm install
Memory Issues#
Symptoms: Container gets OOMKilled or becomes unresponsive.
Check current memory usage:
docker stats my-mcp-server
Fixes:
- Increase memory limits in docker-compose.yml
- Check for memory leaks in your MCP server code
- For Node.js, set NODE_OPTIONS="--max-old-space-size=384"
FAQ#
What's the best Docker base image for MCP servers?
For Python MCP servers, use python:3.12-slim for the best balance. Alpine is smaller but breaks packages with C extensions. For TypeScript servers, node:20-alpine works well since most Node packages are pure JavaScript. If you need maximum security and have pure Python code, try gcr.io/distroless/python3.
How do I pass API keys and secrets to a Docker MCP server?
Never hardcode secrets in Dockerfiles. Use environment variables with docker run -e or Docker Compose environment sections. For production, use Docker secrets or a secrets manager. Add sensitive files to .dockerignore to prevent accidental inclusion in images.
What happened to SSE transport? Isn't that what MCP uses?
The old HTTP+SSE transport is deprecated as of 2025. The new standard is Streamable HTTP transport. It uses a single HTTP endpoint where clients POST JSON-RPC messages and servers can respond with JSON or stream results using SSE within the response. If you're starting fresh, use Streamable HTTP.
Can I run multiple MCP servers in one Docker container?
Don't. Follow the "one process per container" principle. Use Docker Compose to orchestrate multiple MCP server containers with shared networks. This improves isolation, simplifies debugging, and lets you scale individual servers independently.
How do I debug a Docker MCP server that isn't responding?
Check container logs with docker logs <container>. Verify the container is running with docker ps. For interactive debugging, use docker exec -it <container> /bin/sh. Common issues: incorrect port mappings, missing environment variables, or file permission problems from root/non-root mismatches.
Is there a managed alternative to deploying Docker MCP servers myself?
Yes. MCPize offers one-click managed hosting for MCP servers. Deploy with mcpize publish and get automatic scaling, monitoring, and the ability to monetize through the MCPize marketplace with 85% revenue share. No Docker management required.
Next Steps#
You've learned how to containerize MCP servers with Docker. From basic Dockerfiles through production Compose configs and security hardening. Here's the quick version:
- Create a Dockerfile for Python or TypeScript
- Use multi-stage builds to minimize image size
- Apply security practices: non-root user, minimal base images, secrets management
- Test locally with Docker and Claude Desktop
- Deploy to production or use managed hosting
The choice comes down to control versus convenience. Self-managed Docker gives you complete control but requires ongoing maintenance. MCPize managed hosting trades some flexibility for zero infrastructure work.
Deploy on MCPize, or build your MCP server first.

Related:
- Deploy MCP to Cloudflare - Serverless deployment alternative
- Build MCP Server - Complete build tutorial
- Publish MCP Server - Publishing walkthrough
Questions about Docker deployment? Join MCPize Discord or browse deployed servers for reference implementations.