A Model Context Protocol (MCP) server implementation in Go that integrates with existing resume processing pipelines. This server enables AI platforms like Claude, Copilot, and custom agents to leverage resume processing capabilities through a standardized MCP interface.
The Resume Processor MCP Server bridges the gap between AI platforms and resume processing pipelines, providing:
- Resume Processing: Convert Markdown resumes to PDF, LaTeX, and LinkedIn templates
- Resume Analysis: Extract metadata, sections, contact information, and skills
- Template Management: Access resume templates and examples
- Pipeline Integration: Seamless integration with existing Docker and shell-based processing
- Multiple Transport Modes: WebSocket and stdio communication
- Full MCP 2024-11-05 specification compliance
- Tools, Resources, and Prompts support
- WebSocket and stdio communication modes
- Structured error handling and logging
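Under the 2024-11-05 specification, every session begins with an `initialize` request before any tools, resources, or prompts are used. A minimal handshake sketch (the client name and version below are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```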
- `process_resume`: Full pipeline processing (Markdown → PDF/LaTeX/LinkedIn)
- `analyze_resume`: Content analysis and metadata extraction
- `get_processing_status`: Job status tracking
- `list_templates`: Available template enumeration
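Clients discover these tools at runtime with the standard `tools/list` request:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```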
- Claude Desktop: Direct stdio integration with configuration templates
- Cursor IDE: WebSocket and extension integration support
- VS Code Copilot: WebSocket extension support
- Custom Agents: Flexible communication modes
- Multi-stage Docker builds with full pipeline dependencies
- Automatic dependency detection (Docker vs local processing)
- Health checks and graceful shutdown
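When Docker is unavailable, local processing follows the Markdown → Pandoc → LaTeX → pdflatex chain. A rough sketch of that chain in Python (assumes `pandoc` and `pdflatex` are on PATH; the Go service's actual invocation may differ):

```python
import subprocess
from pathlib import Path

def pipeline_commands(md_file):
    """Build the command list for the Markdown -> LaTeX -> PDF chain."""
    md = Path(md_file)
    tex = md.with_suffix(".tex")
    return [
        ["pandoc", str(md), "-o", str(tex)],                 # Markdown -> LaTeX
        ["pdflatex", "-interaction=nonstopmode", str(tex)],  # LaTeX -> PDF
    ]

def run_pipeline(md_file):
    # Run each stage in order, failing fast on a non-zero exit code.
    for cmd in pipeline_commands(md_file):
        subprocess.run(cmd, check=True)
```

Calling `run_pipeline("resume.md")` would leave `resume.tex` and `resume.pdf` next to the source file.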
```bash
# Install Go 1.22+
go version

# Install pipeline dependencies
brew install pandoc
brew install --cask mactex
brew install python3

# Install Python dependencies
pip3 install -r requirements.txt
```

```bash
# Build the MCP server
make build

# Run in WebSocket mode
make run

# Run in stdio mode for MCP clients
make run-stdio

# Test the server
make integration-test
```

📖 See the Claude Desktop Integration Guide for setup instructions.
Quick configuration: add to your Claude configuration file (`~/.claude/config.json`):

```json
{
  "mcpServers": {
    "resume-processor": {
      "command": "/path/to/resume-processor-mcp",
      "args": ["stdio", "--work-dir", "/path/to/resume/directory"]
    }
  }
}
```

📖 See the Cursor IDE Integration Guide for setup instructions.
Quick configuration: add to your Cursor workspace settings (`.vscode/settings.json`):

```json
{
  "cursor.mcp.servers": {
    "resume-processor": {
      "url": "ws://localhost:8080/mcp",
      "name": "Resume Processor"
    }
  }
}
```

Configure as an extension in your workspace:
```json
{
  "copilot.extensions": {
    "resume-processor": {
      "url": "ws://localhost:8080/mcp"
    }
  }
}
```

```
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────────┐
│   AI Platform   │──────│   MCP Protocol   │──────│  Resume Processor   │
│  (Claude/etc)   │      │   (WebSocket/    │      │       Service       │
│                 │      │      Stdio)      │      │                     │
└─────────────────┘      └──────────────────┘      └─────────────────────┘
                                                              │
                                                              ▼
                                                   ┌─────────────────────┐
                                                   │ Processing Pipeline │
                                                   │                     │
                                                   │ ┌─────────────────┐ │
                                                   │ │    Markdown     │ │
                                                   │ │        ↓        │ │
                                                   │ │     Pandoc      │ │
                                                   │ │        ↓        │ │
                                                   │ │      LaTeX      │ │
                                                   │ │        ↓        │ │
                                                   │ │    PDFLaTeX     │ │
                                                   │ │        ↓        │ │
                                                   │ │  PDF + LinkedIn │ │
                                                   │ └─────────────────┘ │
                                                   └─────────────────────┘
```

`process_resume`: Process a resume through the pipeline.
Parameters:
- `resume_content` (string, required): Markdown content of the resume
- `resume_filename` (string, optional): Filename for the resume (default: `resume.md`)
- `output_formats` (array, optional): Formats to generate (default: `["pdf", "latex", "linkedin"]`)
Example:
```json
{
  "name": "process_resume",
  "arguments": {
    "resume_content": "# John Doe\n\n## Career Summary\n...",
    "resume_filename": "john-doe-resume.md",
    "output_formats": ["pdf", "linkedin"]
  }
}
```

`analyze_resume`: Analyze resume content and extract structured metadata.
Parameters:
- `resume_content` (string, required): Markdown content to analyze
Returns:
- Word count, sections, contact information, skills extraction
`get_processing_status`: Check the status of a processing job.
Parameters:
- `processing_id` (string, required): ID returned from `process_resume`
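For example (the `processing_id` value here is a placeholder):

```json
{
  "name": "get_processing_status",
  "arguments": {
    "processing_id": "proc-12345"
  }
}
```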
`list_templates`: List available resume templates.
Returns:
- Array of available template names
The server provides built-in resources:
- `resume://examples/professional`: Professional resume template example
- `resume://docs/pipeline`: Pipeline documentation
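Resources are fetched with the standard `resources/read` method, e.g.:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "resume://examples/professional" }
}
```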
Pre-configured prompts for common use cases:
- `resume_review`: Review and provide feedback on resume content
- `linkedin_optimization`: Generate LinkedIn profile optimization suggestions
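Prompts are retrieved with the standard `prompts/get` method:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": { "name": "resume_review" }
}
```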
```
resume-processor-mcp/
├── cmd/server/          # Main application entry point
├── pkg/
│   ├── mcp/             # MCP protocol types and definitions
│   ├── processor/       # Resume processing service
│   └── server/          # MCP server implementation
├── scripts/             # Resume processing scripts
├── Dockerfile.mcp       # Docker configuration
├── Makefile             # Build and development commands
└── mcp-config.json      # MCP client configurations
```

```bash
# Build for current platform
make build

# Build for all platforms
make build-all

# Build Docker image
make docker-build

# Run tests
make test

# Run with coverage
make test-coverage
```

```bash
# Unit tests
make test

# Integration tests
make integration-test

# MCP protocol test
make mcp-test

# Pipeline test
make pipeline-test
```

```bash
# Build Docker image
make docker-build

# Run container
docker run -p 8080:8080 -v $(pwd):/app/data resume-processor-mcp:latest

# With custom configuration
docker run -p 8080:8080 \
  -v $(pwd):/app/data \
  -e LOG_LEVEL=debug \
  resume-processor-mcp:latest
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resume-processor-mcp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: resume-processor-mcp
  template:
    metadata:
      labels:
        app: resume-processor-mcp
    spec:
      containers:
        - name: mcp-server
          image: resume-processor-mcp:latest
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: resume-processor-mcp-service
spec:
  selector:
    app: resume-processor-mcp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
```

```javascript
// Example interaction with Claude
const mcpClient = new MCPClient();
await mcpClient.callTool("process_resume", {
  resume_content: resumeMarkdown,
  output_formats: ["pdf", "linkedin"]
});
```

```python
import json
import subprocess

def process_resume_via_mcp(resume_content):
    # NOTE: a fully compliant MCP client sends an `initialize` request
    # before calling tools; this sketch shows only the tool call itself.
    cmd = ["./resume-processor-mcp", "stdio"]
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "process_resume",
            "arguments": {
                "resume_content": resume_content
            }
        }
    }
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               text=True)
    stdout, stderr = process.communicate(json.dumps(request))
    return json.loads(stdout)
```

- `LOG_LEVEL`: Set logging level (debug, info, warn, error)
- `WORK_DIR`: Working directory for processing
- `SERVER_PORT`: Port for WebSocket server (default: 8080)
- `SERVER_ADDRESS`: Address to bind server (default: localhost)
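As a sketch, a launcher script might set these variables before starting the server (the binary path and the commented-out launch line are assumptions):

```python
import os
import subprocess

def server_env(log_level="debug", port="9090"):
    """Merge MCP server settings into the inherited environment."""
    env = dict(os.environ)
    env.update({"LOG_LEVEL": log_level, "SERVER_PORT": port})
    return env

# Launch (assumes the built binary sits in the current directory):
# subprocess.Popen(["./resume-processor-mcp", "serve"], env=server_env())
```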
```bash
resume-processor-mcp serve --help
resume-processor-mcp stdio --help
```

```bash
# Check server health
curl http://localhost:8080/health
```

Response:

```json
{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00Z",
  "version": "1.0.0"
}
```

The server provides structured logging with the following fields:
- Request/response tracking
- Processing times
- Error rates
- Pipeline step completion
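A monitoring script can parse the `/health` payload shown earlier; a minimal sketch (assumes the JSON shape in the response example above):

```python
import json

def is_healthy(payload):
    """Return True when a health-endpoint payload reports a healthy status."""
    try:
        return json.loads(payload).get("status") == "healthy"
    except (ValueError, AttributeError):
        # Malformed JSON or a non-object payload counts as unhealthy.
        return False
```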
- `DEBUG`: Detailed MCP message tracing
- `INFO`: General operation information
- `WARN`: Non-critical issues
- `ERROR`: Critical errors and failures
- All resume content is sanitized
- File path validation prevents directory traversal
- Resource limits prevent excessive processing
- WebSocket origin validation
- Rate limiting (configurable)
- TLS support for production deployments
- Non-root user execution
- Minimal attack surface
- Regular dependency updates
- Pipeline Dependencies Missing

  ```bash
  make pipeline-setup
  ```

- Permission Errors

  ```bash
  chmod +x process_resume.sh
  chmod +x resume-processor-mcp
  ```

- Port Already in Use

  ```bash
  ./resume-processor-mcp serve --port 8081
  ```

- Docker Build Issues

  ```bash
  make clean-docker
  make docker-build
  ```

```bash
# Enable debug logging
./resume-processor-mcp serve --log-level debug

# View detailed MCP messages
./resume-processor-mcp stdio --log-level debug
```

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
```bash
# Install development tools
make install-tools

# Run linter
make lint

# Format code
make fmt

# Run all tests
make test benchmark
```

This project is licensed under the MIT License - see the LICENSE file for details.
When choosing a license for this project, the key considerations were:
MIT License

Pros:
- Simple and permissive: Short, easy to understand license text
- Wide adoption: Most popular open source license, familiar to developers
- Minimal restrictions: Only requires copyright notice and license text
- Compatible: Works well with most other licenses
- Commercial friendly: Companies can use without legal concerns
Cons:
- No patent protection: Does not explicitly handle patent rights
- No trademark protection: Does not address trademark usage
- Contributor protection: Limited protection for contributors
Apache License 2.0

Pros:
- Patent protection: Explicit patent grant and retaliation clause
- Contributor protection: Better protection for contributors
- Attribution requirements: Clear attribution and notice requirements
- Trademark protection: Explicit trademark usage guidelines
- Industry standard: Preferred by many large organizations
Cons:
- More complex: Longer license text with more legal terms
- Stricter requirements: More obligations for distributors
- License compatibility: Some compatibility issues with GPL 2.0
Choose MIT when:
- Building tools, libraries, or utilities (like this resume processor)
- Want maximum adoption and ease of use
- Target individual developers and small teams
- Simplicity and familiarity are priorities
- Patent concerns are minimal
Choose Apache 2.0 when:
- Building enterprise or business-critical software
- Working with large codebases or organizations
- Patent protection is important
- Need stronger contributor protections
- Target corporate and institutional users
For this project: MIT was chosen because this is a developer tool focused on personal document processing, where simplicity and wide adoption are more valuable than patent protection.
Open Systems Lab
Email: info@opensystemslab.com
Website: opensystemslab.com
Built with ❤️ using Go and the Model Context Protocol