
Resume Processor MCP Server

A Model Context Protocol (MCP) server implementation in Go that integrates with existing resume processing pipelines. This server enables AI platforms like Claude, Copilot, and custom agents to leverage resume processing capabilities through a standardized MCP interface.

Overview

The Resume Processor MCP Server bridges the gap between AI platforms and resume processing pipelines, providing:

  • Resume Processing: Convert Markdown resumes to PDF, LaTeX, and LinkedIn templates
  • Resume Analysis: Extract metadata, sections, contact information, and skills
  • Template Management: Access resume templates and examples
  • Pipeline Integration: Seamless integration with existing Docker and shell-based processing
  • Multi-Modal Support: WebSocket and stdio communication modes

Features

🚀 MCP Protocol Implementation

  • Full MCP 2024-11-05 specification compliance
  • Tools, Resources, and Prompts support
  • WebSocket and stdio communication modes
  • Structured error handling and logging
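
Per the MCP 2024-11-05 specification, a client opens a session with an `initialize` request before calling any tools. A minimal request looks like this (client name and version are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```

After receiving the server's response, the client sends a `notifications/initialized` notification and can then issue `tools/call`, `resources/read`, and `prompts/get` requests.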

📄 Resume Processing Tools

  • process_resume: Full pipeline processing (Markdown → PDF/LaTeX/LinkedIn)
  • analyze_resume: Content analysis and metadata extraction
  • get_processing_status: Job status tracking
  • list_templates: Available template enumeration

🔗 AI Platform Integration

  • Claude Desktop: Direct stdio integration with configuration templates
  • Cursor IDE: WebSocket and extension integration support
  • VS Code Copilot: WebSocket extension support
  • Custom Agents: Flexible communication modes

🐳 Docker & Container Support

  • Multi-stage Docker builds with full pipeline dependencies
  • Automatic dependency detection (Docker vs local processing)
  • Health checks and graceful shutdown

Quick Start

Prerequisites

# Install Go 1.22+
go version

# Install pipeline dependencies (macOS via Homebrew; on Linux, install
# pandoc, TeX Live, and Python 3 with your distribution's package manager)
brew install pandoc
brew install --cask mactex
brew install python3

# Install Python dependencies
pip3 install -r requirements.txt

Build and Run

# Build the MCP server
make build

# Run in WebSocket mode
make run

# Run in stdio mode for MCP clients
make run-stdio

# Test the server
make integration-test

MCP Configuration

For Claude Desktop

📖 See the Claude Desktop Integration Guide for setup instructions.

Quick configuration - Add to your Claude configuration file (~/.claude/config.json):

{
  "mcpServers": {
    "resume-processor": {
      "command": "/path/to/resume-processor-mcp",
      "args": ["stdio", "--work-dir", "/path/to/resume/directory"]
    }
  }
}

For Cursor IDE

📖 See the Cursor IDE Integration Guide for setup instructions.

Quick configuration - Add to your Cursor workspace settings (.vscode/settings.json):

{
  "cursor.mcp.servers": {
    "resume-processor": {
      "url": "ws://localhost:8080/mcp",
      "name": "Resume Processor"
    }
  }
}

For VS Code Copilot

Configure as an extension in your workspace:

{
  "copilot.extensions": {
    "resume-processor": {
      "url": "ws://localhost:8080/mcp"
    }
  }
}

Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────────┐
│   AI Platform   │────│  MCP Protocol    │────│  Resume Processor   │
│  (Claude/etc)   │    │  (WebSocket/     │    │     Service         │
│                 │    │   Stdio)         │    │                     │
└─────────────────┘    └──────────────────┘    └─────────────────────┘
                                                          │
                                                          ▼
                                                ┌─────────────────────┐
                                                │ Processing Pipeline │
                                                │                     │
                                                │ ┌─────────────────┐ │
                                                │ │   Markdown      │ │
                                                │ │      ↓          │ │
                                                │ │   Pandoc        │ │
                                                │ │      ↓          │ │
                                                │ │   LaTeX         │ │
                                                │ │      ↓          │ │
                                                │ │   PDFLaTeX      │ │
                                                │ │      ↓          │ │
                                                │ │   PDF + LinkedIn│ │
                                                │ └─────────────────┘ │
                                                └─────────────────────┘

MCP Tools Reference

process_resume

Process a resume through the pipeline.

Parameters:

  • resume_content (string, required): Markdown content of the resume
  • resume_filename (string, optional): Filename for the resume (default: "resume.md")
  • output_formats (array, optional): Formats to generate (default: ["pdf", "latex", "linkedin"])

Example:

{
  "name": "process_resume",
  "arguments": {
    "resume_content": "# John Doe\n\n## Career Summary\n...",
    "resume_filename": "john-doe-resume.md",
    "output_formats": ["pdf", "linkedin"]
  }
}

analyze_resume

Analyze resume content and extract structured metadata.

Parameters:

  • resume_content (string, required): Markdown content to analyze

Returns:

  • Word count, detected sections, contact information, and extracted skills
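
Example (same shape as the process_resume call above):

```json
{
  "name": "analyze_resume",
  "arguments": {
    "resume_content": "# John Doe\n\n## Career Summary\n..."
  }
}
```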

get_processing_status

Check the status of a processing job.

Parameters:

  • processing_id (string, required): ID returned from process_resume
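
Example (the ID value is illustrative; use the one returned by process_resume):

```json
{
  "name": "get_processing_status",
  "arguments": {
    "processing_id": "id-returned-by-process_resume"
  }
}
```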

list_templates

List available resume templates.

Returns:

  • Array of available template names

Resources

The server provides built-in resources:

  • resume://examples/professional: Professional resume template example
  • resume://docs/pipeline: Pipeline documentation
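
Resources are fetched with the standard MCP `resources/read` request, for example:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "resume://examples/professional" }
}
```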

Prompts

Pre-configured prompts for common use cases:

  • resume_review: Review and provide feedback on resume content
  • linkedin_optimization: Generate LinkedIn profile optimization suggestions
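
Prompts are retrieved with the standard MCP `prompts/get` request; whether a given prompt takes arguments is up to the server, so the empty `arguments` object below is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": { "name": "resume_review", "arguments": {} }
}
```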

Development

Project Structure

resume-processor-mcp/
├── cmd/server/           # Main application entry point
├── pkg/
│   ├── mcp/             # MCP protocol types and definitions
│   ├── processor/       # Resume processing service
│   └── server/          # MCP server implementation
├── scripts/             # Resume processing scripts
├── Dockerfile.mcp       # Docker configuration
├── Makefile            # Build and development commands
└── mcp-config.json     # MCP client configurations

Building

# Build for current platform
make build

# Build for all platforms
make build-all

# Build Docker image
make docker-build

# Run tests
make test

# Run with coverage
make test-coverage

Testing

# Unit tests
make test

# Integration tests
make integration-test

# MCP protocol test
make mcp-test

# Pipeline test
make pipeline-test

Deployment

Docker Deployment

# Build Docker image
make docker-build

# Run container
docker run -p 8080:8080 -v $(pwd):/app/data resume-processor-mcp:latest

# With custom configuration
docker run -p 8080:8080 \
  -v $(pwd):/app/data \
  -e LOG_LEVEL=debug \
  resume-processor-mcp:latest

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resume-processor-mcp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: resume-processor-mcp
  template:
    metadata:
      labels:
        app: resume-processor-mcp
    spec:
      containers:
      - name: mcp-server
        image: resume-processor-mcp:latest
        ports:
        - containerPort: 8080
        env:
        - name: LOG_LEVEL
          value: "info"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: resume-processor-mcp-service
spec:
  selector:
    app: resume-processor-mcp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

Integration Examples

Claude Desktop Integration

// Example interaction with Claude
const mcpClient = new MCPClient();

await mcpClient.callTool("process_resume", {
  resume_content: resumeMarkdown,
  output_formats: ["pdf", "linkedin"]
});

Custom AI Agent Integration

import json
import subprocess

def process_resume_via_mcp(resume_content):
    cmd = ["./resume-processor-mcp", "stdio"]

    # MCP requires an initialize handshake before any tool call.
    initialize = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "custom-agent", "version": "0.1.0"},
        },
    }
    initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}
    call = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "process_resume",
            "arguments": {"resume_content": resume_content},
        },
    }

    process = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               text=True)

    # Messages are newline-delimited JSON on stdio.
    stdout, stderr = process.communicate(
        "\n".join(json.dumps(m) for m in (initialize, initialized, call)) + "\n"
    )

    # The last response line corresponds to the tools/call request.
    return json.loads(stdout.strip().splitlines()[-1])

Configuration

Environment Variables

  • LOG_LEVEL: Set logging level (debug, info, warn, error)
  • WORK_DIR: Working directory for processing
  • SERVER_PORT: Port for WebSocket server (default: 8080)
  • SERVER_ADDRESS: Address to bind server (default: localhost)

Command Line Options

resume-processor-mcp serve --help
resume-processor-mcp stdio --help

Monitoring and Logging

Health Checks

# Check server health
curl http://localhost:8080/health

# Response
{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00Z",
  "version": "1.0.0"
}
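
A caller can guard on the health response shown above. A minimal Python sketch, using only the field names from the documented example response:

```python
import json

def is_healthy(body: str) -> bool:
    """Return True if a /health response body reports a healthy server."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    return payload.get("status") == "healthy"

# Using the documented response shape:
print(is_healthy('{"status": "healthy", "timestamp": "2024-01-15T10:30:00Z", "version": "1.0.0"}'))  # True
```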

Metrics

The server provides structured logging with the following fields:

  • Request/response tracking
  • Processing times
  • Error rates
  • Pipeline step completion

Log Levels

  • DEBUG: Detailed MCP message tracing
  • INFO: General operation information
  • WARN: Non-critical issues
  • ERROR: Critical errors and failures

Security Considerations

Input Validation

  • All resume content is sanitized
  • File path validation prevents directory traversal
  • Resource limits prevent excessive processing
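
The directory-traversal check can be illustrated with a short sketch. This demonstrates the general technique, not the server's actual Go implementation:

```python
import os

def is_within_workdir(work_dir: str, user_path: str) -> bool:
    """Reject user-supplied paths that escape the working directory (e.g. via '..')."""
    base = os.path.realpath(work_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # A safe path resolves to the base directory itself or something under it.
    return target == base or target.startswith(base + os.sep)

print(is_within_workdir("/tmp/work", "resume.md"))      # True
print(is_within_workdir("/tmp/work", "../etc/passwd"))  # False
```

Resolving with `realpath` before comparing also defends against symlinks that point outside the working directory.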

Network Security

  • WebSocket origin validation
  • Rate limiting (configurable)
  • TLS support for production deployments

Container Security

  • Non-root user execution
  • Minimal attack surface
  • Regular dependency updates

Troubleshooting

Common Issues

  1. Pipeline Dependencies Missing

     make pipeline-setup

  2. Permission Errors

     chmod +x process_resume.sh
     chmod +x resume-processor-mcp

  3. Port Already in Use

     ./resume-processor-mcp serve --port 8081

  4. Docker Build Issues

     make clean-docker
     make docker-build

Debug Mode

# Enable debug logging
./resume-processor-mcp serve --log-level debug

# View detailed MCP messages
./resume-processor-mcp stdio --log-level debug

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Setup

# Install development tools
make install-tools

# Run linter
make lint

# Format code
make fmt

# Run all tests and benchmarks
make test benchmark

License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT vs Apache 2.0 License Comparison

When choosing a license for this project, the key considerations were:

MIT License (Current Choice)

Pros:

  • Simple and permissive: Short, easy to understand license text
  • Wide adoption: Most popular open source license, familiar to developers
  • Minimal restrictions: Only requires copyright notice and license text
  • Compatible: Works well with most other licenses
  • Commercial friendly: Companies can use without legal concerns

Cons:

  • No patent protection: Does not explicitly handle patent rights
  • No trademark protection: Does not address trademark usage
  • Limited contributor protection: Offers little explicit legal protection for contributors

Apache 2.0 License (Alternative)

Pros:

  • Patent protection: Explicit patent grant and retaliation clause
  • Contributor protection: Better protection for contributors
  • Attribution requirements: Clear attribution and notice requirements
  • Trademark protection: Explicit trademark usage guidelines
  • Industry standard: Preferred by many large organizations

Cons:

  • More complex: Longer license text with more legal terms
  • Stricter requirements: More obligations for distributors
  • License compatibility: Some compatibility issues with GPL 2.0

How to Choose

Choose MIT when:

  • Building tools, libraries, or utilities (like this resume processor)
  • Want maximum adoption and ease of use
  • Target individual developers and small teams
  • Simplicity and familiarity are priorities
  • Patent concerns are minimal

Choose Apache 2.0 when:

  • Building enterprise or business-critical software
  • Working with large codebases or organizations
  • Patent protection is important
  • Need stronger contributor protections
  • Target corporate and institutional users

For this project: MIT was chosen because this is a developer tool focused on personal document processing, where simplicity and wide adoption are more valuable than patent protection.

Contact

Open Systems Lab
Email: info@opensystemslab.com
Website: opensystemslab.com


Built with ❤️ using Go and the Model Context Protocol