한국어 · Website · Dashboard
⚡ Fast & Reliable: Built on 8+ years of web scraping expertise, 1,900+ production crawlers, and battle-tested anti-bot handling.
An MCP (Model Context Protocol) server that lets AI agents fetch and read web pages. Simply give it a URL, and it returns clean, LLM-ready content, fast.
Before: AI can't read web pages directly
After: "Summarize this article" just works ✨
- URL → Markdown: Preserves headings, lists, and links
- URL → Text: Plain text extraction
- Metadata: Title, author, date, images
- Clean Output: No ads, no navigation, no scripts
- JavaScript Rendering: Works with SPAs
- Built-in Billing: Credit tracking, subscription management, usage analytics (MCP keys)
- Auto-Retry: 429 rate limit responses are automatically retried, honoring Retry-After
- Dual Transport: Stdio (npx) + Streamable HTTP for flexible deployment
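The auto-retry behavior above follows standard HTTP semantics: a 429 response's Retry-After header carries either delta-seconds or an HTTP date. A minimal sketch of that delay calculation, as a hypothetical helper (not the package's actual code):

```typescript
// retry-delay.ts — hypothetical sketch of Retry-After handling, not the server's real implementation.
// Parse a Retry-After header value into a delay in milliseconds.
// The header may be delta-seconds ("120") or an HTTP date (RFC 9110).
export function retryDelayMs(retryAfter: string | null, now: Date = new Date()): number {
  const fallbackMs = 1000; // assumed default backoff when the header is absent or unparseable
  if (!retryAfter) return fallbackMs;

  const seconds = Number(retryAfter);
  if (Number.isFinite(seconds) && seconds >= 0) return seconds * 1000; // delta-seconds form

  const date = new Date(retryAfter); // HTTP-date form
  if (!Number.isNaN(date.getTime())) {
    return Math.max(0, date.getTime() - now.getTime());
  }
  return fallbackMs;
}
```

A real client would typically also cap the delay and limit the number of retry attempts.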
Scrapi MCP Server supports two transport modes:
| Mode | Best For | Node.js Required |
|---|---|---|
| Stdio | Claude Desktop, Cursor, Cline, Claude Code | Yes (auto via npx) |
| Streamable HTTP | All clients, Node.js-free environments | No |
- Scrapi MCP account (separate from the main Scrapi account)
- Claude Desktop, Cline, or Cursor installed
- Node.js 20+
No installation needed. Just configure your MCP client to use npx.
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": ["-y", "@scrapi.ai/mcp-server"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Tip: You can also pass the API key via a CLI argument instead of an env var:
"args": ["-y", "@scrapi.ai/mcp-server", "--api-key", "your-api-key"]
See Step 2 for where to put this configuration.
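Both methods supply the same setting. One plausible resolution order is to check the CLI flag first and fall back to the environment variable; the sketch below is hypothetical and the package may resolve the key differently:

```typescript
// resolve-key.ts — hypothetical sketch of API key resolution (flag over env var);
// not the package's actual code.
export function resolveApiKey(
  argv: string[],
  env: Record<string, string | undefined>,
): string | undefined {
  const i = argv.indexOf('--api-key');
  if (i !== -1 && argv[i + 1]) return argv[i + 1]; // explicit --api-key flag wins
  return env['SCRAPI_API_KEY'];                    // otherwise fall back to the env var
}
```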
# Clone the repository
git clone https://github.com/bamchi/scrapi-mcp-server.git
cd scrapi-mcp-server
# Install dependencies and build
npm install && npm run build
- Go to https://scrapi.ai
- Sign up or log in
- Visit the MCP Dashboard – your Free plan (500 credits/month) and API key are created automatically
- Copy your hsmcp_ API key
Option A: Via Settings (Recommended)
- Open Claude Desktop
- Click Settings (gear icon, bottom left)
- Select Developer tab
- Click "Edit Config" button
- Add the mcpServers configuration (see below)
- Save and restart Claude Desktop (Cmd+Q, then reopen)
Option B: Edit config file directly
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Configuration (npx):
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": ["-y", "@scrapi.ai/mcp-server"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Configuration (from source):
{
"mcpServers": {
"scrapi": {
"command": "node",
"args": ["/absolute/path/to/scrapi-mcp-server/dist/index.js"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Note: Replace /absolute/path/to/ with the actual path where you cloned the repository.
Config file location:
- macOS: ~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
- Windows: %APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
Configuration (npx):
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": ["-y", "@scrapi.ai/mcp-server"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Configuration (from source):
{
"mcpServers": {
"scrapi": {
"command": "node",
"args": ["/absolute/path/to/scrapi-mcp-server/dist/index.js"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Create or edit .cursor/mcp.json in your project root:
Configuration (npx):
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": ["-y", "@scrapi.ai/mcp-server"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Configuration (from source):
{
"mcpServers": {
"scrapi": {
"command": "node",
"args": ["/absolute/path/to/scrapi-mcp-server/dist/index.js"],
"env": {
"SCRAPI_API_KEY": "your-api-key"
}
}
}
}
Option 1: CLI command (Recommended)
claude mcp add scrapi-ai -s user -e SCRAPI_API_KEY=your-api-key -- npx -y @scrapi.ai/mcp-server
Or with --api-key:
claude mcp add scrapi-ai -s user -- npx -y @scrapi.ai/mcp-server --api-key your-api-key
Option 2: Edit config file
Edit ~/.claude.json or project .mcp.json:
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": ["-y", "@scrapi.ai/mcp-server", "--api-key", "your-api-key"]
}
}
}
Connect via Streamable HTTP, with no Node.js installation needed on the client side.
Endpoint: https://scrapi.ai/mcp
Cursor (.cursor/mcp.json):
{
"mcpServers": {
"scrapi": {
"url": "https://scrapi.ai/mcp",
"headers": {
"Authorization": "Bearer your-api-key"
}
}
}
}
Claude Code (CLI):
claude mcp add --transport http scrapi https://scrapi.ai/mcp \
  --header "Authorization: Bearer your-api-key"
Cline (cline_mcp_settings.json):
{
"mcpServers": {
"scrapi": {
"type": "streamableHttp",
"url": "https://scrapi.ai/mcp",
"headers": {
"Authorization": "Bearer your-api-key"
}
}
}
}
Claude Desktop (claude_desktop_config.json):
{
"mcpServers": {
"scrapi": {
"command": "npx",
"args": [
"mcp-remote",
"https://scrapi.ai/mcp",
"--header",
"Authorization: Bearer your-api-key"
]
}
}
}
Note: Claude Desktop requires the mcp-remote proxy for HTTP connections.
Self-host the HTTP server (advanced)
Run your own instance instead of using the hosted endpoint:
SCRAPI_API_KEY=your-api-key npx -y -p @scrapi.ai/mcp-server scrapi-http
# or from source:
SCRAPI_API_KEY=your-api-key node dist/http.js
The server starts at http://localhost:3000 with the MCP endpoint at /mcp. Configure it with the PORT and HOST environment variables. Replace the URL in the client configurations above with your self-hosted URL (e.g. http://localhost:3000/mcp).
Health check: GET http://localhost:3000/health
- Claude Desktop: Fully quit (Cmd+Q on macOS, Alt+F4 on Windows) and reopen
- Claude Code: Restart the session
- Cline: Restart VS Code
- Cursor: Restart the editor
You should see the MCP server connection indicator.
Scrapes a webpage and returns AI-readable content.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | URL to scrape |
| format | string | No | markdown (default) or text |
Example:
{
"url": "https://example.com/article",
"format": "markdown"
}
Markdown Output:
# Article Title
> Author: John Doe | Published: 2024-01-15
## Introduction
This is the main content of the article, converted to clean markdown...
## Key Points
- Point 1: Important detail
- Point 2: Another insight
- [Related Link](https://example.com/related)
Text Output:
Article Title
Author: John Doe | Published: 2024-01-15
Introduction
This is the main content of the article, converted to plain text...
Key Points
- Point 1: Important detail
- Point 2: Another insight
Scrapes multiple webpages in parallel and returns AI-readable content.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| urls | string[] | Yes | URLs to scrape (max 10) |
| format | string | No | markdown (default) or text |
Example:
{
"urls": ["https://example.com/page1", "https://example.com/page2"],
"format": "text"
}
Output:
[
{
"url": "https://example.com/page1",
"content": "Page 1 Title\n\nThis is the content of page 1..."
},
{
"url": "https://example.com/page2",
"content": "Page 2 Title\n\nThis is the content of page 2..."
}
]
Check the status of all ScraperServer instances. Shows server health, circuit breaker state, failure counts, and timing info.
Parameters: None
Example:
{}
Output:
## ScraperServer Status
Total: 3 | Available: 2
| Name | OS | Status | Failures | Last Success | Last Failure |
|------|----|--------|----------|--------------|--------------|
| pluto | linux | OK | 0 | 01/30 14:23:05 | - |
| mars | mac | FAIL | 2 | 01/29 10:00:00 | 01/30 13:55:12 |
| venus | linux | OPEN | 3 | 01/28 09:00:00 | 01/30 12:00:00 |
### Issues
- **mars**: Connection refused - connect(2)
- **venus**: Circuit breaker open until 01/30 12:30:00
- **venus**: Net::ReadTimeout

Status values:
| Status | Description |
|---|---|
| OK | Server is healthy |
| FAIL | Server is unhealthy |
| OPEN | Circuit breaker open (isolated for 30 min) |
| N/A | Not yet checked |
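The OPEN state reflects a standard circuit-breaker pattern: after repeated failures a server is isolated for a cooldown window (30 minutes here) before traffic is routed to it again. A minimal sketch of the idea, hypothetical and unrelated to the actual implementation:

```typescript
// circuit-breaker.ts — minimal sketch of the failure-isolation idea (hypothetical).
type State = 'OK' | 'FAIL' | 'OPEN';

export class CircuitBreaker {
  private failures = 0;
  private openUntil = 0; // epoch ms until which the server stays isolated

  constructor(
    private threshold = 3,            // consecutive failures before opening
    private cooldownMs = 30 * 60_000, // 30-minute isolation, as in the table above
  ) {}

  state(now = Date.now()): State {
    if (now < this.openUntil) return 'OPEN';
    return this.failures === 0 ? 'OK' : 'FAIL';
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openUntil = 0;
  }

  recordFailure(now = Date.now()): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openUntil = now + this.cooldownMs;
  }
}
```

A production breaker would usually add a half-open probe state after the cooldown instead of dropping straight back to FAIL/OK.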
Check your API usage and remaining credits.
Parameters: None
Example:
{}
Output:
## MCP Credits
| Item | Value |
|------|-------|
| Plan | starter |
| Subscription Credits | 1,500 |
| Purchased Credits | 200 |
| Total Remaining | 1,700 |
| Period End | 2026-03-01 |
Retrieve detailed billing information, including subscription, plans, daily usage, and spending limits.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| action | string | Yes | subscription, plans, daily_usage, or spending_limits |
| start_date | string | No | Start date for daily_usage (YYYY-MM-DD, default: 30 days ago) |
| end_date | string | No | End date for daily_usage (YYYY-MM-DD, default: today) |
Example β Current subscription:
{ "action": "subscription" }
## MCP Subscription
| Item | Value |
|------|-------|
| Plan | starter (Starter) |
| Status | active |
| Monthly Credits | 2,000 |
| Price | $19.00/mo |
| Rate Limit | 30 RPM |
| Burst Limit | 5 concurrent |
| Period End | 2026-03-01 |
Example – Available plans:
{ "action": "plans" }
## Available MCP Plans
| Plan | Credits/mo | Price | RPM | Burst |
|------|-----------|-------|-----|-------|
| Free (free) | 500 | Free | 10 | 2 |
| Starter (starter) | 2,000 | $19.00/mo | 30 | 5 |
| Pro (pro) | 10,000 | $49.00/mo | 60 | 10 |
| Business (business) | 50,000 | $149.00/mo | 120 | 20 |
Example – Daily usage history:
{ "action": "daily_usage", "start_date": "2026-02-01", "end_date": "2026-02-07" }
## Daily Usage (2026-02-01 ~ 2026-02-07)
| Date | Requests | Credits | Top Tool |
|------|----------|---------|----------|
| 2026-02-07 | 45 | 45 | scrape#scrape (45) |
| 2026-02-06 | 120 | 120 | scrape#scrape (100) |
**Total**: 165 requests, 165 credits
Example – Spending limits:
{ "action": "spending_limits" }
## Spending Limits
| Item | Value |
|------|-------|
| Daily Limit | 500 credits |
| Today's Usage | 120 credits |
| Usage % | 24.0% |
User: Summarize this article: https://news.example.com/article/12345
Claude: [calls scrape_url]
Here's a summary of the article:
## Key Points
- Point 1: ...
- Point 2: ...
- Point 3: ...
User: Get the content from https://example.com/data
Claude: [calls scrape_url]
# Page Title
> Source: https://example.com/data
The page content is returned in clean Markdown format...
User: What's the pricing on https://competitor.com/product/abc
Claude: [calls scrape_url]
Here's the pricing information:
- **Product**: ABC Premium
- **Regular Price**: $99.00
- **Sale Price**: $79.00 (20% off)
User: Read https://docs.example.com/api/v2 and write integration code
Claude: [calls scrape_url]
I've analyzed the API documentation. Here's the integration code:
// api-client.ts
export class ExampleApiClient {
private baseUrl = 'https://api.example.com/v2';
async getData(): Promise<Response> {
// ...
}
}
┌─────────────────┐
│      User       │
│ "Summarize this │
│   URL for me"   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Claude Desktop  │
│    / Cursor     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐     ┌─────────────────┐
│   MCP Server    │────►│   Scrapi API    │
│  (scrape_url)   │     │ (format param)  │
└────────┬────────┘     └────────┬────────┘
         │                       │
         └───────────┬───────────┘
                     │ Markdown/Text Response
                     ▼
            ┌─────────────────┐
            │   AI Response   │
            │ (Summary, etc.) │
            └─────────────────┘
Built by the team behind Scrapi, with 8+ years of web scraping experience:
- ✅ 1,900+ production crawlers
- ✅ JavaScript rendering support
- ✅ Anti-bot handling
- ✅ 99.9% uptime
Make sure your API key is provided via one of these methods:
- Environment variable: Set SCRAPI_API_KEY in your configuration
- CLI argument: Pass --api-key your-key in the args
Verify that your API key is correct and active in your Scrapi dashboard.
If you upgraded but still see old behavior, clear the npx cache:
npx clear-npx-cache
- Ensure Node.js 20+ is installed
- Try running node /absolute/path/to/scrapi-mcp-server/dist/index.js manually to check for errors
- Fully quit Claude Desktop (Cmd+Q on macOS, Alt+F4 on Windows) and restart
- Check Settings > Developer to verify the server is listed
Update Claude Desktop to the latest version: Claude menu → "Check for Updates..."
- Email: support@scrapi.ai
- Issues: GitHub Issues
MIT Β© Scrapi