# How to integrate Scrape do MCP with Claude Code

```json
{
  "title": "How to integrate Scrape do MCP with Claude Code",
  "toolkit": "Scrape do",
  "toolkit_slug": "scrape_do",
  "framework": "Claude Code",
  "framework_slug": "claude-code",
  "url": "https://composio.dev/toolkits/scrape_do/framework/claude-code",
  "markdown_url": "https://composio.dev/toolkits/scrape_do/framework/claude-code.md",
  "updated_at": "2026-05-12T10:24:44.960Z"
}
```

## Introduction

Manage Scrape do directly from Claude Code without worrying about OAuth hassles, breaking API changes, or reliability and security concerns.
You can do this in two ways:
- Via [Composio Connect](https://dashboard.composio.dev/login?utm_source=toolkits&utm_medium=framework_template&utm_campaign=claude-code&utm_content=composio_connect&next=%2F~%2Forg%2Fconnect%2Fclients%2Fclaude-code) - Direct and easiest approach
- Via [Composio SDK](https://docs.composio.dev/docs?utm_source=toolkits&utm_medium=framework_template&utm_campaign=claude-code&utm_content=composio_sdk) - Programmatic approach with more control

## Also integrate Scrape do with

- [OpenAI Agents SDK](https://composio.dev/toolkits/scrape_do/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/scrape_do/framework/claude-agents-sdk)
- [Claude Cowork](https://composio.dev/toolkits/scrape_do/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/scrape_do/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/scrape_do/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/scrape_do/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/scrape_do/framework/cli)
- [Google ADK](https://composio.dev/toolkits/scrape_do/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/scrape_do/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/scrape_do/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/scrape_do/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/scrape_do/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/scrape_do/framework/crew-ai)

## TL;DR

- A single MCP URL connects multiple apps to Claude Code with zero auth hassles.
- Programmatic tool calling lets the LLM write code in a remote workbench to handle complex tool chaining, cutting down on round trips for frequent tool calls.
- Large tool responses are handled outside the LLM's context to minimize context rot.
- Dynamic, just-in-time access to 20,000 tools across 1,000+ apps enables cross-app workflows: only the tools a task needs are loaded, so the LLM isn't overwhelmed by ones it doesn't.

## Connect Scrape do to Claude Code

### Connecting Scrape do to Claude Code using Composio
1. Add the Composio MCP server to Claude Code:

```bash
claude mcp add --scope user --transport http composio https://connect.composio.dev/mcp
```

## What is Claude Code?

Claude Code is Anthropic's command-line developer tool that lets you use Claude directly inside your terminal. Instead of switching between your editor, browser, and chat, you can stay in your project folder and ask Claude to help you build, debug, refactor, and understand code right where you're working.
Key features include:
- Terminal-Native Experience: Work with Claude directly in your command line without switching contexts
- MCP Support: Built-in support for Model Context Protocol servers to extend Claude's capabilities
- Project Context: Claude understands your project structure and can read, write, and modify files
- Interactive Development: Ask questions, debug code, and get help in real-time while coding
- Multi-Platform: Works on macOS, Linux, WSL, and Windows

## What is the Scrape do MCP server, and what's possible with it?

The Scrape do MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants such as Claude and Cursor directly to your Scrape do account. It provides structured and secure access to robust web scraping tools, so your agent can scrape dynamic pages, manage sessions, set custom headers or proxies, and extract structured data from any website on your behalf.
- Dynamic page scraping with headless browsers: Retrieve fully rendered HTML content from JavaScript-heavy or protected websites by leveraging advanced browser emulation and proxy rotation.
- Custom scraping session management: Set device type, cookies, wait times, and custom headers to imitate different users, maintain sessions, or access device-specific content for tailored data extraction.
- Proxy and anti-bot bypass control: Enable super or proxy modes to utilize residential, mobile, or datacenter proxies, helping your agent bypass strict anti-bot systems and geo-restrictions seamlessly.
- Targeted resource filtering: Block specific URLs like ads or analytics scripts during scraping to increase speed, avoid distractions, and improve privacy.
- Account usage and statistics retrieval: Access real-time usage stats, subscription status, and remaining request limits so your agent can monitor scraping quotas and avoid interruptions.
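
If you want to call these same tools from your own scripts rather than through an MCP client, the Composio SDK can execute them directly. Here is a minimal sketch, assuming the Composio Python SDK's `tools.execute` interface; the `url` argument is illustrative, so check the tool's input schema for the exact field names:

```python
# Minimal sketch: executing a Scrape do tool through the Composio SDK.
# Assumes the Python SDK's tools.execute interface; the "url" argument
# below is illustrative, so check the tool's input schema for exact fields.
import os
from composio import Composio
from dotenv import load_dotenv

load_dotenv()

composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))

result = composio.tools.execute(
    "SCRAPE_DO_SCRAPE_DO_GET_PAGE",  # tool slug from the Supported Tools table below
    user_id=os.getenv("USER_ID"),
    arguments={"url": "https://example.com"},
)
print(result)
```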

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `SCRAPE_DO_CANCEL_ASYNC_JOB` | Cancel Async Job | Tool to cancel an asynchronous scraping job. Use when you need to stop processing of pending tasks in a job. Completed tasks remain available. |
| `SCRAPE_DO_CREATE_ASYNC_JOB` | Create Async Scraping Job | Tool to create an asynchronous scraping job with specified targets and options. Use when you need to scrape multiple URLs in parallel without waiting for results. Returns a job ID immediately for polling results later via the get job status action. |
| `SCRAPE_DO_GET_ACCOUNT_INFO` | Get Account Information | Retrieves account information and usage statistics from Scrape.do. This action makes a GET request to the Scrape.do info endpoint to fetch: - Subscription status - Concurrent request limits and usage - Monthly request limits and remaining requests - Real-time usage statistics Rate limit: Maximum 10 requests per minute. Use remaining request counts to monitor credits proactively, as different scraping operations (e.g., rendered-page requests) consume varying credit amounts and exhaustion mid-run causes failures. |
| `SCRAPE_DO_GET_AMAZON_OFFERS` | Get Amazon Product Offers | Get all seller offers for any Amazon product. Retrieves every seller listing including pricing, shipping costs, seller information, and Buy Box status in structured JSON format. Use when you need to compare prices across multiple sellers or find the best deal for a specific product. |
| `SCRAPE_DO_GET_AMAZON_PRODUCT` | Get Amazon product details | Extract structured product data from Amazon product detail pages (PDP). Returns comprehensive product information including title, pricing, ratings, images, best seller rankings, and technical specifications in JSON format. |
| `SCRAPE_DO_GET_AMAZON_RAW_HTML` | Get Amazon raw HTML | Tool to get raw HTML from any Amazon page with ZIP code geo-targeting. Use when you need complete unprocessed HTML source from Amazon URLs with location-based targeting. Ideal for scraping pages not covered by other structured endpoints. |
| `SCRAPE_DO_GET_ASYNC_ACCOUNT_INFO` | Get Async API Account Information | Tool to get account information for the Async API including concurrency limits and usage statistics. Use when you need to check available concurrency slots, active jobs, or remaining credits for Async API operations. |
| `SCRAPE_DO_GET_ASYNC_JOB` | Get Async Job Details | Tool to retrieve details and status of a specific asynchronous scraping job. Use when you need to check the progress, status, or results of a previously created async job. Returns job metadata including creation time, completion time, task counts, and detailed task list. |
| `SCRAPE_DO_GET_ASYNC_TASK` | Get Async Task Result | Tool to retrieve the result of a specific task within an asynchronous job. Returns the scraped content for that particular URL. Use when you need to check the status and result of a previously submitted async scraping task. |
| `SCRAPE_DO_SCRAPE_DO_GET_PAGE` | Scrape webpage using scrape.do | A tool to scrape web pages using scrape.do's API service. Makes a basic GET request to fetch webpage content while handling anti-bot protections and proxy rotation automatically. Does not execute JavaScript by default — pages requiring client-side rendering (SPAs, dynamically loaded content) will return incomplete HTML; use SCRAPE_DO_GET_RENDER_PAGE or set render=true for those cases. |
| `SCRAPE_DO_LIST_ASYNC_JOBS` | List Asynchronous Scraping Jobs | Tool to list all asynchronous scraping jobs. Returns paginated list of jobs with their status and metadata. Use when you need to retrieve job history or monitor job statuses. Supports pagination with up to 100 jobs per page. |
| `SCRAPE_DO_SCRAPE_DO_PROXY_MODE` | Use Scrape.do Proxy Mode | This tool implements the Proxy Mode functionality of scrape.do, which allows routing requests through their proxy server. It provides an alternative way to access web scraping capabilities by handling complex JavaScript-rendered pages, geolocation-based routing, device simulation, and built-in anti-bot and retry mechanisms. |
| `SCRAPE_DO_SCRAPE_URL_POST` | Scrape URL using POST method | Tool to scrape web pages using POST method via scrape.do API. Use when you need to send POST requests to target websites with custom request body data. Supports all parameters from GET endpoint plus request body customization for POST/PUT/PATCH methods. |
| `SCRAPE_DO_SEARCH_AMAZON` | Search Amazon products | Tool to search Amazon and scrape product listings with structured results. Performs keyword searches and returns structured product data including titles, prices, ratings, Prime status, sponsored flags, and position rankings in JSON format. Use when you need to search for products on Amazon marketplace or gather product information from search results. |
| `SCRAPE_DO_SET_BLOCK_URLS` | Block specific URLs during scraping | This tool allows users to block specific URLs during the scraping process. It's particularly useful for blocking unwanted resources like analytics scripts, advertisements, or any other URLs that might interfere with the scraping process or slow it down. It provides granular control by allowing users to specify URL patterns to block, thereby improving scraping performance and maintaining privacy. |
| `SCRAPE_DO_SET_REGIONAL_GEO_CODE` | Set Regional Geolocation for Scraping | This tool allows users to set a broader geographical targeting by specifying a region code instead of a specific country code. This is useful when you want to scrape content from an entire region rather than a specific country. Note that this feature requires super mode to be enabled and is only available for Business Plan or higher subscriptions. |
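
Several of these tools pair naturally: for example, an agent can check its remaining quota before kicking off a large run. A short sketch, again assuming the Composio Python SDK's `tools.execute` interface; the response shape is whatever `SCRAPE_DO_GET_ACCOUNT_INFO` returns, so inspect it rather than relying on any particular key names:

```python
# Sketch: check remaining Scrape.do quota before starting a big scraping run.
# Assumes the Composio Python SDK's tools.execute interface.
import os
from composio import Composio
from dotenv import load_dotenv

load_dotenv()

composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))

# Rate limited upstream: max 10 requests per minute.
info = composio.tools.execute(
    "SCRAPE_DO_GET_ACCOUNT_INFO",
    user_id=os.getenv("USER_ID"),
    arguments={},
)
print(info)  # inspect subscription status and remaining request counts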

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Scrape do MCP server is an implementation of the Model Context Protocol that connects Claude Code (and other AI assistants like Claude and Cursor) directly to your Scrape do account. It provides structured and secure access so Claude can perform Scrape do operations on your behalf.
With Composio's managed implementation, you don't have to create your own developer app: the managed server helps you prototype and go from 0 to 1 faster. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- An Anthropic account with Claude Pro, Max, or API billing enabled
- A Composio API key
- A Scrape do account
- Basic knowledge of Python or TypeScript

### 1. Install Claude Code

To install Claude Code, use one of the following methods based on your operating system:
```bash
# macOS, Linux, WSL
curl -fsSL https://claude.ai/install.sh | bash

# Windows PowerShell
irm https://claude.ai/install.ps1 | iex

# Windows CMD
curl -fsSL https://claude.ai/install.cmd -o install.cmd && install.cmd && del install.cmd
```

### 2. Set up Claude Code

Open a terminal, go to your project folder, and start Claude Code:
- Claude Code will open in your terminal
- Follow the prompts to sign in with your Anthropic account
- Complete the authentication flow
- Once authenticated, you can start using Claude Code
```bash
cd your-project-folder
claude
```

### 3. Set up environment variables

Create a .env file in your project root with the following variables:
- COMPOSIO_API_KEY authenticates with Composio (get it from [Composio dashboard](https://dashboard.composio.dev/login?utm_source=toolkits&utm_medium=framework_template&utm_campaign=claude-code&utm_content=api_key&next=%2F~%2Forg%2Fconnect%2Fclients%2Fclaude-code))
- USER_ID identifies the user for session management (use any unique identifier)
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
```

### 4. Install Composio library

Install the Composio SDK and a dotenv helper for the language you're working in:
```python
pip install composio-core python-dotenv
```

```typescript
npm install @composio/core dotenv
```

### 5. Generate Composio MCP URL

The script below creates a Composio Tool Router session for your user with the `scrape_do` toolkit enabled, then prints the MCP URL along with the exact `claude mcp add` command for the next step:
```python
import os
from composio import Composio
from dotenv import load_dotenv

# Load COMPOSIO_API_KEY and USER_ID from the .env file
load_dotenv()

COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
USER_ID = os.getenv("USER_ID")

if not COMPOSIO_API_KEY or not USER_ID:
    raise ValueError("COMPOSIO_API_KEY and USER_ID required in .env")

composio_client = Composio(api_key=COMPOSIO_API_KEY)

# Create a Tool Router session scoped to the scrape_do toolkit
composio_session = composio_client.create(
    user_id=USER_ID,
    toolkits=["scrape_do"],
)

COMPOSIO_MCP_URL = composio_session.mcp.url

print(f"MCP URL: {COMPOSIO_MCP_URL}")
print("\nUse this command to add to Claude Code:")
print(f'claude mcp add --transport http scrape_do-composio "{COMPOSIO_MCP_URL}" --headers "X-API-Key:{COMPOSIO_API_KEY}"')
```

```typescript
import 'dotenv/config';
import { Composio } from '@composio/core';

const { COMPOSIO_API_KEY, USER_ID } = process.env;

if (!COMPOSIO_API_KEY || !USER_ID) {
  throw new Error('COMPOSIO_API_KEY and USER_ID required in .env');
}

const composioClient = new Composio({ apiKey: COMPOSIO_API_KEY });

const composioSession = await composioClient.create(USER_ID, {
  toolkits: ['scrape_do'],
});

const composioMcpUrl = composioSession?.mcp.url;

console.log(`MCP URL: ${composioMcpUrl}`);
console.log(`\nUse this command to add to Claude Code:`);
console.log(`claude mcp add --transport http scrape_do-composio "${composioMcpUrl}" --headers "X-API-Key:${COMPOSIO_API_KEY}"`);
```

### 6. Run the script and copy the MCP URL

Run the script with your preferred runtime, then copy the MCP URL (or the complete `claude mcp add` command) from the output:
```python
python generate_mcp_url.py
```

```typescript
node --loader ts-node/esm generate_mcp_url.ts
# or if using tsx
tsx generate_mcp_url.ts
```

### 7. Add Scrape do MCP to Claude Code

In your terminal, add the MCP server using the command from the previous step. Here is what each part does:
- claude mcp add registers a new MCP server with Claude Code
- --transport http specifies that this is an HTTP-based MCP server
- The server name (scrape_do-composio) is how you'll reference it
- The URL points to your Composio Tool Router session
- --headers includes your Composio API key for authentication
After running the command, close the current Claude Code session and start a new one for the changes to take effect.
```bash
claude mcp add --transport http scrape_do-composio "YOUR_MCP_URL_HERE" --headers "X-API-Key:YOUR_COMPOSIO_API_KEY"

# Then restart Claude Code
exit
claude
```

### 8. Verify the installation

Check that your Scrape do MCP server is properly configured.
- This command lists all MCP servers registered with Claude Code
- You should see your scrape_do-composio entry in the list, which confirms that Claude Code can now access Scrape do tools
```bash
claude mcp list
```

### 9. Authenticate Scrape do

The first time you try to use Scrape do tools, you'll be prompted to authenticate.
- Claude Code will detect that you need to authenticate with Scrape do
- It will show you an authentication link
- Open the link in your browser (or copy/paste it)
- Complete the Scrape do authorization flow
- Return to the terminal and start using Scrape do through Claude Code
Once authenticated, you can ask Claude Code to perform Scrape do operations in natural language. For example:
- "Scrape product prices from a dynamic website"
- "Extract news headlines with JavaScript rendering"
- "Bypass Cloudflare to get full page HTML"

## Complete Code

```python
import os
from composio import Composio
from dotenv import load_dotenv

# Load COMPOSIO_API_KEY and USER_ID from the .env file
load_dotenv()

COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
USER_ID = os.getenv("USER_ID")

if not COMPOSIO_API_KEY or not USER_ID:
    raise ValueError("COMPOSIO_API_KEY and USER_ID required in .env")

composio_client = Composio(api_key=COMPOSIO_API_KEY)

# Create a Tool Router session scoped to the scrape_do toolkit
composio_session = composio_client.create(
    user_id=USER_ID,
    toolkits=["scrape_do"],
)

COMPOSIO_MCP_URL = composio_session.mcp.url

print(f"MCP URL: {COMPOSIO_MCP_URL}")
print("\nUse this command to add to Claude Code:")
print(f'claude mcp add --transport http scrape_do-composio "{COMPOSIO_MCP_URL}" --headers "X-API-Key:{COMPOSIO_API_KEY}"')
```

```typescript
import 'dotenv/config';
import { Composio } from '@composio/core';

const { COMPOSIO_API_KEY, USER_ID } = process.env;

if (!COMPOSIO_API_KEY || !USER_ID) {
  throw new Error('COMPOSIO_API_KEY and USER_ID required in .env');
}

const composioClient = new Composio({ apiKey: COMPOSIO_API_KEY });

const composioSession = await composioClient.create(USER_ID, {
  toolkits: ['scrape_do'],
});

const composioMcpUrl = composioSession?.mcp.url;

console.log(`MCP URL: ${composioMcpUrl}`);
console.log(`\nUse this command to add to Claude Code:`);
console.log(`claude mcp add --transport http scrape_do-composio "${composioMcpUrl}" --headers "X-API-Key:${COMPOSIO_API_KEY}"`);
```

## Conclusion

You've successfully integrated Scrape do with Claude Code using Composio's MCP server. Now you can interact with Scrape do directly from your terminal using natural language commands.
Key features of this setup:
- Terminal-native experience without switching contexts
- Natural language commands for Scrape do operations
- Secure authentication through Composio's managed MCP
- Tool Router for dynamic tool discovery and execution
Next steps:
- Try asking Claude Code to perform various Scrape do operations
- Add more toolkits to your Tool Router session for multi-app workflows
- Integrate this setup into your development workflow for increased productivity
You can extend this by adding more toolkits, implementing custom workflows, or building automation scripts that leverage Claude Code's capabilities.

## How to build Scrape do MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/scrape_do/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/scrape_do/framework/claude-agents-sdk)
- [Claude Cowork](https://composio.dev/toolkits/scrape_do/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/scrape_do/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/scrape_do/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/scrape_do/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/scrape_do/framework/cli)
- [Google ADK](https://composio.dev/toolkits/scrape_do/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/scrape_do/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/scrape_do/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/scrape_do/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/scrape_do/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/scrape_do/framework/crew-ai)

## Related Toolkits

- [Excel](https://composio.dev/toolkits/excel) - Microsoft Excel is a robust spreadsheet application for organizing, analyzing, and visualizing data. It's the go-to tool for calculations, reporting, and flexible data management.
- [21risk](https://composio.dev/toolkits/_21risk) - 21RISK is a web app built for easy checklist, audit, and compliance management. It streamlines risk processes so teams can focus on what matters.
- [Abstract](https://composio.dev/toolkits/abstract) - Abstract provides a suite of APIs for automating data validation and enrichment tasks. It helps developers streamline workflows and ensure data quality with minimal effort.
- [Addressfinder](https://composio.dev/toolkits/addressfinder) - Addressfinder is a data quality platform for verifying addresses, emails, and phone numbers. It helps you ensure accurate customer and contact data every time.
- [Agenty](https://composio.dev/toolkits/agenty) - Agenty is a web scraping and automation platform for extracting data and automating browser tasks—no coding needed. It streamlines data collection, monitoring, and repetitive online actions.
- [Ambee](https://composio.dev/toolkits/ambee) - Ambee is an environmental data platform providing real-time, hyperlocal APIs for air quality, weather, and pollen. Get precise environmental insights to power smarter decisions in your apps and workflows.
- [Ambient weather](https://composio.dev/toolkits/ambient_weather) - Ambient Weather is a platform for personal weather stations with a robust API for accessing local, real-time, and historical weather data. Get detailed environmental insights directly from your own sensors for smarter apps and automations.
- [Anonyflow](https://composio.dev/toolkits/anonyflow) - Anonyflow is a service for encryption-based data anonymization and secure data sharing. It helps organizations meet GDPR, CCPA, and HIPAA data privacy compliance requirements.
- [Api ninjas](https://composio.dev/toolkits/api_ninjas) - Api ninjas offers 120+ public APIs spanning categories like weather, finance, sports, and more. Developers use it to supercharge apps with real-time data and actionable endpoints.
- [Api sports](https://composio.dev/toolkits/api_sports) - Api sports is a comprehensive sports data platform covering 2,000+ competitions with live scores and 15+ years of stats. Instantly access up-to-date sports information for analysis, apps, or chatbots.
- [Apify](https://composio.dev/toolkits/apify) - Apify is a cloud platform for building, deploying, and managing web scraping and automation tools called Actors. It lets you automate data extraction and workflow tasks at scale—no infrastructure headaches.
- [Autom](https://composio.dev/toolkits/autom) - Autom is a lightning-fast search engine results data platform for Google, Bing, and Brave. Developers use it to access fresh, low-latency SERP data on demand.
- [Beaconchain](https://composio.dev/toolkits/beaconchain) - Beaconchain is a real-time analytics platform for Ethereum 2.0's Beacon Chain. It provides detailed insights into validators, blocks, and overall network performance.
- [Big data cloud](https://composio.dev/toolkits/big_data_cloud) - BigDataCloud provides APIs for geolocation, reverse geocoding, and address validation. Instantly access reliable location intelligence to enhance your applications and workflows.
- [Bigpicture io](https://composio.dev/toolkits/bigpicture_io) - BigPicture.io offers APIs for accessing detailed company and profile data. Instantly enrich your applications with up-to-date insights on 20M+ businesses.
- [Bitquery](https://composio.dev/toolkits/bitquery) - Bitquery is a blockchain data platform offering indexed, real-time, and historical data from 40+ blockchains via GraphQL APIs. Get unified, reliable access to complex on-chain data for analytics, trading, and research.
- [Brightdata](https://composio.dev/toolkits/brightdata) - Brightdata is a leading web data platform offering advanced scraping, SERP APIs, and anti-bot tools. It lets you collect public web data at scale, bypassing blocks and friction.
- [Builtwith](https://composio.dev/toolkits/builtwith) - BuiltWith is a web technology profiler that uncovers the technologies powering any website. Gain actionable insights into analytics, hosting, and content management stacks for smarter research and lead generation.
- [Byteforms](https://composio.dev/toolkits/byteforms) - Byteforms is an all-in-one platform for creating forms, managing submissions, and integrating data. It streamlines workflows by centralizing form data collection and automation.
- [Cabinpanda](https://composio.dev/toolkits/cabinpanda) - Cabinpanda is a data collection platform for building and managing online forms. It helps streamline how you gather, organize, and analyze responses.

## Frequently Asked Questions

### What are the differences between Tool Router MCP and Scrape do MCP?

With a standalone Scrape do MCP server, the agents and LLMs can only access a fixed set of Scrape do tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Scrape do and many other apps based on the task at hand, all through a single MCP endpoint.
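
For example, the session-creation call used earlier in this guide accepts multiple toolkit slugs; `github` and `slack` below are illustrative additions:

```python
# Sketch: one Tool Router session spanning several apps. Mirrors the
# session-creation call used earlier in this guide; "github" and "slack"
# are illustrative toolkit slugs.
import os
from composio import Composio
from dotenv import load_dotenv

load_dotenv()

composio_client = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))

composio_session = composio_client.create(
    user_id=os.getenv("USER_ID"),
    toolkits=["scrape_do", "github", "slack"],
)

# A single MCP URL now serves tools from all three apps.
print(composio_session.mcp.url)
```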

### Can I use Tool Router MCP with Claude Code?

Yes, you can. Claude Code fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Scrape do tools.

### Can I manage the permissions and scopes for Scrape do while using Tool Router?

Yes, absolutely. You can configure which Scrape do scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Scrape do data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
