# How to integrate Scrape do MCP with Pydantic AI

```json
{
  "title": "How to integrate Scrape do MCP with Pydantic AI",
  "toolkit": "Scrape do",
  "toolkit_slug": "scrape_do",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/scrape_do/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/scrape_do/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:24:44.960Z"
}
```

## Introduction

This guide walks you through connecting Scrape do to Pydantic AI using the Composio Tool Router. By the end, you'll have a working Scrape do agent that can scrape product prices from dynamic websites, extract news headlines with JavaScript rendering, and bypass Cloudflare to fetch full-page HTML, all through natural language commands.
In other words, you'll learn how to give your Pydantic AI agent real control over a Scrape do account through Composio's Scrape do MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Scrape do with

- [OpenAI Agents SDK](https://composio.dev/toolkits/scrape_do/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/scrape_do/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/scrape_do/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/scrape_do/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/scrape_do/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/scrape_do/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/scrape_do/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/scrape_do/framework/cli)
- [Google ADK](https://composio.dev/toolkits/scrape_do/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/scrape_do/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/scrape_do/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/scrape_do/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/scrape_do/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/scrape_do/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Scrape do
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Scrape do workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include:
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns
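
To make the type-safety point concrete, here's a minimal sketch of a structured Pydantic AI agent on its own, before any Composio wiring. It assumes a recent pydantic-ai release where `Agent` accepts `output_type` and run results expose `.output` (older versions used `result_type` and `.data`); the `Headline` model and prompt are purely illustrative.

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class Headline(BaseModel):
    """Illustrative schema; Pydantic validates the model's answer against it."""
    title: str
    source: str

# The agent coerces and validates the LLM's reply into a Headline instance.
headline_agent = Agent("openai:gpt-5", output_type=Headline)

result = headline_agent.run_sync("Give me one plausible tech news headline.")
print(result.output.title)  # typed, validated access
```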

## What is the Scrape do MCP server, and what's possible with it?

The Scrape do MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants (such as Claude or Cursor) directly to your Scrape do account. It provides structured and secure access to robust web scraping tools, so your agent can perform actions like scraping dynamic pages, managing sessions, setting custom headers or proxies, and extracting structured data from any website on your behalf.
- Dynamic page scraping with headless browsers: Retrieve fully rendered HTML content from JavaScript-heavy or protected websites by leveraging advanced browser emulation and proxy rotation.
- Custom scraping session management: Set device type, cookies, wait times, and custom headers to imitate different users, maintain sessions, or access device-specific content for tailored data extraction.
- Proxy and anti-bot bypass control: Enable super or proxy modes to utilize residential, mobile, or datacenter proxies, helping your agent bypass strict anti-bot systems and geo-restrictions seamlessly.
- Targeted resource filtering: Block specific URLs like ads or analytics scripts during scraping to increase speed, avoid distractions, and improve privacy.
- Account usage and statistics retrieval: Access real-time usage stats, subscription status, and remaining request limits so your agent can monitor scraping quotas and avoid interruptions.
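
Under the hood, each of these capabilities is exposed as an MCP tool. As a rough sketch (not an exact wire capture), the JSON-RPC `tools/call` request an MCP client sends for the basic page-scrape tool would look something like the following, written here as a Python dict; the `arguments` shape is an assumption for illustration and should be checked against the tool's actual parameter schema.

```python
# Hypothetical MCP tools/call payload; the single "url" argument is assumed.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "SCRAPE_DO_SCRAPE_DO_GET_PAGE",
        "arguments": {"url": "https://example.com"},
    },
}
```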

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `SCRAPE_DO_CANCEL_ASYNC_JOB` | Cancel Async Job | Tool to cancel an asynchronous scraping job. Use when you need to stop processing of pending tasks in a job. Completed tasks remain available. |
| `SCRAPE_DO_CREATE_ASYNC_JOB` | Create Async Scraping Job | Tool to create an asynchronous scraping job with specified targets and options. Use when you need to scrape multiple URLs in parallel without waiting for results. Returns a job ID immediately for polling results later via the get job status action. |
| `SCRAPE_DO_GET_ACCOUNT_INFO` | Get Account Information | Retrieves account information and usage statistics from Scrape.do. This action makes a GET request to the Scrape.do info endpoint to fetch subscription status, concurrent request limits and usage, monthly request limits and remaining requests, and real-time usage statistics. Rate limit: maximum 10 requests per minute. Use remaining request counts to monitor credits proactively, as different scraping operations (e.g., rendered-page requests) consume varying credit amounts and exhaustion mid-run causes failures. |
| `SCRAPE_DO_GET_AMAZON_OFFERS` | Get Amazon Product Offers | Get all seller offers for any Amazon product. Retrieves every seller listing including pricing, shipping costs, seller information, and Buy Box status in structured JSON format. Use when you need to compare prices across multiple sellers or find the best deal for a specific product. |
| `SCRAPE_DO_GET_AMAZON_PRODUCT` | Get Amazon product details | Extract structured product data from Amazon product detail pages (PDP). Returns comprehensive product information including title, pricing, ratings, images, best seller rankings, and technical specifications in JSON format. |
| `SCRAPE_DO_GET_AMAZON_RAW_HTML` | Get Amazon raw HTML | Tool to get raw HTML from any Amazon page with ZIP code geo-targeting. Use when you need complete unprocessed HTML source from Amazon URLs with location-based targeting. Ideal for scraping pages not covered by other structured endpoints. |
| `SCRAPE_DO_GET_ASYNC_ACCOUNT_INFO` | Get Async API Account Information | Tool to get account information for the Async API including concurrency limits and usage statistics. Use when you need to check available concurrency slots, active jobs, or remaining credits for Async API operations. |
| `SCRAPE_DO_GET_ASYNC_JOB` | Get Async Job Details | Tool to retrieve details and status of a specific asynchronous scraping job. Use when you need to check the progress, status, or results of a previously created async job. Returns job metadata including creation time, completion time, task counts, and detailed task list. |
| `SCRAPE_DO_GET_ASYNC_TASK` | Get Async Task Result | Tool to retrieve the result of a specific task within an asynchronous job. Returns the scraped content for that particular URL. Use when you need to check the status and result of a previously submitted async scraping task. |
| `SCRAPE_DO_SCRAPE_DO_GET_PAGE` | Scrape webpage using scrape.do | A tool to scrape web pages using scrape.do's API service. Makes a basic GET request to fetch webpage content while handling anti-bot protections and proxy rotation automatically. Does not execute JavaScript by default — pages requiring client-side rendering (SPAs, dynamically loaded content) will return incomplete HTML; use SCRAPE_DO_GET_RENDER_PAGE or set render=true for those cases. |
| `SCRAPE_DO_LIST_ASYNC_JOBS` | List Asynchronous Scraping Jobs | Tool to list all asynchronous scraping jobs. Returns paginated list of jobs with their status and metadata. Use when you need to retrieve job history or monitor job statuses. Supports pagination with up to 100 jobs per page. |
| `SCRAPE_DO_SCRAPE_DO_PROXY_MODE` | Use Scrape.do Proxy Mode | This tool implements the Proxy Mode functionality of scrape.do, which allows routing requests through their proxy server. It provides an alternative way to access web scraping capabilities by handling complex JavaScript-rendered pages, geolocation-based routing, device simulation, and built-in anti-bot and retry mechanisms. |
| `SCRAPE_DO_SCRAPE_URL_POST` | Scrape URL using POST method | Tool to scrape web pages using POST method via scrape.do API. Use when you need to send POST requests to target websites with custom request body data. Supports all parameters from GET endpoint plus request body customization for POST/PUT/PATCH methods. |
| `SCRAPE_DO_SEARCH_AMAZON` | Search Amazon products | Tool to search Amazon and scrape product listings with structured results. Performs keyword searches and returns structured product data including titles, prices, ratings, Prime status, sponsored flags, and position rankings in JSON format. Use when you need to search for products on Amazon marketplace or gather product information from search results. |
| `SCRAPE_DO_SET_BLOCK_URLS` | Block specific URLs during scraping | This tool allows users to block specific URLs during the scraping process. It's particularly useful for blocking unwanted resources like analytics scripts, advertisements, or any other URLs that might interfere with the scraping process or slow it down. It provides granular control by allowing users to specify URL patterns to block, thereby improving scraping performance and maintaining privacy. |
| `SCRAPE_DO_SET_REGIONAL_GEO_CODE` | Set Regional Geolocation for Scraping | This tool allows users to set a broader geographical targeting by specifying a region code instead of a specific country code. This is useful when you want to scrape content from an entire region rather than a specific country. Note that this feature requires super mode to be enabled and is only available for Business Plan or higher subscriptions. |
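
If you want to confirm which of these tools your Tool Router session actually exposes before wiring up an agent, you can connect with the plain MCP Python SDK and list them. This is a minimal sketch assuming the `mcp` package's streamable HTTP client; `SESSION_URL` stands in for the `session.mcp.url` value created in Step 6 below.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SESSION_URL = "https://..."  # your Tool Router session.mcp.url (see Step 6)

async def list_scrape_do_tools():
    # streamablehttp_client yields read/write streams plus a session-id getter.
    async with streamablehttp_client(SESSION_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

asyncio.run(list_scrape_do_tools())
```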

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Scrape do MCP server is an implementation of the Model Context Protocol that connects your AI agent to Scrape do. It provides structured and secure access so your agent can perform Scrape do operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app; the managed server helps you prototype fast and go from zero to one quickly. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### 1. Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming

### 2. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely; you'll need it for authentication.

### 3. Install dependencies

Install the required libraries.
What's happening:
- composio connects your agent to external SaaS tools like Scrape do
- pydantic-ai lets you create structured AI agents with tool support
- python-dotenv loads your environment variables securely from a .env file
```bash
pip install composio pydantic-ai python-dotenv
```

### 4. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your agent to Composio's API
- USER_ID associates your session with your account for secure tool access
- OPENAI_API_KEY lets the agent call OpenAI models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```
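
Optionally, fail fast if any variable is missing rather than hitting a confusing error later. A small sanity-check sketch:

```python
import os

from dotenv import load_dotenv

load_dotenv()

# Fail fast with a clear message if any required variable is absent.
for var in ("COMPOSIO_API_KEY", "USER_ID", "OPENAI_API_KEY"):
    if not os.getenv(var):
        raise RuntimeError(f"Missing required environment variable: {var}")
```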

### 5. Import dependencies

What's happening:
- We load environment variables and import required modules
- Composio manages connections to Scrape do
- MCPServerStreamableHTTP connects to the Scrape do MCP server endpoint
- Agent from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 6. Create a Tool Router Session

What's happening:
- We're creating a Tool Router session that gives your agent access to Scrape do tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Scrape do
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["scrape_do"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```
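
The same session mechanism scales beyond a single app. As a sketch (assuming the extra toolkits are connected to your Composio account), you can request several toolkits in one session and get back a single MCP URL that serves all of them:

```python
# Hypothetical multi-toolkit session: one MCP URL, tools from several apps.
# "gmail" is illustrative; any toolkit connected to your account works.
session = composio.create(
    user_id=user_id,
    toolkits=["scrape_do", "gmail"],
)
print(session.mcp.url)  # a single endpoint serving both toolkits
```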

### 7. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Scrape do endpoint
- The agent uses GPT-5 to interpret user commands and perform Scrape do operations
- The instructions field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
scrape_do_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[scrape_do_mcp],
    instructions=(
        "You are a Scrape do assistant. Use Scrape do tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```
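
Before building the full REPL, you can smoke-test the agent with a single one-shot run inside `main()`. A sketch assuming a recent Pydantic AI version where run results expose `.output` (older releases used `.data`); the prompt is illustrative:

```python
# One-shot smoke test: the agent picks the right Scrape do tool itself.
result = await agent.run(
    "Fetch https://example.com and summarize the page in one sentence."
)
print(result.output)
```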

### 8. Build the chat interface

What's happening:
- The agent reads input from the terminal and streams its response
- Scrape do API calls happen automatically under the hood
- The model keeps conversation history to maintain context across turns
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Scrape do.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        collected_text = ""
        async for chunk in stream_result.stream_output():
            # Chunks may be plain strings or objects carrying a delta/text field.
            text_piece = None
            if isinstance(chunk, str):
                text_piece = chunk
            elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                text_piece = chunk.delta
            elif hasattr(chunk, "text") and isinstance(chunk.text, str):
                text_piece = chunk.text
            if text_piece:
                collected_text += text_piece
        # Persist the full message history so the next turn keeps context.
        history = stream_result.all_messages()

    print(f"Agent: {collected_text}\n")
```
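
The chunk-inspection loop above is deliberately defensive about chunk shapes. If you only need plain text, the stream result also exposes a text-streaming helper; a simpler sketch, assuming `stream_text(delta=True)` is available in your installed pydantic-ai version:

```python
# Simpler variant: print text deltas as they arrive.
async with agent.run_stream(user_input, message_history=history) as stream_result:
    async for delta in stream_result.stream_text(delta=True):
        print(delta, end="", flush=True)
    history = stream_result.all_messages()
print()
```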

### 9. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Scrape do
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["scrape_do"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    scrape_do_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[scrape_do_mcp],
        instructions=(
            "You are a Scrape do assistant. Use Scrape do tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Scrape do.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            collected_text = ""
            async for chunk in stream_result.stream_output():
                # Chunks may be plain strings or objects carrying a delta/text field.
                text_piece = None
                if isinstance(chunk, str):
                    text_piece = chunk
                elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                    text_piece = chunk.delta
                elif hasattr(chunk, "text") and isinstance(chunk.text, str):
                    text_piece = chunk.text
                if text_piece:
                    collected_text += text_piece
            # Persist the full message history so the next turn keeps context.
            history = stream_result.all_messages()

        print(f"Agent: {collected_text}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

## Conclusion

You've built a Pydantic AI agent that can interact with Scrape do through Composio's Tool Router. With this setup, your agent can perform real Scrape do actions through natural language.
You can extend this further by:
- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Using multiple MCP endpoints to enable cross-app workflows (for example, Gmail + Scrape do for workflow automation)
This architecture makes your AI agent "agent-native", able to securely use APIs in a unified, composable way without custom integrations.
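
For the cross-app idea in particular, one pattern is to create a separate Tool Router session per app and attach each MCP endpoint as its own toolset. A sketch reusing the APIs from this guide (the Gmail toolkit is illustrative and must be connected to your Composio account):

```python
# Hypothetical cross-app setup: two Tool Router sessions, one agent.
scrape_session = composio.create(user_id=user_id, toolkits=["scrape_do"])
gmail_session = composio.create(user_id=user_id, toolkits=["gmail"])  # illustrative

agent = Agent(
    "openai:gpt-5",
    toolsets=[
        MCPServerStreamableHTTP(scrape_session.mcp.url, headers={"x-api-key": api_key}),
        MCPServerStreamableHTTP(gmail_session.mcp.url, headers={"x-api-key": api_key}),
    ],
    instructions="Scrape pages with Scrape do and email summaries via Gmail.",
)
```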

## How to build Scrape do MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/scrape_do/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/scrape_do/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/scrape_do/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/scrape_do/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/scrape_do/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/scrape_do/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/scrape_do/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/scrape_do/framework/cli)
- [Google ADK](https://composio.dev/toolkits/scrape_do/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/scrape_do/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/scrape_do/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/scrape_do/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/scrape_do/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/scrape_do/framework/crew-ai)

## Related Toolkits

- [Excel](https://composio.dev/toolkits/excel) - Microsoft Excel is a robust spreadsheet application for organizing, analyzing, and visualizing data. It's the go-to tool for calculations, reporting, and flexible data management.
- [21risk](https://composio.dev/toolkits/_21risk) - 21RISK is a web app built for easy checklist, audit, and compliance management. It streamlines risk processes so teams can focus on what matters.
- [Abstract](https://composio.dev/toolkits/abstract) - Abstract provides a suite of APIs for automating data validation and enrichment tasks. It helps developers streamline workflows and ensure data quality with minimal effort.
- [Addressfinder](https://composio.dev/toolkits/addressfinder) - Addressfinder is a data quality platform for verifying addresses, emails, and phone numbers. It helps you ensure accurate customer and contact data every time.
- [Agenty](https://composio.dev/toolkits/agenty) - Agenty is a web scraping and automation platform for extracting data and automating browser tasks—no coding needed. It streamlines data collection, monitoring, and repetitive online actions.
- [Ambee](https://composio.dev/toolkits/ambee) - Ambee is an environmental data platform providing real-time, hyperlocal APIs for air quality, weather, and pollen. Get precise environmental insights to power smarter decisions in your apps and workflows.
- [Ambient weather](https://composio.dev/toolkits/ambient_weather) - Ambient Weather is a platform for personal weather stations with a robust API for accessing local, real-time, and historical weather data. Get detailed environmental insights directly from your own sensors for smarter apps and automations.
- [Anonyflow](https://composio.dev/toolkits/anonyflow) - Anonyflow is a service for encryption-based data anonymization and secure data sharing. It helps organizations meet GDPR, CCPA, and HIPAA data privacy compliance requirements.
- [Api ninjas](https://composio.dev/toolkits/api_ninjas) - Api ninjas offers 120+ public APIs spanning categories like weather, finance, sports, and more. Developers use it to supercharge apps with real-time data and actionable endpoints.
- [Api sports](https://composio.dev/toolkits/api_sports) - Api sports is a comprehensive sports data platform covering 2,000+ competitions with live scores and 15+ years of stats. Instantly access up-to-date sports information for analysis, apps, or chatbots.
- [Apify](https://composio.dev/toolkits/apify) - Apify is a cloud platform for building, deploying, and managing web scraping and automation tools called Actors. It lets you automate data extraction and workflow tasks at scale—no infrastructure headaches.
- [Autom](https://composio.dev/toolkits/autom) - Autom is a lightning-fast search engine results data platform for Google, Bing, and Brave. Developers use it to access fresh, low-latency SERP data on demand.
- [Beaconchain](https://composio.dev/toolkits/beaconchain) - Beaconchain is a real-time analytics platform for Ethereum 2.0's Beacon Chain. It provides detailed insights into validators, blocks, and overall network performance.
- [Big data cloud](https://composio.dev/toolkits/big_data_cloud) - BigDataCloud provides APIs for geolocation, reverse geocoding, and address validation. Instantly access reliable location intelligence to enhance your applications and workflows.
- [Bigpicture io](https://composio.dev/toolkits/bigpicture_io) - BigPicture.io offers APIs for accessing detailed company and profile data. Instantly enrich your applications with up-to-date insights on 20M+ businesses.
- [Bitquery](https://composio.dev/toolkits/bitquery) - Bitquery is a blockchain data platform offering indexed, real-time, and historical data from 40+ blockchains via GraphQL APIs. Get unified, reliable access to complex on-chain data for analytics, trading, and research.
- [Brightdata](https://composio.dev/toolkits/brightdata) - Brightdata is a leading web data platform offering advanced scraping, SERP APIs, and anti-bot tools. It lets you collect public web data at scale, bypassing blocks and friction.
- [Builtwith](https://composio.dev/toolkits/builtwith) - BuiltWith is a web technology profiler that uncovers the technologies powering any website. Gain actionable insights into analytics, hosting, and content management stacks for smarter research and lead generation.
- [Byteforms](https://composio.dev/toolkits/byteforms) - Byteforms is an all-in-one platform for creating forms, managing submissions, and integrating data. It streamlines workflows by centralizing form data collection and automation.
- [Cabinpanda](https://composio.dev/toolkits/cabinpanda) - Cabinpanda is a data collection platform for building and managing online forms. It helps streamline how you gather, organize, and analyze responses.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Scrape do MCP?

With a standalone Scrape do MCP server, agents and LLMs can only access a fixed set of Scrape do tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Scrape do and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Scrape do tools.

### Can I manage the permissions and scopes for Scrape do while using Tool Router?

Yes, absolutely. You can configure which Scrape do scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Scrape do data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
