# How to integrate Scrapegraph ai MCP with Pydantic AI

```json
{
  "title": "How to integrate Scrapegraph ai MCP with Pydantic AI",
  "toolkit": "Scrapegraph ai",
  "toolkit_slug": "scrapegraph_ai",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/scrapegraph_ai/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/scrapegraph_ai/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:24:47.118Z"
}
```

## Introduction

This guide walks you through connecting Scrapegraph ai to Pydantic AI using the Composio Tool Router. By the end, you'll have a working Scrapegraph ai agent that can extract product prices from Amazon search results, summarize the latest headlines from the BBC homepage, or convert a Wikipedia article to markdown, all through natural language commands.
You'll see how to give your Pydantic AI agent real control over a Scrapegraph ai account through Composio's Scrapegraph ai MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Scrapegraph ai
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Scrapegraph ai workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include:
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns

## What is the Scrapegraph ai MCP server, and what's possible with it?

The Scrapegraph ai MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants (Claude, Cursor, etc.) directly to your Scrapegraph ai account. It provides structured and secure access to powerful web scraping and data extraction tools, so your agent can perform actions like running AI-powered scrapers, converting webpages to markdown, monitoring job statuses, and managing your account usage with ease.
- AI-powered web scraping and search: Instruct your agent to extract structured data from any website or perform detailed web searches with parsed, organized results.
- Webpage to markdown conversion: Let your agent instantly convert any webpage into clean, readable markdown for easy documentation or analysis.
- Automated job status tracking: Check on the progress and results of ongoing scraping, crawling, or conversion jobs to stay updated without manual effort.
- Smart multi-page crawling: Direct the agent to launch intelligent crawlers that gather data across multiple linked pages in a single workflow.
- Account usage monitoring and feedback: Retrieve your remaining credits, track API usage, and submit feedback on completed tasks—all through your AI agent.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `SCRAPEGRAPH_AI_CONVERT_WEBPAGE_TO_MARKDOWN_V2` | Convert Webpage to Markdown (V2) | Tool to convert any webpage into clean, well-formatted Markdown with full parameter control. Use when you need advanced options like stealth mode, custom headers, or webhook notifications. Supports all Markdownify API parameters. |
| `SCRAPEGRAPH_AI_GENERATE_SCHEMA` | Generate Schema | Generate or modify a JSON schema based on a search query for structured data extraction. Use when you need a schema template for scraping specific data fields. |
| `SCRAPEGRAPH_AI_GET_AGENTIC_SCRAPER_HISTORY` | Get Agentic Scraper History | Retrieve paginated history of agentic scraper jobs. Use to view past scraping requests, their status, and results. |
| `SCRAPEGRAPH_AI_GET_CRAWLER_HISTORY` | Get Crawler History | Retrieve the history of crawler jobs for your account. Returns paginated list of past crawler requests with their status, results, and metadata. |
| `SCRAPEGRAPH_AI_GET_CREDITS` | Get Credits | Retrieve remaining and used credits for your ScrapeGraphAI account. Useful for checking credit availability before bulk scraping operations to avoid mid-run failures. |
| `SCRAPEGRAPH_AI_GET_ENDPOINT_SUGGESTIONS` | Get Endpoint Suggestions | Tool to get AI-powered suggestions for creating scraping endpoints. Use when you need to identify what data can be extracted from a website and how to structure the scraping logic. |
| `SCRAPEGRAPH_AI_GET_LIVE_SESSION_URL` | Get Live Session URL | Tool to get a URL for a live browser session. Use when you need to interact with a webpage in real-time through a controlled browser environment. |
| `SCRAPEGRAPH_AI_GET_MARKDOWNIFY_HISTORY` | Get Markdownify History | Tool to retrieve the history of markdownify webpage-to-Markdown conversion jobs. Use when you need to view past markdownify requests and their statuses. |
| `SCRAPEGRAPH_AI_GET_SCRAPE_HISTORY` | Get Scrape History | Retrieve the history of scrape jobs from your ScrapeGraphAI account. Use this to check the status of past scrapes, view results, and track credit usage. |
| `SCRAPEGRAPH_AI_GET_SEARCHSCRAPER_HISTORY` | Get Searchscraper History | Get the history of searchscraper jobs with pagination support. Use this to retrieve past searchscraper requests, their status, and results. |
| `SCRAPEGRAPH_AI_GET_SITEMAP_HISTORY` | Get Sitemap History | Tool to retrieve the history of sitemap extraction jobs. Use when you need to view past sitemap extraction requests, their status, and results. |
| `SCRAPEGRAPH_AI_GET_SMARTSCRAPER_HISTORY` | Get Smartscraper History | Tool to retrieve the history of smartscraper jobs. Use when you need to view past scraping requests and their results. |
| `SCRAPEGRAPH_AI_GET_USAGE_TIMELINE` | Get Usage Timeline | Tool to retrieve usage timeline statistics for your ScrapeGraphAI account. Use when you need to visualize or analyze service usage patterns over time. |
| `SCRAPEGRAPH_AI_GET_WEBHOOK_LOGS` | Get Webhook Logs | Tool to retrieve webhook delivery logs for a crawler job. Use when you need to check the status and history of webhook notifications sent for a specific crawler execution. |
| `SCRAPEGRAPH_AI_LIST_SCHEDULED_JOBS` | List Scheduled Jobs | Retrieve a paginated list of all scheduled scraping jobs for your account. Use this action to view and manage your scheduled jobs, including their configuration, cron schedules, and active status. Supports filtering by service type and active status. |
| `SCRAPEGRAPH_AI_MARKDOWNIFY_STATUS` | Markdownify Status | Check the status and retrieve results of a Markdownify webpage-to-Markdown conversion job. Use this action to poll for the status of an async Markdownify request started via SCRAPEGRAPH_AI_MARKDOWNIFY. Note: The ScrapeGraph AI API typically returns completed results synchronously, so this status endpoint is primarily useful for long-running conversions of large or complex webpages. |
| `SCRAPEGRAPH_AI_SAVE_ENDPOINT` | Save Endpoint Configuration | Tool to save custom scraping endpoint configurations to ScrapeGraphAI. Use when you need to create reusable scraping endpoints with specific parameters and extraction logic. |
| `SCRAPEGRAPH_AI_SEARCH_SCRAPER` | Search Scraper | Perform AI-powered web searches with structured, parsed results. Some sites block scrapers and return empty bodies; treat these as unrecoverable for that URL. JS-rendered pages may yield incomplete content. |
| `SCRAPEGRAPH_AI_SEARCH_SCRAPER_STATUS` | Check SearchScraper Status | Check the status and results of an asynchronous SearchScraper job. |
| `SCRAPEGRAPH_AI_SMART_CRAWLER_STATUS` | SmartCrawler Status | Check the status and retrieve results of a SmartCrawler web crawling job. Use this action to poll for completion and get the extracted content from a previously started SmartCrawler job. Returns the job status, crawled URLs, page content in markdown/HTML format, and LLM extraction results (if enabled). Implement a polling timeout (e.g., max retries or elapsed time cap) to avoid indefinite loops when waiting for long-running jobs. |
| `SCRAPEGRAPH_AI_SMART_SCRAPER_START` | Start Smart Scraper | Start AI-powered web scraping with natural language extraction prompts. When `wait` is false (default), returns a `request_id`; poll for results using SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS. Check `error` and `job_status` fields in the response before using extracted data. |
| `SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS` | SmartScraper Status | Check the status and retrieve results of a SmartScraper web scraping job. Use this action to poll for completion after starting a SmartScraper job with wait=false. The request_id is returned by the Start SmartScraper action. Typical workflow: 1. Start a scraping job with SCRAPEGRAPH_AI_SMART_SCRAPER_START (wait=false) 2. Use the returned request_id to check status with this action 3. Poll until status is 'completed' or 'failed' 4. When completed, the 'result' field contains the extracted data. When completed, also check the 'error' field before consuming 'result', as 'failed' status populates 'error' instead of 'result'. |
| `SCRAPEGRAPH_AI_START_SMART_CRAWLER` | Start Smart Crawler (Async) | Tool to start a multi-page web crawl using SmartCrawler for AI-powered data extraction. Use when you need to extract structured data from multiple pages of a website. Returns immediately with a task_id - use the status check action to monitor progress and retrieve results. |
| `SCRAPEGRAPH_AI_SUBMIT_FEEDBACK` | Submit Feedback | Submit feedback and ratings for completed ScrapeGraphAI requests. |
| `SCRAPEGRAPH_AI_SUBMIT_PRODUCT_FEEDBACK` | Submit Product Feedback | Submit product feedback for ScrapeGraphAI. Use to provide ratings, comments, suggestions, and other feedback about the product itself. |
| `SCRAPEGRAPH_AI_TOONIFY` | Convert JSON to TOON Format | Tool to convert JSON data to TOON (Token-Oriented Object Notation) format. Use when you need to reduce token usage for LLM processing while maintaining data structure. |
| `SCRAPEGRAPH_AI_VALIDATE_API_KEY` | Validate API Key | Validate your ScrapeGraphAI API key to ensure it is active and authorized. Use this action to check API key validity before making other API calls. |
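
The SmartScraper rows above describe a start-then-poll workflow: start a job with `wait=false`, receive a `request_id`, then poll the status action until it reports `completed` or `failed`. Here is a minimal sketch of that loop calling the tools directly through the Composio SDK, outside of any agent. The `tools.execute` helper, the argument names, and the response shape are assumptions based on the table and the ScrapeGraphAI API, so verify them against the current tool schemas.

```python
import os
import time

from composio import Composio

# Hedged sketch of the start-then-poll SmartScraper workflow described above.
# composio.tools.execute, the argument names, and the response shape
# ({"data": ..., "error": ...}) are assumptions; check the current SDK
# and tool schemas before relying on this.
composio = Composio(api_key=os.environ["COMPOSIO_API_KEY"])
user_id = os.environ["USER_ID"]

start = composio.tools.execute(
    "SCRAPEGRAPH_AI_SMART_SCRAPER_START",
    user_id=user_id,
    arguments={
        "website_url": "https://example.com",
        "user_prompt": "Extract the page title and main heading",
        "wait": False,  # return a request_id instead of blocking
    },
)
request_id = start["data"]["request_id"]  # assumed response shape

# Poll with a retry cap so the loop cannot run indefinitely,
# as the SmartCrawler Status row advises for long-running jobs
status = None
for _ in range(30):
    status = composio.tools.execute(
        "SCRAPEGRAPH_AI_SMART_SCRAPER_STATUS",
        user_id=user_id,
        arguments={"request_id": request_id},
    )
    if status["data"].get("status") in {"completed", "failed"}:
        break
    time.sleep(2)

# Per the table: check the error field before consuming the result
if status and status["data"].get("error"):
    print("Scrape failed:", status["data"]["error"])
elif status:
    print(status["data"].get("result"))
```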

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Scrapegraph ai MCP server is an implementation of the Model Context Protocol that connects your AI agent to Scrapegraph ai, giving it structured access to perform operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app; the managed server lets you prototype quickly and go from 0 to 1 faster. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming

### 1. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely, as you'll need it for authentication.

### 2. Install dependencies

Install the required libraries.
What's happening:
- composio connects your agent to external SaaS tools like Scrapegraph ai
- pydantic-ai lets you create structured AI agents with tool support
- python-dotenv loads your environment variables securely from a .env file
```bash
pip install composio pydantic-ai python-dotenv
```
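
If you want to confirm the installation before going further, a quick import check (a throwaway snippet, not part of the final script) will surface any missing packages:

```python
# Throwaway check: these imports fail fast if installation was incomplete
import composio
import pydantic_ai
import dotenv

print("composio, pydantic-ai, and python-dotenv are all importable")
```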

### 3. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your agent to Composio's API
- USER_ID associates your session with your account for secure tool access
- OPENAI_API_KEY lets the agent call OpenAI models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```
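
To confirm the variables load correctly before running the agent, a quick check like this (again a throwaway snippet, not part of the final script) can help:

```python
import os
from dotenv import load_dotenv

load_dotenv()
for var in ("COMPOSIO_API_KEY", "USER_ID", "OPENAI_API_KEY"):
    # Print presence only; never print the secret values themselves
    print(f"{var}: {'set' if os.getenv(var) else 'MISSING'}")
```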

### 4. Import dependencies

What's happening:
- We load environment variables and import required modules
- Composio manages connections to Scrapegraph ai
- MCPServerStreamableHTTP connects to the Scrapegraph ai MCP server endpoint
- Agent from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 5. Create a Tool Router Session

What's happening:
- We're creating a Tool Router session that gives your agent access to Scrapegraph ai tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Scrapegraph ai
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["scrapegraph_ai"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```

### 6. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Scrapegraph ai endpoint
- The agent uses GPT-5 to interpret user commands and perform Scrapegraph ai operations
- The instructions field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
scrapegraph_ai_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[scrapegraph_ai_mcp],
    instructions=(
        "You are a Scrapegraph ai assistant. Use Scrapegraph ai tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```
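
Before building the full REPL, it can help to confirm the wiring with a single request. Below is a minimal smoke test, placed inside `main()` right after the agent is created; note that `result.output` holds the final text in recent pydantic-ai releases (older versions used `result.data`).

```python
# One-shot smoke test: run a single request through the agent.
# Place this inside main(), after the Agent is constructed.
result = await agent.run("How many ScrapeGraphAI credits do I have left?")
print(result.output)
```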

### 7. Build the chat interface

What's happening:
- The agent reads input from the terminal, streams the response, and prints the collected text
- Scrapegraph ai API calls happen automatically under the hood
- Passing message_history back on each turn preserves context across the conversation
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Scrapegraph ai.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        collected_text = ""
        async for chunk in stream_result.stream_output():
            # Chunks may be plain strings or objects that carry text in a
            # `delta` or `text` attribute, so normalize before appending
            text_piece = None
            if isinstance(chunk, str):
                text_piece = chunk
            elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                text_piece = chunk.delta
            elif hasattr(chunk, "text"):
                text_piece = chunk.text
            if text_piece:
                collected_text += text_piece
        # Capture the conversation history while the stream is still open
        history = stream_result.all_messages()

    print(f"Agent: {collected_text}\n")
```

### 8. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```
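
Save the full script (for example as `agent.py`; the filename is just an illustration) and start it with `python agent.py`. The REPL keeps running until you type `exit` or `quit`.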

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Scrapegraph ai
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["scrapegraph_ai"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    scrapegraph_ai_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[scrapegraph_ai_mcp],
        instructions=(
            "You are a Scrapegraph ai assistant. Use Scrapegraph ai tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Scrapegraph ai.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            collected_text = ""
            async for chunk in stream_result.stream_output():
                # Chunks may be plain strings or objects that carry text in a
                # `delta` or `text` attribute, so normalize before appending
                text_piece = None
                if isinstance(chunk, str):
                    text_piece = chunk
                elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                    text_piece = chunk.delta
                elif hasattr(chunk, "text"):
                    text_piece = chunk.text
                if text_piece:
                    collected_text += text_piece
            # Capture the conversation history while the stream is still open
            history = stream_result.all_messages()

        print(f"Agent: {collected_text}\n")

if __name__ == "__main__":
    asyncio.run(main())
```
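
Once the script is running, try prompts like "Convert https://en.wikipedia.org/wiki/Web_scraping to markdown" or "How many credits do I have left?" to confirm that the agent routes calls through the Scrapegraph ai tools.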

## Conclusion

You've built a Pydantic AI agent that can interact with Scrapegraph ai through Composio's Tool Router. With this setup, your agent can perform real Scrapegraph ai actions through natural language.
You can extend this further by:
- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Using multiple MCP endpoints to enable cross-app workflows (for example, Gmail + Scrapegraph ai for workflow automation)
This architecture makes your AI agent "agent-native": it can securely use APIs in a unified, composable way without custom integrations.
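
For instance, enabling more apps is just a matter of extending the session's toolkit list. A minimal sketch, assuming `gmail` is the Gmail toolkit slug and that a Gmail account is already connected in Composio:

```python
# One Tool Router session serving tools from two toolkits
session = composio.create(
    user_id=user_id,
    toolkits=["scrapegraph_ai", "gmail"],  # "gmail" slug is an assumption
)
```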

## How to build Scrapegraph ai MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/scrapegraph_ai/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/scrapegraph_ai/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/scrapegraph_ai/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/scrapegraph_ai/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/scrapegraph_ai/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/scrapegraph_ai/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/scrapegraph_ai/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/scrapegraph_ai/framework/cli)
- [Google ADK](https://composio.dev/toolkits/scrapegraph_ai/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/scrapegraph_ai/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/scrapegraph_ai/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/scrapegraph_ai/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/scrapegraph_ai/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/scrapegraph_ai/framework/crew-ai)

## Related Toolkits

- [Excel](https://composio.dev/toolkits/excel) - Microsoft Excel is a robust spreadsheet application for organizing, analyzing, and visualizing data. It's the go-to tool for calculations, reporting, and flexible data management.
- [21risk](https://composio.dev/toolkits/_21risk) - 21RISK is a web app built for easy checklist, audit, and compliance management. It streamlines risk processes so teams can focus on what matters.
- [Abstract](https://composio.dev/toolkits/abstract) - Abstract provides a suite of APIs for automating data validation and enrichment tasks. It helps developers streamline workflows and ensure data quality with minimal effort.
- [Addressfinder](https://composio.dev/toolkits/addressfinder) - Addressfinder is a data quality platform for verifying addresses, emails, and phone numbers. It helps you ensure accurate customer and contact data every time.
- [Agenty](https://composio.dev/toolkits/agenty) - Agenty is a web scraping and automation platform for extracting data and automating browser tasks—no coding needed. It streamlines data collection, monitoring, and repetitive online actions.
- [Ambee](https://composio.dev/toolkits/ambee) - Ambee is an environmental data platform providing real-time, hyperlocal APIs for air quality, weather, and pollen. Get precise environmental insights to power smarter decisions in your apps and workflows.
- [Ambient weather](https://composio.dev/toolkits/ambient_weather) - Ambient Weather is a platform for personal weather stations with a robust API for accessing local, real-time, and historical weather data. Get detailed environmental insights directly from your own sensors for smarter apps and automations.
- [Anonyflow](https://composio.dev/toolkits/anonyflow) - Anonyflow is a service for encryption-based data anonymization and secure data sharing. It helps organizations meet GDPR, CCPA, and HIPAA data privacy compliance requirements.
- [Api ninjas](https://composio.dev/toolkits/api_ninjas) - Api ninjas offers 120+ public APIs spanning categories like weather, finance, sports, and more. Developers use it to supercharge apps with real-time data and actionable endpoints.
- [Api sports](https://composio.dev/toolkits/api_sports) - Api sports is a comprehensive sports data platform covering 2,000+ competitions with live scores and 15+ years of stats. Instantly access up-to-date sports information for analysis, apps, or chatbots.
- [Apify](https://composio.dev/toolkits/apify) - Apify is a cloud platform for building, deploying, and managing web scraping and automation tools called Actors. It lets you automate data extraction and workflow tasks at scale—no infrastructure headaches.
- [Autom](https://composio.dev/toolkits/autom) - Autom is a lightning-fast search engine results data platform for Google, Bing, and Brave. Developers use it to access fresh, low-latency SERP data on demand.
- [Beaconchain](https://composio.dev/toolkits/beaconchain) - Beaconchain is a real-time analytics platform for Ethereum 2.0's Beacon Chain. It provides detailed insights into validators, blocks, and overall network performance.
- [Big data cloud](https://composio.dev/toolkits/big_data_cloud) - BigDataCloud provides APIs for geolocation, reverse geocoding, and address validation. Instantly access reliable location intelligence to enhance your applications and workflows.
- [Bigpicture io](https://composio.dev/toolkits/bigpicture_io) - BigPicture.io offers APIs for accessing detailed company and profile data. Instantly enrich your applications with up-to-date insights on 20M+ businesses.
- [Bitquery](https://composio.dev/toolkits/bitquery) - Bitquery is a blockchain data platform offering indexed, real-time, and historical data from 40+ blockchains via GraphQL APIs. Get unified, reliable access to complex on-chain data for analytics, trading, and research.
- [Brightdata](https://composio.dev/toolkits/brightdata) - Brightdata is a leading web data platform offering advanced scraping, SERP APIs, and anti-bot tools. It lets you collect public web data at scale, bypassing blocks and friction.
- [Builtwith](https://composio.dev/toolkits/builtwith) - BuiltWith is a web technology profiler that uncovers the technologies powering any website. Gain actionable insights into analytics, hosting, and content management stacks for smarter research and lead generation.
- [Byteforms](https://composio.dev/toolkits/byteforms) - Byteforms is an all-in-one platform for creating forms, managing submissions, and integrating data. It streamlines workflows by centralizing form data collection and automation.
- [Cabinpanda](https://composio.dev/toolkits/cabinpanda) - Cabinpanda is a data collection platform for building and managing online forms. It helps streamline how you gather, organize, and analyze responses.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Scrapegraph ai MCP?

With a standalone Scrapegraph ai MCP server, the agents and LLMs can only access a fixed set of Scrapegraph ai tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Scrapegraph ai and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Scrapegraph ai tools.

### Can I manage the permissions and scopes for Scrapegraph ai while using Tool Router?

Yes, absolutely. You can configure which Scrapegraph ai scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Scrapegraph ai data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
