# How to integrate Browseai MCP with Pydantic AI

```json
{
  "title": "How to integrate Browseai MCP with Pydantic AI",
  "toolkit": "Browseai",
  "toolkit_slug": "browseai",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/browseai/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/browseai/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:04:12.979Z"
}
```

## Introduction

This guide walks you through connecting Browseai to Pydantic AI using the Composio tool router. By the end, you'll have a working Browseai agent that can extract product prices from competitor websites, monitor job postings for new remote roles, and fetch the detailed status of your last robot run, all through natural language commands.
Put differently, you'll learn how to give your Pydantic AI agent real control over a Browseai account through Composio's Browseai MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Browseai
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Browseai workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include (see the sketch after this list):
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns
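
To make the type-safety point concrete, here is a minimal, hypothetical sketch of a typed Pydantic AI agent. The `PriceCheck` schema and prompt are illustrative only; `output_type` is the parameter name on recent pydantic-ai releases (older versions call it `result_type` and expose the result as `.data` instead of `.output`).

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# Hypothetical output schema: the agent's answer is validated against it.
class PriceCheck(BaseModel):
    product: str
    price_usd: float

typed_agent = Agent("openai:gpt-5", output_type=PriceCheck)
result = typed_agent.run_sync("How much does the Acme Widget cost?")
print(result.output)  # a validated PriceCheck instance, not raw text
```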

## What is the Browseai MCP server, and what's possible with it?

The Browseai MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants (such as Claude or Cursor) directly to your Browseai account. It provides structured, secure access to your robots, tasks, and monitoring workflows, so your agent can run robots, extract website data, monitor for changes, and manage web automation tasks on your behalf.
- On-demand web data extraction: Trigger your Browseai robots to collect data from any website instantly, using custom input parameters for targeted web scraping.
- Automated website monitoring: Set up and manage recurring monitors that watch websites for changes, so your agent can alert you to updates or new information as soon as they happen.
- Bulk task execution: Launch thousands of data extraction tasks at once for large-scale or repeated scraping jobs, streamlining high-volume automation for your workflows.
- Task and robot management: Retrieve lists of your robots, fetch detailed results for specific tasks, check execution history, and delete obsolete tasks or monitors to keep your automation setup organized.
- Real-time notifications and workflow integration: Configure webhooks so your agent receives instant updates when tasks complete, fail, or when website changes are detected, enabling seamless integration with your broader systems.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `BROWSEAI_BULK_RUN_TASKS` | Bulk Run Tasks | This action allows users to bulk run up to 1,000 tasks per API call using a specified robot. For larger datasets, submit multiple bulk runs sequentially (up to 500,000 tasks total). It provides a POST endpoint at /v2/robots/{robotId}/bulk-runs and supports parameters such as robot_id (required), title (required), and input_parameters (required). This bulk operation is essential for large-scale data extraction. |
| `BROWSEAI_CREATE_MONITOR` | Create Monitor | Creates a new monitor for a Browse AI robot to automatically track website changes over time. A monitor runs a robot on a recurring schedule and can notify you when the captured data (screenshots or text) changes. This is useful for tracking price changes, content updates, availability status, or any other dynamic website information. You must first have a robot created (via the Browse AI dashboard) before you can monitor it. Use the GET_ROBOTS action to find available robot IDs. Key features: - Configure recurring schedules (hourly, daily, weekly) using RRULE format - Get email notifications when screenshots or text content changes - Customize notification sensitivity with threshold settings - Override robot parameters for each monitor |
| `BROWSEAI_CREATE_WEBHOOK` | Create Webhook | This tool creates a new webhook for a Browse AI robot. Webhooks are HTTP callbacks that Browse AI sends to your server immediately when specific events occur, eliminating the need for polling. It is useful for: - Setting up automated notifications for task completion - Receiving real-time updates when changes are detected - Integrating Browse AI with your own systems - Automating workflows based on robot task results The webhook can be configured to trigger on different event types: - taskFinished: Triggers on any task completion (success or failure) - taskFinishedSuccessfully: Triggers only on successful task completions - taskFinishedWithError: Triggers only on failed task completions - taskCapturedDataChanged: Triggers when data changes are detected during monitoring - tableExportFinishedSuccessfully: Triggers when table export completes (Beta feature) Note: Browse AI only supports one event type per webhook. To monitor multiple event types, create separate webhooks for each event type. |
| `BROWSEAI_DELETE_MONITOR` | Delete a specific monitor | This tool allows users to delete a specific monitor from their Browse AI account. It uses the DELETE method and requires a valid monitor_id. |
| `BROWSEAI_DELETE_TASK` | Delete a specific task | This tool allows you to delete a specific task in BrowseAI by its task ID. It is used for cleaning up completed or failed tasks, managing resources, and maintaining your task list. |
| `BROWSEAI_GET_ROBOTS` | Get Robots List | Retrieves a list of all robots (automated web tasks) under your Browse AI account. A robot is an automated browser task that can be trained to perform web operations such as: - Opening webpages and navigating through sites - Logging into websites - Clicking buttons and filling forms - Extracting structured data from web pages - Monitoring websites for changes This action returns comprehensive information about each robot including: - Robot ID (required for running tasks via other API endpoints) - Robot name and timestamps (creation/update) - Input parameters the robot accepts (parameter names, types, and whether they're required) Use this to: - Discover available robots and their IDs for task execution - Check robot configuration and required input parameters - Monitor robot creation and update activity - Get an overview of all automated tasks in your account No parameters required - simply call this action to retrieve all robots. |
| `BROWSEAI_GET_ROBOT_TASKS` | Get Robot Tasks | Retrieves a paginated list of all tasks (execution runs) for a specific Browse AI robot. Each task represents one execution of the robot, which includes its current status, execution timestamps, input parameters, captured text data, screenshots, and extracted lists. This tool returns tasks in reverse chronological order (newest first) by default. Use the limit and offset parameters to paginate through large result sets. Use this tool to: - Check the status of recent robot executions (pending, running, successful, failed) - Access data captured by robot tasks (text, screenshots, lists) - Monitor robot performance and execution history - Retrieve task IDs for use with other task-related actions - Debug failed robot executions Note: You must have at least one robot configured in your Browse AI account to use this action. Use the Get Robots List action first to find available robot IDs. |
| `BROWSEAI_GET_SYSTEM_STATUS` | Get System Status | Tool to check the operational status of Browse AI infrastructure, including the tasks queue condition. Use when you need to verify if Browse AI services are operational before running robots or tasks. |
| `BROWSEAI_GET_TASK_DETAILS` | Get Task Details | Retrieves detailed information about a specific Browse AI task by robot ID and task ID. This tool returns comprehensive task details including execution status, captured data, screenshots, input parameters, timestamps (created, started, completed), and error information if the task failed. It provides an in-depth view of individual task execution for monitoring, debugging, and data retrieval purposes. Use this tool when you need to: - Check the current status of a task (running, successful, failed) - Retrieve data captured during task execution - Access screenshots taken during the task - Review input parameters used for the task - Debug task failures by examining error details - Monitor task execution timestamps |
| `BROWSEAI_LIST_ROBOT_WEBHOOKS` | List Robot Webhooks | Tool to retrieve all webhooks configured for a specific Browse AI robot. Use when you need to view, audit, or manage webhook configurations for a robot. Returns webhook details including URL, event type, active status, and creation timestamp. |
| `BROWSEAI_RUN_ROBOT` | Run Robot | Triggers the execution of a Browse.ai robot to extract data from websites. This action starts a new robot task that will scrape/extract data according to the robot's pre-configured settings. The robot runs asynchronously and returns a task ID immediately. To retrieve the extracted data, use BROWSEAI_GET_TASK_DETAILS with the returned task ID. Prerequisites: - You must have at least one robot configured in your Browse.ai account - Obtain robot_id using BROWSEAI_GET_ROBOTS action first - Know what input parameters your robot expects (configured during robot creation) Common use cases: - Scraping product data from e-commerce sites - Monitoring website changes for specific content - Extracting structured data from multiple web pages - Automating repetitive web data collection tasks Note: The robot executes asynchronously. Check task status using BROWSEAI_GET_TASK_DETAILS or BROWSEAI_GET_ROBOT_TASKS to retrieve results when the task completes. |
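
As a rough sketch of how these tools chain together outside an agent, here's how you might trigger a robot and then fetch its results directly with the Composio Python SDK. Treat this as an assumption-laden example: it assumes the SDK's `tools.execute` helper, the argument names follow the tool descriptions above, and both IDs are placeholders for values from your own account.

```python
from composio import Composio

composio = Composio()  # reads COMPOSIO_API_KEY from the environment

# Kick off a robot run; "your-robot-id" is a placeholder
# (discover real IDs with BROWSEAI_GET_ROBOTS).
run = composio.tools.execute(
    "BROWSEAI_RUN_ROBOT",
    user_id="your_user_id_here",
    arguments={"robot_id": "your-robot-id"},
)
print(run)  # the response includes a task ID for the asynchronous run

# Later, fetch the captured data for that task.
details = composio.tools.execute(
    "BROWSEAI_GET_TASK_DETAILS",
    user_id="your_user_id_here",
    arguments={"robot_id": "your-robot-id", "task_id": "task-id-from-run"},
)
print(details)
```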

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Browseai MCP server is an implementation of the Model Context Protocol that connects your AI agent to Browseai. It provides structured and secure access so your agent can perform Browseai operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app, which makes it easy to prototype and go from zero to one quickly. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### 1. Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming

### 2. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely, as you'll need it for authentication.

### 3. Install dependencies

Install the required libraries.
What's happening:
- composio connects your agent to external SaaS tools like Browseai
- pydantic-ai lets you create structured AI agents with tool support
- python-dotenv loads your environment variables securely from a .env file
```bash
pip install composio pydantic-ai python-dotenv
```

### 4. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your agent to Composio's API
- USER_ID associates your session with your account for secure tool access
- OPENAI_API_KEY to access OpenAI LLMs
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```
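
If you want to confirm the file is picked up before wiring anything else, a quick standalone check works. Note that pydantic-ai's OpenAI model reads `OPENAI_API_KEY` from the environment on its own, so you won't pass it explicitly in the code below.

```python
# Quick sanity check that the .env file is being loaded.
import os
from dotenv import load_dotenv

load_dotenv()
for var in ("COMPOSIO_API_KEY", "USER_ID", "OPENAI_API_KEY"):
    print(f"{var}: {'set' if os.getenv(var) else 'MISSING'}")
```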

### 5. Import dependencies

What's happening:
- We load environment variables and import required modules
- Composio manages connections to Browseai
- MCPServerStreamableHTTP connects to the Browseai MCP server endpoint
- Agent from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 6. Create a Tool Router Session

What's happening:
- We're creating a Tool Router session that gives your agent access to Browseai tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Browseai
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["browseai"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```

### 7. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Browseai endpoint
- The agent uses GPT-5 to interpret user commands and perform Browseai operations
- The instructions field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
browseai_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[browseai_mcp],
    instructions=(
        "You are a Browseai assistant. Use Browseai tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```
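
Before wiring up the full REPL, you can sanity-check the connection with a single non-streaming run. A minimal sketch, assuming a recent pydantic-ai release (where the final text lives on `result.output`; older versions use `result.data`):

```python
# Optional: a one-shot sanity check (place inside main(), after creating the agent).
result = await agent.run("List my Browseai robots and their input parameters")
print(result.output)  # older pydantic-ai releases expose this as result.data
```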

### 8. Build the chat interface

What's happening:
- The agent reads input from the terminal and streams its response
- Browseai API calls happen automatically under the hood
- The model keeps conversation history to maintain context across turns
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Browseai.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        collected_text = ""
        async for chunk in stream_result.stream_output():
            text_piece = None
            if isinstance(chunk, str):
                text_piece = chunk
            elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                text_piece = chunk.delta
            elif hasattr(chunk, "text"):
                text_piece = chunk.text
            if text_piece:
                collected_text += text_piece

    print(f"Agent: {collected_text}\n")
    history = stream_result.all_messages()
```
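
Once the REPL is running, try prompts that map to the use cases from the introduction, for example:
- "List my robots and the input parameters each one expects"
- "Run my price-tracker robot against the competitor's product page"
- "What's the status of the last run of my job-postings robot?"

Each of these resolves to one or more of the Browseai tools listed earlier, chosen by the model at runtime.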

### 9. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```
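
Save the script (for example as `browseai_agent.py`; the filename is up to you) and start it with `python browseai_agent.py`. The loop keeps the agent alive until you type `exit` or `quit`.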

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Browseai
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["browseai"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    browseai_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[browseai_mcp],
        instructions=(
            "You are a Browseai assistant. Use Browseai tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Browseai.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            collected_text = ""
            async for chunk in stream_result.stream_output():
                text_piece = None
                if isinstance(chunk, str):
                    text_piece = chunk
                elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                    text_piece = chunk.delta
                elif hasattr(chunk, "text"):
                    text_piece = chunk.text
                if text_piece:
                    collected_text += text_piece

        print(f"Agent: {collected_text}\n")
        history = stream_result.all_messages()

if __name__ == "__main__":
    asyncio.run(main())
```

## Conclusion

You've built a Pydantic AI agent that can interact with Browseai through Composio's Tool Router. With this setup, your agent can perform real Browseai actions through natural language.
You can extend this further by:
- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Using multiple MCP endpoints to enable cross-app workflows (for example, Gmail + Browseai for workflow automation)
This architecture makes your AI agent "agent-native": it can securely use APIs in a unified, composable way without custom integrations.

## How to build Browseai MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/browseai/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/browseai/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/browseai/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/browseai/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/browseai/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/browseai/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/browseai/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/browseai/framework/cli)
- [Google ADK](https://composio.dev/toolkits/browseai/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/browseai/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/browseai/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/browseai/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/browseai/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/browseai/framework/crew-ai)

## Related Toolkits

- [Firecrawl](https://composio.dev/toolkits/firecrawl) - Firecrawl automates large-scale web crawling and data extraction. It helps organizations efficiently gather, index, and analyze content from online sources.
- [Tavily](https://composio.dev/toolkits/tavily) - Tavily offers powerful search and data retrieval from documents, databases, and the web. It helps teams locate and filter information instantly, saving hours on research.
- [Exa](https://composio.dev/toolkits/exa) - Exa is a data extraction and search platform for gathering and analyzing information from websites, APIs, or databases. It helps teams quickly surface insights and automate data-driven workflows.
- [Serpapi](https://composio.dev/toolkits/serpapi) - SerpApi is a real-time API for structured search engine results. It lets you automate SERP data collection, parsing, and analysis for SEO and research.
- [Peopledatalabs](https://composio.dev/toolkits/peopledatalabs) - Peopledatalabs delivers B2B data enrichment and identity resolution APIs. Supercharge your apps with accurate, up-to-date business and contact data.
- [Snowflake](https://composio.dev/toolkits/snowflake) - Snowflake is a cloud data warehouse built for elastic scaling, secure data sharing, and fast SQL analytics across major clouds.
- [Posthog](https://composio.dev/toolkits/posthog) - PostHog is an open-source analytics platform for tracking user interactions and product metrics. It helps teams refine features, analyze funnels, and reduce churn with actionable insights.
- [Amplitude](https://composio.dev/toolkits/amplitude) - Amplitude is a digital analytics platform for product and behavioral data insights. It helps teams analyze user journeys and make data-driven decisions quickly.
- [Bright Data MCP](https://composio.dev/toolkits/brightdata_mcp) - Bright Data MCP is an AI-powered web scraping and data collection platform. Instantly access public web data in real time with advanced scraping tools.
- [ClickHouse](https://composio.dev/toolkits/clickhouse) - ClickHouse is an open-source, column-oriented database for real-time analytics and big data processing using SQL. Its lightning-fast query performance makes it ideal for handling large datasets and delivering instant insights.
- [Coinmarketcal](https://composio.dev/toolkits/coinmarketcal) - CoinMarketCal is a community-powered crypto calendar for upcoming events, announcements, and releases. It helps traders track market-moving developments and stay ahead in the crypto space.
- [Control d](https://composio.dev/toolkits/control_d) - Control d is a customizable DNS filtering and traffic redirection platform. It helps you manage internet access, enforce policies, and monitor usage across devices and networks.
- [Databox](https://composio.dev/toolkits/databox) - Databox is a business analytics platform that connects your data from any tool and device. It helps you track KPIs, build dashboards, and discover actionable insights.
- [Databricks](https://composio.dev/toolkits/databricks) - Databricks is a unified analytics platform for big data and AI on the lakehouse architecture. It empowers data teams to collaborate, analyze, and build scalable solutions efficiently.
- [Datagma](https://composio.dev/toolkits/datagma) - Datagma delivers data intelligence and analytics for business growth and market discovery. Get actionable market insights and track competitors to inform your strategy.
- [Delighted](https://composio.dev/toolkits/delighted) - Delighted is a customer feedback platform based on the Net Promoter System®. It helps you quickly gather, track, and act on customer sentiment.
- [Dovetail](https://composio.dev/toolkits/dovetail) - Dovetail is a research analysis platform for transcript review and insight generation. It helps teams code interviews, analyze feedback, and create actionable research summaries.
- [Dub](https://composio.dev/toolkits/dub) - Dub is a short link management platform with analytics and API access. Use it to easily create, manage, and track branded short links for your business.
- [Elasticsearch](https://composio.dev/toolkits/elasticsearch) - Elasticsearch is a distributed, RESTful search and analytics engine for all types of data. It delivers fast, scalable search and powerful analytics across massive datasets.
- [Fireflies](https://composio.dev/toolkits/fireflies) - Fireflies.ai is an AI-powered meeting assistant that records, transcribes, and analyzes voice conversations. It helps teams capture call notes automatically and search or summarize meetings effortlessly.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Browseai MCP?

With a standalone Browseai MCP server, agents and LLMs can only access the fixed set of Browseai tools tied to that server. With the Composio Tool Router, by contrast, agents can dynamically load tools from Browseai and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Browseai tools.

### Can I manage the permissions and scopes for Browseai while using Tool Router?

Yes, absolutely. You can configure which Browseai scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Browseai data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
