# How to integrate Kadoa MCP with Pydantic AI

```json
{
  "title": "How to integrate Kadoa MCP with Pydantic AI",
  "toolkit": "Kadoa",
  "toolkit_slug": "kadoa",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/kadoa/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/kadoa/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:16:37.984Z"
}
```

## Introduction

This guide walks you through connecting Kadoa to Pydantic AI using the Composio Tool Router. By the end, you'll have a working Kadoa agent that responds to natural-language commands such as "fetch the latest data from my workflow", "check the crawl status for session abc123", or "list all pages crawled in the last run".
The goal is to give your Pydantic AI agent real control over a Kadoa account through Composio's Kadoa MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Kadoa with

- [OpenAI Agents SDK](https://composio.dev/toolkits/kadoa/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/kadoa/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/kadoa/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/kadoa/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/kadoa/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/kadoa/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/kadoa/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/kadoa/framework/cli)
- [Google ADK](https://composio.dev/toolkits/kadoa/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/kadoa/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/kadoa/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/kadoa/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/kadoa/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/kadoa/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Kadoa
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Kadoa workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include:
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns

## What is the Kadoa MCP server, and what's possible with it?

The Kadoa MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants (Claude, Cursor, etc.) directly to your Kadoa account. It provides structured, secure access to your data extraction workflows, so your agent can launch crawls, monitor sessions, retrieve extracted data, and manage workflow configurations automatically on your behalf.
- Automated workflow monitoring and management: Ask your agent to fetch workflow configurations, enable data validation, or get the latest results from any extraction workflow you have set up.
- Crawling session control: Have your agent check the status of crawl sessions, list all crawled pages, and pull the raw content (HTML or Markdown) from any page processed by a workflow.
- Notification channel setup and retrieval: Direct your agent to create notification channels, list available notification event types, and fetch specific channel configurations for streamlined alerting.
- Location and environment awareness: Let your agent retrieve all supported locations to ensure workflows run in the right environment before launching new extraction jobs.
- Seamless data access: Instruct your agent to quickly get the most recent data output from any workflow, keeping your automations and dashboards always up to date.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `KADOA_APPROVE_BULK_VALIDATION_RULES` | Bulk Approve Validation Rules | Tool to bulk approve preview validation rules for a workflow. Use after generating validation rules to activate multiple rules at once. |
| `KADOA_CREATE_CRAWL_CONFIG` | Create Crawl Config | Tool to create a new crawling configuration in Kadoa. Use when you need to configure a custom crawl with specific options like navigation settings, extraction rules, or artifact capture preferences. |
| `KADOA_CREATE_NOTIFICATION_CHANNEL` | Create Notification Channel | Tool to create a notification channel for alerts delivery. |
| `KADOA_CREATE_SCHEMA` | Create Schema | Create a new data schema with specified fields and entity type. Use this tool to define a reusable schema for data extraction workflows. Schemas can have Data Fields (typed data like STRING, NUMBER), Raw Content Fields (HTML/Markdown), or Classification Fields (predefined categories). Note: Field names must be camelCase (start lowercase, letters/numbers only). STRING dataType fields require an example value. |
| `KADOA_CREATE_SUPPORT_ISSUE` | Create Support Issue | Tool to create a support ticket in Kadoa. Use when you need to report bugs, request features, or get help with workflows or integrations. User information is automatically inferred from the authentication context. |
| `KADOA_CREATE_WORKFLOW_TRIGGER` | Create Workflow Trigger | Tool to create a trigger that fires when a source workflow emits an event. Use when you need to chain workflows together, triggering one workflow based on events from another. Common use case: trigger data processing workflow when scraping workflow finishes. |
| `KADOA_DELETE_ALL_VALIDATION_RULES` | Delete All Validation Rules | Tool to soft-delete all validation rules for a specific workflow with optional audit trail. This is a bulk operation that marks rules as deleted without permanently removing them. Use when reconfiguring workflows or cleaning up validation rules. |
| `KADOA_DELETE_CRAWL_CONFIG` | Delete Crawl Configuration | Tool to delete a crawling configuration by its config ID. Use when you need to remove an existing crawl configuration. |
| `KADOA_DELETE_NOTIFICATION_CHANNEL` | Delete Notification Channel | Tool to delete a notification channel by its ID. Use when you need to remove a channel that is no longer needed. |
| `KADOA_DELETE_SCHEMA` | Delete Schema | Tool to delete a schema and all its revisions. Use when you need to permanently remove a schema from your Kadoa account. This operation cannot be undone. |
| `KADOA_DELETE_VALIDATION_RULE` | Delete Validation Rule | Tool to delete a validation rule from a Kadoa workflow. Performs a soft delete with optional audit reason. |
| `KADOA_DELETE_VALIDATION_RULES_BULK` | Delete Validation Rules (Bulk) | Tool to bulk delete multiple validation rules for a workflow. Use when you need to remove multiple rules at once instead of deleting them individually. |
| `KADOA_DELETE_WORKFLOW` | Delete Workflow | Delete a workflow permanently from your Kadoa account. Use this tool when you need to permanently remove a workflow. This action cannot be undone. |
| `KADOA_DELETE_WORKFLOW_TRIGGER` | Delete Workflow Trigger | Tool to delete a trigger from a Kadoa workflow. Use when you need to remove a specific trigger configuration. This action cannot be undone. |
| `KADOA_DISABLE_VALIDATION_RULE` | Disable Validation Rule | Tool to disable a validation rule with a mandatory reason. Use when you need to temporarily or permanently deactivate a data validation rule. |
| `KADOA_ENABLE_DATA_VALIDATION` | Enable Data Validation | Tool to enable data validation on a specified workflow. Use after creating or updating a workflow to enforce its validation rules. |
| `KADOA_EXECUTE_BULK_WORKFLOW_OPERATIONS` | Execute Bulk Workflow Operations | Execute actions on multiple workflows at once. Use when you need to perform the same operation on many workflows efficiently. Best-effort processing: each workflow is processed independently, so some may succeed while others fail. Check the response to see individual results and summary statistics. Supported actions: run, pause, resume, delete, approve, assignTags. |
| `KADOA_EXPORT_ACTIVITY` | Export Activity Events | Tool to export activity events from audit logs to CSV format for compliance and audit purposes. Use when you need to retrieve historical activity data, generate audit reports, or track user actions across workflows. Supports filtering by time range, user, workflow, event types, and other criteria. |
| `KADOA_EXPORT_ACTIVITY_WORKFLOWS` | Export Activity Workflows | Tool to export workflow configurations and metadata as CSV for portfolio reviews and compliance reporting. Use when you need to generate compliance reports or review workflow activity across a specific time period. Returns CSV data containing workflow details, states, and activity metadata. |
| `KADOA_FETCH_WORKFLOW_CONFIGURATION` | Get Workflow by ID | Retrieve detailed configuration of a workflow by its ID. Returns workflow metadata, extraction schema, scheduling settings, data validation config, and run status. Use this to inspect a workflow's setup or check its current state. |
| `KADOA_GET_ALL_LOCATIONS` | Get all locations | Retrieves all available scraping proxy locations (countries) supported by Kadoa. Returns ISO country codes (e.g., US, GB, DE) that can be used when configuring workflows to scrape from specific geographic regions. Use this to see which locations are available before creating location-specific scraping workflows. |
| `KADOA_GET_CRAWL_BUCKET_DATA` | Get Crawl Bucket Data | Tool to retrieve file content from the Kadoa crawling bucket (HTML or screenshot). Use when you need to access raw files stored during a crawl session. |
| `KADOA_GET_CRAWL_CONFIG` | Get Crawl Configuration | Tool to retrieve a crawling configuration by its ID. Use when you need to view the detailed settings of an existing crawl configuration. |
| `KADOA_GET_CRAWLED_PAGE_CONTENT` | Get Crawled Page Content | Tool to retrieve content of a crawled page. Use when you need the HTML or Markdown of a page from a specific crawling session. |
| `KADOA_GET_CRAWLED_PAGES` | Get Crawled Pages | Tool to list pages crawled during a session. Use when you need to paginate through results after starting a crawl session. |
| `KADOA_GET_CRAWL_STATUS` | Get Crawl Status | Tool to fetch current status of a crawling session. Use when you need to check progress of a crawl by its session ID. |
| `KADOA_GET_EVENT_TYPE` | Get Event Type Details | Tool to retrieve details for a specific notification event type. Use when you need to understand the schema, description, or configuration of a particular event type for setting up notifications. |
| `KADOA_GET_EVENT_TYPES` | Get Notification Event Types | Tool to retrieve supported notification event types. Use when you need to enumerate available notification triggers. |
| `KADOA_GET_LATEST_WORKFLOW_DATA` | Get Latest Workflow Data | Retrieves the extracted data from a Kadoa workflow's most recent run (or a specific run if runId is provided). Returns paginated records in JSON or CSV format. Use Get Workflows action first to obtain a valid workflowId. |
| `KADOA_GET_LATEST_WORKFLOW_VALIDATION` | Get Latest Workflow Validation | Retrieves the latest validation results for the most recent job of a workflow. Returns comprehensive validation data including anomaly counts, detailed anomaly lists by rule, schema issues, and change detection summaries. |
| `KADOA_GET_NOTIFICATION_CHANNEL` | Get Notification Channel | Tool to retrieve details of a specific notification channel. Use when you have a channel's ID and need its configuration. |
| `KADOA_GET_NOTIFICATION_LOGS` | Get Notification Logs | Tool to retrieve notification event logs with optional filtering by workflow, event type, and date range. Use when you need to audit notification delivery, troubleshoot missing notifications, or review event history. |
| `KADOA_GET_NOTIFICATION_SETTING` | Get Notification Setting | Retrieves a specific notification setting by its unique identifier. Use this tool to fetch details about how notifications are configured for specific events and which channels are linked. Returns the event type, enabled status, linked channels, and timestamps. |
| `KADOA_GET_SCHEMA` | Get Schema by ID | Retrieve a specific schema by its unique identifier. Returns schema metadata, field definitions, and configuration. Use this to inspect available data structures or validate schema configurations for workflows. |
| `KADOA_GET_VALIDATION_ANOMALIES` | Get Validation Anomalies | Tool to retrieve all anomalies for a specific validation. Use this when you need to fetch detailed anomaly data detected during a data validation run, grouped by validation rules with pagination support. |
| `KADOA_GET_VALIDATION_ANOMALIES_BY_RULE` | Get Validation Anomalies By Rule | Tool to retrieve anomalies for a specific validation rule. Use this to investigate specific rule violations and understand what data failed validation checks. |
| `KADOA_GET_VALIDATION_CONFIG` | Get Validation Configuration | Tool to retrieve the data validation configuration for a specific workflow. Use this to check validation status, alerting thresholds, and rule counts before modifying validation settings. |
| `KADOA_GET_VALIDATION_RULE` | Get Validation Rule | Tool to retrieve a specific validation rule by its ID. Use this to inspect rule details including configuration, status, and metadata. |
| `KADOA_GET_WORKFLOW_AUDIT_LOG` | Get Workflow Audit Log | Retrieve audit log entries for a workflow. Use when you need to track changes and operations performed on a workflow. Returns paginated log entries showing operation type, user information, and changed values. |
| `KADOA_GET_WORKFLOW_JOB` | Get Workflow Job | Tool to retrieve the current status and telemetry information for a specific workflow job. Use when you need to check the execution status, errors, or metadata of a particular job run. |
| `KADOA_GET_WORKFLOW_RUN_HISTORY` | Get Workflow Run History | Tool to fetch workflow run history. Use when you need to retrieve past run records for a workflow after execution. |
| `KADOA_GET_WORKFLOWS` | Get Workflows | Retrieve a paginated list of workflows with optional filtering. Use this tool to list all workflows in your Kadoa account. You can filter by: - search: Find workflows by name, URL, or ID - state: Filter by workflow state (ACTIVE, PAUSED, ERROR, etc.) - monitoring: Filter by whether monitoring is enabled Returns workflow details including name, state, URLs, schema, and run statistics. |
| `KADOA_GET_WORKFLOW_TRIGGER` | Get Workflow Trigger | Tool to retrieve a specific trigger for a workflow. Use this when you need to inspect trigger details including its configuration and status. |
| `KADOA_GET_WORKFLOW_VALIDATION_RESULTS` | Get Workflow Validation Results | Retrieves the latest validation results for a specific workflow job. Returns validation details including anomalies detected, rules executed, and schema issues. Returns has_results=false if no validation results exist for the specified job. |
| `KADOA_GET_WORKSPACE_DETAILS` | Get Workspace Details | Tool to retrieve detailed information about a workspace (user, team, or organization). Use when you need to get workspace metadata including name, type, email, feature flags, and team information. |
| `KADOA_LIST_ACTIVITY` | List Activity Events | Tool to retrieve activity events from audit logs with basic filtering and pagination. Use when you need to track workflow events, user actions, or system activities. Supports time-based filtering (absolute or relative), event type filtering, and resource filtering. |
| `KADOA_LIST_CHANGES` | List Changes | Tool to retrieve all data changes detected across workflows in your Kadoa account. Use this when you need to monitor what data has changed in your workflows over time. You can filter changes by workflow IDs, date range, and paginate through results. |
| `KADOA_LIST_CRAWL_SESSIONS` | List Crawl Sessions | Tool to retrieve a paginated list of crawling sessions with optional filtering. Use when you need to view all crawl sessions or filter by user ID. |
| `KADOA_LIST_JOB_VALIDATIONS` | List Job Validations | Tool to list all validation runs for a specific job with pagination support. Use when you need to retrieve the full validation history for a job, not just the latest result. |
| `KADOA_LIST_NOTIFICATION_CHANNELS` | List Notification Channels | Tool to retrieve all notification channels configured for the account. Use when you need to list available channels for alerts delivery. |
| `KADOA_LIST_NOTIFICATION_SETTINGS` | List Notification Settings | Tool to retrieve all notification settings, with optional filtering by workflow ID or event type. Use when you need to list configured notifications or check existing settings before creating new ones. |
| `KADOA_LIST_SCHEMAS` | List Schemas | Tool to retrieve all schemas accessible by the authenticated user. Use this when you need to see available schema definitions or find a specific schema by name or entity type. |
| `KADOA_LIST_SUPPORT_STATES` | List Support States | Tool to retrieve available support issue states. Use when you need to see what states can be assigned to support tickets. |
| `KADOA_LIST_VALIDATION_RULES` | List Validation Rules | Tool to list all data validation rules with optional pagination and filtering. |
| `KADOA_LIST_WORKFLOW_TRIGGERS` | List Workflow Triggers | Tool to get all triggers where the specified workflow is the source. Use when you need to retrieve the list of triggers associated with a workflow, such as understanding what actions or workflows are triggered by this workflow's execution. |
| `KADOA_PAUSE_CRAWL_SESSION` | Pause Crawl Session | Tool to pause an active crawling session. Use when you need to temporarily stop a running crawl without terminating it completely. |
| `KADOA_PAUSE_WORKFLOW` | Pause Workflow | Tool to pause a running or scheduled workflow. Use when you need to temporarily stop a workflow from executing. The workflow will remain paused until explicitly resumed or reactivated. |
| `KADOA_POST_ADVANCED_WORKFLOW` | Create Advanced Workflow | Tool to create an advanced workflow. Use when you need a valid advanced workflow ID before updating steps. |
| `KADOA_POST_CRAWL` | Start Crawl Session | Starts a new web crawling session to crawl and index pages from a website. Use this tool when you need to: - Crawl an entire website or specific sections - Gather page content for extraction or analysis - Index multiple pages from a domain Returns a session_id that can be used with get_crawl_status to monitor progress and get_crawled_pages to retrieve the crawled content. |
| `KADOA_POST_NOTIFICATION_SETTING` | Create Notification Setting | Tool to create a notification setting linking channels to events. Use when subscribing workflows or workspace-level notifications to specific events. |
| `KADOA_POST_NOTIFICATION_TEST` | Send Test Notification | Sends a test notification event to verify notification channel configurations are working correctly. Use this tool to test that your notification channels (email, Slack, Teams, webhooks, etc.) are properly configured before relying on them for production workflows. The test sends a simulated event of the specified type, which will be delivered to all configured notification channels for that event type. |
| `KADOA_POST_WEBHOOK_SUBSCRIPTION` | Subscribe to Webhook Events | Tool to subscribe to specified webhook events. This will create a webhook channel and then create notification settings for the provided events linking that channel. |
| `KADOA_POST_WORKFLOW` | Create Workflow | Create a new Kadoa web scraping workflow. This tool creates a workflow that can extract structured data from web pages. Provide URLs to scrape, specify a navigation mode, and define the data schema. For structured extraction: Use 'single-page' mode with entity + fields. For paginated content: Use 'paginated-page' mode. For AI-driven navigation: Use 'agentic-navigation' mode with userPrompt (Enterprise only). Note: Workflow creation may take 60+ seconds as Kadoa analyzes the target URLs. |
| `KADOA_POST_WORKFLOW_MONITORING` | Configure Workflow Monitoring | Configure monitoring and scheduling for a Kadoa workflow to detect data changes. This tool allows you to: - Set up recurring workflow runs at specified intervals (daily, hourly, weekly, etc.) - Enable monitoring to detect when specific fields change (e.g., price changes, stock updates) - Configure conditions to filter which changes trigger notifications Use this after creating a workflow or to update an existing workflow's monitoring settings. The workflow must exist before you can configure its monitoring. |
| `KADOA_POST_WORKFLOW_VALIDATION_RULE` | Generate Workflow Validation Rule | Generate an AI-powered data validation rule for a Kadoa workflow. This tool uses AI to convert a natural-language description into a SQL-based validation rule that can detect data quality issues in workflow outputs. Prerequisites: - The workflow must have completed at least one successful job run - The workflow must have data validation enabled The generated rule will be created in 'preview' status for testing before activation. Use the List Validation Rules action to view created rules, and the Bulk Approve Validation Rules action to activate them. |
| `KADOA_PUT_NOTIFICATION_CHANNEL` | Update Notification Channel | Tool to update an existing notification channel. Use when you need to modify channel details. |
| `KADOA_RESUME_CRAWL_SESSION` | Resume Crawl Session | Tool to resume a paused crawling session. Use when you need to restart a crawl that was previously paused or stopped. |
| `KADOA_RESUME_WORKFLOW` | Resume Workflow | Resumes a paused, preview, or error workflow. Use when you need to activate a workflow that is not currently running. Cannot resume workflows in certain states; check workflow state first using Get Workflows action. |
| `KADOA_RUN_ADHOC_EXTRACTION` | Run Ad-hoc Extraction | Tool to synchronously extract data from a URL using a given template. Use after choosing the schemaId (custom or 'html', 'body', 'markdown'). |
| `KADOA_RUN_WORKFLOW` | Run Workflow | Tool to trigger a workflow to run immediately. Use when you need to start a workflow execution on demand. Returns a job ID that can be used to track the execution status. |
| `KADOA_SCHEDULE_VALIDATION_JOB` | Schedule Validation Job | Tool to schedule a data validation job for a specific workflow job. Use this to trigger validation rules on job data, detect anomalies, and ensure data quality. Supports custom SQL rules, dry-run mode, and idempotent validation IDs. |
| `KADOA_UNSUBSCRIBE_FROM_WEBHOOK_EVENTS` | Unsubscribe from Webhook Events | Unsubscribe from webhook event notifications by deleting a notification setting. Use this tool when you need to: - Remove an existing notification subscription by its settings ID - Stop receiving webhook notifications for specific events - Clean up notification configurations The settings ID can be obtained from the response of the Subscribe to Webhook Events action or from listing notification settings. |
| `KADOA_UPDATE_NOTIFICATION_SETTINGS` | Update Notification Settings | Tool to update existing notification settings for events. Use when modifying notification configurations such as enabled status, event type, event configuration, or linked channels. |
| `KADOA_UPDATE_SCHEMA` | Update Schema | Tool to update an existing Kadoa schema. Use when you need to modify schema metadata (name, entity) or update the field definitions. At least one of name, entity, or fields must be provided to update the schema. |
| `KADOA_UPDATE_VALIDATION_CONFIG` | Update Validation Configuration | Tool to update the complete data validation configuration including alerting settings for a specific workflow. Use this to modify validation status, alert thresholds, and notification preferences. |
| `KADOA_UPDATE_WORKFLOW_METADATA` | Update Workflow Metadata | Tool to update workflow metadata such as name, description, tags, and configuration settings. Use when you need to modify an existing workflow's properties without recreating it. |
| `KADOA_UPDATE_WORKFLOW_TRIGGER` | Update Workflow Trigger | Tool to update trigger properties including event type and enabled status. Use when you need to modify an existing workflow trigger's configuration. |

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Kadoa MCP server is an implementation of the Model Context Protocol that connects your AI agent to Kadoa, giving it structured access to perform Kadoa operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app. The managed server helps you prototype quickly and get from zero to one faster; for production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming
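
You can verify the interpreter version up front; a minimal check like the following fails fast before any library imports (a sketch, not part of the official setup):

```python
import sys

# Fail fast if the interpreter is too old for the libraries used below.
if sys.version_info < (3, 9):
    raise SystemExit(
        f"Python 3.9+ required, found {sys.version_info.major}.{sys.version_info.minor}"
    )
print("Python version OK")
```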

### 1. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely as you'll need it for authentication.

### 2. Install dependencies

Install the required libraries.
What's happening:
- composio connects your agent to external SaaS tools like Kadoa
- pydantic-ai lets you create structured AI agents with tool support
- python-dotenv loads your environment variables securely from a .env file
```bash
pip install composio pydantic-ai python-dotenv
```

### 3. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your agent to Composio's API
- USER_ID associates your session with your account for secure tool access
- OPENAI_API_KEY authenticates requests to OpenAI's models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```
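
To surface a missing variable early rather than at the first failed API call, you can validate the environment at startup. This is a sketch: the `require_env` helper name is our own, and the demo value is set only for illustration.

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising if any are missing."""
    values = {name: os.getenv(name) for name in names}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return values

# For illustration only: seed a demo value so the call below succeeds.
os.environ.setdefault("COMPOSIO_API_KEY", "demo-key")
env = require_env("COMPOSIO_API_KEY")
```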

### 4. Import dependencies

What's happening:
- We load environment variables and import required modules
- Composio manages connections to Kadoa
- MCPServerStreamableHTTP connects to the Kadoa MCP server endpoint
- Agent from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 5. Create a Tool Router Session

What's happening:
- We're creating a Tool Router session that gives your agent access to Kadoa tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Kadoa
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["kadoa"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```
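
The URL check above can be wrapped in a small helper so the failure mode is explicit. This is a sketch: `mcp_url_from` is a hypothetical helper, the session shape (`session.mcp.url`) is assumed from the snippet above, and the `SimpleNamespace` object stands in for a real Composio session.

```python
from types import SimpleNamespace

def mcp_url_from(session) -> str:
    """Extract the MCP server URL from a Tool Router session, failing loudly."""
    url = getattr(getattr(session, "mcp", None), "url", None)
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
    return url

# Stand-in session object mirroring the attribute shape used above:
fake_session = SimpleNamespace(mcp=SimpleNamespace(url="https://example.invalid/mcp"))
url = mcp_url_from(fake_session)
```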

### 6. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Kadoa endpoint
- The agent uses GPT-5 to interpret user commands and perform Kadoa operations
- The instructions field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
kadoa_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[kadoa_mcp],
    instructions=(
        "You are a Kadoa assistant. Use Kadoa tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```
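
If you find yourself reusing the same role and guidelines across agents, the instructions string can be composed from parts. A minimal sketch (the `build_instructions` helper is our own, not part of Pydantic AI):

```python
def build_instructions(role: str, guidelines: list[str]) -> str:
    """Compose a role line plus bulleted guidelines into one instruction string."""
    lines = [f"You are a {role}."] + [f"- {g}" for g in guidelines]
    return "\n".join(lines)

instructions = build_instructions(
    "Kadoa assistant",
    [
        "Use Kadoa tools to help users with their requests.",
        "Ask clarifying questions when needed.",
    ],
)
```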

### 7. Build the chat interface

What's happening:
- The agent reads input from the terminal and streams its response
- Kadoa API calls happen automatically under the hood
- The model keeps conversation history to maintain context across turns
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Kadoa.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        collected_text = ""
        async for chunk in stream_result.stream_output():
            text_piece = None
            if isinstance(chunk, str):
                text_piece = chunk
            elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                text_piece = chunk.delta
            elif hasattr(chunk, "text"):
                text_piece = chunk.text
            if text_piece:
                collected_text += text_piece
        result = stream_result

    print(f"Agent: {collected_text}\n")
    history = result.all_messages()
```

### 8. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Kadoa
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["kadoa"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    kadoa_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[kadoa_mcp],
        instructions=(
            "You are a Kadoa assistant. Use Kadoa tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Kadoa.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            collected_text = ""
            async for chunk in stream_result.stream_output():
                text_piece = None
                if isinstance(chunk, str):
                    text_piece = chunk
                elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                    text_piece = chunk.delta
                elif hasattr(chunk, "text"):
                    text_piece = chunk.text
                if text_piece:
                    collected_text += text_piece
            result = stream_result

        print(f"Agent: {collected_text}\n")
        history = result.all_messages()

if __name__ == "__main__":
    asyncio.run(main())
```
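The duck-typed `if`/`elif` chain in the streaming loop can be factored into a small helper, which also makes it easy to unit-test in isolation. A sketch (the chunk shapes it probes for mirror the loop above and are assumptions about what `stream_output()` may yield; `FakeDelta` is a stand-in used only for the demo):

```python
from dataclasses import dataclass

def extract_text(chunk) -> str:
    """Return the text fragment carried by a streamed chunk, or "" if none."""
    if isinstance(chunk, str):
        return chunk
    delta = getattr(chunk, "delta", None)
    if isinstance(delta, str):
        return delta
    text = getattr(chunk, "text", None)
    if isinstance(text, str):
        return text
    return ""

# Stand-in for a delta-style chunk object, for demonstration only.
@dataclass
class FakeDelta:
    delta: str

# Mixed chunk shapes all reduce to plain text; unknown shapes contribute nothing.
pieces = [extract_text(c) for c in ["Hel", FakeDelta("lo"), object()]]
print("".join(pieces))  # -> Hello
```

With this helper, the body of the REPL loop shrinks to `collected_text += extract_text(chunk)`.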

## Conclusion

You've built a Pydantic AI agent that can interact with Kadoa through Composio's Tool Router. With this setup, your agent can perform real Kadoa actions through natural language.

You can extend this further by:

- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Using multiple MCP endpoints to enable cross-app workflows (for example, Gmail + Kadoa for workflow automation)

This architecture makes your AI agent "agent-native": it can securely use APIs in a unified, composable way without custom integrations.

## How to build Kadoa MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/kadoa/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/kadoa/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/kadoa/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/kadoa/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/kadoa/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/kadoa/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/kadoa/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/kadoa/framework/cli)
- [Google ADK](https://composio.dev/toolkits/kadoa/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/kadoa/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/kadoa/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/kadoa/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/kadoa/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/kadoa/framework/crew-ai)

## Related Toolkits

- [Excel](https://composio.dev/toolkits/excel) - Microsoft Excel is a robust spreadsheet application for organizing, analyzing, and visualizing data. It's the go-to tool for calculations, reporting, and flexible data management.
- [21risk](https://composio.dev/toolkits/_21risk) - 21RISK is a web app built for easy checklist, audit, and compliance management. It streamlines risk processes so teams can focus on what matters.
- [Abstract](https://composio.dev/toolkits/abstract) - Abstract provides a suite of APIs for automating data validation and enrichment tasks. It helps developers streamline workflows and ensure data quality with minimal effort.
- [Addressfinder](https://composio.dev/toolkits/addressfinder) - Addressfinder is a data quality platform for verifying addresses, emails, and phone numbers. It helps you ensure accurate customer and contact data every time.
- [Agentql](https://composio.dev/toolkits/agentql) - Agentql is a toolkit that connects AI agents to the web using a specialized query language. It enables structured web interaction and data extraction for smarter automations.
- [Agenty](https://composio.dev/toolkits/agenty) - Agenty is a web scraping and automation platform for extracting data and automating browser tasks—no coding needed. It streamlines data collection, monitoring, and repetitive online actions.
- [Ambee](https://composio.dev/toolkits/ambee) - Ambee is an environmental data platform providing real-time, hyperlocal APIs for air quality, weather, and pollen. Get precise environmental insights to power smarter decisions in your apps and workflows.
- [Ambient weather](https://composio.dev/toolkits/ambient_weather) - Ambient Weather is a platform for personal weather stations with a robust API for accessing local, real-time, and historical weather data. Get detailed environmental insights directly from your own sensors for smarter apps and automations.
- [Anonyflow](https://composio.dev/toolkits/anonyflow) - Anonyflow is a service for encryption-based data anonymization and secure data sharing. It helps organizations meet GDPR, CCPA, and HIPAA data privacy compliance requirements.
- [Api ninjas](https://composio.dev/toolkits/api_ninjas) - Api ninjas offers 120+ public APIs spanning categories like weather, finance, sports, and more. Developers use it to supercharge apps with real-time data and actionable endpoints.
- [Api sports](https://composio.dev/toolkits/api_sports) - Api sports is a comprehensive sports data platform covering 2,000+ competitions with live scores and 15+ years of stats. Instantly access up-to-date sports information for analysis, apps, or chatbots.
- [Apify](https://composio.dev/toolkits/apify) - Apify is a cloud platform for building, deploying, and managing web scraping and automation tools called Actors. It lets you automate data extraction and workflow tasks at scale—no infrastructure headaches.
- [Autom](https://composio.dev/toolkits/autom) - Autom is a lightning-fast search engine results data platform for Google, Bing, and Brave. Developers use it to access fresh, low-latency SERP data on demand.
- [Beaconchain](https://composio.dev/toolkits/beaconchain) - Beaconchain is a real-time analytics platform for Ethereum 2.0's Beacon Chain. It provides detailed insights into validators, blocks, and overall network performance.
- [Big data cloud](https://composio.dev/toolkits/big_data_cloud) - BigDataCloud provides APIs for geolocation, reverse geocoding, and address validation. Instantly access reliable location intelligence to enhance your applications and workflows.
- [Bigpicture io](https://composio.dev/toolkits/bigpicture_io) - BigPicture.io offers APIs for accessing detailed company and profile data. Instantly enrich your applications with up-to-date insights on 20M+ businesses.
- [Bitquery](https://composio.dev/toolkits/bitquery) - Bitquery is a blockchain data platform offering indexed, real-time, and historical data from 40+ blockchains via GraphQL APIs. Get unified, reliable access to complex on-chain data for analytics, trading, and research.
- [Brightdata](https://composio.dev/toolkits/brightdata) - Brightdata is a leading web data platform offering advanced scraping, SERP APIs, and anti-bot tools. It lets you collect public web data at scale, bypassing blocks and friction.
- [Builtwith](https://composio.dev/toolkits/builtwith) - BuiltWith is a web technology profiler that uncovers the technologies powering any website. Gain actionable insights into analytics, hosting, and content management stacks for smarter research and lead generation.
- [Byteforms](https://composio.dev/toolkits/byteforms) - Byteforms is an all-in-one platform for creating forms, managing submissions, and integrating data. It streamlines workflows by centralizing form data collection and automation.

## Frequently Asked Questions

### What is the difference between the Tool Router MCP and a standalone Kadoa MCP?

With a standalone Kadoa MCP server, agents and LLMs can only access the fixed set of Kadoa tools tied to that server. With the Composio Tool Router, agents dynamically load tools from Kadoa and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Kadoa tools.

### Can I manage the permissions and scopes for Kadoa while using Tool Router?

Yes, absolutely. You can configure which Kadoa scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Kadoa data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
