# How to integrate Datadog MCP with Pydantic AI

```json
{
  "title": "How to integrate Datadog MCP with Pydantic AI",
  "toolkit": "Datadog",
  "toolkit_slug": "datadog",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/datadog/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/datadog/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:08:17.883Z"
}
```

## Introduction

This guide walks you through connecting Datadog to Pydantic AI using the Composio Tool Router. By the end, you'll have a working Datadog agent that can create a downtime for a nightly maintenance window, list all monitors tracking CPU usage, or create a synthetic API test for a login endpoint, all through natural language commands.
In other words, you'll learn how to give your Pydantic AI agent real control over a Datadog account through Composio's Datadog MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.


## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Datadog
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Datadog workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include:
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns
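
To see the type-safety point in action, here is a minimal sketch of a typed agent; it assumes a recent Pydantic AI release where `output_type` and `result.output` are the current names (older versions used `result_type` and `result.data`), and `MonitorSummary` is a hypothetical schema chosen for illustration:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# Hypothetical output schema -- any Pydantic model works here.
class MonitorSummary(BaseModel):
    name: str
    status: str

# The agent validates the model's answer against MonitorSummary.
agent = Agent("openai:gpt-5", output_type=MonitorSummary)
result = agent.run_sync("Summarize the 'cpu high' monitor: name and status.")
print(result.output)  # a validated MonitorSummary instance
```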

## What is the Datadog MCP server, and what's possible with it?

The Datadog MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants, such as Claude or Cursor, directly to your Datadog account. It provides structured and secure access to your monitoring and observability platform, so your agent can perform actions like creating dashboards, managing monitors, scheduling downtimes, and tracking key events on your behalf.
- Custom dashboard creation and management: Direct your agent to build new dashboards or retrieve detailed information about existing dashboards for unified infrastructure and application monitoring.
- Monitor setup and deletion: Easily have your agent create new monitors to track critical metrics or remove outdated ones to keep your alerting system relevant.
- Automated downtime scheduling: Let your agent schedule maintenance windows by creating downtimes that suppress alerts during planned outages or deployments.
- Event tracking and logging: Ask your agent to create and log significant events—like deployments or configuration changes—so your team always stays informed.
- Service level objectives and synthetic testing: Instruct your agent to define SLOs or set up synthetic API tests for continuous reliability and performance tracking.
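
To make these concrete, here are the kinds of prompts the finished agent from this guide can act on; the wording (and the service name in the last example) is illustrative, not a fixed command set:

```python
# Illustrative natural-language requests the agent can translate into
# the Datadog tool calls listed in the next section.
examples = [
    "Create a downtime for our nightly maintenance window, 1-3 AM UTC.",
    "List all monitors tracking CPU usage.",
    "Create a synthetic API test for the login endpoint.",
    "Log a deployment event for checkout-api version 2.4.1.",
]
```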

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `DATADOG_CREATE_DASHBOARD` | Create Dashboard | Create a dashboard in Datadog. Dashboards provide customizable visualizations for monitoring your infrastructure, applications, and business metrics in a unified view. |
| `DATADOG_CREATE_DOWNTIME` | Create downtime | Creates a new downtime in Datadog to suppress alerts during maintenance windows or planned outages. Useful for preventing false alarms during deployments or maintenance. |
| `DATADOG_CREATE_EVENT` | Create event | Creates a new event in Datadog. Events are useful for tracking deployments, outages, configuration changes, and other important occurrences. |
| `DATADOG_CREATE_MONITOR` | Create monitor | Creates a new Datadog monitor to track metrics, logs, or other data sources with configurable alerting thresholds and notifications. |
| `DATADOG_CREATE_SLO` | Create SLO | Create a Service Level Objective (SLO) in Datadog. SLOs help you define and track reliability targets for your services, enabling data-driven decisions about service quality and reliability investments. |
| `DATADOG_CREATE_SYNTHETIC_API_TEST` | Create Synthetic API Test | Create a synthetic API test in Datadog. Creates a new synthetic API test that continuously monitors API endpoints from multiple locations worldwide. Useful for proactive monitoring of API uptime, performance, and functionality. |
| `DATADOG_CREATE_WEBHOOK` | Create Webhook | Create a webhook in Datadog. Registers a named destination endpoint; each monitor must explicitly reference the webhook by name in its message or notification settings for alerts to be delivered. |
| `DATADOG_DELETE_DASHBOARD` | Delete Dashboard | Delete a dashboard in Datadog. Permanently removes a dashboard from your organization. This action cannot be undone. Use with caution. |
| `DATADOG_DELETE_MONITOR` | Delete monitor | Deletes a Datadog monitor permanently. Use with caution as this action cannot be undone. |
| `DATADOG_GET_DASHBOARD` | Get Dashboard | Get a specific dashboard from Datadog. Retrieves detailed information about a dashboard including its widgets, layout, template variables, and metadata. |
| `DATADOG_GET_MONITOR` | Get monitor | Retrieves detailed information about a specific Datadog monitor, including its current state, configuration, and any active downtimes. |
| `DATADOG_GET_SERVICE_DEPENDENCIES` | Get Service Dependencies | Get service dependency mapping from Datadog APM. This action retrieves the dependency graph for a specific service, showing both upstream services (that call this service) and downstream services (that this service calls). It's essential for understanding the blast radius of service failures, identifying critical dependencies during incidents, analyzing service communication patterns, planning architectural changes, and monitoring service health in context. The dependency information includes call rates, error rates, and latency metrics to help assess the health of service relationships. Requires APM instrumentation on the target service; uninstrumented services return empty or incomplete dependency data. |
| `DATADOG_GET_SYNTHETICS_LOCATIONS` | Get Synthetics Locations | Tool to retrieve all available public and private locations for Synthetic tests in Datadog. Use when you need a list of location identifiers for creating or managing synthetic tests. |
| `DATADOG_GET_TAGS` | Get host tags | Retrieves all tags associated with a specific host in Datadog. Useful for understanding host metadata and organizing infrastructure. |
| `DATADOG_GET_USAGE_SUMMARY` | Get usage summary | Retrieves usage summary information from Datadog including API calls, hosts, containers, and other billable usage metrics. Useful for cost monitoring and usage analysis. Months with no activity return empty payloads on success; absent data is expected, not an error. |
| `DATADOG_LIST_ALL_TAGS` | List All Tags | List all tags from Datadog. Tags help organize and filter your infrastructure and applications. This action shows all tags in use across your organization. |
| `DATADOG_LIST_API_KEYS` | List API Keys | List API keys in Datadog. Retrieves all API keys in the organization for security auditing, access management, and key rotation planning. Helps maintain security posture by tracking key usage and ownership. Response contains sensitive key metadata (names, owners, last-used timestamps); restrict tool access to authorized personnel and handle output securely. |
| `DATADOG_LIST_APM_SERVICES` | List APM Services | List APM services from Datadog. Application Performance Monitoring (APM) provides deep visibility into your applications, helping you track performance, errors, and dependencies. |
| `DATADOG_LIST_AWS_INTEGRATION` | List AWS Integration | List AWS integrations in Datadog. Retrieves all configured AWS account integrations, showing which AWS accounts are monitored by Datadog and their configuration settings. Useful for cloud infrastructure management and ensuring comprehensive monitoring coverage. |
| `DATADOG_LIST_DASHBOARDS` | List dashboards | Lists all Datadog dashboards with basic information. Useful for dashboard management and getting an overview of available dashboards. |
| `DATADOG_LIST_EVENTS` | List events | Lists events from Datadog within a specified time range. Events track important occurrences like deployments, outages, and configuration changes. Combining multiple filters (tags, sources, priority) with narrow time ranges may return empty results even when events exist — start with broad filters and narrow incrementally. Large time ranges with minimal filtering can return very high event volumes; tune tags, sources, and start/end before processing results. |
| `DATADOG_LIST_HOSTS` | List hosts | Lists all hosts in your Datadog infrastructure with detailed information including metrics, tags, and status. Useful for infrastructure monitoring and management. |
| `DATADOG_LIST_INCIDENTS` | List Incidents | List incidents from Datadog. Incident Management helps you track, manage, and resolve incidents efficiently with comprehensive timeline and impact tracking. |
| `DATADOG_LIST_LOG_INDEXES` | List Log Indexes | Tool to retrieve a list of all log indexes configured in Datadog, including their names and configurations. Use before DATADOG_SEARCH_LOGS to identify the correct index name; searching without specifying the right index can hide valid logs and increase usage costs across high-volume indexes. |
| `DATADOG_LIST_METRICS` | List active metrics | Discover metric names by listing actively reporting metrics since a given timestamp. Use when you need to find what metrics exist before querying timeseries data with DATADOG_QUERY_METRICS. |
| `DATADOG_LIST_MONITORS` | List monitors | Get all monitor details. This endpoint allows you to retrieve information about all monitors configured in your organization. You can filter by group states, name, tags, and use pagination to manage large result sets. |
| `DATADOG_LIST_ROLES` | List Roles | List roles from Datadog organization. Roles define sets of permissions that control what users can do within your Datadog organization. |
| `DATADOG_LIST_SERVICE_CHECKS` | List service checks | Lists service checks from Datadog. Service checks are status checks that track the health of your services and infrastructure components. |
| `DATADOG_LIST_SL_OS` | List SLOs | List Service Level Objectives (SLOs) from Datadog. Service Level Objectives help you track the reliability and performance of your services by setting measurable targets for key metrics. |
| `DATADOG_LIST_SYNTHETICS` | List Synthetics Tests | List Synthetics tests from Datadog. Synthetics monitoring allows you to proactively monitor your applications and APIs by simulating user interactions and API calls from various locations. |
| `DATADOG_LIST_USERS` | List Users | List users from Datadog organization. User management allows you to see team members, their roles, and access levels within your Datadog organization. |
| `DATADOG_LIST_WEBHOOKS` | List Webhooks | List webhooks from Datadog. Webhooks allow you to send notifications to external services when monitors trigger, enabling integration with your workflows. |
| `DATADOG_MUTE_MONITOR` | Mute Monitor | Mute a monitor in Datadog. Temporarily silences alerts from a monitor, which is useful during maintenance windows, deployments, or when investigating known issues to prevent alert fatigue. |
| `DATADOG_QUERY_METRICS` | Query metrics | Queries Datadog metrics and returns time series data. Useful for retrieving historical metric data, creating custom dashboards, or building reports. |
| `DATADOG_SEARCH_LOGS` | Search logs | Searches Datadog logs with advanced filtering capabilities. Important notes: the sort parameter is not supported by the Datadog Logs API and will cause errors; time parameters must be in milliseconds (13-digit UNIX timestamps); the limit parameter is passed as a string to the API; log content is nested under the `content` field in the API response. Useful for troubleshooting, monitoring application behavior, and analyzing log patterns. |
| `DATADOG_SEARCH_SPANS_ANALYTICS` | Search Spans Analytics | Search and analyze span data with aggregations in Datadog. This action uses the Datadog Spans Analytics API to perform advanced queries and aggregations on trace span data. It's essential for analyzing error rates and latency patterns, understanding service dependencies and bottlenecks, root cause analysis during incidents, and performance monitoring and optimization. The API supports complex queries with grouping, filtering, and various aggregation functions similar to log analytics. The request body must conform to the `aggregate_request` schema; schema violations return "Invalid type. Expected 'aggregate_request'". If `filter` or `compute` cannot satisfy this schema, use basic trace search instead. |
| `DATADOG_SEARCH_TRACES` | Search Traces | Search for traces in Datadog APM. This action allows you to search for distributed traces across your services. It's essential for finding specific request flows during incident investigation, analyzing performance bottlenecks across services, understanding error propagation through your system, and correlating user requests with backend operations. Traces provide the complete picture of a request as it travels through your distributed system, making them crucial for root cause analysis. |
| `DATADOG_SUBMIT_METRICS` | Submit metrics | Submits custom metrics to Datadog. Useful for sending application-specific metrics, business KPIs, or custom performance indicators. |
| `DATADOG_UNMUTE_MONITOR` | Unmute Monitor | Unmute a monitor in Datadog. Re-enables alerts from a previously muted monitor, returning it to normal monitoring and alerting behavior. Alerting resumes immediately upon call, so ensure maintenance or issue resolution is fully complete before unmuting to avoid alert storms. Use this after maintenance windows or issue resolution to resume monitoring. |
| `DATADOG_UPDATE_DASHBOARD` | Update Dashboard | Update a dashboard in Datadog. Updates an existing dashboard with new configuration, widgets, or layout while preserving its identity and creation metadata. |
| `DATADOG_UPDATE_HOST_TAGS` | Update host tags | Updates tags for a specific host in Datadog. This replaces all existing tags from the specified source with the new tags provided. |
| `DATADOG_UPDATE_MONITOR` | Update monitor | Updates an existing Datadog monitor with new configuration, thresholds, or notification settings. Only specified fields will be updated. |
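
Through the Tool Router, these tools are exposed to the agent automatically. You can also invoke one directly from Python while debugging. A minimal sketch: it assumes the Composio SDK's `tools.execute` helper and a Datadog account already connected for this user, so check the SDK docs for the exact signature:

```python
from composio import Composio

composio = Composio(api_key="your_composio_api_key")

# Invoke a tool from the table directly, outside any agent loop.
# Hedged sketch: argument names are optional filters and may differ
# slightly from Datadog's monitor API.
result = composio.tools.execute(
    "DATADOG_LIST_MONITORS",
    user_id="your_user_id",
    arguments={},  # optional filters such as name or tags
)
print(result)
```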

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

As described above, the Datadog MCP server gives your agent structured, permission-based access to Datadog operations.
With Composio's managed implementation, you don't have to create and maintain your own developer app: the managed server helps you prototype quickly and go from zero to one faster. For production, if you're building an end product, we recommend using your own credentials.
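
The practical difference shows up in where the MCP URL comes from. A minimal sketch using the same classes as the rest of this guide; the stand-alone URL below is a hypothetical placeholder:

```python
from composio import Composio
from pydantic_ai.mcp import MCPServerStreamableHTTP

# Stand-alone server: a fixed, pre-provisioned endpoint you operate yourself
# (hypothetical URL -- credentials and tool scope are your responsibility).
standalone = MCPServerStreamableHTTP("https://your-datadog-mcp.example.com/mcp")

# Composio Tool Router: a per-user session returns the URL, and auth plus
# tool discovery are handled for you.
composio = Composio(api_key="your_composio_api_key")
session = composio.create(user_id="your_user_id", toolkits=["datadog"])
routed = MCPServerStreamableHTTP(session.mcp.url)
```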

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming

### 1. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely, as you'll need it for authentication.

### 2. Install dependencies

Install the required libraries.
What's happening:
- `composio` connects your agent to external SaaS tools like Datadog
- `pydantic-ai` lets you create structured AI agents with tool support
- `python-dotenv` loads your environment variables securely from a `.env` file
```bash
pip install composio pydantic-ai python-dotenv
```

### 3. Set up environment variables

Create a `.env` file in your project root.
What's happening:
- `COMPOSIO_API_KEY` authenticates your agent to Composio's API
- `USER_ID` associates your session with your account for secure tool access
- `OPENAI_API_KEY` gives the agent access to OpenAI models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```

### 4. Import dependencies

What's happening:
- We load environment variables and import the required modules
- `Composio` manages connections to Datadog
- `MCPServerStreamableHTTP` connects to the Datadog MCP server endpoint
- `Agent` from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 5. Create a Tool Router Session

What's happening:
- We create a Tool Router session that gives your agent access to Datadog tools
- The `create` method takes the user ID and specifies which toolkits should be available
- The returned `session.mcp.url` is the MCP server URL your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Datadog
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["datadog"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```

### 6. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Datadog endpoint
- The agent uses GPT-5 to interpret user commands and perform Datadog operations
- The `instructions` field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
datadog_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[datadog_mcp],
    instructions=(
        "You are a Datadog assistant. Use Datadog tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```
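
Nothing else in the setup is OpenAI-specific; the model string is the only coupling. A sketch of swapping providers, assuming the matching API key is set in your environment (the model identifier shown is illustrative):

```python
# Same toolset, different model provider (requires ANTHROPIC_API_KEY).
# The model identifier below is illustrative; use any model string your
# Pydantic AI version supports.
agent = Agent(
    "anthropic:claude-sonnet-4-0",
    toolsets=[datadog_mcp],
    instructions="You are a Datadog assistant.",
)
```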

### 7. Build the chat interface

What's happening:
- The agent reads input from the terminal and streams its response
- Datadog API calls happen automatically under the hood
- The message history is passed back on each turn so the agent maintains context across the conversation
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Datadog.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        collected_text = ""
        async for chunk in stream_result.stream_output():
            text_piece = None
            if isinstance(chunk, str):
                text_piece = chunk
            elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                text_piece = chunk.delta
            elif hasattr(chunk, "text"):
                text_piece = chunk.text
            if text_piece:
                collected_text += text_piece

    print(f"Agent: {collected_text}\n")
    history = stream_result.all_messages()
```
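
The chunk-type checks above are defensive, since streamed chunk shapes vary across Pydantic AI versions. If your version exposes `stream_text` (most recent releases do, but treat this as an assumption), the loop collapses to:

```python
# Alternative loop body: stream plain-text deltas and print them live.
# Assumes StreamedRunResult.stream_text(delta=True) is available.
async with agent.run_stream(user_input, message_history=history) as stream_result:
    print("Agent: ", end="", flush=True)
    async for delta in stream_result.stream_text(delta=True):
        print(delta, end="", flush=True)
    print("\n")
    history = stream_result.all_messages()
```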

### 8. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Datadog
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["datadog"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    datadog_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[datadog_mcp],
        instructions=(
            "You are a Datadog assistant. Use Datadog tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Datadog.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            collected_text = ""
            async for chunk in stream_result.stream_output():
                text_piece = None
                if isinstance(chunk, str):
                    text_piece = chunk
                elif hasattr(chunk, "delta") and isinstance(chunk.delta, str):
                    text_piece = chunk.delta
                elif hasattr(chunk, "text"):
                    text_piece = chunk.text
                if text_piece:
                    collected_text += text_piece

        print(f"Agent: {collected_text}\n")
        history = stream_result.all_messages()

if __name__ == "__main__":
    asyncio.run(main())
```

## Conclusion

You've built a Pydantic AI agent that can interact with Datadog through Composio's Tool Router. With this setup, your agent can perform real Datadog actions through natural language.
You can extend this further by:
- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Combining multiple toolkits in a single Tool Router session to enable cross-app workflows (for example, Gmail + Datadog for alert-to-email automation)
This architecture makes your AI agent "agent-native": able to securely use APIs in a unified, composable way without custom integrations.

## How to build Datadog MCP Agent with another framework

- [ChatGPT](https://composio.dev/toolkits/datadog/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/datadog/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/datadog/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/datadog/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/datadog/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/datadog/framework/codex)
- [Cursor](https://composio.dev/toolkits/datadog/framework/cursor)
- [VS Code](https://composio.dev/toolkits/datadog/framework/vscode)
- [OpenCode](https://composio.dev/toolkits/datadog/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/datadog/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/datadog/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/datadog/framework/cli)
- [Google ADK](https://composio.dev/toolkits/datadog/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/datadog/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/datadog/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/datadog/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/datadog/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/datadog/framework/crew-ai)

## Related Toolkits

- [Supabase](https://composio.dev/toolkits/supabase) - Supabase is an open-source backend platform offering scalable Postgres databases, authentication, storage, and real-time APIs. It lets developers build modern apps without managing infrastructure.
- [Codeinterpreter](https://composio.dev/toolkits/codeinterpreter) - Codeinterpreter is a Python-based coding environment with built-in data analysis and visualization. It lets you instantly run scripts, plot results, and prototype solutions inside supported platforms.
- [GitHub](https://composio.dev/toolkits/github) - GitHub is a code hosting platform for version control and collaborative software development. It streamlines project management, code review, and team workflows in one place.
- [Ably](https://composio.dev/toolkits/ably) - Ably is a real-time messaging platform for live chat and data sync in modern apps. It offers global scale and rock-solid reliability for seamless, instant experiences.
- [AbuseIPDB](https://composio.dev/toolkits/abuselpdb) - AbuseIPDB is a central database for reporting and checking IPs linked to malicious online activity. Use it to quickly identify and report suspicious or abusive IP addresses.
- [Alchemy](https://composio.dev/toolkits/alchemy) - Alchemy is a blockchain development platform offering APIs and tools for Ethereum apps. It simplifies building and scaling Web3 projects with robust infrastructure.
- [Algolia](https://composio.dev/toolkits/algolia) - Algolia is a hosted search API that powers lightning-fast, relevant search experiences for web and mobile apps. It helps developers deliver instant, typo-tolerant, and scalable search without complex infrastructure.
- [Anchor browser](https://composio.dev/toolkits/anchor_browser) - Anchor browser is a developer platform for AI-powered web automation. It transforms complex browser actions into easy API endpoints for streamlined web interaction.
- [Apiflash](https://composio.dev/toolkits/apiflash) - Apiflash is a website screenshot API for programmatically capturing web pages. It delivers high-quality screenshots on demand for automation, monitoring, or reporting.
- [Apiverve](https://composio.dev/toolkits/apiverve) - Apiverve delivers a suite of powerful APIs that simplify integration for developers. It's designed for reliability and scalability so you can build faster, smarter applications without the integration headache.
- [Appcircle](https://composio.dev/toolkits/appcircle) - Appcircle is an enterprise-grade mobile CI/CD platform for building, testing, and publishing mobile apps. It streamlines mobile DevOps so teams ship faster and with more confidence.
- [Appdrag](https://composio.dev/toolkits/appdrag) - Appdrag is a cloud platform for building websites, APIs, and databases with drag-and-drop tools and code editing. It accelerates development and iteration by combining hosting, database management, and low-code features in one place.
- [Appveyor](https://composio.dev/toolkits/appveyor) - AppVeyor is a cloud-based continuous integration service for building, testing, and deploying applications. It helps developers automate and streamline their software delivery pipelines.
- [Backendless](https://composio.dev/toolkits/backendless) - Backendless is a backend-as-a-service platform for mobile and web apps, offering database, file storage, user authentication, and APIs. It helps developers ship scalable applications faster without managing server infrastructure.
- [Baserow](https://composio.dev/toolkits/baserow) - Baserow is an open-source no-code database platform for building collaborative data apps. It makes it easy for teams to organize data and automate workflows without writing code.
- [Bench](https://composio.dev/toolkits/bench) - Bench is a benchmarking tool for automated performance measurement and analysis. It helps you quickly evaluate, compare, and track your systems or workflows.
- [Better stack](https://composio.dev/toolkits/better_stack) - Better Stack is a monitoring, logging, and incident management solution for apps and services. It helps teams ensure application reliability and performance with real-time insights.
- [Bitbucket](https://composio.dev/toolkits/bitbucket) - Bitbucket is a Git-based code hosting and collaboration platform for teams. It enables secure repository management and streamlined code reviews.
- [Blazemeter](https://composio.dev/toolkits/blazemeter) - Blazemeter is a continuous testing platform for web and mobile app performance. It empowers teams to automate and analyze large-scale tests with ease.
- [Blocknative](https://composio.dev/toolkits/blocknative) - Blocknative delivers real-time mempool monitoring and transaction management for public blockchains. Instantly track pending transactions and optimize blockchain interactions with live data.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Datadog MCP?

With a standalone Datadog MCP server, agents and LLMs can only access a fixed set of Datadog tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Datadog and many other apps based on the task at hand, all through a single MCP endpoint.
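
In code, the difference is a single argument: the session call from this guide accepts several toolkit slugs at once and still returns one MCP URL (the second slug below is an illustrative addition):

```python
# One Tool Router session spanning multiple apps, one MCP endpoint.
session = composio.create(
    user_id=user_id,
    toolkits=["datadog", "gmail"],  # "gmail" shown as an illustrative second app
)
```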

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Datadog tools.

### Can I manage the permissions and scopes for Datadog while using Tool Router?

Yes, absolutely. You can configure which Datadog scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Datadog data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
