# How to integrate Peopledatalabs MCP with Autogen

```json
{
  "title": "How to integrate Peopledatalabs MCP with Autogen",
  "toolkit": "Peopledatalabs",
  "toolkit_slug": "peopledatalabs",
  "framework": "AutoGen",
  "framework_slug": "autogen",
  "url": "https://composio.dev/toolkits/peopledatalabs/framework/autogen",
  "markdown_url": "https://composio.dev/toolkits/peopledatalabs/framework/autogen.md",
  "updated_at": "2026-05-12T10:21:44.689Z"
}
```

## Introduction

This guide walks you through connecting Peopledatalabs to AutoGen using the Composio tool router. By the end, you'll have a working Peopledatalabs agent that can enrich an email address with a full person profile, standardize and clean a company name, or pull detailed information for a skill like 'python' through natural language commands.
This guide will help you understand how to give your AutoGen agent real control over a Peopledatalabs account through Composio's Peopledatalabs MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Peopledatalabs with

- [OpenAI Agents SDK](https://composio.dev/toolkits/peopledatalabs/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/peopledatalabs/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/peopledatalabs/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/peopledatalabs/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/peopledatalabs/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/peopledatalabs/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/peopledatalabs/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/peopledatalabs/framework/cli)
- [Google ADK](https://composio.dev/toolkits/peopledatalabs/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/peopledatalabs/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/peopledatalabs/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/peopledatalabs/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/peopledatalabs/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/peopledatalabs/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- Get and set up your OpenAI and Composio API keys
- Install the required dependencies for AutoGen and Composio
- Initialize Composio and create a Tool Router session for Peopledatalabs
- Wire the session's MCP URL into AutoGen using McpWorkbench and StreamableHttpServerParams
- Configure an AutoGen AssistantAgent that can call Peopledatalabs tools
- Run a live chat loop where you ask the agent to perform Peopledatalabs operations

## What is AutoGen?

AutoGen is Microsoft's framework for building multi-agent conversational AI systems. It enables you to create agents that can collaborate, use tools, and maintain complex workflows.
Key features include:
- Multi-Agent Systems: Build collaborative agent workflows
- MCP Workbench: Native support for Model Context Protocol tools
- Streaming HTTP: Connect to external services through streamable HTTP
- AssistantAgent: Pre-built agent class for tool-using assistants

## What is the Peopledatalabs MCP server, and what's possible with it?

The Peopledatalabs MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants (such as Claude or Cursor) directly to your Peopledatalabs account. It provides structured and secure access to rich B2B data, so your agent can enrich profiles, standardize company details, validate customer information, and perform advanced searches with ease.
- Comprehensive person data enrichment: Automatically enhance individual profiles using identifiers like email, phone, or full name combined with company or location data.
- Company data validation and enrichment: Instantly verify and enrich company details with firmographics, employee counts, and standardized fields to power your workflows.
- Advanced person search and filtering: Leverage Elasticsearch-powered queries to find the exact professional profiles you need using job title, skills, experience, and more.
- Data cleaning and standardization: Cleanse and structure raw company, school, or location data to maintain high-quality records in your systems.
- Skill and job title enrichment: Provide context and standardized information for job titles or professional skills to improve analytics and targeting.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `PEOPLEDATALABS_CLEAN_COMPANY_DATA` | Clean company data | Cleans and standardizes company information based on a name, website, or profile URL; providing at least one of these inputs is highly recommended for meaningful results. |
| `PEOPLEDATALABS_CLEAN_COMPANY_POST` | Clean company data (POST) | Tool to clean and standardize company data using POST method. Use when you need to standardize company information by providing company name, website, or social profile. Returns standardized company information including name, website, LinkedIn profile, and other company identifiers. |
| `PEOPLEDATALABS_CLEAN_LOCATION_DATA` | Clean location data | Cleans and standardizes a raw, unformatted location string into a structured representation, provided the input is a recognizable geographical place. |
| `PEOPLEDATALABS_CLEAN_LOCATION_POST` | Clean location data (POST) | Tool to clean and standardize location data using POST method. Use when you need to normalize raw location strings into structured location information including city, region, and country. |
| `PEOPLEDATALABS_CLEAN_SCHOOL_DATA` | Clean school data | Cleans and standardizes school information; provide at least one of the school's name, website, or profile for optimal results. |
| `PEOPLEDATALABS_CLEAN_SCHOOL_DATA_POST` | Clean school data (POST) | Tool to clean and standardize school data using POST method. Use when you need to clean school information by providing name, website, or profile. |
| `PEOPLEDATALABS_ENRICH_BULK_COMPANY_DATA` | Enrich Bulk Company Data | Tool to enrich up to 100 companies in a single request using the Bulk Company Enrichment API. Use when you need to enrich multiple company profiles efficiently. Each request must include at least one company identifier (website, profile, name, ticker, or pdl_id). Results are returned in the same order as the input requests, with individual status codes indicating success (200) or failure (404). |
| `PEOPLEDATALABS_ENRICH_BULK_PERSON_DATA` | Enrich bulk person data | Tool to enrich up to 100 person profiles in a single API request using the Bulk Person Enrichment API. Use when you need to enrich multiple people efficiently, as this effectively increases the rate limit by up to 100x compared to individual enrichment calls. Each request in the array can use the same parameters as the single person enrichment endpoint. |
| `PEOPLEDATALABS_ENRICH_COMPANY_DATA` | Enrich Company Data | Enriches company data from People Data Labs with details like firmographics and employee counts. CRITICAL: This action REQUIRES at least one company identifier. DO NOT send empty {} requests. You MUST provide at least one of: pdl_id, name, profile, ticker, or website. Valid request examples: - {"name": "Apple Inc."} - enrich by company name - {"website": "google.com"} - enrich by website URL - {"ticker": "MSFT"} - enrich by stock ticker - {"profile": "linkedin.com/company/microsoft"} - enrich by social profile. Each call consumes API credits; use specific identifiers rather than exploratory requests. |
| `PEOPLEDATALABS_ENRICH_IP_DATA` | Enrich IP Data | Enriches an IP address with company, location, metadata, and person data from People Data Labs. |
| `PEOPLEDATALABS_ENRICH_JOB_TITLE_DATA` | Enrich job title data | Enhances a job title by providing additional contextual information and details. |
| `PEOPLEDATALABS_ENRICH_PERSON_DATA` | Enrich person data | Enriches person data using various identifiers; requires a primary ID (profile, email, phone, email_hash, lid, pdl_id) OR a name (full, or first and last) combined with another demographic detail (e.g., company, school, location). |
| `PEOPLEDATALABS_ENRICH_SKILL_DATA` | Enrich skill data | Retrieves detailed, standardized information for a given skill by querying the People Data Labs Skill Enrichment API; for best results, provide a recognized professional skill or area of expertise. |
| `PEOPLEDATALABS_GENERATE_SEARCH_QUERY` | Generate Search Query | Converts natural language queries into structured PDL Elasticsearch queries for people or company searches; generates optimized query structure without executing the search. |
| `PEOPLEDATALABS_AUTOCOMPLETE_FIELD_SUGGESTIONS` | Autocomplete field suggestions | Provides autocompletion suggestions for a specific field (e.g., company, skill, title) based on partial text input. |
| `PEOPLEDATALABS_GET_AUTOCOMPLETE_SUGGESTIONS_POST` | Get autocomplete suggestions (POST) | Tool to get autocompletion suggestions using POST method for complex query parameters. Use when building type-ahead interfaces or needing to suggest values for Search API queries. Supports company, location, skill, title, and other fields with configurable result size. |
| `PEOPLEDATALABS_GET_COLUMN_DETAILS` | Get column details | Retrieves predefined enum values for a column name from `enum_mappings.json`; `is_enum` in the response will be false if the column is not found or is not an enum type. |
| `PEOPLEDATALABS_GET_SCHEMA` | Get schema | Retrieves the schema, including field names, descriptions, and data types, for 'person' or 'company' entity types. |
| `PEOPLEDATALABS_GET_SUBJECT_REQUESTS` | Get subject requests | Tool to retrieve subject access requests for data privacy compliance. Use when you need to manage or review data subject requests related to person data in your PeopleDataLabs account. |
| `PEOPLEDATALABS_IDENTIFY_PERSON_DATA` | Identify person data | Retrieves detailed profile information for an individual from People Data Labs (PDL), requiring at least one identifier such as email, phone, or profile URL. If using name alone, it must be paired with at least one additional attribute (company, location, school, etc.) — name-only queries return no match. |
| `PEOPLEDATALABS_PEOPLE_SEARCH_ELASTIC` | People Search with Elasticsearch | Searches for person profiles in the People Data Labs (PDL) database using an Elasticsearch Domain Specific Language (DSL) query. This action allows for highly targeted searches based on criteria such as job titles, skills, company details, location, experience, and more. Preconditions: - The provided Elasticsearch query (in the `query` field) must be a syntactically correct JSON object representing a valid Elasticsearch query. - The query must utilize fields that are defined in the People Data Labs person schema. - The `dataset` parameter must specify one of the allowed dataset categories. |
| `PEOPLEDATALABS_QUERY_PERSON_CHANGELOG` | Query person changelog | Tool to query the changelog of person records between two consecutive dataset versions. Returns information about updates, additions, deletions, merges, and opt-outs for individuals. Use when you need to track changes to person profiles across PDL dataset versions or monitor specific person IDs for updates. |
| `PEOPLEDATALABS_SEARCH_COMPANY_ELASTIC` | Company Search with Elasticsearch | Performs a search for company profiles within People Data Labs using a custom Elasticsearch Domain Specific Language (DSL) query. This action allows for detailed and complex filtering based on various attributes of a company, such as name, industry, employee_count, founded year, location, and more. Results can be paginated using the `size` and `scroll_token` parameters. Preconditions: - The `query` parameter must contain a valid Elasticsearch DSL query string, structured as a JSON object. - This action queries the People Data Labs company search endpoint (`/v5/company/search`) and returns company records. |
| `PEOPLEDATALABS_SEARCH_COMPANY_POST` | Search Company Records (POST) | Tool to search and filter company records from the full Company Dataset using Elasticsearch or SQL queries via POST method. Use when you need to find multiple companies matching specific criteria with complex filtering. |
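
Several of the search tools above accept Elasticsearch-style DSL queries. As a rough illustration, here is a minimal sketch of a payload an agent might build for `PEOPLEDATALABS_PEOPLE_SEARCH_ELASTIC`. The field names follow the PDL person schema, but the exact wrapper the tool expects may differ, so treat this as an example rather than the canonical request format.

```python
# Illustrative only: an Elasticsearch-style DSL query for PDL people search.
# Field names (job_title, location_country, skills) come from the PDL person
# schema; verify the exact payload shape against the tool's input schema.
example_people_search = {
    "query": {
        "bool": {
            "must": [
                {"term": {"job_title": "data engineer"}},
                {"term": {"location_country": "united states"}},
                {"term": {"skills": "python"}},
            ]
        }
    },
    "size": 5,  # number of matching profiles to return
}
```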

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Peopledatalabs MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants directly to Peopledatalabs. Instead of manually wiring Peopledatalabs APIs, OAuth, and scopes yourself, you get a structured, tool-based interface that an LLM can call safely.
With Composio's managed implementation, you don't have to create your own developer app: the managed server helps you prototype quickly and go from 0 to 1 faster. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

You will need:
- A Composio API key
- An OpenAI API key (used by AutoGen's OpenAIChatCompletionClient)
- A Peopledatalabs account you can connect to Composio
- Some basic familiarity with AutoGen and Python async

### 1. Getting API Keys for OpenAI and Composio

OpenAI API Key
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.
Composio API Key
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely as you'll need it for authentication.

### 2. Install dependencies

Install Composio, the AutoGen packages, and dotenv.
What's happening:
- composio connects your agent to Peopledatalabs via MCP
- autogen-agentchat provides the AssistantAgent class
- autogen-ext[openai] provides the OpenAI model client
- autogen-ext[mcp] provides McpWorkbench and MCP tool support
```bash
pip install composio python-dotenv
pip install autogen-agentchat "autogen-ext[openai]" "autogen-ext[mcp]"
```
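
If you want to confirm everything installed correctly before writing any agent code, an optional import check like the one below will surface missing packages immediately. It uses only the imports this guide relies on:

```python
# Optional sanity check: these imports should all resolve after installation.
from composio import Composio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams

print("All dependencies are importable.")
```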

### 3. Set up environment variables

Create a .env file in your project folder.
What's happening:
- COMPOSIO_API_KEY is required to talk to Composio
- OPENAI_API_KEY is used by AutoGen's OpenAI client
- USER_ID is how Composio identifies which user's Peopledatalabs connections to use
```bash
COMPOSIO_API_KEY=your-composio-api-key
OPENAI_API_KEY=your-openai-api-key
USER_ID=your-user-identifier@example.com
```
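
Optionally, you can fail fast if any of these variables is missing. Here is a small sketch using only the standard library and python-dotenv:

```python
# Optional: stop early if a required environment variable is not set.
import os
from dotenv import load_dotenv

load_dotenv()

for var in ("COMPOSIO_API_KEY", "OPENAI_API_KEY", "USER_ID"):
    if not os.getenv(var):
        raise RuntimeError(f"Missing required environment variable: {var}")
```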

### 4. Import dependencies and create Tool Router session

What's happening:
- load_dotenv() reads your .env file
- Composio(api_key=...) initializes the SDK
- create(...) creates a Tool Router session that exposes Peopledatalabs tools
- session.mcp.url is the MCP endpoint that AutoGen will connect to
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams

load_dotenv()

async def main():
    # Initialize Composio and create a Peopledatalabs session
    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
    session = composio.create(
        user_id=os.getenv("USER_ID"),
        toolkits=["peopledatalabs"]
    )
    url = session.mcp.url
```

### 5. Configure MCP parameters for Autogen

AutoGen expects a description of how to talk to the MCP server; that is what StreamableHttpServerParams is for.
What's happening:
- url points to the Tool Router MCP endpoint from Composio
- timeout is the HTTP timeout for requests
- sse_read_timeout controls how long to wait when streaming responses
- terminate_on_close=True closes the MCP session when the workbench shuts down
```python
# Configure MCP server parameters for Streamable HTTP
server_params = StreamableHttpServerParams(
    url=url,
    timeout=30.0,
    sse_read_timeout=300.0,
    terminate_on_close=True,
    headers={"x-api-key": os.getenv("COMPOSIO_API_KEY")}
)
```

### 6. Create the model client and agent

What's happening:
- OpenAIChatCompletionClient wraps the OpenAI model for AutoGen
- McpWorkbench connects the agent to the MCP tools
- AssistantAgent is configured with the Peopledatalabs tools from the workbench
```python
# Create model client
model_client = OpenAIChatCompletionClient(
    model="gpt-5",
    api_key=os.getenv("OPENAI_API_KEY")
)

# Use McpWorkbench as context manager
async with McpWorkbench(server_params) as workbench:
    # Create Peopledatalabs assistant agent with MCP tools
    agent = AssistantAgent(
        name="peopledatalabs_assistant",
        description="An AI assistant that helps with Peopledatalabs operations.",
        model_client=model_client,
        workbench=workbench,
        model_client_stream=True,
        max_tool_iterations=10
    )
```
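
Before adding the full chat loop, you can smoke-test the agent with a single task inside the same `async with` block. This sketch uses AutoGen's `Console` helper to stream the run to your terminal; the email in the prompt is purely illustrative:

```python
# Optional smoke test: place this inside the `async with McpWorkbench(...)` block.
from autogen_agentchat.ui import Console

await Console(
    agent.run_stream(
        task="Enrich the email jane.doe@example.com with a full person profile"
    )
)
```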

### 7. Run the interactive chat loop

What's happening:
- The script prompts you in a loop with `You: `
- AutoGen passes your input to the model, which decides which Peopledatalabs tools to call via MCP
- agent.run_stream(...) yields streaming messages as the agent thinks and calls tools
- Typing `exit`, `quit`, or `bye` ends the loop
```python
print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
print("Ask any Peopledatalabs related question or task to the agent.\n")

# Conversation loop
while True:
    user_input = input("You: ").strip()

    if user_input.lower() in ["exit", "quit", "bye"]:
        print("\nGoodbye!")
        break

    if not user_input:
        continue

    print("\nAgent is thinking...\n")

    # Run the agent with streaming
    try:
        response_text = ""
        async for message in agent.run_stream(task=user_input):
            if hasattr(message, "content") and message.content:
                response_text = message.content

        # Print the final response
        if response_text:
            print(f"Agent: {response_text}\n")
        else:
            print("Agent: I encountered an issue processing your request.\n")

    except Exception as e:
        print(f"Agent: Sorry, I encountered an error: {str(e)}\n")
```
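
Note that this loop keeps only the last `content` it sees from the stream, so intermediate tool-call events are overwritten and only the agent's final answer is printed. If you want to surface tool calls as they happen, AutoGen's `Console` helper (shown in the smoke test above) is a convenient alternative.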

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams

load_dotenv()

async def main():
    # Initialize Composio and create a Peopledatalabs session
    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
    session = composio.create(
        user_id=os.getenv("USER_ID"),
        toolkits=["peopledatalabs"]
    )
    url = session.mcp.url

    # Configure MCP server parameters for Streamable HTTP
    server_params = StreamableHttpServerParams(
        url=url,
        timeout=30.0,
        sse_read_timeout=300.0,
        terminate_on_close=True,
        headers={"x-api-key": os.getenv("COMPOSIO_API_KEY")}
    )

    # Create model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-5",
        api_key=os.getenv("OPENAI_API_KEY")
    )

    # Use McpWorkbench as context manager
    async with McpWorkbench(server_params) as workbench:
        # Create Peopledatalabs assistant agent with MCP tools
        agent = AssistantAgent(
            name="peopledatalabs_assistant",
            description="An AI assistant that helps with Peopledatalabs operations.",
            model_client=model_client,
            workbench=workbench,
            model_client_stream=True,
            max_tool_iterations=10
        )

        print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
        print("Ask any Peopledatalabs related question or task to the agent.\n")

        # Conversation loop
        while True:
            user_input = input("You: ").strip()

            if user_input.lower() in ['exit', 'quit', 'bye']:
                print("\nGoodbye!")
                break

            if not user_input:
                continue

            print("\nAgent is thinking...\n")

            # Run the agent with streaming
            try:
                response_text = ""
                async for message in agent.run_stream(task=user_input):
                    if hasattr(message, 'content') and message.content:
                        response_text = message.content

                # Print the final response
                if response_text:
                    print(f"Agent: {response_text}\n")
                else:
                    print("Agent: I encountered an issue processing your request.\n")

            except Exception as e:
                print(f"Agent: Sorry, I encountered an error: {str(e)}\n")

if __name__ == "__main__":
    asyncio.run(main())
```
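
Save the script (for example as `agent.py`; the name is up to you) and run it with `python agent.py`. You can then try prompts like the ones from the introduction:
- "Enrich jane.doe@example.com with a full person profile" (the email is just an example)
- "Standardize and clean this company name: 'Acme Corp.'"
- "Get detailed info for the skill 'python'"
The agent decides which Peopledatalabs tool to call (person enrichment, company cleaning, skill enrichment) and streams back the structured result.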

## Conclusion

You now have an AutoGen assistant wired into Peopledatalabs through Composio's Tool Router and MCP. From here you can:
- Add more toolkits to the toolkits list, for example notion or hubspot (see the sketch after this list)
- Refine the agent description to point it at specific workflows
- Wrap this script behind a UI, Slack bot, or internal tool
Once the pattern is clear for Peopledatalabs, you can reuse the same structure for other MCP-enabled apps with minimal code changes.
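
For example, extending the session to additional toolkits is a small change to the `toolkits` list. The slugs below (`notion`, `hubspot`) are the ones mentioned above; what's actually available depends on your Composio account:

```python
# Sketch: one Tool Router session exposing several toolkits at once.
session = composio.create(
    user_id=os.getenv("USER_ID"),
    toolkits=["peopledatalabs", "notion", "hubspot"]
)
url = session.mcp.url
```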

## How to build Peopledatalabs MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/peopledatalabs/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/peopledatalabs/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/peopledatalabs/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/peopledatalabs/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/peopledatalabs/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/peopledatalabs/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/peopledatalabs/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/peopledatalabs/framework/cli)
- [Google ADK](https://composio.dev/toolkits/peopledatalabs/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/peopledatalabs/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/peopledatalabs/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/peopledatalabs/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/peopledatalabs/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/peopledatalabs/framework/crew-ai)

## Related Toolkits

- [Firecrawl](https://composio.dev/toolkits/firecrawl) - Firecrawl automates large-scale web crawling and data extraction. It helps organizations efficiently gather, index, and analyze content from online sources.
- [Tavily](https://composio.dev/toolkits/tavily) - Tavily offers powerful search and data retrieval from documents, databases, and the web. It helps teams locate and filter information instantly, saving hours on research.
- [Exa](https://composio.dev/toolkits/exa) - Exa is a data extraction and search platform for gathering and analyzing information from websites, APIs, or databases. It helps teams quickly surface insights and automate data-driven workflows.
- [Serpapi](https://composio.dev/toolkits/serpapi) - SerpApi is a real-time API for structured search engine results. It lets you automate SERP data collection, parsing, and analysis for SEO and research.
- [Snowflake](https://composio.dev/toolkits/snowflake) - Snowflake is a cloud data warehouse built for elastic scaling, secure data sharing, and fast SQL analytics across major clouds.
- [Posthog](https://composio.dev/toolkits/posthog) - PostHog is an open-source analytics platform for tracking user interactions and product metrics. It helps teams refine features, analyze funnels, and reduce churn with actionable insights.
- [Amplitude](https://composio.dev/toolkits/amplitude) - Amplitude is a digital analytics platform for product and behavioral data insights. It helps teams analyze user journeys and make data-driven decisions quickly.
- [Bright Data MCP](https://composio.dev/toolkits/brightdata_mcp) - Bright Data MCP is an AI-powered web scraping and data collection platform. Instantly access public web data in real time with advanced scraping tools.
- [Browseai](https://composio.dev/toolkits/browseai) - Browseai is a web automation and data extraction platform that turns any website into an API. It's perfect for monitoring websites and retrieving structured data without manual scraping.
- [ClickHouse](https://composio.dev/toolkits/clickhouse) - ClickHouse is an open-source, column-oriented database for real-time analytics and big data processing using SQL. Its lightning-fast query performance makes it ideal for handling large datasets and delivering instant insights.
- [Coinmarketcal](https://composio.dev/toolkits/coinmarketcal) - CoinMarketCal is a community-powered crypto calendar for upcoming events, announcements, and releases. It helps traders track market-moving developments and stay ahead in the crypto space.
- [Control d](https://composio.dev/toolkits/control_d) - Control d is a customizable DNS filtering and traffic redirection platform. It helps you manage internet access, enforce policies, and monitor usage across devices and networks.
- [Databox](https://composio.dev/toolkits/databox) - Databox is a business analytics platform that connects your data from any tool and device. It helps you track KPIs, build dashboards, and discover actionable insights.
- [Databricks](https://composio.dev/toolkits/databricks) - Databricks is a unified analytics platform for big data and AI on the lakehouse architecture. It empowers data teams to collaborate, analyze, and build scalable solutions efficiently.
- [Datagma](https://composio.dev/toolkits/datagma) - Datagma delivers data intelligence and analytics for business growth and market discovery. Get actionable market insights and track competitors to inform your strategy.
- [Delighted](https://composio.dev/toolkits/delighted) - Delighted is a customer feedback platform based on the Net Promoter System®. It helps you quickly gather, track, and act on customer sentiment.
- [Dovetail](https://composio.dev/toolkits/dovetail) - Dovetail is a research analysis platform for transcript review and insight generation. It helps teams code interviews, analyze feedback, and create actionable research summaries.
- [Dub](https://composio.dev/toolkits/dub) - Dub is a short link management platform with analytics and API access. Use it to easily create, manage, and track branded short links for your business.
- [Elasticsearch](https://composio.dev/toolkits/elasticsearch) - Elasticsearch is a distributed, RESTful search and analytics engine for all types of data. It delivers fast, scalable search and powerful analytics across massive datasets.
- [Fireflies](https://composio.dev/toolkits/fireflies) - Fireflies.ai is an AI-powered meeting assistant that records, transcribes, and analyzes voice conversations. It helps teams capture call notes automatically and search or summarize meetings effortlessly.

## Frequently Asked Questions

### What are the differences between Tool Router MCP and Peopledatalabs MCP?

With a standalone Peopledatalabs MCP server, agents and LLMs can only access a fixed set of Peopledatalabs tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Peopledatalabs and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with AutoGen?

Yes, you can. AutoGen fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while the Tool Router takes care of discovering and serving the right Peopledatalabs tools.

### Can I manage the permissions and scopes for Peopledatalabs while using Tool Router?

Yes, absolutely. You can configure which Peopledatalabs scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Peopledatalabs data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
