# How to integrate Semanticscholar MCP with Pydantic AI

```json
{
  "title": "How to integrate Semanticscholar MCP with Pydantic AI",
  "toolkit": "Semanticscholar",
  "toolkit_slug": "semanticscholar",
  "framework": "Pydantic AI",
  "framework_slug": "pydantic-ai",
  "url": "https://composio.dev/toolkits/semanticscholar/framework/pydantic-ai",
  "markdown_url": "https://composio.dev/toolkits/semanticscholar/framework/pydantic-ai.md",
  "updated_at": "2026-05-12T10:25:08.086Z"
}
```

## Introduction

This guide walks you through connecting Semanticscholar to Pydantic AI using the Composio Tool Router. By the end, you'll have a working Semanticscholar agent that can find the latest papers on graph neural networks, list the citations for a specific research paper, and summarize an author's recent publications, all through natural language commands.
Along the way, you'll see how to give your Pydantic AI agent real control over Semantic Scholar data through Composio's Semanticscholar MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Semanticscholar with

- [OpenAI Agents SDK](https://composio.dev/toolkits/semanticscholar/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/semanticscholar/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/semanticscholar/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/semanticscholar/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/semanticscholar/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/semanticscholar/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/semanticscholar/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/semanticscholar/framework/cli)
- [Google ADK](https://composio.dev/toolkits/semanticscholar/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/semanticscholar/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/semanticscholar/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/semanticscholar/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/semanticscholar/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/semanticscholar/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- How to set up your Composio API key and User ID
- How to create a Composio Tool Router session for Semanticscholar
- How to attach an MCP Server to a Pydantic AI agent
- How to stream responses and maintain chat history
- How to build a simple REPL-style chat interface to test your Semanticscholar workflows

## What is Pydantic AI?

Pydantic AI is a Python framework for building AI agents with strong typing and validation. It leverages Pydantic's data validation capabilities to create robust, type-safe AI applications.
Key features include:
- Type Safety: Built on Pydantic for automatic data validation
- MCP Support: Native support for Model Context Protocol servers
- Streaming: Built-in support for streaming responses
- Async First: Designed for async/await patterns
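
For instance, here is a minimal sketch of the typed-output pattern that gives Pydantic AI its name; the model string and the `PaperSummary` schema are illustrative assumptions, not part of this guide's setup:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# Illustrative schema: the model's reply is validated against it automatically.
class PaperSummary(BaseModel):
    title: str
    year: int
    key_finding: str

# The model string is an assumption for illustration.
summary_agent = Agent("openai:gpt-5", output_type=PaperSummary)
result = summary_agent.run_sync("Summarize the paper 'Attention Is All You Need'.")
print(result.output.title, result.output.year)
```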

## What is the Semanticscholar MCP server, and what's possible with it?

The Semanticscholar MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants (such as Claude or Cursor) directly to your Semantic Scholar account. It provides structured, secure access to scholarly data, so your agent can search for academic papers, retrieve detailed author profiles, analyze citations, and explore references or publication histories on your behalf.
- Comprehensive literature search and discovery: Let your agent search for academic papers by topic, author, or relevance and retrieve lists of matching publications with rich metadata.
- In-depth paper and author insights: Ask your agent to fetch detailed information about specific papers—including titles, abstracts, authors, and publication years—or get complete profiles for researchers and their entire body of work.
- Citation and reference analysis: Enable your agent to trace the impact of a paper by pulling its citations or explore the foundational research it builds upon by listing its references.
- Batch retrieval for large-scale research: Efficiently gather details on multiple papers or authors at once, streamlining reviews and bibliometric analyses across large datasets.
- Bulk and relevance-based queries: Use advanced bulk search and filtering to identify up to thousands of papers at a time, making it easy for your agent to support systematic literature reviews and academic data exploration.
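
To make these capabilities concrete, here is a sketch of the kinds of natural language requests the agent you build below can handle. It assumes the `agent` object constructed in step 6 of this guide:

```python
# Illustrative prompts exercising the capabilities above; assumes the
# `agent` from step 6 is already constructed and connected.
example_prompts = [
    "Find recent papers on graph neural networks.",
    "List the citations for the paper 'Attention Is All You Need'.",
    "Summarize this author's publications from the last two years.",
]

async def demo(agent):
    for prompt in example_prompts:
        result = await agent.run(prompt)  # each run may trigger one or more tool calls
        print(result.output)
```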

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `SEMANTICSCHOLAR_DETAILS_ABOUT_AN_AUTHOR` | Details about an author | Retrieve detailed information about an author from Semantic Scholar, including name, affiliations, publication statistics (paperCount, citationCount, h-index), external IDs (ORCID, DBLP), and optionally papers. By default returns authorId and name only. Use 'fields' parameter for additional data: name, url, affiliations, homepage, externalIds, paperCount, citationCount, hIndex, papers (supports nested fields like papers.title, papers.year). Limit: 10 MB per request. |
| `SEMANTICSCHOLAR_DETAILS_ABOUT_AN_AUTHOR_S_PAPERS` | Details about an author's papers | Retrieves a list of papers authored or co-authored by a specific researcher identified by their unique Semantic Scholar author ID. This endpoint is particularly useful for conducting literature reviews, analyzing an author's body of work, or tracking a researcher's publications over time. It provides a comprehensive view of an author's contributions to their field of study, including all papers where the author is listed as an author regardless of their authorship position. The response may be paginated for authors with a large number of publications, and additional API calls might be necessary to retrieve the complete list of papers. Use the offset and limit parameters to control pagination. |
| `SEMANTICSCHOLAR_DETAILS_ABOUT_A_PAPER` | Details about a paper | Examples: https://api.semanticscholar.org/graph/v1/paper/649def34f8be52c8b66281af98ae884c09aef38b Returns a paper with its paperId and title. https://api.semanticscholar.org/graph/v1/paper/649def34f8be52c8b66281af98ae884c09aef38b?fields=url,year,authors Returns the paper's paperId, url, year, and list of authors. Each author has authorId and name. https://api.semanticscholar.org/graph/v1/paper/649def34f8be52c8b66281af98ae884c09aef38b?fields=citations.authors Returns the paper's paperId and list of citations. Each citation has its paperId plus its list of authors. Each author has their 2 always included fields of authorId and name. Limitations: Can only return up to 10 MB of data at a time. |
| `SEMANTICSCHOLAR_DETAILS_ABOUT_A_PAPER_S_AUTHORS` | Details about a paper's authors | Retrieves the list of authors for a specific paper identified by its unique paper_id in the Semantic Scholar database. This endpoint returns detailed author information including authorId and name (returned by default), and optionally: url, affiliations, homepage, paperCount, citationCount, hIndex, and papers (with subfields). Use the 'fields' parameter to request additional author fields beyond the defaults. The response is paginated and includes offset/limit parameters for retrieving large author lists. This tool is ideal for exploring paper collaborations, identifying author affiliations, or building author networks. It accepts various paper ID formats including Semantic Scholar IDs, DOI, ARXIV, PMID, and others. |
| `SEMANTICSCHOLAR_DETAILS_ABOUT_A_PAPER_S_CITATIONS` | Details about a paper's citations | Retrieves a list of citations for a specific academic paper using its unique Semantic Scholar paper ID. This endpoint is useful for researchers and developers who want to explore the impact and connections of a particular academic work within the broader scientific literature. It provides information about other papers that have cited the specified paper, allowing users to trace the influence of research and discover related works. The endpoint should be used when analyzing the reception and impact of a specific paper, building citation networks, or conducting bibliometric studies. It does not provide the full text of citing papers or detailed information about the citations beyond basic metadata. |
| `SEMANTICSCHOLAR_DETAILS_ABOUT_A_PAPER_S_REFERENCES` | Details about a paper's references | Retrieves the list of references cited by a specific paper in the Semantic Scholar database. This endpoint allows users to explore the scholarly context of a publication by accessing its bibliography. It's particularly useful for understanding the foundation of a paper's research, tracing the development of ideas, or conducting literature reviews. The tool returns details about the cited papers, which may include their titles, authors, publication dates, and Semantic Scholar IDs. It should be used when analyzing a paper's sources or investigating the connections between different academic works. Note that this endpoint only provides outgoing references (papers cited by the specified paper) and not incoming citations (papers that cite the specified paper). |
| `SEMANTICSCHOLAR_GET_DATASET` | Get dataset download links | Tool to get download links for a specific dataset within a release. Use when you need to download Semantic Scholar dataset files from S3. Returns pre-signed URLs for all dataset partitions. |
| `SEMANTICSCHOLAR_GET_DATASET_DIFFS` | Get dataset diffs | Get download links for incremental diffs between dataset releases. Returns a list of diffs required to update a dataset from start_release to end_release, enabling efficient dataset synchronization. Use when you need to update a local dataset copy without re-downloading the entire dataset. |
| `SEMANTICSCHOLAR_GET_DETAILS_FOR_MULTIPLE_AUTHORS_AT_ONCE` | Get details for multiple authors at once | Retrieves detailed information for multiple authors from Semantic Scholar in a single API call. This endpoint allows users to efficiently fetch data for a batch of authors by providing their unique Semantic Scholar IDs. It's particularly useful for applications that need to gather information on multiple authors simultaneously, reducing the number of individual API calls required. The endpoint accepts a list of author IDs and returns comprehensive details for each author, which may include their publications, citations, and other relevant academic information. While the exact response structure is not specified in the given schema, users can expect rich metadata about the requested authors. |
| `SEMANTICSCHOLAR_GET_DETAILS_FOR_MULTIPLE_PAPERS_AT_ONCE` | Get details for multiple papers at once | Retrieve detailed information for multiple academic papers in a single API call using the Semantic Scholar paper batch endpoint. This endpoint efficiently fetches data for up to 500 papers at once, significantly reducing the number of individual API requests needed. Key features: - Accepts multiple paper ID formats (Semantic Scholar ID, CorpusId, DOI, ArXiv, PMID, etc.) - Customizable field selection to retrieve only needed data - Papers not found return null in the corresponding array position - Results maintain the same order as input IDs - Supports nested field queries (e.g., authors.name, citations.title) Use this endpoint when you have a list of known paper IDs and want to retrieve their details simultaneously, rather than making individual requests for each paper. |
| `SEMANTICSCHOLAR_GET_PAPER_RECOMMENDATIONS` | Get paper recommendations | Tool to get paper recommendations based on positive and negative example papers. Use when you need to find papers similar to ones you like (positive examples) and optionally dissimilar to ones you don't like (negative examples). The recommendation engine analyzes the provided examples and returns relevant papers from the Semantic Scholar database. |
| `SEMANTICSCHOLAR_GET_RECOMMENDATIONS_FOR_PAPER` | Get recommendations for paper | Tool to get recommended papers for a single positive example paper. Use when you need to find papers similar to a given paper based on Semantic Scholar's recommendation algorithm. |
| `SEMANTICSCHOLAR_GET_RELEASE` | Get dataset release information | Tool to retrieve metadata for a specific Semantic Scholar dataset release. Returns release information including available datasets with their descriptions. Use when you need to discover what datasets are available in a release or get release documentation. |
| `SEMANTICSCHOLAR_LIST_RELEASES` | List available dataset releases | Tool to list all available dataset releases from Semantic Scholar. Use when you need to discover available release dates for downloading datasets. |
| `SEMANTICSCHOLAR_PAPER_TITLE_SEARCH` | Paper title search | Behaves similarly to /paper/search, but is intended for retrieval of a single paper based on closest title match to given query. Examples: https://api.semanticscholar.org/graph/v1/paper/search/match?query=Construction of the Literature Graph in Semantic Scholar Returns a single paper that is the closest title match. Each paper has its paperId, title, and matchScore as well as any other requested fields. https://api.semanticscholar.org/graph/v1/paper/search/match?query=totalGarbageNonsense Returns with a 404 error and a "Title match not found" message. Limitations: Will only return the single highest match result. |
| `SEMANTICSCHOLAR_SEARCH_BULK_PAPERS` | Search Bulk Papers | Tool to perform bulk search for academic papers. Intended for bulk retrieval of basic paper data without search relevance scoring. Use when you need to retrieve large sets of papers with optional text filtering and various criteria. Supports token-based pagination for efficient fetching of up to 10 million papers (use Datasets API for larger needs). |
| `SEMANTICSCHOLAR_SEARCH_FOR_AUTHORS_BY_NAME` | Search for authors by name | Search for academic authors in the Semantic Scholar database by name. This action searches for authors using plain-text name queries. The search is case-insensitive and supports partial name matches (e.g., "Smith" will match "John Smith", "Adam Smith", etc.). Use cases: - Find authors by their name to get their author ID - Discover authors in a specific research area by searching common names - Retrieve author metadata including publications, affiliations, citation counts, and h-index - Build author directories or research networks The response includes pagination metadata (total, offset, next) to help retrieve large result sets. Use the 'fields' parameter to customize which author attributes are returned, and use 'offset' and 'limit' for pagination through result sets larger than 1000 authors. Note: Results are paginated with a maximum of 1000 results per request. Use the 'next' field in the response to determine the offset for the next page. |
| `SEMANTICSCHOLAR_SEARCH_PAPERS` | Search papers by relevance | Tool to search for academic papers by relevance in the Semantic Scholar database. Use when searching for papers on specific topics, keywords, or research areas. Returns papers ordered by relevance score with support for extensive filtering by publication type, date, venue, field of study, and citation metrics. |
| `SEMANTICSCHOLAR_SUGGEST_PAPER_QUERY_COMPLETIONS` | Suggest paper query completions | Get autocomplete suggestions for paper queries. Returns a list of papers matching the partial query string, useful for interactive search experiences. Each suggestion includes the paper ID, title, and authors with publication year. Example: For query "machine learning", returns papers like "Machine learning - a probabilistic perspective" by Murphy, 2012. |
| `SEMANTICSCHOLAR_TEXT_SNIPPET_SEARCH` | Text snippet search | Search for text snippets (~500 words) within academic papers that match your natural language query. Returns relevant excerpts from papers' titles, abstracts, and body text, ranked by relevance score. Each result includes: snippet text, location in paper, citation references, and paper metadata (title, authors, corpus ID). Supports filtering by authors, publication date, venue, field of study, citation count, and specific paper IDs. Results sorted by relevance (highest score first). Use limit=10 (default, max 1000) to control result count. |
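
If you want to sanity-check one of these tools outside the MCP flow, the Composio SDK can also execute a tool slug directly. A minimal sketch, assuming the `composio.tools.execute` method of the current Composio Python SDK; the argument names (`query`, `limit`) follow Semantic Scholar's search parameters and are assumptions for illustration:

```python
from composio import Composio

composio = Composio(api_key="your_composio_api_key")

# Execute a slug from the table above on behalf of a specific user.
# Argument names are illustrative assumptions based on the search parameters.
result = composio.tools.execute(
    "SEMANTICSCHOLAR_SEARCH_PAPERS",
    user_id="your_user_id",
    arguments={"query": "graph neural networks", "limit": 5},
)
print(result)
```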

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Semanticscholar MCP server is an implementation of the Model Context Protocol that connects your AI agent to Semanticscholar. It gives your agent structured, permission-based access so it can perform Semanticscholar operations on your behalf.
With Composio's managed implementation, you don't have to create your own developer app, which lets you prototype quickly and go from zero to one faster. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- Python 3.9 or higher
- A Composio account with an active API key
- Basic familiarity with Python and async programming

### 1. Getting API Keys for OpenAI and Composio

OpenAI API Key
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.
Composio API Key
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely as you'll need it for authentication.

### 2. Install dependencies

Install the required libraries.
What's happening:
- composio connects your agent to external SaaS tools like Semanticscholar
- pydantic-ai lets you create structured AI agents with tool support
- python-dotenv loads your environment variables securely from a .env file
```bash
pip install composio pydantic-ai python-dotenv
```

### 3. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your agent to Composio's API
- USER_ID associates your session with your account for secure tool access
- OPENAI_API_KEY lets the agent call OpenAI models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
USER_ID=your_user_id_here
OPENAI_API_KEY=your_openai_api_key
```

### 4. Import dependencies

What's happening:
- We load environment variables and import required modules
- Composio manages connections to Semanticscholar
- MCPServerStreamableHTTP connects to the Semanticscholar MCP server endpoint
- Agent from Pydantic AI lets you define and run the AI assistant
```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()
```

### 5. Create a Tool Router Session

What's happening:
- We're creating a Tool Router session that gives your agent access to Semanticscholar tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
```python
async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Semanticscholar
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["semanticscholar"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")
```

### 6. Initialize the Pydantic AI Agent

What's happening:
- The MCP client connects to the Semanticscholar endpoint
- The agent uses GPT-5 to interpret user commands and perform Semanticscholar operations
- The instructions field defines the agent's role and behavior
```python
# Attach the MCP server to a Pydantic AI Agent
semanticscholar_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
agent = Agent(
    "openai:gpt-5",
    toolsets=[semanticscholar_mcp],
    instructions=(
        "You are a Semanticscholar assistant. Use Semanticscholar tools to help users "
        "with their requests. Ask clarifying questions when needed."
    ),
)
```

### 7. Build the chat interface

What's happening:
- The agent reads input from the terminal and streams its response
- Semanticscholar API calls happen automatically under the hood
- Message history is passed back into each run so the agent maintains context across turns
```python
# Simple REPL with message history
history = []
print("Chat started! Type 'exit' or 'quit' to end.\n")
print("Try asking the agent to help you with Semanticscholar.\n")

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"exit", "quit", "bye"}:
        print("\nGoodbye!")
        break
    if not user_input:
        continue

    print("\nAgent is thinking...\n", flush=True)

    async with agent.run_stream(user_input, message_history=history) as stream_result:
        # stream_text(delta=True) yields only the newly generated text on each
        # iteration, so concatenating the pieces rebuilds the reply exactly once.
        collected_text = ""
        async for text_piece in stream_result.stream_text(delta=True):
            collected_text += text_piece
        result = stream_result

    print(f"Agent: {collected_text}\n")
    history = result.all_messages()
```

### 8. Run the application

What's happening:
- The asyncio loop launches the agent and keeps it running until you exit
```python
if __name__ == "__main__":
    asyncio.run(main())
```

## Complete Code

```python
import asyncio
import os
from dotenv import load_dotenv
from composio import Composio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

async def main():
    api_key = os.getenv("COMPOSIO_API_KEY")
    user_id = os.getenv("USER_ID")
    if not api_key or not user_id:
        raise RuntimeError("Set COMPOSIO_API_KEY and USER_ID in your environment")

    # Create a Composio Tool Router session for Semanticscholar
    composio = Composio(api_key=api_key)
    session = composio.create(
        user_id=user_id,
        toolkits=["semanticscholar"],
    )
    url = session.mcp.url
    if not url:
        raise ValueError("Composio session did not return an MCP URL")

    # Attach the MCP server to a Pydantic AI Agent
    semanticscholar_mcp = MCPServerStreamableHTTP(url, headers={"x-api-key": api_key})
    agent = Agent(
        "openai:gpt-5",
        toolsets=[semanticscholar_mcp],
        instructions=(
            "You are a Semanticscholar assistant. Use Semanticscholar tools to help users "
            "with their requests. Ask clarifying questions when needed."
        ),
    )

    # Simple REPL with message history
    history = []
    print("Chat started! Type 'exit' or 'quit' to end.\n")
    print("Try asking the agent to help you with Semanticscholar.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "bye"}:
            print("\nGoodbye!")
            break
        if not user_input:
            continue

        print("\nAgent is thinking...\n", flush=True)

        async with agent.run_stream(user_input, message_history=history) as stream_result:
            # stream_text(delta=True) yields only the newly generated text on each
            # iteration, so concatenating the pieces rebuilds the reply exactly once.
            collected_text = ""
            async for text_piece in stream_result.stream_text(delta=True):
                collected_text += text_piece
            result = stream_result

        print(f"Agent: {collected_text}\n")
        history = result.all_messages()

if __name__ == "__main__":
    asyncio.run(main())
```

## Conclusion

You've built a Pydantic AI agent that can interact with Semanticscholar through Composio's Tool Router. With this setup, your agent can perform real Semanticscholar actions through natural language.
You can extend this further by:
- Adding other toolkits like Gmail, HubSpot, or Salesforce
- Building a web-based chat interface around this agent
- Using multiple MCP endpoints to enable cross-app workflows (for example, pairing Semanticscholar with Gmail to find papers and email summaries; see the sketch below)
This architecture makes your AI agent "agent-native": it can securely use APIs in a unified, composable way without custom integrations.
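
As a minimal sketch of how other toolkits slot in, the Tool Router session from step 5 can request several toolkits at once; the extra "gmail" slug is an assumption for illustration, and each toolkit still has to be connected for your user in the Composio dashboard:

```python
# Hypothetical multi-toolkit session: the same Tool Router call as step 5,
# with additional toolkit slugs ("gmail" is illustrative).
session = composio.create(
    user_id=user_id,
    toolkits=["semanticscholar", "gmail"],
)
# A single MCP URL now serves tools from both toolkits.
mcp = MCPServerStreamableHTTP(session.mcp.url, headers={"x-api-key": api_key})
```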

## How to build Semanticscholar MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/semanticscholar/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/semanticscholar/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/semanticscholar/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/semanticscholar/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/semanticscholar/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/semanticscholar/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/semanticscholar/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/semanticscholar/framework/cli)
- [Google ADK](https://composio.dev/toolkits/semanticscholar/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/semanticscholar/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/semanticscholar/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/semanticscholar/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/semanticscholar/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/semanticscholar/framework/crew-ai)

## Related Toolkits

- [Composio](https://composio.dev/toolkits/composio) - Composio is an integration platform that connects AI agents with hundreds of business tools. It streamlines authentication and lets you trigger actions across services—no custom code needed.
- [Composio search](https://composio.dev/toolkits/composio_search) - Composio search is a unified web search toolkit spanning travel, e-commerce, news, financial markets, images, and more. It lets you and your apps tap into up-to-date web data from a single, easy-to-integrate service.
- [Perplexityai](https://composio.dev/toolkits/perplexityai) - Perplexityai delivers natural, conversational AI models for generating human-like text. Instantly get context-aware, high-quality responses for chat, search, or complex workflows.
- [Browser tool](https://composio.dev/toolkits/browser_tool) - Browser tool is a virtual browser integration that lets AI agents interact with the web programmatically. It enables automated browsing, scraping, and action-taking from any AI workflow.
- [Ai ml api](https://composio.dev/toolkits/ai_ml_api) - Ai ml api is a suite of AI/ML models for natural language and image tasks. It provides fast, scalable access to advanced AI capabilities for your apps and workflows.
- [Aivoov](https://composio.dev/toolkits/aivoov) - Aivoov is an AI-powered text-to-speech platform offering 1,000+ voices in over 150 languages. Instantly turn written content into natural, human-like audio for any application.
- [All images ai](https://composio.dev/toolkits/all_images_ai) - All-Images.ai is an AI-powered image generation and management platform. It helps you create, search, and organize images effortlessly with advanced AI capabilities.
- [Anthropic administrator](https://composio.dev/toolkits/anthropic_administrator) - Anthropic administrator is an API for managing Anthropic organizational resources like members, workspaces, and API keys. It helps you automate admin tasks and streamline resource management across your Anthropic organization.
- [Api labz](https://composio.dev/toolkits/api_labz) - Api labz is a platform offering a suite of AI-driven APIs and workflow tools. It helps developers automate tasks and build smarter, more efficient applications.
- [Apipie ai](https://composio.dev/toolkits/apipie_ai) - Apipie ai is an AI model aggregator offering a single API for accessing top AI models from multiple providers. It helps developers build cost-efficient, latency-optimized AI solutions without juggling multiple integrations.
- [Astica ai](https://composio.dev/toolkits/astica_ai) - Astica ai provides APIs for computer vision, NLP, and voice synthesis. Integrate advanced AI features into your app with a single API key.
- [Bigml](https://composio.dev/toolkits/bigml) - BigML is a machine learning platform that lets you build, train, and deploy predictive models from your data. Its intuitive interface and robust API make machine learning accessible and efficient.
- [Botbaba](https://composio.dev/toolkits/botbaba) - Botbaba is a platform for building, managing, and deploying conversational AI chatbots across messaging channels. It streamlines chatbot automation, making it easier to integrate AI into customer interactions.
- [Botpress](https://composio.dev/toolkits/botpress) - Botpress is an open-source platform for building, deploying, and managing chatbots. It helps teams automate conversations and deliver rich, interactive messaging experiences.
- [Chatbotkit](https://composio.dev/toolkits/chatbotkit) - Chatbotkit is a platform for building and managing AI-powered chatbots using robust APIs and SDKs. It lets you easily add conversational AI to your apps for better user engagement.
- [Cody](https://composio.dev/toolkits/cody) - Cody is an AI assistant built for businesses, trained on your company's knowledge and data. It delivers instant answers and insights, tailored for your team.
- [Context7 MCP](https://composio.dev/toolkits/context7_mcp) - Context7 MCP delivers live, version-specific code docs and examples right from the source. It helps developers and AI agents instantly retrieve authoritative programming info—no more out-of-date docs.
- [Customgpt](https://composio.dev/toolkits/customgpt) - CustomGPT.ai lets you build and deploy chatbots tailored to your own data and business needs. Get precise and context-aware AI conversations without writing code.
- [Datarobot](https://composio.dev/toolkits/datarobot) - Datarobot is a machine learning platform that automates model development, deployment, and monitoring. It empowers organizations to quickly gain predictive insights from large datasets.
- [Deepgram](https://composio.dev/toolkits/deepgram) - Deepgram is an AI-powered speech recognition platform for accurate audio transcription and understanding. It enables fast, scalable speech-to-text with advanced audio intelligence features.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Semanticscholar MCP?

With a standalone Semanticscholar MCP server, the agents and LLMs can only access a fixed set of Semanticscholar tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Semanticscholar and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Pydantic AI?

Yes, you can. Pydantic AI fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Semanticscholar tools.

### Can I manage the permissions and scopes for Semanticscholar while using Tool Router?

Yes, absolutely. You can configure which Semanticscholar scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Semanticscholar data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
