# How to integrate Gemini MCP with LangChain

```json
{
  "title": "How to integrate Gemini MCP with LangChain",
  "toolkit": "Gemini",
  "toolkit_slug": "gemini",
  "framework": "LangChain",
  "framework_slug": "langchain",
  "url": "https://composio.dev/toolkits/gemini/framework/langchain",
  "markdown_url": "https://composio.dev/toolkits/gemini/framework/langchain.md",
  "updated_at": "2026-05-12T10:12:37.656Z"
}
```

## Introduction

This guide walks you through connecting Gemini to LangChain using the Composio Tool Router. By the end, you'll have a working Gemini agent that can, through natural language commands, summarize a research article in 100 words, generate a creative image of a futuristic city, or create a 30-second video from a script.
Along the way, you'll see how to give your LangChain agent real control over a Gemini account through Composio's Gemini MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## TL;DR

Here's what you'll learn:
- Get and set up your OpenAI and Composio API keys
- Connect your Gemini project to Composio
- Create a Tool Router MCP session for Gemini
- Initialize an MCP client and retrieve Gemini tools
- Build a LangChain agent that can interact with Gemini
- Set up an interactive chat interface for testing

## What is LangChain?

LangChain is a framework for developing applications powered by language models. It provides tools and abstractions for building agents that can reason, use tools, and maintain conversation context.
Key features include:
- Agent Framework: Build agents that can use tools and make decisions
- MCP Integration: Connect to external services through Model Context Protocol adapters
- Memory Management: Maintain conversation history across interactions
- Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers
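
To make this concrete, here's a minimal LangChain agent with a single custom tool (a sketch assuming `langchain` is installed and `OPENAI_API_KEY` is set; the `word_count` tool and model choice are illustrative, and no Composio or MCP is involved yet):

```python
# Minimal LangChain agent sketch: one custom tool, no MCP involved yet.
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# create_agent pairs a model with a list of tools, as used later in this guide
agent = create_agent("gpt-5", [word_count])
result = agent.invoke(
    {"messages": [{"role": "user", "content": "How many words are in 'to be or not to be'?"}]}
)
print(result["messages"][-1].content)
```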

## What is the Gemini MCP server, and what's possible with it?

The Gemini MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants, such as Claude and Cursor, directly to your Gemini account. It provides structured, secure access to Gemini's multimodal AI features, so your agent can generate text, images, and videos, analyze content, and manage model resources on your behalf.
- Text and content generation: Instruct your agent to create high-quality, customized text using Gemini's advanced generative models—great for brainstorming, drafting, or summarizing information.
- Creative image and video generation: Ask the agent to generate original images or high-quality videos from text prompts using Gemini 2.5 Flash and Veo models, with fine control over style and format.
- Embedding and semantic analysis: Let your agent transform any text into rich semantic embeddings for similarity search, clustering, or classification tasks.
- Model discovery and optimization: Have the agent list available Gemini and Veo models, check their capabilities, and select the best fit for your project or workflow.
- Efficient resource management: Enable the agent to track video generation operations, download final assets, and optimize prompt inputs by counting tokens—all without manual intervention.
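
To make this concrete, here are a few illustrative natural-language tasks you could hand to the agent built later in this guide (the prompts are examples, and `agent` refers to the object created in step 7):

```python
# Illustrative prompts for the finished agent from this guide; each maps onto
# one or more Gemini MCP tools (text, image, video, or embedding generation).
example_tasks = [
    "Summarize this research article in 100 words: <paste article text>",
    "Generate a creative image of a futuristic city at golden hour",
    "Create a 30-second video of a rover exploring a Mars canyon",
    "Embed these product descriptions and tell me which two are most similar",
]

# Once `agent` exists (step 7), any task can be run as:
#   response = await agent.ainvoke({"messages": [{"role": "user", "content": example_tasks[0]}]})
```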

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `GEMINI_COUNT_TOKENS` | Count Tokens (Gemini) | Counts the number of tokens in text using Gemini tokenization. Useful for estimating costs, checking input limits, and optimizing prompts before making API calls. |
| `GEMINI_EMBED_CONTENT` | Embed Content (Gemini) | Generates text embeddings using Gemini embedding models. Converts text into numerical vectors for semantic search, similarity comparison, clustering, and classification tasks. |
| `GEMINI_GENERATE_CONTENT` | Generate Content (Gemini) | Generates text content or speech audio from prompts using Gemini models. Supports text generation models (Gemini Flash, Pro) and text-to-speech models with configurable parameters. Generated text is nested at results[i].response.data.text. Output may be wrapped in markdown fences (e.g., ```html...```) or preceded by explanatory prose; strip these before file writing or rendering. |
| `GEMINI_GENERATE_IMAGE` | Generate Image (Nano Banana) | Generates images from text prompts using Gemini models (Nano Banana). Supports models: 'gemini-2.5-flash-image' (GA stable, fast), 'gemini-3-pro-image-preview' (Nano Banana Pro - advanced with 4K resolution, thinking mode, up to 14 reference images), and 'gemini-2.0-flash-exp-image-generation' (2.0 Flash experimental). Returns one image per call; images are uploaded to S3. Parse the response at data.image.s3url or the text-type entry in data.content; prefer the URL to avoid base64 blobs. Always validate s3url before treating the call as successful; a 200 response may contain only text with no image. Store s3url immediately, as URLs can expire. Output formats are raster only (JPG/PNG/WebP); request PNG for transparency. Concurrent usage may trigger HTTP 429/RESOURCE_EXHAUSTED; keep concurrency ≤3 and use exponential backoff (1s→2s→4s, ~5 retries). Note: never set SYNC_TO_WORKBENCH to true in RUBE_MULTI_EXECUTE_TOOL. |
| `GEMINI_GENERATE_VIDEOS` | Generate Videos (Veo) | Generates videos from text prompts using Google's Veo models. Returns an operation_name for tracking; pass it verbatim (no edits) to GEMINI_WAIT_FOR_VIDEO or GEMINI_GET_VIDEOS_OPERATION. Jobs take 30–180+ seconds; wait 10s before first poll, then poll every 10–30s (allow up to 12 min). Successful results include data.video_file.s3url — missing s3url means failure. If done=true but no video_file, check raiMediaFilteredReasons (safety block); revise prompt and regenerate. Text-only; cannot accept image inputs. Max ~3–5 concurrent jobs; 429 RESOURCE_EXHAUSTED requires exponential backoff. For retries, always start a fresh call — never reuse a failed operation_name. |
| `GEMINI_LIST_MODELS` | List Models (Gemini API) | Lists available Gemini and Veo models with their capabilities and limits. Useful for discovering supported models and their features before making generation requests. Before calling video generation tools, verify model availability here — preview Veo models (e.g., veo-3.0-generate-preview) may be unavailable or return missing video URIs; prefer stable models like veo-2.0-generate-001. |
| `GEMINI_WAIT_FOR_VIDEO` | Wait and Download Video (Veo) | Polls a Veo video generation operation until completion, then downloads and returns the video as a FileDownloadable. Generation takes 30–120+ seconds (up to ~10–12 min); long waits are normal, not failures. On completion, the URL is nested at data.video_file.s3url — validate it is non-empty before downstream use. A done=true response without a valid s3url indicates safety filter rejection (check raiMediaFilteredReasons) or quota exhaustion — adjust the prompt and regenerate. On timeout, use GEMINI_GET_VIDEOS_OPERATION with incremental backoff before starting a new job. Keep parallel jobs to 3–5 to avoid 429 RESOURCE_EXHAUSTED errors. |
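
The image and video tool descriptions above double as operational guidance. As one example, here is a minimal sketch of how you might validate and retry a `GEMINI_GENERATE_IMAGE` result, assuming a hypothetical `call_generate_image` stand-in for your actual tool invocation and a response shaped as the table describes:

```python
import time

def generate_image_with_backoff(call_generate_image, prompt: str, max_retries: int = 5) -> str:
    """Hypothetical wrapper around a GEMINI_GENERATE_IMAGE invocation.

    Follows the table's guidance: validate s3url before treating the call as
    successful, and back off exponentially (1s -> 2s -> 4s) on rate limits.
    """
    delay = 1.0
    for _ in range(max_retries):
        try:
            result = call_generate_image(prompt)  # stand-in for your actual tool call
        except RuntimeError as err:  # stand-in for the SDK's rate-limit error type
            if "RESOURCE_EXHAUSTED" not in str(err):
                raise
            time.sleep(delay)
            delay *= 2
            continue
        # A successful-looking response may still contain only text, no image.
        s3url = ((result.get("data") or {}).get("image") or {}).get("s3url")
        if s3url:
            return s3url  # store immediately; S3 URLs can expire
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("No image URL after retries; revise the prompt or check quota.")
```

The same backoff pattern applies to `GEMINI_GENERATE_VIDEOS`, which additionally requires polling via `GEMINI_WAIT_FOR_VIDEO` or `GEMINI_GET_VIDEOS_OPERATION`.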

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Gemini MCP server is an implementation of the Model Context Protocol that connects your AI agent to Gemini. It provides structured and secure access so your agent can perform Gemini operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app. The managed server helps you prototype fast and go from zero to one quickly; for production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### 1. Prerequisites: Get API keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely; you'll need it for authentication.

### 2. Install dependencies

Install the Composio SDK, LangChain, and the MCP adapters for the language you're using.
```bash
# Python
pip install composio-langchain langchain-mcp-adapters langchain python-dotenv
```

```bash
# TypeScript
npm install @composio/langchain @langchain/core @langchain/openai @langchain/mcp-adapters dotenv
```

### 3. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your requests to Composio's API
- COMPOSIO_USER_ID identifies the user for session management
- OPENAI_API_KEY enables access to OpenAI's language models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
COMPOSIO_USER_ID=your_composio_user_id_here
OPENAI_API_KEY=your_openai_api_key_here
```

### 4. Import dependencies

Import the required libraries and load your environment variables.
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from dotenv import load_dotenv
from composio import Composio
import asyncio
import os

load_dotenv()
```

```typescript
import { Composio } from '@composio/core';
import { LangchainProvider } from '@composio/langchain';
import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { createAgent } from "langchain";
import * as readline from 'readline';
import 'dotenv/config';
```

### 5. Initialize Composio client

What's happening:
- We validate that COMPOSIO_API_KEY and COMPOSIO_USER_ID are set before proceeding
- We then create a Composio instance that will manage our connection to Gemini tools
```python
async def main():
    # Fail fast if the required environment variables are missing
    if not os.getenv("COMPOSIO_API_KEY"):
        raise ValueError("COMPOSIO_API_KEY is not set")
    if not os.getenv("COMPOSIO_USER_ID"):
        raise ValueError("COMPOSIO_USER_ID is not set")

    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
```

```typescript
const composioApiKey = process.env.COMPOSIO_API_KEY;
const userId = process.env.COMPOSIO_USER_ID;

if (!composioApiKey) throw new Error('COMPOSIO_API_KEY is not set');
if (!userId) throw new Error('COMPOSIO_USER_ID is not set');

async function main() {
    const composio = new Composio({
        apiKey: composioApiKey as string,
        provider: new LangchainProvider()
    });
```

### 6. Create a Tool Router session

What's happening:
- We're creating a Tool Router session that gives your agent access to Gemini tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
- This approach allows the agent to dynamically load and use Gemini tools as needed
```python
# Create Tool Router session for Gemini
session = composio.create(
    user_id=os.getenv("COMPOSIO_USER_ID"),
    toolkits=['gemini']
)

url = session.mcp.url
```

```typescript
const session = await composio.create(
    userId as string,
    {
        toolkits: ['gemini']
    }
);

const url = session.mcp.url;
```

### 7. Configure the agent with the MCP URL

Point LangChain's MCP client at the session URL, load the Gemini tools, and create an agent that can use them.
```python
client = MultiServerMCPClient({
    "gemini-agent": {
        "transport": "streamable_http",
        "url": session.mcp.url,
        "headers": {
            "x-api-key": os.getenv("COMPOSIO_API_KEY")
        }
    }
})

tools = await client.get_tools()

agent = create_agent("gpt-5", tools)
```

```typescript
const client = new MultiServerMCPClient({
    "gemini-agent": {
        transport: "http",
        url: url,
        headers: {
            "x-api-key": process.env.COMPOSIO_API_KEY
        }
    }
});

const tools = await client.getTools();

const agent = createAgent({ model: "gpt-5", tools });
```

### 8. Set up interactive chat interface

Run a simple chat loop that forwards your messages to the agent and maintains the conversation history for context.
```python
conversation_history = []

print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
print("Ask any Gemini related question or task to the agent.\n")

while True:
    user_input = input("You: ").strip()

    if user_input.lower() in ['exit', 'quit', 'bye']:
        print("\nGoodbye!")
        break

    if not user_input:
        continue

    conversation_history.append({"role": "user", "content": user_input})
    print("\nAgent is thinking...\n")

    response = await agent.ainvoke({"messages": conversation_history})
    conversation_history = response['messages']
    final_response = response['messages'][-1].content
    print(f"Agent: {final_response}\n")
```

```typescript
let conversationHistory: any[] = [];

console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
console.log("Ask any Gemini related question or task to the agent.\n");

const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
    prompt: 'You: '
});

rl.prompt();

rl.on('line', async (userInput: string) => {
    const trimmedInput = userInput.trim();

    if (['exit', 'quit', 'bye'].includes(trimmedInput.toLowerCase())) {
        console.log("\nGoodbye!");
        rl.close();
        process.exit(0);
    }

    if (!trimmedInput) {
        rl.prompt();
        return;
    }

    conversationHistory.push({ role: "user", content: trimmedInput });
    console.log("\nAgent is thinking...\n");

    const response = await agent.invoke({ messages: conversationHistory });
    conversationHistory = response.messages;

    const finalResponse = response.messages[response.messages.length - 1]?.content;
    console.log(`Agent: ${finalResponse}\n`);
        
    rl.prompt();
});

rl.on('close', () => {
    console.log('\nSession ended.');
    process.exit(0);
});
```

### 9. Run the application

Run the script to start the interactive chat loop.
```python
if __name__ == "__main__":
    asyncio.run(main())
```

```typescript
main().catch((err) => {
    console.error('Fatal error:', err);
    process.exit(1);
});
```

## Complete Code

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from dotenv import load_dotenv
from composio import Composio
import asyncio
import os

load_dotenv()

async def main():
    # Fail fast if the required environment variables are missing
    if not os.getenv("COMPOSIO_API_KEY"):
        raise ValueError("COMPOSIO_API_KEY is not set")
    if not os.getenv("COMPOSIO_USER_ID"):
        raise ValueError("COMPOSIO_USER_ID is not set")

    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
    
    session = composio.create(
        user_id=os.getenv("COMPOSIO_USER_ID"),
        toolkits=['gemini']
    )

    url = session.mcp.url
    
    client = MultiServerMCPClient({
        "gemini-agent": {
            "transport": "streamable_http",
            "url": url,
            "headers": {
                "x-api-key": os.getenv("COMPOSIO_API_KEY")
            }
        }
    })
    
    tools = await client.get_tools()
  
    agent = create_agent("gpt-5", tools)
    
    conversation_history = []
    
    print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
    print("Ask any Gemini related question or task to the agent.\n")
    
    while True:
        user_input = input("You: ").strip()
        
        if user_input.lower() in ['exit', 'quit', 'bye']:
            print("\nGoodbye!")
            break
        
        if not user_input:
            continue
        
        conversation_history.append({"role": "user", "content": user_input})
        print("\nAgent is thinking...\n")
        
        response = await agent.ainvoke({"messages": conversation_history})
        conversation_history = response['messages']
        final_response = response['messages'][-1].content
        print(f"Agent: {final_response}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

```typescript
import { Composio } from '@composio/core';
import { LangchainProvider } from '@composio/langchain';
import { MultiServerMCPClient } from "@langchain/mcp-adapters";  
import { createAgent } from "langchain";
import * as readline from 'readline';
import 'dotenv/config';

const composioApiKey = process.env.COMPOSIO_API_KEY;
const userId = process.env.COMPOSIO_USER_ID;

if (!composioApiKey) throw new Error('COMPOSIO_API_KEY is not set');
if (!userId) throw new Error('COMPOSIO_USER_ID is not set');

async function main() {
    const composio = new Composio({
        apiKey: composioApiKey as string,
        provider: new LangchainProvider()
    });

    const session = await composio.create(
        userId as string,
        {
            toolkits: ['gemini']
        }
    );

    const url = session.mcp.url;
    
    const client = new MultiServerMCPClient({
        "gemini-agent": {
            transport: "http",
            url: url,
            headers: {
                "x-api-key": process.env.COMPOSIO_API_KEY
            }
        }
    });
    
    const tools = await client.getTools();
  
    const agent = createAgent({ model: "gpt-5", tools });
    
    let conversationHistory: any[] = [];
    
    console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
    console.log("Ask any Gemini related question or task to the agent.\n");
    
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
        prompt: 'You: '
    });

    rl.prompt();

    rl.on('line', async (userInput: string) => {
        const trimmedInput = userInput.trim();
        
        if (['exit', 'quit', 'bye'].includes(trimmedInput.toLowerCase())) {
            console.log("\nGoodbye!");
            rl.close();
            process.exit(0);
        }
        
        if (!trimmedInput) {
            rl.prompt();
            return;
        }
        
        conversationHistory.push({ role: "user", content: trimmedInput });
        console.log("\nAgent is thinking...\n");
        
        const response = await agent.invoke({ messages: conversationHistory });
        conversationHistory = response.messages;
        
        const finalResponse = response.messages[response.messages.length - 1]?.content;
        console.log(`Agent: ${finalResponse}\n`);
        
        rl.prompt();
    });

    rl.on('close', () => {
        console.log('\nSession ended.');
        process.exit(0);
    });
}

main().catch((err) => {
    console.error('Fatal error:', err);
    process.exit(1);
});
```

## Conclusion

You've successfully built a LangChain agent that can interact with Gemini through Composio's Tool Router.
Key features of this implementation:
- Dynamic tool loading through Composio's Tool Router
- Conversation history maintenance for context-aware responses
- Async execution for clean, efficient agent workflows in both Python and TypeScript
You can extend this further by adding error handling, implementing specific business logic, or integrating additional Composio toolkits to create multi-app workflows.

## How to build Gemini MCP Agent with another framework

- [ChatGPT](https://composio.dev/toolkits/gemini/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/gemini/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/gemini/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/gemini/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/gemini/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/gemini/framework/codex)
- [Cursor](https://composio.dev/toolkits/gemini/framework/cursor)
- [VS Code](https://composio.dev/toolkits/gemini/framework/vscode)
- [OpenCode](https://composio.dev/toolkits/gemini/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/gemini/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/gemini/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/gemini/framework/cli)
- [Google ADK](https://composio.dev/toolkits/gemini/framework/google-adk)
- [Vercel AI SDK](https://composio.dev/toolkits/gemini/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/gemini/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/gemini/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/gemini/framework/crew-ai)

## Related Toolkits

- [Composio](https://composio.dev/toolkits/composio) - Composio is an integration platform that connects AI agents with hundreds of business tools. It streamlines authentication and lets you trigger actions across services—no custom code needed.
- [Composio search](https://composio.dev/toolkits/composio_search) - Composio search is a unified web search toolkit spanning travel, e-commerce, news, financial markets, images, and more. It lets you and your apps tap into up-to-date web data from a single, easy-to-integrate service.
- [Perplexityai](https://composio.dev/toolkits/perplexityai) - Perplexityai delivers natural, conversational AI models for generating human-like text. Instantly get context-aware, high-quality responses for chat, search, or complex workflows.
- [Browser tool](https://composio.dev/toolkits/browser_tool) - Browser tool is a virtual browser integration that lets AI agents interact with the web programmatically. It enables automated browsing, scraping, and action-taking from any AI workflow.
- [Ai ml api](https://composio.dev/toolkits/ai_ml_api) - Ai ml api is a suite of AI/ML models for natural language and image tasks. It provides fast, scalable access to advanced AI capabilities for your apps and workflows.
- [Aivoov](https://composio.dev/toolkits/aivoov) - Aivoov is an AI-powered text-to-speech platform offering 1,000+ voices in over 150 languages. Instantly turn written content into natural, human-like audio for any application.
- [All images ai](https://composio.dev/toolkits/all_images_ai) - All-Images.ai is an AI-powered image generation and management platform. It helps you create, search, and organize images effortlessly with advanced AI capabilities.
- [Anthropic administrator](https://composio.dev/toolkits/anthropic_administrator) - Anthropic administrator is an API for managing Anthropic organizational resources like members, workspaces, and API keys. It helps you automate admin tasks and streamline resource management across your Anthropic organization.
- [Api labz](https://composio.dev/toolkits/api_labz) - Api labz is a platform offering a suite of AI-driven APIs and workflow tools. It helps developers automate tasks and build smarter, more efficient applications.
- [Apipie ai](https://composio.dev/toolkits/apipie_ai) - Apipie ai is an AI model aggregator offering a single API for accessing top AI models from multiple providers. It helps developers build cost-efficient, latency-optimized AI solutions without juggling multiple integrations.
- [Astica ai](https://composio.dev/toolkits/astica_ai) - Astica ai provides APIs for computer vision, NLP, and voice synthesis. Integrate advanced AI features into your app with a single API key.
- [Bigml](https://composio.dev/toolkits/bigml) - BigML is a machine learning platform that lets you build, train, and deploy predictive models from your data. Its intuitive interface and robust API make machine learning accessible and efficient.
- [Botbaba](https://composio.dev/toolkits/botbaba) - Botbaba is a platform for building, managing, and deploying conversational AI chatbots across messaging channels. It streamlines chatbot automation, making it easier to integrate AI into customer interactions.
- [Botpress](https://composio.dev/toolkits/botpress) - Botpress is an open-source platform for building, deploying, and managing chatbots. It helps teams automate conversations and deliver rich, interactive messaging experiences.
- [Chatbotkit](https://composio.dev/toolkits/chatbotkit) - Chatbotkit is a platform for building and managing AI-powered chatbots using robust APIs and SDKs. It lets you easily add conversational AI to your apps for better user engagement.
- [Cody](https://composio.dev/toolkits/cody) - Cody is an AI assistant built for businesses, trained on your company's knowledge and data. It delivers instant answers and insights, tailored for your team.
- [Context7 MCP](https://composio.dev/toolkits/context7_mcp) - Context7 MCP delivers live, version-specific code docs and examples right from the source. It helps developers and AI agents instantly retrieve authoritative programming info—no more out-of-date docs.
- [Customgpt](https://composio.dev/toolkits/customgpt) - CustomGPT.ai lets you build and deploy chatbots tailored to your own data and business needs. Get precise and context-aware AI conversations without writing code.
- [Datarobot](https://composio.dev/toolkits/datarobot) - Datarobot is a machine learning platform that automates model development, deployment, and monitoring. It empowers organizations to quickly gain predictive insights from large datasets.
- [Deepgram](https://composio.dev/toolkits/deepgram) - Deepgram is an AI-powered speech recognition platform for accurate audio transcription and understanding. It enables fast, scalable speech-to-text with advanced audio intelligence features.

## Frequently Asked Questions

### What are the differences between Tool Router MCP and Gemini MCP?

With a standalone Gemini MCP server, the agents and LLMs can only access a fixed set of Gemini tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Gemini and many other apps based on the task at hand, all through a single MCP endpoint.
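
For example, the session-creation call from step 6 accepts multiple toolkit slugs at once; the non-Gemini slugs below are illustrative:

```python
# One Tool Router session, multiple toolkits: the agent loads tools per task.
# 'github' and 'slack' are illustrative slugs; swap in the toolkits you need.
session = composio.create(
    user_id=os.getenv("COMPOSIO_USER_ID"),
    toolkits=['gemini', 'github', 'slack']
)
```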

### Can I use Tool Router MCP with LangChain?

Yes, you can. LangChain fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Gemini tools.

### Can I manage the permissions and scopes for Gemini while using Tool Router?

Yes, absolutely. You can configure which Gemini scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Gemini data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
