# How to integrate Hyperbrowser MCP with LangChain

```json
{
  "title": "How to integrate Hyperbrowser MCP with LangChain",
  "toolkit": "Hyperbrowser",
  "toolkit_slug": "hyperbrowser",
  "framework": "LangChain",
  "framework_slug": "langchain",
  "url": "https://composio.dev/toolkits/hyperbrowser/framework/langchain",
  "markdown_url": "https://composio.dev/toolkits/hyperbrowser/framework/langchain.md",
  "updated_at": "2026-05-12T10:15:29.348Z"
}
```

## Introduction

This guide walks you through connecting Hyperbrowser to LangChain using the Composio Tool Router. By the end, you'll have a working Hyperbrowser agent that can start a browser session in stealth mode, extract product titles from a given URL, and check the status of an ongoing scrape job, all through natural language commands.
You'll learn how to give your LangChain agent real control over a Hyperbrowser account through Composio's Hyperbrowser MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Hyperbrowser with

- [OpenAI Agents SDK](https://composio.dev/toolkits/hyperbrowser/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/hyperbrowser/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/hyperbrowser/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/hyperbrowser/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/hyperbrowser/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/hyperbrowser/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/hyperbrowser/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/hyperbrowser/framework/cli)
- [Google ADK](https://composio.dev/toolkits/hyperbrowser/framework/google-adk)
- [Vercel AI SDK](https://composio.dev/toolkits/hyperbrowser/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/hyperbrowser/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/hyperbrowser/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/hyperbrowser/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- Get and set up your OpenAI and Composio API keys
- Connect your Hyperbrowser project to Composio
- Create a Tool Router MCP session for Hyperbrowser
- Initialize an MCP client and retrieve Hyperbrowser tools
- Build a LangChain agent that can interact with Hyperbrowser
- Set up an interactive chat interface for testing

## What is LangChain?

LangChain is a framework for developing applications powered by language models. It provides tools and abstractions for building agents that can reason, use tools, and maintain conversation context.
Key features include:
- Agent Framework: Build agents that can use tools and make decisions
- MCP Integration: Connect to external services through Model Context Protocol adapters
- Memory Management: Maintain conversation history across interactions
- Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers
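The agent loop that LangChain automates can be sketched in plain Python. Everything below (`fake_model`, the `add` tool) is a toy stand-in, not the LangChain API; the point is the cycle the framework runs for you: the model picks a tool, the framework executes it, and the observation is fed back until the model can answer.

```python
# Toy sketch of an agent loop. 'fake_model' and TOOLS are illustrative
# stand-ins for an LLM and a tool registry, not real LangChain objects.
TOOLS = {
    "add": lambda a, b: a + b,
}

def fake_model(messages):
    """Pretend LLM: requests the 'add' tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"answer": f"The sum is {result}"}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        step = fake_model(messages)
        if "answer" in step:
            return step["answer"]
        # Execute the requested tool and append the observation.
        result = TOOLS[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What is 2 + 3?"))  # The sum is 5
```

With MCP integration, the tool registry is populated from an MCP server instead of being hard-coded, which is exactly what the steps below set up.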

## What is the Hyperbrowser MCP server, and what's possible with it?

The Hyperbrowser MCP server is an implementation of the Model Context Protocol that connects your AI agents and assistants (Claude, Cursor, and others) directly to your Hyperbrowser account. It provides structured, secure access to automated browser sessions, web scraping, and browser-based task management, so your agent can launch sessions, extract data, manage automation jobs, and monitor progress on your behalf.
- Automated browser session creation: Let your agent spin up new browser sessions with custom privacy, stealth, and proxy settings for tailored automation tasks.
- Scalable web scraping and extraction: Easily initiate and manage scrape jobs to extract structured content from any target website, with support for session and scrape customization.
- Real-time job status monitoring: Have your agent check, track, and report the live status of browser-use, crawl, or data extraction jobs, ensuring you always know what's happening.
- Retrieve results from automation jobs: Fetch and review the outputs of completed crawl or extract jobs, including paginated data and detailed results, right inside your workflow.
- Profile and automation management: Create or delete Hyperbrowser profiles as needed, giving you flexible control over your automation environment and resources.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `HYPERBROWSER_ADD_EXTENSION` | Add Extension | Tool to add a new browser extension to Hyperbrowser for use in sessions. Use when you need to upload a Chrome extension zip file that can be loaded into browser sessions. The extension zip must contain a valid manifest.json file at the root level. |
| `HYPERBROWSER_CREATE_PROFILE` | Create Hyperbrowser Profile | Creates a new persistent Hyperbrowser profile for storing browser state (cookies, sessions, etc.). Use this to create a reusable profile that can be attached to browser sessions via the profile ID. Profiles allow you to maintain logged-in states across multiple sessions without re-authenticating. |
| `HYPERBROWSER_CREATE_SCRAPE_JOB` | Create Scrape Job | Tool to initiate a new scrape job. Use when you need to extract structured content from a target URL with custom session and scrape settings. |
| `HYPERBROWSER_CREATE_SESSION` | Create Session | Tool to create a new browser session with custom stealth, proxy, and privacy settings. Use when initializing an automated browsing session with specific configuration. |
| `HYPERBROWSER_DELETE_PROFILE` | Delete Profile | Tool to delete a profile. Use when you need to remove a profile by its unique identifier after confirming its existence. |
| `HYPERBROWSER_FETCH_WEB_PAGE` | Fetch Web Page | Tool to fetch a web page and return content in various formats (HTML, Markdown, JSON, screenshot, etc.). Use when you need to retrieve and process web content with customizable browser settings, output formats, and content filtering options. |
| `HYPERBROWSER_GET_BROWSER_USE_TASK_STATUS` | Get browser-use task status | Tool to retrieve the current status of a browser-use task. Use when checking if a browser automation task has completed or is still pending. |
| `HYPERBROWSER_GET_CLAUDE_COMPUTER_USE_TASK_RESULT` | Get Claude Computer Use Task Result | Tool to retrieve the complete result and status of a Claude Computer Use task. Use when you need full task details including execution steps, final results, and error information. |
| `HYPERBROWSER_GET_CLAUDE_COMPUTER_USE_TASK_STATUS` | Get Claude Computer Use Task Status | Poll the execution status of a Claude Computer Use task. Use after calling HYPERBROWSER_START_CLAUDE_COMPUTER_USE_TASK to check if the task has completed, is still running, or has failed. Pass the jobId returned from the start task action. |
| `HYPERBROWSER_GET_CRAWL_JOB_STATUS` | Get Crawl Job Status | Tool to retrieve the status and results of a specific crawl job. Use after submitting a crawl job to check its progress or fetch results. |
| `HYPERBROWSER_GET_CRAWL_STATUS` | Get Crawl Status | Tool to retrieve the current status of a specific crawl job. Use after initiating a crawl job to poll its status. |
| `HYPERBROWSER_GET_CUA_TASK_RESULT` | Get CUA Task Result | Tool to retrieve the status and results of a CUA (Claude User Agent) task. Use after starting a CUA task to check its progress, retrieve execution steps, and obtain the final result. |
| `HYPERBROWSER_GET_CUA_TASK_STATUS` | Get CUA Task Status | Poll the execution status of a CUA task. Use to check if a CUA task has completed, is still running, or has failed. |
| `HYPERBROWSER_GET_EXTRACT_JOB_RESULT` | Get Extract Job Result | Tool to fetch the status and results of a specific extract job. Use after initiating an extract job to monitor progress and retrieve final data. |
| `HYPERBROWSER_GET_EXTRACT_JOB_STATUS` | Get Extract Job Status | Retrieve the status of an extract job. Use after calling Start Extract Job to poll its progress. Poll periodically until status is 'completed' or 'failed', then use Get Extract Job Result to retrieve the extracted data. |
| `HYPERBROWSER_GET_GEMINI_COMPUTER_USE_TASK_RESULT` | Get Gemini Computer Use task result | Tool to retrieve the current status and results of a Gemini Computer Use task. Use when checking if a Gemini automation task has completed or is still pending. |
| `HYPERBROWSER_GET_HYPER_AGENT_TASK_RESULT` | Get HyperAgent Task Result | Tool to retrieve the status and results of a HyperAgent task. Use when checking if a HyperAgent task has completed or to get the final results and execution steps. |
| `HYPERBROWSER_GET_PROFILE` | Get Profile By ID | Retrieves details of a specific Hyperbrowser profile by its UUID. Returns profile metadata including name, team association, and timestamps. Use the List Profiles action first to obtain valid profile IDs. |
| `HYPERBROWSER_GET_SCRAPE_JOB_RESULT` | Get Scrape Job Result | Retrieves the status and results of a scrape job. Poll this endpoint after creating a scrape job to check progress and get the scraped content when completed. Returns jobId, status (pending/running/completed/failed), scraped data, and any error message. |
| `HYPERBROWSER_GET_SCRAPE_JOB_STATUS` | Get Scrape Job Status | Tool to retrieve the current status of a specific scrape job. Use after initiating a scrape job to poll its status; poll at moderate intervals (e.g., every 2–5 seconds) rather than tight loops to avoid exhausting quota. Responses may be large when the job includes screenshots or full HTML. |
| `HYPERBROWSER_GET_SESSION_DETAILS` | Get Session Details | Retrieve detailed information about a Hyperbrowser session by its ID. Use this tool to get connection endpoints (wsEndpoint, liveUrl), session status, and configuration details for an existing session. The session ID can be obtained from create_session or list_sessions actions. Returns active connection details (wsEndpoint, liveUrl, token) for active sessions, or historical information for closed sessions. |
| `HYPERBROWSER_GET_SESSION_DOWNLOADS_URL` | Get Session Downloads URL | Tool to retrieve the downloads URL for a session. Returns a signed URL to download files saved during a browser session. Note: The session must be created with 'saveDownloads: true' for downloads to be available. Poll this endpoint checking status until 'completed'. |
| `HYPERBROWSER_GET_SESSION_RECORDING` | Get Session Recording | Retrieve the recording URL for a browser session. Returns a pre-signed S3 URL to download the rrweb JSON recording. The recording status indicates availability: - 'pending': Recording is being prepared - 'in_progress': Recording is still being processed - 'completed': Recording is ready, recordingUrl will contain the download link - 'failed': Recording failed, check the error field - 'not_enabled': Session was created without enableWebRecording=true Poll this endpoint until status is 'completed' or 'failed' after stopping a session. |
| `HYPERBROWSER_GET_SESSION_VIDEO_RECORDING_URL` | Get Session Video Recording URL | Tool to retrieve the video recording URL for a browser session. Returns a pre-signed URL to download the video recording. The video recording status indicates availability: - 'pending': Video recording is being prepared - 'in_progress': Video recording is still being processed - 'completed': Video recording is ready, recordingUrl will contain the download link - 'failed': Video recording failed, check the error field - 'not_enabled': Session was created without enableVideoWebRecording=true Poll this endpoint until status is 'completed' or 'failed' after stopping a session. |
| `HYPERBROWSER_GET_WEB_CRAWL_RESULT` | Get Web Crawl Result | Tool to retrieve the status and results of a web crawl job. Use after submitting a web crawl job to check its progress and fetch paginated results. Supports pagination via page and batchSize parameters for large crawl jobs. |
| `HYPERBROWSER_GET_WEB_CRAWL_STATUS` | Get Web Crawl Status | Tool to retrieve just the status of a web crawl job without the full results. Use after initiating a web crawl to poll its current state. |
| `HYPERBROWSER_LIST_EXTENSIONS` | List Extensions | Tool to list all browser extensions. Use when you need to fetch all available extensions for the Hyperbrowser account. |
| `HYPERBROWSER_LIST_PROFILES` | List Profiles | Tool to list profiles. Use when you need to fetch paginated profiles and optionally filter by name. |
| `HYPERBROWSER_LIST_SESSIONS` | List Sessions | Tool to list sessions with optional status filter. Use when you need a paginated overview of browser sessions before acting on them. |
| `HYPERBROWSER_SEARCH_WEB` | Search Web | Tool to perform a web search and retrieve results with titles, URLs, and descriptions. Use when you need to search the web for information on a specific topic or query. |
| `HYPERBROWSER_START_BROWSER_USE_TASK` | Start Browser Use Task | Tool to start an asynchronous browser-use task. Use when you need to automate web interactions given a task instruction. |
| `HYPERBROWSER_START_CLAUDE_COMPUTER_USE_TASK` | Start Claude Computer Use Task | Tool to start a Claude Computer Use task. Use when you need AI-driven automated browser interactions. Call after you have your task prompt and any session preferences configured. |
| `HYPERBROWSER_START_CRAWL_JOB` | Start Crawl Job | Tool to start a new crawl job for a specified URL. Use when you need to initiate a web crawl before checking job status. |
| `HYPERBROWSER_START_CUA_TASK` | Start CUA Task | Tool to start an OpenAI CUA (Computer-Using Agent) task. Use when you need AI-driven automated browser interactions powered by OpenAI. Call after you have your task prompt and any session preferences configured. |
| `HYPERBROWSER_START_EXTRACT_JOB` | Start Extract Job | Start an AI-powered data extraction job from one or more web pages. Use this tool to scrape structured data from websites by providing URLs and either a natural language prompt describing what to extract, or a JSON schema defining the output structure (or both for best results). Returns a jobId to track extraction progress via HYPERBROWSER_GET_EXTRACT_JOB_STATUS and retrieve results via HYPERBROWSER_GET_EXTRACT_JOB_RESULT. |
| `HYPERBROWSER_START_GEMINI_COMPUTER_USE_TASK` | Start Gemini Computer Use Task | Tool to start a Gemini Computer Use task for browser automation using Google's Gemini. Use when you need AI-driven automated browser interactions with Gemini models. |
| `HYPERBROWSER_START_WEB_CRAWL` | Start Web Crawl | Tool to start an asynchronous web crawl job that follows links from a starting URL and returns content from each page. Use when you need to crawl multiple pages from a website. |
| `HYPERBROWSER_STOP_BROWSER_USE_TASK` | Stop Browser Use Task | Tool to stop a running browser-use task. Use when halting an in-progress browser automation task after confirming its task ID. |
| `HYPERBROWSER_STOP_CLAUDE_COMPUTER_USE_TASK` | Stop Claude Computer Use Task | Tool to stop a running Claude computer use task. Use when a Claude computer use task is in progress and needs to be terminated. |
| `HYPERBROWSER_STOP_CUA_TASK` | Stop CUA Task | Tool to stop a running CUA task. Use when a CUA task is in progress and needs to be terminated. |
| `HYPERBROWSER_STOP_GEMINI_COMPUTER_USE_TASK` | Stop Gemini Computer Use Task | Tool to stop a running Gemini computer use task. Use when a Gemini computer use task is in progress and needs to be terminated. |
| `HYPERBROWSER_STOP_SESSION` | Stop Session | Tool to stop a running session by ID. Use after confirming the session is active. |
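Most long-running Hyperbrowser tools in the table above share one shape: a CREATE/START call returns a `jobId`, a STATUS tool is polled until the job finishes, and a RESULT tool fetches the output. The sketch below mocks that flow in plain Python; `call_tool` is a hypothetical stand-in for the MCP tool calls your agent makes, not a real Composio or Hyperbrowser function.

```python
import time

# Hedged sketch of the create -> poll -> fetch pattern used by scrape,
# crawl, and extract jobs. 'call_tool' is a mock, NOT a real API.
_FAKE_JOB = {"polls": 0}

def call_tool(slug, **args):
    if slug == "HYPERBROWSER_CREATE_SCRAPE_JOB":
        return {"jobId": "job-123"}
    if slug == "HYPERBROWSER_GET_SCRAPE_JOB_STATUS":
        _FAKE_JOB["polls"] += 1
        done = _FAKE_JOB["polls"] >= 2
        return {"status": "completed" if done else "running"}
    if slug == "HYPERBROWSER_GET_SCRAPE_JOB_RESULT":
        return {"status": "completed", "data": {"markdown": "# Example"}}
    raise ValueError(f"unknown tool: {slug}")

def scrape(url, poll_interval=0.01):
    job = call_tool("HYPERBROWSER_CREATE_SCRAPE_JOB", url=url)
    # Poll at moderate intervals (the tool docs suggest 2-5 s in production).
    while True:
        status = call_tool("HYPERBROWSER_GET_SCRAPE_JOB_STATUS",
                           jobId=job["jobId"])
        if status["status"] in ("completed", "failed"):
            break
        time.sleep(poll_interval)
    return call_tool("HYPERBROWSER_GET_SCRAPE_JOB_RESULT", jobId=job["jobId"])

result = scrape("https://example.com")
print(result["data"]["markdown"])
```

When your agent drives these tools through the Tool Router, the model orchestrates this loop itself; the sketch just shows the sequencing you should expect in its tool calls.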

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Hyperbrowser MCP server is an implementation of the Model Context Protocol that connects your AI agent to Hyperbrowser. It provides structured and secure access so your agent can perform Hyperbrowser operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create your own developer app. The managed server helps you prototype fast and go from zero to one quickly; for a production end product, we recommend using your own credentials.

## Step-by-step Guide

### 1. Getting API Keys for OpenAI and Composio

**OpenAI API Key**
- Go to the [OpenAI dashboard](https://platform.openai.com/settings/organization/api-keys) and create an API key. You'll need credits to use the models, or you can connect to another model provider.
- Keep the API key safe.

**Composio API Key**
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Navigate to your API settings and generate a new API key.
- Store this key securely as you'll need it for authentication.

### 2. Install dependencies

Install the Composio and LangChain packages for your language of choice.
```bash
pip install composio-langchain langchain-mcp-adapters langchain python-dotenv
```

```bash
npm install @composio/langchain @langchain/core @langchain/openai @langchain/mcp-adapters dotenv
```

### 3. Set up environment variables

Create a .env file in your project root.
What's happening:
- COMPOSIO_API_KEY authenticates your requests to Composio's API
- COMPOSIO_USER_ID identifies the user for session management
- OPENAI_API_KEY enables access to OpenAI's language models
```bash
COMPOSIO_API_KEY=your_composio_api_key_here
COMPOSIO_USER_ID=your_composio_user_id_here
OPENAI_API_KEY=your_openai_api_key_here
```

### 4. Import dependencies

Import the required modules and load environment variables from your `.env` file.
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from dotenv import load_dotenv
from composio import Composio
import asyncio
import os

load_dotenv()
```

```typescript
import { Composio } from '@composio/core';
import { LangchainProvider } from '@composio/langchain';
import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { createAgent } from "langchain";
import * as readline from 'readline';
import 'dotenv/config';
```

### 5. Initialize Composio client

What's happening:
- We're loading the COMPOSIO_API_KEY from environment variables and validating it exists
- Creating a Composio instance that will manage our connection to Hyperbrowser tools
- Validating that COMPOSIO_USER_ID is also set before proceeding
```python
async def main():
    if not os.getenv("COMPOSIO_API_KEY"):
        raise ValueError("COMPOSIO_API_KEY is not set")
    if not os.getenv("COMPOSIO_USER_ID"):
        raise ValueError("COMPOSIO_USER_ID is not set")

    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
```

```typescript
const composioApiKey = process.env.COMPOSIO_API_KEY;
const userId = process.env.COMPOSIO_USER_ID;

if (!composioApiKey) throw new Error('COMPOSIO_API_KEY is not set');
if (!userId) throw new Error('COMPOSIO_USER_ID is not set');

async function main() {
    const composio = new Composio({
        apiKey: composioApiKey as string,
        provider: new LangchainProvider()
    });
```

### 6. Create a Tool Router session

What's happening:
- We're creating a Tool Router session that gives your agent access to Hyperbrowser tools
- The create method takes the user ID and specifies which toolkits should be available
- The returned session.mcp.url is the MCP server URL that your agent will use
- This approach allows the agent to dynamically load and use Hyperbrowser tools as needed
```python
# Create Tool Router session for Hyperbrowser
session = composio.create(
    user_id=os.getenv("COMPOSIO_USER_ID"),
    toolkits=['hyperbrowser']
)

url = session.mcp.url
```

```typescript
const session = await composio.create(
    userId as string,
    {
        toolkits: ['hyperbrowser']
    }
);

const url = session.mcp.url;
```

### 7. Configure the agent with the MCP URL

Point an MCP client at the Tool Router URL, fetch the Hyperbrowser tools, and create a LangChain agent that can use them.
```python
client = MultiServerMCPClient({
    "hyperbrowser-agent": {
        "transport": "streamable_http",
        "url": session.mcp.url,
        "headers": {
            "x-api-key": os.getenv("COMPOSIO_API_KEY")
        }
    }
})

tools = await client.get_tools()

agent = create_agent("gpt-5", tools)
```

```typescript
const client = new MultiServerMCPClient({
    "hyperbrowser-agent": {
        transport: "http",
        url: url,
        headers: {
            "x-api-key": process.env.COMPOSIO_API_KEY
        }
    }
});

const tools = await client.getTools();

const agent = createAgent({ model: "gpt-5", tools });
```

### 8. Set up interactive chat interface

Build a simple REPL that forwards your messages to the agent and keeps the conversation history for context.
```python
conversation_history = []

print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
print("Ask any Hyperbrowser related question or task to the agent.\n")

while True:
    user_input = input("You: ").strip()

    if user_input.lower() in ['exit', 'quit', 'bye']:
        print("\nGoodbye!")
        break

    if not user_input:
        continue

    conversation_history.append({"role": "user", "content": user_input})
    print("\nAgent is thinking...\n")

    response = await agent.ainvoke({"messages": conversation_history})
    conversation_history = response['messages']
    final_response = response['messages'][-1].content
    print(f"Agent: {final_response}\n")
```

```typescript
let conversationHistory: any[] = [];

console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
console.log("Ask any Hyperbrowser related question or task to the agent.\n");

const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
    prompt: 'You: '
});

rl.prompt();

rl.on('line', async (userInput: string) => {
    const trimmedInput = userInput.trim();

    if (['exit', 'quit', 'bye'].includes(trimmedInput.toLowerCase())) {
        console.log("\nGoodbye!");
        rl.close();
        process.exit(0);
    }

    if (!trimmedInput) {
        rl.prompt();
        return;
    }

    conversationHistory.push({ role: "user", content: trimmedInput });
    console.log("\nAgent is thinking...\n");

    const response = await agent.invoke({ messages: conversationHistory });
    conversationHistory = response.messages;

    const finalResponse = response.messages[response.messages.length - 1]?.content;
    console.log(`Agent: ${finalResponse}\n`);
        
        rl.prompt();
    });

    rl.on('close', () => {
        console.log('\n👋 Session ended.');
        process.exit(0);
    });
```

### 9. Run the application

Wire everything together and start the chat loop.
```python
if __name__ == "__main__":
    asyncio.run(main())
```

```typescript
main().catch((err) => {
    console.error('Fatal error:', err);
    process.exit(1);
});
```

## Complete Code

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent
from dotenv import load_dotenv
from composio import Composio
import asyncio
import os

load_dotenv()

async def main():
    if not os.getenv("COMPOSIO_API_KEY"):
        raise ValueError("COMPOSIO_API_KEY is not set")
    if not os.getenv("COMPOSIO_USER_ID"):
        raise ValueError("COMPOSIO_USER_ID is not set")

    composio = Composio(api_key=os.getenv("COMPOSIO_API_KEY"))
    
    session = composio.create(
        user_id=os.getenv("COMPOSIO_USER_ID"),
        toolkits=['hyperbrowser']
    )

    url = session.mcp.url
    
    client = MultiServerMCPClient({
        "hyperbrowser-agent": {
            "transport": "streamable_http",
            "url": url,
            "headers": {
                "x-api-key": os.getenv("COMPOSIO_API_KEY")
            }
        }
    })
    
    tools = await client.get_tools()
  
    agent = create_agent("gpt-5", tools)
    
    conversation_history = []
    
    print("Chat started! Type 'exit' or 'quit' to end the conversation.\n")
    print("Ask any Hyperbrowser related question or task to the agent.\n")
    
    while True:
        user_input = input("You: ").strip()
        
        if user_input.lower() in ['exit', 'quit', 'bye']:
            print("\nGoodbye!")
            break
        
        if not user_input:
            continue
        
        conversation_history.append({"role": "user", "content": user_input})
        print("\nAgent is thinking...\n")
        
        response = await agent.ainvoke({"messages": conversation_history})
        conversation_history = response['messages']
        final_response = response['messages'][-1].content
        print(f"Agent: {final_response}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

```typescript
import { Composio } from '@composio/core';
import { LangchainProvider } from '@composio/langchain';
import { MultiServerMCPClient } from "@langchain/mcp-adapters";  
import { createAgent } from "langchain";
import * as readline from 'readline';
import 'dotenv/config';

const composioApiKey = process.env.COMPOSIO_API_KEY;
const userId = process.env.COMPOSIO_USER_ID;

if (!composioApiKey) throw new Error('COMPOSIO_API_KEY is not set');
if (!userId) throw new Error('COMPOSIO_USER_ID is not set');

async function main() {
    const composio = new Composio({
        apiKey: composioApiKey as string,
        provider: new LangchainProvider()
    });

    const session = await composio.create(
        userId as string,
        {
            toolkits: ['hyperbrowser']
        }
    );

    const url = session.mcp.url;
    
    const client = new MultiServerMCPClient({
        "hyperbrowser-agent": {
            transport: "http",
            url: url,
            headers: {
                "x-api-key": process.env.COMPOSIO_API_KEY
            }
        }
    });
    
    const tools = await client.getTools();
  
    const agent = createAgent({ model: "gpt-5", tools });
    
    let conversationHistory: any[] = [];
    
    console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
    console.log("Ask any Hyperbrowser related question or task to the agent.\n");
    
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
        prompt: 'You: '
    });

    rl.prompt();

    rl.on('line', async (userInput: string) => {
        const trimmedInput = userInput.trim();
        
        if (['exit', 'quit', 'bye'].includes(trimmedInput.toLowerCase())) {
            console.log("\nGoodbye!");
            rl.close();
            process.exit(0);
        }
        
        if (!trimmedInput) {
            rl.prompt();
            return;
        }
        
        conversationHistory.push({ role: "user", content: trimmedInput });
        console.log("\nAgent is thinking...\n");
        
        const response = await agent.invoke({ messages: conversationHistory });
        conversationHistory = response.messages;
        
        const finalResponse = response.messages[response.messages.length - 1]?.content;
        console.log(`Agent: ${finalResponse}\n`);
        
        rl.prompt();
    });

    rl.on('close', () => {
        console.log('\nSession ended.');
        process.exit(0);
    });
}

main().catch((err) => {
    console.error('Fatal error:', err);
    process.exit(1);
});
```

## Conclusion

You've successfully built a LangChain agent that can interact with Hyperbrowser through Composio's Tool Router.
Key features of this implementation:
- Dynamic tool loading through Composio's Tool Router
- Conversation history maintenance for context-aware responses
- Async execution for clean, efficient agent workflows in both Python and TypeScript
You can extend this further by adding error handling, implementing specific business logic, or integrating additional Composio toolkits to create multi-app workflows.

## How to build Hyperbrowser MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/hyperbrowser/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/hyperbrowser/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/hyperbrowser/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/hyperbrowser/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/hyperbrowser/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/hyperbrowser/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/hyperbrowser/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/hyperbrowser/framework/cli)
- [Google ADK](https://composio.dev/toolkits/hyperbrowser/framework/google-adk)
- [Vercel AI SDK](https://composio.dev/toolkits/hyperbrowser/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/hyperbrowser/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/hyperbrowser/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/hyperbrowser/framework/crew-ai)

## Related Toolkits

- [Apilio](https://composio.dev/toolkits/apilio) - Apilio is a home automation platform that lets you connect and control smart devices from different brands. It helps you build flexible automations with complex conditions, schedules, and integrations.
- [Basin](https://composio.dev/toolkits/basin) - Basin is a no-code form backend for quickly setting up reliable contact forms. It lets you collect and manage form submissions without writing any server-side code.
- [Bouncer](https://composio.dev/toolkits/bouncer) - Bouncer is an email validation platform that verifies the authenticity of email addresses in real-time and batch. It helps boost deliverability and reduce bounce rates for your communications.
- [Conveyor](https://composio.dev/toolkits/conveyor) - Conveyor is a platform that automates security reviews with a Trust Center and AI-driven questionnaire automation. It streamlines compliance and vendor security processes for faster, hassle-free reviews.
- [Crowdin](https://composio.dev/toolkits/crowdin) - Crowdin is a localization management platform that streamlines translation workflows and collaboration. It helps teams centralize multilingual content, boost productivity, and automate translation processes.
- [Databox](https://composio.dev/toolkits/databox) - Databox is a business analytics platform that connects your data from any tool and device. It helps you track KPIs, build dashboards, and discover actionable insights.
- [Detrack](https://composio.dev/toolkits/detrack) - Detrack is a delivery management platform for real-time tracking and proof of delivery. It helps businesses automate notifications and keep customers updated every step of the way.
- [Dnsfilter](https://composio.dev/toolkits/dnsfilter) - Dnsfilter is a cloud-based DNS security and content filtering solution. It helps organizations block online threats and manage safe internet access with ease.
- [Faraday](https://composio.dev/toolkits/faraday) - Faraday lets you embed AI in workflows across your stack for smarter automation. It boosts your favorite tools with actionable intelligence and seamless integration.
- [Feathery](https://composio.dev/toolkits/feathery) - Feathery is an AI-powered platform for building dynamic data intake forms with advanced logic. It helps teams automate complex workflows and collect structured data with ease.
- [Fillout forms](https://composio.dev/toolkits/fillout_forms) - Fillout forms is an online platform for building and managing forms with a flexible API. It lets you create, distribute, and collect responses from forms with ease.
- [Formdesk](https://composio.dev/toolkits/formdesk) - Formdesk is an online form builder for creating and managing professional forms. It's perfect for collecting data, automating workflows, and integrating form submissions with your favorite services.
- [Formsite](https://composio.dev/toolkits/formsite) - Formsite lets you build online forms and surveys with drag-and-drop simplicity. Capture, manage, and integrate form responses securely for streamlined workflows.
- [Graphhopper](https://composio.dev/toolkits/graphhopper) - GraphHopper is an enterprise-grade Directions API for routing, optimization, and geocoding across multiple vehicle types. It enables fast, reliable route planning and logistics automation for businesses.
- [La Growth Machine](https://composio.dev/toolkits/lagrowthmachine) - La Growth Machine automates multi-channel sales outreach and routine tasks for sales teams. Streamline your workflow and focus on closing more deals.
- [Leverly](https://composio.dev/toolkits/leverly) - Leverly is a workflow automation platform that connects and coordinates actions across your apps. It streamlines repetitive processes so your business runs smoother, faster, and with fewer manual steps.
- [Maintainx](https://composio.dev/toolkits/maintainx) - Maintainx is a cloud-based CMMS for centralizing maintenance data, communication, and workflows. It helps organizations streamline maintenance operations and improve team coordination.
- [Make](https://composio.dev/toolkits/make) - Make is an automation platform that connects your favorite apps and services. Build powerful, custom workflows without writing code.
- [Ntfy](https://composio.dev/toolkits/ntfy) - Ntfy is a notification service to send push messages to phones or desktops. Instantly deliver alerts and updates to users, devices, or teams.
- [Persona](https://composio.dev/toolkits/persona) - Persona offers identity infrastructure to automate user verification and compliance. It helps organizations securely verify users and reduce fraud risk.

## Frequently Asked Questions

### What are the differences between the Tool Router MCP and a standalone Hyperbrowser MCP?

With a standalone Hyperbrowser MCP server, the agents and LLMs can only access a fixed set of Hyperbrowser tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Hyperbrowser and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with LangChain?

Yes, you can. LangChain fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Hyperbrowser tools.

### Can I manage the permissions and scopes for Hyperbrowser while using Tool Router?

Yes, absolutely. You can configure which Hyperbrowser scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Hyperbrowser data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
