# How to connect Composio MCP with VS Code

```json
{
  "title": "How to connect Composio MCP with VS Code",
  "toolkit": "Composio",
  "toolkit_slug": "composio",
  "framework": "VS Code",
  "framework_slug": "vscode",
  "url": "https://composio.dev/toolkits/composio/framework/vscode",
  "markdown_url": "https://composio.dev/toolkits/composio/framework/vscode.md",
  "updated_at": "2026-05-06T08:06:59.702Z"
}
```

## Introduction

### How to connect Composio MCP with VS Code
VS Code is the most popular code editor out there. With its recent AI makeover, it can do more than just help you write code. You can connect your applications to it and let LLMs automate many of the mundane tasks in your workflow.
In this guide, I will explain how to connect Composio with VS Code in the most secure and robust way possible.

## Also integrate Composio with

- [ChatGPT](https://composio.dev/toolkits/composio/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/composio/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/composio/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/composio/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/composio/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/composio/framework/codex)
- [Cursor](https://composio.dev/toolkits/composio/framework/cursor)
- [OpenCode](https://composio.dev/toolkits/composio/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/composio/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/composio/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/composio/framework/cli)
- [Google ADK](https://composio.dev/toolkits/composio/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/composio/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/composio/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/composio/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/composio/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/composio/framework/crew-ai)

## TL;DR

### Why use Composio?
Composio provides:
- Access to 1,000+ managed apps from a single MCP endpoint. This makes it convenient for agents to run cross-app workflows.
- Programmatic tool calling. LLMs can write code in a remote workbench to handle complex tool chaining, reducing back-and-forth with the model when many tool calls are needed.
- Large tool response handling outside the LLM context. This minimizes context bloat from large tool responses.
- Dynamic just-in-time access to thousands of tools across hundreds of apps. Composio loads the tools your agent needs, so LLMs are not overwhelmed by tools they do not need.

## Connect Composio to VS Code

### Integrate Composio MCP with VS Code
### 1. Install with one click
Click the button below to add Composio to VS Code. You will be prompted to authorize. This requires VS Code 1.99+ with GitHub Copilot.
[+Install in VS Code](vscode:mcp/install?%7B%22name%22%3A%22composio%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fconnect.composio.dev%2Fmcp%22%7D)
### 2. Or add manually
Open or create `.vscode/mcp.json` in your project root and add the following configuration:

```json
{
  "servers": {
    "composio": {
      "type": "http",
      "url": "https://connect.composio.dev/mcp"
    }
  }
}
```
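If you want to script this setup (for example, across many repositories), the same configuration can be written programmatically. This is a minimal sketch, not an official Composio or VS Code utility; it merges the `composio` server entry into any existing `.vscode/mcp.json` rather than overwriting the file.

```python
import json
from pathlib import Path

# Write the Composio MCP server entry into .vscode/mcp.json,
# preserving any servers that are already configured there.
config_path = Path(".vscode/mcp.json")
config_path.parent.mkdir(exist_ok=True)

config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("servers", {})["composio"] = {
    "type": "http",
    "url": "https://connect.composio.dev/mcp",
}
config_path.write_text(json.dumps(config, indent=2) + "\n")
print(config_path.read_text())
```

Run this from the project root; VS Code picks up the file the next time the MCP servers are loaded.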

## What is the Composio MCP server, and what's possible with it?

The Composio MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants such as Claude and Cursor directly to your Composio account. It provides structured, secure access to your connected tools, so your agent can plan workflows, orchestrate complex actions, manage integrations, and execute cross-tool automations on your behalf.
- Automated workflow planning and execution: The agent can generate and run step-by-step plans for complex, multi-tool use cases—ensuring tasks are completed reliably, even when they span multiple services.
- Connection management and discovery: Effortlessly check the status of multiple toolkit connections, discover what integrations are active, and manage how your agent connects to different services.
- Tool and dependency exploration: Ask your agent to map out tool dependencies, discover related tools, and understand which tools work best together for your workflow.
- Direct code and command execution: Let the agent run code snippets or shell commands in supported environments, tying together automation across your stack.
- Bulk and parallel operations: Use specialized tools for parallel execution or to handle many similar tasks at once—speeding up large automations by making multiple calls in a single workflow.
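Under the hood, an MCP client such as VS Code talks to the server by exchanging JSON-RPC 2.0 messages. As an illustration only, the sketch below builds the `tools/list` request a client would send to discover the tools above; a real session also requires the `initialize` handshake and authentication, which this sketch omits.

```python
import json

def make_tools_list_request(request_id: int) -> str:
    # JSON-RPC 2.0 envelope for MCP's tool-discovery method.
    # This only constructs the message; it does not send it.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

print(make_tools_list_request(1))
```

The server's response to such a request is what populates the tool list your agent sees, including the `COMPOSIO_*` tools documented below.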

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `COMPOSIO_ASK_ORACLE` | Ask Oracle | Static helper that returns a comprehensive system prompt describing how to plan and execute tasks using the available composio tools and workflows. no inputs required; simply call to retrieve the prompt. always call this after the search tool is executed -- it will provide the best context on how to proceed, chain tools, and execute the task end-to-end. relevant tools referenced by the prompt: 1. composio search tools 2. composio multi execute tool 3. composio execute code 4. composio manage connections 5. composio bash tool |
| `COMPOSIO_CHECK_ACTIVE_CONNECTION` | Check active connection (deprecated) | Deprecated: use check active connections instead for bulk operations. check active connection status for a toolkit or specific connected account id. returns connection details if active, or required parameters for establishing connection if none exists. active connections enable agent actions on the toolkit. |
| `COMPOSIO_CHECK_ACTIVE_CONNECTIONS` | Check multiple active connections | Check active connection status for multiple toolkits or specific connected account ids. returns connection details if active, or required parameters for establishing connection if none exists. active connections enable agent actions on toolkits. |
| `COMPOSIO_CREATE_PLAN` | Create Plan | This is a workflow builder that ensures the LLM produces a complete, step-by-step plan for any use case. WHEN TO CALL: - Call this tool based on COMPOSIO_SEARCH_TOOLS output. If search tools response indicates create_plan should be called and the usecase is not easy, call it. - Use this tool after COMPOSIO_SEARCH_TOOLS or COMPOSIO_MANAGE_CONNECTIONS to generate an execution plan for the user's use case. - USE for medium or hard tasks — skip it for easy ones. - If the user switches to a new use case in the same chat and COMPOSIO_SEARCH_TOOLS again instructs you to call the planner, you MUST call this tool again for that new use case. Memory Integration: - You can choose to add the memory received from the search tool into the known_fields parameter of the plan function to enhance planning with discovered relationships and information. Outputs a complete plan with sections such as "workflow_steps", "complexity_assessment", "decision_matrix", "failure_handling" "output_format", and more as needed. If you skip this step for non-easy tasks, workflows will likely be incomplete, or fail during execution for complex tasks. Calling it guarantees reliable, accurate, and end-to-end workflows aligned with the available tools and connections. |
| `COMPOSIO_DOWNLOAD_S3_FILE` | Download S3 File | Download a file from a public s3 (or r2) url to a local path. |
| `COMPOSIO_ENABLE_TRIGGER` | Enable trigger | Enable a specific trigger for the authenticated user. |
| `COMPOSIO_EXECUTE_AGENT` | Execute agent | Execute complex workflows using ai agent reasoning between multiple tool calls. use this for: complex multi-step workflows requiring reasoning, error handling and retry logic. use composio multi execute tool instead for: simple parallel operations, batch operations on similar data (e.g., fetch 5 emails, get 10 sent messages, retrieve multiple user profiles), independent tool calls that don't need each other's results, or bulk operations where all parameters are known upfront. performance: agent calls add ~2-3 seconds overhead. use for workflows >3 tools or requiring conditional logic. avoid for simple parameter passing. |
| `COMPOSIO_EXECUTE_TOOL` | Execute Composio Tool | Execute a tool using the composio api. |
| `COMPOSIO_GET_DEPENDENCY_GRAPH` | Get Tool Dependency Graph | Get the dependency graph for a given tool, showing related parent tools that might be useful. this action calls the composio labs dependency graph api to retrieve tools that are commonly used together with or before the specified tool. this helps discover related tools and understand common workflows. |
| `COMPOSIO_GET_REQUIRED_PARAMETERS` | Get required parameters for connection | Gets the required parameters for connecting to a toolkit via initiate connection. returns the exact parameter names and types needed for initiate connection's parameters field. supports api keys, oauth credentials, connection fields, and hybrid authentication scenarios. if has default credentials is true, you can call initiate connection with empty parameters for seamless oauth flow. |
| `COMPOSIO_GET_RESPONSE_SCHEMA` | Get response schema | Retrieves the response schema for a specified composio tool. this action fetches the complete response schema definition for any valid composio tool, returning it as a dictionary that describes the expected response structure. |
| `COMPOSIO_INITIATE_CONNECTION` | Initiate connection | Initiate a connection to a toolkit with comprehensive authentication support. supports all authentication scenarios: 1. composio default oauth (no parameters needed) 2. custom oauth (user's client id/client secret) 3. api key/bearer token authentication 4. basic auth (username/password) 5. hybrid scenarios (oauth + connection fields like site name) 6. connection-only fields (subdomain, api key at connection level) 7. no authentication required automatically detects and validates auth config vs connection fields, provides helpful error messages for missing parameters. |
| `COMPOSIO_LIST_TOOLKITS` | List toolkits | List all the available toolkits on composio with filtering options. |
| `COMPOSIO_LIST_TRIGGERS` | List triggers | List available triggers and their configuration schemas. |
| `COMPOSIO_MANAGE_CONNECTION` | Manage connection | Manage a connection to a toolkit with comprehensive authentication support. supports all authentication scenarios: 1. composio default oauth (no parameters needed) 2. custom oauth (user's client id/client secret) 3. api key/bearer token authentication 4. basic auth (username/password) 5. hybrid scenarios (oauth + connection fields like site name) 6. connection-only fields (subdomain, api key at connection level) 7. no authentication required automatically detects and validates auth config vs connection fields, provides helpful error messages for missing parameters. notifies properly if a connection already exists or if authentication params are required from the user. |
| `COMPOSIO_MANAGE_CONNECTIONS` | Manage connections | Create or manage connections to user's apps. Returns a branded authentication link that works for OAuth, API keys, and all other auth types. Call policy: - First call COMPOSIO_SEARCH_TOOLS for the user's query. - If COMPOSIO_SEARCH_TOOLS indicates there is no active connection for a toolkit, call COMPOSIO_MANAGE_CONNECTIONS with the exact toolkit name(s) returned. - Use exact toolkit slugs returned by COMPOSIO_SEARCH_TOOLS; never invent toolkit names. - NEVER execute any toolkit tool without an ACTIVE connection. Tool Behavior: - If a connection is Active, the tool returns the connection details. Always use this to verify connection status and fetch metadata. - If a connection is not Active, returns a authentication link (redirect_url) to create new connection. - If reinitiate_all is true, the tool forces reconnections for all toolkits, even if they already have active connections. Workflow after initiating connection: - Always show the returned redirect_url as a FORMATTED MARKDOWN LINK to the user, and ask them to click on the link to finish authentication. - Begin executing tools only after the connection for that toolkit is confirmed Active. |
| `COMPOSIO_MULTI_EXECUTE_TOOL` | Multi Execute Composio Tools | Fast and parallel tool executor for tools discovered through COMPOSIO_SEARCH_TOOLS. Use this tool to execute up to 50 tools in parallel across apps only when they're logically independent (no ordering/output dependencies). Response contains structured outputs ready for immediate analysis - avoid reprocessing them via remote bash/workbench tools. Prerequisites: - Always use valid tool slugs and their arguments. NEVER invent tool slugs or argument fields. ALWAYS pass STRICTLY schema-compliant arguments with each tool execution. - Ensure an ACTIVE connection exists for the toolkits that are going to be executed. If none exists, MUST initiate one via COMPOSIO_MANAGE_CONNECTIONS before execution. - Only batch tools that are logically independent - no ordering, no output-to-input dependencies, and no intra-call chaining (tools in one call can't use each other's outputs). DO NOT pass dummy or placeholder inputs; always resolve required inputs using appropriate tools first. Usage guidelines: - If COMPOSIO_SEARCH_TOOLS returns a tool that can perform the task, prefer calling it via this executor. Do not write custom API calls or ad-hoc scripts for tasks that can be completed by available Composio tools. - Prefer parallel execution: group independent tools into a single multi-execute call where possible. - Predictively set sync_response_to_workbench=true if the response may be large or needed for later scripting. It still shows response inline; if the actual response data turns out small and easy to handle, keep everything inline and SKIP workbench usage. - Responses contain structured outputs for each tool. RULE: Small data - process yourself inline; large data - process in the workbench. - ALWAYS include inline references/links to sources in MARKDOWN format directly next to the relevant text. Eg provide slack thread links alongside with summary, render document links instead of raw IDs. Restrictions: Some tools or toolkits may be disabled in this environment. If the response indicates a restriction, inform the user and STOP execution immediately. Do NOT attempt workarounds or speculative actions. - CRITICAL: You MUST always include the 'memory' parameter - never omit it. Even if you think there's nothing to remember, include an empty object {} for memory. Memory Storage: - CRITICAL FORMAT: Memory must be a dictionary where keys are app names (strings) and values are arrays of strings. NEVER pass nested objects or dictionaries as values. - CORRECT format: {"slack": ["Channel general has ID C1234567"], "gmail": ["John's email is john@example.com"]} - Write memory entries in natural, descriptive language - NOT as key-value pairs. Use full sentences that clearly describe the relationship or information. - ONLY store information that will be valuable for future tool executions - focus on persistent data that saves API calls. - STORE: ID mappings, entity relationships, configs, stable identifiers. - DO NOT STORE: Action descriptions, temporary status updates, logs, or "sent/fetched" confirmations. - Examples of GOOD memory (store these): * "The important channel in Slack has ID C1234567 and is called #general" * "The team's main repository is owned by user 'teamlead' with ID 98765" * "The user prefers markdown docs with professional writing, no emojis" (user_preference) - Examples of BAD memory (DON'T store these): * "Successfully sent email to john@example.com with message hi" * "Fetching emails from last day (Sep 6, 2025) for analysis" - Do not repeat the memories stored or found previously. |
| `COMPOSIO_REMOTE_BASH_TOOL` | Run bash commands | Execute bash commands in a REMOTE sandbox for file operations, data processing, and system tasks. Essential for handling large tool responses saved to remote files. **Hard 3-minute (180s) execution limit** — break large tasks into smaller commands. PRIMARY USE CASES: - Process large tool responses saved by COMPOSIO_MULTI_EXECUTE_TOOL to remote sandbox - File system operations, extract specific information from JSON with shell tools like jq, awk, sed, grep, etc. - Commands run from /home/user directory by default |
| `COMPOSIO_REMOTE_WORKBENCH` | Execute Code remotely in work bench | Process **REMOTE FILES** or script BULK TOOL EXECUTIONS using Python code IN A REMOTE SANDBOX. If you can see the data in chat, DON'T USE THIS TOOL. **ONLY** use this when processing **data stored in a remote file** or when scripting bulk tool executions. DO NOT USE - When the complete response is already inline/in-memory, or you only need quick parsing, summarization, or basic math. USE IF - To parse/analyze tool outputs saved by COMPOSIO_MULTI_EXECUTE_TOOL to a remote file in the sandbox or to script multi-tool chains there. - For bulk or repeated executions of known Composio tools (e.g., add a label to 100 emails). - To call APIs via proxy_execute when no Composio tool exists for that API. OUTPUTS - Returns a compact result or, if too long, artifacts under `/mnt/files/.composio/output` (cloud-backed FUSE mount, persisted across sandbox restarts). IMPORTANT CODING RULES: 1. Stepwise Execution: Split work into small steps. Save intermediate outputs to `/mnt/files/` (cloud-backed, persisted across failures/timeouts) or variables. Call COMPOSIO_REMOTE_WORKBENCH again for the next step. 2. Notebook Persistence: This is a persistent Jupyter notebook cell: variables, functions, imports, and in-memory state persist across executions. Helper functions are preloaded. 3. Top-level cells: Do not use `return`; Jupyter only allows it inside functions. For final values, end with `output` or `print(output)`, not `return output`. 4. Parallelism & Timeout (CRITICAL): There is a **hard 3-minute (180s) execution limit** per cell. Always prioritize PARALLEL execution using ThreadPoolExecutor for bulk operations - e.g., call run_composio_tool or invoke_llm across rows. If data is large, split it into smaller batches across cells. 5. Checkpoints: Save checkpoints to `/mnt/files/` so that long runs can be resumed from the last completed step, even after a timeout or sandbox restart. 6. Schema Safety: Never assume the response schema for run_composio_tool if not known already from previous tools. To inspect schema, either run a simple request **outside** the workbench or use invoke_llm helper. 7. LLM Helpers: Always use invoke_llm helper for summary, analysis, or field extraction on results; prefer it for much better results over ad hoc filtering. 8. Avoid Meta Loops: Do not use run_composio_tool to call COMPOSIO_* meta tools. Only use it for app tools. 9. Pagination: Use when data spans multiple pages. Continue fetching pages with the returned next_page_token or cursor until none remains. Parallelize page fetches when the tool supports page_number. 10. No Hardcoding: Never hardcode data. Load it from files or tool responses, iterating to construct intermediate or final inputs/outputs. 11. If the final output is in a workbench file, use upload_local_file to download it - never expose the raw workbench file path to the user. Prefer to download useful artifacts after task is complete. ENV & HELPERS: - Home directory: `/home/user`. - NOTE: Helper functions already initialized in the workbench - DO NOT import or redeclare them: - `run_composio_tool(tool_slug: str, arguments: dict) -> tuple[Dict[str, Any], str]`: Execute a known Composio **app** tool. Do not invent names; match the tool input schema. Use for loops/parallel/bulk calls. i) run_composio_tool returns JSON with top-level "data". Parse carefully—structure may be nested. - `invoke_llm(query: str) -> tuple[str, str]`: Invoke an LLM for semantic tasks. Pass MAX 200k characters. i) NOTE Prompting guidance: When building prompts for invoke_llm, prefer f-strings (or concatenation) so literal braces stay intact. If using str.format, escape braces by doubling them ({{ }}). ii) Define the exact JSON schema you want and batch items into smaller groups to stay within token limit. - `upload_local_file(*file_paths) -> tuple[Dict[str, Any], str]`: Upload sandbox files to Composio S3/R2 storage for user-downloadable artifacts. - `proxy_execute(method, endpoint, toolkit, query_params=None, body=None, headers=None) -> tuple[Any, str]`: Call a toolkit API directly when no Composio tool exists. Only one toolkit can be invoked with proxy_execute per workbench call - `web_search(query: str) -> tuple[str, str]`: Search the web for information. - `smart_file_extract(sandbox_file_path: str, show_preview: bool = True) -> tuple[str, str]`: Extracts text from files in the sandbox (e.g., PDF, image). - Workbench comes with comprehensive Image Processing (PIL/Pillow, OpenCV, scikit-image), PyTorch ML libraries, Document and Report handling tools (pandoc, python-docx, pdfplumber, reportlab), and standard Data Analysis tools (pandas, numpy, matplotlib) for advanced visual, analytical, and AI tasks. All helper functions return a tuple (result, error). Always check error before using result. ## Python Helper Functions for LLM Scripting ### run_composio_tool Executes a known Composio tool via backend API. Do NOT call COMPOSIO_* meta tools to avoid cycles. def run_composio_tool(tool_slug: str, arguments: Dict[str, Any]) -> tuple[Dict[str, Any], str] # Returns: (tool_response_dict, error_message) # Success: ({"data": {actual_data}}, "") - Note the top-level data # Error: ({}, "error_message") or (response_data, "error_message") result, error = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 1, "user_id": "me"}) if error: print("GMAIL_FETCH_EMAILS error:", error) else: email_data = result.get("data", {}) print("Fetched:", email_data) ### invoke_llm Calls LLM for reasoning, analysis, and semantic tasks. Pass MAX 200k characters. def invoke_llm(query: str) -> tuple[str, str] # Returns: (llm_response, error_message) # Example: analyze tool response with LLM tool_resp, err = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 5, "user_id": "me"}) if not err: parsed = tool_resp.get("data", {}) resp, err2 = invoke_llm(f"Summarize these emails: {parsed}") if not err2: print(resp) # TIP: batch prompts to reduce LLM calls. ### upload_local_file Uploads sandbox files to Composio S3/R2 storage for upload/download requests involving generated sandbox artifacts. Single files upload directly; multiple files are auto-zipped. def upload_local_file(*file_paths) -> tuple[Dict[str, Any], str] # Returns: (result_dict, error_string) # Success: ({"s3_url": str, "uploaded_file": str, "type": str, "id": str, "s3key": str, "message": str}, "") # Error: ({}, "error_message") # Single file result, error = upload_local_file("/path/to/report.pdf") # Multiple files are auto-zipped result, error = upload_local_file("/home/user/doc1.txt", "/home/user/doc2.txt") if not error: print("Uploaded:", result["s3_url"]) ### proxy_execute Direct API call to a connected toolkit service. def proxy_execute( method: Literal["GET","POST","PUT","DELETE","PATCH"], endpoint: str, toolkit: str, query_params: Optional[Dict[str, str]] = None, body: Optional[object] = None, headers: Optional[Dict[str, str]] = None, ) -> tuple[Any, str] # Returns: (response_data, error_message) # Example: GET request with query parameters query_params = {"q": "is:unread", "maxResults": "10"} data, error = proxy_execute("GET", "/gmail/v1/users/me/messages", "gmail", query_params=query_params) if not error: print("Success:", data) ### web_search Searches the web via Exa AI. def web_search(query: str) -> tuple[str, str] # Returns: (search_results_text, error_message) results, error = web_search("latest developments in AI") if not error: print("Results:", results) ## Best Practices ### Error-first pattern and Defensive parsing (print keys while narrowing) res, err = run_composio_tool("GMAIL_FETCH_EMAILS", {"max_results": 5}) if err: print("error:", err) elif isinstance(res, dict): print("res keys:", list(res.keys())) data = res.get("data") or {} print("data keys:", list(data.keys())) msgs = data.get("messages") or [] print("messages count:", len(msgs)) for m in msgs: print("subject:", m.get("subject", "")) ### Parallelize within the 3-minute cell timeout Adjust concurrency so all tasks finish within 3 minutes. import concurrent.futures MAX_CONCURRENCY = 10 # Adjust as needed def process_one(item): result, error = run_composio_tool("GMAIL_SEND_EMAIL", item) if error: return {"status": "failed", "error": error} return {"status": "ok", "data": result} with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as ex: results = list(ex.map(process_one, items)) |
| `COMPOSIO_RETRIEVE_TOOLKITS` | Retrieve Toolkits | Toolkits are like github, linear, gmail, etc. tools are like send email, create issue, etc programmatic functions that can be used to perform the action. not all toolkits support all tools. some toolkits support only a subset of tools that might be possible to perform with that toolkit. use this action to retrieve the toolkits that can be used to perform the action. this list is only probabilistic. retrieve toolkits for a specified usecase. so for example, if use case is to "send an email" this action will return all the toolkits that can be used to send email. simiarly if use case is to "create a github issue" this action will return all the toolkits that can be used to create a github issue. after using this, to confirm whether the toolkit can indeed potentially support the use case, use the action retrieve actions. |
| `COMPOSIO_SEARCH_AGENT` | Search agent | Discover tools and analyze dependencies for complex workflows using ai agent. this action uses an ai agent to intelligently search for tools across toolkits and create optimized execution sequences with detailed instructions. |
| `COMPOSIO_SEARCH_TOOLS` | Search Composio Tools | Tool Server Info: Composio connects 500+ apps—Slack, GitHub, Notion, Google Workspace (Gmail, Sheets, Drive, Calendar), Microsoft (Outlook, Teams), X/Twitter, Figma, Web Search / Deep research, Browser tool (scrape URLs, browser automation), Meta apps (Instagram, Meta Ads), TikTok, AI tools like Nano Banana & Veo3, and more—for seamless cross-app automation. Use this tool to discover relevant tools plus the recommended plan and common pitfalls for reliable execution. Always call this tool first whenever a user mentions or implies an external app, service, or workflow—never say "I don't have access to X/Y app" before calling it. Usage guidelines: - Use this tool whenever kicking off a task. Re-run it when you need additional tools/plans due to missing details, errors, or a changed use case. - If the user pivots to a different use case in same chat, you MUST call this tool again with the new use case and generate a new session_id. - Specify the use_case with a normalized description of the problem, query, or task. Be clear and precise. Queries can be simple single-app actions or multiple linked queries for complex cross-app workflows. - Pass known_fields along with use_case as a string of key–value hints (for example, "channel_name: general") to help the search resolve missing details such as IDs. Splitting guidelines (Important): 1. Atomic queries: 1 query = 1 tool call. Include hidden prerequisites (e.g., add "get Linear issue" before "update Linear issue"). 2. Include app names: If user names a toolkit, include it in every sub query so intent stays scoped (e.g., "fetch Gmail emails", "reply to Gmail email"). 3. English input: Translate non-English prompts while preserving intent and identifiers. Example: User query: "send an email to John welcoming him and create a meeting invite for tomorrow" Search call: queries: [ {use_case: "send an email to someone", known_fields: "recipient_name: John"}, {use_case: "create a meeting invite", known_fields: "meeting_date: tomorrow"} ] Plan review checklist (Important): - The response includes a detailed execution plan and common pitfalls. You MUST review this plan carefully, adapt it to your current context, and generate your own final step-by-step plan before execution. Execute the steps in order to ensure reliable and accurate execution. Skipping or ignoring required steps can lead to unexpected failures. - Check the plan and pitfalls for input parameter nuances (required fields, IDs, formats, limits). Before executing any tool, you MUST review its COMPLETE input schema and provide STRICTLY schema-compliant arguments to avoid invalid-input errors. - Determine whether pagination is needed; if a response returns a pagination token and completeness is implied, paginate until exhaustion and do not return partial results. Response: - Tools & Input Schemas: The response lists toolkits (apps) and tools suitable for the task, along with their tool_slug, description, input schema / schemaRef, and related tools for prerequisites, alternatives, or next steps. - NOTE: Tools with schemaRef instead of input_schema require you to call COMPOSIO_GET_TOOL_SCHEMAS first to load their full input_schema before use. - Connection Info: If a toolkit has an active connection, the response includes it along with any available current user information. If no active connection exists, you MUST initiate a new connection via COMPOSIO_MANAGE_CONNECTIONS with the correct toolkit name. DO NOT execute any toolkit tool without an ACTIVE connection. - Time Info: The response includes the current UTC time for reference. You can reference UTC time from the response if needed. - The tools returned to you through this are to be called via COMPOSIO_MULTI_EXECUTE_TOOL. Ensure each tool execution specifies the correct tool_slug and arguments exactly as defined by the tool's input schema. - The response includes a memory parameter containing relevant information about the use case and the known fields that can be used to determine the flow of execution. Any user preferences in memory must be adhered to. SESSION: ALWAYS set this parameter, first for any workflow. Pass session: {generate_id: true} for new workflows OR session: {id: "EXISTING_ID"} to continue. ALWAYS use the returned session_id in ALL subsequent meta tool calls. |
| `COMPOSIO_WAIT_FOR_CONNECTION` | Wait for connection | Wait for connections to be established for given toolkits. |
| `COMPOSIO_CREATE_UPDATE_RECIPE` | Create / Update Recipe from Workflow | Convert executed workflow into a reusable notebook. Only use when workflow is complete or user explicitly requests. --- DESCRIPTION FORMAT (MARKDOWN) - MUST BE NEUTRAL --- Description is for ANY user of this recipe, not just the creator. Keep it generic. - NO PII (no real emails, names, channel names, repo names) - NO user-specific defaults (defaults go in defaults_for_required_parameters only) - Use placeholder examples only Generate rich markdown with these sections: ## Overview [2-3 sentences: what it does, what problem it solves] ## How It Works [End-to-end flow in plain language] ## Key Features - [Feature 1] - [Feature 2] ## Step-by-Step Flow 1. **[Step]**: [What happens] 2. **[Step]**: [What happens] ## Apps & Integrations \| App \| Purpose \| \|-----\|---------\| \| [App] \| [Usage] \| ## Inputs Required \| Input \| Description \| Format \| \|-------\|-------------\|--------\| \| channel_name \| Slack channel to post to \| WITHOUT # prefix \| (No default values here - just format guidance) ## Output [What the recipe produces] ## Notes & Limitations - [Edge cases, rate limits, caveats] --- CODE STRUCTURE --- Code has 2 parts: 1. DOCSTRING HEADER (comments) - context, learnings, version history 2. EXECUTABLE CODE - clean Python that runs DOCSTRING HEADER (preserve all history when updating): """ RECIPE: [Name] FLOW: [App1] → [App2] → [Output] VERSION HISTORY: v2 (current): [What changed] - [Why] v1: Initial version API LEARNINGS: - [API_NAME]: [Quirk, e.g., Response nested at data.data] KNOWN ISSUES: - [Issue and fix] """ Then EXECUTABLE CODE follows (keep code clean, learnings stay in docstring). --- INPUT SCHEMA (USER-FRIENDLY) --- Ask for: channel_name, repo_name, sheet_url, email_address Never ask for: channel_id, spreadsheet_id, user_id (resolve in code) Never ask for large inputs: use invoke_llm to generate content in code GOOD DESCRIPTIONS (explicit format, generic examples - no PII): channel_name: Slack channel WITHOUT # prefix repo_name: Repository name only, NOT owner/repo google_sheet_url: Full URL from browser gmail_label: Label as shown in Gmail sidebar REQUIRED vs OPTIONAL: - Required: things that change every run (channel name, date range, search terms) - Optional: generic settings with sensible defaults (sheet tab, row limits) --- DEFAULTS FOR REQUIRED PARAMETERS --- - Provide in defaults_for_required_parameters for all required inputs - Use values from workflow context - Use empty string if no value available - never hallucinate - Match types: string param needs string default, number needs number - Defaults are private to creator, not shared when recipe is published - SCHEDULE-FRIENDLY DEFAULTS: - Use RELATIVE time references unless user asks otherwise, not absolute dates ✓ "last_24_hours", "past_week", "7" (days back) ✗ "2025-01-15", "December 18, 2025" - Never include timezone as an input parameter unless specifically asked - Test: "Will this default work if recipe runs tomorrow?" --- CODING RULES --- SINGLE EXECUTION: Generate complete notebook that runs in one invocation. CODE CORRECTNESS: Must be syntactically and semantically correct and executable. ENVIRONMENT VARIABLES: All inputs via os.environ.get(). Code is shared - no PII. TIMEOUT: 4 min hard limit. Use ThreadPoolExecutor for bulk operations. SCHEMA SAFETY: Never assume API response schema. Use invoke_llm to parse unknown responses. NESTED DATA: APIs often double-nest. Always extract properly before using. ID RESOLUTION: Convert names to IDs in code using FIND/SEARCH tools. FAIL LOUDLY: Raise Exception if expected data is empty. Never silently continue. CONTENT GENERATION: Never hardcode text. Use invoke_llm() for generated content. DEBUGGING: Timestamp all print statements. NO META LOOPS: Never call RUBE_* or COMPOSIO_* meta tools via run_composio_tool. OUTPUT: End with just output variable (no print). --- HELPERS --- Available in notebook (don't import). See RUBE_REMOTE_WORKBENCH for details: run_composio_tool(slug, args) returns (result, error) invoke_llm(prompt, reasoning_effort="low") returns (response, error) # reasoning_effort: "low" (bulk classification), "medium" (summarization), "high" (creative/complex content) # Always specify based on task - use low by default, medium for analysis, high for creative generation proxy_execute(method, endpoint, toolkit, ...) returns (result, error) upload_local_file(*paths) returns (result, error) --- CHECKLIST --- - Description: Neutral, no PII, no defaults - for any user - Docstring header: Version history, API learnings (preserve on update) - Input schema: Human-friendly names, format guidance, no large inputs - Defaults: In defaults_for_required_parameters, type-matched, from context - Code: Single execution, os.environ.get(), no PII, fail loudly - Output: Ends with just output |
| `COMPOSIO_EXECUTE_RECIPE` | Execute Recipe | Executes a Recipe |
| `COMPOSIO_UPSERT_RECIPE` | Create / Update Recipe from Workflow | Convert the executed workflow into a recipe using Python Pydantic code. The recipe_slug parameter is required. If a recipe with the provided slug already exists, a new version will be created. If the slug does not exist, a new recipe will be created. This tool allows you to: 1. Save the executed workflow as a reusable recipe using Python Pydantic code 2. The recipe_slug parameter is required. If a recipe with the provided slug already exists, a new version will be created. If the slug does not exist, a new recipe will be created. 3. Recipes are defined using Python Pydantic models that extend ComposioRecipe base class 4. You should generate the Python Pydantic code for the recipe based on the executed workflow, please see below for more instructions on how to generate the code for the recipe 5. The recipe code must define request and response Pydantic models and implement the execute method WHEN TO USE - Only run this tool when the workflow is completed and successful or if the user explicitly asked to run this tool DO NOT USE - When the workflow is still being processed, or not yet completed and the user explicitly didn't ask to run this tool IMPORTANT CODING RULES: 1. Single Execution: Please generate the code for the full recipe that can be executed in a single invocation 2. Schema Safety: Never assume the response schema for run_composio_tool if not known already from previous tools. To inspect schema, either run a simple request **outside** the workbench via COMPOSIO_MULTI_EXECUTE_TOOL or use invoke_llm helper. 3. Parallelism & Timeout (CRITICAL): There is a hard timeout of 4 minutes so complete the code within that. Prioritize PARALLEL execution using ThreadPoolExecutor with suitable concurrency for bulk operations - e.g., call run_composio_tool or invoke_llm parallelly across rows to maximize efficiency. 4. LLM Helpers: You should always use invoke_llm helper for summary, analysis, or field extraction on results. This is a smart LLM that will give much better results than any adhoc filtering. 5. Avoid Meta Loops: Do not use run_composio_tool to call COMPOSIO_MULTI_EXECUTE_TOOL or other COMPOSIO_* meta tools to avoid cycles. Only use it for app tools. 6. Pagination: Use when data spans multiple pages. Continue fetching pages with the returned next_page_token or cursor until none remains. Parallelize fetching pages if tool supports page_number. 7. No Hardcoding: Never hardcode data in code. Always load it from files or tool responses, iterating to construct intermediate or final inputs/outputs. 8. No Hardcoding: Please do NOT hardcode any PII related information like email / name / home address / social security id or anything like that. This is very risky 9. NEVER HARDCODE CONTENT (CRITICAL): For ANY content generation use case, you MUST use invoke_llm helper instead of hardcoding. This includes but is not limited to: social media posts (Twitter, LinkedIn, Instagram, Facebook, Reddit, etc.), blog posts, articles, email content, SEO reports, market research, jokes, stories, creative writing, product descriptions, news summaries, documentation, or ANY text content that should be unique, personalized, or contextual. Always use invoke_llm with a specific prompt to generate fresh content every time. 10. Dynamic Content Generation: Structure your code like this for content generation: content_prompt = f"Generate a [specific type] about [topic] that [requirements]" then generated_content, error = invoke_llm(content_prompt). Every execution should produce different, contextually appropriate content. 11. Code Correctness (CRITICAL): Code must be syntactically and semantically correct and executable. 12. Recipe Structure: The recipe code must follow this structure: - Import required modules: from pydantic import BaseModel, Field; from recipes.base import ComposioRecipe - Define Request model: class RecipeRequest(BaseModel) with Field descriptions - Define Response model: class RecipeResponse(BaseModel) with Field descriptions - Define Recipe class: class RecipeName(ComposioRecipe[RecipeRequest, RecipeResponse]) with name attribute and execute method - The execute method should use self.run_composio_tool() to call tools and return RecipeResponse instance - IMPORTANT: The recipe class must have a name attribute that is a slug-type identifier (lowercase, underscores, no spaces). Example: name = "weather_lookup" or name = "github_repo_analyzer". This name will be used as the recipe slug identifier. 13. Pydantic Models: Use proper Field descriptions and types. Request model should have all input parameters with Field(..., description="...") 14. Response Model: Should match the expected output structure with proper Field descriptions 15. Tool Execution: Use self.run_composio_tool(tool_slug, arguments) to execute tools within the recipe 16. Error Handling: Handle errors appropriately and return meaningful error messages 17. Debugging (CRITICAL): For every print statement in your code, please prefix it with the time at which it is being executed. This will help me investigate latency related stuff 18. If there are any errors while running, please throw the error so that the person running the recipe can see and fix it 19. FAIL LOUDLY (CRITICAL): If you expect data but get 0 results, raise Exception immediately. NEVER silently continue or create empty outputs. Recovery loop will fix the code - don't hide issues. Example: if len(items) == 0: raise Exception("Expected data but got none") 20. NESTED DATA (CRITICAL): APIs often double-nest data. Always extract: data = result.get("data", {}); if "data" in data: data = data["data"]. Try flexible field names: item.get("id") or item.get("channel_id") IMPORTANT SCHEMA RULES: 1. Keep request model simple - include only parameters users would want to vary between runs 2. Do not ask for large inputs. Use invoke_llm helper to generate large content in the code 3. HUMAN-FRIENDLY INPUTS (CRITICAL): - ✓ Ask for: channel_name, google_sheet_url, repo_name, email_address - ✗ Never ask for: channel_id, spreadsheet_id, document_id, user_id - Extract IDs in code: use FIND/SEARCH tools to convert names/URLs to IDs - For URLs: extract IDs in code with regex (e.g., spreadsheet_id from google_sheet_url) 4. REQUIRED vs OPTIONAL: Mark as required only if it's specific to the user's workflow and would change every run. Generic settings should be optional with sensible defaults 5. Identify what varies between runs: channel name, date range, search terms = required. Sheet tab name, row limits, formatting = optional 6. [CRITICAL]: Use search/find tools in code to convert human inputs (names/URLs) to IDs before calling other tools ENV & HELPERS: Recipes inherit from ComposioRecipe and have access to these helper methods: 1. self.run_composio_tool(tool_slug: str, params: Dict) -> Tuple[Dict, Optional[str]] - Execute a Composio tool - Returns: (result dict, error string or None) 2. self.invoke_llm(prompt: str) -> Tuple[str, Optional[str]] - Invoke an LLM for semantic analysis, reasoning, summarization, and other generic tasks - Returns: (completion text, error string or None) 3. self.web_search(query: str) -> Tuple[str, Optional[str]] - Perform web search - Returns: (answer text, error string or None) NOTE: The recipe code must be valid Python Pydantic code that extends ComposioRecipe base class. |
| `COMPOSIO_GET_RECIPE` | Get Recipe Details by Slug | Get the details of an existing recipe by its slug. Returns the recipe's name, description, input/output schemas, and the toolkits it uses. Use this to inspect a recipe's structure before executing it. |
| `COMPOSIO_GET_RECIPE_DETAILS` | Get Existing Recipe Details | Get the details of the existing recipe for a given recipe id. |
| `COMPOSIO_WAIT_FOR_CONNECTIONS` | Wait for connection | Wait for user auth to finish. Call ONLY after you have shown the Auth link from COMPOSIO_MANAGE_CONNECTIONS. Wait until mode=any/all toolkits reach a terminal state (ACTIVE/FAILED) or timeout. Example Input: { toolkits: ["gmail","outlook"], mode: "any" } |
| `COMPOSIO_GET_TOOL_SCHEMAS` | Get Tool Schemas | Retrieve input schemas for tools by slug. Returns complete parameter definitions required to execute each tool. Only pass tool slugs returned by COMPOSIO_SEARCH_TOOLS — never guess or fabricate slugs. If unsure of the exact slug, call COMPOSIO_SEARCH_TOOLS first. |
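The `COMPOSIO_UPSERT_RECIPE` description above prescribes a specific recipe shape: a request model, a response model, and a class extending `ComposioRecipe` with a slug-style `name` and an `execute` method. The following is a minimal sketch of that shape, not a definitive implementation: the real base class lives in Composio's runtime (`from recipes.base import ComposioRecipe`), so a local stand-in is defined here to keep the example self-contained, and the `SLACK_FIND_CHANNELS` tool slug is a hypothetical placeholder.

```python
from typing import Dict, Generic, Optional, Tuple, TypeVar

from pydantic import BaseModel, Field

Req = TypeVar("Req", bound=BaseModel)
Res = TypeVar("Res", bound=BaseModel)


class ComposioRecipe(Generic[Req, Res]):
    """Local stand-in for Composio's real base class, which at runtime
    provides run_composio_tool, invoke_llm, and web_search helpers."""

    def run_composio_tool(self, tool_slug: str, params: Dict) -> Tuple[Dict, Optional[str]]:
        raise NotImplementedError("provided by the Composio runtime")


class RecipeRequest(BaseModel):
    # Human-friendly inputs only -- never ask the user for raw IDs.
    channel_name: str = Field(..., description="Slack channel WITHOUT the # prefix")
    topic: str = Field(..., description="Topic for the generated message")


class RecipeResponse(BaseModel):
    posted: bool = Field(..., description="Whether the message was posted")
    channel_id: str = Field("", description="Channel ID resolved from channel_name")


class SlackTopicPoster(ComposioRecipe[RecipeRequest, RecipeResponse]):
    # Slug-style identifier (lowercase, underscores), per the rules above.
    name = "slack_topic_poster"

    def execute(self, request: RecipeRequest) -> RecipeResponse:
        # Resolve the human-friendly name to an ID in code.
        found, err = self.run_composio_tool(
            "SLACK_FIND_CHANNELS", {"query": request.channel_name}
        )
        if err or not found:
            # FAIL LOUDLY: never silently continue with empty data.
            raise Exception(f"Channel lookup failed: {err}")
        # NESTED DATA: try flexible field names when extracting.
        channel_id = found.get("id") or found.get("channel_id", "")
        return RecipeResponse(posted=True, channel_id=channel_id)
```

In practice the docstring header, `invoke_llm`-generated content, and error handling described in the table rows would be layered on top of this skeleton.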

## Supported Triggers

None listed.

## Creating an MCP Server: Standalone vs Composio SDK

Once connected, VS Code can reach the Composio MCP server and run the app actions you authorize, directly from your coding workflow.
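VS Code reads workspace-level MCP server definitions from a `.vscode/mcp.json` file with a top-level `servers` map. A minimal sketch of that file is below; the URL is a placeholder, and the actual server URL for your account comes from the Composio dashboard:

```json
{
  "servers": {
    "composio": {
      "type": "http",
      "url": "https://mcp.composio.dev/your-server-id"
    }
  }
}
```

After saving the file, VS Code's agent mode should list the server's tools, prompting you to authorize the connected apps on first use.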

## Complete Code

None listed.

## Conclusion

### Way Forward
Now that Composio is connected, extend your setup by connecting the other apps you already use every day, so your agent can run true cross-app workflows end to end.
- Connect your calendar to turn email threads into scheduled meetings automatically.
- Connect Slack or Teams to post summaries, approvals, and alerts where your team works.
- Connect Notion, Linear, Jira, or Asana to convert requests into tickets, tasks, and docs.
- Connect Drive, Dropbox, or OneDrive to fetch, file, and share attachments without manual steps.
- Connect HubSpot or Salesforce to log customer context, update records, and draft follow-ups.
Start with one workflow you do repeatedly, then keep adding apps as you find new handoffs. With everything behind a single MCP endpoint, your agent can coordinate multiple tools safely and reliably in one conversation.

## How to build Composio MCP Agent with another framework

- [ChatGPT](https://composio.dev/toolkits/composio/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/composio/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/composio/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/composio/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/composio/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/composio/framework/codex)
- [Cursor](https://composio.dev/toolkits/composio/framework/cursor)
- [OpenCode](https://composio.dev/toolkits/composio/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/composio/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/composio/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/composio/framework/cli)
- [Google ADK](https://composio.dev/toolkits/composio/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/composio/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/composio/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/composio/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/composio/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/composio/framework/crew-ai)

## Related Toolkits

- [Composio search](https://composio.dev/toolkits/composio_search) - Composio search is a unified web search toolkit spanning travel, e-commerce, news, financial markets, images, and more. It lets you and your apps tap into up-to-date web data from a single, easy-to-integrate service.
- [Perplexityai](https://composio.dev/toolkits/perplexityai) - Perplexityai delivers natural, conversational AI models for generating human-like text. Instantly get context-aware, high-quality responses for chat, search, or complex workflows.
- [Browser tool](https://composio.dev/toolkits/browser_tool) - Browser tool is a virtual browser integration that lets AI agents interact with the web programmatically. It enables automated browsing, scraping, and action-taking from any AI workflow.
- [Ai ml api](https://composio.dev/toolkits/ai_ml_api) - Ai ml api is a suite of AI/ML models for natural language and image tasks. It provides fast, scalable access to advanced AI capabilities for your apps and workflows.
- [Aivoov](https://composio.dev/toolkits/aivoov) - Aivoov is an AI-powered text-to-speech platform offering 1,000+ voices in over 150 languages. Instantly turn written content into natural, human-like audio for any application.
- [All images ai](https://composio.dev/toolkits/all_images_ai) - All-Images.ai is an AI-powered image generation and management platform. It helps you create, search, and organize images effortlessly with advanced AI capabilities.
- [Anthropic administrator](https://composio.dev/toolkits/anthropic_administrator) - Anthropic administrator is an API for managing Anthropic organizational resources like members, workspaces, and API keys. It helps you automate admin tasks and streamline resource management across your Anthropic organization.
- [Api labz](https://composio.dev/toolkits/api_labz) - Api labz is a platform offering a suite of AI-driven APIs and workflow tools. It helps developers automate tasks and build smarter, more efficient applications.
- [Apipie ai](https://composio.dev/toolkits/apipie_ai) - Apipie ai is an AI model aggregator offering a single API for accessing top AI models from multiple providers. It helps developers build cost-efficient, latency-optimized AI solutions without juggling multiple integrations.
- [Astica ai](https://composio.dev/toolkits/astica_ai) - Astica ai provides APIs for computer vision, NLP, and voice synthesis. Integrate advanced AI features into your app with a single API key.
- [Bigml](https://composio.dev/toolkits/bigml) - BigML is a machine learning platform that lets you build, train, and deploy predictive models from your data. Its intuitive interface and robust API make machine learning accessible and efficient.
- [Botbaba](https://composio.dev/toolkits/botbaba) - Botbaba is a platform for building, managing, and deploying conversational AI chatbots across messaging channels. It streamlines chatbot automation, making it easier to integrate AI into customer interactions.
- [Botpress](https://composio.dev/toolkits/botpress) - Botpress is an open-source platform for building, deploying, and managing chatbots. It helps teams automate conversations and deliver rich, interactive messaging experiences.
- [Chatbotkit](https://composio.dev/toolkits/chatbotkit) - Chatbotkit is a platform for building and managing AI-powered chatbots using robust APIs and SDKs. It lets you easily add conversational AI to your apps for better user engagement.
- [Cody](https://composio.dev/toolkits/cody) - Cody is an AI assistant built for businesses, trained on your company's knowledge and data. It delivers instant answers and insights, tailored for your team.
- [Context7 MCP](https://composio.dev/toolkits/context7_mcp) - Context7 MCP delivers live, version-specific code docs and examples right from the source. It helps developers and AI agents instantly retrieve authoritative programming info—no more out-of-date docs.
- [Customgpt](https://composio.dev/toolkits/customgpt) - CustomGPT.ai lets you build and deploy chatbots tailored to your own data and business needs. Get precise and context-aware AI conversations without writing code.
- [Datarobot](https://composio.dev/toolkits/datarobot) - Datarobot is a machine learning platform that automates model development, deployment, and monitoring. It empowers organizations to quickly gain predictive insights from large datasets.
- [Deepgram](https://composio.dev/toolkits/deepgram) - Deepgram is an AI-powered speech recognition platform for accurate audio transcription and understanding. It enables fast, scalable speech-to-text with advanced audio intelligence features.
- [DeepImage](https://composio.dev/toolkits/deepimage) - DeepImage is an AI-powered image enhancer and upscaler. Get higher-quality images with just a few clicks.

## Frequently Asked Questions

### What are the differences between Tool Router MCP and Composio MCP?

With a standalone Composio MCP server, agents and LLMs can only access the fixed set of Composio tools tied to that server. With the Composio Tool Router, agents can instead dynamically load tools from Composio and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with VS Code?

Yes, you can. VS Code fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Composio tools.

### Can I manage the permissions and scopes for Composio while using Tool Router?

Yes, absolutely. You can configure which Composio scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Composio data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
