# How to integrate OpenAI MCP with Google ADK

```json
{
  "title": "How to integrate Openai MCP with Google ADK",
  "toolkit": "Openai",
  "toolkit_slug": "openai",
  "framework": "Google ADK",
  "framework_slug": "google-adk",
  "url": "https://composio.dev/toolkits/openai/framework/google-adk",
  "markdown_url": "https://composio.dev/toolkits/openai/framework/google-adk.md",
  "updated_at": "2026-05-12T10:20:48.181Z"
}
```

## Introduction

This guide walks you through connecting OpenAI to Google ADK using the Composio Tool Router. By the end, you'll have a working agent that can list the available OpenAI models, upload a file for fine-tuning, and create a new assistant with GPT-4, all through natural language commands.
The goal is to give your Google ADK agent real control over an OpenAI account through Composio's OpenAI MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate OpenAI with

- [ChatGPT](https://composio.dev/toolkits/openai/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/openai/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/openai/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/openai/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/openai/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/openai/framework/codex)
- [Cursor](https://composio.dev/toolkits/openai/framework/cursor)
- [VS Code](https://composio.dev/toolkits/openai/framework/vscode)
- [OpenCode](https://composio.dev/toolkits/openai/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/openai/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/openai/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/openai/framework/cli)
- [LangChain](https://composio.dev/toolkits/openai/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/openai/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/openai/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/openai/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/openai/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- Get an OpenAI account set up and connected to Composio
- Install the Google ADK and Composio packages
- Create a Composio Tool Router session for OpenAI
- Build an agent that connects to OpenAI through MCP
- Interact with OpenAI using natural language
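To follow along, you'll need both packages installed. A likely install command, assuming `google-adk` and `composio` are the current pip distribution names (confirm against each project's docs before relying on them):

```shell
# Install Google ADK and the Composio SDK into your environment.
# Package names are assumptions; check the official docs if either fails.
pip install google-adk composio
```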

## What is Google ADK?

Google ADK (Agent Development Kit) is Google's framework for building AI agents powered by Gemini models. It provides tools for creating agents that can use external services through the Model Context Protocol (MCP).
Key features include:
- Gemini Integration: Native support for Google's Gemini models
- MCP Toolset: Built-in support for Model Context Protocol tools
- Streamable HTTP: Connect to external services through streamable HTTP
- CLI and Web UI: Run agents via command line or web interface
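The features above come together in a few lines of agent code. Here's a minimal sketch of mounting an MCP server as an ADK toolset over streamable HTTP; the import paths and class names (`LlmAgent`, `MCPToolset`, `StreamableHTTPConnectionParams`) are assumptions based on recent google-adk releases, and the MCP URL is a placeholder for your Tool Router session URL, so confirm both against the current ADK and Composio documentation:

```python
# Sketch: an ADK agent that mounts an MCP server as a toolset.
# Import paths below are assumptions; verify against the ADK docs.
try:
    from google.adk.agents import LlmAgent
    from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset
    from google.adk.tools.mcp_tool.mcp_session_manager import (
        StreamableHTTPConnectionParams,
    )
    HAVE_ADK = True
except ImportError:  # google-adk not installed; keep the sketch importable
    HAVE_ADK = False


def build_agent(mcp_url: str):
    """Return an LlmAgent wired to the MCP server at mcp_url (needs google-adk)."""
    if not HAVE_ADK:
        raise RuntimeError("google-adk is not installed; run `pip install google-adk`")
    return LlmAgent(
        model="gemini-2.0-flash",
        name="openai_assistant",
        instruction="Use the available OpenAI tools to act on the user's account.",
        tools=[
            # MCPToolset discovers the server's tools and exposes them to the agent.
            MCPToolset(
                connection_params=StreamableHTTPConnectionParams(url=mcp_url),
            )
        ],
    )
```

Once built, you'd run the agent via the ADK CLI or web UI as usual; the toolset handles tool discovery and invocation against the MCP endpoint.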

## What is the OpenAI MCP server, and what's possible with it?

The OpenAI MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants, such as Claude or Cursor, directly to your OpenAI account. It provides structured, secure access to your models, assistants, files, threads, and fine-tuning jobs, so your agent can perform actions like managing assistants, handling conversations, uploading or organizing files, and working with OpenAI models on your behalf.
- Assistant and conversation management: Quickly create, update, or delete OpenAI assistants and manage threads or messages for seamless conversational flows.
- File uploads and organization: Let your agent upload new files, list all uploaded documents, or delete unnecessary files to keep your workspace tidy.
- Model discovery and utilization: Effortlessly list all available OpenAI models—including vision and multimodal—and retrieve their details to choose the best fit for your tasks.
- Fine-tuning job insights: View a complete list of your organization's fine-tune jobs and track their progress or review results as needed.
- Thread and run management: Create, modify, or inspect threads and run steps to fully control and monitor interactive agent conversations.
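Under the hood, each of these capabilities is invoked with an MCP `tools/call` message, since MCP is built on JSON-RPC 2.0. The sketch below builds such a message for the `OPENAI_LIST_MODELS` tool; in practice your MCP client library (or the ADK toolset) constructs and transports these for you, so this is purely illustrative:

```python
import json


def build_tool_call(request_id: int, tool_slug: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps(
        {
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "tools/call",
            # "name" is the tool slug from the table below; "arguments"
            # carries the tool's input parameters.
            "params": {"name": tool_slug, "arguments": arguments},
        }
    )


# Ask the OpenAI MCP server to list the models available to this account.
message = build_tool_call(1, "OPENAI_LIST_MODELS", {})
```

The server responds with a JSON-RPC result containing the tool's output, which the agent framework turns back into model-readable context.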

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `OPENAI_ADD_UPLOAD_PART` | Add Upload Part | Tool to add a part (chunk of bytes) to an Upload object. Use when uploading large files in chunks, with each part up to 64 MB. |
| `OPENAI_CANCEL_BATCH` | Cancel batch | Tool to cancel an in-progress batch. Use when you need to stop a batch that is currently processing. The batch will be in status 'cancelling' for up to 10 minutes before changing to 'cancelled', where partial results (if any) will be available. |
| `OPENAI_CANCEL_EVAL_RUN` | Cancel evaluation run | Tool to cancel an ongoing evaluation run. Use when you need to stop an evaluation run that is currently in progress. |
| `OPENAI_CANCEL_RESPONSE` | Cancel Response | Tool to cancel a background model response by its ID. Use when you need to stop a response that was created with the 'background' parameter set to true. Only background responses can be cancelled; attempting to cancel a non-background response will fail. |
| `OPENAI_CANCEL_RUN` | Cancel Run | Tool to cancel a run that is currently in progress. Use when you need to stop an assistant run that is taking too long or is no longer needed. The run's status will transition to 'cancelling' and then 'cancelled'. |
| `OPENAI_CANCEL_UPLOAD` | Cancel upload | Tool to cancel an upload. Use when you need to stop an upload that is in progress. No parts may be added after cancellation. |
| `OPENAI_COMPACT_RESPONSE` | Compact Response | Tool to compact a conversation or response to reduce token usage. Use when you need to reduce the size of long conversations while preserving important context. Either provide an array of input messages or reference a previous response ID to compact. |
| `OPENAI_CREATE_AUDIO_TRANSCRIPTION` | Create Audio Transcription | Tool to transcribe audio files to text via OpenAI Audio Transcriptions API. Use when you need to convert speech in audio files to written text, optionally with timestamps or speaker diarization. |
| `OPENAI_CREATE_AUDIO_TRANSLATION` | Create Audio Translation | Tool to translate audio files to English text via OpenAI Audio Translations API. Use when you need to convert speech in audio files (any language) to English text. |
| `OPENAI_CREATE_BATCH` | Create Batch | Tool to create and execute a batch from an uploaded file of requests. Use after uploading a JSONL file with purpose 'batch' to process multiple API requests in a single batch operation. |
| `OPENAI_CREATE_CHAT_COMPLETION` | Create Chat Completion | Tool to create a chat completion response from OpenAI models. Use for conversational AI, text generation, function calling, multimodal tasks with vision/audio, and structured JSON outputs. Supports advanced features like reasoning models, tool use, and streaming responses. |
| `OPENAI_CREATE_COMPLETION` | Create Completion (Legacy) | Tool to generate text completions using OpenAI's legacy Completions API. Use for single-turn text generation with models like gpt-3.5-turbo-instruct. Note: This endpoint is legacy; prefer Chat Completions for newer models. |
| `OPENAI_CREATE_CONTAINER` | Create Container | Tool to create a new container with configurable memory, expiration, file access, and network policies. Use when you need to provision an isolated execution environment. |
| `OPENAI_CREATE_CONTAINER_FILE` | Create Container File | Tool to create a file in a container. Use when adding files to an existing container either by referencing an uploaded file ID or by uploading raw file content directly. |
| `OPENAI_CREATE_CONVERSATION` | Create Conversation | Tool to create a new conversation for multi-turn interactions. Use when initializing a persistent conversation with optional starter messages. |
| `OPENAI_CREATE_CONVERSATION_ITEMS` | Create Conversation Items | Tool to create items in a conversation with the given ID. Use when adding messages or other items to an existing conversation. |
| `OPENAI_CREATE_EMBEDDINGS` | Create Embeddings | Tool to generate text embeddings via the OpenAI embeddings endpoint. Use for search, clustering, recommendations, and vector database storage workflows. |
| `OPENAI_CREATE_EVAL` | Create Eval | Tool to create an evaluation structure for testing a model's performance. An evaluation is a set of testing criteria and data source config that dictates the schema of data used in the evaluation. Use when setting up automated testing for model outputs with graders like label_model, string_check, or text_similarity. |
| `OPENAI_CREATE_EVAL_RUN` | Create Evaluation Run | Tool to create a new evaluation run for testing model configurations. Use when you need to kick off an evaluation with a specific data source and model configuration to test. |
| `OPENAI_CREATE_FINE_TUNING_JOB` | Create fine-tuning job | Tool to create a fine-tuning job which begins the process of creating a new model from a given dataset. Use when you need to start fine-tuning a model with your training data. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete. |
| `OPENAI_CREATE_IMAGE` | Generate Image | Tool to generate an image via the OpenAI Images API and return hosted image asset URL and metadata. Use when you need to create images from text descriptions for single-shot image generation. |
| `OPENAI_CREATE_IMAGE_EDIT` | Edit Image | Tool to create edited or extended images via OpenAI Images Edit API. Use when you need to modify existing images based on a text prompt, with optional mask support for targeted edits. |
| `OPENAI_CREATE_IMAGE_VARIATION` | Create Image Variation | Tool to create a variation of a given image using the OpenAI Images API. Use when you need to generate alternative versions of an existing image. Only supports dall-e-2 model. |
| `OPENAI_CREATE_MESSAGE` | Create Message | Tool to create a new message in a specific thread. Appends the message only — does not trigger a model response; create a separate run to obtain assistant replies. Use when adding messages to an existing conversation after confirming the thread ID. |
| `OPENAI_CREATE_MODERATION` | Create Moderation | Tool to classify text and/or image inputs for potentially harmful content via the OpenAI Moderation API. Use for content safety checks, filtering user-generated content, or monitoring for policy violations across 13 harm categories including harassment, hate, violence, sexual content, self-harm, and illicit activities. |
| `OPENAI_CREATE_REALTIME_CALL` | Create Realtime Call | Tool to create a Realtime API call over WebRTC and receive the SDP answer needed to complete the peer connection. Use when initiating a bidirectional audio/data WebRTC session with OpenAI's Realtime API. Returns the SDP answer and call ID for connection establishment. |
| `OPENAI_CREATE_REALTIME_CLIENT_SECRET` | Create Realtime Client Secret | Tool to create an ephemeral client secret for authenticating Realtime API connections. Use when you need to establish a WebSocket connection to OpenAI's Realtime API for voice or streaming interactions. |
| `OPENAI_CREATE_REALTIME_SESSION` | Create Realtime Session | Tool to create an ephemeral API token for client-side Realtime API applications. Use when setting up browser-based real-time audio/text interactions. |
| `OPENAI_CREATE_REALTIME_TRANSCRIPTION_SESSION` | Create Realtime Transcription Session | Tool to create an ephemeral API token for realtime transcriptions via the Realtime API. Use when you need to authenticate browser clients for realtime audio transcription sessions. Returns a session object with a client_secret containing a usable ephemeral API token. |
| `OPENAI_CREATE_RESPONSE` | Create Response | Tool to generate a one-shot model response via the Responses API. Use for multimodal analysis (image + text), OCR/text extraction from images, or structured JSON outputs. For structured outputs, configure text.format with type='json_schema' and your schema. Returns the full Response object including id, output array, and aggregated output_text for parsing structured data. |
| `OPENAI_CREATE_RUN` | Create Run | Tool to create a run on a thread with an assistant. Use when you need to execute an assistant to generate responses. Creating a message alone does not cause the assistant to respond; a run is the execution primitive. After creating the run, typically poll the run status until it reaches a terminal state (completed, failed, cancelled, expired), then read the assistant's messages from the thread. |
| `OPENAI_CREATE_SKILL` | Create Skill | Tool to create a skill from uploaded files. Use when you need to create a new skill with SKILL.md and supporting files. |
| `OPENAI_CREATE_SPEECH` | Create Speech (TTS) | Tool to generate text-to-speech audio using OpenAI's Audio API. Use when you need to convert text to natural-sounding speech with a choice of voices and models. Returns a hosted audio file URL with metadata, not raw bytes. |
| `OPENAI_CREATE_THREAD` | Create Thread | Tool to create a new thread. Use when initializing a conversation with optional starter messages. Returns a thread_id that must be persisted and passed to all subsequent calls (e.g., OPENAI_CREATE_MESSAGE, OPENAI_RETRIEVE_THREAD). Create a fresh thread for each independent conversation or unrelated task. |
| `OPENAI_CREATE_THREAD_AND_RUN` | Create Thread And Run | Tool to create a thread and run it in one request. Use when you need to start a new conversation and immediately execute the assistant to generate a response. This is more efficient than calling create_thread and create_run separately. |
| `OPENAI_CREATE_UPLOAD` | Create Upload | Tool to create an intermediate Upload object for large file uploads. Use when uploading files larger than the direct upload limit by adding Parts to the Upload. The Upload accepts up to 8 GB total and expires after one hour if not completed. |
| `OPENAI_CREATE_VECTOR_STORE` | Create Vector Store | Tool to create a new vector store. Use when you need to create a collection of processed files for file_search tools. |
| `OPENAI_CREATE_VECTOR_STORE_FILE` | Create Vector Store File | Tool to create a vector store file by attaching a File to a vector store. Use when you need to add files to a vector store for file search capabilities. |
| `OPENAI_CREATE_VECTOR_STORE_FILE_BATCH` | Create vector store file batch | Tool to create a vector store file batch. Use when attaching multiple files to a vector store for file search capabilities. |
| `OPENAI_CREATE_VIDEO` | Create Video | Tool to create a video using Sora models via the OpenAI Videos API. Use when you need to generate videos from text descriptions. The response includes a job ID and status for tracking the asynchronous video generation process. |
| `OPENAI_CREATE_VIDEO_REMIX` | Create Video Remix | Tool to create a video remix from an existing generated video using OpenAI's Video API. Use when you need to transform or modify a previously generated video based on a new prompt. The remix operation creates a new video job that applies the new prompt to the original video content. |
| `OPENAI_DELETE_ASSISTANT` | Delete assistant | Tool to delete a specific assistant by its ID. Use when you need to remove an assistant after confirming its ID. Deletion is irreversible and permanently removes all assistant configuration. |
| `OPENAI_DELETE_CHAT_COMPLETION` | Delete chat completion | Tool to delete a stored chat completion by its ID. Use when you need to remove a chat completion that was created with store=true. |
| `OPENAI_DELETE_CONTAINER` | Delete container | Tool to delete a specific container by its ID. Use when you need to remove a container after confirming its ID. |
| `OPENAI_DELETE_CONTAINER_FILE` | Delete container file | Tool to delete a file from a container. Use when you need to remove a file from a specific container by providing both container ID and file ID. |
| `OPENAI_DELETE_CONVERSATION` | Delete conversation | Tool to delete a conversation by its ID. Items in the conversation will not be deleted. Use when you need to remove a conversation. |
| `OPENAI_DELETE_CONVERSATION_ITEM` | Delete conversation item | Tool to delete an item from a conversation with the given IDs. Use when you need to remove a specific message or item from a conversation. |
| `OPENAI_DELETE_EVAL` | Delete evaluation | Tool to delete a specific evaluation by its ID. Use when you need to remove an evaluation after confirming its ID. |
| `OPENAI_DELETE_EVAL_RUN` | Delete evaluation run | Tool to delete an evaluation run. Use when you need to permanently remove a specific run from an evaluation. |
| `OPENAI_DELETE_FILE` | Delete file | Tool to delete a file by its ID after confirming the target. Files persist indefinitely and count against storage quotas; periodically list and remove unneeded files to avoid hitting file-count or size caps. |
| `OPENAI_DELETE_MESSAGE` | Delete message | Tool to delete a message from a thread. Use when you need to remove a specific message by its ID after confirming both the thread and message IDs. |
| `OPENAI_DELETE_RESPONSE` | Delete response | Tool to delete a model response with the given ID. Use when you need to remove a stored response from the Responses API. |
| `OPENAI_DELETE_SKILL` | Delete skill | Tool to delete a specific skill by its ID. Use when you need to remove a skill after confirming its ID. |
| `OPENAI_DELETE_THREAD` | Delete thread | Tool to delete a thread by its ID. Use when you need to remove a conversation thread after confirming the target ID. |
| `OPENAI_DELETE_VECTOR_STORE` | Delete Vector Store | Tool to delete a vector store. Use when you need to permanently remove a vector store by its ID. |
| `OPENAI_DELETE_VECTOR_STORE_FILE` | Delete Vector Store File | Tool to delete a vector store file. This removes the file from the vector store but does not delete the file itself. Use when you need to remove a file from a vector store's search index. |
| `OPENAI_DELETE_VIDEO` | Delete video | Tool to delete a video by its ID. Use when you need to remove a video after confirming its identifier. |
| `OPENAI_DOWNLOAD_FILE` | Download file | Tool to download the contents of a specified file by its ID. Use when you need to retrieve file content from OpenAI. |
| `OPENAI_DOWNLOAD_VIDEO` | Download Video Content | Tool to download video content (MP4) or preview assets from OpenAI Videos API. Use when you need to retrieve the actual video file, thumbnail, or spritesheet for a generated video. |
| `OPENAI_GET_CHAT_COMPLETION` | Get Chat Completion | Tool to retrieve a stored chat completion. Use when you need to fetch a previously created chat completion that was stored with store=true. |
| `OPENAI_GET_CHAT_COMPLETION_MESSAGES` | Get Chat Completion Messages | Tool to retrieve messages from a stored chat completion. Use when you need to fetch the conversation history of a previously created chat completion. Only works with completions that were created with the store parameter set to true. |
| `OPENAI_GET_CHATKIT_THREAD` | Get ChatKit thread | Tool to retrieve a ChatKit thread by its ID. Use when you need to fetch details about a specific ChatKit thread including its status, title, and owner. |
| `OPENAI_GET_CONVERSATION_ITEM` | Get Conversation Item | Tool to retrieve a single item from a conversation. Use when you need to fetch details of a specific message or item within a conversation. |
| `OPENAI_GET_EVAL` | Get Eval | Tool to retrieve an evaluation by ID. Use when you need to inspect an existing evaluation's configuration, data source settings, or testing criteria. |
| `OPENAI_GET_EVAL_RUN` | Get Evaluation Run | Tool to retrieve an evaluation run by ID to check status and results. Use when you need to monitor an evaluation run's progress or retrieve its final results after creation. |
| `OPENAI_GET_EVAL_RUN_OUTPUT_ITEM` | Get Eval Run Output Item | Tool to retrieve a specific output item from an evaluation run by its ID. Use when you need detailed results for a particular test case within an evaluation run. |
| `OPENAI_GET_EVAL_RUN_OUTPUT_ITEMS` | Get eval run output items | Tool to get a list of output items for an evaluation run. Use when you need to retrieve the detailed results of an eval run, including grading outcomes, input/output samples, and token usage. |
| `OPENAI_GET_EVAL_RUNS` | Get Evaluation Runs | Tool to get a paginated list of runs for an evaluation. Use when you need to retrieve all runs or check the status of multiple runs for a specific evaluation. |
| `OPENAI_GET_INPUT_TOKEN_COUNTS` | Get Input Token Counts | Tool to calculate input token counts for OpenAI API requests. Use when you need to estimate token usage before making an API call, validate content fits within model limits, or optimize prompts for token efficiency. |
| `OPENAI_GET_MESSAGE` | Get Message | Tool to retrieve a specific message from a thread by its ID. Use when you need to fetch details of a particular message in an Assistants thread. |
| `OPENAI_GET_RESPONSE` | Get Response | Tool to retrieve a model response by ID. Use when you need to check the status, output, or metadata of a previously created response. |
| `OPENAI_GET_RUN_STEP` | Get Run Step | Tool to retrieve a specific run step from an Assistants API run to inspect detailed execution progress, view tool calls, or check message creation. Use when you need details about a specific step in a run's execution. |
| `OPENAI_GET_VECTOR_STORE` | Get Vector Store | Tool to retrieve a vector store by its ID. Use when you need to get details of an existing vector store for file_search operations. |
| `OPENAI_GET_VECTOR_STORE_FILE` | Get Vector Store File | Tool to retrieve a file from a vector store. Use when you need to check the status or details of a file attached to a vector store. |
| `OPENAI_GET_VECTOR_STORE_FILE_BATCH` | Get Vector Store File Batch | Tool to retrieve a vector store file batch. Use when you need to check the status of a batch file processing operation. |
| `OPENAI_GET_VIDEO` | Get Video | Tool to retrieve a video generation job by its unique identifier. Use when you need to check the status, progress, or details of a video generation task. |
| `OPENAI_LIST_ASSISTANTS` | List Assistants | Tool to list assistants to discover the correct assistant_id by name or metadata. Use when assistant_id is unknown to avoid 404 errors. |
| `OPENAI_LIST_BATCHES` | List Batches | Tool to list your organization's batches. Use when you need to view all batches, check batch statuses, or find a specific batch by listing them. |
| `OPENAI_LIST_CHAT_COMPLETIONS` | List Chat Completions | Tool to list stored chat completions that were created with the `store` parameter set to true. Use to retrieve previously stored completions for review, analysis, or audit purposes. |
| `OPENAI_LIST_CHATKIT_THREAD_ITEMS` | List ChatKit thread items | Tool to list ChatKit thread items. Use when you need to retrieve all items from a ChatKit thread, including messages, tool calls, and tasks. |
| `OPENAI_LIST_CONTAINER_FILES` | List container files | Tool to list files in a container. Use when you need to view all files uploaded to a specific container. |
| `OPENAI_LIST_CONTAINERS` | List Containers | Tool to list containers. Use when you need to view all containers in your organization or filter by name. |
| `OPENAI_LIST_CONVERSATION_ITEMS` | List Conversation Items | Tool to list all items for a conversation with the given ID. Use when you need to retrieve the items in a conversation for review or processing. |
| `OPENAI_LIST_ENGINES` | List engines | Tool to list available engines and their basic information. Use when you need to discover which engines are available. |
| `OPENAI_LIST_EVALS` | List Evals | Tool to list evaluations for a project. Use when you need to view all evaluations, check their configurations, or find a specific evaluation by listing them. |
| `OPENAI_LIST_FILES` | List files | Tool to retrieve a list of files uploaded to your organization/project context. Cursor-paginated: continue requesting pages until next_cursor is null to retrieve all files. Files from other project environments are not visible. Cached file_ids may become stale if files are deleted; re-query before referencing file_ids in downstream workflows. |
| `OPENAI_LIST_FILES_IN_VECTOR_STORE_BATCH` | List Files in Vector Store Batch | Tool to list vector store files in a batch. Use when you need to retrieve all files that were added to a vector store as part of a specific batch operation. |
| `OPENAI_LIST_FINE_TUNES` | List fine-tunes | Tool to list your organization's fine-tuning jobs. Use when you need to review all fine-tune runs. Results are scoped to the authenticated organization; an empty list means no fine-tunes exist for that org specifically. |
| `OPENAI_LIST_FINE_TUNING_EVENTS` | List fine-tuning job events | Tool to get status updates for a fine-tuning job. Use when you need to monitor the progress of a fine-tuning job or retrieve event logs. |
| `OPENAI_LIST_FINE_TUNING_JOB_CHECKPOINTS` | List fine-tuning job checkpoints | Tool to list checkpoints for a fine-tuning job. Use when you need to retrieve model checkpoints created during the fine-tuning process. Checkpoints are created at various steps during training and can be used to evaluate model performance at different stages. |
| `OPENAI_LIST_INPUT_ITEMS` | List Input Items | Tool to retrieve input items for a given response from the OpenAI Responses API. Use when you need to inspect the inputs that were used to generate a specific response. |
| `OPENAI_LIST_MESSAGES` | List Messages | Tool to list messages in an Assistants thread to fetch the assistant's generated outputs after a run completes. Use after polling a run to completion to retrieve assistant responses. |
| `OPENAI_LIST_MODELS` | List models | Tool to list available models scoped to the current account/organization — some public models may be absent due to permissions. Use when you need to discover which models you can call. Results may include deprecated or assistant-incompatible models; confirm a specific model's status via OPENAI_RETRIEVE_MODEL before using it in OPENAI_CREATE_ASSISTANT or other calls. Use model IDs exactly as returned — misspelled or unsupported IDs cause validation errors. Models vary in capability (vision, file-based features, context size); inspect metadata before assuming support. Filter results by id, owned_by, or status to handle large result sets efficiently. |
| `OPENAI_LIST_RUNS` | List Runs | Tool to list runs belonging to a thread. Use when you need to view all runs that have been executed on a specific thread, for example to find a run by status or to track execution history. |
| `OPENAI_LIST_RUN_STEPS` | List Run Steps | Tool to list run steps for an Assistants API run to track detailed execution progress, inspect tool calls, and view message creation events. Use after creating a run to monitor its step-by-step execution or to retrieve tool call details after completion. When polling to monitor active runs, use reasonable intervals to avoid rate-limit errors. |
| `OPENAI_LIST_SKILLS` | List Skills | Tool to list skills. Use when you need to discover available skills or retrieve skill IDs by name or description. |
| `OPENAI_LIST_THREADS` | List ChatKit Threads | Tool to list ChatKit threads with pagination and filtering. Use when you need to retrieve threads for a specific user or browse all available threads. |
| `OPENAI_LIST_VECTOR_STORE_FILES` | List Vector Store Files | Tool to list files in a vector store. Use when you need to retrieve all files attached to a specific vector store. |
| `OPENAI_LIST_VECTOR_STORES` | List Vector Stores | Tool to list vector stores to discover available vector stores by name or metadata. Use when you need to view all vector stores in your organization. |
| `OPENAI_LIST_VIDEOS` | List Videos | Tool to list all video generation jobs. Use when you need to view all videos that have been created or check the status of multiple video generation tasks. |
| `OPENAI_MODIFY_ASSISTANT` | Modify Assistant | Tool to modify an existing assistant. Use when you need to update an assistant's configuration, model, instructions, tools, or metadata. |
| `OPENAI_MODIFY_MESSAGE` | Modify Message | Tool to modify an existing message's metadata in a thread. Use when you need to update metadata associated with a specific message. |
| `OPENAI_MODIFY_RUN` | Modify Run | Tool to modify a run's metadata. Use when you need to update metadata information for a specific run. |
| `OPENAI_MODIFY_THREAD` | Modify thread | Tool to modify an existing thread's metadata. Use after obtaining the thread ID when you need to update metadata. |
| `OPENAI_MODIFY_VECTOR_STORE` | Modify Vector Store | Tool to modify an existing vector store. Use when you need to update the name, expiration policy, or metadata of a vector store. |
| `OPENAI_RETRIEVE_ASSISTANT` | Retrieve assistant | Tool to retrieve details of a specific assistant. Use when you need to confirm assistant metadata before performing further operations. Prefer this over repeated OPENAI_CREATE_ASSISTANT calls to avoid cluttering assistant configurations. |
| `OPENAI_RETRIEVE_BATCH` | Retrieve Batch | Tool to retrieve a batch by ID to check its status, progress, and results. Use when you need to check the current state of a batch job, view completion statistics, or access output file IDs. |
| `OPENAI_RETRIEVE_CONTAINER` | Retrieve container | Tool to retrieve details of a specific container by its ID. Use when you need to fetch container configuration, status, or metadata. |
| `OPENAI_RETRIEVE_CONTAINER_FILE` | Retrieve container file | Tool to retrieve metadata for a specific file in a container. Use when you need to get details about a container file including its size, path, and creation timestamp. |
| `OPENAI_RETRIEVE_CONTAINER_FILE_CONTENT` | Retrieve container file content | Tool to retrieve the content of a file within a container. Use when you need to download or access the actual file data from a container. |
| `OPENAI_RETRIEVE_ENGINE` | Retrieve engine | Tool to retrieve details of a specific engine. Use when you need to confirm engine availability and metadata before using it. |
| `OPENAI_RETRIEVE_FILE` | Retrieve file | Tool to retrieve information about a specific file. Use when you need to check the details, status, or metadata of a file by its ID. |
| `OPENAI_RETRIEVE_FINE_TUNING_JOB` | Retrieve fine-tuning job | Tool to retrieve information about a fine-tuning job. Use when you need to check the status, progress, or details of an existing fine-tuning job. |
| `OPENAI_RETRIEVE_MODEL` | Retrieve model | Tool to retrieve details of a specific model, confirming its metadata (ownership, created date) and verifying access under your org — a model appearing in OPENAI_LIST_MODELS does not guarantee access. Use before depending on a model ID in downstream calls. |
| `OPENAI_RETRIEVE_RUN` | Retrieve run | Tool to retrieve an Assistants run by ID to check status, errors, and usage. Use when polling run status until it reaches a terminal state (completed, failed, cancelled, incomplete, expired) before reading thread messages. |
| `OPENAI_RETRIEVE_THREAD` | Retrieve thread | Tool to retrieve metadata of a specific thread by its ID — does not include message bodies or assistant replies (those require a completed run and separate message listing). Use before listing messages. After thread creation or message appending, wait 2–3 seconds before polling; use exponential backoff (1s, 2s, 4s) for HTTP 429 responses. Avoid concurrent OPENAI_CREATE_MESSAGE writes during retrieval to prevent inconsistent message states. |
| `OPENAI_RETRIEVE_VECTOR_STORE_FILE_CONTENT` | Retrieve Vector Store File Content | Tool to retrieve the parsed contents of a vector store file. Use when you need to access the actual text content that has been processed and chunked for a file in a vector store. |
| `OPENAI_RUN_GRADER` | Run grader | Tool to run a grader to evaluate model performance on a given sample. Use when you need to evaluate model outputs against expected results using various grading strategies like string matching, text similarity, custom Python code, or model-based scoring. |
| `OPENAI_SEARCH_VECTOR_STORE` | Search Vector Store | Tool to search a vector store for relevant chunks based on a query and file attributes filter. Use when you need to find specific content within a vector store for retrieval-augmented generation (RAG) or custom search workflows. |
| `OPENAI_SUBMIT_TOOL_OUTPUTS_TO_RUN` | Submit Tool Outputs to Run | Tool to submit tool call outputs to continue a run that requires action. Use when a run has status 'requires_action' and required_action.type is 'submit_tool_outputs'. All tool outputs must be submitted together in a single request. |
| `OPENAI_UPDATE_CHAT_COMPLETION` | Update Chat Completion | Tool to update metadata for a stored chat completion. Use when you need to modify tags or metadata for a completion that was created with store=true. |
| `OPENAI_UPDATE_CONVERSATION` | Update Conversation | Tool to update a conversation's metadata. Use when you need to modify metadata for an existing conversation. |
| `OPENAI_UPDATE_EVAL` | Update Eval | Tool to update certain properties of an evaluation (name and metadata). Use when you need to rename an evaluation or update its metadata without changing the data source configuration or testing criteria. |
| `OPENAI_UPDATE_VECTOR_STORE_FILE_ATTRIBUTES` | Update Vector Store File Attributes | Tool to update custom attributes on a vector store file. Use when you need to add or modify metadata associated with files in a vector store. |
| `OPENAI_UPLOAD_FILE` | Upload file | Tool to upload a file for use across OpenAI endpoints. Use before referencing the file in tasks like fine-tuning. Returns a `file_id` that must be explicitly passed to endpoints like OPENAI_CREATE_ASSISTANT or OPENAI_CREATE_MESSAGE — uploaded files are not auto-discovered. Files exceeding size limits return HTTP 413; split or compress oversized content before retrying. Verify uploads via OPENAI_LIST_FILES before referencing `file_id`, as unsupported formats may be silently rejected. Repeated uploads accumulate against storage quotas; delete unused files with OPENAI_DELETE_FILE. |
| `OPENAI_VALIDATE_GRADER` | Validate grader configuration | Tool to validate a grader configuration for fine-tuning jobs. Use when you need to verify that a grader is correctly configured before using it in a fine-tuning job. |
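Several of the tools above (for example `OPENAI_RETRIEVE_THREAD`) recommend polling with exponential backoff on HTTP 429 responses. A minimal stdlib sketch of that 1s/2s/4s schedule, using a hypothetical `fetch` callable that raises on rate limits:

```python
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limited) response from the API."""


def poll_with_backoff(fetch, max_attempts=3, base_delay=1.0):
    """Call fetch(), retrying on rate limits with delays of 1s, 2s, 4s, ..."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In practice `fetch` would wrap the actual tool call; the schedule and attempt count are illustrative, not a Composio or OpenAI requirement.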

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Openai MCP server is an implementation of the Model Context Protocol that connects your AI agent to Openai. It gives your agent structured, permission-based access so it can perform Openai operations on your behalf.
With Composio's managed implementation, you don't have to create your own developer app: the managed server lets you prototype quickly and go from zero to one faster. For production, if you're building an end product, we recommend using your own credentials.

## Step-by-step Guide

### Prerequisites

Before starting, make sure you have:
- A Google API key for Gemini models
- A Composio account and API key
- Python 3.9 or later installed
- Basic familiarity with Python

### 1. Getting API Keys for Google and Composio

Google API Key
- Go to [Google AI Studio](https://aistudio.google.com/app/apikey) and create an API key.
- Copy the key and keep it safe. You will put this in GOOGLE_API_KEY.
Composio API Key and User ID
- Log in to the [Composio dashboard](https://dashboard.composio.dev?utm_source=toolkits&utm_medium=framework_docs).
- Go to Settings → API Keys and copy your Composio API key. Use this for COMPOSIO_API_KEY.
- Decide on a stable user identifier to scope sessions, often your email or a user ID. Use this for COMPOSIO_USER_ID.
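If you don't already have a canonical user ID, one common pattern (an assumption, not a Composio requirement) is to derive a stable identifier from the user's email, so the same user always maps to the same session scope:

```python
import hashlib


def stable_user_id(email: str) -> str:
    """Derive a deterministic, non-reversible identifier from an email address."""
    normalized = email.strip().lower()  # treat case/whitespace variants as the same user
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```

Using the raw email directly also works; hashing just avoids putting the address itself in identifiers.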

### 2. Install dependencies

Inside your virtual environment, install the required packages.
What's happening:
- google-adk is Google's Agent Development Kit
- composio connects your agent to Openai via MCP
- python-dotenv loads environment variables
```bash
pip install google-adk composio python-dotenv
```

### 3. Set up ADK project

Set up a new Google ADK project.
What's happening:
- This creates an agent folder with a root agent file and .env file
```bash
adk create my_agent
```

### 4. Set environment variables

Save all your credentials in the .env file.
What's happening:
- GOOGLE_API_KEY authenticates with Google's Gemini models
- COMPOSIO_API_KEY authenticates with Composio
- COMPOSIO_USER_ID identifies the user for session management
```bash
GOOGLE_API_KEY=your-google-api-key
COMPOSIO_API_KEY=your-composio-api-key
COMPOSIO_USER_ID=your-user-id-or-email
```
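Under the hood, `python-dotenv` reads these `KEY=VALUE` pairs into the process environment. A rough stdlib sketch of that behavior (the real library also handles quoting, comments, and variable interpolation):

```python
import os


def load_env_text(text: str) -> None:
    """Parse KEY=VALUE lines and export them, skipping blanks and # comments."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        # like python-dotenv's default, don't override variables already set
        os.environ.setdefault(key.strip(), value.strip())


load_env_text("# credentials\nCOMPOSIO_USER_ID=you@example.com")
```

This is only to show the file format; in the project itself, `load_dotenv()` in the next step does this for you.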

### 5. Import modules and validate environment

What's happening:
- os reads environment variables
- Composio is the main Composio SDK client
- GoogleProvider declares that you are using Google ADK as the agent runtime
- Agent is the Google ADK LLM agent class
- McpToolset lets the ADK agent call MCP tools over HTTP
```python
import os
import warnings

from composio import Composio
from composio_google import GoogleProvider
from dotenv import load_dotenv
from google.adk.agents.llm_agent import Agent
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams
from google.adk.tools.mcp_tool.mcp_toolset import McpToolset

load_dotenv()

warnings.filterwarnings("ignore", message=".*BaseAuthenticatedTool.*")

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not GOOGLE_API_KEY:
    raise ValueError("GOOGLE_API_KEY is not set in the environment.")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set in the environment.")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set in the environment.")
```

### 6. Create Composio client and Tool Router session

What's happening:
- Authenticates to Composio with your API key
- Declares Google ADK as the provider
- Spins up a short-lived MCP endpoint for your user and selected toolkit
- Stores the MCP HTTP URL for the ADK MCP integration
```python
composio_client = Composio(api_key=COMPOSIO_API_KEY, provider=GoogleProvider())

composio_session = composio_client.create(
    user_id=COMPOSIO_USER_ID,
    toolkits=["openai"],
)

COMPOSIO_MCP_URL = composio_session.mcp.url
print(f"Composio MCP URL: {COMPOSIO_MCP_URL}")
```

### 7. Set up the McpToolset and create the Agent

What's happening:
- Connects the ADK agent to the Composio MCP endpoint through McpToolset
- Uses Gemini as the model powering the agent
- Lists exact tool names in the instruction to reduce misnamed tool calls
```python
composio_toolset = McpToolset(
    connection_params=StreamableHTTPConnectionParams(
        url=COMPOSIO_MCP_URL,
        headers={"x-api-key": COMPOSIO_API_KEY}
    )
)

root_agent = Agent(
    model="gemini-2.5-flash",
    name="composio_agent",
    description="An agent that uses Composio tools to perform actions.",
    instruction=(
        "You are a helpful assistant connected to Composio. "
        "You have the following tools available: "
        "COMPOSIO_SEARCH_TOOLS, COMPOSIO_MULTI_EXECUTE_TOOL, "
        "COMPOSIO_MANAGE_CONNECTIONS, COMPOSIO_REMOTE_BASH_TOOL, COMPOSIO_REMOTE_WORKBENCH. "
        "Use these tools to help users with Openai operations."
    ),
    tools=[composio_toolset],
)

print("\nAgent setup complete. You can now run this agent directly ;)")
```

### 8. Run the agent

Execute the agent from the project root.
What's happening:
- adk run my_agent runs the agent in CLI mode
- adk web opens a web UI where you can chat with the agent interactively
```bash
# Run in CLI mode
adk run my_agent

# Or run in web UI mode
adk web
```

## Complete Code

```python
import os
import warnings

from composio import Composio
from composio_google import GoogleProvider
from dotenv import load_dotenv
from google.adk.agents.llm_agent import Agent
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams
from google.adk.tools.mcp_tool.mcp_toolset import McpToolset

load_dotenv()
warnings.filterwarnings("ignore", message=".*BaseAuthenticatedTool.*")

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not GOOGLE_API_KEY:
    raise ValueError("GOOGLE_API_KEY is not set in the environment.")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set in the environment.")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set in the environment.")

composio_client = Composio(api_key=COMPOSIO_API_KEY, provider=GoogleProvider())

composio_session = composio_client.create(
    user_id=COMPOSIO_USER_ID,
    toolkits=["openai"],
)

COMPOSIO_MCP_URL = composio_session.mcp.url


composio_toolset = McpToolset(
    connection_params=StreamableHTTPConnectionParams(
        url=COMPOSIO_MCP_URL,
        headers={"x-api-key": COMPOSIO_API_KEY}
    )
)

root_agent = Agent(
    model="gemini-2.5-flash",
    name="composio_agent",
    description="An agent that uses Composio tools to perform actions.",
    instruction=(
        "You are a helpful assistant connected to Composio. "
        "You have the following tools available: "
        "COMPOSIO_SEARCH_TOOLS, COMPOSIO_MULTI_EXECUTE_TOOL, "
        "COMPOSIO_MANAGE_CONNECTIONS, COMPOSIO_REMOTE_BASH_TOOL, COMPOSIO_REMOTE_WORKBENCH. "
        "Use these tools to help users with Openai operations."
    ),
    tools=[composio_toolset],
)

print("\nAgent setup complete. You can now run this agent directly ;)")
```

## Conclusion

You've successfully integrated Openai with Google ADK through Composio's MCP Tool Router. Your agent can now interact with Openai using natural language commands.
Key takeaways:
- The Tool Router approach dynamically routes requests to the appropriate Openai tools
- Environment variables keep your credentials secure and separate from code
- Clear agent instructions reduce tool calling errors
- The ADK web UI provides an interactive interface for testing and development
You can extend this setup by adding more toolkits to the toolkits array in your session configuration.
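For example, a session that exposes Openai alongside other apps might look like this (the extra toolkit slugs are illustrative; check the Composio dashboard for the exact slugs of the apps you have connected):

```python
# Hypothetical multi-toolkit session; the commented call requires the composio
# SDK, a valid API key, and connected accounts for each toolkit.
toolkits = ["openai", "gmail", "github"]  # "gmail" and "github" are example slugs

# composio_session = composio_client.create(
#     user_id=COMPOSIO_USER_ID,
#     toolkits=toolkits,
# )
```

The rest of the setup is unchanged: the same MCP endpoint serves tools from every toolkit in the list.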

## How to build Openai MCP Agent with another framework

- [ChatGPT](https://composio.dev/toolkits/openai/framework/chatgpt)
- [OpenAI Agents SDK](https://composio.dev/toolkits/openai/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/openai/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/openai/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/openai/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/openai/framework/codex)
- [Cursor](https://composio.dev/toolkits/openai/framework/cursor)
- [VS Code](https://composio.dev/toolkits/openai/framework/vscode)
- [OpenCode](https://composio.dev/toolkits/openai/framework/opencode)
- [OpenClaw](https://composio.dev/toolkits/openai/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/openai/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/openai/framework/cli)
- [LangChain](https://composio.dev/toolkits/openai/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/openai/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/openai/framework/mastra-ai)
- [LlamaIndex](https://composio.dev/toolkits/openai/framework/llama-index)
- [CrewAI](https://composio.dev/toolkits/openai/framework/crew-ai)

## Related Toolkits

- [Composio](https://composio.dev/toolkits/composio) - Composio is an integration platform that connects AI agents with hundreds of business tools. It streamlines authentication and lets you trigger actions across services—no custom code needed.
- [Composio search](https://composio.dev/toolkits/composio_search) - Composio search is a unified web search toolkit spanning travel, e-commerce, news, financial markets, images, and more. It lets you and your apps tap into up-to-date web data from a single, easy-to-integrate service.
- [Perplexityai](https://composio.dev/toolkits/perplexityai) - Perplexityai delivers natural, conversational AI models for generating human-like text. Instantly get context-aware, high-quality responses for chat, search, or complex workflows.
- [Browser tool](https://composio.dev/toolkits/browser_tool) - Browser tool is a virtual browser integration that lets AI agents interact with the web programmatically. It enables automated browsing, scraping, and action-taking from any AI workflow.
- [Ai ml api](https://composio.dev/toolkits/ai_ml_api) - Ai ml api is a suite of AI/ML models for natural language and image tasks. It provides fast, scalable access to advanced AI capabilities for your apps and workflows.
- [Aivoov](https://composio.dev/toolkits/aivoov) - Aivoov is an AI-powered text-to-speech platform offering 1,000+ voices in over 150 languages. Instantly turn written content into natural, human-like audio for any application.
- [All images ai](https://composio.dev/toolkits/all_images_ai) - All-Images.ai is an AI-powered image generation and management platform. It helps you create, search, and organize images effortlessly with advanced AI capabilities.
- [Anthropic administrator](https://composio.dev/toolkits/anthropic_administrator) - Anthropic administrator is an API for managing Anthropic organizational resources like members, workspaces, and API keys. It helps you automate admin tasks and streamline resource management across your Anthropic organization.
- [Api labz](https://composio.dev/toolkits/api_labz) - Api labz is a platform offering a suite of AI-driven APIs and workflow tools. It helps developers automate tasks and build smarter, more efficient applications.
- [Apipie ai](https://composio.dev/toolkits/apipie_ai) - Apipie ai is an AI model aggregator offering a single API for accessing top AI models from multiple providers. It helps developers build cost-efficient, latency-optimized AI solutions without juggling multiple integrations.
- [Astica ai](https://composio.dev/toolkits/astica_ai) - Astica ai provides APIs for computer vision, NLP, and voice synthesis. Integrate advanced AI features into your app with a single API key.
- [Bigml](https://composio.dev/toolkits/bigml) - BigML is a machine learning platform that lets you build, train, and deploy predictive models from your data. Its intuitive interface and robust API make machine learning accessible and efficient.
- [Botbaba](https://composio.dev/toolkits/botbaba) - Botbaba is a platform for building, managing, and deploying conversational AI chatbots across messaging channels. It streamlines chatbot automation, making it easier to integrate AI into customer interactions.
- [Botpress](https://composio.dev/toolkits/botpress) - Botpress is an open-source platform for building, deploying, and managing chatbots. It helps teams automate conversations and deliver rich, interactive messaging experiences.
- [Chatbotkit](https://composio.dev/toolkits/chatbotkit) - Chatbotkit is a platform for building and managing AI-powered chatbots using robust APIs and SDKs. It lets you easily add conversational AI to your apps for better user engagement.
- [Cody](https://composio.dev/toolkits/cody) - Cody is an AI assistant built for businesses, trained on your company's knowledge and data. It delivers instant answers and insights, tailored for your team.
- [Context7 MCP](https://composio.dev/toolkits/context7_mcp) - Context7 MCP delivers live, version-specific code docs and examples right from the source. It helps developers and AI agents instantly retrieve authoritative programming info—no more out-of-date docs.
- [Customgpt](https://composio.dev/toolkits/customgpt) - CustomGPT.ai lets you build and deploy chatbots tailored to your own data and business needs. Get precise and context-aware AI conversations without writing code.
- [Datarobot](https://composio.dev/toolkits/datarobot) - Datarobot is a machine learning platform that automates model development, deployment, and monitoring. It empowers organizations to quickly gain predictive insights from large datasets.
- [Deepgram](https://composio.dev/toolkits/deepgram) - Deepgram is an AI-powered speech recognition platform for accurate audio transcription and understanding. It enables fast, scalable speech-to-text with advanced audio intelligence features.

## Frequently Asked Questions

### What are the differences between Tool Router MCP and Openai MCP?

With a standalone Openai MCP server, agents and LLMs can only access a fixed set of Openai tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Openai and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with Google ADK?

Yes, you can. Google ADK fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Openai tools.

### Can I manage the permissions and scopes for Openai while using Tool Router?

Yes, absolutely. You can configure which Openai scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Openai data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
