How to integrate Render MCP with LlamaIndex

Introduction

This guide walks you through connecting Render to LlamaIndex using the Composio tool router. By the end, you'll have a working Render agent that can deploy the latest code to a staging service, restart a production web service, or report the current status of all your services, all through natural language commands.

In other words, you'll give your LlamaIndex agent real control over a Render account via Composio's Render MCP server.

Before we dive in, let's take a quick look at the key ideas and tools involved.

TL;DR

Here's what you'll learn:
  • Set your OpenAI and Composio API keys
  • Install LlamaIndex and Composio packages
  • Create a Composio Tool Router session for Render
  • Connect LlamaIndex to the Render MCP server
  • Build a Render-powered agent using LlamaIndex
  • Interact with Render through natural language

What is LlamaIndex?

LlamaIndex is a data framework for building LLM applications. It provides tools for connecting LLMs to external data sources and services through agents and tools.

Key features include:

  • ReAct Agent: Reasoning and acting pattern for tool-using agents
  • MCP Tools: Native support for Model Context Protocol
  • Context Management: Maintain conversation context across interactions
  • Async Support: Built for async/await patterns
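To make the ReAct pattern concrete, here is a minimal, library-free sketch of the reason-act-observe loop. The tool registry and the scripted "thought" are hypothetical stand-ins: in LlamaIndex, the LLM decides which tool to call at each step.

```python
# Minimal ReAct-style loop: reason -> act (call a tool) -> observe -> answer.
# The tools dict and hard-coded reasoning step are illustrative only.

def check_status(service: str) -> str:
    """Hypothetical tool: report a canned status for a service."""
    return f"{service} is live"

tools = {"check_status": check_status}

def react_loop(question: str) -> str:
    # Thought: the question mentions a service, so pick the status tool.
    action, arg = "check_status", "my-web-service"
    observation = tools[action](arg)      # Act, then observe the result
    return f"Answer: {observation}"       # Final answer built from the observation

print(react_loop("Is my-web-service up?"))
# → Answer: my-web-service is live
```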

What is the Render MCP server, and what's possible with it?

The Render MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants (such as Claude or Cursor) directly to your Render account. It provides structured, secure access to your cloud infrastructure, so your agent can perform actions like deploying applications, managing services, monitoring site health, restarting instances, and scaling resources on your behalf.

  • Automated application deployment: Instantly deploy new web apps or services without manual steps, letting your agent handle setup and rollouts.
  • Service monitoring and status checks: Ask your agent to check the health and uptime of your apps or services, so you’re always up to speed on what’s running smoothly—and what’s not.
  • Instance management and restarts: Enable your agent to restart, stop, or scale up/down your running services to quickly respond to changes or issues.
  • Resource scaling and configuration: Let your agent adjust resource allocations, increasing or decreasing capacity based on current needs or traffic spikes.
  • Error diagnostics and log retrieval: Have your agent fetch logs or error reports to help troubleshoot issues before they become major problems.

Supported Tools & Triggers

Tools
  • Add Header Rule: Add a custom HTTP header rule to a Render service.
  • Add or Update Secret File: Add or update a secret file for a Render service.
  • Add Resources to Environment: Add resources to a Render environment.
  • Add Route: Add redirect or rewrite rules to a Render service.
  • Create Custom Domain: Add a custom domain to a Render service.
  • Create Environment Group: Create a new environment group.
  • Create Environment: Create a new environment within a Render project.
  • Create Postgres Instance: Create a new Postgres instance on Render.
  • Create Registry Credential: Create a registry credential.
  • Delete Environment Group Variable: Remove an environment variable from an environment group.
  • Delete Environment Group Secret File: Remove a secret file from an environment group.
  • Delete Environment: Delete a specified environment.
  • Delete Key Value: Delete a Key Value instance.
  • Delete Owner Log Stream: Delete a log stream for an owner.
  • Delete Owner Metrics Stream: Delete a metrics stream for a workspace.
  • Delete Registry Credential: Delete a registry credential.
  • Delete Secret File: Delete a secret file from a Render service.
  • Delete Service: Delete a service.
  • Disconnect Blueprint: Disconnect a blueprint from your Render account.
  • Get Active Connections: Get active connection count metrics for Render resources.
  • Get Bandwidth Sources: Get bandwidth usage breakdown by traffic source.
  • Get CPU Usage: Retrieve CPU usage metrics for Render resources.
  • Get CPU Limit: Retrieve CPU limit metrics for Render resources.
  • Get Disk Capacity: Get disk capacity metrics for Render resources.
  • Get Disk Usage: Retrieve disk usage metrics for Render resources.
  • Get Instance Count: Get instance count metrics for Render resources.
  • Get Memory Usage: Get memory usage metrics for one or more resources.
  • Get Memory Limit: Get memory limit metrics for Render resources over a specified time range.
  • Get Memory Target: Get memory target metrics for Render resources.
  • Get User: Get the authenticated user.
  • Link Service to Environment Group: Link a service to an environment group.
  • List Application Filter Values: List queryable instance values for application metrics.
  • List Blueprints: List all blueprints.
  • List Deploys: List recent deploys for a Render service with pagination and filtering.
  • List Disks: List all disks.
  • List Environment Groups: List environment groups.
  • List Environments: List environments for a project.
  • List Environment Variables for Service: List all environment variables configured directly on a Render service (with pagination).
  • List Instances: List instances of a service.
  • List Key Value Instances: List all Key Value instances.
  • List Logs: List logs for a specific workspace and resource.
  • List Log Label Values: List log label values for a workspace.
  • List Maintenance Runs: List maintenance runs.
  • List Notification Overrides: List notification overrides for services.
  • List Workspace Members: List workspace members.
  • List Owners: List owners (users and teams).
  • List Postgres Instances: List Postgres instances.
  • List Postgres Exports: List all exports for a Postgres instance.
  • List PostgreSQL Users: List PostgreSQL user credentials for a Render PostgreSQL database instance.
  • List Projects: List projects.
  • List Registry Credentials: List registry credentials.
  • List Resource Log Streams: List resource log stream overrides.
  • List Routes: List redirect/rewrite rules for a service.
  • List Secret Files: List secret files for a Render service.
  • List Services: List all services.
  • List Task Runs: List task runs.
  • List Tasks: List tasks.
  • List Webhooks: List all webhooks.
  • List Workflows: List workflows.
  • List Workflow Versions: List workflow versions.
  • Restart Service: Restart a service.
  • Resume Service: Resume a suspended service.
  • Retrieve Custom Domain: Retrieve a specific custom domain for a service.
  • Retrieve Deploy: Retrieve a deploy.
  • Retrieve Environment Group: Retrieve a specific environment group by ID.
  • Retrieve Environment Variable: Retrieve a specific environment variable from a Render environment group.
  • Retrieve Environment Group Secret File: Retrieve a secret file from an environment group.
  • Retrieve Environment Variable: Retrieve a specific environment variable from a Render service.
  • Retrieve Owner: Retrieve a specific owner (workspace) by ID.
  • Retrieve Owner Notification Settings: Retrieve notification settings for a specific owner (workspace).
  • Retrieve Postgres Instance: Retrieve a specific Postgres instance.
  • Retrieve Project: Retrieve a specific project by ID.
  • Retrieve Registry Credential: Retrieve a registry credential by ID.
  • Retrieve Secret File: Retrieve a secret file from a Render service.
  • Retrieve Service: Retrieve a specific service by ID.
  • Stream Task Runs Events: Stream real-time task run events via Server-Sent Events (SSE).
  • Subscribe to Logs: Subscribe to real-time logs via WebSocket connection.
  • Suspend Service: Suspend a service.
  • Trigger Deploy: Trigger a new deploy for a specified service.
  • Update Environment Group: Update an environment group's name.
  • Update Environment Group Variable: Add or update an environment variable in an environment group.
  • Update Environment Group Secret File: Add or update a secret file in an environment group.
  • Update Environment Variable: Add or update an environment variable for a Render service.
  • Update Environment Variables for Service: Update environment variables for a Render service.
  • Update Header Rules: Replace all header rules for a Render service.
  • Update Owner Log Stream: Update log stream configuration for an owner.
  • Update Owner Notification Settings: Update notification settings for a specific owner (workspace).
  • Update Postgres Instance: Update a Postgres instance configuration.
  • Update Project: Update a project's name.
  • Update Registry Credential: Update a registry credential.
  • Update Resource Log Stream: Update log stream override for a resource.
  • Update Routes: Update redirect/rewrite rules for a service.
  • Update Secret Files for Service: Update secret files for a Render service.
  • Verify Custom Domain: Verify DNS configuration for a custom domain.

What is the Composio tool router, and how does it fit here?

What is Composio SDK?

The Composio SDK helps agents find the right tools for a task at runtime. You can plug in multiple toolkits (like Gmail, HubSpot, and GitHub), and the agent will identify the relevant app and action to complete multi-step workflows. This can reduce token usage and improve the reliability of tool calls. Read more here: Getting started with Composio SDK

The tool router generates a secure MCP URL that your agents can access to perform actions.

How the Composio SDK works

The Composio SDK follows a three-phase workflow:

  1. Discovery: Searches for tools matching your task and returns relevant toolkits with their details.
  2. Authentication: Checks for active connections. If missing, creates an auth config and returns a connection URL via Auth Link.
  3. Execution: Executes the action using the authenticated connection.
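The three phases can be sketched with plain Python stand-ins. Every name below (the catalog, the auth link URL, the action slugs) is a hypothetical mock, not the real Composio API:

```python
# Hypothetical mock of the discovery -> authentication -> execution workflow.
CATALOG = {"render": ["RENDER_LIST_SERVICES", "RENDER_TRIGGER_DEPLOY"]}
CONNECTIONS = set()  # toolkits the user has already authorised

def discover(task: str) -> str:
    # Phase 1: map the task description to a relevant toolkit.
    return "render" if "deploy" in task or "service" in task else "unknown"

def authenticate(toolkit: str) -> str:
    # Phase 2: reuse an active connection, or hand back an auth link to complete.
    if toolkit in CONNECTIONS:
        return "connected"
    return f"https://auth.example.com/connect/{toolkit}"  # hypothetical Auth Link

def execute(toolkit: str, action: str) -> str:
    # Phase 3: run the action over the (now authenticated) connection.
    assert action in CATALOG[toolkit]
    return f"executed {action}"

toolkit = discover("deploy latest code to staging")
CONNECTIONS.add(toolkit)  # pretend the user finished the auth link flow
print(execute(toolkit, "RENDER_TRIGGER_DEPLOY"))
# → executed RENDER_TRIGGER_DEPLOY
```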

Step-by-step Guide

Prerequisites

Before you begin, make sure you have:
  • Python 3.8 or higher installed
  • A Composio account with the API key
  • An OpenAI API key
  • A Render account and project
  • Basic familiarity with async Python

Getting API Keys for OpenAI, Composio, and Render

OpenAI API key (OPENAI_API_KEY)
  • Go to the OpenAI dashboard
  • Create an API key if you don't have one
  • Assign it to OPENAI_API_KEY in .env
Composio API key and user ID
  • Log into the Composio dashboard
  • Copy your API key from Settings
    • Use this as COMPOSIO_API_KEY
  • Pick a stable user identifier (email or ID)
    • Use this as COMPOSIO_USER_ID
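COMPOSIO_USER_ID just needs to be consistent between runs so your Render connection is reused. One simple convention (shown here as an illustration, not a Composio requirement) is to normalise an email address:

```python
def stable_user_id(email: str) -> str:
    # Normalise so "Alice@Example.com " and "alice@example.com" map to the same user.
    return email.strip().lower()

print(stable_user_id("  Alice@Example.com "))
# → alice@example.com
```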

Installing dependencies

Create a new Python project and install the necessary dependencies:

pip install composio-llamaindex llama-index llama-index-llms-openai llama-index-tools-mcp python-dotenv

  • composio-llamaindex: Composio's LlamaIndex integration
  • llama-index: Core LlamaIndex framework
  • llama-index-llms-openai: OpenAI LLM integration
  • llama-index-tools-mcp: MCP client for LlamaIndex
  • python-dotenv: Environment variable management

Set environment variables

Create a .env file in your project root:

OPENAI_API_KEY=your-openai-api-key
COMPOSIO_API_KEY=your-composio-api-key
COMPOSIO_USER_ID=your-user-id

These credentials will be used to:

  • Authenticate with OpenAI's GPT-5 model
  • Connect to Composio's Tool Router
  • Identify your Composio user session for Render access

Import modules

Create a new file called render_llamaindex_agent.py and import the required modules:

import asyncio
import os
import signal

import dotenv

from composio import Composio
from composio_llamaindex import LlamaIndexProvider
from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.workflow import Context
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

dotenv.load_dotenv()

Key imports:

  • asyncio: For async/await support
  • Composio: Main client for Composio services
  • LlamaIndexProvider: Adapts Composio tools for LlamaIndex
  • ReActAgent: LlamaIndex's reasoning and action agent
  • BasicMCPClient: Connects to MCP endpoints
  • McpToolSpec: Converts MCP tools to LlamaIndex format

Load environment variables and initialize Composio

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not OPENAI_API_KEY:
    raise ValueError("OPENAI_API_KEY is not set in the environment")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set in the environment")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set in the environment")

What's happening:

This ensures missing credentials cause early, clear errors before the agent attempts to initialise.
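The three checks above can be collapsed into a single helper. This is an optional refactor, not part of the original script:

```python
import os

def require_env(name: str) -> str:
    """Return the environment variable's value, or fail fast with a clear error."""
    value = os.getenv(name)
    if not value:
        raise ValueError(f"{name} is not set in the environment")
    return value

os.environ["DEMO_KEY"] = "sk-test"  # stand-in value for the demo
print(require_env("DEMO_KEY"))
# → sk-test
```

With this helper, the top of the script becomes `OPENAI_API_KEY = require_env("OPENAI_API_KEY")` and so on.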

Create a Tool Router session and build the agent function

async def build_agent() -> ReActAgent:
    composio_client = Composio(
        api_key=COMPOSIO_API_KEY,
        provider=LlamaIndexProvider(),
    )

    session = composio_client.create(
        user_id=COMPOSIO_USER_ID,
        toolkits=["render"],
    )

    mcp_url = session.mcp.url
    print(f"Composio MCP URL: {mcp_url}")

    mcp_client = BasicMCPClient(mcp_url, headers={"x-api-key": COMPOSIO_API_KEY})
    mcp_tool_spec = McpToolSpec(client=mcp_client)
    tools = await mcp_tool_spec.to_tool_list_async()

    llm = OpenAI(model="gpt-5")

    description = "An agent that uses Composio Tool Router MCP tools to perform Render actions."
    system_prompt = """
    You are a helpful assistant connected to Composio Tool Router.
    Use the available tools to answer user queries and perform Render actions.
    """
    return ReActAgent(
        tools=tools,
        llm=llm,
        description=description,
        system_prompt=system_prompt,
        verbose=True,
    )

What's happening here:

  • We create a Composio client using your API key and configure it with the LlamaIndex provider
  • We then create a tool router MCP session for your user, specifying the toolkits we want to use (in this case, render)
  • The session returns an MCP HTTP endpoint URL that acts as a gateway to all your configured tools
  • LlamaIndex will connect to this endpoint to dynamically discover and use the available Render tools.
  • The MCP tools are converted to LlamaIndex-compatible tools and plugged into the agent

Create an interactive chat loop

async def chat_loop(agent: ReActAgent) -> None:
    ctx = Context(agent)
    print("Type 'quit', 'exit', or Ctrl+C to stop.")

    while True:
        try:
            user_input = input("\nYou: ").strip()
        except (KeyboardInterrupt, EOFError):
            print("\nBye!")
            break

        if not user_input or user_input.lower() in {"quit", "exit"}:
            print("Bye!")
            break

        try:
            print("Agent: ", end="", flush=True)
            handler = agent.run(user_input, ctx=ctx)

            async for event in handler.stream_events():
                # Stream token-by-token from LLM responses
                if hasattr(event, "delta") and event.delta:
                    print(event.delta, end="", flush=True)
                # Show tool calls as they happen
                elif hasattr(event, "tool_name"):
                    print(f"\n[Using tool: {event.tool_name}]", flush=True)

            # Get final response
            response = await handler
            print()  # Newline after streaming
        except KeyboardInterrupt:
            print("\n[Interrupted]")
            continue
        except Exception as e:
            print(f"\nError: {e}")

What's happening here:

  • We're creating a direct terminal interface for chatting with your Render account
  • The LLM's responses are streamed to the CLI for faster feedback
  • The agent uses context to maintain conversation history
  • You can type 'quit' or 'exit' to stop the chat loop gracefully
  • Agent responses and any errors are displayed in a clear, readable format
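The loop's exit handling can be factored into a small, testable predicate (an optional refactor of the logic above):

```python
def should_quit(user_input: str) -> bool:
    # Empty input, 'quit', or 'exit' (any case) ends the chat loop.
    text = user_input.strip()
    return not text or text.lower() in {"quit", "exit"}

print(should_quit("EXIT"))              # → True
print(should_quit("list my services"))  # → False
```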

Define the main entry point

async def main() -> None:
    agent = await build_agent()
    await chat_loop(agent)

if __name__ == "__main__":
    # Handle Ctrl+C gracefully
    signal.signal(signal.SIGINT, lambda s, f: (print("\nBye!"), exit(0)))
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nBye!")

What's happening here:

  • We're orchestrating the entire application flow
  • The agent gets built with proper error handling
  • Then we kick off the interactive chat loop so you can start talking to Render

Run the agent

python render_llamaindex_agent.py

When prompted, authenticate and authorise your agent with Render, then start asking questions.

Complete Code

Here's the complete code to get you started with Render and LlamaIndex:

import asyncio
import os
import signal
import dotenv

from composio import Composio
from composio_llamaindex import LlamaIndexProvider
from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.workflow import Context
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

dotenv.load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not OPENAI_API_KEY:
    raise ValueError("OPENAI_API_KEY is not set")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set")

async def build_agent() -> ReActAgent:
    composio_client = Composio(
        api_key=COMPOSIO_API_KEY,
        provider=LlamaIndexProvider(),
    )

    session = composio_client.create(
        user_id=COMPOSIO_USER_ID,
        toolkits=["render"],
    )

    mcp_url = session.mcp.url
    print(f"Composio MCP URL: {mcp_url}")

    mcp_client = BasicMCPClient(mcp_url, headers={"x-api-key": COMPOSIO_API_KEY})
    mcp_tool_spec = McpToolSpec(client=mcp_client)
    tools = await mcp_tool_spec.to_tool_list_async()

    llm = OpenAI(model="gpt-5")
    description = "An agent that uses Composio Tool Router MCP tools to perform Render actions."
    system_prompt = """
    You are a helpful assistant connected to Composio Tool Router.
    Use the available tools to answer user queries and perform Render actions.
    """
    return ReActAgent(
        tools=tools,
        llm=llm,
        description=description,
        system_prompt=system_prompt,
        verbose=True,
    )

async def chat_loop(agent: ReActAgent) -> None:
    ctx = Context(agent)
    print("Type 'quit', 'exit', or Ctrl+C to stop.")

    while True:
        try:
            user_input = input("\nYou: ").strip()
        except (KeyboardInterrupt, EOFError):
            print("\nBye!")
            break

        if not user_input or user_input.lower() in {"quit", "exit"}:
            print("Bye!")
            break

        try:
            print("Agent: ", end="", flush=True)
            handler = agent.run(user_input, ctx=ctx)

            async for event in handler.stream_events():
                # Stream token-by-token from LLM responses
                if hasattr(event, "delta") and event.delta:
                    print(event.delta, end="", flush=True)
                # Show tool calls as they happen
                elif hasattr(event, "tool_name"):
                    print(f"\n[Using tool: {event.tool_name}]", flush=True)

            # Get final response
            response = await handler
            print()  # Newline after streaming
        except KeyboardInterrupt:
            print("\n[Interrupted]")
            continue
        except Exception as e:
            print(f"\nError: {e}")

async def main() -> None:
    agent = await build_agent()
    await chat_loop(agent)

if __name__ == "__main__":
    # Handle Ctrl+C gracefully
    signal.signal(signal.SIGINT, lambda s, f: (print("\nBye!"), exit(0)))
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nBye!")

Conclusion

You've successfully connected Render to LlamaIndex through Composio's Tool Router MCP layer. Key takeaways:
  • Tool Router dynamically exposes Render tools through an MCP endpoint
  • LlamaIndex's ReActAgent handles reasoning and orchestration; Composio handles integrations
  • The agent becomes more capable without increasing prompt size
  • Async Python provides clean, efficient execution of agent workflows
You can easily extend this to other toolkits like Gmail, Notion, Stripe, GitHub, and more by adding them to the toolkits parameter.

FAQ

What are the differences between Tool Router MCP and the Render MCP server?

With a standalone Render MCP server, agents and LLMs can only access the fixed set of Render tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Render and many other apps based on the task at hand, all through a single MCP endpoint.

Can I use Tool Router MCP with LlamaIndex?

Yes, you can. LlamaIndex fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Render tools.

Can I manage the permissions and scopes for Render while using Tool Router?

Yes, absolutely. You can configure which Render scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Render data and credentials are handled as safely as possible.
