# How to integrate Textit MCP with LlamaIndex

```json
{
  "title": "How to integrate Textit MCP with LlamaIndex",
  "toolkit": "Textit",
  "toolkit_slug": "textit",
  "framework": "LlamaIndex",
  "framework_slug": "llama-index",
  "url": "https://composio.dev/toolkits/textit/framework/llama-index",
  "markdown_url": "https://composio.dev/toolkits/textit/framework/llama-index.md",
  "updated_at": "2026-05-12T10:28:17.083Z"
}
```

## Introduction

This guide walks you through connecting Textit to LlamaIndex using the Composio Tool Router. By the end, you'll have a working Textit agent that can create a campaign for event reminders, list contact groups for segmentation, and retrieve details about a specific campaign, all through natural language commands.
You'll also learn how to give your LlamaIndex agent real control over a Textit account through Composio's Textit MCP server.
Before we dive in, let's take a quick look at the key ideas and tools involved.

## Also integrate Textit with

- [OpenAI Agents SDK](https://composio.dev/toolkits/textit/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/textit/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/textit/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/textit/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/textit/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/textit/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/textit/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/textit/framework/cli)
- [Google ADK](https://composio.dev/toolkits/textit/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/textit/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/textit/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/textit/framework/mastra-ai)
- [CrewAI](https://composio.dev/toolkits/textit/framework/crew-ai)

## TL;DR

Here's what you'll learn:
- Set your OpenAI and Composio API keys
- Install LlamaIndex and Composio packages
- Create a Composio Tool Router session for Textit
- Connect LlamaIndex to the Textit MCP server
- Build a Textit-powered agent using LlamaIndex
- Interact with Textit through natural language

## What is LlamaIndex?

LlamaIndex is a data framework for building LLM applications. It provides tools for connecting LLMs to external data sources and services through agents and tools.
Key features include:
- ReAct Agent: Reasoning and acting pattern for tool-using agents
- MCP Tools: Native support for Model Context Protocol
- Context Management: Maintain conversation context across interactions
- Async Support: Built for async/await patterns
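To make the ReAct pattern concrete, here is a toy, framework-free sketch of the reason-act-observe loop that `ReActAgent` automates. The hard-coded "thought" and the `add` tool are illustrative assumptions only; in LlamaIndex, the LLM chooses tools based on their descriptions:

```python
# Toy sketch of the ReAct loop: Thought -> Action -> Observation -> Answer.
# A real ReActAgent asks the LLM to produce the Thought and pick the Action.

def add(a: int, b: int) -> int:
    """A trivial 'tool' the agent can call."""
    return a + b

TOOLS = {"add": add}

def toy_react(question: str) -> str:
    # Thought: decide which tool applies (hard-coded here, LLM-chosen in practice).
    tool_name, args = "add", (2, 3)
    # Action: invoke the chosen tool and capture the Observation.
    observation = TOOLS[tool_name](*args)
    # Answer: fold the observation into the final response.
    return f"{question} -> {observation}"

print(toy_react("what is 2 + 3?"))  # what is 2 + 3? -> 5
```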

## What is the Textit MCP server, and what's possible with it?

The Textit MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants (Claude, Cursor, and others) directly to your Textit account. It provides structured, secure access to your chatbots, contacts, campaigns, and messaging flows, so your agent can create campaigns, manage contact groups, organize labels, retrieve broadcasts, and handle messaging operations on your behalf.
- Automated campaign management: Let your agent create, retrieve, or list messaging campaigns, helping you launch outreach efforts to targeted contact groups without lifting a finger.
- Contact group creation and segmentation: Easily segment your audience by having your agent create or delete contact groups, keeping your communication organized and relevant.
- Custom label organization: Enable your agent to create new message labels, allowing for smarter categorization and easier tracking of important conversations or topics.
- Broadcast and archive retrieval: Effortlessly fetch lists of broadcasts or message archives, so your agent can provide summaries or analyze past messaging performance.
- Contact management: Direct your agent to delete outdated or unnecessary contacts, ensuring your database stays clean and up-to-date automatically.

## Supported Tools

| Tool slug | Name | Description |
|---|---|---|
| `TEXTIT_CREATE_CAMPAIGN` | Create Campaign | Tool to create a new campaign in TextIt. Use when you need to start a messaging campaign for a specific contact group. |
| `TEXTIT_CREATE_GROUP` | Create Contact Group | Tool to create a new contact group. Use when segmenting contacts before sending messages. |
| `TEXTIT_CREATE_LABEL` | Create Label | Tool to create a new message label. Use when you need to categorize messages. Example: Create a label 'Important'. Creates a label under your organization using the TextIt Labels API. |
| `TEXTIT_DELETE_CONTACT` | Delete Contact | Tool to delete an existing contact. Use when you have the contact's UUID and need to remove it. |
| `TEXTIT_DELETE_GROUP` | Delete Contact Group | Tool to delete an existing contact group. Use after ensuring the group has no active triggers or campaigns. |
| `TEXTIT_DELETE_LABEL` | Delete Label | Tool to delete a message label by UUID. Use when you need to remove an existing label from your TextIt workspace. |
| `TEXTIT_GET_CAMPAIGN` | Get Campaign | Tool to retrieve details about a specific campaign. Use when you have the campaign's UUID and need its full metadata. |
| `TEXTIT_GET_WORKSPACE` | Get Workspace | Tool to retrieve current workspace details including name, country, languages, and timezone. Use when you need workspace configuration information. |
| `TEXTIT_LIST_ARCHIVES` | List Archives | Tool to retrieve a list of message and run archives. Use when you need to browse or manage existing archives after authenticating. |
| `TEXTIT_LIST_BROADCASTS` | List Broadcasts | Tool to list broadcasts. Use when you need to retrieve broadcasts with optional filters and pagination. |
| `TEXTIT_LIST_CAMPAIGN_EVENTS2` | List Campaign Events 2 | Tool to retrieve campaign events with optional filtering. Use when you need to list scheduled triggers within campaigns, optionally filtering by event UUID or campaign UUID. |
| `TEXTIT_LIST_CAMPAIGNS` | List Campaigns | Tool to list campaigns. Use after authentication to retrieve campaigns, optionally filtering by uuid or date range. |
| `TEXTIT_LIST_CHANNELS` | List Channels | Tool to list channels. Use when you need to retrieve a paginated list of your organization's channels after confirming authentication. |
| `TEXTIT_LIST_CLASSIFIERS` | List Classifiers | Tool to list NLU classifiers configured for your organization. Use when you need to retrieve natural language understanding classifiers (wit.ai, luis, bothub) after confirming authentication. |
| `TEXTIT_LIST_CONTACTS` | List Contacts | Tool to retrieve a list of contacts. Use when you need to fetch contacts with optional filters (UUID, URN, group, or modified date). Use after authenticating your client. |
| `TEXTIT_LIST_FIELDS` | List custom contact fields | Tool to retrieve a list of custom contact fields. Use when you need to view or filter all defined contact fields with pagination and optional search. |
| `TEXTIT_LIST_FLOWS` | List Flows | Tool to retrieve a list of flows for your organization. Use when you need to fetch automated conversation flows with optional filters (UUID, type, archived status, or modified date). |
| `TEXTIT_LIST_FLOW_STARTS` | List Flow Starts | Tool to retrieve a list of manual flow starts. Use when you need to fetch flow start records with optional filters and pagination. |
| `TEXTIT_LIST_GLOBALS` | List Globals | Tool to list global variables. Use when you need to retrieve all workspace-level variables after authenticating. |
| `TEXTIT_LIST_GROUPS2` | List Groups | Tool to list contact groups for your organization. Use when you need to fetch groups with optional filtering by uuid or name. |
| `TEXTIT_LIST_LABELS2` | List Labels 2 | Tool to retrieve a list of message labels for your organization. Use when you need to filter labels by UUID or name. |
| `TEXTIT_LIST_MESSAGES` | List Messages | Tool to retrieve a list of messages. Use when you need to fetch messages with optional filters (UUID, folder, contact, broadcast, or date range). Results are paginated. |
| `TEXTIT_LIST_RESTHOOK_EVENTS` | List Resthook Events | Tool to retrieve recent resthook events for your organization. Use when you need to inspect webhook events that have been triggered, optionally filtered by resthook slug. Events are returned in reverse chronological order. |
| `TEXTIT_LIST_RESTHOOKS` | List Resthooks | Tool to list configured resthooks (webhooks). Use when you need to retrieve the resthooks configured in your TextIt account. |
| `TEXTIT_LIST_RESTHOOK_SUBSCRIBERS` | List Resthook Subscribers | Tool to list webhook subscribers for your organization's resthooks. Use when you need to retrieve the target URLs that receive webhook events for specific resthooks. |
| `TEXTIT_LIST_RUNS` | List Runs | Tool to retrieve a list of flow runs. Use when you need to filter or browse run history by flow, contact, or status. |
| `TEXTIT_LIST_TICKETS` | List Tickets | Tool to retrieve support tickets for your organization. Use when you need to fetch tickets with optional filters (UUID, contact, topic, or assignee). Returns paginated ticket data. |
| `TEXTIT_LIST_TOPICS2` | List Topics V2 | Tool to list topics in the workspace for categorizing tickets. Use when you need to retrieve topics, optionally filtered by UUID. |
| `TEXTIT_LIST_USERS` | List Users | Tool to retrieve a list of user logins in your workspace with their roles and teams. Use when you need to fetch users with optional UUID filter. Results are ordered by newest created first. |
| `TEXTIT_SEND_BROADCAST` | Send Broadcast | Tool to send a new broadcast message. Use after composing message translations and selecting recipients (urns, contacts, or groups). |
| `TEXTIT_UPDATE_CONTACT` | Update Contact | Tool to update an existing contact. Use after identifying the contact's UUID or URN and preparing details. |
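If you want an agent that can only read from Textit, one option is to filter the tool list before wiring it into the agent. The helper below is a hypothetical sketch that keeps only `LIST_`/`GET_` slugs, relying on the naming convention visible in the table above:

```python
# Keep only read-only Textit tools, based on the slug naming convention above.
READ_ONLY_PREFIXES = ("TEXTIT_LIST_", "TEXTIT_GET_")

def read_only_slugs(slugs):
    """Return the subset of tool slugs that only read data."""
    return [s for s in slugs if s.startswith(READ_ONLY_PREFIXES)]

slugs = ["TEXTIT_CREATE_CAMPAIGN", "TEXTIT_LIST_CAMPAIGNS", "TEXTIT_GET_WORKSPACE"]
print(read_only_slugs(slugs))  # ['TEXTIT_LIST_CAMPAIGNS', 'TEXTIT_GET_WORKSPACE']
```

With LlamaIndex you would apply the same filter to the tool objects returned by `McpToolSpec` (matching on each tool's name) before passing them to `ReActAgent`.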

## Supported Triggers

None listed.

## Creating MCP Server - Stand-alone vs Composio SDK

The Textit MCP server is an implementation of the Model Context Protocol that connects your AI agent to Textit. It provides structured and secure access so your agent can perform Textit operations on your behalf through a secure, permission-based interface.
With Composio's managed implementation, you don't have to create or host your own developer app. For production, if you're building an end product, we recommend using your own credentials. The managed server helps you prototype fast and get from zero to one quicker.

## Step-by-step Guide

### Prerequisites

Before you begin, make sure you have:
- Python 3.8+ or Node.js 16+ installed
- A Composio account and API key
- An OpenAI API key
- A Textit account and project
- Basic familiarity with async Python or TypeScript

### 1. Getting API Keys for OpenAI, Composio, and Textit

Generate an OpenAI API key from your OpenAI account and a Composio API key from your Composio dashboard. You don't need a separate Textit API key up front: Composio prompts you to connect and authorize your Textit account the first time the agent uses a Textit tool.

### 2. Installing dependencies

Install the Composio and LlamaIndex packages for your runtime:
```bash
pip install composio-llamaindex llama-index llama-index-llms-openai llama-index-tools-mcp python-dotenv
```

```bash
npm install @composio/core @composio/llamaindex @llamaindex/openai @llamaindex/tools @llamaindex/workflow dotenv
```

### 3. Set environment variables

Create a `.env` file in your project root. These credentials will be used to:
- Authenticate with OpenAI's GPT-5 model
- Connect to Composio's Tool Router
- Identify your Composio user session for Textit access
```bash
OPENAI_API_KEY=your-openai-api-key
COMPOSIO_API_KEY=your-composio-api-key
COMPOSIO_USER_ID=your-user-id
```
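Step 5 below checks each variable individually; if you prefer a single pass, a small stdlib helper like the hypothetical one below (not part of the Composio SDK) reports every missing key at once:

```python
REQUIRED = ("OPENAI_API_KEY", "COMPOSIO_API_KEY", "COMPOSIO_USER_ID")

def missing_env(environ) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# Example with only one key set; in your script, pass os.environ.
print(missing_env({"OPENAI_API_KEY": "sk-example"}))
# ['COMPOSIO_API_KEY', 'COMPOSIO_USER_ID']
```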

### 4. Import modules

Import the Composio SDK, the LlamaIndex agent and MCP helpers, and load your environment variables.
```python
import asyncio
import os
import signal
import dotenv

from composio import Composio
from composio_llamaindex import LlamaIndexProvider
from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.workflow import Context
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

dotenv.load_dotenv()
```

```typescript
import "dotenv/config";
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

import { Composio } from "@composio/core";
import { LlamaindexProvider } from "@composio/llamaindex";

import { mcp } from "@llamaindex/tools";
import { agent as createAgent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";
```

### 5. Load environment variables and initialize Composio

Read each key from the environment and fail fast if any are missing.
```python
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not OPENAI_API_KEY:
    raise ValueError("OPENAI_API_KEY is not set in the environment")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set in the environment")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set in the environment")
```

```typescript
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const COMPOSIO_API_KEY = process.env.COMPOSIO_API_KEY;
const COMPOSIO_USER_ID = process.env.COMPOSIO_USER_ID;

if (!OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set");
if (!COMPOSIO_API_KEY) throw new Error("COMPOSIO_API_KEY is not set");
if (!COMPOSIO_USER_ID) throw new Error("COMPOSIO_USER_ID is not set");
```

### 6. Create a Tool Router session and build the agent function

What's happening here:
- We create a Composio client using your API key and configure it with the LlamaIndex provider
- We then create a tool router MCP session for your user, specifying the toolkits we want to use (in this case, textit)
- The session returns an MCP HTTP endpoint URL that acts as a gateway to all your configured tools
- LlamaIndex will connect to this endpoint to dynamically discover and use the available Textit tools.
- The MCP tools are mapped to LlamaIndex-compatible tools and plugged into the agent.
```python
async def build_agent() -> ReActAgent:
    composio_client = Composio(
        api_key=COMPOSIO_API_KEY,
        provider=LlamaIndexProvider(),
    )

    session = composio_client.create(
        user_id=COMPOSIO_USER_ID,
        toolkits=["textit"],
    )

    mcp_url = session.mcp.url
    print(f"Composio MCP URL: {mcp_url}")

    mcp_client = BasicMCPClient(mcp_url, headers={"x-api-key": COMPOSIO_API_KEY})
    mcp_tool_spec = McpToolSpec(client=mcp_client)
    tools = await mcp_tool_spec.to_tool_list_async()

    llm = OpenAI(model="gpt-5")

    description = "An agent that uses Composio Tool Router MCP tools to perform Textit actions."
    system_prompt = """
    You are a helpful assistant connected to Composio Tool Router.
    Use the available tools to answer user queries and perform Textit actions.
    """
    return ReActAgent(tools=tools, llm=llm, description=description, system_prompt=system_prompt, verbose=True)
```

```typescript
async function buildAgent() {

  console.log(`Initializing Composio client for user ${COMPOSIO_USER_ID!}...`);

  const composio = new Composio({
    apiKey: COMPOSIO_API_KEY,
    provider: new LlamaindexProvider(),
  });

  const session = await composio.create(
    COMPOSIO_USER_ID!,
    {
      toolkits: ["textit"],
    },
  );

  const mcpUrl = session.mcp.url;
  console.log(`Composio Tool Router MCP URL: ${mcpUrl}`);

  const server = mcp({
    url: mcpUrl,
    clientName: "composio_tool_router_with_llamaindex",
    requestInit: {
      headers: {
        "x-api-key": COMPOSIO_API_KEY!,
      },
    },
    // verbose: true,
  });

  const tools = await server.tools();

  const llm = openai({ apiKey: OPENAI_API_KEY, model: "gpt-5" });

  const agent = createAgent({
    name: "composio_tool_router_with_llamaindex",
    description:
      "An agent that uses Composio Tool Router MCP tools to perform Textit actions.",
    systemPrompt:
      "You are a helpful assistant connected to Composio Tool Router. " +
      "Use the available tools to answer user queries and perform Textit actions.",
    llm,
    tools,
  });

  return agent;
}
```

### 7. Create an interactive chat loop

The chat loop streams tokens as the model responds, surfaces tool calls as they happen, and exits cleanly on 'quit', 'exit', or Ctrl+C.
```python
async def chat_loop(agent: ReActAgent) -> None:
    ctx = Context(agent)
    print("Type 'quit', 'exit', or Ctrl+C to stop.")

    while True:
        try:
            user_input = input("\nYou: ").strip()
        except (KeyboardInterrupt, EOFError):
            print("\nBye!")
            break

        if not user_input or user_input.lower() in {"quit", "exit"}:
            print("Bye!")
            break

        try:
            print("Agent: ", end="", flush=True)
            handler = agent.run(user_input, ctx=ctx)

            async for event in handler.stream_events():
                # Stream token-by-token from LLM responses
                if hasattr(event, "delta") and event.delta:
                    print(event.delta, end="", flush=True)
                # Show tool calls as they happen
                elif hasattr(event, "tool_name"):
                    print(f"\n[Using tool: {event.tool_name}]", flush=True)

            # Wait for the run to finish
            await handler
            print()  # Newline after streaming
        except KeyboardInterrupt:
            print("\n[Interrupted]")
            continue
        except Exception as e:
            print(f"\nError: {e}")
```

```typescript
async function chatLoop(agent: ReturnType<typeof createAgent>) {
  const rl = readline.createInterface({ input, output });

  console.log("Type 'quit' or 'exit' to stop.");

  while (true) {
    let userInput: string;

    try {
      userInput = (await rl.question("\nYou: ")).trim();
    } catch {
      console.log("\nAgent: Bye!");
      break;
    }

    if (!userInput) {
      continue;
    }

    const lower = userInput.toLowerCase();
    if (lower === "quit" || lower === "exit") {
      console.log("Agent: Bye!");
      break;
    }

    try {
      process.stdout.write("Agent: ");

      const stream = agent.runStream(userInput);
      let finalResult: any = null;

      for await (const event of stream) {
        // The event.data contains the streamed content
        const data: any = event.data;

        // Check for streaming delta content
        if (data?.delta) {
          process.stdout.write(data.delta);
        }

        // Store final result for fallback
        if (data?.result || data?.message) {
          finalResult = data;
        }
      }

      // If no streaming happened, show the final result
      if (finalResult) {
        const answer =
          finalResult.result ??
          finalResult.message?.content ??
          finalResult.message ??
          "";
        if (answer && typeof answer === "string" && !answer.includes("[object")) {
          process.stdout.write(answer);
        }
      }

      console.log(); // New line after streaming completes
    } catch (err: any) {
      console.error("\nAgent error:", err?.message ?? err);
    }
  }

  rl.close();
}
```

### 8. Define the main entry point

What's happening here:
- We're orchestrating the entire application flow
- The agent gets built with proper error handling
- Then we kick off the interactive chat loop so you can start talking to Textit
```python
async def main() -> None:
    agent = await build_agent()
    await chat_loop(agent)

if __name__ == "__main__":
    # Handle Ctrl+C gracefully
    signal.signal(signal.SIGINT, lambda s, f: (print("\nBye!"), exit(0)))
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nBye!")
```

```typescript
async function main() {
  try {
    const agent = await buildAgent();
    await chatLoop(agent);
  } catch (err) {
    console.error("Failed to start agent:", err);
    process.exit(1);
  }
}

main();
```

### 9. Run the agent

When prompted, authenticate and authorize your agent with Textit, then start asking questions.
```bash
python llamaindex_agent.py
```

```bash
npx ts-node llamaindex-agent.ts
```

## Complete Code

```python
import asyncio
import os
import signal
import dotenv

from composio import Composio
from composio_llamaindex import LlamaIndexProvider
from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.workflow import Context
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

dotenv.load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
COMPOSIO_API_KEY = os.getenv("COMPOSIO_API_KEY")
COMPOSIO_USER_ID = os.getenv("COMPOSIO_USER_ID")

if not OPENAI_API_KEY:
    raise ValueError("OPENAI_API_KEY is not set")
if not COMPOSIO_API_KEY:
    raise ValueError("COMPOSIO_API_KEY is not set")
if not COMPOSIO_USER_ID:
    raise ValueError("COMPOSIO_USER_ID is not set")

async def build_agent() -> ReActAgent:
    composio_client = Composio(
        api_key=COMPOSIO_API_KEY,
        provider=LlamaIndexProvider(),
    )

    session = composio_client.create(
        user_id=COMPOSIO_USER_ID,
        toolkits=["textit"],
    )

    mcp_url = session.mcp.url
    print(f"Composio MCP URL: {mcp_url}")

    mcp_client = BasicMCPClient(mcp_url, headers={"x-api-key": COMPOSIO_API_KEY})
    mcp_tool_spec = McpToolSpec(client=mcp_client)
    tools = await mcp_tool_spec.to_tool_list_async()

    llm = OpenAI(model="gpt-5")
    description = "An agent that uses Composio Tool Router MCP tools to perform Textit actions."
    system_prompt = """
    You are a helpful assistant connected to Composio Tool Router.
    Use the available tools to answer user queries and perform Textit actions.
    """
    return ReActAgent(
        tools=tools,
        llm=llm,
        description=description,
        system_prompt=system_prompt,
        verbose=True,
    )

async def chat_loop(agent: ReActAgent) -> None:
    ctx = Context(agent)
    print("Type 'quit', 'exit', or Ctrl+C to stop.")

    while True:
        try:
            user_input = input("\nYou: ").strip()
        except (KeyboardInterrupt, EOFError):
            print("\nBye!")
            break

        if not user_input or user_input.lower() in {"quit", "exit"}:
            print("Bye!")
            break

        try:
            print("Agent: ", end="", flush=True)
            handler = agent.run(user_input, ctx=ctx)

            async for event in handler.stream_events():
                # Stream token-by-token from LLM responses
                if hasattr(event, "delta") and event.delta:
                    print(event.delta, end="", flush=True)
                # Show tool calls as they happen
                elif hasattr(event, "tool_name"):
                    print(f"\n[Using tool: {event.tool_name}]", flush=True)

            # Wait for the run to finish
            await handler
            print()  # Newline after streaming
        except KeyboardInterrupt:
            print("\n[Interrupted]")
            continue
        except Exception as e:
            print(f"\nError: {e}")

async def main() -> None:
    agent = await build_agent()
    await chat_loop(agent)

if __name__ == "__main__":
    # Handle Ctrl+C gracefully
    signal.signal(signal.SIGINT, lambda s, f: (print("\nBye!"), exit(0)))
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nBye!")
```

```typescript
import "dotenv/config";
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

import { Composio } from "@composio/core";
import { LlamaindexProvider } from "@composio/llamaindex";

import { mcp } from "@llamaindex/tools";
import { agent as createAgent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const COMPOSIO_API_KEY = process.env.COMPOSIO_API_KEY;
const COMPOSIO_USER_ID = process.env.COMPOSIO_USER_ID;

if (!OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set in the environment");
}
if (!COMPOSIO_API_KEY) {
  throw new Error("COMPOSIO_API_KEY is not set in the environment");
}
if (!COMPOSIO_USER_ID) {
  throw new Error("COMPOSIO_USER_ID is not set in the environment");
}

async function buildAgent() {

  console.log(`Initializing Composio client for user ${COMPOSIO_USER_ID!}...`);

  const composio = new Composio({
    apiKey: COMPOSIO_API_KEY,
    provider: new LlamaindexProvider(),
  });

  const session = await composio.create(
    COMPOSIO_USER_ID!,
    {
      toolkits: ["textit"],
    },
  );

  const mcpUrl = session.mcp.url;
  console.log(`Composio Tool Router MCP URL: ${mcpUrl}`);

  const server = mcp({
    url: mcpUrl,
    clientName: "composio_tool_router_with_llamaindex",
    requestInit: {
      headers: {
        "x-api-key": COMPOSIO_API_KEY!,
      },
    },
    // verbose: true,
  });

  const tools = await server.tools();

  const llm = openai({ apiKey: OPENAI_API_KEY, model: "gpt-5" });

  const agent = createAgent({
    name: "composio_tool_router_with_llamaindex",
    description:
      "An agent that uses Composio Tool Router MCP tools to perform Textit actions.",
    systemPrompt:
      "You are a helpful assistant connected to Composio Tool Router. " +
      "Use the available tools to answer user queries and perform Textit actions.",
    llm,
    tools,
  });

  return agent;
}

async function chatLoop(agent: ReturnType<typeof createAgent>) {
  const rl = readline.createInterface({ input, output });

  console.log("Type 'quit' or 'exit' to stop.");

  while (true) {
    let userInput: string;

    try {
      userInput = (await rl.question("\nYou: ")).trim();
    } catch {
      console.log("\nAgent: Bye!");
      break;
    }

    if (!userInput) {
      continue;
    }

    const lower = userInput.toLowerCase();
    if (lower === "quit" || lower === "exit") {
      console.log("Agent: Bye!");
      break;
    }

    try {
      process.stdout.write("Agent: ");

      const stream = agent.runStream(userInput);
      let finalResult: any = null;

      for await (const event of stream) {
        // The event.data contains the streamed content
        const data: any = event.data;

        // Check for streaming delta content
        if (data?.delta) {
          process.stdout.write(data.delta);
        }

        // Store final result for fallback
        if (data?.result || data?.message) {
          finalResult = data;
        }
      }

      // If no streaming happened, show the final result
      if (finalResult) {
        const answer =
          finalResult.result ??
          finalResult.message?.content ??
          finalResult.message ??
          "";
        if (answer && typeof answer === "string" && !answer.includes("[object")) {
          process.stdout.write(answer);
        }
      }

      console.log(); // New line after streaming completes
    } catch (err: any) {
      console.error("\nAgent error:", err?.message ?? err);
    }
  }

  rl.close();
}

async function main() {
  try {
    const agent = await buildAgent();
    await chatLoop(agent);
  } catch (err: any) {
    console.error("Failed to start agent:", err?.message ?? err);
    process.exit(1);
  }
}

main();
```

## Conclusion

You've successfully connected Textit to LlamaIndex through Composio's Tool Router MCP layer.
Key takeaways:
- Tool Router dynamically exposes Textit tools through an MCP endpoint
- LlamaIndex's ReActAgent handles reasoning and orchestration; Composio handles integrations
- The agent becomes more capable without increasing prompt size
- Async Python provides clean, efficient execution of agent workflows
You can easily extend this to other toolkits like Gmail, Notion, Stripe, and GitHub by adding them to the `toolkits` parameter.

## How to build Textit MCP Agent with another framework

- [OpenAI Agents SDK](https://composio.dev/toolkits/textit/framework/open-ai-agents-sdk)
- [Claude Agent SDK](https://composio.dev/toolkits/textit/framework/claude-agents-sdk)
- [Claude Code](https://composio.dev/toolkits/textit/framework/claude-code)
- [Claude Cowork](https://composio.dev/toolkits/textit/framework/claude-cowork)
- [Codex](https://composio.dev/toolkits/textit/framework/codex)
- [OpenClaw](https://composio.dev/toolkits/textit/framework/openclaw)
- [Hermes](https://composio.dev/toolkits/textit/framework/hermes-agent)
- [CLI](https://composio.dev/toolkits/textit/framework/cli)
- [Google ADK](https://composio.dev/toolkits/textit/framework/google-adk)
- [LangChain](https://composio.dev/toolkits/textit/framework/langchain)
- [Vercel AI SDK](https://composio.dev/toolkits/textit/framework/ai-sdk)
- [Mastra AI](https://composio.dev/toolkits/textit/framework/mastra-ai)
- [CrewAI](https://composio.dev/toolkits/textit/framework/crew-ai)

## Related Toolkits

- [Gmail](https://composio.dev/toolkits/gmail) - Gmail is Google's email service with powerful spam protection, search, and G Suite integration. It keeps your inbox organized and makes communication fast and reliable.
- [Outlook](https://composio.dev/toolkits/outlook) - Outlook is Microsoft's email and calendaring platform for unified communications and scheduling. It helps users stay organized with powerful email, contacts, and calendar management.
- [Slack](https://composio.dev/toolkits/slack) - Slack is a channel-based messaging platform for teams and organizations. It helps people collaborate in real time, share files, and connect all their tools in one place.
- [Gong](https://composio.dev/toolkits/gong) - Gong is a platform for video meetings, call recording, and team collaboration. It helps teams capture conversations, analyze calls, and turn insights into action.
- [Microsoft teams](https://composio.dev/toolkits/microsoft_teams) - Microsoft Teams is a collaboration platform that combines chat, meetings, and file sharing within Microsoft 365. It keeps distributed teams connected and productive through seamless virtual communication.
- [Slackbot](https://composio.dev/toolkits/slackbot) - Slackbot is a conversational automation tool for Slack that handles reminders, notifications, and automated responses. It boosts team productivity by streamlining onboarding, answering FAQs, and managing timely alerts—all right inside Slack.
- [2chat](https://composio.dev/toolkits/_2chat) - 2chat is an API platform for WhatsApp and multichannel text messaging. It streamlines chat automation, group management, and real-time messaging for developers.
- [Agent mail](https://composio.dev/toolkits/agent_mail) - Agent mail provides AI agents with dedicated email inboxes for sending, receiving, and managing emails. It empowers agents to communicate autonomously with people, services, and other agents—no human intervention needed.
- [Basecamp](https://composio.dev/toolkits/basecamp) - Basecamp is a project management and team collaboration tool by 37signals. It helps teams organize tasks, share files, and communicate efficiently in one place.
- [Chatwork](https://composio.dev/toolkits/chatwork) - Chatwork is a team communication platform with group chats, file sharing, and task management. It helps businesses boost collaboration and streamline productivity.
- [Clickmeeting](https://composio.dev/toolkits/clickmeeting) - ClickMeeting is a cloud-based platform for running online meetings and webinars. It helps businesses and individuals host, manage, and engage virtual audiences with ease.
- [Confluence](https://composio.dev/toolkits/confluence) - Confluence is Atlassian's team collaboration and knowledge management platform. It helps your team organize, share, and update documents and project content in one secure workspace.
- [Dailybot](https://composio.dev/toolkits/dailybot) - DailyBot streamlines team collaboration with chat-based standups, reminders, and polls. It keeps work flowing smoothly in your favorite messaging platforms.
- [Dialmycalls](https://composio.dev/toolkits/dialmycalls) - Dialmycalls is a mass notification service for sending voice and text messages to contacts. It helps teams and organizations quickly broadcast urgent alerts and updates.
- [Dialpad](https://composio.dev/toolkits/dialpad) - Dialpad is a cloud-based business phone and contact center system for teams. It unifies voice, video, messaging, and meetings across your devices.
- [Discord](https://composio.dev/toolkits/discord) - Discord is a real-time messaging and VoIP platform for communities and teams. It lets users chat, share media, and collaborate across public and private channels.
- [Discordbot](https://composio.dev/toolkits/discordbot) - Discordbot is an automation tool for Discord servers that handles moderation, messaging, and user engagement. It helps communities run smoothly by automating routine and complex tasks.
- [Echtpost](https://composio.dev/toolkits/echtpost) - Echtpost is a secure digital communication platform for encrypted document and message exchange. It ensures confidential data stays private and protected during transmission.
- [Egnyte](https://composio.dev/toolkits/egnyte) - Egnyte is a cloud-based platform for secure file sharing, storage, and governance. It helps teams collaborate efficiently while maintaining data compliance and security.
- [Google Meet](https://composio.dev/toolkits/googlemeet) - Google Meet is a secure video conferencing platform for virtual meetings, chat, and screen sharing. It helps teams connect, collaborate, and communicate seamlessly from anywhere.

## Frequently Asked Questions

### What are the differences in Tool Router MCP and Textit MCP?

With a standalone Textit MCP server, the agents and LLMs can only access a fixed set of Textit tools tied to that server. However, with the Composio Tool Router, agents can dynamically load tools from Textit and many other apps based on the task at hand, all through a single MCP endpoint.

### Can I use Tool Router MCP with LlamaIndex?

Yes, you can. LlamaIndex fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration while Tool Router takes care of discovering and serving the right Textit tools.

### Can I manage the permissions and scopes for Textit while using Tool Router?

Yes, absolutely. You can configure which Textit scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

### How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices so your Textit data and credentials are handled as safely as possible.

---
[See all toolkits](https://composio.dev/toolkits) · [Composio docs](https://docs.composio.dev/llms.txt)
