How to integrate Mistral AI MCP with Vercel AI SDK v6


Introduction

This guide walks you through connecting Mistral AI to Vercel AI SDK v6 using the Composio Tool Router. By the end, you'll have a working Mistral AI agent that can summarize a research paper in simple terms, generate Python code for sorting a list, or explain the difference between AI and ML, all through natural language commands.

This guide will help you understand how to give your Vercel AI SDK agent real control over a Mistral AI account through Composio's Mistral AI MCP server.

Before we dive in, let's take a quick look at the key ideas and tools involved.

TL;DR

Here's what you'll learn:
  • How to set up and configure a Vercel AI SDK agent with Mistral AI integration
  • Using Composio's Tool Router to dynamically load and access Mistral AI tools
  • Creating an MCP client connection using HTTP transport
  • Building an interactive CLI chat interface with conversation history management
  • Handling tool calls and results within the Vercel AI SDK framework

What is Vercel AI SDK?

The Vercel AI SDK is a TypeScript library for building AI-powered applications. It provides tools for creating agents that can use external services and maintain conversation state.

Key features include:

  • streamText: Core function for streaming responses with real-time tool support
  • MCP Client: Built-in support for Model Context Protocol via @ai-sdk/mcp
  • Step Counting: Control multi-step tool execution with stopWhen: stepCountIs()
  • OpenAI Provider: Native integration with OpenAI models

What is the Mistral AI MCP server, and what's possible with it?

The Mistral AI MCP server is an implementation of the Model Context Protocol that connects AI agents and assistants (such as Claude or Cursor) directly to your Mistral AI account. It provides structured, secure access to your Mistral AI models, so your agent can perform actions like generating text, summarizing content, answering questions, extracting structured information, and handling advanced language tasks on your behalf.

  • Text generation and completion: Have your agent produce coherent, context-aware text responses, complete prompts, or generate creative content leveraging Mistral's advanced models.
  • Summarization and paraphrasing: Ask your agent to summarize lengthy documents or rephrase input text for improved clarity or brevity.
  • Question answering and information extraction: Let your agent answer questions, extract key facts, or pull structured data from unstructured content automatically.
  • Content classification and sentiment analysis: Enable your agent to categorize text, detect topics, or analyze sentiment to inform downstream workflows.
  • Conversational AI and dialogue management: Build rich, multi-turn conversations or chatbots that handle context, intent, and user queries seamlessly using Mistral's models.

Supported Tools

Tools
  • Append to conversation: Append new entries to an existing conversation in Mistral AI.
  • Create Agent: Create a new AI agent with custom configuration (Beta).
  • Create Agents Completion: Generate completions using a Mistral AI agent with specific instructions and tools.
  • Create Audio Transcription: Transcribe audio files to text using Mistral AI's Voxtral models.
  • Create Chat Completion: Generate conversational responses from Mistral AI models.
  • Create Chat Moderation: Classify chat content for moderation purposes across 9 categories.
  • Create Embeddings: Generate vector embeddings for input text using Mistral AI embedding models.
  • Create FIM Completion: Generate code completions using fill-in-the-middle functionality.
  • Create library: Create a new document library.
  • Create library share: Create or update sharing permissions for a library.
  • Create Moderation: Classify text content for moderation purposes across 9 categories.
  • Create OCR: Extract text and structured data from images and documents using Mistral AI's OCR capabilities.
  • Create or Update Agent Alias: Create or update an agent version alias.
  • Delete agent: Permanently delete an agent by its ID (Beta).
  • Delete Conversation: Delete a conversation by its ID (Beta).
  • Delete File: Delete a file by its ID from Mistral AI.
  • Delete library: Permanently delete a library and all of its documents from Mistral AI.
  • Delete library document: Permanently delete a document from a Mistral AI library.
  • Delete library share: Remove sharing permissions for a library from a user, workspace, or organization.
  • Download File: Download the content of a previously uploaded file from Mistral AI.
  • Get Agent: Retrieve details of a specific Mistral AI agent by its ID.
  • Get Agent Version: Retrieve a specific version of an agent (Beta).
  • Get Conversation: Retrieve details of a specific conversation.
  • Get Conversation History: Retrieve the full history of a conversation in Mistral AI.
  • Get Conversation Messages: Retrieve all messages from a Mistral AI conversation.
  • Get document extracted text URL: Retrieve a signed URL to download the extracted text from a document in a Mistral AI library.
  • Get document signed URL: Get a signed URL to download a document from a Mistral AI library.
  • Get Document Status: Retrieve the processing status of a document in a Mistral AI library.
  • Get Document Text Content: Retrieve the extracted text content of a specific document from a Mistral AI library (Beta).
  • Get File Signed URL: Get a time-limited signed URL for downloading a file from Mistral AI.
  • Get library: Retrieve detailed information about a specific library.
  • Get Library Document: Retrieve metadata for a specific document in a Mistral AI library.
  • Get Model: Retrieve detailed information about a specific Mistral AI model by its ID.
  • List agent aliases: Retrieve all aliases for an agent version.
  • List Agents: List all configured agents (Beta).
  • List Agent Versions: List all versions of a specific agent.
  • List Batch Jobs: Retrieve a list of all batch jobs with optional filtering and pagination.
  • List Conversations: List all created conversations (Beta).
  • List Files: List all files available to the user.
  • List Fine Tuning Jobs: List fine-tuning jobs with optional filtering and pagination.
  • List libraries: List all document libraries accessible to your organization.
  • List Library Documents: List all documents in a Mistral AI document library.
  • List library shares: List all sharing permissions for a document library.
  • List Models: Retrieve all available Mistral AI models, including base and fine-tuned models.
  • Reprocess document: Reprocess a document in a Mistral AI library (Beta).
  • Restart Conversation: Restart a conversation from a specific point (Beta).
  • Retrieve File: Retrieve metadata of a file uploaded to Mistral AI.
  • Start Conversation: Start a new conversation with a Mistral AI agent or base model.
  • Update Agent: Update an existing agent's configuration.
  • Update agent version: Update the current version of an agent (Beta).
  • Update library: Update an existing document library's properties.
  • Update library document: Update the metadata of a document in a Mistral AI library.
  • Upload File: Upload a file to Mistral AI for use in fine-tuning, batch processing, or OCR.
  • Upload Library Document: Upload a document to a Mistral AI library for use with RAG-enabled agents.

What is the Composio tool router, and how does it fit here?

What is Composio SDK?

The Composio SDK helps agents find the right tools for a task at runtime. You can plug in multiple toolkits (like Gmail, HubSpot, and GitHub), and the agent will identify the relevant app and action to complete multi-step workflows. This can reduce token usage and improve the reliability of tool calls. Read more here: Getting started with Composio SDK

The tool router generates a secure MCP URL that your agents can access to perform actions.

How the Composio SDK works

The Composio SDK follows a three-phase workflow:

  1. Discovery: Searches for tools matching your task and returns relevant toolkits with their details.
  2. Authentication: Checks for active connections. If missing, creates an auth config and returns a connection URL via Auth Link.
  3. Execution: Executes the action using the authenticated connection.
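The authentication phase boils down to a simple decision: reuse an active connection, or return an Auth Link so the user can connect. As a rough sketch of that check, using hypothetical types (the real Composio SDK performs this server-side; these shapes are illustrative, not its actual API):

```typescript
// Hypothetical shapes, for illustration only; the Composio SDK handles this internally.
type ConnectionStatus = "ACTIVE" | "EXPIRED" | "FAILED";
type Connection = { toolkit: string; status: ConnectionStatus };

// Phase 2 (Authentication): an Auth Link is only needed when no active connection exists.
function needsAuthLink(toolkit: string, connections: Connection[]): boolean {
  return !connections.some((c) => c.toolkit === toolkit && c.status === "ACTIVE");
}
```

If this returns true, the router responds with a connection URL; once the user completes the flow, execution proceeds over the authenticated connection.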

Step-by-step Guide

Prerequisites

Before you begin, make sure you have:
  • Node.js and npm installed
  • A Composio account with API key
  • An OpenAI API key

Getting API Keys for OpenAI and Composio

OpenAI API Key
  • Go to the OpenAI dashboard and create an API key. You'll need credits to use the models, or you can connect to another model provider.
  • Keep the API key safe.
Composio API Key
  • Log in to the Composio dashboard.
  • Navigate to your API settings and generate a new API key.
  • Store this key securely as you'll need it for authentication.

Install required dependencies

bash
npm install @ai-sdk/openai @ai-sdk/mcp @composio/core ai dotenv

First, install the necessary packages for your project.

What you're installing:

  • @ai-sdk/openai: Vercel AI SDK's OpenAI provider
  • @ai-sdk/mcp: MCP client for Vercel AI SDK
  • @composio/core: Composio SDK for tool integration
  • ai: Core Vercel AI SDK
  • dotenv: Environment variable management

Set up environment variables

bash
OPENAI_API_KEY=your_openai_api_key_here
COMPOSIO_API_KEY=your_composio_api_key_here
COMPOSIO_USER_ID=your_user_id_here

Create a .env file in your project root.

What's needed:

  • OPENAI_API_KEY: Your OpenAI API key for GPT model access
  • COMPOSIO_API_KEY: Your Composio API key for tool access
  • COMPOSIO_USER_ID: A unique identifier for the user session
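If you'd rather fail fast with a single helper than repeat the same check per variable, one option is a small utility of our own (not part of the AI SDK or Composio):

```typescript
// Our own helper: read a required env var or throw with a clear message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set`);
  return value;
}

// Example usage: const composioAPIKey = requireEnv("COMPOSIO_API_KEY");
```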

Import required modules and validate environment

typescript
import "dotenv/config";
import { openai } from "@ai-sdk/openai";
import { Composio } from "@composio/core";
import * as readline from "readline";
import { streamText, type ModelMessage, stepCountIs } from "ai";
import { createMCPClient } from "@ai-sdk/mcp";

const composioAPIKey = process.env.COMPOSIO_API_KEY;
const composioUserID = process.env.COMPOSIO_USER_ID;

if (!process.env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set");
if (!composioAPIKey) throw new Error("COMPOSIO_API_KEY is not set");
if (!composioUserID) throw new Error("COMPOSIO_USER_ID is not set");

const composio = new Composio({
  apiKey: composioAPIKey,
});
What's happening:
  • We're importing all necessary libraries including Vercel AI SDK's OpenAI provider and Composio
  • The dotenv/config import automatically loads environment variables
  • The MCP client import enables connection to Composio's tool server

Create Tool Router session and initialize MCP client

typescript
async function main() {
  // Create a tool router session for the user
  const session = await composio.create(composioUserID!, {
    toolkits: ["mistral_ai"],
  });

  const mcpUrl = session.mcp.url;
What's happening:
  • We're creating a Tool Router session that gives your agent access to Mistral AI tools
  • The create method takes the user ID and specifies which toolkits should be available
  • The returned mcp object contains the URL and authentication headers needed to connect to the MCP server
  • This session provides access to all Mistral AI-related tools through the MCP protocol

Connect to MCP server and retrieve tools

typescript
const mcpClient = await createMCPClient({
  transport: {
    type: "http",
    url: mcpUrl,
    headers: session.mcp.headers, // Authentication headers for the Composio MCP server
  },
});

const tools = await mcpClient.tools();
What's happening:
  • We're creating an MCP client that connects to our Composio Tool Router session via HTTP
  • The mcp.url provides the endpoint, and mcp.headers contains the authentication credentials
  • The type: "http" setting is important: Composio's Tool Router requires HTTP transport
  • tools() retrieves all available Mistral AI tools that the agent can use

Initialize conversation and CLI interface

typescript
let messages: ModelMessage[] = [];

console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
console.log(
  "Ask anything related to Mistral AI, like 'summarize this text in simple terms' or 'generate Python code for sorting a list' :)\n",
);

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  prompt: "> ",
});

rl.prompt();
What's happening:
  • We initialize an empty messages array to maintain conversation history
  • A readline interface is created to accept user input from the command line
  • Instructions are displayed to guide the user on how to interact with the agent
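Note that the messages array grows without bound in this guide. For long sessions you may want to cap it so you stay within the model's context window; a minimal sketch, using a simplified message type standing in for the SDK's ModelMessage:

```typescript
// Simplified stand-in for the AI SDK's ModelMessage type, for illustration only.
type ChatMessage = { role: "user" | "assistant" | "tool"; content: string };

// Keep only the most recent `max` messages before sending history to the model.
function trimHistory(messages: ChatMessage[], max: number): ChatMessage[] {
  return messages.length <= max ? messages : messages.slice(-max);
}
```

You could call this before each streamText invocation; a production version would also need to avoid splitting a tool call from its result.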

Handle user input and stream responses with real-time tool feedback

typescript
rl.on("line", async (userInput: string) => {
  const trimmedInput = userInput.trim();

  if (["exit", "quit", "bye"].includes(trimmedInput.toLowerCase())) {
    console.log("\nGoodbye!");
    rl.close();
    process.exit(0);
  }

  if (!trimmedInput) {
    rl.prompt();
    return;
  }

  messages.push({ role: "user", content: trimmedInput });
  console.log("\nAgent is thinking...\n");

  try {
    const stream = streamText({
      model: openai("gpt-5"),
      messages,
      tools,
      toolChoice: "auto",
      stopWhen: stepCountIs(10),
      onStepFinish: (step) => {
        for (const toolCall of step.toolCalls) {
          console.log(`[Using tool: ${toolCall.toolName}]`);
        }
        if (step.toolCalls.length > 0) {
          console.log(""); // Add space after tool calls
        }
      },
    });

    for await (const chunk of stream.textStream) {
      process.stdout.write(chunk);
    }

    console.log("\n\n---\n");

    // Get final result for message history
    const response = await stream.response;
    if (response?.messages?.length) {
      messages.push(...response.messages);
    }
  } catch (error) {
    console.error("\nAn error occurred while talking to the agent:");
    console.error(error);
    console.log(
      "\nYou can try again or restart the app if it keeps happening.\n",
    );
  } finally {
    rl.prompt();
  }
});

rl.on("close", async () => {
  await mcpClient.close();
  console.log("\n👋 Session ended.");
  process.exit(0);
});
} // closes main()

main().catch((err) => {
  console.error("Fatal error:", err);
  process.exit(1);
});
What's happening:
  • We use streamText instead of generateText to stream responses in real-time
  • toolChoice: "auto" allows the model to decide when to use Mistral AI tools
  • stopWhen: stepCountIs(10) allows up to 10 steps for complex multi-tool operations
  • onStepFinish callback displays which tools are being used in real-time
  • We iterate through the text stream to create a typewriter effect as the agent responds
  • The complete response is added to conversation history to maintain context
  • Errors are caught and displayed with helpful retry suggestions
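The exit handling above can also be factored into a small predicate, which keeps the handler tidy and makes the command list easy to extend (this helper is our own, not part of the SDK):

```typescript
// Our own helper: decide whether a line of user input should end the chat session.
const EXIT_COMMANDS = ["exit", "quit", "bye"];

function isExitCommand(input: string): boolean {
  return EXIT_COMMANDS.includes(input.trim().toLowerCase());
}
```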

Complete Code

Here's the complete code to get you started with Mistral AI and the Vercel AI SDK:

typescript
import "dotenv/config";
import { openai } from "@ai-sdk/openai";
import { Composio } from "@composio/core";
import * as readline from "readline";
import { streamText, type ModelMessage, stepCountIs } from "ai";
import { createMCPClient } from "@ai-sdk/mcp";

const composioAPIKey = process.env.COMPOSIO_API_KEY;
const composioUserID = process.env.COMPOSIO_USER_ID;

if (!process.env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set");
if (!composioAPIKey) throw new Error("COMPOSIO_API_KEY is not set");
if (!composioUserID) throw new Error("COMPOSIO_USER_ID is not set");

const composio = new Composio({
  apiKey: composioAPIKey,
});

async function main() {
  // Create a tool router session for the user
  const session = await composio.create(composioUserID!, {
    toolkits: ["mistral_ai"],
  });

  const mcpUrl = session.mcp.url;

  const mcpClient = await createMCPClient({
    transport: {
      type: "http",
      url: mcpUrl,
      headers: session.mcp.headers, // Authentication headers for the Composio MCP server
    },
  });

  const tools = await mcpClient.tools();

  let messages: ModelMessage[] = [];

  console.log("Chat started! Type 'exit' or 'quit' to end the conversation.\n");
  console.log(
    "Ask anything related to Mistral AI, like 'summarize this text in simple terms' or 'generate Python code for sorting a list' :)\n",
  );

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
    prompt: "> ",
  });

  rl.prompt();

  rl.on("line", async (userInput: string) => {
    const trimmedInput = userInput.trim();

    if (["exit", "quit", "bye"].includes(trimmedInput.toLowerCase())) {
      console.log("\nGoodbye!");
      rl.close();
      process.exit(0);
    }

    if (!trimmedInput) {
      rl.prompt();
      return;
    }

    messages.push({ role: "user", content: trimmedInput });
    console.log("\nAgent is thinking...\n");

    try {
      const stream = streamText({
        model: openai("gpt-5"),
        messages,
        tools,
        toolChoice: "auto",
        stopWhen: stepCountIs(10),
        onStepFinish: (step) => {
          for (const toolCall of step.toolCalls) {
            console.log(`[Using tool: ${toolCall.toolName}]`);
          }
          if (step.toolCalls.length > 0) {
            console.log(""); // Add space after tool calls
          }
        },
      });

      for await (const chunk of stream.textStream) {
        process.stdout.write(chunk);
      }

      console.log("\n\n---\n");

      // Get final result for message history
      const response = await stream.response;
      if (response?.messages?.length) {
        messages.push(...response.messages);
      }
    } catch (error) {
      console.error("\nAn error occurred while talking to the agent:");
      console.error(error);
      console.log(
        "\nYou can try again or restart the app if it keeps happening.\n",
      );
    } finally {
      rl.prompt();
    }
  });

  rl.on("close", async () => {
    await mcpClient.close();
    console.log("\n👋 Session ended.");
    process.exit(0);
  });
}

main().catch((err) => {
  console.error("Fatal error:", err);
  process.exit(1);
});

Conclusion

You've successfully built a Mistral AI agent using the Vercel AI SDK with streaming capabilities! This implementation provides a solid foundation for building AI applications with natural language interfaces and real-time feedback.

Key features of this implementation:

  • Real-time streaming responses for a better user experience with typewriter effect
  • Live tool execution feedback showing which tools are being used as the agent works
  • Dynamic tool loading through Composio's Tool Router with secure authentication
  • Multi-step tool execution with configurable step limits (up to 10 steps)
  • Comprehensive error handling for robust agent execution
  • Conversation history maintenance for context-aware responses

You can extend this further by adding custom error handling, implementing specific business logic, or integrating additional Composio toolkits to create multi-app workflows.

FAQ

What are the differences between the Tool Router MCP and the Mistral AI MCP?

With a standalone Mistral AI MCP server, agents and LLMs can only access a fixed set of Mistral AI tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Mistral AI and many other apps based on the task at hand, all through a single MCP endpoint.

Can I use Tool Router MCP with Vercel AI SDK v6?

Yes, you can. Vercel AI SDK v6 fully supports MCP integration. You get structured tool calling, message history handling, and model orchestration, while the Tool Router takes care of discovering and serving the right Mistral AI tools.

Can I manage the permissions and scopes for Mistral AI while using Tool Router?

Yes, absolutely. You can configure which Mistral AI scopes and actions are allowed when connecting your account to Composio. You can also bring your own OAuth credentials or API configuration so you keep full control over what the agent can do.

How safe is my data with Composio Tool Router?

All sensitive data such as tokens, keys, and configuration is fully encrypted at rest and in transit. Composio is SOC 2 Type 2 compliant and follows strict security practices, so your Mistral AI data and credentials are handled as safely as possible.
