How to build your first AI agent with MCP in Rust

by Rohit · Aug 19, 2025 · 6 min read
MCP · AI Agents · AI Use Case

Everyone's building AI agents these days, and everyone's teaching you how to do it in Python or JavaScript. Nothing wrong with Python. It's fast to prototype with and has a mature ecosystem. But I wanted to try something different. What if we could build a multi-agent system that orchestrates different specialised agents, each connected to real-world tools via MCP (Model Context Protocol), and what if we built it in Rust?

That’s exactly why I built Codepilot, a multi-agent system that can handle Linear project management, GitHub repository operations, and Supabase tasks, all through a beautiful terminal UI.

It’s a fun side project, and if you’re curious and want to try things with Rust, maybe you'll find this useful. The source code is available on my GitHub here: rohittcodes/codepilot.

Why a Multi-Agent System and why Rust in particular?

Traditional AI agents are great, but they often struggle when you need to handle multiple domains or complex workflows. What if you want to:

  • Create a GitHub issue and link it to a Linear project.

  • Query your Supabase database and create a summary report.

  • Manage repositories across different services.

A multi-agent system solves this by having specialised agents that can collaborate and orchestrate complex workflows.

credits: Langchain

And why Rust?

Rust isn’t the usual go-to for AI, but it has some killer benefits on its side:

  • Performance: Zero-cost abstractions and memory safety mean your agent runs fast without eating resources.

  • Type Safety: Errors are caught at compile time, not when your agent’s halfway through a task.

  • Ecosystem Potential: The AI ecosystem is more mature in Python, but Rust’s async/await model and strict typing make it ideal for agents that juggle multiple tools, APIs, and tasks.

If you want to build something fast, reliable, and scalable, Rust is a solid choice. So, before we dive into building it, let’s start with the basics.

What is an AI Agent by the way?

An AI agent is a program that can understand your intent and take actions on your behalf. Think of it as an intelligent assistant that doesn't just chat; it does things. In our case, the agent understands when you're asking about Linear issues, GitHub repositories, or Supabase data, then calls the appropriate APIs to retrieve the information and combines it with a natural language response.

Agentic Architectures (credits: Langchain)

One of the key insights here is that LLMs excel at understanding intent (what you want to do), but struggle to access real-time information. By combining LLMs with APIs, we can create a program that automates tasks for you, eliminating the need for manual effort. Now you can get the best of both worlds: natural language understanding plus real-time information access.

Getting started with the Rust AI Agents

Alright, let’s build the thing. I didn’t want to overthink the setup: just a plain Rust binary project, a few crates to make async work easier, and enough structure to plug in the tools.

Setting up the project

First, we create a new Rust binary project (not a lib):

Add these dependencies to your Cargo.toml:
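Something along these lines; the exact crates and versions in the repo may differ, but this is the usual async/HTTP/JSON toolkit for a project like this:

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }      # async runtime
reqwest = { version = "0.12", features = ["json"] } # HTTP client for MCP/LLM calls
serde = { version = "1", features = ["derive"] }    # (de)serialization
serde_json = "1"
dotenvy = "0.15"                                    # loads the .env with MCP server URLs
anyhow = "1"                                        # ergonomic error handling
```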

Building my first agent in Rust

The core idea is simple: a multi-agent system of specialised agents, each exposing tools the LLM can call, with the LLM deciding which agent to use based on the user’s query. Here’s the basic structure:
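As a rough sketch of that idea (the type and function names here are illustrative, not the ones from the repo):

```rust
/// The specialised agents available to the orchestrator.
#[derive(Debug, PartialEq)]
enum AgentKind {
    Linear,
    GitHub,
    Supabase,
}

/// Crude keyword-based fallback routing. The real system asks the LLM
/// to pick the agent and only falls back to heuristics like this.
fn route(query: &str) -> Option<AgentKind> {
    let q = query.to_lowercase();
    if q.contains("linear") {
        Some(AgentKind::Linear)
    } else if q.contains("github") || q.contains("repo") || q.contains("pull request") {
        Some(AgentKind::GitHub)
    } else if q.contains("supabase") || q.contains("database") || q.contains("table") {
        Some(AgentKind::Supabase)
    } else {
        None
    }
}
```

In practice the routing decision comes from the orchestrator LLM; keyword matching is only a safety net for when the LLM's answer can't be parsed.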

Integrating Composio MCP Servers

To connect each agent with real-world APIs, we use Composio MCP Integration. These servers expose authenticated API actions your agents can call, without you having to handwrite integrations.

For Codepilot, I’ve set up MCP servers for:

  • GitHub: To handle repos, issues, PRs, etc.

  • Linear: For project and issue management

  • Supabase: For querying and updating data

You can create your own MCP server configs in a few clicks.

How to add your own MCP config in Composio

If you want to integrate your tools (or replicate what I’ve done), here’s the flow using the new Composio dashboard:

  1. Log in and go to MCP Configs.

  2. Click “Create MCP Config”

  3. Give it a name (like linear-agent or github-bot)

  4. Choose the toolkit (e.g., Linear, GitHub, Supabase)

  5. Select how you want to handle authentication.

  6. Paste your API keys or use OAuth to connect.

  7. Pick the tools you want your agent to have access to

  8. Hit “Create MCP Server”. This opens a dialog where you can copy the MCP server URL; paste it into your .env file with an appropriate variable name.
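For reference, the .env ends up looking roughly like this (the variable names are whatever your code reads; I'm making these up for illustration, and the values are the URLs copied from the dialog):

```
GITHUB_MCP_URL=...
LINEAR_MCP_URL=...
SUPABASE_MCP_URL=...
```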

Codepilot Architecture

The architecture is built around three core principles:

  1. Specialised Agents: Each agent (Linear, GitHub, Supabase) is an expert in its domain.

  2. MCP Integration: All agents connect to the MCP tools via Composio Integrations.

  3. Intelligent Orchestration: A central orchestrator that routes queries to the right agent.

The Core Components

Each agent is a specialist that knows how to work with its domain’s tools, fetched dynamically from MCP servers.
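A sketch of what a single agent might hold (illustrative field names, not the repo's exact types):

```rust
/// A tool exposed by an MCP server, as the agent sees it.
struct McpTool {
    name: String,
    description: String,
}

/// A domain-specific agent: a name, the MCP server it talks to,
/// and the tools it discovered there.
struct Agent {
    name: String,
    mcp_url: String,
    tools: Vec<McpTool>,
}

impl Agent {
    fn new(name: &str, mcp_url: &str) -> Self {
        Agent {
            name: name.into(),
            mcp_url: mcp_url.into(),
            tools: Vec::new(),
        }
    }

    /// Look up a discovered tool by name.
    fn find_tool(&self, name: &str) -> Option<&McpTool> {
        self.tools.iter().find(|t| t.name == name)
    }
}
```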

Dynamic Tool Discovery

One of the coolest features is that each agent discovers its tools dynamically from MCP servers:
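In spirit, the discovery step looks like this. In the real project the agent sends a `tools/list` request to its MCP server at startup; here the transport is stubbed out and the listing is injected, so the sketch stays self-contained:

```rust
struct McpTool {
    name: String,
    description: String,
}

struct Agent {
    tools: Vec<McpTool>,
}

impl Agent {
    /// In practice this would be `async` and hit the MCP server over
    /// the network; the listing is passed in here for illustration.
    fn discover_tools(&mut self, listed: Vec<(String, String)>) {
        self.tools = listed
            .into_iter()
            .map(|(name, description)| McpTool { name, description })
            .collect();
    }

    fn tool_names(&self) -> Vec<&str> {
        self.tools.iter().map(|t| t.name.as_str()).collect()
    }
}
```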

This means no hardcoded operations: the agents automatically adapt to whatever tools are available on their MCP servers.

What's happening here?

The system uses pure LLM-based tool selection with intelligent fallbacks. When you ask a question, here's what happens:

True LLM-Based Tool Selection

The agents use a sophisticated approach where the LLM analyses your request and mentions specific tools:
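The parsing side of that can be sketched in a few lines: after asking the LLM which tool it would use, scan its free-text reply for any of the tool names discovered from the MCP server (function name is my own, not the repo's):

```rust
/// Return the first known tool name that the LLM's reply mentions.
fn extract_tool_mention<'a>(reply: &str, known_tools: &[&'a str]) -> Option<&'a str> {
    known_tools.iter().copied().find(|t| reply.contains(*t))
}
```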

Constrained Agent Configuration

To prevent the LLM from calling internal tools, we use a constrained configuration:
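Conceptually, the constraint is a whitelist baked into the agent's configuration; every requested tool call is checked against it before execution. A sketch with illustrative field names:

```rust
/// Constrained agent configuration: the LLM call is built with only the
/// whitelisted MCP tools, so it cannot invoke anything internal.
struct AgentConfig {
    model: String,
    allowed_tools: Vec<String>,
    max_tool_calls: usize,
}

impl AgentConfig {
    /// Gate every tool call the LLM asks for.
    fn is_allowed(&self, tool: &str) -> bool {
        self.allowed_tools.iter().any(|t| t == tool)
    }
}
```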

Clear Tool Constraints

The system prompt explicitly constrains the LLM to use only the available MCP tools:
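A hypothetical version of such a prompt, with the discovered tool list interpolated at runtime (the repo's actual wording differs):

```rust
// Template with a placeholder for the dynamically discovered tools.
const SYSTEM_PROMPT: &str = "\
You are the GitHub agent. You may ONLY use the following MCP tools:
{tool_list}
Never invent tool names. If no listed tool fits the request, say so
instead of guessing.";

/// Fill the placeholder with the tools discovered from the MCP server.
fn render_prompt(tools: &[&str]) -> String {
    SYSTEM_PROMPT.replace("{tool_list}", &tools.join("\n"))
}
```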

When you ask "List all my GitHub repositories", the system:

  1. Orchestrator LLM → "USE_GITHUB_AGENT"

  2. GitHub Agent LLM → "I would use GITHUB_LIST_REPOSITORIES to fetch your repositories."

  3. Tool Execution → Executes GITHUB_LIST_REPOSITORIES with proper arguments.

  4. Result → "LLM Analysis: [reasoning] + GitHub Operation: [tool execution result]"

Understanding the code

At the core of everything here is the MultiAgentOrchestrator struct, which wires everything together:
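The shape of that struct is roughly the following (`MultiAgentOrchestrator` is the real name; the fields and helper are my reconstruction):

```rust
struct Agent {
    name: String,
}

/// Owns one agent per domain and maps the orchestrator LLM's routing
/// decision (a marker like "USE_GITHUB_AGENT") to the matching agent.
struct MultiAgentOrchestrator {
    linear: Agent,
    github: Agent,
    supabase: Agent,
}

impl MultiAgentOrchestrator {
    fn agent_for(&self, llm_decision: &str) -> Option<&Agent> {
        match llm_decision.trim() {
            "USE_LINEAR_AGENT" => Some(&self.linear),
            "USE_GITHUB_AGENT" => Some(&self.github),
            "USE_SUPABASE_AGENT" => Some(&self.supabase),
            _ => None,
        }
    }
}
```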

Each agent here resides in its own module, making it easy to plug in or swap out components. The LLM is guided by a system prompt that tells it exactly what tools are available and how to use them. Something like:
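I won't reproduce the repo's exact prompt, but the shape is roughly this (agent markers and most tool names are illustrative):

```text
You are Codepilot's orchestrator. Available agents and their tools:
- LINEAR agent: LINEAR_CREATE_ISSUE, LINEAR_LIST_PROJECTS, ...
- GITHUB agent: GITHUB_LIST_REPOSITORIES, GITHUB_CREATE_ISSUE, ...
- SUPABASE agent: SUPABASE_QUERY_TABLE, ...
Reply with USE_LINEAR_AGENT, USE_GITHUB_AGENT, or USE_SUPABASE_AGENT,
then explain which tool you would use and why.
```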

Pretty clean, right? You get reasoning, tool usage, and a conversational reply, all in a single setup.

Demo of what I’ve built and how things work (High Level)

Here’s what the interaction looks like:
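Reconstructed from the GitHub example above (the actual terminal UI output is prettier than this):

```text
> List all my GitHub repositories
[orchestrator] USE_GITHUB_AGENT
[github] LLM: "I would use GITHUB_LIST_REPOSITORIES to fetch your repositories."
[github] executing GITHUB_LIST_REPOSITORIES...
Here are your repositories: ...
```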

The multi-agent system intelligently routes each query to the appropriate agent, then combines the LLM's conversational response with real data.

Conclusion

This was a fun little project to work on, given the usual Python-heavy agent world. Rust isn't traditionally the go-to for these AI workflows, but it's surprisingly good at handling real-world agent logic once you get past the initial friction. The type system gives you confidence, async works well enough, and once you have your tools in place, everything is quite simple to plug together.

Not production-ready yet, but as a weekend project and to learn things, I'd say it's totally worth trying to build things like this. Again, the complete source code is here: rohittcodes/codepilot. Try it out and let me know what you come up with.
