


OpenAI has just released AgentKit at DevDay 2025, a comprehensive set of tools for building and deploying AI agents. The kit contains:
Agent Builder: A visual drag-and-drop canvas for building agents.
ChatKit: A toolkit to embed a chat-based agent in your product.
Connector registry: A central place for managing how tools connect across ChatGPT.
This was a surprising launch, which shook the internet and posed an existential threat to nearly half of Silicon Valley's companies. But we are not here for that. Let's get down to business and understand what Agent Builder is, who it is for, and how to build your first agent with it, so you don't fall behind in the AI rat race.
Let's gooo.
TL;DR
Agent Builder is OpenAI's visual drag-and-drop node-based agent builder.
It supports MCP servers, allowing you to equip your agent with any hosted MCP servers to undertake real-world actions.
It has multiple nodes, such as Agent, MCP, and Guardrail, that allow you to build custom agents easily in hours.
It also allows you to export your agent logic as TypeScript/Python code and modify it further if you wish.
In this article, I've shared how you can configure Rube MCP to build a YouTube Q&A agent with guardrails and RAG.
You can also export the agent code in TypeScript/Python, powered by OpenAI's Agents SDK, for further customisation.
What is Agent Builder, and who is it for?
Agent Builder is the n8n-like visual AI workflow builder from OpenAI. It has multiple nodes that you can connect to build agents, and it also lets you export the underlying code so you can tweak and customise it. This reduces the time required to create a functional agent.
Agent Builder has split the internet. Is it another GPT Store moment from OpenAI or the next App Store? Who is it actually aimed at? Is it for developers to simplify agent building or for non-technical folks to supplement Make.com/n8n workflows?
Well, it sits somewhere in between; in its current state, it can suit both personas. The drag-and-drop visual builder simplifies the creation of basic agents, enabling developers to export the code and expand it as needed. Meanwhile, non-developers can use the workflows directly by embedding them inside ChatKit.
With that in mind, let's start building our first agent inside Agent Builder.
Prerequisites
Head to https://platform.openai.com/agent-builder and:
Log in with your OpenAI credentials. If you don't have an account, create one and make sure you add your billing details.
Verify your organisation in the account settings to run agents in preview mode.
Set up hosted MCP servers to let the agent access data in third-party services.
Next, in Agent Builder, you will see three tabs and a familiar interface:
Workflows → Published workflows. A default "My Flow" might already be there.
Drafts → All unfinished / unpublished flows can be found here.
Templates → Predefined templates that work out of the box. Good for getting started.
However, we will opt for a blank flow for maximum flexibility.

Agent Builder platform, for reference
What is Rube MCP?
Agent Builder supports hosted MCP servers, and for a lot of agentic workflows you'd need multiple MCP tools, like Slack, Gmail, Google Sheets, and more. But adding that many servers (each with 10+ tools) eats into the LLM context window in a big way. We have solved this with Rube MCP, a universal MCP server that routes to the appropriate servers and loads only the relevant tools based on context.
We will discuss later how you can add Rube to your agent node.
So, now let's get to building our agent.
Building The YouTube Q/A Agent Step by Step
For simplicity and clarity, I have broken the flow down node by node. Each section covers how to set up that node, with additional details.
As an overview, we are going to follow this flow:
User Query → Guardrail (safety net) → Agent (RAG query) → Rube MCP (YouTube/Exa search) → Response
However, all this begins with adding a start node.
Set Entry Point with Start Node
Click on the + Create button. This will open a blank canvas, much like n8n, but with a Start Node.

New OpenAI Agent Builder workflow with only the start node
The start node acts as an entry point to any workflow. It offers two variables:
Input variables
Define inputs to the workflow.
Use input_as_text to represent user-provided text and append it to chat history.
It is the default variable representing input if no state variable is provided.
State variables
Additional input parameters passed in alongside the input.
Persistent across the entire workflow and accessible using State nodes.
Can be defined just like variables and store a single data type.
For now, let's keep only the input variables. Time to add input validation next.
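To make these two concrete, here is a tiny, purely illustrative sketch of the data a workflow run starts with; everything except input_as_text is a hypothetical example, not something Agent Builder generates.

```python
# Illustrative only: the rough shape of what a workflow run begins with.
workflow_input = {
    "input_as_text": "What is the video about?",  # default user-provided text
}

# A hypothetical state variable (only exists if you define one on the Start node);
# it persists for the whole workflow run and can be read/written via State nodes.
state = {
    "preferred_language": "en",
}
```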
Input Validation with Guardrail Node

Next, we need to add input validation/guardrails to ensure that only filtered query inputs are passed to the model, for a better chat experience.
To do this, let’s add a Guardrail node and connect it to the start node. If you click on the Guardrail node, you will find many options:

Here is what each does:
| Field | Function |
|---|---|
| Name | Name of the node |
| Input | Name of the input variable; can vary if a text state variable is also defined |
| Personally identifiable information (PII) | Detects and redacts any PII |
| Moderation | Classifies and blocks harmful content if the prompt contains it |
| Jailbreak | Detects prompt injection and jailbreak attempts; keeps the model on task |
| Hallucination | Verifies input by validating against the knowledge source (e.g., RAG vector store) |
| Continue on error | Sets an error path if the guardrail fails; not recommended for production flows |
Having understood all of this, let’s set them up.
Moderation: Click ⚙️ → Most Critical → Save → Toggle On
Jailbreak: Toggle On (keep defaults)
Hallucinations: Click ⚙️ → add the vector store ID (generated in the next section) → Save → Toggle On (do this later, once the vector store exists)

moderation guardrail setup

jailbreak guardrail settings
With this, we have set up our guardrail with both a pass and a fail path. Yup, it’s that simple!
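For intuition, here is a rough sketch of the kind of check the Moderation guardrail performs, expressed with the standalone OpenAI Moderations API. This is not the Guardrail node itself, just an approximation of the pass/fail decision it makes before your query reaches the agent.

```python
# Rough sketch of what the Moderation guardrail conceptually does,
# using the standalone OpenAI Moderations API (not the Guardrail node itself).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def passes_moderation(user_text: str) -> bool:
    """Return True if the input looks safe to forward to the agent."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=user_text,
    )
    return not result.results[0].flagged

query = "Summarise what this YouTube video says about prompt caching."
if passes_moderation(query):
    print("pass path → Agent node")
else:
    print("fail path → End node")
```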
Now let’s add the agent - the heart of the flow
Adding the Brain with Agent Node, Rube MCP & Vector Store
To add an agent, click on the Agent Node in the sidebar and connect it with the pass path.

Then, inside the agent node, let's configure a few things:
Name: Name of the agent; let's use YouTube Q/A Agent.
Instructions: How the agent should act. Use the one from the gist, write your own following the OpenAI Agent Builder prompting guide, or use the ✏️ icon to generate a prompt.
Include Chat History: Whether to include the past conversation. Good for context building, but racks up cost. Let's turn it on.
Model: Model to use. Let's go with GPT-5. You can opt for cheaper models such as gpt-5-mini or gpt-4o if usage is high.
Reasoning: Can't be turned off, only set to minimal. Let's go with medium here.
Output Format: Supports text, JSON, and widgets. Let's keep the default, text.
Verbosity: Set it low for shorter answers and vice versa. Let's go with medium.
Summary: Shows the reasoning summary in chat. I am setting it to null, but you can keep it enabled if you prefer.
Write Conversation History: Keep it on so the output is saved to the conversation history state. (A rough sketch of these settings as plain API parameters follows after this list.)
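As referenced above, here is a hedged sketch of roughly how these settings map onto a direct model call. The parameter names assume the GPT-5 Responses API (reasoning effort and text verbosity); your exported code may express them differently.

```python
# Hedged sketch: the agent settings above expressed as Responses API parameters.
# Assumes GPT-5's reasoning effort and text verbosity options; adjust if your SDK differs.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",                      # Model: GPT-5
    reasoning={"effort": "medium"},     # Reasoning: medium
    text={"verbosity": "medium"},       # Verbosity: medium
    instructions=(
        "You are a friendly YouTube Q/A assistant. "
        "Answer only from the provided sources and respond in English."
    ),
    input="Summarise the key points of the video.",  # input_as_text
)
print(response.output_text)
```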
Now, to add the RAG vector store and MCP support, click the + next to Tools.
For Vector Store:
Select File Search from the list
Add all files
Save
Copy the generated vector store ID, paste it into the Hallucination guardrail's vector_id field, and save.
This is how it looks in practice

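Under the hood, File Search queries an OpenAI vector store. As a rough sketch of the same retrieval step outside the visual builder, a direct Responses API call could look like the following; vs_xxxxxxxx is a placeholder for the vector store ID you copied above.

```python
# Hedged sketch: querying the same vector store directly via the Responses API.
# "vs_xxxxxxxx" is a placeholder for the vector store ID copied in the step above.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="What does the video say about prompt caching?",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_xxxxxxxx"],
    }],
)
print(response.output_text)
```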
Adding MCP to the Agent Builder
Agent Builder comes with a default set of MCP servers, maintained by OpenAI, including Gmail, Drive, and Outlook, among others.

There are also official third-party providers.

But we all know the real fun is in adding custom MCP servers; that's where the freedom is. OpenAI lets you add them; just click + Server.

You can add servers with different authentication methods: No Auth, Access token/API key, and Custom headers.
For this blog post, we will use Rube MCP. This is our flagship MCP, which dynamically connects to over 500 applications, including HubSpot, Jira, and YouTube, among others.
To add Rube MCP to the agent node:
Select MCP Servers.
Click + Servers.
In the URL field, put https://rube.app/mcp (see above).
In the Name field, put rube_mcp.
Authentication: API Key. To get it:
Go to the Rube app.
Select Install Rube.
Navigate to the N8N & Others tab.
Generate a token.
Copy and paste the token into the API Key/Auth token box.
Save.
Sample output you see after this step

And you are done!
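For the curious, the same connection can be sketched outside the builder by attaching Rube as a hosted MCP tool on a Responses API call. RUBE_API_TOKEN is a hypothetical environment variable holding the token you generated above.

```python
# Hedged sketch: attaching Rube as a hosted MCP server via the Responses API.
# RUBE_API_TOKEN is a hypothetical env var holding the token generated in Rube.
import os
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Find the latest video on the channel and summarise it.",
    tools=[{
        "type": "mcp",
        "server_label": "rube_mcp",
        "server_url": "https://rube.app/mcp",
        "headers": {"Authorization": f"Bearer {os.environ['RUBE_API_TOKEN']}"},
        "require_approval": "never",
    }],
)
print(response.output_text)
```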
Yes, it really is that easy to connect the two.
Apart from these, there are other fields within the OpenAI Agent Builder agent node as well. You can learn more in the OpenAI Agent Builder node docs.
Finally, the only thing left is to define the fail path, so let's set that up.
Ending Chat Session with End Node

Remember the fail path from the guardrail node? Connect it to an End node by selecting one from the sidebar.
The End node takes input_as_text and returns JSON.
No, you don't need to write the schema yourself; use OpenAI Agent Builder's native JSON Schema Generator and prompt it with your needs in natural language.
Let's prompt it by clicking ✏️ Generate.
And here is the generated schema. Make sure to click Update.

I haven't often needed to go down this path.
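If you want a mental model of what such a schema looks like, here is a hypothetical refusal-style output schema written as a plain Python dict; it is illustrative only, not the exact schema the generator produced in the screenshot.

```python
# Hypothetical example of an End-node output schema (illustrative only;
# the generator's actual schema in the screenshot will differ).
refusal_schema = {
    "type": "object",
    "properties": {
        "status": {"type": "string", "enum": ["rejected"]},
        "message": {
            "type": "string",
            "description": "Polite explanation of why the query was blocked.",
        },
    },
    "required": ["status", "message"],
    "additionalProperties": False,
}
```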
Anyway, now that everything is done, it's time to test the agent!
Testing Time
For testing, head to Preview at the top, and a chat window opens. Enter your query and watch the responses along with the intermediate query and reasoning steps.
Here is what it looks like:
Note: This was the initial run; a little later, I renamed the agent to YT QA Agent for explainability. Also, since this is a tutorial, I haven't published it, but you can by clicking the Publish button.

Amazing, we built our first multilingual YouTube Q/A agent that:
has a personality,
filters hate speech in both input and output,
responds based only on vector store searches, in a friendly manner,
handles jailbreak attempts,
takes multilingual input and responds only in English.
And all this took like 5 minutes and four nodes - much faster prototyping than N8N.
In case you want the code, just hit Code → Agents SDK → Python / TypeScript and Copy. For reference:
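As a hedged illustration of what the exported Python looks like, the snippet below uses OpenAI's Agents SDK; the real export also wires up the guardrail, the vector store, and the Rube MCP server, so treat this as a minimal skeleton rather than the actual output.

```python
# Minimal sketch in the style of the OpenAI Agents SDK (Python) export.
# The real exported code also includes the guardrail, MCP, and vector store wiring.
from agents import Agent, Runner

yt_agent = Agent(
    name="YouTube Q/A Agent",
    instructions="Answer questions about the indexed YouTube videos in a friendly tone.",
    model="gpt-5",
)

result = Runner.run_sync(yt_agent, "What is the video about?")
print(result.final_output)
```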

But we have only scratched the surface; there are more nodes to explore!
What else can you build?
What are some real-world use cases you can build right now using OpenAI's Agent Builder?
I've compiled a list of use cases and the MCP tools each one needs in Agent Builder. You can get all these apps through Rube MCP, which loads only the tools necessary for a task, given the context.
| Verticals | Tools | Use cases |
|---|---|---|
| Customer Support | HubSpot, Salesforce, Zendesk, Gmail, Outlook, Slack | Automatically create and assign support tickets from customer emails or Slack messages. |
| CRM | HubSpot, Salesforce, Pipedrive, Apollo, Freshdesk | Auto-update lead status when a deal moves forward in email or chat. |
| Ticketing | Jira, Linear, Trello | Convert bug reports from Slack or email directly into Jira or Linear tasks. |
| Productivity | Google Calendar, Calendly, Gmail, Notion, Docs | Schedule meetings automatically through Calendar when someone books via Calendly. |
| Development | GitHub, GitLab, Supabase, Bitbucket, Sentry | Create Jira or Linear issues automatically when Sentry reports a GitHub error. |
| Social and Communications | Reddit, Twitter/X, LinkedIn, YouTube, Facebook, WhatsApp, Discord, Veo 3 | Post marketing updates or videos simultaneously across multiple platforms. |
Exploring the OpenAI Agent Builder Nodes
OpenAI Agent Builder comes with 12 nodes categorised into four sections, three in each. Here is what each does at a high level:
Core
Agent → As we've already seen, it acts as the brain, calling the model with your instructions, tools, and actions. Use it when you need LLM processing.
End → Ends the flow instantly / returns the workflow output. Use it for guardrails or for handling unexpected user behaviour. Ensure the workflow output is set in the desired format.
Note → Adds a sticky note. Helpful for adding instructions or documentation to the flow.
Tools
File Search → Another name for the OpenAI vector store. Queries the vector store for relevant information. Ideal for RAG-based use cases.
Guardrails → All about safety. Adds PII, jailbreak, and hallucination checks and runs moderation on input and output. Useful for production.
MCP → Unlike n8n, OpenAI Agent Builder uses MCPs instead of webhooks/APIs to interact with external or internal company tools and services. Use with complex workflows that need results from multiple services. E.g., we used Rube MCP to verify citations internally.
Logic
If / Else → Conditional branching. Use it to create conditions that branch workflows. Yup, multiple branches in a single canvas are allowed!
While → Loops while a condition is true. Use it when the total number of iterations is not known, for example, polling an API until a call status is completed. Conditions must be written in CEL (click Learn more at the bottom of the While window).
User Approval → Adds a human in the loop. Pauses execution for you to approve or reject a step. Great for moderation of high-risk tasks such as finance, e-commerce, and legal.
Data
Transform → Reshapes data using CEL. Supports JSON output. Useful for enforcing type restrictions in production or preprocessing input for agents to read.
State → A global variable that can be used throughout the workflow. Useful if you need inputs and outputs that can be referenced anywhere in the workflow. Yup, it's the state variable we learnt about at the start.
In case you are interested in learning more, refer to the official OpenAI Agent Builder documentation.
Conclusion
Knowing all the nodes is a good head start, but the real essence is understanding:
the entire workflow logic,
how data passes between nodes,
which node works best for your use case and why,
how production-grade workflows are built,
and the best practices to follow while building agents.
With Agent Builder, OpenAI enables businesses and developers alike to go from simple prototypes to production-grade agents with fewer nodes and less setup.
Learning a new tool never hurts, so head to the OpenAI Agent Builder, connect Rube MCP and get started building.
FAQ
Q1: What is OpenAI Agent Builder, and who is it for?
A: Agent Builder is a visual workflow tool from OpenAI where you build AI agents by connecting drag-and-drop nodes (Start, Guardrail, Agent, MCP, etc). It’s intended for both developers (who might later export the logic as code) and non-technical users who want a quicker way to build agent workflows.
Q2: What are the basic prerequisites to start using Agent Builder with MCP servers?
A: You’ll need an OpenAI account with billing details set up and your organisation verified (so you can run Agent Builder workflows). Also, you’ll need access to one or more hosted MCP (Model Context Protocol) servers to allow your agent to connect to external data/tools.
Q3: What are some of the key production-grade features to be aware of in Agent Builder?
A: Some essential features include:
Guardrail nodes, which help with moderation, PII detection, and hallucination mitigation.
MCP nodes, to integrate external tools/services (via MCP servers) rather than building all integrations manually.
The ability to export your workflow logic into code (TypeScript/Python) if you want to move beyond the visual builder into custom code.