Deep dive into CrewAI (With Examples)
Part of our experiments in messing around with different frameworks!
This was written in March 2024, and with the current pace of progress it might already be outdated!
In this era of generative AI, specifically Large Language Models (LLMs) and Large Multimodal Models (LMMs), autonomous AI agents make it easy to integrate generative AI into daily workflows.
What if we could harness not just one but multiple autonomous AI agents, each specialized in a different task, like web browsing, database interaction, or complex calculations, and equipped with specific tools such as image generation, web scraping, and API integration, all working together towards predefined goals? The tasks can be executed in a sequential or hierarchical process, with all the agents working together like a crew. One of the best frameworks for this, with the added benefit of being open source, is CrewAI.
This article aims to provide a comprehensive overview of the CrewAI platform's components through two CrewAI example walkthroughs. The first example will introduce you to the framework, while the second will walk you through advanced strategies and utilities for optimizing your crew's performance in achieving your goals.
Major components of CrewAI
In this section we will discuss all the components that form the core of CrewAI.
- Agents: Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like ‘Researcher’, ‘Writer’, or ‘Customer Support’, each contributing to the overall goal of the crew.
- Tasks: In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate necessary information for execution, including a description, assigned agent, required tools, offering flexibility for various action complexities.
- Tools: A tool in CrewAI is a skill or function that agents can utilize to perform various actions. This includes tools from the crewAI Toolkit and LangChain Tools, enabling everything from simple searches to complex interactions and effective teamwork among agents.
- Process: In CrewAI, processes orchestrate the execution of tasks by agents, akin to project management in human teams. These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
- Crew: A crew in CrewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
Here is a visual representation of how all of them work together:
CrewAI Initial Setup
So, first things first: we have to do some initial setup in order to use CrewAI.
pip install 'crewai[tools]'
pip install duckduckgo-search
The [tools] extra installs some additional tools implemented inside CrewAI. Note that we can seamlessly use these tools, as well as tools implemented in langchain, with CrewAI, and we can also implement our own tools, as we will see in a moment. We have also installed duckduckgo-search, a Python API for the DuckDuckGo search engine that gives our crew internet browsing capabilities.
In this article, we will also touch on best practices for securely managing and handling your API keys.
LLM Selection
For peak performance with your CrewAI crew, consider using GPT-4 or the cheaper GPT-3.5 from OpenAI for your AI agents. These models are the backbone of your agents and significantly impact their capabilities. To use them, get an API key from OpenAI and set up the essential environment variables in a .env file (which should only reside on your computer):
OPENAI_API_BASE='https://api.openai.com/v1'
OPENAI_MODEL_NAME='gpt-4-turbo-preview' # or gpt-3.5-turbo
OPENAI_API_KEY='YOUR OPENAI API KEY'
Among open-source alternatives to use with CrewAI, models like Mistral-7B-Instruct-v0.1 and Mixtral-8x7B-v0.1 can be hosted on your own hardware using frameworks like Ollama. You can follow the instructions on the README page to set it up, and once that is done, set the variables:
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
For running CrewAI with cost-effective hosted LLMs, consider services like together.ai, which offer affordable options for open-source models; check their pricing and available models before configuring the variables. For a service like together.ai:
OPENAI_API_BASE='https://api.together.xyz'
OPENAI_MODEL_NAME='mistralai/Mixtral-8x7B-Instruct-v0.1' # or any model of your choice
OPENAI_API_KEY='YOUR TogetherAI API KEY'
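Whichever provider you go with, you will need these variables available in your Python process. A minimal sketch of loading them with python-dotenv (an assumption on my part; install it with pip install python-dotenv if it is not already present):
import os
from dotenv import load_dotenv

# Read the key/value pairs from the .env file into the process environment
load_dotenv()

# CrewAI and LangChain read these variables automatically; printing the model
# name is just a quick sanity check that the file was found
print(os.environ.get("OPENAI_MODEL_NAME"))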
Now we are all ready to initiate our first Crew!
Example 1: Blog Writer Assistant using CrewAI
This introductory application aims to familiarize you with the components of CrewAI and their usage through a simple project — a crew tasked with researching a topic online and producing a well-structured blog based on the research.
First, let's import the necessary modules:
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
# Initialize the tool
search_tool = DuckDuckGoSearchRun()
Tools can be implemented using a decorator as well as with OOP-style inheritance. All the details are mentioned here.
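As a quick illustration of both styles, here is a hedged sketch of the same trivial tool written twice, assuming the crewai[tools] extra installed earlier exposes the crewai_tools package with a BaseTool class and a tool decorator (the word-counter tool itself is hypothetical):
from crewai_tools import BaseTool, tool

# Style 1: decorator. The docstring becomes the tool's description.
@tool("Word counter")
def count_words(text: str) -> int:
    """Counts the number of words in a piece of text."""
    return len(text.split())

# Style 2: OOP-style inheritance. Subclass BaseTool and implement _run.
class WordCounterTool(BaseTool):
    name: str = "Word counter"
    description: str = "Counts the number of words in a piece of text."

    def _run(self, text: str) -> int:
        return len(text.split())
Either form can then be passed to an agent through its tools argument, just like the search_tool above.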
Now we define the two agents as planned, a Senior Research Analyst and a Tech Content Strategist:
from langchain_openai import ChatOpenAI
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False,
tools=[search_tool],
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True
)
Let's discuss the arguments of the Agent class.
- role defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for.
- goal is the individual objective that the agent aims to achieve. It guides the agent's decision-making process.
- backstory provides context to the agent's role and goal, enriching the interaction and collaboration dynamics.
There are many more arguments supported by the Agent class, which can be found in the documentation.
It is worth noting that the CrewAI platform is task-specific, not agent-specific. That is, the goal of running a crew is to execute all the tasks (sequentially or not, depending on the process), not to utilize all the agents. The same agent might be used in more than one task, or not used at all, as long as all the tasks get executed.
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.""",
expected_output="Full analysis report in bullet points",
agent=researcher
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.""",
expected_output="Full blog post of at least 4 paragraphs",
agent=writer
)
These are the two tasks we want this crew to execute. The explanation and usage of all the parameters are covered in the corresponding documentation.
Now that all the tools, agents and tasks have been defined, we will define the crew, which will manage the collaboration between them and act as the glue holding everything together.
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
process=Process.sequential,
verbose=2, # You can set it to 1 or 2 for different logging levels
)
Here the specified process is sequential, meaning the crew executes tasks one after another in an orderly progression, with the output of one task serving as context for the next. Among other arguments, we can specify specialized LLMs such as function_calling_llm as well as manager_llm; the latter is necessary in the case of a hierarchical process, where the execution order of tasks is determined by the manager_llm. The complete lists of available options for Processes and Crews are mentioned here and here respectively.
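For reference, here is a hedged sketch of what a hierarchical variant of this same crew could look like, reusing the agents and tasks defined above (the exact arguments may differ slightly between CrewAI versions):
from crewai import Crew, Process
from langchain_openai import ChatOpenAI

# In a hierarchical process, a manager LLM decides which agent handles which
# task and in what order, so manager_llm must be supplied to the crew
hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model_name="gpt-4-turbo-preview", temperature=0.2),
    verbose=2,
)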
Finally, we will kick off the execution of the crew with kickoff(); after execution, the result of the final task is returned.
# Get your crew to work!
result = crew.kickoff()
print("-----------------------------")
print(result)
We can also access the outputs of specific tasks like this after the execution:
print(eval(task1.output.json()))
# or
print(eval(task2.output.json()))
Here is the whole main.py script put together:
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False,
tools=[search_tool]
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True
)
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.""",
expected_output="Full analysis report in bullet points",
agent=researcher
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.""",
expected_output="Full blog post of at least 4 paragraphs",
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # You can set it to 1 or 2 for different logging levels
)
# Get your crew to work!
result = crew.kickoff()
print("-----------------------------")
print(result)
Example 2: Trip Planner Assistant using CrewAI
In the second project, we embark on creating a Trip Planner Assistant using CrewAI. This tool will assist in planning your journey: selecting your destination, estimating costs, and crafting a comprehensive trip itinerary based on your interests. Let's take a trip through how we create this tool with CrewAI.
First, we will need a Serper API key to leverage the capabilities of Google Search (we used DuckDuckGo in the previous project, but let's learn something new). You just need to sign up and obtain the key.
Next, we introduce another tool, Browserless, which mimics a full-fledged browser environment. By running Browserless locally with Docker, you can render and scrape websites seamlessly. If Docker isn't installed yet, install it by following the installation instructions online; it is an essential developer tool.
So, once you have Docker, run this:
docker run -d --rm --name browserless -p 3000:3000 browserless/chrome
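Before going further, you can check that the container is reachable by posting any URL to its /content endpoint, the same endpoint our scraping tool will use later. A quick, hedged sanity check assuming the default port mapping above:
import json
import requests

# Ask the local Browserless container to render and return a page's HTML
response = requests.post(
    "http://localhost:3000/content",
    headers={"content-type": "application/json"},
    data=json.dumps({"url": "https://example.com"}),
)
print(response.status_code)  # 200 means Browserless is up and scraping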
Now with all the tools set up, we are ready to build our trip assistant.
.env: File with all environment variables
To securely store your API keys, create a file named .env and input your keys within. Ensure the .env file remains confidential, and consider adding it to your .gitignore to prevent exposure when committing to GitHub. The .env file should follow this format:
SERPER_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
OPENAI_API_BASE=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
OPENAI_MODEL_NAME=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
OPENAI_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Remember not to enclose the values in quotes.
tools.py: Define all the tools
Now, we will create all the tools one by one. We will use tools for searching the internet (where the SERPER_API_KEY will come in handy), scraping websites that contain useful information (Browserless will be useful here) and finally a simple calculator for our budget calculations.
The first tool we will build is SearchTool. To create this custom tool, we will use the @tool decorator method. We will leverage the SERPER_API_KEY for querying Google, fetching relevant titles and snippets for the given query.
import json
import os
import requests
from crewai import Agent, Task
from langchain.tools import tool
class SearchTool():
@tool("Search the internet")
def search_internet(query):
"""Useful to search the internet
about a given topic and return relevant results"""
top_result_to_return = 4
url = "https://google.serper.dev/search"
payload = json.dumps({"q": query})
headers = {
'X-API-KEY': os.environ['SERPER_API_KEY'],
'content-type': 'application/json'
}
# Send the request to search
response = requests.request("POST", url, headers=headers, data=payload)
# check if there is an organic key, don't include sponsor results
if 'organic' not in response.json():
return "Sorry, I couldn't find anything about that, there could be an error with you serper api key."
else:
results = response.json()['organic']
string = []
for result in results[:top_result_to_return]:
try:
string.append('\n'.join([
f"Title: {result['title']}", f"Link: {result['link']}",
f"Snippet: {result['snippet']}", "\n-----------------"
]))
except KeyError:
next
return '\n'.join(string)
The tool above processes Google search results by converting them from JSON into readable strings. Next, we utilize the locally deployed Browserless instance to navigate websites and extract relevant content from the links. This streamlines information retrieval and enhances the capabilities of our trip planning assistant. Remember the local Browserless instance we started with Docker at the beginning of this example? Let's write a BrowserTool to utilize it.
from unstructured.partition.html import partition_html
class BrowserTool():
@tool("Scrape website content")
def scrape_and_summarize_website(website):
"""Useful to scrape and summarize a website content"""
# url = f"https://chrome.browserless.io/content?token={os.environ['BROWSERLESS_API_KEY']}"
url = "http://localhost:3000/content"
payload = json.dumps({"url": website})
headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
response = requests.request("POST", url, headers=headers, data=payload)
elements = partition_html(text=response.text)
content = "\n\n".join([str(el) for el in elements])
content = [content[i:i + 8000] for i in range(0, len(content), 8000)]
summaries = []
for chunk in content:
chunking_agent = Agent(
role='Principal Researcher',
goal=
'Do amazing researches and summaries based on the content you are working with',
backstory=
"You're a Principal Researcher at a big company and you need to do a research about a given topic.",
allow_delegation=False)
chunking_task = Task(
agent=chunking_agent,
description=
f'Analyze and summarize the content below, make sure to include the most relevant information in the summary, return only the summary nothing else.\n\nCONTENT\n----------\n{chunk}'
)
summary = chunking_task.execute()
summaries.append(summary)
return "\n\n".join(summaries)
Here, setting the url to http://localhost:3000/content ensures connectivity with the locally running Browserless container. For scaling needs or serverless options, browserless.io's enterprise offering can provide a suitable solution; simply add the respective API key to your .env file and use the commented-out URL instead.
Upon receiving the response from Browserless, the content is parsed, cleaned and merged into a unified text called content.
To handle the fixed context lengths of language models, we break down the content into smaller text chunks. For each chunk, we create a chunking task and a corresponding agent to summarize it. After executing each task, we aggregate all the summaries into one single summary text, enabling us to utilize a concise summary of the entire webpage instead of its full content.
We know that LLMs often make mistakes when doing complex mathematical calculations, but they are very good at writing out the expression. So, our last tool will be a CalculatorTool.
class CalculatorTool():
@tool("Make a calculation")
def calculate(operation):
"""Useful to perform any mathematical calculations,
like sum, minus, multiplication, division, etc.
The input to this tool should be a mathematical
expression, a couple examples are `200*7` or `5000/2*10`
"""
try:
return eval(operation)
except SyntaxError:
return "Error: Invalid syntax in mathematical expression"
Here we saw how to implement new tools for our CrewAI agents, both simple ones and more complex ones that rely on third-party tools and services.
So, the complete tools.py file:
import json
import os
import requests
from crewai import Agent, Task
from langchain.tools import tool
from unstructured.partition.html import partition_html
class SearchTool():
@tool("Search the internet")
def search_internet(query):
"""Useful to search the internet
about a given topic and return relevant results"""
top_result_to_return = 4
url = "https://google.serper.dev/search"
payload = json.dumps({"q": query})
headers = {
'X-API-KEY': os.environ['SERPER_API_KEY'],
'content-type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
# check if there is an organic key
if 'organic' not in response.json():
return "Sorry, I couldn't find anything about that, there could be an error with you serper api key."
else:
results = response.json()['organic']
string = []
for result in results[:top_result_to_return]:
try:
string.append('\n'.join([
f"Title: {result['title']}", f"Link: {result['link']}",
f"Snippet: {result['snippet']}", "\n-----------------"
]))
except KeyError:
next
return '\n'.join(string)
class BrowserTool():
@tool("Scrape website content")
def scrape_and_summarize_website(website):
"""Useful to scrape and summarize a website content"""
# url = f"https://chrome.browserless.io/content?token={os.environ['BROWSERLESS_API_KEY']}"
url = "http://localhost:3000/content"
payload = json.dumps({"url": website})
headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
response = requests.request("POST", url, headers=headers, data=payload)
elements = partition_html(text=response.text)
content = "\n\n".join([str(el) for el in elements])
content = [content[i:i + 8000] for i in range(0, len(content), 8000)]
summaries = []
for chunk in content:
agent = Agent(
role='Principal Researcher',
goal=
'Do amazing researches and summaries based on the content you are working with',
backstory=
"You're a Principal Researcher at a big company and you need to do a research about a given topic.",
allow_delegation=False)
task = Task(
agent=agent,
description=
f'Analyze and summarize the content below, make sure to include the most relevant information in the summary, return only the summary nothing else.\n\nCONTENT\n----------\n{chunk}'
)
summary = task.execute()
summaries.append(summary)
return "\n\n".join(summaries)
class CalculatorTool():
@tool("Make a calculation")
def calculate(operation):
"""Useful to perform any mathematical calculations,
like sum, minus, multiplication, division, etc.
The input to this tool should be a mathematical
expression, a couple examples are `200*7` or `5000/2*10`
"""
try:
return eval(operation)
except SyntaxError:
return "Error: Invalid syntax in mathematical expression"
agents.py: Define all the agents
An essential aspect of designing our crew is breaking the target task down into manageable goals and aligning agents with the skills needed to execute those goals effectively. In the context of tour planning, we can identify the key roles crucial for successful trip coordination.
First, we introduce the city_selection_agent, an agent equipped with general knowledge of popular cities that guides us in selecting the best destination based on preferences and seasonal considerations. Following this, the local_expert steps in to provide in-depth insights about the selected city, offering valuable local perspectives. Finally, the travel_concierge, a seasoned specialist in travel planning and logistics, brings decades of experience to orchestrate seamless trip arrangements.
By delineating the roles and responsibilities of each agent, we ensure a comprehensive and tailored approach to tour planning. The agents.py script reflects the allocation of these roles, creating a cohesive and efficient crew for the task at hand.
from crewai import Agent
# from langchain.llms import OpenAI
from tools import BrowserTool, CalculatorTool,SearchTool
class TripAgents():
def city_selection_agent(self):
return Agent(
role='City Selection Expert',
goal='Select the best city based on weather, season, and prices',
backstory=
'An expert in analyzing travel data to pick ideal destinations',
tools=[
SearchTool.search_internet,
BrowserTool.scrape_and_summarize_website,
],
verbose=True)
def local_expert(self):
return Agent(
role='Local Expert at this city',
goal='Provide the BEST insights about the selected city',
backstory="""A knowledgeable local guide with extensive information
about the city, its attractions and customs"""
tools=[
SearchTool.search_internet,
BrowserTool.scrape_and_summarize_website,
],
verbose=True)
def travel_concierge(self):
return Agent(
role='Amazing Travel Concierge',
goal="""Create the most amazing travel itineraries with budget and
packing suggestions for the city""",
backstory="""Specialist in travel planning and logistics with
decades of experience""",
tools=[
SearchTool.search_internet,
BrowserTool.scrape_and_summarize_website,
CalculatorTool.calculate,
],
verbose=True)
Notice that we have assigned both the SearchTool and the BrowserTool to every agent, empowering them with internet exploration capabilities. Only the travel_concierge possesses the additional ability to handle calculations for expense planning.
tasks.py: Define all the tasks
As we previously mentioned, the most important part of designing this crew is identifying how to divide the objective into separate tasks. For our project, we will divide our main target (getting a comprehensive travel plan) into three separate tasks: identify_task, gather_task and plan_task. Here is a description of each task:
- identify_task: This task involves analyzing cities based on criteria like weather, events, and costs to select the best one for the trip, considering factors such as current weather, cultural events, and expenses.
- gather_task: As the local expert, the task is to create a detailed guide for travelers, covering attractions, customs, events, and daily activities unique to the chosen city. The guide aims to offer insights only known to locals, providing a thorough overview of the city's highlights, including hidden gems, landmarks, weather forecasts, and estimated costs.
- plan_task: Expanding from the guide, this task involves creating a detailed 7-day travel itinerary with daily plans. It includes weather forecasts, dining suggestions, packing tips, and a budget breakdown for a comprehensive trip plan.
So, let's see what the tasks.py file will contain:
from crewai import Task
from textwrap import dedent
from datetime import date
class TripTasks():
def identify_task(self, agent, origin, cities, interests, range):
return Task(description=dedent(f"""
Analyze and select the best city for the trip based
on specific criteria such as weather patterns, seasonal
events, and travel costs. This task involves comparing
multiple cities, considering factors like current weather
conditions, upcoming cultural or seasonal events, and
overall travel expenses.
{self.__tip_section()}
Traveling from: {origin}
City Options: {cities}
Trip Date: {range}
Traveler Interests: {interests}
"""),
expected_output=dedent(f"""Your final answer must be a detailed
report on the chosen city, and everything you found out
about it, including the actual flight costs, weather
forecast and attractions.
"""),
agent=agent)
def gather_task(self, agent, origin, interests, range):
return Task(description= dedent(f"""
As a local expert on this city you must compile an
in-depth guide for someone traveling there and wanting
to have THE BEST trip ever!
Gather information about key attractions, local customs,
special events, and daily activity recommendations.
Find the best spots to go to, the kind of place only a
local would know.
This guide should provide a thorough overview of what
the city has to offer, including hidden gems, cultural
hotspots, must-visit landmarks, weather forecasts, and
high level costs.
{self.__tip_section()}
Trip Date: {range}
Traveling from: {origin}
Traveler Interests: {interests}
"""),
expected_output=dedent(f"""
The final answer must be a comprehensive city guide,
rich in cultural insights and practical tips,
tailored to enhance the travel experience.
"""),
agent=agent)
def plan_task(self, agent, origin, interests, range):
return Task(description=dedent(f"""
Expand this guide into a full 7-day travel
itinerary with detailed per-day plans, including
weather forecasts, places to eat, packing suggestions,
and a budget breakdown.
You MUST suggest actual places to visit, actual hotels
to stay and actual restaurants to go to.
This itinerary should cover all aspects of the trip,
from arrival to departure, integrating the city guide
information with practical travel logistics.
{self.__tip_section()}
Trip Date: {range}
Traveling from: {origin}
Traveler Interests: {interests}
"""),
expected_output=dedent(f"""
Your final answer MUST be a complete expanded travel plan,
formatted as markdown, encompassing a daily schedule,
anticipated weather conditions, recommended clothing and
items to pack, and a detailed budget, ensuring THE BEST
TRIP EVER. Be specific and give a reason why you picked
each place and what makes it special!
"""),
agent=agent)
def __tip_section(self):
return "If you do your BEST WORK, I'll tip you $100!"
We have also tempted the agents in the task descriptions with "If you do your BEST WORK, I'll tip you $100!". Let's see how greedy our crew is!
main.py: Putting it all together
Now that all the components (tools, agents and tasks) are defined, we will create the complete crew, TripCrew.
class TripCrew:
def __init__(self, origin, cities, date_range, interests):
self.cities = cities
self.origin = origin
self.interests = interests
self.date_range = date_range
def run(self):
agents = TripAgents()
tasks = TripTasks()
# Initiate the agents
city_selector_agent = agents.city_selection_agent()
local_expert_agent = agents.local_expert()
travel_concierge_agent = agents.travel_concierge()
# Initiate the tasks
identify_task = tasks.identify_task(
city_selector_agent,
self.origin,
self.cities,
self.interests,
self.date_range
)
gather_task = tasks.gather_task(
local_expert_agent,
self.origin,
self.interests,
self.date_range
)
plan_task = tasks.plan_task(
travel_concierge_agent,
self.origin,
self.interests,
self.date_range
)
# Initiate Crew
crew = Crew(
agents=[
city_selector_agent, local_expert_agent, travel_concierge_agent
],
tasks=[identify_task, gather_task, plan_task],
process=Process.sequential,
verbose=True
)
result = crew.kickoff()
return result
To initialize this class, we take origin (where the user is starting from), cities (where the user would like to go), date_range (the approximate date range of the journey) and interests (the user's hobbies and interests). We will collect all of this data from the user interactively.
if __name__ == "__main__":
print("## Welcome to Trip Planner Crew")
print('-------------------------------')
location = input(
dedent("""
From where will you be traveling from?
"""))
cities = input(
dedent("""
What are the cities options you are interested in visiting?
"""))
date_range = input(
dedent("""
What is the date range you are interested in traveling?
"""))
interests = input(
dedent("""
What are some of your high level interests and hobbies?
"""))
trip_crew = TripCrew(location, cities, date_range, interests)
result = trip_crew.run()
print("\n\n########################")
print("## Here is you Trip Plan")
print("########################\n")
print(result)
So, putting all of this together in the main.py file:
from crewai import Crew
from textwrap import dedent
from agents import TripAgents
from tasks import TripTasks
from dotenv import load_dotenv
load_dotenv()
class TripCrew:
def __init__(self, origin, cities, date_range, interests):
self.cities = cities
self.origin = origin
self.interests = interests
self.date_range = date_range
def run(self):
agents = TripAgents()
tasks = TripTasks()
city_selector_agent = agents.city_selection_agent()
local_expert_agent = agents.local_expert()
travel_concierge_agent = agents.travel_concierge()
identify_task = tasks.identify_task(
city_selector_agent,
self.origin,
self.cities,
self.interests,
self.date_range
)
gather_task = tasks.gather_task(
local_expert_agent,
self.origin,
self.interests,
self.date_range
)
plan_task = tasks.plan_task(
travel_concierge_agent,
self.origin,
self.interests,
self.date_range
)
crew = Crew(
agents=[
city_selector_agent, local_expert_agent, travel_concierge_agent
],
tasks=[identify_task, gather_task, plan_task],
verbose=True
)
result = crew.kickoff()
return result
if __name__ == "__main__":
print("## Welcome to Trip Planner Crew")
print('-------------------------------')
location = input(
dedent("""
From where will you be traveling from?
"""))
cities = input(
dedent("""
What are the cities options you are interested in visiting?
"""))
date_range = input(
dedent("""
What is the date range you are interested in traveling?
"""))
interests = input(
dedent("""
What are some of your high level interests and hobbies?
"""))
trip_crew = TripCrew(location, cities, date_range, interests)
result = trip_crew.run()
print("\n\n########################")
print("## Here is you Trip Plan")
print("########################\n")
print(result)
We use load_dotenv() to load the environment variables we defined in the .env file, and we can then access them with os.environ.
Some Enhancements in CrewAI Examples
What if, after writing the blog in project 1, the crew could automatically check it with tools like Grammarly or Quillbot? And publishing the blog directly to your Medium account or your very own Substack newsletter could streamline the content creation process significantly.
What if, for the trip we planned in project 2, the crew could take care of booking the best flights for you with MakeMyTrip or reserving the best hotel for you with BookMyTrip? By integrating such functionalities seamlessly, the crew could revolutionize the travel experience by handling all aspects of the journey effortlessly.
At Composio, we are dedicated to pushing the boundaries of AI-first agentic software integrations, aiming to merge these and hundreds of other daily-used tools seamlessly with powerful AI agents. And the most exciting part is that you can use these integrations with raw LLM calls, with utility libraries like LangChain, or with multi-agent toolkits such as CrewAI or Autogen. Stay tuned for exciting developments in these transformative capabilities that promise to enhance efficiency and convenience in our daily lives.
Final words
In this article, we covered the CrewAI platform. By following the examples, I hope you’ve learned essential components and tricks for creating what you need. If you have any questions or suggestions to improve the article’s helpfulness, feel free to reach out. See you next time!
N.B.: Both examples used here are taken and modified from the CrewAI examples. There are many more examples you can explore further. All source code used in this article is available here.