Introduction: Do You Really Need a Framework?
So far in this series, we have built Agents from scratch — with pure Python code. But in the real world, there are ready-made frameworks that handle a lot of the work for you.
The question is: which framework should you use? Do you even need a framework at all? In this episode, we review the most important frameworks and help you make the right choice.
LangChain — The (Old) King of Frameworks
LangChain was the first major framework for building LLM applications. It is hugely popular, but it has also drawn plenty of criticism.
What does it do?
LangChain is a large toolbox that includes: LLM connections, memory management, tool handling, RAG pipelines, and much more.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate

# Define tools
@tool
def calculate(expression: str) -> str:
    """Calculates a math expression."""
    try:
        # Note: eval on untrusted input is unsafe; use a proper math parser in production.
        return str(eval(expression))
    except Exception:
        return "Calculation error"

@tool
def get_weather(city: str) -> str:
    """Returns the weather for a city."""
    weather_data = {
        "Tehran": "Sunny, 28 degrees",
        "London": "Cloudy, 15 degrees",
    }
    return weather_data.get(city, f"No data for {city}")

# Build Agent
llm = ChatOpenAI(model="gpt-4o")
tools = [calculate, get_weather]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = executor.invoke({
    "input": "What is the weather in Tehran and what is 25 times 4?"
})
print(result["output"])
```
Pros:
- Huge ecosystem — has a module for almost everything
- Lots of documentation and a large community
- Integration with many services (OpenAI, Anthropic, Pinecone, etc.)
Cons:
- Too complex — overkill for simple tasks
- Too many abstraction layers — debugging is hard
- API changes frequently — code from 3 months ago might not work
- Occasional performance overhead from the extra abstraction layers
LangGraph — The Next Generation of LangChain
LangGraph is from the same LangChain team but takes a different approach. Instead of linear chains, it uses graphs. Each step is a node in the graph and the Agent can move between nodes.
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
from langchain_openai import ChatOpenAI
import operator

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    next_action: str

llm = ChatOpenAI(model="gpt-4o")

def analyze_request(state: AgentState) -> AgentState:
    """Analyzes the user request."""
    messages = state["messages"]
    response = llm.invoke([
        {"role": "system", "content": """
            Analyze the user request.
            If it is technical, say 'technical'
            If it is general, say 'general'
            Just say one word.
        """},
        {"role": "user", "content": messages[-1]},
    ])
    return {
        "messages": [f"Analysis: {response.content}"],
        "next_action": response.content.strip().lower(),
    }

def handle_technical(state: AgentState) -> AgentState:
    """Handles technical questions."""
    messages = state["messages"]
    response = llm.invoke([
        {"role": "system", "content": "You are a technical expert."},
        {"role": "user", "content": messages[0]},
    ])
    return {"messages": [response.content], "next_action": "done"}

def handle_general(state: AgentState) -> AgentState:
    """Handles general questions."""
    messages = state["messages"]
    response = llm.invoke([
        {"role": "system", "content": "You are a general assistant."},
        {"role": "user", "content": messages[0]},
    ])
    return {"messages": [response.content], "next_action": "done"}

def route(state: AgentState) -> str:
    """Determines the next path."""
    if state["next_action"] == "technical":
        return "technical"
    return "general"

# Build graph
graph = StateGraph(AgentState)
graph.add_node("analyze", analyze_request)
graph.add_node("technical", handle_technical)
graph.add_node("general", handle_general)
graph.set_entry_point("analyze")
graph.add_conditional_edges("analyze", route, {
    "technical": "technical",
    "general": "general",
})
graph.add_edge("technical", END)
graph.add_edge("general", END)
app = graph.compile()
```
When is LangGraph better?
- When your Agent has a complex flow (conditions, loops, branches)
- When you want precise control over how the Agent decides
- When you need state management
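To make the graph idea concrete, here is what the routing pattern boils down to in plain Python: nodes are functions over a shared state, and a conditional edge is just a lookup of which node runs next. This is a toy sketch, not LangGraph's API, and the LLM classification step is stubbed with a keyword check.

```python
# Toy state-graph: nodes are functions, routing is driven by the state.
# NOT LangGraph's API; the LLM classifier is stubbed for illustration.

def analyze(state):
    # Stub: a real implementation would ask an LLM to classify the request.
    text = state["messages"][-1].lower()
    state["next"] = "technical" if "python" in text else "general"
    return state

def technical(state):
    state["messages"].append("Technical answer")
    state["next"] = "done"
    return state

def general(state):
    state["messages"].append("General answer")
    state["next"] = "done"
    return state

NODES = {"analyze": analyze, "technical": technical, "general": general}

def run(state, entry="analyze"):
    node = entry
    while node != "done":
        state = NODES[node](state)
        # Conditional edge: the updated state decides which node runs next.
        node = state["next"] if state["next"] in NODES else "done"
    return state

result = run({"messages": ["How do I profile Python code?"]})
print(result["messages"][-1])  # Technical answer
```

Once you see this loop, LangGraph is essentially the same machinery plus typed state, checkpointing, and streaming, which is exactly the control you pay for.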
CrewAI — When Multiple Agents Must Collaborate
CrewAI is specifically designed for Multi-Agent systems. Each Agent has a role and collaborates with others — like a real team.
```python
from crewai import Agent, Task, Crew, Process

# Define Agents
researcher = Agent(
    role="Researcher",
    goal="Find accurate and up-to-date information",
    backstory="You are an experienced researcher who can "
              "distinguish reliable sources from unreliable ones.",
    verbose=True,
    allow_delegation=False,
)

writer = Agent(
    role="Writer",
    goal="Write engaging and readable articles",
    backstory="You are a professional writer who can "
              "explain complex topics simply and engagingly.",
    verbose=True,
    allow_delegation=False,
)

# Define Tasks
research_task = Task(
    description="Research {topic}. Find at least 5 key points.",
    expected_output="List of key points with sources",
    agent=researcher,
)

write_task = Task(
    description="Based on the research, write a 500-word article.",
    expected_output="Complete article with title and paragraphs",
    agent=writer,
)

# Build Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "AI impact on education"})
```
Pros:
- Simple and intuitive API
- Role definitions feel natural
- Great for multi-step workflows
Cons:
- Still maturing
- Fine-grained control over Agent behavior is difficult
- High token consumption (each Agent calls the LLM separately)
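Conceptually, `Process.sequential` just feeds each task's output into the next agent's prompt, which also explains the token cost: every hop is a fresh LLM call carrying the accumulated context. A stripped-down sketch of that pipeline idea, with the LLM calls stubbed out:

```python
# Conceptual sketch of a sequential crew: each "agent" is a function of
# (role prompt, input text). The LLM call is stubbed with string formatting.

def fake_llm(system: str, user: str) -> str:
    # Stub standing in for a real chat-completion call.
    return f"[{system}] processed: {user}"

def make_agent(role: str):
    def agent(task_input: str) -> str:
        return fake_llm(role, task_input)
    return agent

researcher = make_agent("Researcher")
writer = make_agent("Writer")

def run_sequential(agents, topic):
    output = topic
    for agent in agents:
        # Each task receives the previous task's output as its input.
        output = agent(output)
    return output

result = run_sequential([researcher, writer], "AI impact on education")
print(result)
```

CrewAI adds role prompting, delegation, and task templating on top, but the data flow of a sequential crew is this pipeline.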
AutoGen (Microsoft)
AutoGen is from Microsoft and focuses on conversations between Agents. Agents chat with each other and reach conclusions through discussion.
```python
from autogen import ConversableAgent

assistant = ConversableAgent(
    name="assistant",
    system_message="You are a Python developer. Write clean, well-documented code.",
    llm_config={"model": "gpt-4o"},
)

reviewer = ConversableAgent(
    name="reviewer",
    system_message="You are a code reviewer. Review the code and report issues. "
                   "If the code is good, say 'APPROVED'.",
    llm_config={"model": "gpt-4o"},
)

user_proxy = ConversableAgent(
    name="user",
    llm_config=False,  # the proxy only relays messages; it never calls the LLM
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "APPROVED" in msg.get("content", ""),
)

# Start conversation
user_proxy.initiate_chat(
    assistant,
    message="Write a function that checks if a string is a palindrome.",
    max_turns=6,
)
```
Special feature:
Agents can discuss back and forth over multiple rounds. For example, the developer writes code, the reviewer finds issues, the developer fixes them — until both are satisfied.
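The underlying pattern is simple: alternate turns between two agents and stop when a termination phrase appears, exactly what `is_termination_msg` checks above. Here is a toy version of that loop with both agents stubbed (no LLM involved), just to show the control flow:

```python
# Toy sketch of AutoGen's conversation pattern: two agents exchange messages
# until a termination phrase appears. Replies are stubbed; no LLM is called.

def developer(history):
    # Stub: revises the "code" once feedback arrives, otherwise submits a draft.
    if any("issue" in m for m in history):
        return "revised code v2"
    return "draft code v1"

def reviewer(history):
    # Stub: flags the first draft, approves the revision.
    last = history[-1]
    return "APPROVED" if "v2" in last else "found an issue: missing docstring"

def chat(max_turns=6):
    history = ["Write a palindrome checker."]
    for turn in range(max_turns):
        speaker = developer if turn % 2 == 0 else reviewer
        message = speaker(history)
        history.append(message)
        if "APPROVED" in message:  # termination check, like is_termination_msg
            break
    return history

transcript = chat()
print(transcript[-1])  # APPROVED
```

AutoGen's value is that the "speakers" are real LLM agents with memory and tool access, but the turn-taking and termination logic is this loop.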
Semantic Kernel (Microsoft)
Semantic Kernel is another framework from Microsoft that is more enterprise-oriented. If you work in the Microsoft ecosystem (Azure, .NET), it is a good choice.
When is it suitable?
- Enterprise projects on Azure
- Teams working with C# or .NET
- When you need official Microsoft support
Overall Comparison — Which One to Choose?
Let me summarize:
LangChain: For simple projects where you want to start quickly. But for complex projects, it can become a headache.
LangGraph: When your Agent has complex flows and you want precise control.
CrewAI: When you have multiple specialist Agents that need to collaborate.
AutoGen: When you want Agents to discuss and exchange ideas with each other.
Semantic Kernel: When you are in the Microsoft ecosystem.
No framework: When your project is simple, no framework is needed. Pure Python + model API = enough.
When NOT to Use a Framework?
This section is important. Frameworks are not always the answer:
- Simple project: If you just want a simple chatbot, a framework is unnecessary overhead.
- Learning: If you are learning about Agents, build without a framework first to understand the concepts.
- Critical performance: Frameworks add abstraction layers that slow things down.
- Need for complete control: If you want to know exactly what every line of code does.
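To back up the "pure Python + model API = enough" claim, here is roughly how small a framework-free tool-calling loop can be. The model call is stubbed so the sketch is self-contained; in practice you would swap `fake_model` for an `openai` (or any other) chat call, and the tool name and JSON format here are illustrative choices, not a standard.

```python
import json

# Minimal no-framework agent loop. The "model" is stubbed: it returns either
# a tool call (as JSON) or a final answer, which is all a real LLM does here.

def fake_model(messages):
    last = messages[-1]["content"]
    if last.startswith("TOOL RESULT:"):
        return f"The answer is {last.removeprefix('TOOL RESULT: ')}."
    return json.dumps({"tool": "calculate", "args": {"expression": "25 * 4"}})

TOOLS = {"calculate": lambda expression: str(eval(expression))}  # demo only; eval is unsafe

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        try:
            call = json.loads(reply)  # the model asked for a tool
        except json.JSONDecodeError:
            return reply  # plain text: final answer
        result = TOOLS[call["tool"]](**call["args"])
        messages.append({"role": "user", "content": f"TOOL RESULT: {result}"})
    return "Step limit reached"

print(run_agent("What is 25 times 4?"))  # The answer is 100.
```

This is the whole pattern every framework in this episode wraps: call the model, execute any requested tool, feed the result back, repeat. Knowing it makes every framework easier to debug.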
Summary
Frameworks are useful tools but not essential:
- LangChain and LangGraph for large ecosystems and complex flows
- CrewAI for simple Multi-Agent setups
- AutoGen for conversations between Agents
- No framework for simple projects and learning
- No framework is “the best” — it depends on your project
Next episode, we discuss Agent security — how to prevent dangerous Agent behavior.