Introduction: Let Us Build Something Real
So far we have covered a lot of theory. Now it is time to get our hands dirty and build a real Agent that people can actually use.
In this episode, we build a smart Telegram assistant. Not a simple chatbot, but a real Agent with tools, memory, and the ability to do useful things. Step by step, from zero to deployment.
What Are We Building?
A Telegram assistant that:
- Has natural conversation
- Can check the weather
- Can perform mathematical calculations
- Has memory and remembers previous conversations
- Works in both groups and private chats
Step 1: Setup and Installation
# Install libraries
# pip install python-telegram-bot openai redis
# Project structure:
# telegram_agent/
# |-- bot.py (main file)
# |-- agent.py (Agent logic)
# |-- tools.py (tools)
# |-- memory.py (memory)
# |-- config.py (configuration)
# +-- requirements.txt
Step 2: Configuration
# config.py
import os
TELEGRAM_TOKEN = os.getenv("TELEGRAM_TOKEN")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
WEATHER_API_KEY = os.getenv("WEATHER_API_KEY") # OpenWeatherMap
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")
# Model
MODEL_NAME = "gpt-4o-mini"
MAX_TOKENS = 1000
# Limits
MAX_MESSAGES_PER_MINUTE = 10
MAX_HISTORY_LENGTH = 20
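Since every secret comes from the environment, it helps to fail fast at startup when one is missing rather than crash later on a `None` token. A minimal sketch (the `require_env` helper is an illustration, not part of the project files above):

```python
import os

def require_env(*names: str) -> dict:
    """Return the requested environment variables, raising if any are missing."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}
```

Calling `require_env("TELEGRAM_TOKEN", "OPENAI_API_KEY")` at the top of `config.py` turns a confusing runtime failure into a clear startup error.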
Step 3: Tools
# tools.py
import json
import httpx
from config import WEATHER_API_KEY
# Tool definitions for OpenAI
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name (in English)",
                    }
                },
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Calculate a math expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression",
                    }
                },
                "required": ["expression"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time",
            "parameters": {
                "type": "object",
                "properties": {
                    "timezone": {
                        "type": "string",
                        "description": "Timezone",
                        "default": "Asia/Tehran",
                    }
                },
            },
        },
    },
]
# Tool implementations
async def get_weather(city: str) -> str:
    """Gets weather from the OpenWeatherMap API."""
    url = "https://api.openweathermap.org/data/2.5/weather"
    params = {
        "q": city,
        "appid": WEATHER_API_KEY,
        "units": "metric",
    }
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.get(url, params=params)
            # Check the status code before parsing the body:
            # a 401 (bad key) or 500 is not a "city not found".
            if resp.status_code == 404:
                return f"Error: City '{city}' not found."
            if resp.status_code != 200:
                return f"Error: weather service returned HTTP {resp.status_code}."
            data = resp.json()
            temp = data["main"]["temp"]
            feels = data["main"]["feels_like"]
            desc = data["weather"][0]["description"]
            humidity = data["main"]["humidity"]
            return (
                f"City: {city}\n"
                f"Temperature: {temp} C (feels like: {feels} C)\n"
                f"Condition: {desc}\n"
                f"Humidity: {humidity}%"
            )
        except Exception as e:
            return f"Error fetching weather: {e}"
def calculate(expression: str) -> str:
    """Calculates a math expression (restricted to a safe character set)."""
    allowed = set("0123456789+-*/().% ")
    if not all(c in allowed for c in expression):
        return "Error: only simple math expressions allowed."
    try:
        # The whitelist above already blocks names and attribute access;
        # an empty builtins dict removes what is left of eval's reach.
        result = eval(expression, {"__builtins__": {}}, {})
        return f"{expression} = {result}"
    except Exception as e:
        return f"Calculation error: {e}"
def get_current_time(timezone: str = "Asia/Tehran") -> str:
    """Current date and time."""
    from datetime import datetime
    import pytz
    try:
        tz = pytz.timezone(timezone)
        now = datetime.now(tz)
        return now.strftime("%Y-%m-%d %H:%M:%S %Z")
    except pytz.UnknownTimeZoneError:
        return "Invalid timezone."
# Map tool name to function
TOOL_FUNCTIONS = {
    "get_weather": get_weather,
    "calculate": calculate,
    "get_current_time": get_current_time,
}

async def execute_tool(name: str, args: dict) -> str:
    """Executes a tool, awaiting it if it is a coroutine function."""
    import asyncio
    func = TOOL_FUNCTIONS.get(name)
    if not func:
        return f"Tool '{name}' does not exist."
    if asyncio.iscoroutinefunction(func):
        return await func(**args)
    return func(**args)
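If you want something stricter than the character whitelist in `calculate`, a common alternative is to walk the expression's AST and allow only arithmetic nodes. A sketch of that approach (a hypothetical `safe_eval`, not part of the project files above):

```python
import ast
import operator

# Only plain arithmetic operators are allowed.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Mod: operator.mod, ast.Pow: operator.pow,
    ast.USub: operator.neg, ast.UAdd: operator.pos,
}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expression, mode="eval"))
```

Anything that is not a number or an arithmetic operator, such as a function call, raises `ValueError` before it is ever evaluated.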
Step 4: Memory
# memory.py
import json
import redis.asyncio as redis
from config import REDIS_URL, MAX_HISTORY_LENGTH
class Memory:
    """Conversation memory backed by Redis."""

    def __init__(self):
        self.redis = redis.from_url(REDIS_URL)

    def _key(self, chat_id: int) -> str:
        return f"chat:{chat_id}:history"

    async def get_history(self, chat_id: int) -> list:
        """Returns conversation history."""
        data = await self.redis.get(self._key(chat_id))
        if data:
            return json.loads(data)
        return []

    async def add_message(self, chat_id: int, role: str, content: str):
        """Adds a message to history."""
        history = await self.get_history(chat_id)
        history.append({"role": role, "content": content})
        # Only keep the last N messages
        if len(history) > MAX_HISTORY_LENGTH:
            history = history[-MAX_HISTORY_LENGTH:]
        await self.redis.set(
            self._key(chat_id),
            json.dumps(history, ensure_ascii=False),
            ex=86400 * 7,  # 7 days retention
        )

    async def clear(self, chat_id: int):
        """Clears history."""
        await self.redis.delete(self._key(chat_id))
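For local development and unit tests it is handy to have a drop-in stand-in with the same interface that needs no Redis server. A sketch (the `InMemoryMemory` class is an illustration, not part of the project files above; `MAX_HISTORY_LENGTH` mirrors the value in config.py):

```python
import asyncio

MAX_HISTORY_LENGTH = 20  # mirrors config.py

class InMemoryMemory:
    """Same interface as Memory, but history lives in a plain dict."""

    def __init__(self):
        self._store: dict[int, list] = {}

    async def get_history(self, chat_id: int) -> list:
        return list(self._store.get(chat_id, []))

    async def add_message(self, chat_id: int, role: str, content: str):
        history = self._store.setdefault(chat_id, [])
        history.append({"role": role, "content": content})
        # Keep only the most recent messages, like the Redis version.
        del history[:-MAX_HISTORY_LENGTH]

    async def clear(self, chat_id: int):
        self._store.pop(chat_id, None)
```

Because the methods are `async` with the same signatures, agent.py can use either class without changes.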
Step 5: Agent Logic
# agent.py
from openai import AsyncOpenAI
from config import OPENAI_API_KEY, MODEL_NAME, MAX_TOKENS
from tools import TOOLS, execute_tool
from memory import Memory
client = AsyncOpenAI(api_key=OPENAI_API_KEY)
memory = Memory()
SYSTEM_PROMPT = """You are a smart assistant.
Rules:
- Be concise and helpful
- If you do not know, say you do not know
- Use tools for real-world tasks
- Never give fake information
Your tools:
- get_weather: city weather
- calculate: calculator
- get_current_time: date and time
"""
async def process_message(chat_id: int, user_message: str, user_name: str) -> str:
    """Processes a user message and returns a response."""
    # Conversation history
    history = await memory.get_history(chat_id)

    # Build the message list
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})

    # Call the LLM
    response = await client.chat.completions.create(
        model=MODEL_NAME,
        messages=messages,
        tools=TOOLS,
        max_tokens=MAX_TOKENS,
    )
    msg = response.choices[0].message

    # If the model wants to use a tool
    if msg.tool_calls:
        import json

        # Execute all requested tools
        tool_results = []
        for tool_call in msg.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = await execute_tool(tool_call.function.name, args)
            tool_results.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result,
            })

        # Call the LLM again with the tool results
        messages.append({
            "role": "assistant",
            "content": msg.content,
            "tool_calls": [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                }
                for tc in msg.tool_calls
            ],
        })
        messages.extend(tool_results)
        response = await client.chat.completions.create(
            model=MODEL_NAME,
            messages=messages,
            max_tokens=MAX_TOKENS,
        )
        final_content = response.choices[0].message.content
    else:
        final_content = msg.content

    # Save to memory
    await memory.add_message(chat_id, "user", user_message)
    await memory.add_message(chat_id, "assistant", final_content)
    return final_content
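Note that process_message handles exactly one round of tool calls: if the model asks for another tool after seeing the first results, that request is never executed. A generalized loop can keep going until the model answers in plain text. A sketch (the `run_tool_loop` helper is an illustration; the client and executor are passed in so it can be exercised against a stub):

```python
import json

async def run_tool_loop(client, messages, tools, execute, model, max_rounds=5):
    """Call the model repeatedly, executing tool calls until it returns plain text."""
    for _ in range(max_rounds):
        response = await client.chat.completions.create(
            model=model, messages=messages, tools=tools,
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content
        # Echo the assistant's tool request back into the transcript...
        messages.append({
            "role": "assistant",
            "content": msg.content,
            "tool_calls": [
                {"id": tc.id, "type": "function",
                 "function": {"name": tc.function.name,
                              "arguments": tc.function.arguments}}
                for tc in msg.tool_calls
            ],
        })
        # ...then append one tool message per call.
        for tc in msg.tool_calls:
            result = await execute(tc.function.name, json.loads(tc.function.arguments))
            messages.append({
                "role": "tool", "tool_call_id": tc.id, "content": result,
            })
    return "Sorry, I could not finish that request."
```

The `max_rounds` cap prevents a model that keeps requesting tools from looping forever.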
Step 6: The Telegram Bot
# bot.py
import logging
from telegram import Update
from telegram.ext import (
Application,
CommandHandler,
MessageHandler,
filters,
ContextTypes,
)
from config import TELEGRAM_TOKEN
from agent import process_message
from memory import Memory
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
memory = Memory()
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/start command"""
    await update.message.reply_text(
        "Hello! I am a smart assistant.\n\n"
        "I can:\n"
        "- Answer your questions\n"
        "- Tell you the weather\n"
        "- Do math calculations\n\n"
        "Ask me anything!"
    )

async def clear(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """/clear command - clear memory"""
    await memory.clear(update.effective_chat.id)
    await update.message.reply_text("Memory cleared! Starting fresh.")

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    """Process text messages."""
    if not update.message or not update.message.text:
        return

    chat_id = update.effective_chat.id
    user_message = update.message.text
    user_name = update.effective_user.first_name

    # In groups, only respond when mentioned
    if update.effective_chat.type in ["group", "supergroup"]:
        bot_username = context.bot.username
        if f"@{bot_username}" not in user_message:
            return
        user_message = user_message.replace(f"@{bot_username}", "").strip()

    # Show typing indicator
    await context.bot.send_chat_action(chat_id=chat_id, action="typing")

    try:
        response = await process_message(chat_id, user_message, user_name)
        await update.message.reply_text(response)
    except Exception as e:
        logger.error(f"Error: {e}")
        await update.message.reply_text("Sorry, something went wrong. Please try again.")

def main():
    """Start the bot."""
    app = Application.builder().token(TELEGRAM_TOKEN).build()

    # Commands
    app.add_handler(CommandHandler("start", start))
    app.add_handler(CommandHandler("clear", clear))

    # Text messages
    app.add_handler(
        MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message)
    )

    logger.info("Bot is running...")
    app.run_polling()

if __name__ == "__main__":
    main()
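One practical detail the bot above glosses over: Telegram rejects messages longer than 4096 characters, and an LLM answer can exceed that. A small splitting helper (a sketch; `split_message` is not part of bot.py above):

```python
TELEGRAM_LIMIT = 4096  # Telegram's maximum message length in characters

def split_message(text: str, limit: int = TELEGRAM_LIMIT) -> list[str]:
    """Split text into chunks that fit Telegram's message size limit,
    preferring to break at a newline when one is available."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit  # no newline in range: hard cut
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

In `handle_message`, reply in a loop: `for chunk in split_message(response): await update.message.reply_text(chunk)`.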
Step 7: Managing Groups vs Private Chats
An important note: the bot’s behavior in groups should differ from private chats.
Private chat: Respond to every message. Memory is specific to that user.
Group: Only respond when mentioned (@bot_name). Memory is shared among all group members. We implemented this in the code above — notice how handle_message checks whether we are in a group and whether the bot was mentioned.
Step 8: Deployment
# Dockerfile
# FROM python:3.11-slim
# WORKDIR /app
# COPY requirements.txt .
# RUN pip install -r requirements.txt
# COPY . .
# CMD ["python", "bot.py"]
# requirements.txt
# python-telegram-bot==21.0
# openai==1.30.0
# redis==5.0.0
# httpx==0.27.0
# pytz==2024.1
# docker-compose.yml
# version: "3.8"
# services:
# bot:
# build: .
# env_file: .env
# depends_on:
# - redis
# restart: always
# redis:
# image: redis:7-alpine
# volumes:
# - redis_data:/data
# volumes:
# redis_data:
# Run:
# docker-compose up -d
Suggested Improvements
This is the base version. A few ideas to make it better:
- Rate limiting: Prevent spam — max 10 messages per minute per user
- More tools: Web search, translation, link summarization
- Image support: With Vision models, also analyze images
- /help command: Complete guide to capabilities
- Logging and monitoring: Record all conversations for improvement
- Guardrails: From the previous episode, add security layers
Summary
- Building a Telegram Agent combines python-telegram-bot + OpenAI API + Redis
- Tools show the real power of the Agent
- Memory with Redis is persistent and fast
- Managing groups vs private chats matters
- Deployment with Docker is the simplest approach
Next episode: Testing and Debugging Agents — how to make sure your Agent works correctly.