agno-agi/agno

Overview

Agno is a lightweight framework for building multi-modal Agents.

Simple, Fast, and Agnostic

Agno is designed with three core principles:

  • Simplicity: No graphs, chains, or convoluted patterns — just pure python.
  • Uncompromising Performance: Blazing fast agents with a minimal memory footprint.
  • Truly Agnostic: Any model, any provider, any modality. Future-proof agents.

Key features

Here's why you should build Agents with Agno:

  • Lightning Fast: Agent creation is 6000x faster than LangGraph (see performance).
  • Model Agnostic: Use any model, any provider, no lock-in.
  • Multi Modal: Native support for text, image, audio and video.
  • Multi Agent: Delegate tasks across a team of specialized agents.
  • Memory Management: Store user sessions and agent state in a database.
  • Knowledge Stores: Use vector databases for Agentic RAG or dynamic few-shot.
  • Structured Outputs: Make Agents respond with structured data (see the sketch after this list).
  • Monitoring: Track agent sessions and performance in real-time on agno.com.
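For example, structured outputs can be requested by passing a Pydantic model to the Agent. This is a minimal sketch, assuming the response_model parameter described in the Agno docs and an exported OPENAI_API_KEY:

from pydantic import BaseModel

from agno.agent import Agent
from agno.models.openai import OpenAIChat

class NewsSummary(BaseModel):
    headline: str
    summary: str

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You summarize news stories.",
    response_model=NewsSummary,  # ask the Agent to respond with a NewsSummary
)

run = agent.run("Summarize one recent AI news story.")
print(run.content)  # a NewsSummary instance rather than free-form text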

Installation

pip install -U agno

What are Agents?

Agents are autonomous programs that use language models to achieve tasks. They solve problems by running tools and by accessing knowledge and memory to improve their responses.

Instead of a rigid binary definition, let's think of Agents in terms of agency and autonomy.

  • Level 0: Agents with no tools (basic inference tasks).
  • Level 1: Agents with tools for autonomous task execution.
  • Level 2: Agents with knowledge, combining memory and reasoning.
  • Level 3: Teams of agents collaborating on complex workflows.

Example - Basic Agent

from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    markdown=True
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)

To run the agent, install dependencies and export your OPENAI_API_KEY.

pip install agno openai

export OPENAI_API_KEY=sk-xxxx

python basic_agent.py

View this example in the cookbook

Example - Agent with tools

This basic agent will obviously make up a story; let's give it a tool to search the web.

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)

Install dependencies and run the Agent:

pip install duckduckgo-search

python agent_with_tools.py

Now you should see a much more relevant result.

View this example in the cookbook

Example - Agent with knowledge

Agents can store knowledge in a vector database and use it for RAG or dynamic few-shot learning.

Agno agents use Agentic RAG by default, which means they will search their knowledge base for the specific information they need to achieve their task.

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.embedder.openai import OpenAIEmbedder
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.vectordb.lancedb import LanceDb, SearchType

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are a Thai cuisine expert!",
    instructions=[
        "Search your knowledge base for Thai recipes.",
        "If the question is better suited for the web, search the web to fill in gaps.",
        "Prefer the information in your knowledge base over the web results."
    ],
    knowledge=PDFUrlKnowledgeBase(
        urls=["https://agno-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
        vector_db=LanceDb(
            uri="tmp/lancedb",
            table_name="recipes",
            search_type=SearchType.hybrid,
            embedder=OpenAIEmbedder(id="text-embedding-3-small"),
        ),
    ),
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True
)

# Comment out after the knowledge base is loaded
if agent.knowledge is not None:
    agent.knowledge.load()

agent.print_response("How do I make chicken and galangal in coconut milk soup", stream=True)
agent.print_response("What is the history of Thai curry?", stream=True)

Install dependencies and run the Agent:

pip install lancedb tantivy pypdf duckduckgo-search

python agent_with_knowledge.py

View this example in the cookbook

Example - Multi Agent Teams

Agents work best when they have a singular purpose, a narrow scope and a small number of tools. When the number of tools grows beyond what the language model can handle or the tools belong to different categories, use a team of agents to spread the load.

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.yfinance import YFinanceTools

web_agent = Agent(
    name="Web Agent",
    role="Search the web for information",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    instructions="Always include sources",
    show_tool_calls=True,
    markdown=True,
)

finance_agent = Agent(
    name="Finance Agent",
    role="Get financial data",
    model=OpenAIChat(id="gpt-4o"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True)],
    instructions="Use tables to display data",
    show_tool_calls=True,
    markdown=True,
)

agent_team = Agent(
    team=[web_agent, finance_agent],
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Always include sources", "Use tables to display data"],
    show_tool_calls=True,
    markdown=True,
)

agent_team.print_response("What's the market outlook and financial performance of AI semiconductor companies?", stream=True)

Install dependencies and run the Agent team:

pip install duckduckgo-search yfinance

python agent_team.py

View this example in the cookbook

Performance

Agno is designed for high performance agentic systems:

  • Agent instantiation: <5μs on average (5000x faster than LangGraph).
  • Memory footprint: <0.01 MiB on average (50x less memory than LangGraph).

Tested on an Apple M4 MacBook Pro.

While an Agent's performance is bottlenecked by inference, we must do everything possible to minimize execution time, reduce memory usage, and parallelize tool calls. These numbers may seem minimal, but they add up, even at medium scale.

Instantiation time

Let's measure the time it takes for an Agent with 1 tool to start up. We'll run the evaluation 1000 times to get a baseline measurement.
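As a rough sketch of what this measures (not the actual eval script; the setup is illustrative), you can time Agent construction yourself:

from timeit import timeit

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

def create_agent():
    # Build an Agent with a single tool, mirroring the instantiation eval
    return Agent(model=OpenAIChat(id="gpt-4o"), tools=[DuckDuckGoTools()])

runs = 1000
avg_seconds = timeit(create_agent, number=runs) / runs
print(f"Average instantiation time over {runs} runs: {avg_seconds * 1e6:.2f} microseconds")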

You should run the evaluation yourself on your own machine; please do not take these results at face value.

# Setup virtual environment
./scripts/perf_setup.sh
source .venvs/perfenv/bin/activate
# OR Install dependencies manually
# pip install openai agno langgraph langchain_openai

# Agno
python evals/performance/instantiation_with_tool.py

# LangGraph
python evals/performance/other/langgraph_instantiation.py

The following evaluation was run on an Apple M4 MacBook Pro, but we'll soon be moving it to a GitHub Actions runner for consistency.

LangGraph is on the right; we start it first to give it a head start.

Agno is on the left. Notice how it finishes before LangGraph gets halfway through the runtime measurement, and hasn't even started the memory measurement. That's how fast Agno is.

(Video: agno_vs_langgraph_perf.mp4)

Dividing the average time of a LangGraph Agent by the average time of an Agno Agent:

0.020526s / 0.000002s ~ 10,263

In this particular run, Agno Agent instantiation is roughly 10,000 times faster than LangGraph Agent instantiation. Sure, the runtime will be dominated by inference, but these numbers add up as the number of Agents grows.

The numbers continue to favor Agno as the number of tools grows and as we add memory and knowledge stores.

Memory usage

To measure memory usage, we use the tracemalloc library. We first calculate a baseline memory usage by running an empty function, then run the Agent 1000 times and calculate the difference. This gives a (reasonably) isolated measurement of the memory usage of the Agent.
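A minimal sketch of that approach (not the actual eval script) looks roughly like this:

import tracemalloc

from agno.agent import Agent
from agno.models.openai import OpenAIChat

def avg_allocated_mib(fn, runs: int = 1000) -> float:
    # Average memory (in MiB) allocated per call, measured with tracemalloc
    results = []
    tracemalloc.start()
    for _ in range(runs):
        results.append(fn())  # keep references so allocations aren't freed
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current / runs / (1024 * 1024)

baseline = avg_allocated_mib(lambda: None)  # empty-function baseline
agent_mem = avg_allocated_mib(lambda: Agent(model=OpenAIChat(id="gpt-4o")))
print(f"~{agent_mem - baseline:.4f} MiB per Agent")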

We recommend running the evaluation yourself on your own machine, and digging into the code to see how it works. If we've made a mistake, please let us know.

Dividing the average memory usage of a LangGraph Agent by the average memory usage of an Agno Agent:

0.137273 / 0.002528 ~ 54.3

LangGraph Agents use ~50x more memory than Agno Agents. In our opinion, memory usage is a much more important metric than instantiation time. As we start running thousands of Agents in production, these numbers directly affect the cost of running them.

Conclusion

Agno agents are designed for high performance, and while we do share some benchmarks against other frameworks, we should be mindful that accuracy and reliability are more important than speed.

We'll be publishing accuracy and reliability benchmarks running on GitHub Actions in the coming weeks. Given that each framework is different and we won't be able to tune their performance the way we do with Agno, future benchmarks will compare Agno only against itself.

Cursor Setup

When building Agno agents, using the Agno docs as a documentation source in Cursor is a great way to speed up your development.

  1. In Cursor, go to the settings or preferences section.
  2. Find the section to manage documentation sources.
  3. Add https://docs.agno.com to the list of documentation URLs.
  4. Save the changes.

Now, Cursor will have access to the Agno documentation.

Documentation, Community & More examples

Contributions

We welcome contributions; read our contributing guide to get started.

Telemetry

Agno logs which model an agent used so we can prioritize updates to the most popular providers. You can disable this by setting AGNO_TELEMETRY=false in your environment.
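For example:

export AGNO_TELEMETRY=false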

⬆️ Back to Top