...

Extending Capabilities with Tools

Learn how to extend agent capabilities by integrating built-in tools in Llama Stack.

Language models are powerful, but they’re not all-knowing. They cannot execute code, search current information, or interact with structured systems out of the box. That’s where tools come in.

Llama Stack allows agents to call external tools as part of their reasoning process. These tools may perform computations, look things up, or operate on user data. The model itself decides when and how to call them; tools are not manually triggered by developers. They are dynamically selected based on the model’s reasoning, then integrated into the final response.
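This decide-call-integrate flow can be sketched as a minimal dispatch loop. Note that the message format, the `model_step` callback, and the tool registry below are illustrative assumptions, not the actual Llama Stack API; they only show the control flow the agent runtime handles for you.

```python
# Illustrative sketch of an agent tool-call loop. The dict-based message
# format and helper names are hypothetical, NOT the Llama Stack API.

def execute_tool(name, arguments, registry):
    """Run a registered tool deterministically and return its result."""
    return registry[name](**arguments)

def agent_turn(model_step, user_message, registry):
    """Drive one agent turn: the model may request tool calls before answering."""
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = model_step(messages)      # the model decides: answer or call a tool
        if reply.get("tool_call") is None:
            return reply["content"]       # final answer for the user
        call = reply["tool_call"]
        result = execute_tool(call["name"], call["arguments"], registry)
        # Feed the tool result back so the model can use it in its response.
        messages.append({"role": "tool", "name": call["name"], "content": result})
```

In Llama Stack, this loop lives inside the agent runtime; the developer only registers tools and lets the model drive the calls.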

In this lesson, we’ll learn how tools work in Llama Stack, how to register and use the built-in code interpreter, and how to define your own custom tool using a simple Python function.

Why tools matter

LLMs are good at generating text, but on their own they don't have access to:

  • Live information, such as current data fetched from APIs or the web

  • Structured computation, such as arithmetic, data processing, or filtering

  • External systems, such as databases or application state

For example, try asking a model to compute 93 * (4 + 27.3)^2. It might give a reasonable-sounding answer, but it’s likely incorrect unless it has tool access.
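That expression has one exact value, which a code-execution tool can produce deterministically instead of the model approximating it token by token:

```python
# Deterministic evaluation of 93 * (4 + 27.3)^2 -- the kind of step an
# agent can offload to a code-execution tool rather than estimate.
result = 93 * (4 + 27.3) ** 2
print(result)  # approximately 91111.17 (up to float rounding)
```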

Tools allow the model to offload certain steps of reasoning to deterministic systems, like Python code execution, then use the result in its final response. This makes agents more accurate, more capable, and more useful in complex domains.

Tool types in Llama Stack

Llama Stack supports multiple types of tools, all handled via the Tools API and integrated into agents:

Built-in tools

Packaged with Llama Stack, these ...