Integrate Tools with Agents

Learn how to integrate tool calling with LLMs and structure their responses using Pydantic validation.

So now that we’ve got our LLM speaking fluent Pydantic—clear, structured, and reliable—what’s next? Well, structure’s just the start. The real fun begins when you let your model do things, not just think or talk, but act. That’s where tools come in.

Imagine this: the model doesn’t know the current weather. It can’t look up a stock price. It can’t query your internal knowledge base. But you can give it those powers by letting it call functions, trigger APIs, or run snippets of custom logic. These are tools. Once your model can use tools, it’s no longer just answering questions; instead, it’s solving problems.

Let’s break down how that works, and how to build your own tools from scratch.

How does function calling actually work?

Alright, time to open the toolbox. Function calling is one of the most important mechanics in modern agentic workflows. However, not every model ships with native function-calling support, so double-check your LLM or provider’s docs before you rely on it.

Function calling allows the model to step beyond passive text generation and actively interact with the external world—our application, APIs, and custom code. Here’s how it works:

  1. You define real Python functions and register them as tools that the LLM can “see.”

  2. Based on the prompt and the conversation, the model might decide to call one of these functions instead of (or in addition to) just generating plain text.

That’s the core mechanic. You’re giving it options, and it chooses what to use, like a player scanning their hand in a card game. This is the backbone of agentic systems. When we say “tools” throughout this course, we’re talking about this specific capability: letting the LLM call into your code to extend what it knows and what it can do. Whether it’s a database query, a search engine lookup, or sending a Slack message, it’s all just a tool to the model. When making a request to generate a model response, we enable tool access by passing tool definitions via the tools parameter in our API call. This lets the model know what’s available to it.
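To make this concrete, here’s a minimal sketch of registering a tool, assuming the OpenAI Python SDK; the get_weather function, its stubbed return value, and the model name are hypothetical stand-ins for your own setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_weather(city: str) -> str:
    """A real Python function we want the model to be able to call."""
    return f"It is 18°C and cloudy in {city}."  # hypothetical stub

# The tool definition is what the model actually "sees": a name,
# a description, and a JSON Schema describing the parameters.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "The city name."},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any tool-capable model works
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    tools=tools,  # this is what makes the tools visible to the model
)
```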

But here’s the key idea, and it’s one many folks miss at first: The LLM does not actually call your function!

Let’s say that again: the model doesn’t invoke the code directly. What it does is inspect the list of tools you’ve made available: their names, parameters, and descriptions. If it thinks one is appropriate based on the conversation, it replies with the name of that function and a set of arguments for it. That’s it. It says, “Hey, based on what’s going on, I think you should run get_weather(city='Seattle').” It’s still up to you, the developer, to wire that into your loop and actually run the function with those arguments. The model suggests the action, and your code executes it. That’s the fundamental dynamic. The model acts as a ...
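Continuing the sketch above, here’s roughly what the developer’s side of that loop looks like. The WeatherArgs Pydantic model is our own addition for validating the suggested arguments; the API itself only hands back a JSON string:

```python
from pydantic import BaseModel

class WeatherArgs(BaseModel):
    """Our own validation layer for the model's suggested arguments."""
    city: str

message = response.choices[0].message

if message.tool_calls:  # the model decided a tool is appropriate
    for tool_call in message.tool_calls:
        if tool_call.function.name == "get_weather":
            # Arguments arrive as a JSON string; parse and validate them.
            args = WeatherArgs.model_validate_json(tool_call.function.arguments)
            result = get_weather(args.city)  # *we* run the function, not the model

            # Feed the result back so the model can compose its final answer.
            followup = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "user", "content": "What's the weather in Seattle?"},
                    message,  # the assistant turn that proposed the tool call
                    {"role": "tool", "tool_call_id": tool_call.id, "content": result},
                ],
                tools=tools,
            )
            print(followup.choices[0].message.content)
else:
    print(message.content)  # the model answered with plain text instead
```

Validating with Pydantic before executing is optional, but it catches malformed or unexpected arguments before they ever reach your real logic.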