Tool Use & Function Calling
How AI agents interact with the world through tools — from API calls to database queries to file operations.
Learning Objectives
- Understand how LLMs decide when and which tools to call
- Define tool schemas that guide accurate function selection
- Trace the complete tool execution lifecycle from intent to result
- Apply advanced patterns: parallel calls, chaining, and error recovery
- Map business capabilities to tool integrations
- Implement tool integrations using modern AI SDKs
From Agents to Tool Use
You've seen the agent loop — observe, think, act. But what does "act" actually mean? In practice, agents act by calling tools: searching databases, hitting APIs, reading files, or sending messages.
This section dives deep into tool use — the mechanism that bridges an LLM's intelligence with real-world capabilities.
What Is Tool Use?
A Large Language Model (LLM) on its own can only generate text. It cannot check your calendar, query a database, or send an email. Tool use is the mechanism that gives an agent hands — the ability to act on the world, not just describe it.
The Core Idea
When an LLM is given a set of tool definitions (name, description, parameter schema), it can decide mid-conversation that it needs to call one. Instead of generating a prose answer, it outputs a structured tool call request — a JSON object naming the tool and providing arguments. Your application intercepts this, executes the real function, and feeds the result back to the model so it can continue reasoning.
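The exact wire format differs between providers, but the shape of the exchange is roughly as follows. This is a minimal sketch; the field names and the getWeather stub are illustrative, not any provider's actual API.

```ts
// The real function the model cannot run itself (stubbed for illustration).
async function getWeather(args: { city: string }) {
  return { city: args.city, tempC: 18, condition: "cloudy" };
}

// 1. Instead of prose, the model emits a structured tool call request:
const toolCall = { name: "get_weather", arguments: { city: "Berlin" } };

// 2. Your application intercepts the request and executes the real function:
const result = await getWeather(toolCall.arguments);

// 3. The result goes back to the model as a tool message so it can keep
//    reasoning and produce the final answer for the user.
console.log({ role: "tool", name: toolCall.name, content: result });
```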
Why This Matters
Before tool use, AI assistants could only answer from their training data. With tool use, agents can:
- Read live data (APIs, databases, file systems)
- Write changes (create records, send messages, update configs)
- Compute deterministically (run code, perform calculations)
- Orchestrate workflows (trigger CI/CD, approve requests)
The Magic Moment
The key insight is that the LLM chooses when to call a tool and which arguments to provide. You do not hard-code if/else logic. The model reads the user's intent, matches it against available tool descriptions, and emits the call. This makes agents flexible — add a new tool and the agent can use it without rewriting any routing logic.
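To make that concrete, here is a hypothetical sketch of the host side: a registry of implementations keyed by tool name, executed by lookup rather than by intent-specific if/else branches.

```ts
// Hypothetical tool registry: adding a capability means adding an entry,
// not writing new routing logic. The model picks the name and the arguments.
const tools: Record<string, (args: any) => Promise<unknown>> = {
  search_documents: async ({ query }) => [`document matching "${query}"`],
  get_weather: async ({ city }) => ({ city, tempC: 18 }),
  // add send_email here and the agent can start using it with no other changes
};

async function executeToolCall(call: { name: string; arguments: any }) {
  const impl = tools[call.name];
  if (!impl) return { error: `Unknown tool: ${call.name}` };
  return impl(call.arguments);
}
```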
Tool use is what separates a chatbot from an agent. A chatbot talks; an agent acts.
Tool Definition Patterns
A tool definition is the contract between your application and the LLM. The model never sees your implementation code — only the name, description, and parameter schema you provide. Getting these right is the single biggest factor in reliable tool use.
Anatomy of a Tool Definition
Every tool definition has three parts:
- Name — A concise identifier (e.g., search_documents, get_weather). Use snake_case and make it verb-first so the model understands the action.
- Description — A natural-language explanation of when and why to use this tool. This is the most important field. The LLM uses it to decide relevance.
- Parameter Schema — A JSON Schema (or Zod schema) defining the inputs. Each parameter should include its own .describe() string explaining what it means.
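Putting the three parts together, a hypothetical definition in plain JSON Schema form (written here as a TypeScript object) might look like this. Providers wrap it slightly differently (OpenAI nests it under a function key, Anthropic names the schema input_schema), but the ingredients are the same.

```ts
// Hypothetical tool definition: name, description, and a JSON Schema for parameters.
const getAccountBalanceTool = {
  name: "get_account_balance",
  description:
    "Retrieve the current account balance for a customer. Use when the user asks " +
    "about their balance, remaining credits, or payment status.",
  parameters: {
    type: "object",
    properties: {
      customer_id: {
        type: "string",
        description: "Unique identifier of the customer whose balance to look up",
      },
    },
    required: ["customer_id"],
  },
};
```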
Writing Good Descriptions
Vague descriptions cause the model to call the wrong tool or skip it entirely. Compare:
- Bad: "Gets data from the system"
- Good: "Retrieve the current account balance for a customer. Use when the user asks about their balance, remaining credits, or payment status. Requires the customer_id."
Include when to use, what it returns, and any constraints (rate limits, required fields).
Schema Design Tips
- Use enums to constrain values the model might hallucinate (e.g., status: "open" | "closed")
- Mark optional parameters with defaults so the model does not have to guess
- Keep parameter counts low (under 5) — every parameter is a chance for the model to make an error
- Add .describe() to every field; models read these during selection
Both OpenAI and Anthropic accept JSON Schema for tool parameters, and SDK libraries like Vercel AI SDK and LangChain let you define schemas with Zod for type-safe validation at runtime.
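As a sketch, the tips above expressed with Zod might look like this; the tool name and fields are hypothetical, and the resulting schema is what you would hand to your SDK's tool helper.

```ts
import { z } from "zod";

// Hypothetical parameter schema for a ticket-search tool.
const searchTicketsParams = z.object({
  query: z.string().describe("Free-text search over ticket titles and bodies"),
  status: z
    .enum(["open", "closed"]) // enum keeps the model from inventing values
    .describe("Filter by ticket status"),
  limit: z
    .number()
    .int()
    .max(50)
    .default(10) // a default means the model never has to guess
    .describe("Maximum number of tickets to return"),
});

// SDKs that accept Zod schemas also validate the model's arguments at runtime.
type SearchTicketsArgs = z.infer<typeof searchTicketsParams>;
```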
Safety Considerations for Tool Use
Now that you've seen tools in action, let's talk about the risks. Giving an agent access to tools is giving it the ability to affect real systems — and that demands careful guardrails.
Principle of Least Privilege
Only give an agent access to the tools it needs for its specific task. A customer support agent doesn't need database write access. A code review agent doesn't need deployment tools. Every unnecessary tool is an unnecessary attack surface.
Audit Logging
Every tool call an agent makes should be logged — what was called, with what arguments, when, and what the result was. This isn't optional. When something goes wrong (and it will), audit logs are how you diagnose the issue and demonstrate accountability.
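A minimal sketch of such a wrapper, logging to stdout for illustration (in practice you would write to a durable audit store):

```ts
// Wrap every tool execution so the call, arguments, timing, and outcome are recorded.
async function executeWithAudit(
  call: { name: string; arguments: unknown },
  execute: (args: unknown) => Promise<unknown>,
) {
  const startedAt = new Date().toISOString();
  try {
    const result = await execute(call.arguments);
    console.log(JSON.stringify({ startedAt, tool: call.name, args: call.arguments, result }));
    return result;
  } catch (err) {
    console.log(JSON.stringify({ startedAt, tool: call.name, args: call.arguments, error: String(err) }));
    throw err;
  }
}
```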
Tiered Permissions
Not all actions carry the same risk. Structure tool permissions in tiers:
- Read-only — Low risk. Let the agent query freely.
- Write/create — Medium risk. Log and monitor.
- Update/modify — Higher risk. Consider requiring confirmation.
- Delete/destructive — Highest risk. Require explicit human approval before execution.
Approval Gates for Destructive Actions
For high-stakes tool calls — deleting records, sending external communications, modifying financial data — insert a human-in-the-loop approval step. The agent prepares the action and presents it for review; a human confirms before execution.
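A minimal sketch of that gate, assuming a hypothetical requestHumanApproval callback that resolves once a reviewer accepts or rejects the prepared action:

```ts
// Hypothetical risk tiers per tool; anything destructive requires sign-off.
const toolRisk: Record<string, "read" | "write" | "destructive"> = {
  search_documents: "read",
  create_ticket: "write",
  delete_customer_record: "destructive",
};

async function runWithApprovalGate(
  call: { name: string; arguments: unknown },
  execute: (call: { name: string; arguments: unknown }) => Promise<unknown>,
  requestHumanApproval: (call: { name: string; arguments: unknown }) => Promise<boolean>,
) {
  if (toolRisk[call.name] === "destructive") {
    // The agent has prepared the action; a human must confirm before it runs.
    const approved = await requestHumanApproval(call);
    if (!approved) return { error: "Action rejected by human reviewer" };
  }
  return execute(call);
}
```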
The Danger of Raw Database Access
Giving an agent direct SQL access without guardrails is one of the most dangerous patterns. The agent could generate DROP TABLE, expose PII in query results, or run expensive queries that degrade production performance. Always use parameterized queries, restrict to specific tables, and filter sensitive columns from results.
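By way of contrast, here is a sketch of the safer alternative: a narrowly scoped lookup tool that runs one parameterized query against a single table and strips sensitive columns before results ever reach the model. The db client below is a stand-in for whatever driver you actually use.

```ts
// Hypothetical database client; substitute your driver's parameterized query API.
declare const db: {
  query: (sql: string, params: unknown[]) => Promise<Record<string, unknown>[]>;
};

const SENSITIVE_COLUMNS = ["ssn", "email", "phone"];

// Instead of raw SQL access, the agent gets one narrow, parameterized lookup.
async function lookupOrders(args: { customerId: string; limit?: number }) {
  const rows = await db.query(
    "SELECT * FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT $2",
    [args.customerId, Math.min(args.limit ?? 10, 50)],
  );
  // Filter sensitive columns out of every row before the model sees the results.
  return rows.map((row) =>
    Object.fromEntries(Object.entries(row).filter(([col]) => !SENSITIVE_COLUMNS.includes(col))),
  );
}
```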
Section Recap
Key Takeaways
Before you move on, here's what to remember from this section:
- Tool use lets LLMs request that external functions be executed — the LLM generates arguments, your code runs the function
- Tool schemas (JSON Schema / Zod) define the contract — clear descriptions drive accurate tool selection
- The execution flow: user message → LLM selects tool → host validates & executes → result feeds back to LLM
- Parallel vs. sequential — parallelize independent calls for speed; chain dependent calls for correctness
- Error handling — return structured errors to the LLM so it can reason about recovery strategies
- Safety first — tiered permissions, approval gates for high-risk tools, and output sanitization protect against misuse