Function Calling
Also known as: Tool Use, Tool Calling, AI Actions
A capability of AI models to generate structured outputs that invoke predefined functions or APIs, enabling AI systems to take actions, retrieve data, and interact with external systems.
Overview
Function calling (also called tool use) is the capability that transforms language models from passive text generators into active agents that can interact with external systems. When a model supports function calling, it can analyze a user's request, determine that an external function needs to be called, generate the appropriate function call with structured parameters, and then incorporate the function's response into its own output.
How It Works
- Function Definition: Available functions are described to the model with their names, parameters, and descriptions
- Intent Recognition: The model analyzes the user's request and determines if a function call is needed
- Parameter Extraction: The model generates a structured function call with the appropriate parameters
- Execution: The application executes the function and returns results to the model
- Response Generation: The model incorporates the function results into its response
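The five steps above can be sketched in Python. This is a minimal, self-contained illustration, not any particular provider's API: the tool name `get_weather`, the JSON-schema-style spec, and the `fake_model` stand-in are all hypothetical, with the fake model playing the role a real language model would in steps 2 and 3.

```python
import json

# A hypothetical tool the application exposes; a real one would call an API.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}

# Step 1 (Function Definition): the schema sent to the model with the prompt.
TOOL_SPECS = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def fake_model(user_message: str) -> dict:
    # Steps 2-3 (Intent Recognition, Parameter Extraction): a real model
    # would emit this structured call after reading TOOL_SPECS.
    return {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}

# Step 4 (Execution): the application, not the model, runs the function.
call = fake_model("What's the weather in Paris?")
result = TOOLS[call["name"]](**json.loads(call["arguments"]))

# Step 5 (Response Generation): result is passed back to the model, which
# would phrase it for the user, e.g. "It's 21°C and clear in Paris."
print(result)
```

Note that the model never executes anything itself; it only emits a structured request, and the application controls what actually runs.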
Common Use Cases
- Data Retrieval: Querying databases, APIs, or search engines
- Actions: Sending emails, creating records, or triggering workflows
- Calculations: Performing precise mathematical operations
- Code Execution: Running code in sandboxed environments
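The Calculations use case is worth a concrete sketch: language models are unreliable at exact arithmetic, so delegating it to a tool guarantees a precise answer. The function below is a hypothetical example of such a tool, using Python's `decimal` module for exact decimal math.

```python
from decimal import Decimal

def calculate_total(prices: list[str], tax_rate: str) -> str:
    # Exact decimal arithmetic the model can delegate to, instead of
    # estimating the sum in generated text.
    subtotal = sum(Decimal(p) for p in prices)
    total = subtotal * (1 + Decimal(tax_rate))
    return str(total.quantize(Decimal("0.01")))

print(calculate_total(["19.99", "5.01"], "0.10"))  # → 27.50
```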
Context Management and Function Calling
Function calling is a critical context management tool because it allows AI systems to dynamically expand their context at runtime. Instead of pre-loading all possible context, the model can call functions to retrieve exactly the context it needs for a specific query, enabling more efficient context window utilization.
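This retrieval-on-demand pattern can be sketched as follows. The `search_docs` tool and the in-memory `KNOWLEDGE` store are hypothetical stand-ins for a real database or search index; the point is that only the snippet the query needs ever enters the context window.

```python
# Hypothetical document store; in practice this would be a database or index.
KNOWLEDGE = {
    "refunds": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def search_docs(topic: str) -> str:
    # Runtime context expansion: return only the relevant snippet.
    return KNOWLEDGE.get(topic, "No matching document.")

messages = [{"role": "user", "content": "What is your refund policy?"}]
# The model emits a call like search_docs(topic="refunds"); the application
# executes it and appends the result, so the context window holds one
# retrieved snippet rather than every document pre-loaded up front.
messages.append({"role": "tool", "content": search_docs("refunds")})
print(messages[-1]["content"])
```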
Related Terms
Large Language Model
A type of AI model trained on vast amounts of text data that can understand, generate, and manipulate human language, typically based on the transformer architecture with billions of parameters.
Model Context Protocol
An open standard developed by Anthropic that standardizes how AI applications connect to external data sources, tools, and context providers through a unified protocol.
Prompt Engineering
The practice of designing, optimizing, and structuring inputs (prompts) to AI language models to elicit desired outputs, including techniques for instruction formatting, context provision, and output specification.