OpenAI Function Calling: Letting LLMs Interact with Your Code
Master OpenAI's function calling feature to let language models invoke your Python functions, parse structured arguments, and build tool-augmented AI applications.
What Is Function Calling?
Function calling (also called tool use) lets an LLM decide when to invoke a function you define, generate the correct arguments as structured JSON, and then incorporate the function's result into its response. This bridges the gap between the model's language capabilities and your application's data and actions.
Use cases include fetching real-time data, querying databases, sending emails, creating records, calling external APIs — anything your code can do.
Defining Tools
You describe your functions using JSON Schema in the tools parameter:
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city name, e.g., 'San Francisco'",
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit",
                    },
                },
                "required": ["city"],
            },
        },
    },
]
The description fields are critical — the model reads them to decide when and how to call the function.
Making a Tool-Augmented Request
Pass the tools array along with your messages:
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What is the weather like in Tokyo?"},
    ],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call a tool
)

message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
        print(f"Call ID: {tool_call.id}")
When the model decides a tool is needed, finish_reason is tool_calls and the message.tool_calls array contains one or more function calls with JSON string arguments.
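Note that the arguments arrive as a JSON string, not a Python dict, so they must be parsed before you can call your function. A minimal sketch, using a hard-coded string that mimics what `tool_call.function.arguments` might contain:

```python
import json

# The model returns function arguments as a JSON *string*, not a dict,
# so parse it before calling your function. This raw string mimics the
# shape of tool_call.function.arguments.
raw_arguments = '{"city": "Tokyo", "units": "celsius"}'

args = json.loads(raw_arguments)
print(args["city"])   # Tokyo
print(args["units"])  # celsius
```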
The Complete Tool Call Loop
Function calling requires a multi-turn conversation: you send the request, execute the function yourself, then send the result back so the model can answer:
import json

def get_weather(city: str, units: str = "celsius") -> dict:
    # In production, call a real weather API
    return {"city": city, "temperature": 22, "units": units, "condition": "partly cloudy"}

# Step 1: Send the user message with tools
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What is the weather in Tokyo and London?"},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
assistant_message = response.choices[0].message

# Step 2: Execute each tool call
if assistant_message.tool_calls:
    messages.append(assistant_message)  # add the assistant's tool call message
    for tool_call in assistant_message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })

# Step 3: Send results back to the model
final_response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
print(final_response.choices[0].message.content)
The model sees the tool results and produces a natural language summary for the user.
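Step 2 above can be factored into a reusable helper that turns a batch of tool calls into the `tool` role messages to append. A sketch, with `types.SimpleNamespace` standing in for the SDK's tool call objects so it runs without an API key (the helper name and the stub are illustrative, not part of the SDK):

```python
import json
from types import SimpleNamespace

def execute_tool_calls(tool_calls, registry):
    """Run each requested function and build the 'tool' role messages
    to append to the conversation before the follow-up API call."""
    tool_messages = []
    for tool_call in tool_calls:
        func = registry[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(func(**args)),
        })
    return tool_messages

def get_weather(city, units="celsius"):
    return {"city": city, "temperature": 22, "units": units}

# Stub with the same attribute shape as an SDK tool call object
fake_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_weather", arguments='{"city": "Tokyo"}'),
)

results = execute_tool_calls([fake_call], {"get_weather": get_weather})
print(results[0]["role"])          # tool
print(results[0]["tool_call_id"])  # call_1
```

Keeping the execution logic in one helper also handles parallel calls for free: the loop simply produces one `tool` message per call, in order.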
Controlling Tool Choice
The tool_choice parameter controls when tools are used:
# Let the model decide (default)
tool_choice = "auto"
# Force a specific function
tool_choice = {"type": "function", "function": {"name": "get_weather"}}
# Prevent tool use entirely
tool_choice = "none"
# Require the model to call at least one tool
tool_choice = "required"
Multiple Tools in One Application
Real applications expose several tools. The model picks the right one based on context:
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search the product catalog by keyword.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_results": {"type": "integer", "default": 5},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Check the status of an order by order ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                },
                "required": ["order_id"],
            },
        },
    },
]
When the user says "Where is my order #12345?", the model calls get_order_status. When they say "Show me wireless headphones", it calls search_products.
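On your side, a dictionary mapping schema names to Python callables keeps the dispatch free of `if/elif` chains as the tool list grows. A minimal sketch with stub implementations of the two functions above:

```python
import json

def search_products(query, max_results=5):
    # Stub: a real implementation would query the catalog
    return {"query": query, "results": ["Wireless Headphones X"][:max_results]}

def get_order_status(order_id):
    # Stub: a real implementation would look up the order
    return {"order_id": order_id, "status": "shipped"}

# Map the names declared in the tool schemas to their implementations
AVAILABLE_FUNCTIONS = {
    "search_products": search_products,
    "get_order_status": get_order_status,
}

def dispatch(name, arguments_json):
    func = AVAILABLE_FUNCTIONS[name]
    return func(**json.loads(arguments_json))

print(dispatch("get_order_status", '{"order_id": "12345"}'))
# {'order_id': '12345', 'status': 'shipped'}
```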
FAQ
Can the model call multiple functions in parallel?
Yes. The model can return multiple entries in the tool_calls array within a single response. You should execute them all and send back all results before making the next API call.
What happens if the function returns an error?
Return the error as the tool result content. The model will see the error and can communicate it to the user or try a different approach. For example: {"error": "Order not found"}.
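One way to guarantee the model always receives a well-formed tool result is to wrap execution in a small helper that catches exceptions and serializes them (the helper is a sketch, not part of the SDK):

```python
import json

def get_order_status(order_id):
    # Stub that always fails, to demonstrate error propagation
    raise LookupError(f"Order {order_id} not found")

def safe_call(func, arguments_json):
    """Run a tool and return a JSON string either way, so the 'tool'
    message content is always valid for the follow-up request."""
    try:
        return json.dumps(func(**json.loads(arguments_json)))
    except Exception as exc:  # surface the failure to the model
        return json.dumps({"error": str(exc)})

content = safe_call(get_order_status, '{"order_id": "99999"}')
print(content)  # {"error": "Order 99999 not found"}
```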
How do I prevent the model from hallucinating function arguments?
Write detailed descriptions for each parameter, use enum for constrained values, and mark fields as required when they must be provided. The more specific your schema, the more reliable the arguments.
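With models that support structured outputs, you can go further and set `"strict": True` on the function definition, which constrains generation so the arguments are guaranteed to match the schema. Strict mode requires `"additionalProperties": False` and every property listed in `required`. A sketch of the `get_order_status` schema in strict form:

```python
# Strict mode: arguments are guaranteed to conform to the schema.
# Requires additionalProperties: False and all properties in "required".
strict_tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Check the status of an order by order ID.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string"},
            },
            "required": ["order_id"],
            "additionalProperties": False,
        },
    },
}

print(strict_tool["function"]["strict"])  # True
```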
#OpenAI #FunctionCalling #Tools #Python #AIAgents #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.