OpenAI's function calling — now called tools — is the most reliable way to get structured JSON from an LLM. Instead of parsing free-form text, you define a schema and the model returns a JSON object that conforms to it (exactly, if you enable strict mode). This post covers everything from basic setup to production patterns for gpt-4o and later models.
What Function Calling Actually Does
When you include a tool definition in your API call, two things happen:
- The model decides whether to call a tool based on the conversation context
- If it calls a tool, it returns structured JSON matching your schema — not in message.content, but in message.tool_calls[0].function.arguments
The model doesn't "run" any code. It generates the JSON arguments that you would pass to a function. You're responsible for calling the actual function and (optionally) returning the result to the model for a follow-up response.
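To make that division of labor concrete, here is a minimal dispatch sketch. The get_weather stub and its return shape are invented for illustration (real tool implementations are usually async):

```typescript
// The model only emits { name, arguments } as a JSON string; your code
// looks up and runs the real implementation.
const implementations: Record<string, (args: any) => unknown> = {
  // Stubbed for illustration; a real version would hit a weather API.
  get_weather: (args) => ({ location: args.location, tempC: 18 }),
};

function execute(name: string, rawArguments: string): unknown {
  const fn = implementations[name];
  if (!fn) throw new Error(`Unknown tool: ${name}`);
  // rawArguments is the JSON string from tool_calls[n].function.arguments
  return fn(JSON.parse(rawArguments));
}
```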
Basic Setup
import OpenAI from 'openai';
const client = new OpenAI();
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [
{ role: 'user', content: 'Get me the weather in San Francisco' }
],
tools: [
{
type: 'function',
function: {
name: 'get_weather',
description: 'Get current weather for a location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'City name or coordinates',
},
unit: {
type: 'string',
enum: ['celsius', 'fahrenheit'],
description: 'Temperature unit',
},
},
required: ['location'],
},
},
},
],
});
const message = response.choices[0].message;
if (message.tool_calls) {
const call = message.tool_calls[0];
const args = JSON.parse(call.function.arguments);
// args = { location: "San Francisco", unit: "celsius" }
// Call your actual function
const weather = await getWeather(args.location, args.unit);
// Return result to model for final response
const followUp = await client.chat.completions.create({
model: 'gpt-4o',
messages: [
{ role: 'user', content: 'Get me the weather in San Francisco' },
message, // the assistant's tool call
{
role: 'tool',
tool_call_id: call.id,
content: JSON.stringify(weather),
},
],
});
}
Generating Tool Schemas from JSON
Writing tool schemas by hand is tedious and error-prone — especially getting the required arrays right. Use the JSON to OpenAI Schema tool instead. Paste a sample JSON object, set the function name and description, and get the complete tool definition ready to paste.
For the weather example:
{
"location": "San Francisco",
"unit": "celsius"
}
Paste that, set name = get_weather, and get back the exact schema above.
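Under the hood, schema generation is mostly type inference over the sample. A simplified sketch of the idea (real generators also handle nested objects, array item types, enums, and optional fields):

```typescript
// Infer a flat JSON Schema parameters block from a sample object.
function inferParameters(sample: Record<string, unknown>) {
  const properties: Record<string, { type: string }> = {};
  for (const [key, value] of Object.entries(sample)) {
    const t = value === null ? 'null'
      : Array.isArray(value) ? 'array'
      : typeof value === 'number' ? (Number.isInteger(value) ? 'integer' : 'number')
      : typeof value; // 'string' | 'boolean' | 'object'
    properties[key] = { type: t };
  }
  return {
    type: 'object',
    properties,
    // Assume every sampled key is required; adjust by hand if not.
    required: Object.keys(sample),
  };
}
```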
Strict Mode: Guaranteed Schema Adherence
Regular function calling guarantees the response is valid JSON but not that it exactly matches your schema. In practice, the model occasionally misses required fields or adds unexpected ones.
Strict mode (strict: true) changes this — the model is constrained to produce exactly the schema you define:
{
type: 'function',
function: {
name: 'create_user',
strict: true, // ← enables guaranteed schema adherence
parameters: {
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string' },
role: { type: 'string', enum: ['admin', 'editor', 'viewer'] },
},
required: ['name', 'email', 'role'],
additionalProperties: false, // required with strict: true
},
},
}
Strict mode requirements
For strict mode to work:
- Every property must be in the required array
- additionalProperties: false must be set on every object
- No $ref or recursive schemas
- anyOf is supported only for nullable types: anyOf: [{type: 'string'}, {type: 'null'}]
Enable Strict mode in the JSON to OpenAI Schema tool to generate a compliant schema automatically.
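Putting those rules together, a strict-mode definition with a nullable field looks like this (the middle_name field is invented for illustration):

```typescript
// Strict-mode tool definition with a nullable field. Note that
// middle_name is still listed in required; the model returns null
// rather than omitting it.
const createUserTool = {
  type: 'function' as const,
  function: {
    name: 'create_user',
    strict: true,
    parameters: {
      type: 'object',
      properties: {
        name: { type: 'string' },
        middle_name: {
          anyOf: [{ type: 'string' }, { type: 'null' }],
          description: 'Nullable: the model returns null when unknown',
        },
      },
      required: ['name', 'middle_name'], // every property, even nullable ones
      additionalProperties: false,
    },
  },
};
```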
response_format vs Tools: When to Use Which
There are now three ways to get structured JSON from OpenAI:
| Approach | Use when |
|---|---|
| tools | You want the model to "call" an action conditionally |
| response_format: { type: "json_object" } | You always want JSON, any shape |
| response_format: { type: "json_schema" } | You always want JSON, specific shape (Structured Outputs) |
Tools
Best for agentic workflows where the model should decide when to call a function:
// The model calls get_user only if the question is about a user
tools: [{ type: 'function', function: { name: 'get_user', ... } }],
tool_choice: 'auto' // model decides when to call
Force a specific tool with tool_choice: { type: 'function', function: { name: 'function_name' } }.
json_object mode
The simplest way to get JSON — no schema required. The model will return some JSON object, but the shape is not guaranteed. Useful for quick extraction or when the schema is highly variable.
response_format: { type: 'json_object' }
// Must mention "JSON" in the system/user message or the API errors
json_schema (Structured Outputs)
Most reliable. The model is constrained to return exactly your schema:
response_format: {
type: 'json_schema',
json_schema: {
name: 'user_profile',
strict: true,
schema: {
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string' },
role: { type: 'string' }
},
required: ['name', 'email', 'role'],
additionalProperties: false
}
}
}
The response is in message.content (not tool_calls), so you parse it with JSON.parse(message.content!).
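A small parsing helper, sketched here against a simulated response message rather than a live API call. The refusal check mirrors the refusal field the API sets when the model declines a Structured Outputs request:

```typescript
// Minimal shape of the assistant message returned by the API.
interface ChatMessage {
  content: string | null;
  refusal?: string | null;
}

function parseStructuredOutput<T>(message: ChatMessage): T {
  // A refusal means the model declined; content will not hold your schema.
  if (message.refusal) throw new Error(`Model refused: ${message.refusal}`);
  if (message.content === null) throw new Error('No content returned');
  return JSON.parse(message.content) as T;
}

// Simulated response for illustration.
const simulated: ChatMessage = {
  content: '{"name":"Ada","email":"ada@example.com","role":"admin"}',
};
const profile = parseStructuredOutput<{ name: string; email: string; role: string }>(simulated);
```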
Parallel Tool Calls
The model can emit several tool calls in a single response — multiple different tools, or the same tool with different arguments:
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Get weather for NYC and LA' }],
tools: [getWeatherTool],
});
const toolCalls = response.choices[0].message.tool_calls ?? [];
// toolCalls might have 2 entries — one for NYC, one for LA
const results = await Promise.all(
toolCalls.map(async (call) => {
const args = JSON.parse(call.function.arguments);
const result = await getWeather(args.location);
return { tool_call_id: call.id, result };
})
);
Return all results in the follow-up message:
const followUpMessages = [
...originalMessages,
response.choices[0].message,
...results.map(r => ({
role: 'tool' as const,
tool_call_id: r.tool_call_id,
content: JSON.stringify(r.result),
})),
];
Validating Tool Arguments at Runtime
The model should return JSON matching your schema — but in non-strict mode, validate defensively:
import { z } from 'zod';
import type { ChatCompletionMessageToolCall } from 'openai/resources/chat/completions';
const GetWeatherArgs = z.object({
location: z.string().min(1),
unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
});
function handleToolCall(call: ChatCompletionMessageToolCall) {
if (call.function.name === 'get_weather') {
const args = GetWeatherArgs.parse(JSON.parse(call.function.arguments));
return getWeather(args.location, args.unit);
}
}
Generate the Zod schema from your tool's parameter JSON using the JSON to Zod Schema tool.
A Real-World Pattern: Data Extraction
A common use case is extracting structured data from unstructured text:
const extractProductSchema = {
type: 'function',
function: {
name: 'extract_product',
description: 'Extract product information from text',
strict: true,
parameters: {
type: 'object',
properties: {
name: { type: 'string', description: 'Product name' },
price: { type: 'number', description: 'Price in USD' },
currency: { type: 'string', description: 'Currency code e.g. USD' },
availability: {
type: 'string',
enum: ['in_stock', 'out_of_stock', 'unknown'],
description: 'Availability status'
},
features: {
type: 'array',
items: { type: 'string' },
description: 'List of product features'
},
},
required: ['name', 'price', 'currency', 'availability', 'features'],
additionalProperties: false,
},
},
};
async function extractProduct(description: string) {
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [
{
role: 'user',
content: `Extract product information from this text: ${description}`
}
],
tools: [extractProductSchema],
tool_choice: { type: 'function', function: { name: 'extract_product' } },
});
const call = response.choices[0].message.tool_calls![0];
return JSON.parse(call.function.arguments);
}
Common Errors and Fixes
tool_calls is undefined
The model responded in text mode — add tool_choice: 'required' to force tool use, or check if the message includes a refusal.
arguments is an empty object {}
Your tool description is too vague. Add clearer descriptions to each property and make sure the user's message gives the model enough context to fill the fields.
Schema validation fails in strict mode
Check that every object in your schema has additionalProperties: false and that every property is in the required array. Nested objects need these too.
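A hypothetical pre-flight check can catch these before the API does: walk the schema and report any object that is missing either requirement.

```typescript
// Recursively check a parameters schema for strict-mode compliance.
type Schema = {
  type?: string;
  properties?: Record<string, Schema>;
  required?: string[];
  additionalProperties?: boolean;
  items?: Schema;
};

function strictModeIssues(schema: Schema, path = '$'): string[] {
  const issues: string[] = [];
  if (schema.type === 'object' && schema.properties) {
    if (schema.additionalProperties !== false)
      issues.push(`${path}: missing additionalProperties: false`);
    const required = schema.required ?? [];
    for (const prop of Object.keys(schema.properties)) {
      if (!required.includes(prop)) issues.push(`${path}.${prop}: not in required`);
      issues.push(...strictModeIssues(schema.properties[prop], `${path}.${prop}`));
    }
  }
  if (schema.items) issues.push(...strictModeIssues(schema.items, `${path}[]`));
  return issues;
}
```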
The model calls the wrong tool
Improve the description field on each tool to make their purposes distinct. For single-tool use, force it with tool_choice: { type: 'function', function: { name: 'your_function' } }.
Tools mentioned in this post:
- JSON to OpenAI Schema — generate tool definitions from JSON samples
- JSON to Zod Schema — validate tool arguments at runtime
- Fix LLM JSON — repair malformed JSON responses