Fix LLM JSON
Paste broken JSON from ChatGPT, Claude, or any AI — get clean, valid JSON back instantly.
Why LLM JSON Breaks (and How to Fix It)
Language models generate JSON by predicting tokens, not by executing a grammar. The model has been trained on vastly more JavaScript and Python than strict JSON, so it constantly bleeds in conventions from those languages: trailing commas (valid JS, invalid JSON), single quotes (valid Python, invalid JSON), True/False/None (Python literals), and markdown fences (because the model often wraps code in them).
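These failure modes are easy to demonstrate. A standalone sketch (the sample strings are illustrative) showing that strict JSON.parse rejects each habit outright:

```typescript
// Each snippet below is natural in JavaScript or Python
// but invalid in strict JSON, so JSON.parse throws on all of them.
const llmHabits: [string, string][] = [
  ['trailing comma', '{"a": 1,}'],
  ['single quotes', "{'k': 'v'}"],
  ['unquoted key', '{key: 1}'],
  ['Python literal', '{"flag": True}'],
];

for (const [label, text] of llmHabits) {
  try {
    JSON.parse(text);
    console.log(label + ': parsed');
  } catch {
    console.log(label + ': rejected by JSON.parse');
  }
}
```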
This tool uses jsonrepair — a battle-tested open-source library — to automatically detect and fix all these patterns. It handles everything from a missing comma to a truncated response where the model hit its token limit mid-object.
How to Use This Tool
- Paste your broken JSON into the Input panel — or click a preset to see an example.
- The output updates automatically with the repaired, formatted JSON.
- The status banner shows what was fixed — click it to expand the full issue list.
- Copy the output with the Copy button or download as a file.
What Gets Fixed
- Trailing commas: {"a": 1,} → {"a": 1}
- Single quotes: {'k': 'v'} → {"k": "v"}
- Unquoted keys: {key: 1} → {"key": 1}
- Python literals: True/False/None → true/false/null
- Markdown fences: ```json{...}``` → {...}
- JS comments: {/* note */ "a":1} → {"a":1}
- Missing commas: {"a":1 "b":2} → {"a":1,"b":2}
- Truncated JSON: {"arr":[1,2, → {"arr":[1,2]}
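To give a feel for how two of the simplest fixes above work, here is a toy sketch. `naiveRepair` is a made-up name, and this is not how the jsonrepair library is implemented (it uses a real parser and covers every pattern in the list, not just these two):

```typescript
// Toy repair for two patterns: stripping a markdown fence and
// removing trailing commas. Not the real jsonrepair implementation.
const FENCE = '`'.repeat(3); // three backticks, built dynamically

function naiveRepair(raw: string): string {
  let s = raw.trim();
  if (s.startsWith(FENCE)) {
    s = s.slice(s.indexOf('\n') + 1); // drop the opening fence line
  }
  if (s.endsWith(FENCE)) {
    s = s.slice(0, s.lastIndexOf(FENCE)).trimEnd(); // drop closing fence
  }
  // Remove trailing commas before a closing bracket or brace.
  return s.replace(/,\s*([\]}])/g, '$1');
}

const raw = FENCE + 'json\n{"a": 1,}\n' + FENCE;
console.log(naiveRepair(raw)); // {"a": 1}
```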
Use jsonrepair in Your Own Code
// npm install jsonrepair
import { jsonrepair } from 'jsonrepair';

// In your LLM response handler:
async function parseLlmJson(rawOutput: string) {
  try {
    return JSON.parse(rawOutput);
  } catch {
    // Fall back to repair instead of throwing
    return JSON.parse(jsonrepair(rawOutput));
  }
}

Frequently Asked Questions
- Why does JSON from ChatGPT and Claude keep breaking?
- LLMs generate JSON by predicting tokens, not by following a grammar. They frequently make the same mistakes: trailing commas (valid in JavaScript but not JSON), single quotes instead of double quotes, unquoted keys, Python-style True/False/None, and adding markdown fences like ```json. The model has seen far more JavaScript and Python than strict JSON, so it bleeds those conventions in.
- What types of JSON errors can this tool fix?
- Trailing commas, single-quoted strings, unquoted object keys, Python/Ruby literals (True → true, False → false, None → null), JavaScript comments (// and /* */), markdown code fences, unescaped control characters in strings, missing commas between values, truncated/cut-off JSON with unmatched brackets, and multiple concatenated JSON objects.
- What if the tool can't fix my JSON?
- When jsonrepair can't automatically recover the JSON, it means the input is too structurally broken — for example, the text contains no parseable JSON at all, or it's so heavily mangled that the intended structure is ambiguous. In that case, try: (1) re-prompting the LLM with explicit instructions like 'respond with valid JSON only, no explanation, no markdown', (2) using structured output mode if your LLM API supports it (OpenAI's response_format: {type: "json_object"}).
- How do I prevent LLMs from producing broken JSON in the first place?
- Several approaches: (1) Use OpenAI's json_object or json_schema response formats — these constrain the model to output valid JSON. (2) Use Anthropic's tool_use feature which enforces a schema. (3) Prompt explicitly: 'Return only a raw JSON object. No markdown, no explanation, no trailing commas.' (4) In production, always wrap LLM JSON parsing in a try/catch with jsonrepair as a fallback.
- Can I use jsonrepair programmatically in my own code?
- Yes — the jsonrepair npm package is open source. Install it with `npm install jsonrepair`, then: `import { jsonrepair } from 'jsonrepair'; const fixed = jsonrepair(brokenString);`. It's synchronous and runs in Node.js and the browser. JSON Kit uses this same library under the hood.
- What is truncated JSON and how does it get repaired?
- Truncated JSON happens when an LLM hits its max_tokens limit mid-response and cuts off. You get something like {"results": [{"id": 1 — missing all closing brackets. jsonrepair detects unbalanced brackets and closes them automatically. The repaired output will be structurally valid, but the last item may be incomplete — you should re-prompt with a higher token limit for a complete response.
- Does this tool send my JSON to a server?
- No. All repair processing happens in your browser using the jsonrepair library compiled to JavaScript. Your JSON never leaves your device — important since LLM outputs often contain sensitive data, API responses, or user information.
- What's the difference between this and the JSON Formatter tool?
- The JSON Formatter requires valid JSON as input — it will show an error if your JSON has syntax issues. Fix LLM JSON accepts broken JSON and attempts to repair it first, then formats the result. Use the formatter when you know your JSON is valid; use this tool when it came from an LLM or another source that might have introduced errors.
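The bracket-closing behavior described in the truncated-JSON answer above can be sketched in a few lines. `closeBrackets` is a hypothetical helper, far simpler than what jsonrepair actually does (for instance, it does not discard an incomplete trailing value):

```typescript
// Toy version of truncated-JSON repair: track unmatched open
// brackets (ignoring any inside strings) and append the closers.
function closeBrackets(truncated: string): string {
  const stack: string[] = [];
  let inString = false;
  for (let i = 0; i < truncated.length; i++) {
    const ch = truncated[i];
    if (inString) {
      if (ch === '\\') i++;            // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === '{') stack.push('}');
    else if (ch === '[') stack.push(']');
    else if (ch === '}' || ch === ']') stack.pop();
  }
  // Drop a dangling comma, then close everything still open.
  return truncated.replace(/,\s*$/, '') + stack.reverse().join('');
}

console.log(closeBrackets('{"arr":[1,2,')); // {"arr":[1,2]}
```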
About This Tool
Fix LLM JSON is one of the differentiating tools in JSON Kit — built specifically for the AI era where broken JSON is a constant hazard. Under the hood it uses the jsonrepair npm package by Jos de Jong, combined with JSON Kit's own issue-detection layer that labels what was wrong. All processing is 100% browser-side — your LLM outputs, which often contain sensitive data, never touch a server.