Flatten JSON online to convert nested objects into flat dot keys
When you flatten JSON, you transform a nested object/array structure into a single-level object where each key represents a path (commonly using dot notation). This is practical when your target system expects columns (CSV exports, BI tools, SQL tables) or when you need stable keys for searching logs and comparing payloads.
This tool supports both directions:
- Flatten: nested → flat keys
- Unflatten: flat keys → nested structure
Key Features
- Flattens nested JSON objects into a flat structure with path-based keys
- Unflattens “flat JSON” back into nested objects/arrays
- Helps normalize API payloads so they’re easier to query and map into tables
- Makes diffs and comparisons more deterministic by reducing deep nesting noise
- Designed for quick triage: paste, convert, copy/download
Use Cases
- Convert nested API responses into a flat shape for spreadsheet import or BI tools.
- Build a stable “schema-like” key list for data mapping and ETL transforms.
- Flatten event payloads to create log-friendly keys for searching and alerting.
- Prepare a JSON object for JSONPath exploration when you want predictable paths; then drill in with a JSONPath expression.
- Unflatten a stored “dot key” format back into the original structure before sending it to an API.
How To Use
- Paste your JSON into the input (object or array).
- Choose Flatten if you want a single-level object with path keys.
- Choose Unflatten if you already have path keys and want nested JSON back.
- Convert and review the output. If the result is going to be consumed by another system, do a quick round-trip test: flatten → unflatten and confirm you get back the same structure (ordering aside).
Tip for reliable results: keep your delimiter consistent across your workflow. Many systems assume dots for object paths and numeric segments for arrays.
How It Works
Flattening walks your JSON tree and emits one key/value for each leaf value:
- Objects: keys are appended to the current path.
- Arrays: numeric indexes are appended (for example, items.0.id).
- Leaf values: strings, numbers, booleans, and nulls become the final value.
Unflattening reverses this by splitting each path key into segments and rebuilding objects/arrays as needed.
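As a rough illustration of this walk, here is a minimal Python sketch. The function names and the digits-only list heuristic are assumptions for illustration, not this tool's actual implementation:

```python
def flatten(value, path=""):
    """Walk the tree and emit one dot-delimited key per leaf value."""
    out = {}
    items = value.items() if isinstance(value, dict) else enumerate(value)
    for k, v in items:
        key = f"{path}.{k}" if path else str(k)
        if isinstance(v, (dict, list)) and v:
            out.update(flatten(v, key))   # recurse into non-empty containers
        else:
            out[key] = v                  # leaf: string, number, bool, null
    return out

def unflatten(flat):
    """Split each path key on dots and rebuild objects/arrays."""
    root = {}
    for key, value in flat.items():
        parts = key.split(".")
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value

    def listify(node):
        # Heuristic: a dict keyed exactly by "0".."n-1" becomes a list.
        if not isinstance(node, dict):
            return node
        node = {k: listify(v) for k, v in node.items()}
        if node and set(node) == {str(i) for i in range(len(node))}:
            return [node[str(i)] for i in range(len(node))]
        return node

    return listify(root)
```

A quick round trip, in the spirit of the check suggested earlier: flatten({"a": {"b": 1}, "c": [2, 3]}) gives {"a.b": 1, "c.0": 2, "c.1": 3}, and unflattening that returns the original structure.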
Practical interpretation:
- Flattening increases the number of keys (“wide” JSON). That’s expected and often desirable for tabular systems.
- A key limitation is ambiguity when your data mixes object keys that look like numbers (e.g., "0": "x") with arrays. If your source can contain numeric object keys, be explicit about how you want them treated.
Manual sanity check (fast): after flattening, sort the keys and scan for unexpected path segments. If a segment looks “wrong,” it usually comes from an inconsistent source schema, not the flattener.
To keep review deterministic, you can optionally key-sort the output using JSON sort.
Examples
Common patterns people search for (“flatten json”, “json flattener online”, “flatten nested json”) map to these examples:
- API payload to columns: flatten a nested response and then map keys to database columns.
- Flatten JSON to CSV: flatten first, then export key/value pairs or feed the wide object into your CSV tooling.
- Flatten JSON JavaScript / Python: validate your code output by comparing it to the tool output on the same sample payload.
- LeetCode-style flatten: verify edge cases around arrays and mixed nesting.
```json
{
  "user": {"id": 42, "name": "Roxy"},
  "orders": [{"id": "A1", "total": 19.99}, {"id": "B2", "total": 5.0}],
  "active": true
}
```
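Flattening the sample above produces one dot-path key per leaf. As a hedged sketch (the flatten helper here is illustrative, not the tool's code):

```python
def flatten(value, path=""):
    """Emit one dot-delimited key per leaf value."""
    out = {}
    items = value.items() if isinstance(value, dict) else enumerate(value)
    for k, v in items:
        key = f"{path}.{k}" if path else str(k)
        if isinstance(v, (dict, list)) and v:
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

sample = {
    "user": {"id": 42, "name": "Roxy"},
    "orders": [{"id": "A1", "total": 19.99}, {"id": "B2", "total": 5.0}],
    "active": True,
}
flat = flatten(sample)
# Seven flat keys: user.id, user.name, orders.0.id, orders.0.total,
# orders.1.id, orders.1.total, active
```

Each array element gets a numeric segment (orders.0, orders.1), and the boolean leaf keeps its value under the unchanged key active.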
Edge Cases & Troubleshooting
- Top-level arrays: flattening an array is valid; the output paths typically start with 0, 1, etc. Some downstream tools dislike numeric root keys, so wrap the array in an object if needed.
- Nulls and missing fields: flattening preserves null values, but missing fields won’t exist as keys. Decide whether your downstream needs explicit placeholders.
- Large payloads: very large JSON can be expensive to flatten because the output becomes wide. If you hit performance limits, reduce the payload to a representative sample.
- Delimiter collisions: if a source key contains dots already (e.g., "a.b": 1), dot notation becomes ambiguous. You may need an alternate delimiter or an escaping strategy in your pipeline.
- Mixed schemas: if an API changes shape between requests, flattened keys will drift. Use the tool to detect drift by flattening two samples and comparing keys.
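The drift check in the last bullet can be sketched in a few lines. The helper and the sample payloads below are hypothetical:

```python
def flatten(value, path=""):
    """Minimal dot-path flattener for comparing key sets."""
    out = {}
    items = value.items() if isinstance(value, dict) else enumerate(value)
    for k, v in items:
        key = f"{path}.{k}" if path else str(k)
        if isinstance(v, (dict, list)) and v:
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

old_sample = {"user": {"id": 1, "name": "Ada"}}
new_sample = {"user": {"id": 1, "email": "ada@example.com"}}

# Symmetric difference: keys present in one sample but not the other.
drift = set(flatten(old_sample)) ^ set(flatten(new_sample))
```

Here drift contains user.name and user.email, flagging exactly where the schema changed between the two responses.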
Reverse Intent
If you receive event data in a flat-dot format (common in analytics, some log systems, and certain ETL exports), unflattening is the reverse direction you want. A good practice is to keep both representations:
- Flat for indexing/searching/columns
- Nested for API compatibility and developer readability
If unflatten output looks unexpected, check whether array segments are indexed consistently (e.g., items.0, items.1).
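To see why consistent indexing matters, here is a hedged unflatten sketch; the digits-only list heuristic is an assumption for illustration, not necessarily how this tool decides between arrays and objects:

```python
def unflatten(flat):
    """Rebuild nested structure from dot-delimited path keys."""
    root = {}
    for key, value in flat.items():
        parts = key.split(".")
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value

    def listify(node):
        # A dict keyed exactly by "0".."n-1" is treated as a list.
        if not isinstance(node, dict):
            return node
        node = {k: listify(v) for k, v in node.items()}
        if node and set(node) == {str(i) for i in range(len(node))}:
            return [node[str(i)] for i in range(len(node))]
        return node

    return listify(root)

# Consecutive indexes rebuild a proper array:
assert unflatten({"items.0": "a", "items.1": "b"}) == {"items": ["a", "b"]}
# A gap in the indexes (items.0, items.2) leaves an object with string keys:
assert unflatten({"items.0": "a", "items.2": "c"}) == {"items": {"0": "a", "2": "c"}}
```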
FAQ
What is flattening a JSON?
Flattening JSON means turning nested objects and arrays into a single-level object where keys represent paths to each leaf value. It’s a transformation that preserves values while changing the shape.
What is flatten()?
In many libraries, flatten() refers to reducing nested structures into a simpler representation. For JSON, it typically means converting deep paths into one-level “path keys,” often using dot notation.
Why do we flatten data?
Most analytics and storage systems work best with tabular data. Flattening reduces complexity, makes schemas easier to map, and produces stable keys for searching, comparison, and exports.
Is a JSON file a flat file?
Not necessarily. JSON can be flat (one level) or deeply nested. “Flat file” usually describes formats like CSV or TSV. Flattening JSON is a way to make JSON more compatible with flat-file workflows.
Next Steps / Related Workflows
- If you’re validating data transformations, flatten two versions of a payload and compare the resulting key sets to spot schema drift. For value-level differences, use JSON compare.
- When exporting to tables, document your delimiter and array indexing conventions so other teams can reproduce the same mapping.
- For production ETL, consider a schema registry or contract tests so you detect breaking changes before they land in your warehouse.