
How to Convert JSON to CSV (3 Methods: Online, JavaScript, Python)

You have a JSON file. You need a CSV. Maybe it's API response data going into a spreadsheet, or a database export your manager wants in Excel. Whatever the reason, converting JSON to CSV is one of those tasks that sounds trivial until you hit nested objects, arrays inside arrays, or inconsistent keys across records.

Here are three reliable ways to do it, from quickest to most flexible.

Method 1: Online Tools (Zero Setup)

If you have a one-off conversion and the data is not sensitive, an online tool is the fastest path.

jsonshield.com has a JSON-to-CSV converter that runs entirely client-side -- your data never leaves your browser. Paste your JSON, click convert, download the CSV. It handles nested objects by flattening keys with dot notation (e.g., address.city becomes a column header).

Other solid options include ConvertCSV.com and json-csv.com. The key thing to check: does the tool send your data to a server? If your JSON contains any credentials, tokens, or PII, use a client-side tool or one of the code methods below.

Method 2: JavaScript with Papa Parse

Papa Parse is the gold standard for CSV handling in JavaScript. Most people know it as a CSV parser, but it also converts JSON to CSV.

Basic Conversion

import Papa from 'papaparse';

const data = [
  { name: "Alice", age: 30, city: "NYC" },
  { name: "Bob", age: 25, city: "LA" },
  { name: "Charlie", age: 35, city: "Chicago" }
];

const csv = Papa.unparse(data);
console.log(csv);
// Output:
// name,age,city
// Alice,30,NYC
// Bob,25,LA
// Charlie,35,Chicago

Handling Nested Objects

Papa Parse does not automatically flatten nested objects. You need to preprocess:

function flattenObject(obj, prefix = '') {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    const newKey = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(result, flattenObject(value, newKey));
    } else if (Array.isArray(value)) {
      result[newKey] = value.join('; ');
    } else {
      result[newKey] = value;
    }
  }
  return result;
}

const nested = [
  {
    name: "Alice",
    address: { street: "123 Main St", city: "NYC" },
    hobbies: ["reading", "hiking"]
  },
  {
    name: "Bob",
    address: { street: "456 Oak Ave", city: "LA" },
    hobbies: ["gaming"]
  }
];

const flat = nested.map(obj => flattenObject(obj));
const csv = Papa.unparse(flat);
console.log(csv);
// name,address.street,address.city,hobbies
// Alice,123 Main St,NYC,"reading; hiking"
// Bob,456 Oak Ave,LA,gaming

Node.js File Conversion

import { readFileSync, writeFileSync } from 'fs';
import Papa from 'papaparse';

const json = JSON.parse(readFileSync('data.json', 'utf-8'));
const csv = Papa.unparse(json);
writeFileSync('data.csv', csv);
console.log('Done. Rows:', json.length);

Install with npm install papaparse.

Method 3: Python with pandas

Python's pandas library makes this a one-liner for simple cases and handles complex transformations gracefully.

Basic Conversion

import pandas as pd

df = pd.read_json('data.json')
df.to_csv('output.csv', index=False)

That is the entire script. For most flat JSON arrays, this is all you need.

Handling Nested JSON

Use json_normalize to flatten nested structures:

import pandas as pd
import json

with open('data.json') as f:
    data = json.load(f)

# Flatten nested objects
df = pd.json_normalize(data, sep='.')
df.to_csv('output.csv', index=False)

json_normalize is powerful. It handles multiple levels of nesting and lets you control how deep it flattens:

# Example: API response with nested user profiles
data = [
    {
        "id": 1,
        "user": {
            "name": "Alice",
            "contact": {"email": "alice@example.com", "phone": "555-0101"}
        },
        "scores": [95, 87, 91]
    }
]

df = pd.json_normalize(data, sep='_')
print(df.columns.tolist())
# ['id', 'scores', 'user_name', 'user_contact_email', 'user_contact_phone']
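If you only need partial flattening, json_normalize also accepts a max_level argument. A minimal sketch, reusing the example record above (columns printed sorted, since exact ordering is an implementation detail):

```python
import pandas as pd

data = [
    {
        "id": 1,
        "user": {
            "name": "Alice",
            "contact": {"email": "alice@example.com", "phone": "555-0101"}
        },
        "scores": [95, 87, 91]
    }
]

# max_level=1 flattens one level: user.name becomes its own column,
# but user.contact stays a single column holding the raw dict
df = pd.json_normalize(data, sep='_', max_level=1)
print(sorted(df.columns))
# ['id', 'scores', 'user_contact', 'user_name']
```

This is handy when a nested blob (like contact) should travel as one opaque value rather than a column per field.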

Handling Arrays Inside Records

Arrays need special treatment. You have two options:

# Option 1: Join arrays into strings
df['scores'] = df['scores'].apply(lambda x: '; '.join(map(str, x)))

# Option 2: Explode into separate rows
df = df.explode('scores')
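As a quick sketch of option 2: explode repeats the other columns once per array element, turning one row into several.

```python
import pandas as pd

# One record whose scores column holds a list
df = pd.DataFrame({"id": [1], "scores": [[95, 87, 91]]})

# explode produces one row per list element; id is repeated
out = df.explode('scores')
print(out.to_csv(index=False))
# id,scores
# 1,95
# 1,87
# 1,91
```

Choose join when each record must stay one CSV row; choose explode when the downstream tool expects "long" data.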

Large Files

For JSON files that are too large for memory:

import ijson  # pip install ijson
import csv

with open('large.json', 'rb') as infile, open('output.csv', 'w', newline='') as outfile:
    writer = None
    for record in ijson.items(infile, 'item'):
        if writer is None:
            writer = csv.DictWriter(outfile, fieldnames=record.keys())
            writer.writeheader()
        writer.writerow(record)

This streams the JSON file record by record instead of loading everything into memory. Note that the header comes from the first record's keys, so this approach assumes every record has the same keys; csv.DictWriter raises a ValueError if a later record contains unexpected extras.

Common Gotchas

Inconsistent keys across records. If record 1 has {name, age} and record 2 has {name, email}, pandas handles this gracefully: columns are the union of all keys and missing values become NaN. Papa Parse's unparse derives headers from the first object's keys by default, so fields that appear only in later records are dropped unless you pass an explicit columns list in the config.
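To see the pandas behavior concretely, a minimal sketch with two made-up records:

```python
import pandas as pd

records = [
    {"name": "Alice", "age": 30},
    {"name": "Bob", "email": "bob@example.com"},
]

df = pd.DataFrame(records)
# Columns are the union of all keys; missing values become NaN,
# which to_csv writes as an empty cell by default.
# Note age becomes 30.0: the NaN forces the column to float dtype.
print(df.to_csv(index=False))
# name,age,email
# Alice,30.0,
# Bob,,bob@example.com
```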

Null vs empty string. JSON null and empty string "" are different, but a CSV cell cannot reliably represent the difference: by default both become an empty cell on output, and spreadsheet applications treat them the same on import. If the distinction matters downstream, write nulls as an explicit sentinel, e.g. df.to_csv('output.csv', index=False, na_rep='NULL').

Unicode. Always write CSV files with UTF-8 encoding. In Python: df.to_csv('output.csv', index=False, encoding='utf-8-sig'). The utf-8-sig variant adds a BOM that helps Excel detect the encoding correctly.

Dates. JSON has no date type -- dates are strings. If your CSV consumer expects a specific date format, parse and reformat in your conversion script:

df['created_at'] = pd.to_datetime(df['created_at']).dt.strftime('%Y-%m-%d')
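If some records carry malformed date strings, passing errors='coerce' to pd.to_datetime turns them into NaT instead of raising. A sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({"created_at": ["2024-03-01T14:30:00Z", "not a date"]})

# Unparseable strings become NaT; strftime then yields NaN,
# which ends up as an empty CSV cell
parsed = pd.to_datetime(df['created_at'], errors='coerce', utc=True)
df['created_at'] = parsed.dt.strftime('%Y-%m-%d')
print(df['created_at'].tolist())
# ['2024-03-01', nan]
```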

Which Method Should You Use?

Scenario                        Best Method
One-off, non-sensitive data     Online tool
Frontend/Node.js project        Papa Parse
Data analysis pipeline          pandas
Files > 1GB                     ijson + csv (streaming)
Deeply nested, complex JSON     pandas json_normalize

For most developers, the pandas two-liner is the right answer. It handles edge cases, scales to reasonable file sizes, and the code is readable enough that your future self will understand it.
