Pipe All the Things
A toolkit for taming JSON streams with Unix pipes, because your data called and it wants to be transformed.

Compose powerful data pipelines from small, focused commands. Each record is one JSON object per line: Unix philosophy meets structured data.

Got a CSV? Turn it into insights in a few lines:
```
# Who's spending the most? Let's find out.
recs fromcsv --header purchases.csv \
  | recs collate --key customer -a sum,amount \
  | recs sort --key sum_amount=-n \
  | recs totable
```

```
customer   sum_amount
--------   ----------
Alice         9402.50
Bob           7281.00
Charlie       3104.75
```

Or maybe you've got some JSON and you need answers now:
```
cat api-response.json \
  | recs fromjsonarray \
  | recs grep '{{status}} === "active"' \
  | recs xform '{{age}} = {{age}} + 1' \
  | recs totable -k name,status,age
```

Installation is a single command:

```
# One-liner (macOS / Linux)
curl -fsSL https://raw.githubusercontent.com/benbernard/RecordStream/master/install.sh | bash
```

That detects your platform, downloads the right binary, and puts it in your $PATH. Updates happen automatically: recs checks for new versions in the background and tells you when one is available. Run `recs --update` to upgrade in place.
RecordStream is built on a simple idea: one JSON object per line. Every command reads records from stdin, does something useful, and writes records to stdout. Chain them with pipes and you've got a data pipeline that would make a shell wizard weep with joy.
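The protocol is small enough to sketch in a few lines. Here is a minimal Python model of a recs-style transform step (the `xform` function and sample record are illustrative, not part of RecordStream): parse one JSON line, modify it, re-serialize it.

```python
import json

def xform(line: str) -> str:
    """One record in, one record out: parse, modify, re-serialize.

    Mirrors the effect of `recs xform '{{age}} = {{age}} + 1'`.
    """
    rec = json.loads(line)
    rec["age"] += 1
    return json.dumps(rec)

# A real command would loop over sys.stdin; a sample record keeps
# this sketch self-contained.
for line in ['{"name": "Ann", "age": 41, "status": "active"}']:
    print(xform(line))
```

Because every command speaks this same line-per-record protocol on stdin and stdout, any two commands compose with a pipe.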
- **Input** commands create records from the outside world: CSV files, databases, XML, Apache logs, you name it.
- **Transform** commands reshape, filter, sort, collate, and generally boss your data around.
- **Output** commands turn records into something humans can read: tables, CSV, HTML, pretty-printed JSON.
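The input stage is the simplest of the three to picture: take rows in some external format and emit one JSON object per line. A rough Python approximation of what `recs fromcsv --header` does (illustrative data; not the actual implementation):

```python
import csv
import io
import json

CSV_TEXT = """customer,amount
Alice,9402.50
Bob,7281.00
"""

def csv_to_records(text: str):
    """Turn header-row CSV into JSON lines: one object per row."""
    reader = csv.DictReader(io.StringIO(text))
    return [json.dumps(dict(row)) for row in reader]

for record in csv_to_records(CSV_TEXT):
    print(record)
```

Note that a CSV reader yields strings; downstream commands that need numbers must coerce them, which is why recs expressions can do arithmetic on fields.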
The result? Complex data transformations expressed as readable, composable, debuggable pipelines. No Spark cluster required.
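As a sense of how little machinery that takes: the collate-then-sort step from the first example is just a streaming group-by over JSON-lines records. A minimal Python sketch (illustrative data and field names; not how recs is implemented internally):

```python
import json
from collections import defaultdict

# One JSON object per line, as an input command would emit.
LINES = [
    '{"customer": "Alice", "amount": 9402.50}',
    '{"customer": "Bob", "amount": 7281.00}',
    '{"customer": "Alice", "amount": 100.00}',
]

# Streaming group-by with a sum aggregate, like `recs collate`.
totals = defaultdict(float)
for line in LINES:
    rec = json.loads(line)
    totals[rec["customer"]] += rec["amount"]

# Numeric descending sort, then back to one JSON record per line.
for customer, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(json.dumps({"customer": customer, "sum_amount": total}))
```

Each stage holds only the state it needs (here, one running total per key), which is why these pipelines handle large streams comfortably.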