# Getting Started
Five minutes. That's all it takes to go from "what is this?" to "how did I live without this?"
## Installation

```sh
# One-liner (macOS / Linux)
curl -fsSL https://raw.githubusercontent.com/benbernard/RecordStream/master/install.sh | bash
```

This detects your platform (linux/darwin, x64/arm64), downloads the correct binary from GitHub releases, and installs it to `/usr/local/bin` (or `~/.local/bin` if that's not writable). You can customize with environment variables:
```sh
# Install to a specific directory
INSTALL_DIR=~/bin curl -fsSL https://raw.githubusercontent.com/benbernard/RecordStream/master/install.sh | bash

# Install a specific version
VERSION=v1.0.0 curl -fsSL https://raw.githubusercontent.com/benbernard/RecordStream/master/install.sh | bash
```

Verify it works:
```sh
echo '{"name":"world"}' | recs eval '"Hello, {{name}}!"'
# Hello, world!
```

If you see that, you're in business.
## Updating
recs checks for updates automatically in the background (at most once per day). When a new version is available, you'll see a notice:
```
recs v1.1.0 available (current: v1.0.0). Run: recs --update
```

To update:

```sh
recs --update
```

To disable update checks (e.g. in CI), pass `--no-update-check`.
## Your First Pipeline
RecordStream works with records — JSON objects, one per line. Let's start with the simplest possible pipeline:
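For instance, a stream of two records (the values here are illustrative) is just two JSON objects on two lines:

```
{"host":"web-1","status":200}
{"host":"web-2","status":500}
```

Each command in a pipeline consumes records like these and emits records (or, for the output commands, rendered text).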
```sh
echo '{"greeting":"hello","target":"world"}' | recs eval '"{{greeting}}, {{target}}!"'
```

Output:

```
hello, world!
```

The `{{greeting}}` syntax is a key spec — it reaches into the record and grabs the value. More on that in the Key Specs guide.
## A Real-World Example
Say you've got a CSV of server logs:
```csv
timestamp,host,status,latency_ms
2024-01-15T10:00:00Z,web-1,200,45
2024-01-15T10:00:01Z,web-2,500,2301
2024-01-15T10:00:02Z,web-1,200,52
2024-01-15T10:00:03Z,web-3,200,38
2024-01-15T10:00:04Z,web-2,500,3102
```

**Before recs:** You'd write a Python script, import `csv`, loop through rows, build dictionaries, filter, aggregate... 20 lines minimum.
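For comparison, here's a sketch of the kind of script that paragraph alludes to. (The inline `LOGS` sample and the variable names are illustrative; this is not part of recs.)

```python
# A rough "before recs" script: count 5xx responses per host,
# then list hosts by error count, descending.
import csv
import io
from collections import Counter

# The sample log data from above, inlined so the script is self-contained.
LOGS = """\
timestamp,host,status,latency_ms
2024-01-15T10:00:00Z,web-1,200,45
2024-01-15T10:00:01Z,web-2,500,2301
2024-01-15T10:00:02Z,web-1,200,52
2024-01-15T10:00:03Z,web-3,200,38
2024-01-15T10:00:04Z,web-2,500,3102
"""

errors = Counter()
for row in csv.DictReader(io.StringIO(LOGS)):
    if int(row["status"]) >= 500:
        errors[row["host"]] += 1

# Sort hosts by error count, highest first.
by_count = sorted(errors.items(), key=lambda kv: -kv[1])
for host, count in by_count:
    print(f"{host} {count}")  # web-2 2
```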
**After recs:**
```sh
# Which hosts are having a bad day?
recs fromcsv --header logs.csv \
  | recs grep '{{status}} >= 500' \
  | recs collate --key host -a count \
  | recs sort --key count=-n \
  | recs totable
```

```
host   count
----   -----
web-2  2
```

`web-2` is the culprit. Case closed.
## The Three Flavors of Commands
Every recs command falls into one of three categories:
### Input: Getting Data In
These commands create records from external sources:
```sh
recs fromcsv --header data.csv                      # CSV files
recs fromjsonarray < data.json                      # JSON arrays
recs fromkv --delim '=' < config.txt                # Key-value pairs
recs fromre '(\d+)\s+(\w+)' -k num,word < data.txt  # Regex extraction
```

### Transform: Doing Stuff With It
These commands reshape, filter, and aggregate:
```sh
recs grep '{{age}} > 21'                        # Filter records
recs xform '{{name}} = {{name}}.toUpperCase()'  # Modify in place
recs sort --key age=-n                          # Sort (numeric, descending)
recs collate --key dept -a sum,salary           # Group and aggregate
```

### Output: Getting Data Out
These commands render records for human consumption:
```sh
recs totable            # ASCII table
recs tocsv -k name,age  # Back to CSV
recs toprettyprint      # Pretty-printed JSON
recs tohtml             # HTML table
```

## Composing Commands
The power is in the pipes. Each command does one thing well, and you chain them together:
```sh
recs fromcsv --header employees.csv \
  | recs grep '{{department}} === "Engineering"' \
  | recs xform '{{annual}} = {{salary}} * 12' \
  | recs sort --key annual=-n \
  | recs topn --key annual -n 5 \
  | recs totable -k name,annual
```

That pipeline:
- Reads a CSV with headers
- Keeps only Engineering records
- Computes annual salary
- Sorts by salary descending
- Takes the top 5
- Displays as a table
All without writing a single script file. Your data just got piped into shape.
## Next Steps
- Snippets — Learn the `{{keyspec}}` syntax and inline code expressions
- Key Specs — Navigate nested data like a pro
- The Pipeline Model — Understand the philosophy
- Command Reference — Every command, every flag, every example
