Find the footguns in your TimescaleDB setup before they go off.
```shell
npx hyperaudit postgresql://user:pass@host:5432/dbname
```

hyperaudit connects to your Postgres database, audits every TimescaleDB primitive it finds — hypertables, chunks, compression policies, continuous aggregates, retention policies — tells you exactly what's wrong, and gives you the exact SQL to fix it.
```
hyperaudit report — metrics_db
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TimescaleDB 2.14.2 · 6 hypertables · 1,847 chunks
Score: 61/100   Grade: C+

- Wasted storage: 34 GB
- Critical findings: 3
- Warnings: 7
- Info: 4

Biggest wins:
1. Enable compression on events_raw → save 28 GB
2. Fix cagg refresh on hourly_metrics → 4h 23m stale
3. UUID in segmentby on sensor_data → killing compression ratio
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Run with --fix to get the SQL for each finding.
A browser dashboard opens automatically with a full interactive report, chunk timeline visualization, and a simulator to preview config changes before making them.
No install required:

```shell
npx hyperaudit <db-url>
```

Or install globally:

```shell
npm install -g hyperaudit
hyperaudit <db-url>
```

```shell
# Full audit with browser dashboard
npx hyperaudit postgresql://user:pass@host/db

# Terminal only, no browser
npx hyperaudit <db-url> --no-browser

# Print the SQL fix for every finding
npx hyperaudit <db-url> --fix

# Output raw findings as JSON (good for CI)
npx hyperaudit <db-url> --json

# Run with demo data, no db required
npx hyperaudit --simulate

# Preview how a config change would affect your score
npx hyperaudit <db-url> --simulate --chunk-interval 1week --compression-after 7days

# Export a shareable report card (no sensitive data)
npx hyperaudit <db-url> --share
```

hyperaudit runs checks in six categories.

**Compression**

| Check | Severity |
|---|---|
| Uncompressed chunks past compression policy window | 🔴 Critical |
| Compression enabled but zero space savings | 🔴 Critical |
| High-cardinality column in segmentby (UUID, random IDs) | 🔴 Critical |
| Low correlation on orderby column | 🟡 Warning |
| No compression policy on hypertables older than 7 days | 🟡 Warning |
| Compression ratio below expected baseline for data type | ℹ️ Info |
**Chunking**

| Check | Severity |
|---|---|
| Chunk interval too small (fragmentation, high overhead) | 🟡 Warning |
| Chunk interval too large (pruning suffers, compression less effective) | 🟡 Warning |
| Abnormal chunk proliferation | 🟡 Warning |
| Future chunks pre-created beyond reasonable horizon | ℹ️ Info |
| Orphaned chunks outside any policy scope | 🔴 Critical |
**Continuous aggregates**

| Check | Severity |
|---|---|
| Cagg staleness beyond refresh interval | 🔴 Critical |
| No refresh policy defined | 🟡 Warning |
| Materialization gaps (time ranges with holes) | 🟡 Warning |
| Cagg being queried but too stale to be useful | 🔴 Critical |
**Query health** (requires `pg_stat_statements`)

| Check | Severity |
|---|---|
| Recent queries not benefiting from chunk pruning | 🔴 Critical |
| Queries scanning future chunks | 🟡 Warning |
| Mixed compressed/uncompressed chunk scans in same plan | 🟡 Warning |
| Missing index on time column for non-partitioning queries | 🟡 Warning |
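These query-health checks are skipped gracefully when `pg_stat_statements` is absent. If you want them, enabling the extension is a two-step change (a sketch; it needs superuser access and a server restart):

```sql
-- In postgresql.conf (takes effect after a restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, in the database you plan to audit:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```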
**Retention**

| Check | Severity |
|---|---|
| Data beyond retention policy that wasn't dropped | 🔴 Critical |
| No retention policy on hypertables above size threshold | 🟡 Warning |
**Schema design**

| Check | Severity |
|---|---|
| High-cardinality secondary dimension missing space partition | ℹ️ Info |
| Columns better suited as a separate hypertable | ℹ️ Info |
Every finding includes the exact SQL or TimescaleDB API call to resolve it.
```
🔴 CRITICAL · sensor_data
847 uncompressed chunks past compression policy window
Estimated waste: 28 GB (~$56/mo)

Fix:
SELECT compress_chunk(i) FROM show_chunks('sensor_data', older_than => INTERVAL '7 days') i;

To prevent recurrence:
SELECT add_compression_policy('sensor_data', INTERVAL '7 days');
```
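After applying a fix like this, you can verify the effect with TimescaleDB's built-in stats function, the same internals hyperaudit reads. A quick check, assuming TimescaleDB 2.x:

```sql
-- Before/after totals for the whole hypertable:
SELECT total_chunks,
       number_compressed_chunks,
       pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('sensor_data');
```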
When you run hyperaudit, it spins up a local server and opens a browser report automatically. The dashboard includes:
- Chunk timeline — all chunks visualized across time, colored by status (compressed, uncompressed, policy-violated, orphaned)
- Per-hypertable drilldown — full findings for each table
- Simulator — adjust chunk interval, compression policy, retention window and see how your score changes before touching anything
- Shareable card — exports a clean PNG of your score and top findings, no connection string or sensitive data included
Disable with --no-browser if you're running in CI or just want the terminal output.
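In CI, the `--json` output can gate a pipeline. A minimal sketch; the `findings[].severity` shape is an assumption about the JSON layout, so inspect your own `--json` output first:

```shell
# Fail the build when hyperaudit reports any critical finding.
# NOTE: the "findings"/"severity" JSON field names are assumptions, not documented.
npx hyperaudit "$DB_URL" --json > audit.json

criticals=$(node -e 'const r = require("./audit.json"); console.log(r.findings.filter(f => f.severity === "critical").length)')
if [ "$criticals" -gt 0 ]; then
  echo "hyperaudit: $criticals critical finding(s)"
  exit 1
fi
```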
Each check has a weighted impact on your score. Critical findings lose more points than warnings. The score is designed so that a well-configured production database should score 85+. A fresh database with defaults and no tuning typically scores 40–60.
hyperaudit is read-only. It never writes to your database. It queries:
- `timescaledb_information.*` — hypertables, chunks, compression settings, continuous aggregates, jobs, job stats
- `chunk_compression_stats()`, `hypertable_compression_stats()`, `hypertable_detailed_size()` — size and compression internals
- `pg_stat_user_tables`, `pg_class`, `pg_indexes`, `pg_stats` — Postgres system catalogs
- `pg_stat_statements` — query health checks (optional, gracefully skipped if not enabled)
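All of these are ordinary SQL surfaces, so you can inspect the same data yourself. For example, a per-hypertable compression breakdown from `timescaledb_information.chunks`:

```sql
-- Chunk counts per hypertable, split by compression status:
SELECT hypertable_name,
       count(*)                                  AS chunks,
       count(*) FILTER (WHERE NOT is_compressed) AS uncompressed
FROM timescaledb_information.chunks
GROUP BY hypertable_name
ORDER BY uncompressed DESC;
```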
No data leaves your network. The browser dashboard runs entirely locally.
- Node.js 18+
- PostgreSQL 13+ with TimescaleDB installed
- The database user needs `pg_read_all_stats` or equivalent read access to system catalogs
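If you'd rather not hand hyperaudit your application role, a dedicated read-only role is enough. A sketch, with placeholder role, password, and database names:

```sql
-- Hypothetical audit role; adjust the name, password, and database to taste.
CREATE ROLE hyperaudit_ro LOGIN PASSWORD 'change-me';
GRANT pg_read_all_stats TO hyperaudit_ro;
GRANT CONNECT ON DATABASE metrics_db TO hyperaudit_ro;
```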
Issues and PRs welcome. If you find a check that should exist and doesn't, open an issue.
MIT