lasect/hyperaudit
hyperaudit

Find the footguns in your TimescaleDB setup before they go off.

npx hyperaudit postgresql://user:pass@host:5432/dbname

What it does

hyperaudit connects to your Postgres database, audits every TimescaleDB primitive it finds (hypertables, chunks, compression policies, continuous aggregates, retention policies), and reports what's wrong along with the exact SQL to fix it.

hyperaudit report — metrics_db
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TimescaleDB 2.14.2 · 6 hypertables · 1,847 chunks

Score:  61/100   Grade: C+

-  Wasted storage:       34 GB  
-  Critical findings:    3
-  Warnings:             7
-  Info:                 4

Biggest wins:
  1. Enable compression on events_raw          → save 28 GB
  2. Fix cagg refresh on hourly_metrics        → 4h 23m stale
  3. UUID in segmentby on sensor_data          → killing compression ratio
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with --fix to get the SQL for each finding.

A browser dashboard opens automatically with a full interactive report, chunk timeline visualization, and a simulator to preview config changes before making them.


Install

No install required:

npx hyperaudit <db-url>

Or install globally:

npm install -g hyperaudit
hyperaudit <db-url>

Usage

# Full audit with browser dashboard
npx hyperaudit postgresql://user:pass@host/db

# Terminal only, no browser
npx hyperaudit <db-url> --no-browser

# Print the SQL fix for every finding
npx hyperaudit <db-url> --fix

# Output raw findings as JSON (good for CI)
npx hyperaudit <db-url> --json

# Run with demo data, no db required
npx hyperaudit --simulate

# Preview how a config change would affect your score
npx hyperaudit <db-url> --simulate --chunk-interval 1week --compression-after 7days

# Export a shareable report card (no sensitive data)
npx hyperaudit <db-url> --share
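
The --json output pairs well with a CI gate. A minimal sketch, assuming the JSON contains a "findings" array whose entries carry a "severity" field (this shape is an assumption, not documented behavior; inspect your actual output first):

```shell
# CI gate sketch. ASSUMPTION: --json emits a "findings" array whose
# entries have a "severity" field -- verify against your version's output.
# In real CI, replace the sample below with:
#   audit_json=$(npx hyperaudit "$DATABASE_URL" --json --no-browser)
audit_json='{"findings":[{"severity":"critical"},{"severity":"warning"},{"severity":"critical"}]}'

# Count critical findings (naive string match; a JSON parser is safer).
criticals=$(printf '%s' "$audit_json" | grep -o '"severity":"critical"' | wc -l)
echo "critical findings: $criticals"
if [ "$criticals" -gt 0 ]; then
  echo "blocking merge"   # in CI: exit 1 here
fi
```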

Checks

Compression

🔴 Critical   Uncompressed chunks past compression policy window
🔴 Critical   Compression enabled but zero space savings
🔴 Critical   High-cardinality column in segmentby (UUID, random IDs)
⚠️ Warning    Low correlation on orderby column
⚠️ Warning    No compression policy on hypertables older than 7 days
ℹ️ Info       Compression ratio below expected baseline for data type
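
Several of these can be spot-checked by hand. A sketch of the first check using TimescaleDB's documented catalog views ('sensor_data' and the 7-day window are placeholders):

```sql
-- Uncompressed chunks whose time range ended before the policy window.
-- Table name and interval are placeholders; adapt to your schema.
SELECT chunk_schema, chunk_name, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'sensor_data'
  AND NOT is_compressed
  AND range_end < now() - INTERVAL '7 days';
```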

Chunk Health

⚠️ Warning    Chunk interval too small (fragmentation, high overhead)
⚠️ Warning    Chunk interval too large (pruning suffers, compression less effective)
⚠️ Warning    Abnormal chunk proliferation
ℹ️ Info       Future chunks pre-created beyond reasonable horizon
🔴 Critical   Orphaned chunks outside any policy scope
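
When the interval checks fire, the standard remedy is TimescaleDB's set_chunk_time_interval. Note it only affects chunks created after the call; the table name and interval here are placeholders:

```sql
-- Only new chunks pick up the interval; existing chunks keep theirs.
SELECT set_chunk_time_interval('sensor_data', INTERVAL '1 week');
```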

Continuous Aggregates

🔴 Critical   Cagg staleness beyond refresh interval
⚠️ Warning    No refresh policy defined
⚠️ Warning    Materialization gaps (time ranges with holes)
🔴 Critical   Cagg being queried but too stale to be useful
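
For the "no refresh policy defined" case, the usual fix is TimescaleDB's add_continuous_aggregate_policy (the cagg name and intervals below are placeholders):

```sql
-- Refresh the window from 3 hours ago up to 1 hour ago, every hour.
-- The most recent hour is excluded since it may still be receiving writes.
SELECT add_continuous_aggregate_policy('hourly_metrics',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```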

Query Health

(requires pg_stat_statements)

🔴 Critical   Recent queries not benefiting from chunk pruning
⚠️ Warning    Queries scanning future chunks
⚠️ Warning    Mixed compressed/uncompressed chunk scans in same plan
⚠️ Warning    Missing index on time column for non-partitioning queries

Retention

🔴 Critical   Data beyond retention policy that wasn't dropped
⚠️ Warning    No retention policy on hypertables above size threshold
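
A missing retention policy is a one-line fix with TimescaleDB's add_retention_policy (table name and window below are placeholders):

```sql
-- Drop chunks whose data is older than 90 days, on a background schedule.
SELECT add_retention_policy('sensor_data', INTERVAL '90 days');
```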

Schema

ℹ️ Info       High-cardinality secondary dimension missing space partition
ℹ️ Info       Columns better suited as a separate hypertable

The --fix flag

Every finding includes the exact SQL or TimescaleDB API call to resolve it.

🔴 CRITICAL · sensor_data
   847 uncompressed chunks past compression policy window
   Estimated waste: 28 GB (~$56/mo)

   Fix:
   SELECT compress_chunk(i) FROM show_chunks('sensor_data', older_than => INTERVAL '7 days') i;

   To prevent recurrence:
   SELECT add_compression_policy('sensor_data', INTERVAL '7 days');

Browser Dashboard

When you run hyperaudit, it spins up a local server and opens a browser report automatically. The dashboard includes:

  • Chunk timeline — all chunks visualized across time, colored by status (compressed, uncompressed, policy-violated, orphaned)
  • Per-hypertable drilldown — full findings for each table
  • Simulator — adjust chunk interval, compression policy, retention window and see how your score changes before touching anything
  • Shareable card — exports a clean PNG of your score and top findings, no connection string or sensitive data included

Disable with --no-browser if you're running in CI or just want the terminal output.


How the score works

Each check has a weighted impact on your score. Critical findings lose more points than warnings. The score is designed so that a well-configured production database should score 85+. A fresh database with defaults and no tuning typically scores 40–60.
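
As an illustration only (the weights below are made up; hyperaudit's real weights are internal), a weighted deduction model looks like:

```shell
# Hypothetical scoring sketch -- the weights are invented for illustration,
# not hyperaudit's actual values. Finding counts match the sample report.
criticals=3; warnings=7; infos=4
score=$(( 100 - criticals * 10 - warnings * 2 - infos * 1 ))
echo "score: $score/100"
```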


What hyperaudit reads

hyperaudit is read-only. It never writes to your database. It queries:

  • timescaledb_information.* — hypertables, chunks, compression settings, continuous aggregates, jobs, job stats
  • chunk_compression_stats(), hypertable_compression_stats(), hypertable_detailed_size() — size and compression internals
  • pg_stat_user_tables, pg_class, pg_indexes, pg_stats — Postgres system catalogs
  • pg_stat_statements — query health checks (optional, gracefully skipped if not enabled)
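
These are public, documented TimescaleDB surfaces, so you can run the same kinds of queries yourself ('sensor_data' below is a placeholder):

```sql
-- Examples of the documented views and functions listed above.
SELECT hypertable_name, num_chunks, compression_enabled
FROM timescaledb_information.hypertables;

SELECT * FROM hypertable_detailed_size('sensor_data');
```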

No data leaves your network. The browser dashboard runs entirely locally.


Requirements

  • Node.js 18+
  • PostgreSQL 13+ with TimescaleDB installed
  • The database user needs pg_read_all_stats or equivalent read access to system catalogs

Contributing

Issues and PRs welcome. If you find a check that should exist and doesn't, open an issue.


License

MIT
