feat: post-rethink substrate batch — aliases / unused-type-members / suppressions / V8 coverage #75
Ships four roadmap items extracted from the dominance-rethink + non-goals
reassessment, alongside a docs-governance cleanup that purges fallow
references (research adoption-tracker no longer the codemap mission;
roadmap is the north star).
Substrate features (each independently changeset'd):
1. **Outcome-shaped CLI aliases** (XS, patch) — five thin wrappers over
`query --recipe <id>`: `codemap dead-code` / `deprecated` / `boundaries`
/ `hotspots` / `coverage-gaps`. Every `query` flag passes through.
Capped at 5; mapping in `src/cli/aliases.ts`. Closes verb-obviousness
gap; Moat-A clean (recipe IS the SQL).
2. **`unused-type-members` recipe** (S, patch) — field-level enumeration
of `type_members` whose owning type has no detectable importer. Sister
to `unimported-exports`; STRICTLY ADVISORY — codemap can't see indexed
access (`T['field']`), `keyof T`, mapped types, destructuring, or
re-export chains. Bundled `.md` documents every false-positive class.
3. **`suppressions` substrate** (S, minor — schema 9→10) — opt-in markers
parser extension recognises `// codemap-ignore-{next-line,file} <recipe-id>`
(also `#`, `--`, `<!--`, `/*` leaders) → new `suppressions` table.
Recipes opt in via `LEFT JOIN`; `untested-and-dead` (line + file) and
`unimported-exports` (file only — exports has no `line_number`) honor
today. Stays consistent with the no-opinionated-rule-engine Floor:
no severity, no suppression-by-default, no universal-honor; consumer-
chosen substrate, not policy.
4. **V8 runtime coverage parser** (S, minor) — `codemap ingest-coverage
--runtime <dir>` reads `NODE_V8_COVERAGE=...`-style directories
(one or more `coverage-*.json` per process), maps byte-offset ranges
to per-line hits via innermost-wins range walking, dispatches to the
existing `upsertCoverageRows` core. Format auto-detection unchanged
for files; `--runtime` is the explicit V8 opt-in. Local-only — SaaS
aggregation explicitly out of scope (different product class).
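The alias layer in item 1 is small enough to sketch. A minimal TypeScript sketch of the argv rewrite, assuming the alias name doubles as the recipe id (the real table and export shape live in `src/cli/aliases.ts` and may differ):

```typescript
// Hypothetical sketch of the outcome-alias rewrite: if the first argv token is
// a known alias, rewrite it to `query --recipe <id>` and pass every remaining
// flag through untouched. Recipe ids equal to alias names is an assumption.
const OUTCOME_ALIASES: Record<string, string> = {
  "dead-code": "dead-code",
  "deprecated": "deprecated",
  "boundaries": "boundaries",
  "hotspots": "hotspots",
  "coverage-gaps": "coverage-gaps",
};

function rewriteAlias(argv: string[]): string[] {
  const [cmd, ...rest] = argv;
  const recipe = cmd !== undefined ? OUTCOME_ALIASES[cmd] : undefined;
  if (recipe === undefined) return argv; // not an alias: dispatch unchanged
  return ["query", "--recipe", recipe, ...rest]; // flags pass through verbatim
}
```

For example, `rewriteAlias(["dead-code", "--json"])` yields `["query", "--recipe", "dead-code", "--json"]`, which then hits the normal `query` dispatch path.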
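The directive scan in item 3 can likewise be sketched. Hypothetical TypeScript for illustration only: `scanSuppressions` and the exact regex are assumptions, and the shipped parser in `src/markers.ts` may handle leaders and scopes differently.

```typescript
// Sketch of the suppression-directive scan: one anchored regex accepting the
// comment leaders listed above, emitting (line_number, recipe_id) rows with
// the scope encoding from the PR: positive = next-line, 0 = file scope.
interface SuppressionRow {
  line_number: number; // suppressed line (1-based); 0 encodes file scope
  recipe_id: string;
}

// Anchored at line start (modulo whitespace) so it never fires inside strings.
const DIRECTIVE =
  /^\s*(?:\/\/|#|--|<!--|\/\*)\s*codemap-ignore-(next-line|file)\s+(\S+)/;

function scanSuppressions(source: string): SuppressionRow[] {
  const rows: SuppressionRow[] = [];
  source.split("\n").forEach((text, i) => {
    const m = DIRECTIVE.exec(text);
    if (!m) return;
    const scope = m[1];
    const recipe = m[2];
    if (!scope || !recipe) return;
    // next-line: the directive sits one line above the suppressed line
    rows.push({ line_number: scope === "file" ? 0 : i + 2, recipe_id: recipe });
  });
  return rows;
}
```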
Docs cleanup (no functional impact):
- Deletes `docs/research/fallow.md` and
`docs/research/competitive-scan-2026-04.md` (adoption-from-fallow
framing no longer the codemap mission).
- Slims fallow refs from `lsp-diagnostic-push.md`, `why-codemap.md`,
`non-goals-reassessment-2026-05.md`, `audit-pr-architecture` skill,
`docs-governance` rule + skill, lessons.md.
- Adds `No remote-repo cloning` Floor to `roadmap.md` (recovered from
the deleted competitive-scan; rejected in PR #23 as demoware that
doesn't fit the SQL-index thesis).
- Reframes `No opinionated rule engine` Floor to clarify suppressions
ship as opt-in substrate, not as policy primitives.
- Reframes `No runtime tracing` Floor to scope V8 protocol post-mortem
ingestion as the static-side adjacent capability (live tracing /
production beacons still out of scope).
Schema bump: `SCHEMA_VERSION = 10` (added `suppressions` table). Auto-
detected; consumers get a one-time full reindex on next codemap run.
Tests: 858 pass (was 847 at branch base — +11 across `aliases.test.ts`,
`markers.test.ts` suppression coverage, `coverage-engine.test.ts` V8 paths,
`cmd-ingest-coverage.test.ts` `--runtime` flag). Golden-query suite gains
`unused-type-members` scenario.
Per [`docs/README.md` Rule 10](docs/README.md), bundled agent rule + skill
in `templates/agents/` and the dev-side mirror in `.agents/` updated in
lockstep — drift between the two pairs is CLI-prefix-only.
🦋 Changeset detected. Latest commit: 27cacfe. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
📝 Walkthrough

This PR introduces four major interconnected features: opt-in suppression directives for recipes (via source comments), V8 runtime coverage ingestion, CLI outcome aliases for common recipes, and a new `unused-type-members` recipe.

Changes

- Suppressions Substrate & Feature Implementations
- V8 Runtime Coverage Ingestion
- Outcome Aliases (CLI Shortcuts)
- Unused Type Members Recipe
- Documentation Restructuring & Governance
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

This PR spans multiple interconnected systems (suppressions across parsing/schema/indexing/recipes; V8 coverage engine expansion; CLI aliasing; new recipe; documentation restructuring). While individual feature layers are internally coherent, the total diff touches heterogeneous areas (database/schema, parsing infrastructure, CLI routing, templates, recipe SQL, and governance docs), requiring separate reasoning for each feature domain and careful cross-checking of how suppressions are threaded through the full stack. The sheer number of files and the conceptual density of new substrate patterns (suppression scoping, V8 offset mapping) demand careful review.
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Node's better-sqlite3 path in `runSql` splits the multi-statement DDL on `;` before calling `prepare()` — semicolons inside `--` line comments become statement boundaries, producing comment-only "statements" that better-sqlite3 rejects with `RangeError: The supplied SQL string contains no statements`. Bun's `bun:sqlite` accepts multi-statement strings directly so the bug only surfaces in the Node CI build job. Caveat already documented at `src/sqlite-db.ts` (`runSql`'s JSDoc): "do not put `;` inside `--` line comments in `db.ts` SQL strings". The slimmed suppressions DDL comment in 4541547 reintroduced three of them. Reworded to drop every semicolon while keeping the same intent (file vs next-line scope encoding, opt-in JOIN posture, source pointer to markers.ts). Caught by CI on PR #75 Build job; reproducible locally with `bun run build && node dist/index.mjs --full`.
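The failure mode is easy to reproduce with a toy splitter. A hypothetical sketch, for illustration only: the real `runSql` in `src/sqlite-db.ts` is more involved, and `naiveSplit` plus the DDL string below are made up here.

```typescript
// A naive splitter that cuts multi-statement DDL on every ";" turns a
// semicolon inside a "--" line comment into a bogus statement boundary,
// leaving a comment-only fragment that a prepare() call would reject.
function naiveSplit(sql: string): string[] {
  return sql
    .split(";")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

const ddl = `
-- scope encoding: 0 = file; positive = line. See markers.ts;
CREATE TABLE suppressions (id INTEGER PRIMARY KEY);
`;

// The two ";" inside the comment yield fragments containing only comment
// text, which is exactly the "contains no statements" RangeError class.
const fragments = naiveSplit(ddl);
```

Rewording the comment to avoid semicolons, as the fix above did, keeps every split fragment a real statement.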
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 7
🧹 Nitpick comments (1)
src/db.ts (1)
148-153: 💤 Low value

Consider a UNIQUE constraint on `suppressions(file_path, line_number, recipe_id)` for defensive schema design.

The incremental path (cascade delete via `deleteFileData`) and full-rebuild path (`dropAll`) both prevent duplicates correctly today. However, a UNIQUE constraint + `INSERT OR IGNORE` in `insertSuppressions` would make the schema self-documenting and guard against any future dual-ingestion paths.

♻️ Proposed change

In the DDL:

```diff
 CREATE TABLE IF NOT EXISTS suppressions (
   id INTEGER PRIMARY KEY AUTOINCREMENT,
   file_path TEXT NOT NULL REFERENCES files(path) ON DELETE CASCADE,
   line_number INTEGER NOT NULL,
-  recipe_id TEXT NOT NULL
+  recipe_id TEXT NOT NULL,
+  UNIQUE (file_path, line_number, recipe_id)
 ) STRICT;
```

And in `insertSuppressions`:

```diff
-  "INSERT INTO suppressions (file_path, line_number, recipe_id)",
+  "INSERT OR IGNORE INTO suppressions (file_path, line_number, recipe_id)",
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/db.ts` around lines 148 - 153, Add a UNIQUE constraint to the suppressions table on (file_path, line_number, recipe_id) and make insertSuppressions use an idempotent insert (e.g., INSERT OR IGNORE or equivalent) so duplicate rows can't be created; locate the CREATE TABLE statement for suppressions to add the UNIQUE(...) clause and update the insertSuppressions function to use a no-op-on-duplicate insert pattern, leaving deleteFileData and dropAll unchanged but noting they already prevent duplicates.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.agents/skills/docs-governance/SKILL.md:
- Line 88: Update the SKILL.md guidance so the "Closing research" section
follows the canonical Rule 8 lifecycle: state that when a repo-level tool is
adopted the research note under docs/research/ becomes a "what shipped" appendix
rather than remaining as a tracker, and ensure the text directs agents to move
operational rules into .agents/rules/ and keep positioning in
docs/why-codemap.md and research/non-goals-reassessment-2026-05.md; replace the
current instruction to delete adopted research files with the Rule 8 flow
(convert to an appendix unless the research framework is reusable) and normalize
terminology across the "Closing research" heading and the earlier paragraph to
remove the conflicting lifecycle language.
In `@docs/glossary.md`:
- Around line 128-133: Update the "V8 runtime" usage in the codemap
ingest-coverage help to match the shipped CLI shape: change the documented form
from `--runtime` (boolean) to `--runtime <dir>` and update the prose for the V8
runtime bullet under `codemap ingest-coverage <path> [--runtime] [--json]` to
explicitly state that the flag takes a directory argument containing
NODE_V8_COVERAGE dumps, e.g. describe it as `--runtime <dir>` (local-only) and
keep the notes about skipping non-file:// URLs, merging duplicate URLs, and
innermost-wins byte-range-to-line behavior.
In `@src/application/coverage-engine.ts`:
- Around line 470-480: buildLineOffsets currently computes line start offsets
using JS string indices (UTF-16 code units) which mismatches V8's byte offsets;
change buildLineOffsets to compute UTF-8 byte offsets instead: encode the source
with TextEncoder (or use the same byte buffer you ingest), iterate the resulting
Uint8Array and push positions where byte === 10 (LF) + 1 into the offsets array,
returning byte offsets; also ensure offsetToLine uses those byte offsets (and
that the initial offsets[0] === 0 and final bounds cover the whole byte length)
so V8 byte-range lookups map to the correct line numbers.
- Around line 405-453: The current loop groups scripts by URL (scriptsByUrl) but
writes directly into a shared lineHits so later script ranges can overwrite
earlier non-zero counts with zeros; instead, for each script in urlScripts
compute a per-script hits array (e.g., localLineHits computed using
buildLineOffsets, offsetToLine and the innermost-wins sort) and then merge that
into the shared lineHits by taking Math.max(lineHits[i] ?? 0, localLineHits[i]
?? 0) (or another explicit merge rule) for each line; ensure this change is done
inside the URL handling block (where urlScripts is iterated) so final lineHits
reflects merged multi-process coverage rather than last-writer-wins.
In `@src/cli/cmd-ingest-coverage.ts`:
- Around line 159-183: resolveV8Directory currently accepts any top-level .json
files and may silently skip non-V8 payloads; restrict acceptance to V8 coverage
filenames and fail if none of the files contain a valid V8 payload. Update
resolveV8Directory to filter filenames with a pattern like coverage-*.json
(replace .filter(f => f.endsWith(".json")) with a check for
/^coverage-.*\.json$/) and after reading/attempting to parse the files, validate
each file's parsed JSON contains the expected V8 payload (e.g., a top-level
"result" array/object); if zero files contain a valid V8 payload, throw an Error
describing no valid V8 coverage found. Apply the same filename pattern +
post-parse validation to the other ingestion path that currently skips non-V8
payloads (the block around the other JSON-reading function referenced in the
review).
In `@src/parsed-types.ts`:
- Line 37: Add API documentation for the newly exported ParsedFile.suppressions
public field: update the ParsedFile declaration to include a concise doc comment
describing what suppressions contains, its semantics (e.g., list of
SuppressionRow entries applied to the file), when it can be empty or undefined,
and any invariants or usage notes; reference the ParsedFile type and the
SuppressionRow type in the comment so consumers understand the relationship and
expected contents.
In `@templates/agents/rules/codemap.md`:
- Line 45: Update the table row for the "Coverage ingest" entry so it no longer
implies V8 is auto-detected: state that Istanbul (`coverage-final.json`) and
LCOV (`lcov.info`) are auto-detected, and that V8 coverage requires explicitly
passing the `--runtime` flag (i.e., opt-in) and points at a
`NODE_V8_COVERAGE=...` directory; keep the `codemap ingest-coverage <path>
[--runtime] [--json]` command, the note about joinability to `symbols` for
"untested AND dead" queries, and the "Local-only — no SaaS aggregation" wording
unchanged.
---
Nitpick comments:
In `@src/db.ts`:
- Around line 148-153: Add a UNIQUE constraint to the suppressions table on
(file_path, line_number, recipe_id) and make insertSuppressions use an
idempotent insert (e.g., INSERT OR IGNORE or equivalent) so duplicate rows can't
be created; locate the CREATE TABLE statement for suppressions to add the
UNIQUE(...) clause and update the insertSuppressions function to use a
no-op-on-duplicate insert pattern, leaving deleteFileData and dropAll unchanged
but noting they already prevent duplicates.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5bb903d3-2813-4548-89d9-311bf54c16dc
📒 Files selected for processing (44)
- .agents/lessons.md
- .agents/rules/codemap.md
- .agents/rules/docs-governance.md
- .agents/skills/audit-pr-architecture/SKILL.md
- .agents/skills/codemap/SKILL.md
- .agents/skills/docs-governance/SKILL.md
- .changeset/outcome-aliases.md
- .changeset/suppressions-substrate.md
- .changeset/unused-type-members-recipe.md
- .changeset/v8-runtime-coverage.md
- README.md
- docs/README.md
- docs/architecture.md
- docs/glossary.md
- docs/plans/lsp-diagnostic-push.md
- docs/research/competitive-scan-2026-04.md
- docs/research/fallow.md
- docs/research/non-goals-reassessment-2026-05.md
- docs/roadmap.md
- docs/why-codemap.md
- fixtures/golden/minimal/unused-type-members.json
- fixtures/golden/scenarios.json
- src/adapters/builtin.ts
- src/adapters/types.ts
- src/application/coverage-engine.test.ts
- src/application/coverage-engine.ts
- src/application/index-engine.ts
- src/cli/aliases.test.ts
- src/cli/aliases.ts
- src/cli/bootstrap.ts
- src/cli/cmd-ingest-coverage.test.ts
- src/cli/cmd-ingest-coverage.ts
- src/cli/main.ts
- src/db.ts
- src/markers.test.ts
- src/markers.ts
- src/parse-worker-core.ts
- src/parsed-types.ts
- templates/agents/rules/codemap.md
- templates/agents/skills/codemap/SKILL.md
- templates/recipes/unimported-exports.sql
- templates/recipes/untested-and-dead.sql
- templates/recipes/unused-type-members.md
- templates/recipes/unused-type-members.sql
💤 Files with no reviewable changes (2)
- docs/research/fallow.md
- docs/research/competitive-scan-2026-04.md
```diff
 `codemap ingest-coverage <path> [--runtime] [--json]` reads a coverage artifact and writes statement-level rows into the `coverage` table. Three formats:

 - **Istanbul JSON** (`coverage-final.json`) — emitted natively by `c8`, `nyc`, `vitest --coverage --coverage.reporter=json`, `jest --coverage --coverageReporters=json`. Parser reads `statementMap` + `s` (per-statement hit counts).
-- **LCOV** (`lcov.info`) — emitted by `bun test --coverage`, `c8 --reporter=lcov`, every legacy stack. Parser tokenises `SF:` / `DA:<line>,<count>` / `end_of_record` records; ignores `TN:` / `FN:` / `BRDA:` / `LF:` / `LH:` (statement coverage only in v1).
+- **LCOV** (`lcov.info`) — emitted by `bun test --coverage`, `c8 --reporter=lcov`, every legacy stack. Parser tokenises `SF:` / `DA:<line>,<count>` / `end_of_record` records; ignores `TN:` / `FN:` / `BRDA:` / `LF:` / `LH:` (statement coverage only).
+- **V8 runtime** (with `--runtime`) — opt-in directory mode reading `NODE_V8_COVERAGE=...` per-process dumps (`coverage-<pid>-<ts>-<seq>.json`). Each script's byte-offset ranges are converted to per-line hit counts (innermost-wins: smaller ranges override the function-as-a-whole count). Skips non-`file://` URLs (Node internals, `evalmachine.<anonymous>`); merges duplicate-URL scripts across dumps. Useful for "delete cold code with stronger evidence" agent flows. **Local-only — SaaS aggregation explicitly out of scope** (different product class).
```
Align V8 CLI syntax with the shipped command shape.
This section documents runtime ingestion as a boolean `--runtime` flag, but the PR objective describes `--runtime <dir>`. Please make this command form explicit and consistent here to avoid copy/paste failures.
As per coding guidelines, "Documentation files must be kept up-to-date with code changes."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/glossary.md` around lines 128 - 133, Update the "V8 runtime" usage in
the codemap ingest-coverage help to match the shipped CLI shape: change the
documented form from `--runtime` (boolean) to `--runtime <dir>` and update the
prose for the V8 runtime bullet under `codemap ingest-coverage <path>
[--runtime] [--json]` to explicitly state that the flag takes a directory
argument containing NODE_V8_COVERAGE dumps, e.g. describe it as `--runtime
<dir>` (local-only) and keep the notes about skipping non-file:// URLs, merging
duplicate URLs, and innermost-wins byte-range-to-line behavior.
`--runtime` IS a boolean flag — `<path>` (the existing positional, not a runtime-specific argument) carries the directory. Verify against `src/cli/cmd-ingest-coverage.ts`:

- Help: `Usage: codemap ingest-coverage <path> [--runtime] [--json]`
- Type: `runtime: boolean`
- Parser: `if (a === "--runtime") { runtime = true; continue; }` (no value consumption)

The PR-body shorthand `--runtime <dir>` was sloppy markup on my part referring to the directory passed via `<path>` while `--runtime` is set, not a CLI argument shape. Docs are correct as-is.
@SutuSebastian, you're right — I verified the logic: --runtime consumes no value and is stored as runtime: boolean, while the directory is passed via the existing <path> positional. The glossary's documented form codemap ingest-coverage <path> [--runtime] [--json] is accurate. My comment was incorrect; no change is needed here.
```typescript
/**
 * `offsets[i]` = byte offset of line `i + 1` start. Approximation: we walk
 * char codes (UTF-16) not bytes, so a multi-byte char before column 0
 * shifts subsequent offsets by 1; acceptable for line-resolution coverage.
 */
function buildLineOffsets(source: string): number[] {
  const offsets: number[] = [0];
  for (let i = 0; i < source.length; i++) {
    if (source.charCodeAt(i) === 10) offsets.push(i + 1);
  }
  return offsets;
```
This line table is not byte-aware.
The helper says V8 ranges are byte offsets, but it builds offsets from JS string indices. After the first multibyte character, offsetToLine can resolve later ranges to the wrong line. Please build line starts from the UTF-8 bytes you actually ingest, not from UTF-16 code units.
Suggested direction

```diff
-  let source: string;
+  let sourceBytes: Buffer;
   try {
-    source = readFileSync(absPath, "utf-8");
+    sourceBytes = readFileSync(absPath);
   } catch {
@@
-  const lineOffsets = buildLineOffsets(source);
+  const lineOffsets = buildLineOffsets(sourceBytes);
```

```diff
-function buildLineOffsets(source: string): number[] {
+function buildLineOffsets(source: Uint8Array): number[] {
   const offsets: number[] = [0];
   for (let i = 0; i < source.length; i++) {
-    if (source.charCodeAt(i) === 10) offsets.push(i + 1);
+    if (source[i] === 10) offsets.push(i + 1);
   }
   return offsets;
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/application/coverage-engine.ts` around lines 470 - 480, buildLineOffsets
currently computes line start offsets using JS string indices (UTF-16 code
units) which mismatches V8's byte offsets; change buildLineOffsets to compute
UTF-8 byte offsets instead: encode the source with TextEncoder (or use the same
byte buffer you ingest), iterate the resulting Uint8Array and push positions
where byte === 10 (LF) + 1 into the offsets array, returning byte offsets; also
ensure offsetToLine uses those byte offsets (and that the initial offsets[0] ===
0 and final bounds cover the whole byte length) so V8 byte-range lookups map to
the correct line numbers.
V8 source offsets are NOT bytes — they're UTF-16 code-unit positions per the Chrome DevTools Protocol spec for `Profiler.CoverageRange`:

> `startOffset` integer — JavaScript script source offset for the range start.

V8 represents JS source as UTF-16 internally, and "JavaScript script source offset" matches the units `String.prototype.charCodeAt(i)` walks. `v8-to-istanbul` (the canonical V8→Istanbul converter shipped with c8 and Node's coverage tooling) directly indexes the JS string with no byte conversion either.

So `source.charCodeAt(i) === 10` IS correct — same units as the V8 offsets it consumes. My JSDoc was misleading on this point (suggested bytes-vs-UTF-16 was an "approximation"); updated in 27cacfe to cite the spec directly so the next reviewer doesn't repeat the question. No behaviour change needed.
@SutuSebastian, you're correct — I apologize for the noise. The Chrome DevTools Protocol spec defines startOffset and endOffset as "JavaScript script source offset for the range start/end", which means they're positions in the UTF-16 JS string representation, exactly what String.prototype.charCodeAt(i) walks. The byte-conversion suggestion was wrong.
The updated JSDoc in commit 27cacfe now makes this explicit, which will prevent the same misread in future reviews. No behavior change needed — the implementation was already correct.
✏️ Learnings added
Learnt from: SutuSebastian
Repo: stainless-code/codemap PR: 75
File: src/application/coverage-engine.ts:480-490
Timestamp: 2026-05-05T20:23:04.682Z
Learning: In `src/application/coverage-engine.ts`, V8 `Profiler.CoverageRange` `startOffset`/`endOffset` values are **UTF-16 code-unit positions** (JavaScript string indices), NOT byte offsets. This matches the Chrome DevTools Protocol spec ("JavaScript script source offset") and V8's internal UTF-16 string representation. The `buildLineOffsets` function correctly uses `source.charCodeAt(i) === 10` (JS string indexing) to build line offset tables — no byte conversion is needed or correct. Do not suggest changing this to byte-based offset computation.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
```typescript
  exports?: ExportRow[];
  components?: ComponentRow[];
  markers?: MarkerRow[];
  suppressions?: SuppressionRow[];
```
Document the new public ParsedFile.suppressions field.
ParsedFile is exported, but Line 37 adds a new public field without accompanying API docs.
Suggested fix

```diff
   markers?: MarkerRow[];
+  /** Per-recipe suppression directives parsed from source comments. */
   suppressions?: SuppressionRow[];
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  markers?: MarkerRow[];
  /** Per-recipe suppression directives parsed from source comments. */
  suppressions?: SuppressionRow[];
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/parsed-types.ts` at line 37, Add API documentation for the newly exported
ParsedFile.suppressions public field: update the ParsedFile declaration to
include a concise doc comment describing what suppressions contains, its
semantics (e.g., list of SuppressionRow entries applied to the file), when it
can be empty or undefined, and any invariants or usage notes; reference the
ParsedFile type and the SuppressionRow type in the comment so consumers
understand the relationship and expected contents.
Looking at the same ParsedFile interface, 9 sibling fields carry no JSDoc — symbols, imports, exports, components, markers, typeMembers, calls, cssVariables, cssClasses, cssKeyframes. The codebase convention (codified in .agents/rules/concise-comments.md) is "JSDoc only when the field carries non-obvious semantics not visible from the name + type" — that rule's "Cut" list explicitly names "Restating the function/variable name" as a comment to delete.
`suppressions?: SuppressionRow[]` matches that bar:

- Name says "list of suppressions"
- Type points at `SuppressionRow`, which does carry a JSDoc in `db.ts` explaining the row shape (file_path / line_number scope encoding / recipe_id), and the `suppressions` table's DDL-level docstring documents the wire-level invariants
Adding a JSDoc on suppressions? here would be inconsistent with the rest of the interface. Leaving as-is.
@SutuSebastian, that's a fair point — the name + type are self-documenting here, and adding a JSDoc would be inconsistent with the nine sibling fields that follow the same pattern. I'll stand down on this one.
✏️ Learnings added
Learnt from: SutuSebastian
Repo: stainless-code/codemap PR: 75
File: src/parsed-types.ts:37-37
Timestamp: 2026-05-05T20:23:01.334Z
Learning: In the `stainless-code/codemap` repository, JSDoc is only added to fields/functions when the semantics are non-obvious from the name and type alone (governed by `.agents/rules/concise-comments.md`). "Restating the function/variable name" is explicitly listed as a comment to delete. The `ParsedFile` interface in `src/parsed-types.ts` follows this convention — most fields (e.g., `symbols`, `imports`, `exports`, `components`, `markers`, `typeMembers`, `calls`, `cssVariables`, `cssClasses`, `cssKeyframes`, `suppressions`) carry no JSDoc because the name + referenced type (which may carry its own JSDoc in `db.ts`) are sufficient. Do not flag missing JSDoc on self-explanatory fields in this codebase.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
Three real bugs from the CodeRabbit pass; pushed back on three hallucinations (replies on the threads).

* **coverage-engine.ts (T3)** — duplicate-URL scripts overwrote each other via shared `lineHits`. A multi-process test run where one process didn't hit a line could zero out another process's non-zero count. Fix: walk innermost-wins per script into a per-script accumulator, then merge across scripts with `Math.max`. New regression test exercises a "hot" + "cold" duplicate pair where last-writer-wins would have produced zero hits but Math.max-merge correctly preserves the hot count.
* **cmd-ingest-coverage.ts (T5)** — `--runtime <dir>` accepted any top-level `*.json` and silently skipped non-V8 payloads, so pointing the flag at the wrong directory reported a successful ingest with zero rows. Two safeguards: (1) `coverage-*.json` filename filter so obvious mis-targets fail at directory resolution; (2) post-parse guard that throws when zero of the matched files contained a V8 `result` array — surfaces the "wrong directory" case loudly instead of returning success with no data.
* **docs-governance/SKILL.md § Closing research (T1)** — § Closing research said "delete the research file unless its comparison framework is reusable", contradicting the canonical [docs/README.md Rule 8](docs/README.md) lifecycle which says "slim to a 'What shipped' appendix". Aligned the SKILL flow with Rule 8 (slim by default) and called out the § 6 anti-pattern exception explicitly: per-tool / peer-tool trackers get deleted because the framing was off-mission, not just stale (matches what this PR did to fallow.md and competitive-scan-2026-04.md).
* **agents rules (T7)** — Coverage-ingest dispatch row read as if V8 was auto-detected alongside Istanbul/LCOV. Re-worded to make the Istanbul/LCOV auto-detect path vs the V8 `--runtime` opt-in path explicit. Lockstep update across `templates/agents/rules/codemap.md` and `.agents/rules/codemap.md` per Rule 10.
* **coverage-engine.ts `buildLineOffsets` JSDoc (T4 corrected)** — my original comment said the line-offset table was a "byte-offset approximation" walking UTF-16 code units. That's wrong both about V8's units and about what the code does: V8's `Profiler.CoverageRange` source offsets are UTF-16 code units (per Chrome DevTools Protocol spec) — the same units `String.charCodeAt` walks. No encoding conversion needed. Updated the JSDoc to cite the spec so the next reviewer doesn't repeat the question.

Replies posted on T2, T4, T6, and the "Docstring Coverage 32%" pre-merge warning (codebase convention is concise-comments per `.agents/rules/concise-comments.md`, not 80% blanket coverage).
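A line-granular sketch of the T3 fix described above: per-script innermost-wins accumulation, then a `Math.max` merge across scripts sharing a URL. The real engine walks V8 offset ranges; `Range`, `scriptLineHits`, and `mergeHits` here are simplified, hypothetical shapes.

```typescript
interface Range { startLine: number; endLine: number; count: number }

// Per-script accumulator: sort outermost-first so smaller (innermost) ranges
// write last and override the function-as-a-whole count.
function scriptLineHits(ranges: Range[], lineCount: number): number[] {
  const hits = new Array<number>(lineCount).fill(0);
  const sorted = [...ranges].sort(
    (a, b) => (b.endLine - b.startLine) - (a.endLine - a.startLine),
  );
  for (const r of sorted) {
    for (let l = r.startLine; l <= r.endLine; l++) hits[l] = r.count;
  }
  return hits;
}

// Cross-script merge: Math.max per line, so a cold process can't zero out a
// hot one the way last-writer-wins into a shared array could.
function mergeHits(perScript: number[][], lineCount: number): number[] {
  const merged = new Array<number>(lineCount).fill(0);
  for (const hits of perScript) {
    for (let l = 0; l < lineCount; l++) {
      merged[l] = Math.max(merged[l]!, hits[l] ?? 0);
    }
  }
  return merged;
}
```

With a "hot" script reporting 5 hits on a line and a "cold" duplicate reporting 0, the merge keeps 5, matching the regression test described above.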
Re. the Docstring Coverage 32.14% < 80% pre-merge warning — declining. The 80% threshold is CodeRabbit's default, not a project rule. The codebase's actual JSDoc convention is codified in `.agents/rules/concise-comments.md`.

Length budget: "0 lines when self-explanatory; 1 line default for kept comments; >3 lines means extract to a …". The non-derivable bits in this PR (V8 source-offset spec citation, suppression scope encoding, …).
Summary
Ships four roadmap items extracted from the dominance-rethink + non-goals reassessment, alongside a docs-governance cleanup that purges fallow references (research adoption-tracker no longer the codemap mission; roadmap is the north star).
Each feature is independently changeset'd; review can read per-changeset for release-shape framing or per-section below for surface-by-surface diff.
Slice 1 — Outcome-shaped CLI aliases (XS, patch)
Five thin wrappers over
query --recipe <id>so the verb is the outcome, not the recipe id:queryflag passes through (--json,--format,--ci,--summary,--changed-since,--group-by,--params,--save-baseline,--baseline).src/cli/aliases.ts(OUTCOME_ALIASES); rewrite happens before dispatch inmain.ts.Slice 2 — Advisory
Slice 2 — Advisory `unused-type-members` recipe (S, patch)

Field-level enumeration of `type_members` whose owning type has no detectable importer. Sister recipe to `unimported-exports`: same upstream signal, joined against `type_members` so each row carries the field's name, type annotation, optionality, and readonly flag.

Strictly advisory. Codemap can't see indexed access (`T['field']`), `keyof T`, mapped types, type spreads, destructuring, or re-export chains. The bundled `.md` documents every false-positive class with tuning axes for project-local overrides. Output is a STARTING POINT for human review, never a "safe to delete" list.
Reachable as `codemap query --recipe unused-type-members` (or via `--format sarif` / `--format annotations` / `--ci` aggregate); rule id is `codemap.unused-type-members`. Golden-query expectation at `fixtures/golden/minimal/unused-type-members.json`.
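To make the advisory join concrete, a self-contained sketch with hypothetical row shapes — not codemap's actual schema or SQL:

```typescript
// Illustrative only: how an unused-type-member candidate list falls out of two
// substrate tables. Row shapes are hypothetical, not codemap's schema.
interface TypeRow { typeName: string; importerCount: number }
interface MemberRow { typeName: string; field: string; optional: boolean; readonly: boolean }

// A member is a candidate when its owning type has no detectable importer.
// Advisory by design: indexed access (T['field']), keyof T, mapped types, and
// re-export chains are invisible to this check, so rows are leads, not verdicts.
function unusedTypeMembers(types: TypeRow[], members: MemberRow[]): MemberRow[] {
  const unimported = new Set(
    types.filter((t) => t.importerCount === 0).map((t) => t.typeName),
  );
  return members.filter((m) => unimported.has(m.typeName));
}
```

The per-field fan-out is the point: one unimported type yields one row per member, each carrying enough metadata (name, optionality, readonly) for a human to triage.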
Slice 3 — `suppressions` substrate (S, minor — schema 9→10)

Opt-in markers parser extension recognises `// codemap-ignore-next-line <recipe-id>` and `// codemap-ignore-file <recipe-id>`.
Also accepts `#`, `--`, `<!--`, `/*` leaders for non-JS files. Writes to a new `suppressions` table `(file_path, line_number, recipe_id)`. Two scopes encoded by `line_number`: positive = next-line (the directive sits one line above), `0` = file scope.

Recipes opt in via `LEFT JOIN suppressions s ON s.file_path = … AND s.recipe_id = '<id>' AND (s.line_number = 0 OR s.line_number = <row's line>)` … `WHERE s.id IS NULL`. Ad-hoc SQL is unaffected. Bundled recipes that opt in today: `untested-and-dead` (line + file) and `unimported-exports` (file only — `exports` has no `line_number` column).

Stays consistent with the no-opinionated-rule-engine Floor — no severity, no suppression-by-default, no universal-honor model. Suppression is consumer-chosen substrate: recipe authors choose whether to honor; consumers can override per recipe.
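A hedged sketch of what a directive scanner along these lines can look like — the regex, names, and exact line-number bookkeeping are assumptions for illustration, not the shipped parser extension:

```typescript
// Sketch only — not codemap's parser. Leaders: //, #, --, <!--, /*.
interface Suppression { line_number: number; recipe_id: string } // 0 = file scope

// Anchored at line start (modulo whitespace), so a directive quoted mid-line
// inside a string literal or prose never matches.
const DIRECTIVE =
  /^\s*(?:\/\/|#|--|<!--|\/\*)\s*codemap-ignore-(next-line|file)\s+([\w-]+)/;

function scanSuppressions(source: string): Suppression[] {
  const rows: Suppression[] = [];
  source.split("\n").forEach((text, i) => {
    const m = DIRECTIVE.exec(text);
    if (!m) return;
    // next-line: the directive on 1-based line i+1 suppresses line i+2;
    // file scope is encoded as line_number = 0.
    rows.push({ line_number: m[1] === "file" ? 0 : i + 2, recipe_id: m[2] });
  });
  return rows;
}
```

Encoding file scope as `0` keeps the honor clause a single disjunction in the recipe's `LEFT JOIN` condition rather than a second table or column.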
The leader regex requires the directive to start a line (modulo whitespace), so it never matches inside string literals — both this repo's tests and recipe `.md` examples use the directive in prose without polluting the index.

Schema bumps to 10. A `--full` rebuild auto-runs on the next index pass via the existing schema-mismatch detection.

Slice 4 — V8 runtime coverage parser (S, minor)
Extends `codemap ingest-coverage` with a `--runtime <dir>` flag that reads `NODE_V8_COVERAGE=...`-style directories:

```sh
NODE_V8_COVERAGE=.cov bun test
codemap ingest-coverage .cov --runtime --json
```

Each script's source-offset ranges (UTF-16 code units, per the DevTools Protocol spec) are converted to per-line hit counts via innermost-wins range walking (smaller, more specific ranges override the function-as-a-whole count — matches V8's documented model). Skips non-`file://` URLs (Node internals, `evalmachine.<anonymous>`); merges duplicate-URL scripts across dumps so multi-process test runs don't inflate `total_statements`.

The `coverage` table schema doesn't move — V8 rows write through the same `(file_path, name, line_start, hit_statements, total_statements, coverage_pct)` projection, so every existing JOIN (`untested-and-dead`, `files-by-coverage`, `worst-covered-exports`) works unchanged.

Local-only — SaaS aggregation explicitly out of scope (different product class). The parser stays in-process; no aggregation server, no upload primitive. New `format: "v8"` arm on the result envelope.
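The innermost-wins walk can be sketched as follows — `lineHits` and the range shape are illustrative, not `coverage-engine.ts` internals, and the line-offset table (UTF-16 start offset of each line) is assumed precomputed:

```typescript
// Sketch only. `lineOffsets[i]` is the UTF-16 start offset of line i;
// `sourceLength` bounds the final line.
interface CoverageRange { startOffset: number; endOffset: number; count: number }

function lineHits(
  lineOffsets: number[],
  sourceLength: number,
  ranges: CoverageRange[],
): number[] {
  const hits = new Array<number>(lineOffsets.length).fill(0);
  // V8 lists a function's whole-span range before its nested block ranges, so
  // assigning in order lets smaller, more specific ranges overwrite the
  // function-as-a-whole count — innermost wins.
  for (const r of ranges) {
    for (let line = 0; line < lineOffsets.length; line++) {
      const start = lineOffsets[line];
      const end =
        line + 1 < lineOffsets.length ? lineOffsets[line + 1] - 1 : sourceLength;
      if (start >= r.startOffset && end <= r.endOffset) hits[line] = r.count;
    }
  }
  return hits;
}
```

For a function-level range with count 1 and a nested dead branch with count 0, the branch's lines end up at 0 while the rest of the function stays at 1 — which is exactly the per-line shape `upsertCoverageRows` expects.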
Docs cleanup (no functional impact)

- Deleted `docs/research/fallow.md` and `docs/research/competitive-scan-2026-04.md` — adoption-from-fallow framing is no longer the codemap mission.
- Purged fallow references from `lsp-diagnostic-push.md`, `why-codemap.md`, `non-goals-reassessment-2026-05.md`, the `audit-pr-architecture` skill, the `docs-governance` rule + skill, and `lessons.md`.
- Added the "No remote-repo cloning" Floor to `roadmap.md` (recovered from the deleted competitive-scan; rejected in PR #23, "feat(cli): codemap validate / context / --performance, four new query recipes, friendlier no-DB error", as demoware that doesn't fit the SQL-index thesis).
- Clarified the "No opinionated rule engine" Floor: suppressions ship as opt-in substrate, not as policy primitives.
- Scoped the "No runtime tracing" Floor: V8 protocol post-mortem ingestion is the static-side adjacent capability (live tracing / production beacons stay out of scope).

Lockstep updates (Rule 10)
Per `docs/README.md` Rule 10, the bundled agent rule + skill in `templates/agents/` and the dev-side mirror in `.agents/` are updated in lockstep — drift between the two pairs is CLI-prefix-only (`codemap` vs `bun src/index.ts`):

- `.agents/rules/codemap.md` + `templates/agents/rules/codemap.md` — outcome alias / suppressions / `--runtime` rows added to the dispatch table
- `.agents/skills/codemap/SKILL.md` + `templates/agents/skills/codemap/SKILL.md` — outcome alias preamble + suppression substrate notes + V8 mention on the `coverage` table
- `docs/glossary.md` — new entries for outcome aliases (O section) and suppressions (S section); updated `codemap ingest-coverage` (I section) and `coverage` (C section) entries to mention V8
- `docs/architecture.md` — schema version 10, new `suppressions` row, V8 added to the coverage row + the `coverage-engine.ts` parser list

Test plan
- `bun run check` — format / lint / typecheck / build / golden-query / 858 unit tests pass (was 847 at branch base — +11 net new tests)
- `bun src/index.ts dead-code --json` returns rows; `--help` prints alias-specific help
- `bun src/index.ts query --recipe unused-type-members --json` returns 50 rows on this repo
- `// codemap-ignore-next-line untested-and-dead` directives populate the `suppressions` table; recipes JOIN-honor them
- `mkdtempSync` + crafted `V8ScriptCoverage` payloads exercise line-mapping accuracy, duplicate-URL merge, and the non-`file://` skip
- Schema 9→10 migration exercised (`dropAll` + `createSchema()` flow); `dropAll` updated to include `suppressions`
- Comments follow `.agents/rules/concise-comments.md`

Summary by CodeRabbit
New Features
- Outcome aliases (`dead-code`, `deprecated`, `boundaries`, `hotspots`, `coverage-gaps`) as shorthand CLI commands for common queries.
- `--runtime` flag for `ingest-coverage`.
- Suppression directives (`codemap-ignore-next-line` / `codemap-ignore-file`) to exclude specific findings from recipes.
- `unused-type-members` recipe to identify unreferenced type members.

Documentation