
chore(contrib): few tweaks to make grida runnable easier - scripts, mise, some fixes to first run success#574

Open
zdunecki wants to merge 1 commit into gridaco:main from zdunecki:chore/contrib

Conversation


@zdunecki zdunecki commented Mar 12, 2026

Hi,

Grida looks awesome! I tried to pull it and run it locally, but ran into some trouble: a few things were missing or required manual edits. The changes below make it faster to get Grida running (and have fun with it) right after cloning.

Best,

Summary by CodeRabbit

Release Notes

  • Documentation

    • Updated development setup instructions with new initialization workflow
  • New Features

    • Added automated development environment initialization
    • Added development services orchestration
    • Introduced tool version management configuration
  • Bug Fixes

    • Improved authentication redirect handling for better security
  • Chores

    • Enhanced database schema for improved message queue support


vercel bot commented Mar 12, 2026

@zdunecki is attempting to deploy a commit to the Grida Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Mar 12, 2026

Walkthrough

This PR updates project setup and development tooling by adding a mise.toml tool version configuration, creating shell scripts for initializing and running local development services (including Supabase integration with signing key generation), updating README documentation to reference the new setup process, modifying the sign-in route to safely resolve redirect URIs, and extending Supabase pgmq schema permissions.

Changes

Cohort / File(s) Summary
Setup Documentation
README.md, mise.toml, package.json
Updated quickstart to use mise install and added dev:init and dev:services npm scripts; introduced mise.toml with tool version specifications (node 22, pnpm 10, rust 1.92.0, supabase 2, deno 2, python 3.12).
Development Environment Scripts
scripts/dev/init.sh, scripts/dev/services.sh
Added initialization scripts that validate prerequisites (Node.js v22+, pnpm v10+, optional Docker/Supabase CLI), generate environment files from examples, orchestrate Supabase startup with EC P-256 signing key generation, and inject configuration into .env.local.
Authentication Logic
editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts
Changed redirect path construction to resolve redirect_uri against requestUrl.origin before issuing 302 redirect, ensuring consistent origin handling.
Database Schema
supabase/migrations/20250503123355_pgmq.sql
Added pgmq_public schema and extended service_role grants on pgmq schema and pgmq.meta table (SELECT, UPDATE, DELETE, INSERT, REFERENCES, TRIGGER, TRUNCATE permissions).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested labels

org

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title accurately summarizes the main changes: adding setup scripts, mise configuration, and fixes to improve the initial developer experience for running Grida locally.
  • Docstring coverage — ✅ Passed: no functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
README.md (1)

193-200: ⚠️ Potential issue | 🟡 Minor

Call out the build prerequisite before pnpm typecheck.

pnpm typecheck is still not first-command-safe on a fresh clone; the docs need the shared-package build step before it.

Suggested change
 pnpm dev:editor
 pnpm dev:packages
+pnpm turbo build --filter="./packages/*"
 pnpm typecheck
 pnpm lint
 pnpm test

Based on learnings: run `pnpm install` and `pnpm turbo build --filter="./packages/*"` after cloning the repo, before executing `pnpm typecheck`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 193 - 200, Update the "Common tasks" section so the
README calls out the required build/install steps before running `pnpm
typecheck`: add a prerequisite note above the `pnpm typecheck` entry (near the
"Common tasks" heading) instructing users to run `pnpm install` and `pnpm turbo
build --filter="./packages/*"` on a fresh clone; ensure the note is concise and
clearly placed immediately before the `pnpm typecheck` line so readers see the
prerequisite first.
editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts (1)

30-39: ⚠️ Potential issue | 🟠 Major

redirect_uri and next can redirect to arbitrary external URLs.

Lines 30-39 fail to validate post-login redirect targets. The new URL(redirect_uri, requestUrl.origin) constructor preserves absolute URLs unchanged, and resolve_next() explicitly returns absolute URLs without modification (see editor/host/url.ts:333). Both parameters sourced from untrusted form data can thus bypass same-origin checks and redirect authenticated users to arbitrary sites.

Validate both targets against requestUrl.origin before redirecting, or use an allowlist of permitted redirect paths.
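To see why resolving against `requestUrl.origin` is not by itself a safeguard: the WHATWG `URL` constructor only uses the base argument when the input is relative; an absolute input keeps its own origin. A quick Node.js sketch (illustrative only, not part of the PR; the hostnames are made up):

```javascript
// Relative input: the base supplies the origin, as the route intends.
const relative = new URL("/dashboard", "https://app.example.com");
console.log(relative.href); // "https://app.example.com/dashboard"

// Absolute input: the base argument is ignored entirely, so an
// attacker-supplied redirect_uri keeps its own origin.
const absolute = new URL("https://evil.example.com/phish", "https://app.example.com");
console.log(absolute.href); // "https://evil.example.com/phish"

// An explicit same-origin comparison catches this case before redirecting.
console.log(absolute.origin === "https://app.example.com"); // false
```

This is why the suggested fix below compares `redirectUrl.origin` against `requestUrl.origin` rather than trusting the constructor's resolution.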

Suggested fix
-  if (redirect_uri) {
-    return NextResponse.redirect(
-      new URL(redirect_uri, requestUrl.origin),
-      { status: 302 }
-    );
-  }
-
-  if (next) {
-    return NextResponse.redirect(resolve_next(requestUrl.origin, next), {
-      status: 302,
-    });
-  }
+  const redirectTarget = redirect_uri ?? next;
+  if (redirectTarget) {
+    let redirectUrl: URL;
+    try {
+      redirectUrl = new URL(redirectTarget, requestUrl.origin);
+    } catch {
+      return NextResponse.redirect(requestUrl.origin + "/insiders/welcome", {
+        status: 302,
+      });
+    }
+
+    if (redirectUrl.origin !== requestUrl.origin) {
+      return NextResponse.redirect(requestUrl.origin + "/insiders/welcome", {
+        status: 302,
+      });
+    }
+
+    return NextResponse.redirect(redirectUrl, { status: 302 });
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts` around lines 30 -
39, The redirect handling currently returns NextResponse.redirect for untrusted
form values redirect_uri and next (used with new URL(redirect_uri,
requestUrl.origin) and resolve_next(requestUrl.origin, next)), which allows
absolute external redirects; fix by validating those targets before redirecting:
ensure new URL(redirect_uri, requestUrl.origin).origin === requestUrl.origin (or
check that resolve_next(...) yields a same-origin path) or match them against a
server-side allowlist of permitted paths, and only call NextResponse.redirect
when the check passes (otherwise fallback to a safe internal path); update the
logic around redirect_uri, next, new URL(...), and resolve_next(...)
accordingly.
🧹 Nitpick comments (2)
supabase/migrations/20250503123355_pgmq.sql (1)

8-9: Add pgTAP coverage for the new permission boundary.

This migration grants service_role access to pgmq schemas and tables, but there is no accompanying pgTAP test asserting that service_role has the required schema/table access and that non-service roles (anon, authenticated) are properly denied. Add a positive test for the intended role and negative tests for anon and authenticated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20250503123355_pgmq.sql` around lines 8 - 9, Add a pgTAP
test that asserts the new permission boundary: use
has_schema_privilege('service_role', 'pgmq', 'USAGE') and
has_schema_privilege('service_role', 'pgmq_public', 'USAGE') (wrap with pgTAP
ok/assert functions) and also assert that has_schema_privilege('anon', ...) and
has_schema_privilege('authenticated', ...) return false (use pgTAP isnt/ok with
NOT) ; additionally pick a representative table in each schema and assert
has_table_privilege('service_role', '<schema>.<table>', 'SELECT' or 'USAGE' as
appropriate) is true and the same check for 'anon' and 'authenticated' is false;
name the test to run with the migration test suite so these positive and
negative assertions fail the CI if privileges change.
package.json (1)

9-10: Using bash wrapper in package scripts improves portability.

Both scripts are correctly committed with executable bits (100755), so the current code works. However, wrapping with bash ./scripts/... removes the dependency on file modes and makes the scripts more resilient across different clone environments and archive/copy scenarios—a defensive best practice.

Suggested change
-    "dev:init": " ./scripts/dev/init.sh",
-    "dev:services": "./scripts/dev/services.sh",
+    "dev:init": "bash ./scripts/dev/init.sh",
+    "dev:services": "bash ./scripts/dev/services.sh",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 9 - 10, The npm scripts "dev:init" and
"dev:services" rely on executable file permissions; change their command strings
in package.json to explicitly invoke the shell (e.g., prepend "bash " before
"./scripts/dev/init.sh" and "./scripts/dev/services.sh") so the scripts run
regardless of file mode; update the values for the dev:init and dev:services
entries accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 183-189: Update the quickstart requirements to list Docker (and
Docker Desktop/docker engine) as a prerequisite and add a note that the Docker
daemon must be running before running the "pnpm dev:services" command;
specifically mention the dev services script that fails when Docker is absent
and suggest either adding a one-line check/instruction to start Docker or a link
to Docker installation docs in the README.

In `@scripts/dev/init.sh`:
- Around line 30-44: Update the node and pnpm checks in init.sh to enforce the
minimum versions: capture NODE_V from `node --version` (strip leading "v") and
PNPM_V from `pnpm --version`, parse their major (and optionally minor/patch)
components and compare them against the declared minima (node >= 22.0.0, pnpm >=
10.0.0); if the version check fails for either `NODE_V` or `PNPM_V` emit the
existing red error message (augmenting it to show the detected version) and exit
1, otherwise print the green success line; use the existing variables NODE_V and
PNPM_V and the same echo messages to keep output consistent.

In `@scripts/dev/services.sh`:
- Around line 59-72: The script currently only regenerates signing_keys.json
when the file is missing or equals "[]", so zero-byte files left by interrupted
writes are ignored; update the existence check around SIGNING_KEYS in the if
condition (the test that currently uses [ ! -f "$SIGNING_KEYS" ] || [ "$(cat
"$SIGNING_KEYS")" = "[]" ]) to detect empty files as missing by using the -s
test (e.g., replace [ ! -f "$SIGNING_KEYS" ] with [ ! -s "$SIGNING_KEYS" ])
while keeping the existing check for the "[]" content so the node key-generation
block will run when the file is missing, empty, or contains only [].
- Around line 81-125: The script continues to the final "Services running!"
message even when "supabase start" fails or SUPABASE_OUTPUT is empty; update the
flow so failures stop the script: capture the exit code of the supabase start
command and if it fails print a clear error and exit non-zero, and after
obtaining SUPABASE_OUTPUT validate it is non-empty before injecting keys — if
SUPABASE_OUTPUT is empty print an error/helpful message and exit non-zero
instead of proceeding; only print the final success message when supabase start
succeeded and at least one environment value was updated (i.e., SUPABASE_OUTPUT
non-empty or UPDATED==1). Reference symbols: supabase start, SUPABASE_OUTPUT,
UPDATED, ENV_FILE.

In `@supabase/migrations/20250503123355_pgmq.sql`:
- Around line 4-9: Add explicit ACL hardening for pgmq_public functions by
revoking public execute privileges and granting only service_role EXECUTE on the
pgmq_public.read and pgmq_public.archive functions (follow the pattern used in
other migrations like rpc_delete_project_with_timeout: REVOKE ALL ON FUNCTION
... FROM public; GRANT EXECUTE ON FUNCTION ... TO service_role). Update the
migration that creates schema "pgmq_public" / extension "pgmq" to include these
REVOKE/GRANT statements for the read and archive functions, and add pgTAP tests
that assert public cannot execute them while service_role can (tests should call
pgmq_public.read and pgmq_public.archive and expect permission denied for public
and success for service_role).

---

Outside diff comments:
In `@editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts`:
- Around line 30-39: The redirect handling currently returns
NextResponse.redirect for untrusted form values redirect_uri and next (used with
new URL(redirect_uri, requestUrl.origin) and resolve_next(requestUrl.origin,
next)), which allows absolute external redirects; fix by validating those
targets before redirecting: ensure new URL(redirect_uri,
requestUrl.origin).origin === requestUrl.origin (or check that resolve_next(...)
yields a same-origin path) or match them against a server-side allowlist of
permitted paths, and only call NextResponse.redirect when the check passes
(otherwise fallback to a safe internal path); update the logic around
redirect_uri, next, new URL(...), and resolve_next(...) accordingly.

In `@README.md`:
- Around line 193-200: Update the "Common tasks" section so the README calls out
the required build/install steps before running `pnpm typecheck`: add a
prerequisite note above the `pnpm typecheck` entry (near the "Common tasks"
heading) instructing users to run `pnpm install` and `pnpm turbo build
--filter="./packages/*"` on a fresh clone; ensure the note is concise and
clearly placed immediately before the `pnpm typecheck` line so readers see the
prerequisite first.

---

Nitpick comments:
In `@package.json`:
- Around line 9-10: The npm scripts "dev:init" and "dev:services" rely on
executable file permissions; change their command strings in package.json to
explicitly invoke the shell (e.g., prepend "bash " before
"./scripts/dev/init.sh" and "./scripts/dev/services.sh") so the scripts run
regardless of file mode; update the values for the dev:init and dev:services
entries accordingly.

In `@supabase/migrations/20250503123355_pgmq.sql`:
- Around line 8-9: Add a pgTAP test that asserts the new permission boundary:
use has_schema_privilege('service_role', 'pgmq', 'USAGE') and
has_schema_privilege('service_role', 'pgmq_public', 'USAGE') (wrap with pgTAP
ok/assert functions) and also assert that has_schema_privilege('anon', ...) and
has_schema_privilege('authenticated', ...) return false (use pgTAP isnt/ok with
NOT) ; additionally pick a representative table in each schema and assert
has_table_privilege('service_role', '<schema>.<table>', 'SELECT' or 'USAGE' as
appropriate) is true and the same check for 'anon' and 'authenticated' is false;
name the test to run with the migration test suite so these positive and
negative assertions fail the CI if privileges change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b9293f95-262b-44d8-841e-cd11d6f59b03

📥 Commits

Reviewing files that changed from the base of the PR and between fc965ba and c04ba89.

📒 Files selected for processing (7)
  • README.md
  • editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts
  • mise.toml
  • package.json
  • scripts/dev/init.sh
  • scripts/dev/services.sh
  • supabase/migrations/20250503123355_pgmq.sql

Comment on lines +183 to 189
**Requirements:** mise, Node.js **22+**, pnpm **10+**

```bash
pnpm install
mise install
pnpm dev:init
pnpm dev:services
pnpm dev
```

⚠️ Potential issue | 🟠 Major

List Docker as a quickstart prerequisite.

pnpm dev:services hard-exits when Docker is missing or not running (scripts/dev/services.sh:31-39), so the current quickstart still fails on a clean machine that only followed the documented requirements.

Suggested change
-**Requirements:** mise, Node.js **22+**, pnpm **10+**
+**Requirements:** mise, Docker / Docker Desktop, Node.js **22+**, pnpm **10+**
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
**Requirements:** mise, Docker / Docker Desktop, Node.js **22+**, pnpm **10+**

```bash
pnpm install
mise install
pnpm dev:init
pnpm dev:services
pnpm dev
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 183 - 189, Update the quickstart requirements to list
Docker (and Docker Desktop/docker engine) as a prerequisite and add a note that
the Docker daemon must be running before running the "pnpm dev:services"
command; specifically mention the dev services script that fails when Docker is
absent and suggest either adding a one-line check/instruction to start Docker or
a link to Docker installation docs in the README.

Comment on lines +30 to +44
if command -v node &>/dev/null; then
  NODE_V=$(node --version)
  echo -e "  ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
else
  echo -e "  ${RED}x${NC} node not found - install Node.js 22+"
  exit 1
fi

if command -v pnpm &>/dev/null; then
  PNPM_V=$(pnpm --version)
  echo -e "  ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
else
  echo -e "  ${RED}x${NC} pnpm not found - install pnpm 10+"
  exit 1
fi

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n '"node": ">=22\.0\.0"|"pnpm": ">=10\.0\.0"|command -v node|command -v pnpm|NODE_V=|PNPM_V=' package.json scripts/dev/init.sh

Repository: gridaco/grida

Length of output: 355


Enforce the declared Node.js and pnpm minimums in this script.

Lines 30–44 only check whether node and pnpm are installed, not whether they meet the minimum versions declared in package.json (node >=22.0.0 and pnpm >=10.0.0). A user with Node 20 or pnpm 9 gets a passing check and fails later during build steps.

Suggested change
 if command -v node &>/dev/null; then
   NODE_V=$(node --version)
+  NODE_MAJOR="${NODE_V#v}"
+  NODE_MAJOR="${NODE_MAJOR%%.*}"
+  if [ "$NODE_MAJOR" -lt 22 ]; then
+    echo -e "  ${RED}x${NC} node ${DIM}${NODE_V}${NC} detected - install Node.js 22+"
+    exit 1
+  fi
   echo -e "  ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
 else
   echo -e "  ${RED}x${NC} node not found - install Node.js 22+"
   exit 1
 fi
 
 if command -v pnpm &>/dev/null; then
   PNPM_V=$(pnpm --version)
+  PNPM_MAJOR="${PNPM_V%%.*}"
+  if [ "$PNPM_MAJOR" -lt 10 ]; then
+    echo -e "  ${RED}x${NC} pnpm ${DIM}v${PNPM_V}${NC} detected - install pnpm 10+"
+    exit 1
+  fi
   echo -e "  ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
 else
   echo -e "  ${RED}x${NC} pnpm not found - install pnpm 10+"
   exit 1
 fi
📝 Committable suggestion


Suggested change
if command -v node &>/dev/null; then
NODE_V=$(node --version)
echo -e " ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
else
echo -e " ${RED}x${NC} node not found - install Node.js 22+"
exit 1
fi
if command -v pnpm &>/dev/null; then
PNPM_V=$(pnpm --version)
echo -e " ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
else
echo -e " ${RED}x${NC} pnpm not found - install pnpm 10+"
exit 1
fi
if command -v node &>/dev/null; then
NODE_V=$(node --version)
NODE_MAJOR="${NODE_V#v}"
NODE_MAJOR="${NODE_MAJOR%%.*}"
if [ "$NODE_MAJOR" -lt 22 ]; then
echo -e " ${RED}x${NC} node ${DIM}${NODE_V}${NC} detected - install Node.js 22+"
exit 1
fi
echo -e " ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
else
echo -e " ${RED}x${NC} node not found - install Node.js 22+"
exit 1
fi
if command -v pnpm &>/dev/null; then
PNPM_V=$(pnpm --version)
PNPM_MAJOR="${PNPM_V%%.*}"
if [ "$PNPM_MAJOR" -lt 10 ]; then
echo -e " ${RED}x${NC} pnpm ${DIM}v${PNPM_V}${NC} detected - install pnpm 10+"
exit 1
fi
echo -e " ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
else
echo -e " ${RED}x${NC} pnpm not found - install pnpm 10+"
exit 1
fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/dev/init.sh` around lines 30 - 44, Update the node and pnpm checks in
init.sh to enforce the minimum versions: capture NODE_V from `node --version`
(strip leading "v") and PNPM_V from `pnpm --version`, parse their major (and
optionally minor/patch) components and compare them against the declared minima
(node >= 22.0.0, pnpm >= 10.0.0); if the version check fails for either `NODE_V`
or `PNPM_V` emit the existing red error message (augmenting it to show the
detected version) and exit 1, otherwise print the green success line; use the
existing variables NODE_V and PNPM_V and the same echo messages to keep output
consistent.

Comment on lines +59 to +72
SIGNING_KEYS="$REPO_ROOT/supabase/signing_keys.json"
if [ ! -f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]; then
  echo -e "${BLUE}${BOLD}[2/4]${NC} Generating Supabase signing keys..."
  node -e "
const crypto = require('crypto');
const { privateKey } = crypto.generateKeyPairSync('ec', { namedCurve: 'P-256' });
const jwk = privateKey.export({ format: 'jwk' });
jwk.kid = crypto.randomUUID();
jwk.use = 'sig';
jwk.key_ops = ['sign', 'verify'];
jwk.alg = 'ES256';
jwk.ext = true;
console.log(JSON.stringify([jwk], null, 2));
" > "$SIGNING_KEYS"

⚠️ Potential issue | 🟡 Minor

Treat zero-byte signing_keys.json as missing.

Line 60 only regenerates when the file is absent or exactly []. If the redirected node -e write is interrupted, the next run leaves an empty file in place and never self-heals.

Suggested change
-if [ ! -f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]; then
+if [ ! -s "$SIGNING_KEYS" ] || [ "$(tr -d '[:space:]' < "$SIGNING_KEYS")" = "[]" ]; then
📝 Committable suggestion


Suggested change
SIGNING_KEYS="$REPO_ROOT/supabase/signing_keys.json"
if [ ! -s "$SIGNING_KEYS" ] || [ "$(tr -d '[:space:]' < "$SIGNING_KEYS")" = "[]" ]; then
  echo -e "${BLUE}${BOLD}[2/4]${NC} Generating Supabase signing keys..."
  node -e "
const crypto = require('crypto');
const { privateKey } = crypto.generateKeyPairSync('ec', { namedCurve: 'P-256' });
const jwk = privateKey.export({ format: 'jwk' });
jwk.kid = crypto.randomUUID();
jwk.use = 'sig';
jwk.key_ops = ['sign', 'verify'];
jwk.alg = 'ES256';
jwk.ext = true;
console.log(JSON.stringify([jwk], null, 2));
" > "$SIGNING_KEYS"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/dev/services.sh` around lines 59 - 72, The script currently only
regenerates signing_keys.json when the file is missing or equals "[]", so
zero-byte files left by interrupted writes are ignored; update the existence
check around SIGNING_KEYS in the if condition (the test that currently uses [ !
-f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]) to detect empty
files as missing by using the -s test (e.g., replace [ ! -f "$SIGNING_KEYS" ]
with [ ! -s "$SIGNING_KEYS" ]) while keeping the existing check for the "[]"
content so the node key-generation block will run when the file is missing,
empty, or contains only [].

Comment on lines +81 to +125
echo -e "${BLUE}${BOLD}[3/4]${NC} Starting Supabase..."
echo ""
cd "$REPO_ROOT"
supabase start || true
echo ""

# 4. Inject local Supabase keys into .env.local
echo -e "${BLUE}${BOLD}[4/4]${NC} Configuring environment..."

SUPABASE_OUTPUT=$(supabase status -o env 2>/dev/null || true)

API_URL=$(echo "$SUPABASE_OUTPUT" | grep '^API_URL=' | cut -d= -f2- | tr -d '"' || true)
PUBLISHABLE_KEY=$(echo "$SUPABASE_OUTPUT" | grep '^PUBLISHABLE_KEY=' | cut -d= -f2- | tr -d '"' || true)
SECRET_KEY=$(echo "$SUPABASE_OUTPUT" | grep '^SECRET_KEY=' | cut -d= -f2- | tr -d '"' || true)

UPDATED=0

if [ -n "$API_URL" ]; then
  sed -i.bak "s|^SUPABASE_URL=.*|SUPABASE_URL=\"$API_URL\"|" "$ENV_FILE"
  sed -i.bak "s|^NEXT_PUBLIC_SUPABASE_URL=.*|NEXT_PUBLIC_SUPABASE_URL=\"$API_URL\"|" "$ENV_FILE"
  echo -e "  ${GREEN}+${NC} SUPABASE_URL ${DIM}${API_URL}${NC}"
  UPDATED=1
fi

if [ -n "$PUBLISHABLE_KEY" ]; then
  sed -i.bak "s|^NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=.*|NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=\"$PUBLISHABLE_KEY\"|" "$ENV_FILE"
  echo -e "  ${GREEN}+${NC} NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY ${DIM}${PUBLISHABLE_KEY:0:20}...${NC}"
  UPDATED=1
fi

if [ -n "$SECRET_KEY" ]; then
  sed -i.bak "s|^SUPABASE_SECRET_KEY=.*|SUPABASE_SECRET_KEY=\"$SECRET_KEY\"|" "$ENV_FILE"
  echo -e "  ${GREEN}+${NC} SUPABASE_SECRET_KEY ${DIM}${SECRET_KEY:0:20}...${NC}"
  UPDATED=1
fi

rm -f "$ENV_FILE.bak"

if [ "$UPDATED" -eq 0 ]; then
  echo -e "  ${YELLOW}~${NC} Could not read Supabase keys - update .env.local manually"
fi

echo ""
echo -e "${GREEN}${BOLD}Services running!${NC}"
echo ""

⚠️ Potential issue | 🟠 Major

Do not print a success path after a failed Supabase start.

Lines 84 and 119-125 let the script continue all the way to Services running! even when supabase start failed or supabase status -o env returned nothing. That makes first-run failures much harder to diagnose.

Suggested change
-supabase start || true
+if ! supabase start && ! supabase status -o env >/dev/null 2>&1; then
+  echo -e "  ${RED}x${NC} Supabase failed to start"
+  exit 1
+fi
@@
 if [ "$UPDATED" -eq 0 ]; then
   echo -e "  ${YELLOW}~${NC} Could not read Supabase keys - update .env.local manually"
+  exit 1
 fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/dev/services.sh` around lines 81 - 125, The script continues to the
final "Services running!" message even when "supabase start" fails or
SUPABASE_OUTPUT is empty; update the flow so failures stop the script: capture
the exit code of the supabase start command and if it fails print a clear error
and exit non-zero, and after obtaining SUPABASE_OUTPUT validate it is non-empty
before injecting keys — if SUPABASE_OUTPUT is empty print an error/helpful
message and exit non-zero instead of proceeding; only print the final success
message when supabase start succeeded and at least one environment value was
updated (i.e., SUPABASE_OUTPUT non-empty or UPDATED==1). Reference symbols:
supabase start, SUPABASE_OUTPUT, UPDATED, ENV_FILE.

Comment on lines +4 to +9
create schema if not exists "pgmq_public";

create extension if not exists "pgmq" with schema "pgmq" version '1.4.4';

grant usage on schema "pgmq" to "service_role";
grant usage on schema "pgmq_public" to "service_role";

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching pgmq privilege statements..."
rg -n -C2 --type=sql 'pgmq_public|grant\s+execute|revoke\s+all|grant\s+usage' supabase/migrations supabase/tests || true

echo
echo "Searching for application call sites that use pgmq_public..."
rg -n -C2 --type=ts 'schema:\s*"pgmq_public"|rpc\("read"|rpc\("archive"' jobs supabase || true

Repository: gridaco/grida

Length of output: 2727


🏁 Script executed:

cat -n supabase/migrations/20250503123355_pgmq.sql

Repository: gridaco/grida

Length of output: 949


🏁 Script executed:

# Search for all pgmq-related migrations to see if a later one hardens the functions
rg -l 'pgmq' supabase/migrations/*.sql | sort

Repository: gridaco/grida

Length of output: 286


🏁 Script executed:

# Check if there are any other pgmq_public-specific modifications after the initial migration
rg -n 'pgmq_public' supabase/migrations/ --type=sql

Repository: gridaco/grida

Length of output: 246


🏁 Script executed:

# Search for pgmq function calls or documentation references
rg -n 'pgmq.*\(read|archive\)' jobs supabase --type=ts --type=sql -A2 -B2

Repository: gridaco/grida

Length of output: 39


🏁 Script executed:

# Check if there's any pgmq documentation or README that explains the schema setup
find . -name "*.md" -o -name "*.txt" | xargs grep -l pgmq 2>/dev/null || echo "No markdown/text references found"

Repository: gridaco/grida

Length of output: 73


🏁 Script executed:

# Look for comments or documentation in migrations that mention pgmq defaults
rg -n 'pgmq' supabase/migrations/20250503123355_pgmq.sql --type=sql -B5

Repository: gridaco/grida

Length of output: 868


🏁 Script executed:

cat -n jobs/README.md | head -50

Repository: gridaco/grida

Length of output: 1353


🏁 Script executed:

# Check if pgmq functions are RPC functions (stored procedures) in PostgreSQL
# Search for pgmq function definitions or any CREATE FUNCTION statements in migrations
rg -n 'create.*function.*pgmq|pgmq.*function' supabase/migrations --type=sql

Repository: gridaco/grida

Length of output: 39


🏁 Script executed:

# Search for any function definitions or wrappers in pgmq_public in migrations
rg -n 'pgmq_public\.' supabase/migrations --type=sql

Repository: gridaco/grida

Length of output: 39


🏁 Script executed:

# Check if pgmq extension documentation is available or if functions are created elsewhere
rg -n 'read.*function|archive.*function' supabase/migrations --type=sql -i



🏁 Script executed:

# Check test files for pgmq or queue-related RLS/permission tests
rg -n 'pgmq|queue' supabase/tests --type=sql



🏁 Script executed:

# Look for any test files that might cover permissions for pgmq
fd '\.sql$' supabase/tests -x grep -l 'pgmq\|queue' {} \;



🏁 Script executed:

# Check if there's any pgmq documentation in code comments or if RPC is documented
rg -n 'pgmq' jobs/src --type=ts -B3 -A3



🏁 Script executed:

# See if supabase tests directory exists and what structure it has
ls -la supabase/tests/ 2>/dev/null || echo "Tests directory structure check"



🏁 Script executed:

# Look for all migration files after pgmq to see if any other queue-related changes were made
ls -1 supabase/migrations/ | grep -E '202505|202506' | head -20



Lock down pgmq_public function access with explicit ACLs.

The migration grants schema USAGE to service_role but does not explicitly control EXECUTE permissions on the read and archive functions that the application calls via RPC. This relies on the extension's default privileges instead of an explicit least-privilege contract. The codebase pattern (e.g., 20260201133814_rpc_delete_project_with_timeout.sql) establishes the standard: REVOKE ALL ... FROM public followed by GRANT EXECUTE ... TO service_role. Add the same hardening here, and include pgTAP tests to verify that only service_role can execute these functions.

Proposed hardening
 grant usage on schema "pgmq" to "service_role";
 grant usage on schema "pgmq_public" to "service_role";
+revoke all on all functions in schema "pgmq_public" from public;
+grant execute on all functions in schema "pgmq_public" to "service_role";
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

create schema if not exists "pgmq_public";
create extension if not exists "pgmq" with schema "pgmq" version '1.4.4';
grant usage on schema "pgmq" to "service_role";
grant usage on schema "pgmq_public" to "service_role";

After:

create schema if not exists "pgmq_public";
create extension if not exists "pgmq" with schema "pgmq" version '1.4.4';
grant usage on schema "pgmq" to "service_role";
grant usage on schema "pgmq_public" to "service_role";
revoke all on all functions in schema "pgmq_public" from public;
grant execute on all functions in schema "pgmq_public" to "service_role";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@supabase/migrations/20250503123355_pgmq.sql` around lines 4-9, add explicit
ACL hardening for pgmq_public functions by revoking public execute privileges
and granting only service_role EXECUTE on the pgmq_public.read and
pgmq_public.archive functions (follow the pattern used in other migrations like
rpc_delete_project_with_timeout: REVOKE ALL ON FUNCTION ... FROM public; GRANT
EXECUTE ON FUNCTION ... TO service_role). Update the migration that creates
schema "pgmq_public" / extension "pgmq" to include these REVOKE/GRANT statements
for the read and archive functions, and add pgTAP tests that assert public
cannot execute them while service_role can (tests should call pgmq_public.read
and pgmq_public.archive and expect permission denied for public and success for
service_role).
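A pgTAP test for these ACLs could look like the sketch below. The file path and the `read(text, integer, integer)` / `archive(text, bigint)` signatures are assumptions here and should be checked against the installed pgmq extension before use; `has_function_privilege` is a standard PostgreSQL function for exactly this kind of assertion.

```sql
-- Sketch of a pgTAP permission test (assumed path: supabase/tests/pgmq_acl.test.sql).
-- Function signatures are assumptions; match them to the actual pgmq_public wrappers.
begin;
select plan(4);

-- public-facing roles must not be able to call the queue functions
select ok(
  not has_function_privilege('anon', 'pgmq_public.read(text, integer, integer)', 'execute'),
  'anon cannot execute pgmq_public.read'
);
select ok(
  not has_function_privilege('anon', 'pgmq_public.archive(text, bigint)', 'execute'),
  'anon cannot execute pgmq_public.archive'
);

-- service_role keeps access for the jobs worker
select ok(
  has_function_privilege('service_role', 'pgmq_public.read(text, integer, integer)', 'execute'),
  'service_role can execute pgmq_public.read'
);
select ok(
  has_function_privilege('service_role', 'pgmq_public.archive(text, bigint)', 'execute'),
  'service_role can execute pgmq_public.archive'
);

select * from finish();
rollback;
```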

@softmarshmallow
Copy link
Member

@zdunecki Hi, thank you for opening the PR. I appreciate the changes, but they are too opinionated (mise.toml, setup scripts).
If this PR is scoped down to fixing a clear issue, or to improving documentation, I can start the review.

Thanks !

@softmarshmallow
Copy link
Member

@zdunecki would you reconsider using just a `justfile`, which would follow the existing convention?

I also found a good example of how to wire things up for Windows as well, with no need for mise.

reference: (sets powershell)
https://github.com/oxc-project/oxc/blob/main/justfile
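A minimal sketch of that approach, following the linked oxc convention; the recipe names and commands below are hypothetical placeholders, not Grida's actual setup:

```just
# Hypothetical justfile sketch following the oxc example above.
# Recipe names and commands are placeholders, not Grida's real scripts.

# Run recipes through PowerShell on Windows so they work cross-platform
set windows-shell := ["powershell.exe", "-NoLogo", "-Command"]

# One-time setup after clone
init:
    pnpm install

# Start local services and the dev server
dev:
    supabase start
    pnpm dev
```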

@zdunecki
Copy link
Author

@softmarshmallow

Yeah, I didn't find `just` in the Grida codebase before - it's more reasonable.
I'm very busy right now and can't promise to get that done quickly.

I need to test this on multiple environments to confirm whether this PR improves Grida installation.

Best,

@softmarshmallow
Copy link
Member

@zdunecki sure. no problem at all. take your time ; )
