chore(contrib): few tweaks to make grida runnable easier - scripts, mise, some fixes to first run success #574
zdunecki wants to merge 1 commit into gridaco:main
Conversation
@zdunecki is attempting to deploy a commit to the Grida Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

This PR updates project setup and development tooling.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (1)
193-200: ⚠️ Potential issue | 🟡 Minor

Call out the build prerequisite before `pnpm typecheck`.

`pnpm typecheck` is still not first-command-safe on a fresh clone; the docs need the shared-package build step before it.

Suggested change

```diff
 pnpm dev:editor
 pnpm dev:packages
+pnpm turbo build --filter="./packages/*"
 pnpm typecheck
 pnpm lint
 pnpm test
```

Based on learnings: run `pnpm install` and `pnpm turbo build --filter="./packages/*"` after cloning the repo before executing `pnpm typecheck`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 193 - 200: Update the "Common tasks" section so the README calls out the required build/install steps before running `pnpm typecheck`: add a prerequisite note above the `pnpm typecheck` entry (near the "Common tasks" heading) instructing users to run `pnpm install` and `pnpm turbo build --filter="./packages/*"` on a fresh clone; ensure the note is concise and clearly placed immediately before the `pnpm typecheck` line so readers see the prerequisite first.

editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts (1)
30-39: ⚠️ Potential issue | 🟠 Major

`redirect_uri` and `next` can redirect to arbitrary external URLs.

Lines 30-39 fail to validate post-login redirect targets. The `new URL(redirect_uri, requestUrl.origin)` constructor preserves absolute URLs unchanged, and `resolve_next()` explicitly returns absolute URLs without modification (see editor/host/url.ts:333). Both parameters are sourced from untrusted form data and can thus bypass same-origin checks and redirect authenticated users to arbitrary sites.

Validate both targets against `requestUrl.origin` before redirecting, or use an allowlist of permitted redirect paths.

Suggested fix
```diff
-  if (redirect_uri) {
-    return NextResponse.redirect(
-      new URL(redirect_uri, requestUrl.origin),
-      { status: 302 }
-    );
-  }
-
-  if (next) {
-    return NextResponse.redirect(resolve_next(requestUrl.origin, next), {
-      status: 302,
-    });
-  }
+  const redirectTarget = redirect_uri ?? next;
+  if (redirectTarget) {
+    let redirectUrl: URL;
+    try {
+      redirectUrl = new URL(redirectTarget, requestUrl.origin);
+    } catch {
+      return NextResponse.redirect(requestUrl.origin + "/insiders/welcome", {
+        status: 302,
+      });
+    }
+
+    if (redirectUrl.origin !== requestUrl.origin) {
+      return NextResponse.redirect(requestUrl.origin + "/insiders/welcome", {
+        status: 302,
+      });
+    }
+
+    return NextResponse.redirect(redirectUrl, { status: 302 });
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts` around lines 30 - 39: The redirect handling currently returns NextResponse.redirect for untrusted form values redirect_uri and next (used with new URL(redirect_uri, requestUrl.origin) and resolve_next(requestUrl.origin, next)), which allows absolute external redirects; fix by validating those targets before redirecting: ensure new URL(redirect_uri, requestUrl.origin).origin === requestUrl.origin (or check that resolve_next(...) yields a same-origin path) or match them against a server-side allowlist of permitted paths, and only call NextResponse.redirect when the check passes (otherwise fall back to a safe internal path); update the logic around redirect_uri, next, new URL(...), and resolve_next(...) accordingly.
🧹 Nitpick comments (2)
supabase/migrations/20250503123355_pgmq.sql (1)
8-9: Add pgTAP coverage for the new permission boundary.

This migration grants `service_role` access to pgmq schemas and tables, but there is no accompanying pgTAP test asserting that `service_role` has the required schema/table access and that non-service roles (`anon`, `authenticated`) are properly denied. Add a positive test for the intended role and negative tests for `anon` and `authenticated`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@supabase/migrations/20250503123355_pgmq.sql` around lines 8 - 9: Add a pgTAP test that asserts the new permission boundary: use has_schema_privilege('service_role', 'pgmq', 'USAGE') and has_schema_privilege('service_role', 'pgmq_public', 'USAGE') (wrap with pgTAP ok/assert functions) and also assert that has_schema_privilege('anon', ...) and has_schema_privilege('authenticated', ...) return false (use pgTAP isnt/ok with NOT); additionally pick a representative table in each schema and assert has_table_privilege('service_role', '<schema>.<table>', 'SELECT' or 'USAGE' as appropriate) is true and the same check for 'anon' and 'authenticated' is false; name the test to run with the migration test suite so these positive and negative assertions fail the CI if privileges change.

package.json (1)
9-10: Using a `bash` wrapper in package scripts improves portability.

Both scripts are correctly committed with executable bits (100755), so the current code works. However, wrapping with `bash ./scripts/...` removes the dependency on file modes and makes the scripts more resilient across different clone environments and archive/copy scenarios, a defensive best practice.

Suggested change
```diff
-    "dev:init": " ./scripts/dev/init.sh",
-    "dev:services": "./scripts/dev/services.sh",
+    "dev:init": "bash ./scripts/dev/init.sh",
+    "dev:services": "bash ./scripts/dev/services.sh",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 9 - 10, The npm scripts "dev:init" and "dev:services" rely on executable file permissions; change their command strings in package.json to explicitly invoke the shell (e.g., prepend "bash " before "./scripts/dev/init.sh" and "./scripts/dev/services.sh") so the scripts run regardless of file mode; update the values for the dev:init and dev:services entries accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b9293f95-262b-44d8-841e-cd11d6f59b03
📒 Files selected for processing (7)
README.md, editor/app/(insiders)/insiders/auth/basic/sign-in/route.ts, mise.toml, package.json, scripts/dev/init.sh, scripts/dev/services.sh, supabase/migrations/20250503123355_pgmq.sql
**Requirements:** mise, Node.js **22+**, pnpm **10+**

```bash
pnpm install
mise install
pnpm dev:init
pnpm dev:services
pnpm dev
```
List Docker as a quickstart prerequisite.
pnpm dev:services hard-exits when Docker is missing or not running (scripts/dev/services.sh:31-39), so the current quickstart still fails on a clean machine that only followed the documented requirements.
Suggested change

```diff
-**Requirements:** mise, Node.js **22+**, pnpm **10+**
+**Requirements:** mise, Docker / Docker Desktop, Node.js **22+**, pnpm **10+**
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 183 - 189, Update the quickstart requirements to list
Docker (and Docker Desktop/docker engine) as a prerequisite and add a note that
the Docker daemon must be running before running the "pnpm dev:services"
command; specifically mention the dev services script that fails when Docker is
absent and suggest either adding a one-line check/instruction to start Docker or
a link to Docker installation docs in the README.
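A fail-fast Docker preflight of the kind this comment asks for could be sketched as follows; the helper name and messages below are illustrative, not taken from scripts/dev/services.sh:

```shell
#!/usr/bin/env bash
# Sketch: verify a CLI exists and its daemon answers before continuing.
# For Docker, the intended call would be: require_daemon docker info
require_daemon() {
  local cli="$1"; shift
  if ! command -v "$cli" >/dev/null 2>&1; then
    echo "error: $cli not found - install it first" >&2
    return 1
  fi
  if ! "$cli" "$@" >/dev/null 2>&1; then
    echo "error: $cli daemon is not running - start it first" >&2
    return 1
  fi
  echo "$cli ok"
}

# Demonstrate with a command that is always available.
require_daemon sh -c 'exit 0'
```

Calling `require_daemon docker info` at the top of the services script would turn a confusing mid-run failure into a one-line actionable error.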
```bash
if command -v node &>/dev/null; then
  NODE_V=$(node --version)
  echo -e " ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
else
  echo -e " ${RED}x${NC} node not found - install Node.js 22+"
  exit 1
fi

if command -v pnpm &>/dev/null; then
  PNPM_V=$(pnpm --version)
  echo -e " ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
else
  echo -e " ${RED}x${NC} pnpm not found - install pnpm 10+"
  exit 1
fi
```
Enforce the declared Node.js and pnpm minimums in this script.
Lines 30–44 only check whether node and pnpm are installed, not whether they meet the minimum versions declared in package.json (node >=22.0.0 and pnpm >=10.0.0). A user with Node 20 or pnpm 9 gets a passing check and fails later during build steps.
Suggested change

```diff
 if command -v node &>/dev/null; then
   NODE_V=$(node --version)
+  NODE_MAJOR="${NODE_V#v}"
+  NODE_MAJOR="${NODE_MAJOR%%.*}"
+  if [ "$NODE_MAJOR" -lt 22 ]; then
+    echo -e " ${RED}x${NC} node ${DIM}${NODE_V}${NC} detected - install Node.js 22+"
+    exit 1
+  fi
   echo -e " ${GREEN}+${NC} node ${DIM}${NODE_V}${NC}"
 else
   echo -e " ${RED}x${NC} node not found - install Node.js 22+"
   exit 1
 fi

 if command -v pnpm &>/dev/null; then
   PNPM_V=$(pnpm --version)
+  PNPM_MAJOR="${PNPM_V%%.*}"
+  if [ "$PNPM_MAJOR" -lt 10 ]; then
+    echo -e " ${RED}x${NC} pnpm ${DIM}v${PNPM_V}${NC} detected - install pnpm 10+"
+    exit 1
+  fi
   echo -e " ${GREEN}+${NC} pnpm ${DIM}v${PNPM_V}${NC}"
 else
   echo -e " ${RED}x${NC} pnpm not found - install pnpm 10+"
   exit 1
 fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dev/init.sh` around lines 30 - 44, Update the node and pnpm checks in
init.sh to enforce the minimum versions: capture NODE_V from `node --version`
(strip leading "v") and PNPM_V from `pnpm --version`, parse their major (and
optionally minor/patch) components and compare them against the declared minima
(node >= 22.0.0, pnpm >= 10.0.0); if the version check fails for either `NODE_V`
or `PNPM_V` emit the existing red error message (augmenting it to show the
detected version) and exit 1, otherwise print the green success line; use the
existing variables NODE_V and PNPM_V and the same echo messages to keep output
consistent.
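As an alternative to parsing only the major component, a full dotted-version comparison can be sketched with `sort -V`; this assumes a `sort` that supports version ordering, as GNU and modern BSD sorts do:

```shell
#!/usr/bin/env bash
# version_ge A B -> succeeds when dotted version A >= B.
version_ge() {
  # sort -V orders versions numerically per component; if the minimum (B)
  # sorts first or ties, then A is at least B.
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "22.14.0" "22.0.0" && echo "node example: ok"
version_ge "9.12.1" "10.0.0" || echo "pnpm example: too old"
```

Unlike a bare major-version check, this also handles minima such as `22.1.0` correctly, at the cost of depending on `sort -V` being available.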
```bash
SIGNING_KEYS="$REPO_ROOT/supabase/signing_keys.json"
if [ ! -f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]; then
  echo -e "${BLUE}${BOLD}[2/4]${NC} Generating Supabase signing keys..."
  node -e "
    const crypto = require('crypto');
    const { privateKey } = crypto.generateKeyPairSync('ec', { namedCurve: 'P-256' });
    const jwk = privateKey.export({ format: 'jwk' });
    jwk.kid = crypto.randomUUID();
    jwk.use = 'sig';
    jwk.key_ops = ['sign', 'verify'];
    jwk.alg = 'ES256';
    jwk.ext = true;
    console.log(JSON.stringify([jwk], null, 2));
  " > "$SIGNING_KEYS"
```
Treat zero-byte signing_keys.json as missing.
Line 60 only regenerates when the file is absent or exactly []. If the redirected node -e write is interrupted, the next run leaves an empty file in place and never self-heals.
Suggested change

```diff
-if [ ! -f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]; then
+if [ ! -s "$SIGNING_KEYS" ] || [ "$(tr -d '[:space:]' < "$SIGNING_KEYS")" = "[]" ]; then
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dev/services.sh` around lines 59 - 72, The script currently only
regenerates signing_keys.json when the file is missing or equals "[]", so
zero-byte files left by interrupted writes are ignored; update the existence
check around SIGNING_KEYS in the if condition (the test that currently uses [ !
-f "$SIGNING_KEYS" ] || [ "$(cat "$SIGNING_KEYS")" = "[]" ]) to detect empty
files as missing by using the -s test (e.g., replace [ ! -f "$SIGNING_KEYS" ]
with [ ! -s "$SIGNING_KEYS" ]) while keeping the existing check for the "[]"
content so the node key-generation block will run when the file is missing,
empty, or contains only [].
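Beyond switching to `-s`, interrupted writes can be avoided entirely by writing to a temp file and renaming it into place. A sketch under the assumption that the generator prints JSON to stdout; the path and stand-in payload here are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

SIGNING_KEYS="$(mktemp -d)/signing_keys.json"   # illustrative path

# Write to a sibling temp file first; mv within the same directory is an
# atomic rename, so a crash mid-write never leaves a truncated target file.
tmp="$(mktemp "${SIGNING_KEYS}.XXXXXX")"
printf '[{"kid":"example","alg":"ES256"}]\n' > "$tmp"  # stand-in for node -e
mv "$tmp" "$SIGNING_KEYS"

# -s is true only for an existing file with size > 0, so an empty
# leftover file would correctly trigger regeneration on the next run.
if [ -s "$SIGNING_KEYS" ]; then
  echo "keys present"
fi
```

With this pattern the `-s` check becomes a belt-and-suspenders guard rather than the only line of defense.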
```bash
echo -e "${BLUE}${BOLD}[3/4]${NC} Starting Supabase..."
echo ""
cd "$REPO_ROOT"
supabase start || true
echo ""

# 4. Inject local Supabase keys into .env.local
echo -e "${BLUE}${BOLD}[4/4]${NC} Configuring environment..."

SUPABASE_OUTPUT=$(supabase status -o env 2>/dev/null || true)

API_URL=$(echo "$SUPABASE_OUTPUT" | grep '^API_URL=' | cut -d= -f2- | tr -d '"' || true)
PUBLISHABLE_KEY=$(echo "$SUPABASE_OUTPUT" | grep '^PUBLISHABLE_KEY=' | cut -d= -f2- | tr -d '"' || true)
SECRET_KEY=$(echo "$SUPABASE_OUTPUT" | grep '^SECRET_KEY=' | cut -d= -f2- | tr -d '"' || true)

UPDATED=0

if [ -n "$API_URL" ]; then
  sed -i.bak "s|^SUPABASE_URL=.*|SUPABASE_URL=\"$API_URL\"|" "$ENV_FILE"
  sed -i.bak "s|^NEXT_PUBLIC_SUPABASE_URL=.*|NEXT_PUBLIC_SUPABASE_URL=\"$API_URL\"|" "$ENV_FILE"
  echo -e " ${GREEN}+${NC} SUPABASE_URL ${DIM}${API_URL}${NC}"
  UPDATED=1
fi

if [ -n "$PUBLISHABLE_KEY" ]; then
  sed -i.bak "s|^NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=.*|NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=\"$PUBLISHABLE_KEY\"|" "$ENV_FILE"
  echo -e " ${GREEN}+${NC} NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY ${DIM}${PUBLISHABLE_KEY:0:20}...${NC}"
  UPDATED=1
fi

if [ -n "$SECRET_KEY" ]; then
  sed -i.bak "s|^SUPABASE_SECRET_KEY=.*|SUPABASE_SECRET_KEY=\"$SECRET_KEY\"|" "$ENV_FILE"
  echo -e " ${GREEN}+${NC} SUPABASE_SECRET_KEY ${DIM}${SECRET_KEY:0:20}...${NC}"
  UPDATED=1
fi

rm -f "$ENV_FILE.bak"

if [ "$UPDATED" -eq 0 ]; then
  echo -e " ${YELLOW}~${NC} Could not read Supabase keys - update .env.local manually"
fi

echo ""
echo -e "${GREEN}${BOLD}Services running!${NC}"
echo ""
```
Do not print a success path after a failed Supabase start.
Lines 84 and 119-125 let the script continue all the way to Services running! even when supabase start failed or supabase status -o env returned nothing. That makes first-run failures much harder to diagnose.
Suggested change

```diff
-supabase start || true
+if ! supabase start && ! supabase status -o env >/dev/null 2>&1; then
+  echo -e " ${RED}x${NC} Supabase failed to start"
+  exit 1
+fi
@@
 if [ "$UPDATED" -eq 0 ]; then
   echo -e " ${YELLOW}~${NC} Could not read Supabase keys - update .env.local manually"
+  exit 1
 fi
```

🤖 Prompt for AI Agents
fi🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dev/services.sh` around lines 81 - 125, The script continues to the
final "Services running!" message even when "supabase start" fails or
SUPABASE_OUTPUT is empty; update the flow so failures stop the script: capture
the exit code of the supabase start command and if it fails print a clear error
and exit non-zero, and after obtaining SUPABASE_OUTPUT validate it is non-empty
before injecting keys — if SUPABASE_OUTPUT is empty print an error/helpful
message and exit non-zero instead of proceeding; only print the final success
message when supabase start succeeded and at least one environment value was
updated (i.e., SUPABASE_OUTPUT non-empty or UPDATED==1). Reference symbols:
supabase start, SUPABASE_OUTPUT, UPDATED, ENV_FILE.
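The grep/cut/tr extraction the script applies to `supabase status -o env` can be exercised in isolation; the sample output below is fabricated for illustration and only assumes simple `KEY="value"` lines:

```shell
#!/usr/bin/env bash
# Fabricated stand-in for `supabase status -o env` output.
SUPABASE_OUTPUT='API_URL="http://127.0.0.1:54321"
PUBLISHABLE_KEY="sb_publishable_example"
SECRET_KEY="sb_secret_example"'

# get_var NAME -> value with surrounding quotes stripped, empty if absent.
get_var() {
  printf '%s\n' "$SUPABASE_OUTPUT" | grep "^$1=" | cut -d= -f2- | tr -d '"'
}

API_URL="$(get_var API_URL)"
MISSING="$(get_var NOT_THERE || true)"

echo "API_URL=$API_URL"
[ -z "$MISSING" ] && echo "missing vars come back empty"
```

The `|| true` mirrors the script's tolerance for missing keys: `grep` exits non-zero on no match, so without it a strict shell would abort instead of leaving the variable empty, which is exactly the failure mode the review asks the script to detect and report.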
```sql
create schema if not exists "pgmq_public";

create extension if not exists "pgmq" with schema "pgmq" version '1.4.4';

grant usage on schema "pgmq" to "service_role";
grant usage on schema "pgmq_public" to "service_role";
```
Lock down pgmq_public function access with explicit ACLs.
The migration grants schema USAGE to service_role but does not explicitly control EXECUTE permissions on the read and archive functions that the application calls via RPC. This relies on the extension's default privileges instead of an explicit least-privilege contract. The codebase pattern (e.g., 20260201133814_rpc_delete_project_with_timeout.sql) establishes the standard: REVOKE ALL ... FROM public followed by GRANT EXECUTE ... TO service_role. Add the same hardening here, and include pgTAP tests to verify that only service_role can execute these functions.
Proposed hardening

```diff
 grant usage on schema "pgmq" to "service_role";
 grant usage on schema "pgmq_public" to "service_role";
+revoke all on all functions in schema "pgmq_public" from public;
+grant execute on all functions in schema "pgmq_public" to "service_role";
```
+grant execute on all functions in schema "pgmq_public" to "service_role";📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| create schema if not exists "pgmq_public"; | |
| create extension if not exists "pgmq" with schema "pgmq" version '1.4.4'; | |
| grant usage on schema "pgmq" to "service_role"; | |
| grant usage on schema "pgmq_public" to "service_role"; | |
| create schema if not exists "pgmq_public"; | |
| create extension if not exists "pgmq" with schema "pgmq" version '1.4.4'; | |
| grant usage on schema "pgmq" to "service_role"; | |
| grant usage on schema "pgmq_public" to "service_role"; | |
| revoke all on all functions in schema "pgmq_public" from public; | |
| grant execute on all functions in schema "pgmq_public" to "service_role"; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@supabase/migrations/20250503123355_pgmq.sql` around lines 4 - 9, Add explicit
ACL hardening for pgmq_public functions by revoking public execute privileges
and granting only service_role EXECUTE on the pgmq_public.read and
pgmq_public.archive functions (follow the pattern used in other migrations like
rpc_delete_project_with_timeout: REVOKE ALL ON FUNCTION ... FROM public; GRANT
EXECUTE ON FUNCTION ... TO service_role). Update the migration that creates
schema "pgmq_public" / extension "pgmq" to include these REVOKE/GRANT statements
for the read and archive functions, and add pgTAP tests that assert public
cannot execute them while service_role can (tests should call pgmq_public.read
and pgmq_public.archive and expect permission denied for public and success for
service_role).
@zdunecki Hi, thank you for opening the PR. I appreciate the changes, but they are too opinionated (mise.toml, setup scripts). Thanks!
@zdunecki would you re-consider using it? I also found a good example of how to wire things for Windows as well, no need for mise. Reference: (sets powershell)
Yeah, I didn't find that. I need to test this on multiple environments to confirm whether this PR improves Grida installation.

Best,
@zdunecki sure, no problem at all. Take your time ;)
Hi,
Grida looks awesome! I tried to pull it locally and ran into some trouble getting it running; some things were missing or needed manual edits. The changes below make it faster to have fun with Grida after cloning.
Best,
Summary by CodeRabbit
Release Notes
Documentation
New Features
Bug Fixes
Chores