
feat: add API response caching and parallelize SSR data fetching #3974

Open

ety001 wants to merge 1 commit into steemit:master from ety001:ssr-api-cache

Conversation

Member

@ety001 ety001 commented Apr 5, 2026

Summary

  • Add ApiCache module with TTL-based in-memory caching and inflight request coalescing for bridge API calls
  • Parallelize independent SSR API calls (getStateAsync, getFeedHistoryAsync, getDynamicGlobalPropertiesAsync) via Promise.all
  • Parallelize get_community, get_profile, get_trending_topics within getStateAsync
  • Observer parameter excluded from cache keys for community/ranked-posts methods to prevent key explosion
  • Cache init guarded with !process.env.BROWSER to prevent crashing client-side bundle
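The two core ideas in the summary, TTL-based caching plus inflight request coalescing, can be sketched as below. This is a hypothetical illustration of the mechanism, not the PR's actual `ApiCache` implementation; the function names `cachedCall` and `getStats` and the plain-`Map` storage are assumptions (the PR uses `node-cache` on the server):

```javascript
// Sketch of TTL caching with inflight coalescing: N concurrent callers
// asking for the same key trigger only one upstream API call.
const cache = new Map(); // key -> { value, expiresAt }
const inflight = new Map(); // key -> pending Promise
let hits = 0;
let misses = 0;

function cachedCall(key, ttlMs, fetcher) {
    const entry = cache.get(key);
    if (entry && entry.expiresAt > Date.now()) {
        hits += 1;
        return Promise.resolve(entry.value); // fresh cache hit
    }
    if (inflight.has(key)) {
        // Same request already running: reuse its promise (coalescing).
        return inflight.get(key);
    }
    misses += 1;
    const p = fetcher()
        .then(value => {
            cache.set(key, { value, expiresAt: Date.now() + ttlMs });
            return value;
        })
        .finally(() => inflight.delete(key));
    inflight.set(key, p);
    return p;
}

function getStats() {
    return { hits, misses, size: cache.size };
}
```

Coalescing matters for SSR because a traffic burst can land many renders of the same page between a cache expiry and the next fill; without it, each render would issue its own bridge call.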

Cache TTL Configuration

| API | TTL |
| --- | --- |
| get_trending_topics | 5 min |
| get_community | 10 min |
| get_profile | 5 min |
| get_ranked_posts / get_account_posts | 30 s |
| get_discussion | 60 s |
| feed_history / dynamic_global_properties | 60 s |
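The table above could be expressed as a single TTL map keyed by API method. This is a sketch; the constant name `CACHE_TTLS` and the millisecond encoding are assumptions, not the PR's actual config:

```javascript
// Hypothetical TTL configuration mirroring the table above.
// Values are in milliseconds.
const CACHE_TTLS = {
    get_trending_topics: 5 * 60 * 1000, // 5 min
    get_community: 10 * 60 * 1000, // 10 min
    get_profile: 5 * 60 * 1000, // 5 min
    get_ranked_posts: 30 * 1000, // 30 s
    get_account_posts: 30 * 1000, // 30 s
    get_discussion: 60 * 1000, // 60 s
    feed_history: 60 * 1000, // 60 s
    dynamic_global_properties: 60 * 1000, // 60 s
};
```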

Problem

SSR requests were making 3-5 sequential API calls with zero caching, causing average latency of 218-340ms (threshold: 150ms). Data like trending topics, feed prices, and global properties change infrequently but were re-fetched on every request.

Expected Improvement

SSR latency should drop from ~300ms to ~80-120ms: most API calls hit the cache after the first request, and the remaining independent calls run in parallel.
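The parallelization half of that improvement comes from running the three independent fetches concurrently, so total latency approaches the slowest single call rather than the sum. A minimal sketch, using the method names from the PR description but with a hypothetical wrapper function and stand-in bodies:

```javascript
// Sketch of the Promise.all change: the three SSR data fetches are
// independent, so they run concurrently instead of being awaited in
// sequence. Total time ~= max(latencies) instead of sum(latencies).
async function fetchSsrData(api) {
    const [state, feedHistory, globalProps] = await Promise.all([
        api.getStateAsync(),
        api.getFeedHistoryAsync(),
        api.getDynamicGlobalPropertiesAsync(),
    ]);
    return { state, feedHistory, globalProps };
}
```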

Test plan

  • Deploy to beta environment
  • Monitor condenser SSR latency via the Scalyr dashboard
  • Verify average latency drops below 150ms threshold
  • Confirm no regression in client-side rendering
  • Check cache hit rates via ApiCache.getStats()

Commit message:

SSR requests were making 3-5 sequential API calls (callBridge, getFeedHistoryAsync, getDynamicGlobalPropertiesAsync) with zero caching, causing average latency of 218-340ms (threshold: 150ms).

Changes:
- Add ApiCache with TTL-based caching and inflight request coalescing
- Parallelize independent API calls in apiFetchState via Promise.all
- Parallelize community/profile/trending_topics in getStateAsync
- Cache low-frequency-change data (trending topics 5min, community
  10min, feed price/global props 1min, posts 30-60s)
- Exclude observer from cache keys for observer-aware methods to
  prevent key explosion
- Guard cache init with !process.env.BROWSER to avoid crashing
  the client-side bundle (node-cache is server-only)

Expected improvement: SSR latency from ~300ms to ~80-120ms.
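The `!process.env.BROWSER` guard mentioned above can be sketched as follows. The guard condition and the server-only nature of `node-cache` come from the PR; the `initApiCache` function and its factory parameter are hypothetical:

```javascript
// Sketch of the server-only init guard: node-cache is a server-side
// dependency, so cache setup must only run when the bundle is not built
// for the browser (condenser sets process.env.BROWSER client-side).
let apiCache = null;

function initApiCache(createCache) {
    if (!process.env.BROWSER) {
        apiCache = createCache(); // e.g. would wrap node-cache on the server
    }
    return apiCache; // stays null in the client bundle
}
```

Without the guard, the client bundle would try to pull in a Node-only module and crash at load time.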
