feat: add API response caching and parallelize SSR data fetching #3974
Open

ety001 wants to merge 1 commit into steemit:master from
Conversation
SSR requests were making 3-5 sequential API calls (callBridge, getFeedHistoryAsync, getDynamicGlobalPropertiesAsync) with zero caching, causing average latency of 218-340ms (threshold: 150ms).

Changes:

- Add ApiCache with TTL-based caching and inflight request coalescing
- Parallelize independent API calls in apiFetchState via Promise.all
- Parallelize community/profile/trending_topics in getStateAsync
- Cache low-frequency-change data (trending topics 5 min, community 10 min, feed price/global props 1 min, posts 30-60 s)
- Exclude observer from cache keys for observer-aware methods to prevent key explosion
- Guard cache init with !process.env.BROWSER to avoid crashing the client-side bundle (node-cache is server-only)

Expected improvement: SSR latency from ~300ms to ~80-120ms.
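The PR's ApiCache itself is not shown in this excerpt. A minimal sketch of the two techniques the description names, TTL-based caching and inflight request coalescing, using a plain Map instead of node-cache so it runs standalone (class and method names here are illustrative, not the PR's exact API):

```javascript
// Minimal sketch of a TTL cache with inflight request coalescing.
// A real implementation would use node-cache (server-only) as the PR does;
// a Map keeps this example self-contained.
class ApiCache {
    constructor() {
        this.store = new Map();    // key -> { value, expiresAt }
        this.inflight = new Map(); // key -> pending Promise
    }

    // Return a fresh cached value, or coalesce concurrent callers
    // onto a single upstream fetch for the same key.
    async get(key, ttlMs, fetcher) {
        const hit = this.store.get(key);
        if (hit && hit.expiresAt > Date.now()) return hit.value;

        // Coalescing: if a fetch for this key is already in flight,
        // every concurrent caller awaits the same promise instead of
        // issuing its own API call.
        if (this.inflight.has(key)) return this.inflight.get(key);

        const p = fetcher()
            .then(value => {
                this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
                return value;
            })
            .finally(() => this.inflight.delete(key));
        this.inflight.set(key, p);
        return p;
    }
}
```

Coalescing matters for SSR because many concurrent page renders tend to request the same trending/community data at once; without it, a cache miss fans out into one upstream call per request.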
Summary
- ApiCache module with TTL-based in-memory caching and inflight request coalescing for bridge API calls
- Parallelize independent API calls (getStateAsync, getFeedHistoryAsync, getDynamicGlobalPropertiesAsync) via Promise.all
- Parallelize get_community, get_profile, get_trending_topics within getStateAsync
- Guard cache init with !process.env.BROWSER to prevent crashing the client-side bundle

Cache TTL Configuration

- Trending topics: 5 min
- Community: 10 min
- Feed price / global props: 1 min
- Posts: 30-60 s
Problem
SSR requests were making 3-5 sequential API calls with zero caching, causing average latency of 218-340ms (threshold: 150ms). Data like trending topics, feed prices, and global properties change infrequently but were re-fetched on every request.
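The change list above also says observer is excluded from cache keys for observer-aware methods to prevent key explosion. A sketch of what such key construction could look like (the method names and helper are hypothetical, for illustration only):

```javascript
// Hypothetical cache-key builder. For observer-aware bridge methods the
// observer param is dropped from the key, so one cached entry serves every
// viewer instead of one entry per logged-in account (key explosion).
// The set membership below is illustrative, not the PR's actual list.
const OBSERVER_AWARE = new Set(['get_ranked_posts', 'get_discussion']);

function cacheKey(method, params) {
    let keyParams = params;
    if (OBSERVER_AWARE.has(method)) {
        const { observer, ...rest } = params; // strip observer from the key
        keyParams = rest;
    }
    // Sort keys so equivalent param objects yield identical key strings.
    const sorted = Object.keys(keyParams)
        .sort()
        .map(k => `${k}=${JSON.stringify(keyParams[k])}`)
        .join('&');
    return `${method}:${sorted}`;
}
```

The trade-off is that a response cached for one observer may be served to another, which is acceptable here only because the cached data is short-lived and largely observer-independent.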
Expected Improvement
SSR latency drops from ~300ms to ~80-120ms (most API calls hit the cache after the first request; independent calls run in parallel).
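The parallelization half of the win comes from Promise.all: sequential awaits sum the call latencies, while parallel calls take only the slowest one. A sketch with stand-in fetchers (the delays are illustrative, not measured):

```javascript
// Sequential vs. parallel fetching. The two functions below are stand-ins
// for the real bridge/database API calls; 50ms is an arbitrary delay.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

const getFeedHistoryAsync = async () => { await delay(50); return { price: '0.30' }; };
const getDynamicGlobalPropertiesAsync = async () => { await delay(50); return { head_block: 1 }; };

// Sequential: total latency is the SUM of the calls (~100ms here).
async function fetchSequential() {
    const feed = await getFeedHistoryAsync();
    const props = await getDynamicGlobalPropertiesAsync();
    return { feed, props };
}

// Parallel: both calls start immediately; latency is the MAX (~50ms here).
async function fetchParallel() {
    const [feed, props] = await Promise.all([
        getFeedHistoryAsync(),
        getDynamicGlobalPropertiesAsync(),
    ]);
    return { feed, props };
}
```

This is safe only because the two calls are independent; calls whose input depends on an earlier result must stay sequential.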
Test plan
ApiCache.getStats()