draft Self clocked tickperupdate #3084
base: main
Conversation
…rn processing
- Removed the heartbeat functionality from the ClientGameRunner and WorkerClient.
- Introduced a new processPendingTurns function in Worker.worker.ts to handle turn execution more efficiently.
- Added a hasPendingTurns method in GameRunner to check for unprocessed turns.
- Updated WorkerMessages to remove the heartbeat message type.
…based backpressure
Main thread no longer processes worker game_update inline; it queues updates and time-slices them on RAF (ClientGameRunner.ts:267). This prevents blocking the main thread during heavy update processing. Key changes (a sketch of the drain loop follows this list):
- Drain budgets are bounded (ms + count) to maintain frame timing
- Every applied update still calls renderer.tick() (critical for UI layers that assume one tick per game tick; coalescing would break timers)
- turnComplete(processedCount) is now "credits" (acked in batches), not per-turn lockstep
- LocalServer is wall-clock paced for ×0.5/×1/×2, and backlog-capped for max (LocalServer.ts:78)
- maxOutstandingTurns is the key knob (5 for fixed-rate, 200 for max right now)
- Transport forwards the credit count (Transport.ts:395)
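A minimal sketch of that RAF drain loop, assuming hypothetical names (pendingUpdates, applyUpdate, sendTurnCredits); the budget numbers mirror the ones quoted in the review below, but the actual ClientGameRunner.ts code differs:

// Hypothetical sketch; applyUpdate and sendTurnCredits are illustrative
// stand-ins, not the actual identifiers in ClientGameRunner.ts.
declare function applyUpdate(update: unknown): void; // applies one game_update and calls renderer.tick()
declare function sendTurnCredits(count: number): void; // batched turnComplete(processedCount)

const pendingUpdates: unknown[] = [];

function drainOnFrame(): void {
  const start = performance.now();
  const backlog = pendingUpdates.length;
  // Budgets are bounded in both time and count; bigger backlogs get a
  // bigger budget so catch-up after a stall stays responsive.
  const maxDrainMs = backlog > 200 ? 12 : 8;
  const maxUpdates = backlog > 200 ? 1000 : 200;

  let processed = 0;
  while (
    pendingUpdates.length > 0 &&
    processed < maxUpdates &&
    performance.now() - start < maxDrainMs
  ) {
    applyUpdate(pendingUpdates.shift()!);
    processed++;
  }
  if (processed > 0) {
    sendTurnCredits(processed); // ack in batches, not per-turn lockstep
  }
  if (pendingUpdates.length > 0) {
    requestAnimationFrame(drainOnFrame); // time-slice the rest onto later frames
  }
}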
- GameRunner.executeNextTick(): return false on error paths, never undefined (see the sketch after this list)
- Worker loop drains via while (executeNextTick()) instead of a separate hasPendingTurns() check
- Remove stale LocalServer ReplaySpeedMultiplier/MAX_REPLAY_BACKLOG_TURNS imports/constants, undoing part of Spectate catchup openfrontio#3012
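A minimal sketch of that boolean contract, assuming a simplified GameRunner shape and a hypothetical executeTurn helper; the real GameRunner.ts is more involved:

type Turn = unknown; // placeholder for the real turn type

class GameRunner {
  private turns: Turn[] = [];

  // Returns true when a turn was executed; returns false both when nothing
  // is pending and on error paths (never undefined), so the worker can
  // drain safely with `while (gr.executeNextTick()) {}`.
  executeNextTick(): boolean {
    const turn = this.turns.shift();
    if (turn === undefined) {
      return false;
    }
    try {
      this.executeTurn(turn); // hypothetical helper
      return true;
    } catch (err) {
      console.error("turn execution failed", err);
      return false; // error path ends the drain loop instead of leaking undefined
    }
  }

  private executeTurn(turn: Turn): void {
    // ... tick logic elided ...
  }
}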
- Removed the pendingTurns and hasPendingTurns methods from GameRunner
- Updated Worker.worker.ts to simplify the check for pending turns
Walkthrough
The changes refactor the game worker update processing from a heartbeat-driven model to a batched, on-demand system. Updates queue in ClientGameRunner and drain via requestAnimationFrame in configurable batches. Turn pacing shifts from backlog-based to explicit scheduling with nextTurnAtMs. The turnComplete API now accepts an optional count for batch acknowledgment.
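A minimal sketch of explicit scheduling with nextTurnAtMs, assuming hypothetical names (turnIntervalMs, sendTurn, pump) rather than the actual LocalServer.ts code:

declare function sendTurn(): void; // emits one turn to the client

const turnIntervalMs = 100; // illustrative: 10 turns/sec at ×1 speed
let nextTurnAtMs = performance.now();

function pump(): void {
  const now = performance.now();
  // Emit every turn whose wall-clock deadline has passed.
  while (now >= nextTurnAtMs) {
    sendTurn();
    // Advance the deadline itself rather than using "now + interval",
    // so pacing stays wall-clock accurate instead of drifting with timer jitter.
    nextTurnAtMs += turnIntervalMs;
  }
  setTimeout(pump, Math.max(0, nextTurnAtMs - now));
}
pump();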
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/core/worker/Worker.worker.ts`:
- Around line 36-57: The current processPendingTurns function blocks the worker
by running while (gr.executeNextTick()) until all turns complete; update
processPendingTurns (and the isProcessingTurns guard) to process turns in
bounded batches and yield between batches (e.g., run up to N iterations of
gr.executeNextTick() per invocation and await a short tick or next-microtask
before continuing) so other messages (player_actions, player_profile) can be
processed; refer to processPendingTurns, isProcessingTurns, gameRunner and the
gr.executeNextTick() call and implement a configurable batch limit and an async
yield (setImmediate/timeout or Promise.resolve) between batches.
🧹 Nitpick comments (1)
src/client/ClientGameRunner.ts (1)
533-535: Variable startTime shadows imported function.
The local const startTime shadows the startTime import from ./LocalPersistantStats (line 39). While this works, it can cause confusion during code maintenance. Consider renaming to drainStartTime or batchStartTime.
Rename suggestion
- const startTime = performance.now();
+ const drainStartTime = performance.now();
  const maxDrainMs = backlog > 200 ? 12 : 8;
  const maxUpdates = backlog > 200 ? 1000 : 200;
And update line 544:
- performance.now() - startTime < maxDrainMs
+ performance.now() - drainStartTime < maxDrainMs
async function processPendingTurns() {
  if (isProcessingTurns) {
    return;
  }
  if (!gameRunner) {
    return;
  }

  const gr = await gameRunner;
  if (!gr) {
    return;
  }

  isProcessingTurns = true;
  try {
    while (gr.executeNextTick()) {
      // Keep running until no pending turns.
    }
  } finally {
    isProcessingTurns = false;
  }
}
Synchronous loop may block the worker for extended periods.
The while (gr.executeNextTick()) loop runs until all pending turns are processed without yielding. If a large backlog exists (e.g., after reconnect or replay fast-forward), this could block the worker thread and delay handling of other messages like player_actions or player_profile.
Consider yielding periodically or batching with a limit per invocation.
Proposed batched approach
async function processPendingTurns() {
  if (isProcessingTurns) {
    return;
  }
  if (!gameRunner) {
    return;
  }
  const gr = await gameRunner;
  if (!gr) {
    return;
  }
  isProcessingTurns = true;
  try {
-   while (gr.executeNextTick()) {
-     // Keep running until no pending turns.
-   }
+   const maxTicksPerBatch = 100;
+   let processed = 0;
+   while (processed < maxTicksPerBatch && gr.executeNextTick()) {
+     processed++;
+   }
+   // Reschedule if more work remains
+   if (processed === maxTicksPerBatch) {
+     setTimeout(processPendingTurns, 0);
+   }
  } finally {
    isProcessingTurns = false;
  }
}
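One note on the suggested yield: in a worker, setTimeout(processPendingTurns, 0) returns to the macrotask queue, so queued postMessage events (player_actions, player_profile) are handled between batches; a bare microtask like Promise.resolve() would not give them that chance. If nested-timeout clamping ever matters, a MessageChannel-based yield is a common alternative; the helper below is hypothetical and not part of this PR:

// Hypothetical helper, not part of this PR: resolves on the next macrotask
// via a MessageChannel, sidestepping setTimeout's nested-timer clamping
// while still letting pending worker messages run between batches.
function yieldToEventLoop(): Promise<void> {
  return new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    port1.onmessage = () => resolve();
    port2.postMessage(null);
  });
}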