Fix LLM callback isolation without serializing requests#4252
Open
VedantMadane wants to merge 3 commits into crewAIInc:main from
Conversation
Author
Not covered in this PR description:
If you prefer, I can add a follow-up commit that documents these options or adds a concurrency-focused test.
Force-pushed from 31fdc55 to 35483b6
Cursor Bugbot has reviewed your changes and found 1 potential issue.
# Conflicts:
#	lib/crewai/src/crewai/llm.py
Removed set_callbacks: it mutated LiteLLM's global callback lists, which is exactly the pattern this PR removes. Its only call sites were deleted and no other callers exist, so removing it avoids accidental re-introduction of the global-mutation pattern. Made-with: Cursor
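For context, the removed helper followed the pattern sketched below. This is a hypothetical reconstruction, not the actual crewAI code: `success_callback` stands in for litellm's module-level callback lists, and `call_with_request_params` illustrates the replacement pattern of passing callbacks on the per-request params.

```python
# Hypothetical stand-in for litellm's module-level callback lists
# (the real library exposes globals such as litellm.success_callback).
success_callback: list = []


def set_callbacks_globally(callbacks):
    # The removed pattern: every request rewrites shared module state,
    # so two concurrent requests race on the same global list.
    success_callback.clear()
    success_callback.extend(callbacks)


def call_with_request_params(prompt, callbacks):
    # The replacement pattern: callbacks travel with the request,
    # so concurrent requests never touch shared mutable state.
    for cb in callbacks:
        cb(prompt)
    return f"response to {prompt!r}"
```

With the second pattern there is no shared list for a concurrent request to clobber, so no lock is needed around the call at all.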
877d021 to
003f5a3
Compare
This is a follow-up to #4218 (auto-closed by bot) addressing the same race in LLM callback handling without holding a global lock across the network call.
What changed
Makes test_llm_callback_replacement deterministic by mocking litellm.completion (removes sleep/heisenbug).
Why
The approach in #4218 used a class-level lock held across the entire LLM request, which can serialize all concurrent agent calls. This PR keeps concurrency while still ensuring callback isolation.
Fixes #4214.
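To illustrate the serialization concern, here is a minimal sketch (with a simulated network delay, not the actual #4218 code) showing why a lock held across the whole request makes N concurrent calls take roughly N times the latency, while the lock-free per-request approach lets them overlap:

```python
import threading
import time

LOCK = threading.Lock()


def call_locked(delay=0.05):
    # Lock held across the simulated network call: requests serialize,
    # so total wall time grows linearly with the number of callers.
    with LOCK:
        time.sleep(delay)


def call_unlocked(delay=0.05):
    # No shared state to guard: concurrent requests overlap freely.
    time.sleep(delay)


def timed(fn, n=4):
    # Run n concurrent callers and return total wall-clock time.
    threads = [threading.Thread(target=fn) for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start
```

With four threads and a 50 ms simulated call, the locked variant takes around 200 ms while the unlocked one stays near 50 ms, which is the throughput difference this PR preserves.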
Note
High Risk
Touches core LLM.call/LLM.acall request plumbing and callback behavior, which can regress token tracking and integrations under concurrency. The async error-handling block appears to contain duplicated/stray code that could break acall at runtime.
Overview
Fixes callback race conditions across concurrent LiteLLM calls by stopping CrewAI from mutating LiteLLM's global callback lists and instead passing callbacks on the per-request params for both sync and async code paths. Removes the LLM.set_callbacks global-deduplication helper and updates tests: makes test_llm_callback_replacement deterministic by mocking litellm.completion, and adds a new threaded concurrency test to assert callback and token-usage isolation between simultaneous requests.
Written by Cursor Bugbot for commit 877d021. This will update automatically on new commits.
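The threaded concurrency test described above can be sketched roughly as follows. The names here are hypothetical: `fake_completion` stands in for a mocked `litellm.completion` that only invokes the callbacks passed on that request's params, and each worker asserts that it never observes another request's callback firing.

```python
import threading


def fake_completion(messages=None, callbacks=None, **kwargs):
    # Deterministic stand-in for a mocked litellm.completion: it invokes
    # only the callbacks attached to this request's params, never any
    # global callback list.
    for cb in (callbacks or []):
        cb(kwargs.get("tag"))
    return {"usage": {"total_tokens": 7}}


def run_concurrent_requests(iterations=50):
    # Two workers issue requests concurrently, each with its own callback.
    # If callbacks leaked between requests, a worker's list would contain
    # the other worker's tag.
    seen = {"a": [], "b": []}

    def worker(tag):
        for _ in range(iterations):
            fake_completion(callbacks=[seen[tag].append], tag=tag)

    threads = [threading.Thread(target=worker, args=(t,)) for t in seen]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen
```

Because no global state is mutated, the assertion that each worker saw only its own tag holds regardless of thread interleaving, which is what makes the test deterministic without any sleeps.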