fix: handle OpenAI model responses with tool calls and no other assistant content #1562
Description
The model `openai.gpt-oss-120b` can sometimes respond with only `toolUse` blocks and no text content. When this happens, the OpenAI formatter creates an assistant message with an empty content array:

```
{'role': 'assistant', 'content': [], 'tool_calls': [...]}
```

The next request to Bedrock's Mantle OpenAI-compliant endpoint then fails with validation errors.
This change conditionally includes the `content` key in the assistant message only when `formatted_contents` is non-empty.
I also updated `LiteLLMModel`, since it shares the same logic.
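A minimal sketch of the fix (function and variable names are illustrative, not the actual strands-agents source): build the assistant message dict and only attach the `content` key when there is content to send, so requests with tool calls alone never carry an empty content array.

```python
def format_assistant_message(formatted_contents, tool_calls):
    """Hypothetical formatter: omit 'content' when there is nothing to put in it."""
    message = {"role": "assistant"}
    if formatted_contents:  # only include 'content' when non-empty
        message["content"] = formatted_contents
    if tool_calls:
        message["tool_calls"] = tool_calls
    return message

# Tool-calls-only response: no empty 'content' key is emitted
msg = format_assistant_message([], [{"id": "call_1", "type": "function"}])
# msg == {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function"}]}
```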
Type of Change
Bug fix
Testing
How have you tested the change? Verified that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli
`hatch run prepare`

Checklist
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.