fix(core): emit message before function_calls in toResponsesInput #12084
Open
YgorLeal wants to merge 1 commit into continuedev:main
Conversation
…ntinuedev#11994) Reorder item emission in the assistant case of toResponsesInput so the text message is pushed before function_call items. This keeps a preceding reasoning item adjacent to its message output, satisfying the OpenAI Responses API sequencing requirement that prevents 400 'Missing reasoning item' errors with reasoning models like o1/o3.
Contributor: All contributors have signed the CLA ✍️ ✅
Author: I have read the CLA Document and I hereby sign the CLA
Summary

Fixes #11994: resolves the 400 Bad Request 'Missing reasoning item' errors that occur when using reasoning models (o1, o3) through the OpenAI Responses API.

Problem

In toResponsesInput, when an assistant message contains both text and tool calls, function_call items were emitted before the text message item. This separates a preceding reasoning item from its associated message, violating the Responses API's requirement that a reasoning item be immediately followed by its message output.
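For illustration, the snippet below sketches the two orderings with simplified item shapes; the field values are placeholders and the exact item types used in core may differ.

```typescript
// Schematic Responses API input items (values are placeholders).
// Before the fix: the function_call sits between the reasoning item and
// the assistant message, which triggers the "Missing reasoning item" 400.
const before = [
  { type: "reasoning", id: "rs_123" },
  { type: "function_call", call_id: "call_1", name: "search", arguments: "{}" },
  { type: "message", role: "assistant", content: "Let me look that up." },
];

// After the fix: the message directly follows its reasoning item,
// and the function_call items come afterwards.
const after = [
  { type: "reasoning", id: "rs_123" },
  { type: "message", role: "assistant", content: "Let me look that up." },
  { type: "function_call", call_id: "call_1", name: "search", arguments: "{}" },
];
```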
Solution

Swapped the emission order in the case "assistant" block so the text message is pushed before the function_call items. This keeps the reasoning item adjacent to its message output.
Test plan

Summary by cubic

Resolves #11994 by fixing Responses API sequencing in toResponsesInput: emit the assistant text message before function_call items. This keeps a preceding reasoning item adjacent to its message and prevents 400 "Missing reasoning item" errors with o1/o3.

Written for commit f272d7e. Summary will update on new commits.