fix(core): emit message before function_calls in toResponsesInput #12084

Open
YgorLeal wants to merge 1 commit into continuedev:main from
YgorLeal:fix/responses-api-reasoning-item-order
Conversation


@YgorLeal YgorLeal commented Apr 9, 2026

Summary

Fixes #11994 by resolving the 400 Bad Request: 'Missing reasoning item' errors that occur when using reasoning models (o1, o3) through the OpenAI Responses API.

Problem

In toResponsesInput, when an assistant message contained both text and tool calls, the function_call items were emitted before the text message item. This separated a preceding reasoning item from its associated message, violating the Responses API's sequencing requirement that a reasoning item be immediately followed by its message output:

// Before (broken): reasoning → function_call → function_call → message
// After (fixed):   reasoning → message → function_call → function_call

Solution

Swapped the emission order in the case "assistant" block so the text message is pushed before the function_call items. This keeps the reasoning item adjacent to its message output.
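A minimal sketch of the reordering described above. The item shapes and the emitAssistantItems helper are simplified, hypothetical stand-ins for the Responses API input types and for the actual logic inside continuedev's toResponsesInput; they only illustrate the corrected emission order.

```typescript
// Simplified stand-ins for Responses API input items (not the real types).
type ResponsesItem =
  | { type: "message"; role: "assistant"; content: string }
  | { type: "function_call"; call_id: string; name: string; arguments: string };

interface AssistantMessage {
  content: string;
  toolCalls: { id: string; name: string; arguments: string }[];
}

// Hypothetical helper mirroring the assistant case of toResponsesInput.
function emitAssistantItems(msg: AssistantMessage): ResponsesItem[] {
  const items: ResponsesItem[] = [];

  // Fixed order: push the text message first, so a preceding reasoning
  // item stays adjacent to its message output...
  if (msg.content) {
    items.push({ type: "message", role: "assistant", content: msg.content });
  }

  // ...then emit the function_call items afterwards.
  for (const tc of msg.toolCalls) {
    items.push({
      type: "function_call",
      call_id: tc.id,
      name: tc.name,
      arguments: tc.arguments,
    });
  }

  return items;
}
```

With this ordering, a conversation turn serializes as reasoning → message → function_call(s), which is the sequence the Responses API accepts.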

Test plan

  • Verify reasoning models (o1/o3) no longer produce 400 errors during multi-step tool-calling turns
  • Verify non-reasoning models still work correctly with tool calls
  • Existing test suite passes (lint-staged/prettier verified on commit)

Summary by cubic

Resolves #11994 by fixing Responses API sequencing in toResponsesInput: emit the assistant text message before function_call items. This keeps a preceding reasoning item adjacent to its message and prevents 400 "Missing reasoning item" errors with o1/o3.

Written for commit f272d7e. Summary will update on new commits.

…ntinuedev#11994)

Reorder item emission in the assistant case of toResponsesInput so the
text message is pushed before function_call items. This keeps a preceding
reasoning item adjacent to its message output, satisfying the OpenAI
Responses API sequencing requirement that prevents 400 'Missing reasoning
item' errors with reasoning models like o1/o3.
@YgorLeal YgorLeal requested a review from a team as a code owner April 9, 2026 01:12
@YgorLeal YgorLeal requested review from sestinj and removed request for a team April 9, 2026 01:12
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Apr 9, 2026

github-actions bot commented Apr 9, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.


@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file


YgorLeal commented Apr 9, 2026

I have read the CLA Document and I hereby sign the CLA


Labels

size:XS This PR changes 0-9 lines, ignoring generated files.

Projects

Status: Todo

Development

Successfully merging this pull request may close these issues.

[Bug] OpenAI Responses API 400 Bad Request: 'Missing reasoning item' with Reasoning Models

1 participant