Python: Fix group chat broadcast to include next speaker when it re-speaks#4736
yashy797 wants to merge 4 commits into microsoft:main
Conversation
@yashy797 please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Pull request overview
This PR fixes a group chat orchestration edge case where the broadcast step incorrectly skipped the next speaker when the same participant was selected to speak twice in a row, which could leave that participant’s executor cache missing its own latest response.
Changes:
- Adjust participant broadcast filtering in both `GroupChatOrchestrator._handle_response` and `AgentBasedGroupChatOrchestrator._handle_response` to always include `next_speaker`, even if it is the participant that just responded.
- Update the ChatKit integration sample to generate upload/preview URLs via the frontend dev-server proxy and migrate attachment upload metadata to `upload_descriptor`.
- Update several chat client samples to use the newer `Message`/`Content` inputs for `get_response`.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| python/packages/orchestrations/agent_framework_orchestrations/_group_chat.py | Fixes broadcast recipient filtering to include the next speaker when it re-speaks. |
| python/samples/05-end-to-end/chatkit-integration/app.py | Adjusts URL generation for attachments + updates function-result handling; introduces an unused import. |
| python/samples/05-end-to-end/chatkit-integration/attachment_store.py | Migrates attachment metadata from upload_url to upload_descriptor. |
| python/samples/05-end-to-end/chatkit-integration/frontend/vite.config.ts | Adds Vite dev-server proxies for /upload and /preview. |
| python/samples/02-agents/chat_client/custom_chat_client.py | Updates sample to call get_response with Message/Content. |
| python/samples/02-agents/chat_client/chat_response_cancellation.py | Updates sample to call get_response with Message/Content (but the new call likely violates the line-length lint). |
| python/samples/02-agents/chat_client/built_in_chat_clients.py | Updates sample prompt handling to use Message/Content. |
```python
# Use the frontend origin for generated URLs (upload, preview) so that the browser
# sends them through the Vite dev-server proxy instead of directly to the backend,
# avoiding cross-origin issues.
FRONTEND_PORT = 5171
SERVER_BASE_URL = f"http://localhost:{FRONTEND_PORT}"
```
```diff
 try:
-    task = asyncio.create_task(client.get_response(messages=["Tell me a fantasy story."]))
+    task = asyncio.create_task(client.get_response([Message(role="user", contents=[Content.from_text("Tell me a fantasy story.")])]))
```
```diff
 next_speaker = await self._get_next_speaker()

 # Broadcast participant messages to all participants for context, except
-# the participant that just responded
+# the participant that just responded (unless it is also the next speaker,
+# in which case it must receive the broadcast so its executor cache is
+# repopulated before the follow-up request arrives).
 participant = ctx.get_source_executor_id()
 await self._broadcast_messages_to_participants(
     messages,
     cast(WorkflowContext[AgentExecutorRequest | GroupChatParticipantMessage], ctx),
-    participants=[p for p in self._participant_registry.participants if p != participant],
+    participants=[
+        p for p in self._participant_registry.participants if p != participant or p == next_speaker
+    ],
```
```diff
-#from agent_framework import Agent, AgentResponseUpdate, FunctionResultContent, Message, Role, tool
+from agent_framework import Agent, AgentResponseUpdate, Content, Message, Role, tool
```
```diff
 # Server configuration
 SERVER_HOST = "127.0.0.1"  # Bind to localhost only for security (local dev)
 SERVER_PORT = 8001
-SERVER_BASE_URL = f"http://localhost:{SERVER_PORT}"
+# Use the frontend origin for generated URLs (upload, preview) so that the browser
+# sends them through the Vite dev-server proxy instead of directly to the backend,
+# avoiding cross-origin issues.
+FRONTEND_PORT = 5171
+SERVER_BASE_URL = f"http://localhost:{FRONTEND_PORT}"
```
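A minimal sketch of why the base-URL switch matters (the `preview_url` helper is hypothetical, not the sample's actual code; only the two port numbers come from the diff above): URLs handed to the browser must use the frontend origin so the Vite dev server proxies the request, keeping it same-origin.

```python
# Hypothetical helper showing how a generated attachment URL changes with the
# base URL. Only the port numbers are taken from the sample's configuration.
SERVER_PORT = 8001     # backend dev server
FRONTEND_PORT = 5171   # Vite dev server, which proxies /upload and /preview

def preview_url(attachment_id: str, base: str) -> str:
    return f"{base}/preview/{attachment_id}"

# Before: points straight at the backend -> cross-origin fetch from the frontend.
direct = preview_url("att-123", f"http://localhost:{SERVER_PORT}")
# After: points at the frontend origin -> Vite forwards it to the backend.
proxied = preview_url("att-123", f"http://localhost:{FRONTEND_PORT}")
print(proxied)  # http://localhost:5171/preview/att-123
```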
Hi @yashy797, thank you for your contribution! The agent that just responded will already have the response in its own context, so we don't need to prepopulate the cache with its own response. Are you seeing an issue where the agent is missing its own response? Could you provide a reproduction?
Motivation and Context
When the orchestrator selects the same participant to speak in consecutive turns, the broadcast step after the first response skips that participant (because it was the one who just responded). This means the participant's AgentExecutor cache is not repopulated with its own response before the follow-up request arrives, causing the agent to miss context or potentially triggering errors from empty message payloads.
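A toy model of this failure mode (all names here are hypothetical; this is not the framework's actual executor-cache code) shows how skipping the broadcast leaves the re-speaking participant without its own latest message:

```python
# Each participant keeps a local message cache that is only repopulated via
# broadcasts. If the broadcast after turn N skips the responder, and that same
# participant is selected for turn N+1, its cache lacks its own latest message.
caches = {p: [] for p in ("alice", "bob")}

def broadcast(message, sender, next_speaker, include_next_speaker):
    for p, cache in caches.items():
        if p != sender or (include_next_speaker and p == next_speaker):
            cache.append(message)

# Old behavior: alice responds and is also chosen as the next speaker,
# but the broadcast skips her entirely.
broadcast("alice: draft v1", sender="alice", next_speaker="alice",
          include_next_speaker=False)
print(caches["alice"])  # [] -- alice's follow-up turn starts without her own draft

# New behavior: the next speaker is always included in the broadcast.
caches = {p: [] for p in ("alice", "bob")}
broadcast("alice: draft v1", sender="alice", next_speaker="alice",
          include_next_speaker=True)
print(caches["alice"])  # ['alice: draft v1'] -- cache repopulated before the follow-up
```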
Related to #3705: that issue addressed empty message content in the same `_handle_response` methods by adding `clean_conversation_for_handoff`. This fix addresses a separate broadcast filtering bug in the same code path.
Description
Fixes the message broadcasting filter in both `GroupChatOrchestrator._handle_response` and `AgentBasedGroupChatOrchestrator._handle_response` so that when the next speaker is the same participant that just responded, it still receives the broadcast.
Changes in `_group_chat.py`:
- `GroupChatOrchestrator._handle_response`: the participant filter now uses `p != participant or p == next_speaker` instead of `p != participant`, ensuring the next speaker is always included in the broadcast even if it just responded.
- `AgentBasedGroupChatOrchestrator._handle_response`: the same filtering logic is applied. Additionally, `next_speaker` is extracted into a local variable for clarity and reused in both the broadcast filter and the subsequent `_send_request_to_participant` call.
Both orchestrators now share the comment explaining the rationale.
Contribution Checklist