Fix infinite loop in tool calling with FunctionCallingConfig mode='ANY' (Issue google#4179) #4188
base: main
Conversation
Fix infinite loop in tool calling with FunctionCallingConfig mode='ANY' (Issue google#4179)

This commit fixes the infinite-loop bug where agents with FunctionCallingConfig mode='ANY' could continuously call tools without ever providing a final response, causing the agent to hang indefinitely.

## Root Cause

When using FunctionCallingConfig with mode='ANY', the LLM is free to keep calling tools as many times as it wants. There was no mechanism to stop the model from calling tools forever without ever providing a final response.

## Solution

Added a new `max_tool_iterations` configuration parameter with the following features:

1. **New configuration (`run_config.py`)**:
   - Added a `max_tool_iterations` field with a default value of 50
   - Prevents infinite loops by limiting consecutive tool-calling cycles
   - Can be disabled by setting it to 0 or a negative value
   - Includes a validator that warns when the limit is disabled
2. **Iteration tracking (`invocation_context.py`)**:
   - Added a `ToolIterationsLimitExceededError` exception
   - Extended `_InvocationCostManager` to track tool iterations
   - Added `increment_tool_iteration_count()` and `reset_tool_iteration_count()` methods
   - The counter resets when the agent provides a final response (no function calls)
3. **Flow integration (`base_llm_flow.py`)**:
   - Increments the counter when function calls are detected
   - Resets the counter when a final response is received
   - Applied to both `run_async` and `run_live` flows

## Changes Made

**Modified files:**
- `src/google/adk/agents/run_config.py` (added the `max_tool_iterations` config)
- `src/google/adk/agents/invocation_context.py` (added iteration tracking)
- `src/google/adk/flows/llm_flows/base_llm_flow.py` (integrated the counter logic)

**New files:**
- `tests/unittests/flows/llm_flows/test_tool_iteration_limit.py` (comprehensive tests)

## Testing

Added comprehensive unit tests covering:
- The default configuration value
- Limit enforcement
- Counter reset behavior
- The disabled-limit scenario
- The configuration validator

All tests pass successfully.

## Impact

This change prevents infinite loops when using FunctionCallingConfig mode='ANY', especially in scenarios such as sub-agents used as tools, where the agent could previously get stuck in an endless tool-calling cycle.

Fixes google#4179
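The guard described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the standalone `ToolIterationCounter` class is a hypothetical stand-in for the counter added to `_InvocationCostManager`, while the method and exception names follow the PR description.

```python
class ToolIterationsLimitExceededError(RuntimeError):
  """Raised when consecutive tool-calling cycles exceed the configured limit."""


class ToolIterationCounter:
  """Hypothetical stand-in for the iteration tracking in _InvocationCostManager."""

  def __init__(self, max_tool_iterations: int = 50):
    self.max_tool_iterations = max_tool_iterations
    self._count = 0

  def increment_tool_iteration_count(self) -> None:
    """Called by the flow each time the model emits function calls."""
    self._count += 1
    # A limit of 0 or below disables the check entirely.
    if 0 < self.max_tool_iterations < self._count:
      raise ToolIterationsLimitExceededError(
          f"Exceeded max_tool_iterations={self.max_tool_iterations}"
      )

  def reset_tool_iteration_count(self) -> None:
    """Called when the model returns a final response (no function calls)."""
    self._count = 0
```

Under this sketch, the flow increments the counter on every tool-calling cycle and resets it on a final response, so only an unbroken run of tool calls can trip the limit.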
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Summary of Changes

Hello @jayy-77, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a critical issue where agents using FunctionCallingConfig mode='ANY' could enter an endless tool-calling cycle without ever producing a final response.
Response from ADK Triaging Agent

Hello @jayy-77, thank you for your contribution! It looks like the Contributor License Agreement (CLA) check has failed. Please make sure you have signed the CLA at https://cla.developers.google.com/.
Code Review
This pull request introduces a robust mechanism to prevent infinite tool-calling loops by adding a max_tool_iterations configuration. The implementation is well-structured across run_config.py, invocation_context.py, and base_llm_flow.py. The new configuration is properly validated, and the iteration counter is correctly managed within the LLM flow. The addition of comprehensive unit tests is excellent and covers the core logic of the new feature. I have one suggestion to improve the validator test to make it more robust. Overall, this is a solid contribution that addresses a critical bug.
```python
async def test_max_tool_iterations_validator():
  """Test that RunConfig validator warns about disabled limit."""
  import logging
  import warnings

  # Setting to 0 should trigger a warning
  with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    run_config = RunConfig(max_tool_iterations=0)
    assert run_config.max_tool_iterations == 0

  # Setting to positive value should not raise
  run_config = RunConfig(max_tool_iterations=50)
  assert run_config.max_tool_iterations == 50
```
The current implementation of test_max_tool_iterations_validator does not correctly test for the warning message. The validator uses logger.warning, which is not captured by warnings.catch_warnings by default. To properly test that the warning is logged, you should use the caplog fixture provided by pytest. This will ensure the test is robust and correctly verifies the validator's behavior.
Suggested change:

```diff
-async def test_max_tool_iterations_validator():
-  """Test that RunConfig validator warns about disabled limit."""
-  import logging
-  import warnings
-
-  # Setting to 0 should trigger a warning
-  with warnings.catch_warnings(record=True):
-    warnings.simplefilter("always")
-    run_config = RunConfig(max_tool_iterations=0)
-    assert run_config.max_tool_iterations == 0
-
-  # Setting to positive value should not raise
-  run_config = RunConfig(max_tool_iterations=50)
-  assert run_config.max_tool_iterations == 50
+async def test_max_tool_iterations_validator(caplog):
+  """Test that RunConfig validator warns about disabled limit."""
+  import logging
+
+  # Setting to 0 should trigger a warning
+  with caplog.at_level(logging.WARNING):
+    run_config = RunConfig(max_tool_iterations=0)
+    assert run_config.max_tool_iterations == 0
+    assert 'max_tool_iterations is less than or equal to 0' in caplog.text
+
+  # Setting to positive value should not raise or log a warning
+  caplog.clear()
+  run_config = RunConfig(max_tool_iterations=50)
+  assert run_config.max_tool_iterations == 50
+  assert not caplog.text
```
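The reviewer's point — that `warnings.catch_warnings` only intercepts the `warnings` module, not the `logging` machinery — can be seen in a small standalone sketch (no pytest required; the logger name and message are illustrative):

```python
import logging
import warnings

# Collect log records in a plain list so we can inspect them afterwards.
records = []

class ListHandler(logging.Handler):
  def emit(self, record):
    records.append(record.getMessage())

logger = logging.getLogger("demo")
logger.addHandler(ListHandler())

with warnings.catch_warnings(record=True) as caught:
  warnings.simplefilter("always")
  # Emitted via the logging machinery, not the warnings machinery.
  logger.warning("max_tool_iterations is less than or equal to 0")

print(len(caught))  # 0 -- catch_warnings saw nothing
print(records)      # the message landed only in the log handler
```

This is exactly why the suggestion swaps `warnings.catch_warnings` for pytest's `caplog` fixture, which hooks into `logging` and exposes captured messages via `caplog.text`.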
Please ensure you have read the contribution guide before creating a pull request.
Link to Issue or Description of Change
1. Link to an existing issue (if applicable):