Deprecate LP batch solve across Python, server, and docs #915

anandhkb wants to merge 3 commits into NVIDIA:main from
Conversation
anandhkb
commented
Feb 27, 2026
- Add DeprecationWarning to LP BatchSolve in solver.py
- Replace server batch path with sequential Solve() loop
- Add empty-list guard for batch LP_data (HTTP 400)
- Update cuopt_self_hosted client batch mode docs
- Update FAQ, lp-examples, batch_mode_example.sh
- Fix test_lp_solver: individual_solutions = [] (was [] * nb_solves)
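As a rough illustration of the first bullet, a deprecation shim of this shape might look like the following. The fallback body and the `Solve` stand-in are assumptions for illustration, not the PR's actual code:

```python
import warnings

def Solve(data_model, solver_settings=None):
    # Stand-in for cuOpt's single-problem LP solve.
    return {"model": data_model, "settings": solver_settings}

def BatchSolve(data_models, solver_settings=None):
    """Deprecated: solve each LP individually with Solve() instead."""
    warnings.warn(
        "BatchSolve is deprecated; solve LPs sequentially with Solve()",
        DeprecationWarning,
        stacklevel=2,
    )
    # Fall back to solving each problem one at a time.
    return [Solve(m, solver_settings=solver_settings) for m in data_models]
```

`stacklevel=2` makes the warning point at the caller's line rather than the shim itself.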
📝 Walkthrough

LP batch mode is deprecated: docs and examples are renamed to "Multiple LPs", BatchSolve now emits a DeprecationWarning, the server now solves multiple LPs by calling Solve per item, CLI/client help and warmstart validation are updated, and tests check equivalence between batch and sequential solves.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py (1)
491-496: ⚠️ Potential issue | 🟠 Major — Strengthen batch-vs-sequential checks to include numerical equality.
This test currently validates only termination status. It should also compare objective/primal outputs to confirm numerical correctness equivalence.
Proposed assertion additions
```diff
 for i in range(nb_solves):
     assert (
         batch_solution[i].get_termination_status()
         == individual_solutions[i].get_termination_status()
     )
+    assert batch_solution[i].get_primal_objective() == pytest.approx(
+        individual_solutions[i].get_primal_objective(), rel=1e-6, abs=1e-8
+    )
+    assert np.array(batch_solution[i].get_primal_solution()) == pytest.approx(
+        np.array(individual_solutions[i].get_primal_solution()),
+        rel=1e-5,
+        abs=1e-7,
+    )
```

As per coding guidelines: "Write tests validating numerical correctness of optimization results (not just 'runs without error')".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py` around lines 491 - 496, The test only checks termination status; extend it to assert numerical equality between batch_solution and individual_solutions for each i in range(nb_solves) by comparing objective values and primal variable vectors: for each index use the solver result accessors (e.g., get_objective_value() or objective/obj property and get_primal_values() or primal/vars accessor on batch_solution[i] and individual_solutions[i]) and assert the objectives are equal within a small tolerance (pytest.approx or an absolute/relative tol) and the primal arrays match element-wise within tolerance; keep the existing termination-status assertion and add these numeric assertions in the same loop.
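For reference, the `pytest.approx` tolerances proposed above combine a relative and an absolute bound; a standalone illustration:

```python
from pytest import approx

# A value passes if it is within max(rel * |expected|, abs) of expected.
assert 1.0000001 == approx(1.0, rel=1e-6, abs=1e-8)

# approx also compares sequences element-wise, which is what the
# suggested primal-solution assertion relies on.
assert [0.5000004, 1.9999999] == approx([0.5, 2.0], rel=1e-5, abs=1e-7)
```

The absolute bound matters near zero, where a relative tolerance alone would shrink to nothing.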
🧹 Nitpick comments (3)
python/cuopt_server/cuopt_server/utils/linear_programming/solver.py (1)
358-362: Consider: wall-clock timing vs. engine solve time.

The timing logic uses `time.time()` around the `Solve` call, which measures wall-clock time including Python overhead. The original `BatchSolve` may have returned engine-level solve time. This could result in slightly different `total_solve_time` values compared to the deprecated batch path.

If precise engine-level timing is required, consider summing `i_sol.get_solve_time()` instead:

💡 Alternative timing approach

```diff
 for i_data in LP_data:
     i_warnings, data_model = create_data_model(i_data)
     warnings.extend(i_warnings)
-    solve_begin_time = time.time()
     i_sol = linear_programming.Solve(
         data_model, solver_settings=solver_settings
     )
-    total_solve_time += time.time() - solve_begin_time
+    total_solve_time += i_sol.get_solve_time()
     sol.append(i_sol)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py` around lines 358 - 362, The current timing wraps the call to linear_programming.Solve with time.time() (solve_begin_time and total_solve_time), which measures wall-clock/Python overhead rather than the solver engine's internal time; change the accumulation to use the solver-reported time by calling i_sol.get_solve_time() (or equivalent on the Solve result) and add that to total_solve_time so timings match the original BatchSolve semantics and exclude Python overhead.

docs/cuopt/source/cuopt-server/examples/lp-examples.rst (2)
18-21: Section title inconsistent with deprecation messaging.

Line 5 states batch mode is deprecated, but the section title at line 20 still reads "Genric Example With Normal Mode and Batch Mode" without indicating deprecation. Additionally, "Genric" appears to be a typo for "Generic".
Consider updating the title to reflect deprecation:
📝 Suggested title update
```diff
-.. _generic-example-with-normal-and-batch-mode:
+.. _generic-example-with-normal-and-multiple-lps:

-Genric Example With Normal Mode and Batch Mode
------------------------------------------------
+Generic Example With Normal Mode and Multiple LPs (Batch Deprecated)
+---------------------------------------------------------------------
```
Verify each finding against the current code and only fix it if needed. In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 18 - 21, The section title anchored by "_generic-example-with-normal-and-batch-mode" contains a typo ("Genric") and fails to indicate that batch mode is deprecated; update the heading text "Genric Example With Normal Mode and Batch Mode" to correct the typo to "Generic" and explicitly mark batch mode as deprecated (e.g., "Generic Example With Normal Mode (Batch Mode deprecated)") so the title aligns with the deprecation note in the body.
405-408: CLI batch mode section lacks deprecation notice.

The CLI batch mode section describes sending multiple MPS files at once but doesn't mention that batch mode is deprecated, unlike the Python examples section (line 5). For consistency, consider adding a deprecation note here as well.
📝 Suggested deprecation notice
```diff
-In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI:
+In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI.
+
+.. note::
+   LP batch mode is deprecated. Multiple problems are now solved sequentially.

 .. note::
    Batch mode is not available for MILP problems.
```
Verify each finding against the current code and only fix it if needed. In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 405 - 408, Add a deprecation note to the CLI batch-mode paragraph that starts "In the case of batch mode, you can send a bunch of ``mps`` files at once" so it matches the Python examples deprecation: insert a short ".. deprecated::" directive (or equivalent deprecation note) immediately before or after that paragraph and keep the existing clarifying note "Batch mode is not available for MILP problems." so readers see both the deprecation and the current limitation.
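The ``.. deprecated::`` directive mentioned in this prompt renders a standard deprecation admonition in Sphinx. A sketch of the form it takes (the version argument is a placeholder, not taken from the PR):

```rst
.. deprecated:: <version>
   LP batch mode is deprecated. Multiple problems are now solved sequentially.
```

Unlike a plain ``.. note::``, this directive is styled consistently with other deprecation notices across the docs.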
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py`:
- Line 375: Add a guard in solve() before calling get_LP_solve() that rejects
using warmstart_id when more than one LP problem is supplied: if
args.warmstart_id is set and the request contains multiple LP problems (same
condition used in the existing init_ids validation), raise/exit with a clear
error message. This mirrors the existing init_ids check and prevents forwarding
warmstart_id into get_LP_solve()/_send_request() when multiple problems are
present.
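A minimal sketch of the guard described in this prompt. The function name, argument shapes, and error text here are assumptions for illustration; the real CLI would use its own parsed args:

```python
import sys

def check_warmstart_args(args, lp_problems):
    # Reject warmstart_id when more than one LP problem is supplied,
    # mirroring the existing init_ids validation described above.
    if getattr(args, "warmstart_id", None) and len(lp_problems) > 1:
        sys.exit(
            "Error: warmstart_id cannot be used with multiple LP problems"
        )
```

Calling `sys.exit` with a string prints the message to stderr and exits with status 1, which matches typical CLI error handling.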
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-482: The test calls the deprecated BatchSolve but doesn't
assert that a DeprecationWarning is emitted; wrap the call to
solver.BatchSolve(...) in a pytest.warns(DeprecationWarning) context (or use
pytest.deprecated_call) to assert the deprecation contract, keeping the rest of
the assertions on batch_solution and solve_time unchanged; reference the
existing call to BatchSolve in
python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py to locate where to
add the pytest.warns assertion and ensure the test fails if the warning is not
emitted.
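In isolation, the `pytest.warns` pattern this prompt asks for looks like the following; the warning-emitting function here is a stand-in, not the real BatchSolve:

```python
import warnings
import pytest

def batch_solve_stub():
    # Stand-in for the deprecated BatchSolve entry point.
    warnings.warn("BatchSolve is deprecated", DeprecationWarning, stacklevel=2)
    return "ok"

# The enclosing test fails if no DeprecationWarning is emitted in this block.
with pytest.warns(DeprecationWarning):
    result = batch_solve_stub()
assert result == "ok"
```

`pytest.deprecated_call()` is shorthand for `pytest.warns((DeprecationWarning, PendingDeprecationWarning))` and would work equally well here.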
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)

- .github/skills/cuopt-lp-milp/resources/server_examples.md
- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
- docs/cuopt/source/faq.rst
- python/cuopt/cuopt/linear_programming/solver/solver.py
- python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
- python/cuopt_self_hosted/README.md
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
- python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
- python/cuopt_server/cuopt_server/webserver.py
- Add warmstart_id guard in cuopt_sh.py when multiple LP problems
- Fix lp-examples.rst: Genric → Generic, add deprecation to section title
- Add deprecation note to CLI batch mode section
- Fix lp-qp-features.rst references to renamed RST anchor
- Use i_sol.get_solve_time() in server batch path (engine timing)
- Fix error status serialization: use .name for JSON in batch error path
- Add numerical equality assertions in test_parser_and_batch_solver
- Remove pytest.warns for DeprecationWarning (fails with older installed pkg)
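The `.name` serialization fix in the list above reflects that Python `Enum` members are not JSON-serializable by default. An illustrative sketch (the enum here is hypothetical, not cuOpt's actual status type):

```python
import json
from enum import Enum

class TerminationStatus(Enum):
    OPTIMAL = 1
    INFEASIBLE = 2

status = TerminationStatus.INFEASIBLE
# json.dumps(status) would raise TypeError ("Object of type
# TerminationStatus is not JSON serializable"), so the error path
# serializes the .name string instead.
payload = json.dumps({"status": status.name})
```

The `.value` attribute would also serialize, but `.name` keeps the human-readable status string in the response.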
Actionable comments posted: 2
🧹 Nitpick comments (1)
docs/cuopt/source/cuopt-server/examples/lp-examples.rst (1)
405-406: Clarify phrasing for deprecated batch mode.

The text "In the case of batch mode, you can send a bunch of mps files at once" uses present tense and suggests batch mode is a current feature, which may confuse readers given that batch mode is deprecated. Consider rephrasing to clarify the deprecated status more clearly in this context.
📝 Suggested rephrasing
```diff
-In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI.
+For solving multiple ``mps`` files, you can send them together (they will be solved sequentially). This multi-file mode works only for ``mps`` in the case of CLI.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 405 - 406, The sentence "In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results." incorrectly uses present tense and implies batch mode is current; update the wording to clearly indicate batch mode is deprecated (e.g., change to past or explicitly mention "deprecated") and clarify it only applied to CLI for ``mps`` files—edit the sentence in the paragraph that mentions batch mode so it explicitly states that batch mode was deprecated and therefore should not be relied upon.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst`:
- Line 5: Remove inline deprecation notices from the example: delete the
parenthetical "(solved sequentially; batch mode is deprecated)", remove "(Batch
Deprecated)" from the section title, and remove the "LP batch mode is
deprecated" note block; instead use neutral wording that describes behavior
(e.g., "solved sequentially" or "batch behavior") and add a brief pointer to the
external documentation site (a short "See external docs for deprecation and
migration guidance") so all deprecation and migration details are maintained
off-repo per project guidelines.
In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py`:
- Around line 351-360: The batch loop currently builds solver_settings once from
create_solver(LP_data[0], warmstart_data) and reuses it for all items, dropping
per-item solver_config; update the logic so for each i_data you call
create_solver(i_data, warmstart_data) (or merge per-item solver_config into a
base via create_solver) before invoking linear_programming.Solve; locate
create_solver, LP_data, warmstart_data, solver_settings, create_data_model and
linear_programming.Solve and ensure warnings from create_solver are appended per
item and that each Solve receives the correct per-item solver_settings.
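A sketch of the per-item settings flow this prompt describes, with the cuOpt helpers passed in as plain callables; their real signatures and return shapes are assumptions here:

```python
def solve_all(LP_data, warmstart_data, create_solver, create_data_model, solve):
    warnings, solutions = [], []
    for i_data in LP_data:
        # Rebuild solver settings per item so each problem's own
        # solver_config is honored, rather than reusing LP_data[0]'s.
        s_warnings, solver_settings = create_solver(i_data, warmstart_data)
        warnings.extend(s_warnings)
        d_warnings, data_model = create_data_model(i_data)
        warnings.extend(d_warnings)
        solutions.append(solve(data_model, solver_settings=solver_settings))
    return warnings, solutions
```

The key point is simply that `create_solver` moves inside the loop, and its warnings are accumulated alongside the data-model warnings.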
📒 Files selected for processing (5)

- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/lp-qp-features.rst
- python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
- python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
🚧 Files skipped from review as they are similar to previous changes (1)
- python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
Force-pushed from 7004771 to 81719c3
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh (1)
7-11: Make terminology fully consistent across the example.

These updated notes correctly describe sequential solving, but the script banner still says “Batch Mode” (Line 49). Renaming that banner to “Sequential LP solve” would avoid mixed messaging.

As per coding guidelines, `docs/**/*` should ensure “Clarity” and “Consistency … terminology match code”.

Also applies to: 56-56
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh` around lines 7 - 11, Update the script's banner and any matching labels from "Batch Mode" to "Sequential LP solve" so the terminology matches the note about sequential solving; search for the literal "Batch Mode" string in batch_mode_example.sh (including the banner at the top and any other occurrences such as the secondary message around line 56) and replace them with "Sequential LP solve" to keep the docs' terminology consistent with the updated behavior.
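One mechanical way to apply the rename from this prompt is a sed pass over the script. The snippet below demonstrates it on a scratch copy in `/tmp` rather than the repository file:

```shell
# Create a scratch copy containing the old banner text.
printf '%s\n' '# ===== Batch Mode =====' 'echo "Running Batch Mode example"' \
  > /tmp/batch_mode_example.sh

# Replace every "Batch Mode" occurrence with "Sequential LP solve".
sed -i 's/Batch Mode/Sequential LP solve/g' /tmp/batch_mode_example.sh
```

On the real file the same `sed` invocation would catch the banner and any secondary messages in one pass.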
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-483: Update the misleading comment near the BatchSolve call in
test_lp_solver.py: either remove the sentence claiming "CI asserts it via
pytest.warns" or change it to point to the actual location that asserts the
DeprecationWarning (e.g., the test function or file that uses pytest.warns), and
ensure the comment references the deprecated method name BatchSolve and the
recommended replacement Solve so readers can find the related code paths; keep
the comment concise and accurate.
📒 Files selected for processing (12)

- .github/skills/cuopt-lp-milp/resources/server_examples.md
- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
- docs/cuopt/source/faq.rst
- docs/cuopt/source/lp-qp-features.rst
- python/cuopt/cuopt/linear_programming/solver/solver.py
- python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
- python/cuopt_self_hosted/README.md
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
- python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
- python/cuopt_server/cuopt_server/webserver.py
🚧 Files skipped from review as they are similar to previous changes (7)
- docs/cuopt/source/faq.rst
- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/lp-qp-features.rst
- python/cuopt_self_hosted/README.md
- python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
- .github/skills/cuopt-lp-milp/resources/server_examples.md
- python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-482: The comment block above BatchSolve is duplicated and
includes an incomplete/truncated line; edit the comment that references
BatchSolve (the deprecation note) to remove the duplicate/truncated lines and
replace them with a single clear sentence such as "Call BatchSolve (deprecated;
use sequential Solve instead)" so that only one complete, non-redundant comment
remains immediately above the BatchSolve usage.
Force-pushed from baec347 to 81719c3
…up, docs terminology

Signed-off-by: anandhkb <aanandh@nvidia.com>
🧹 Nitpick comments (1)
python/cuopt_server/cuopt_server/utils/linear_programming/solver.py (1)
354-362: Per-item solver creation correctly addresses previous feedback.

The refactored loop now calls `create_solver(i_data, warmstart_data)` for each item, ensuring per-item solver settings are respected.

Consider rejecting batch mode with warmstart upfront.

The `warmstart_data` parameter is applied to each item in the batch. Given that the Python API's `BatchSolve` raises an exception when warmstart is provided (as tested in `test_batch_solver_warm_start`), consider adding an explicit guard at the server level:

```python
if warmstart_data is not None:
    raise HTTPException(
        status_code=400,
        detail="Warmstart is not supported for batch LP requests",
    )
```

This would provide a clearer error message than letting each `Solve()` potentially fail with dimension mismatches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py` around lines 354 - 362, Add an explicit guard to reject warmstart for batch LP requests: if the incoming warmstart_data is not None and the call is a batch solve (e.g., the code path that iterates and calls create_solver(...) per item / the BatchSolve API), raise an HTTPException (400) with a clear message like "Warmstart is not supported for batch LP requests". Locate the batch handling code around where create_solver(i_data, warmstart_data) and the loop that calls linear_programming.Solve(...) (symbols: create_solver, warmstart_data, linear_programming.Solve) and enforce this check before entering the per-item loop so the server returns a concise client error instead of letting individual Solve() calls fail.
📒 Files selected for processing (4)

- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
- python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
- python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/cuopt/source/cuopt-server/examples/lp-examples.rst
- docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh