
Deprecate LP batch solve across Python, server, and docs#915

Open
anandhkb wants to merge 3 commits into NVIDIA:main from anandhkb:deprecateBatchLP

Conversation

@anandhkb
Contributor

  • Add DeprecationWarning to LP BatchSolve in solver.py
  • Replace server batch path with sequential Solve() loop
  • Add empty-list guard for batch LP_data (HTTP 400)
  • Update cuopt_self_hosted client batch mode docs
  • Update FAQ, lp-examples, batch_mode_example.sh
  • Fix test_lp_solver: individual_solutions = [] (was [] * nb_solves)

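The DeprecationWarning change listed above can be sketched roughly as follows; `solve` and `batch_solve` here are illustrative stand-ins, not the actual cuOpt entry points:

```python
import warnings

def solve(problem):
    # Stand-in for the per-problem solver entry point.
    return {"problem": problem, "status": "Optimal"}

def batch_solve(problems):
    """Deprecated: prefer calling solve() once per problem."""
    warnings.warn(
        "batch_solve is deprecated; call solve() per problem instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return [solve(p) for p in problems]

# Existing callers keep working but now see the warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    results = batch_solve(["lp1", "lp2"])

print(len(results))                              # 2
print(caught[0].category is DeprecationWarning)  # True
```

`stacklevel=2` makes the warning point at the caller's line rather than the library internals, which is the usual convention for deprecation warnings.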
@anandhkb anandhkb added this to the 26.04 milestone Feb 27, 2026
@anandhkb anandhkb requested a review from a team as a code owner February 27, 2026 22:54
@anandhkb anandhkb requested a review from tmckayus February 27, 2026 22:54
@anandhkb anandhkb added the non-breaking label (Introduces a non-breaking change) Feb 27, 2026
@copy-pr-bot

copy-pr-bot bot commented Feb 27, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Feb 27, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

LP batch mode is deprecated: docs and examples renamed to "Multiple LPs", BatchSolve now emits a DeprecationWarning, server now solves multiple LPs by calling Solve per item, CLI/client help and warmstart validation updated, and tests check equivalence between batch and sequential solves.

Changes

Documentation
  Files: .github/skills/cuopt-lp-milp/resources/server_examples.md, docs/cuopt/source/cuopt-server/examples/lp-examples.rst, docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh, docs/cuopt/source/faq.rst, docs/cuopt/source/lp-qp-features.rst
  Replaced "Batch Mode" wording with "Multiple LPs", marked batch mode deprecated, and updated anchors, titles, descriptions, example banners, notes, and cross-references.

Core solver (client lib)
  Files: python/cuopt/cuopt/linear_programming/solver/solver.py
  Added a warnings import; updated the BatchSolve docstring; emits warnings.warn(..., DeprecationWarning) at runtime. Signature and core behavior are unchanged.

Server solver implementation
  Files: python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
  Validates that the LP list is non-empty; emits a deprecation warning; replaces the single BatchSolve call with per-item creation and Solve calls; accumulates per-item solutions and total solve time; converts status enums to names in the response.

Server API docs
  Files: python/cuopt_server/cuopt_server/webserver.py
  Updated the warmstartId parameter description to indicate that batch LP is deprecated and warmstart is enabled only for a single LP.

CLI / self-hosted client
  Files: python/cuopt_self_hosted/README.md, python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py, python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
  Updated help text to state that multiple LPs are solved sequentially; added validation to reject use of warmstart_id when multiple LP problems are provided.

Tests
  Files: python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
  Updated comments for deprecation/warmstart; adjusted result list initialization; added assertions comparing BatchSolve outputs to individually solved results (primal objective and solution) to verify equivalence.
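The sequential server path described for the server solver row above can be sketched as follows. `Status`, `solve_one`, and the dict shapes are hypothetical stand-ins for the real cuOpt types, under the assumption that the server rejects an empty batch and serializes status enums by name:

```python
from enum import Enum

class Status(Enum):
    # Hypothetical status enum standing in for the solver's termination codes.
    OPTIMAL = 1
    INFEASIBLE = 2

def solve_one(lp):
    # Stand-in for creating a data model and calling Solve() on it;
    # returns (status, engine-reported solve time in seconds).
    return Status.OPTIMAL, 0.01

def solve_many(lp_data):
    if not lp_data:
        # Empty batch is rejected up front (the server maps this to HTTP 400).
        raise ValueError("LP_data must contain at least one problem")
    solutions, total_solve_time = [], 0.0
    for lp in lp_data:
        status, solve_time = solve_one(lp)
        total_solve_time += solve_time
        # Serialize the status enum by name so the JSON response stays readable.
        solutions.append({"status": status.name})
    return solutions, total_solve_time

sols, total = solve_many(["lp1", "lp2"])
print([s["status"] for s in sols])  # ['OPTIMAL', 'OPTIMAL']
```

Summing the engine-reported per-solve times (rather than wrapping the loop body in wall-clock timing) keeps the reported total comparable to what a single batch call would have returned.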

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 18.18%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Title check — ✅ Passed: the title 'Deprecate LP batch solve across Python, server, and docs' directly matches the main objective of the changeset, which deprecates LP batch solving functionality across multiple components.
  • Description check — ✅ Passed: the description lists specific, actionable changes that align with the PR objectives: deprecation warnings, sequential solve loops, guards, and documentation updates across the codebase.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py (1)

491-496: ⚠️ Potential issue | 🟠 Major

Strengthen batch-vs-sequential checks to include numerical equality.

This test currently validates only termination status. It should also compare objective/primal outputs to confirm numerical correctness equivalence.

Proposed assertion additions
     for i in range(nb_solves):
         assert (
             batch_solution[i].get_termination_status()
             == individual_solutions[i].get_termination_status()
         )
+        assert batch_solution[i].get_primal_objective() == pytest.approx(
+            individual_solutions[i].get_primal_objective(), rel=1e-6, abs=1e-8
+        )
+        assert np.array(batch_solution[i].get_primal_solution()) == pytest.approx(
+            np.array(individual_solutions[i].get_primal_solution()),
+            rel=1e-5,
+            abs=1e-7,
+        )

As per coding guidelines: "Write tests validating numerical correctness of optimization results (not just 'runs without error')".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py` around lines
491 - 496, The test only checks termination status; extend it to assert
numerical equality between batch_solution and individual_solutions for each i in
range(nb_solves) by comparing objective values and primal variable vectors: for
each index use the solver result accessors (e.g., get_objective_value() or
objective/obj property and get_primal_values() or primal/vars accessor on
batch_solution[i] and individual_solutions[i]) and assert the objectives are
equal within a small tolerance (pytest.approx or an absolute/relative tol) and
the primal arrays match element-wise within tolerance; keep the existing
termination-status assertion and add these numeric assertions in the same loop.
🧹 Nitpick comments (3)
python/cuopt_server/cuopt_server/utils/linear_programming/solver.py (1)

358-362: Consider: wall-clock timing vs. engine solve time.

The timing logic uses time.time() around the Solve call, which measures wall-clock time including Python overhead. The original BatchSolve may have returned engine-level solve time. This could result in slightly different total_solve_time values compared to the deprecated batch path.

If precise engine-level timing is required, consider summing i_sol.get_solve_time() instead:

💡 Alternative timing approach
             for i_data in LP_data:
                 i_warnings, data_model = create_data_model(i_data)
                 warnings.extend(i_warnings)
-                solve_begin_time = time.time()
                 i_sol = linear_programming.Solve(
                     data_model, solver_settings=solver_settings
                 )
-                total_solve_time += time.time() - solve_begin_time
+                total_solve_time += i_sol.get_solve_time()
                 sol.append(i_sol)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py` around
lines 358 - 362, The current timing wraps the call to linear_programming.Solve
with time.time() (solve_begin_time and total_solve_time), which measures
wall-clock/Python overhead rather than the solver engine's internal time; change
the accumulation to use the solver-reported time by calling
i_sol.get_solve_time() (or equivalent on the Solve result) and add that to
total_solve_time so timings match the original BatchSolve semantics and exclude
Python overhead.
docs/cuopt/source/cuopt-server/examples/lp-examples.rst (2)

18-21: Section title inconsistent with deprecation messaging.

Line 5 states batch mode is deprecated, but the section title at line 20 still reads "Genric Example With Normal Mode and Batch Mode" without indicating deprecation. Additionally, "Genric" appears to be a typo for "Generic".

Consider updating the title to reflect deprecation:

📝 Suggested title update
-.. _generic-example-with-normal-and-batch-mode:
+.. _generic-example-with-normal-and-multiple-lps:

-Genric Example With Normal Mode and Batch Mode
------------------------------------------------
+Generic Example With Normal Mode and Multiple LPs (Batch Deprecated)
+---------------------------------------------------------------------
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 18 -
21, The section title anchored by "_generic-example-with-normal-and-batch-mode"
contains a typo ("Genric") and fails to indicate that batch mode is deprecated;
update the heading text "Genric Example With Normal Mode and Batch Mode" to
correct the typo to "Generic" and explicitly mark batch mode as deprecated
(e.g., "Generic Example With Normal Mode (Batch Mode deprecated)") so the title
aligns with the deprecation note in the body.

405-408: CLI batch mode section lacks deprecation notice.

The CLI batch mode section describes sending multiple MPS files at once but doesn't mention that batch mode is deprecated, unlike the Python examples section (line 5). For consistency, consider adding a deprecation note here as well.

📝 Suggested deprecation notice
-In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI:
+In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI.
+
+.. note::
+   LP batch mode is deprecated. Multiple problems are now solved sequentially.

 .. note::
    Batch mode is not available for MILP problems.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 405 -
408, Add a deprecation note to the CLI batch-mode paragraph that starts "In the
case of batch mode, you can send a bunch of ``mps`` files at once" so it matches
the Python examples deprecation: insert a short ".. deprecated::" directive (or
equivalent deprecation note) immediately before or after that paragraph and keep
the existing clarifying note "Batch mode is not available for MILP problems." so
readers see both the deprecation and the current limitation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py`:
- Line 375: Add a guard in solve() before calling get_LP_solve() that rejects
using warmstart_id when more than one LP problem is supplied: if
args.warmstart_id is set and the request contains multiple LP problems (same
condition used in the existing init_ids validation), raise/exit with a clear
error message. This mirrors the existing init_ids check and prevents forwarding
warmstart_id into get_LP_solve()/_send_request() when multiple problems are
present.

In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-482: The test calls the deprecated BatchSolve but doesn't
assert that a DeprecationWarning is emitted; wrap the call to
solver.BatchSolve(...) in a pytest.warns(DeprecationWarning) context (or use
pytest.deprecated_call) to assert the deprecation contract, keeping the rest of
the assertions on batch_solution and solve_time unchanged; reference the
existing call to BatchSolve in
python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py to locate where to
add the pytest.warns assertion and ensure the test fails if the warning is not
emitted.
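The pytest.warns contract described above can be exercised in isolation like this; `batch_solve` is a toy stand-in for the deprecated entry point, not the real BatchSolve:

```python
import warnings
import pytest

def batch_solve(problems):
    # Toy stand-in that mimics the deprecated entry point.
    warnings.warn(
        "batch_solve is deprecated; use solve() per problem.",
        DeprecationWarning,
        stacklevel=2,
    )
    return list(problems)

# pytest.warns works as a plain context manager, so the contract can be
# checked even outside a test runner; it raises if no warning is emitted.
with pytest.warns(DeprecationWarning):
    result = batch_solve([1, 2])

assert result == [1, 2]
```

pytest.deprecated_call() is an equivalent shorthand that matches both DeprecationWarning and PendingDeprecationWarning.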

---

Outside diff comments:
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 491-496: The test only checks termination status; extend it to
assert numerical equality between batch_solution and individual_solutions for
each i in range(nb_solves) by comparing objective values and primal variable
vectors: for each index use the solver result accessors (e.g.,
get_objective_value() or objective/obj property and get_primal_values() or
primal/vars accessor on batch_solution[i] and individual_solutions[i]) and
assert the objectives are equal within a small tolerance (pytest.approx or an
absolute/relative tol) and the primal arrays match element-wise within
tolerance; keep the existing termination-status assertion and add these numeric
assertions in the same loop.

---

Nitpick comments:
In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst`:
- Around line 18-21: The section title anchored by
"_generic-example-with-normal-and-batch-mode" contains a typo ("Genric") and
fails to indicate that batch mode is deprecated; update the heading text "Genric
Example With Normal Mode and Batch Mode" to correct the typo to "Generic" and
explicitly mark batch mode as deprecated (e.g., "Generic Example With Normal
Mode (Batch Mode deprecated)") so the title aligns with the deprecation note in
the body.
- Around line 405-408: Add a deprecation note to the CLI batch-mode paragraph
that starts "In the case of batch mode, you can send a bunch of ``mps`` files at
once" so it matches the Python examples deprecation: insert a short "..
deprecated::" directive (or equivalent deprecation note) immediately before or
after that paragraph and keep the existing clarifying note "Batch mode is not
available for MILP problems." so readers see both the deprecation and the
current limitation.

In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py`:
- Around line 358-362: The current timing wraps the call to
linear_programming.Solve with time.time() (solve_begin_time and
total_solve_time), which measures wall-clock/Python overhead rather than the
solver engine's internal time; change the accumulation to use the
solver-reported time by calling i_sol.get_solve_time() (or equivalent on the
Solve result) and add that to total_solve_time so timings match the original
BatchSolve semantics and exclude Python overhead.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 57551ce and 8cf31b9.

📒 Files selected for processing (11)
  • .github/skills/cuopt-lp-milp/resources/server_examples.md
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
  • docs/cuopt/source/faq.rst
  • python/cuopt/cuopt/linear_programming/solver/solver.py
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
  • python/cuopt_self_hosted/README.md
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
  • python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
  • python/cuopt_server/cuopt_server/webserver.py

- Add warmstart_id guard in cuopt_sh.py when multiple LP problems
- Fix lp-examples.rst: Genric → Generic, add deprecation to section title
- Add deprecation note to CLI batch mode section
- Fix lp-qp-features.rst references to renamed RST anchor
- Use i_sol.get_solve_time() in server batch path (engine timing)
- Fix error status serialization: use .name for JSON in batch error path
- Add numerical equality assertions in test_parser_and_batch_solver
- Remove pytest.warns for DeprecationWarning (fails with older installed pkg)
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
docs/cuopt/source/cuopt-server/examples/lp-examples.rst (1)

405-406: Clarify phrasing for deprecated batch mode.

The text "In the case of batch mode, you can send a bunch of mps files at once" uses present tense and suggests batch mode is a current feature, which may confuse readers given that batch mode is deprecated. Consider rephrasing to clarify the deprecated status more clearly in this context.

📝 Suggested rephrasing
-In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI.
+For solving multiple ``mps`` files, you can send them together (they will be solved sequentially). This multi-file mode works only for ``mps`` in the case of CLI.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst` around lines 405 -
406, The sentence "In the case of batch mode, you can send a bunch of ``mps``
files at once, and acquire results." incorrectly uses present tense and implies
batch mode is current; update the wording to clearly indicate batch mode is
deprecated (e.g., change to past or explicitly mention "deprecated") and clarify
it only applied to CLI for ``mps`` files—edit the sentence in the paragraph that
mentions batch mode so it explicitly states that batch mode was deprecated and
therefore should not be relied upon.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst`:
- Line 5: Remove inline deprecation notices from the example: delete the
parenthetical "(solved sequentially; batch mode is deprecated)", remove "(Batch
Deprecated)" from the section title, and remove the "LP batch mode is
deprecated" note block; instead use neutral wording that describes behavior
(e.g., "solved sequentially" or "batch behavior") and add a brief pointer to the
external documentation site (a short "See external docs for deprecation and
migration guidance") so all deprecation and migration details are maintained
off-repo per project guidelines.

In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py`:
- Around line 351-360: The batch loop currently builds solver_settings once from
create_solver(LP_data[0], warmstart_data) and reuses it for all items, dropping
per-item solver_config; update the logic so for each i_data you call
create_solver(i_data, warmstart_data) (or merge per-item solver_config into a
base via create_solver) before invoking linear_programming.Solve; locate
create_solver, LP_data, warmstart_data, solver_settings, create_data_model and
linear_programming.Solve and ensure warnings from create_solver are appended per
item and that each Solve receives the correct per-item solver_settings.

---

Nitpick comments:
In `@docs/cuopt/source/cuopt-server/examples/lp-examples.rst`:
- Around line 405-406: The sentence "In the case of batch mode, you can send a
bunch of ``mps`` files at once, and acquire results." incorrectly uses present
tense and implies batch mode is current; update the wording to clearly indicate
batch mode is deprecated (e.g., change to past or explicitly mention
"deprecated") and clarify it only applied to CLI for ``mps`` files—edit the
sentence in the paragraph that mentions batch mode so it explicitly states that
batch mode was deprecated and therefore should not be relied upon.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8cf31b9 and 7004771.

📒 Files selected for processing (5)
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/lp-qp-features.rst
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
  • python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh (1)

7-11: Make terminology fully consistent across the example.

These updated notes correctly describe sequential solving, but the script banner still says “Batch Mode” (Line 49). Renaming that banner to “Sequential LP solve” would avoid mixed messaging.

As per coding guidelines, docs/**/* should ensure “Clarity” and “Consistency … terminology match code”.

Also applies to: 56-56

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh`
around lines 7 - 11, Update the script's banner and any matching labels from
"Batch Mode" to "Sequential LP solve" so the terminology matches the note about
sequential solving; search for the literal "Batch Mode" string in
batch_mode_example.sh (including the banner at the top and any other occurrences
such as the secondary message around line 56) and replace them with "Sequential
LP solve" to keep the docs' terminology consistent with the updated behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-483: Update the misleading comment near the BatchSolve call in
test_lp_solver.py: either remove the sentence claiming "CI asserts it via
pytest.warns" or change it to point to the actual location that asserts the
DeprecationWarning (e.g., the test function or file that uses pytest.warns), and
ensure the comment references the deprecated method name BatchSolve and the
recommended replacement Solve so readers can find the related code paths; keep
the comment concise and accurate.

---

Nitpick comments:
In `@docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh`:
- Around line 7-11: Update the script's banner and any matching labels from
"Batch Mode" to "Sequential LP solve" so the terminology matches the note about
sequential solving; search for the literal "Batch Mode" string in
batch_mode_example.sh (including the banner at the top and any other occurrences
such as the secondary message around line 56) and replace them with "Sequential
LP solve" to keep the docs' terminology consistent with the updated behavior.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7004771 and 81719c3.

📒 Files selected for processing (12)
  • .github/skills/cuopt-lp-milp/resources/server_examples.md
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
  • docs/cuopt/source/faq.rst
  • docs/cuopt/source/lp-qp-features.rst
  • python/cuopt/cuopt/linear_programming/solver/solver.py
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
  • python/cuopt_self_hosted/README.md
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
  • python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
  • python/cuopt_server/cuopt_server/webserver.py
🚧 Files skipped from review as they are similar to previous changes (7)
  • docs/cuopt/source/faq.rst
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/lp-qp-features.rst
  • python/cuopt_self_hosted/README.md
  • python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
  • .github/skills/cuopt-lp-milp/resources/server_examples.md
  • python/cuopt_self_hosted/cuopt_sh_client/cuopt_self_host_client.py

@anandhkb anandhkb requested review from Iroy30 and rgsl888prabhu March 1, 2026 04:03
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py`:
- Around line 480-482: The comment block above BatchSolve is duplicated and
includes an incomplete/truncated line; edit the comment that references
BatchSolve (the deprecation note) to remove the duplicate/truncated lines and
replace them with a single clear sentence such as "Call BatchSolve (deprecated;
use sequential Solve instead)" so that only one complete, non-redundant comment
remains immediately above the BatchSolve usage.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81719c3 and baec347.

📒 Files selected for processing (1)
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py

…up, docs terminology

Signed-off-by: anandhkb <aanandh@nvidia.com>
@anandhkb anandhkb requested a review from a team as a code owner March 6, 2026 17:02
@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
python/cuopt_server/cuopt_server/utils/linear_programming/solver.py (1)

354-362: Per-item solver creation correctly addresses previous feedback.

The refactored loop now calls create_solver(i_data, warmstart_data) for each item, ensuring per-item solver settings are respected.

Consider rejecting batch mode with warmstart upfront.

The warmstart_data parameter is applied to each item in the batch. Given that the Python API's BatchSolve raises an exception when warmstart is provided (as tested in test_batch_solver_warm_start), consider adding an explicit guard at the server level:

if warmstart_data is not None:
    raise HTTPException(
        status_code=400,
        detail="Warmstart is not supported for batch LP requests"
    )

This would provide a clearer error message than letting each Solve() potentially fail with dimension mismatches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py` around
lines 354 - 362, Add an explicit guard to reject warmstart for batch LP
requests: if the incoming warmstart_data is not None and the call is a batch
solve (e.g., the code path that iterates and calls create_solver(...) per item /
the BatchSolve API), raise an HTTPException (400) with a clear message like
"Warmstart is not supported for batch LP requests". Locate the batch handling
code around where create_solver(i_data, warmstart_data) and the loop that calls
linear_programming.Solve(...) (symbols: create_solver, warmstart_data,
linear_programming.Solve) and enforce this check before entering the per-item
loop so the server returns a concise client error instead of letting individual
Solve() calls fail.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@python/cuopt_server/cuopt_server/utils/linear_programming/solver.py`:
- Around line 354-362: Add an explicit guard to reject warmstart for batch LP
requests: if the incoming warmstart_data is not None and the call is a batch
solve (e.g., the code path that iterates and calls create_solver(...) per item /
the BatchSolve API), raise an HTTPException (400) with a clear message like
"Warmstart is not supported for batch LP requests". Locate the batch handling
code around where create_solver(i_data, warmstart_data) and the loop that calls
linear_programming.Solve(...) (symbols: create_solver, warmstart_data,
linear_programming.Solve) and enforce this check before entering the per-item
loop so the server returns a concise client error instead of letting individual
Solve() calls fail.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6b3549ac-cae6-47b0-acfc-5a0b7f438630

📥 Commits

Reviewing files that changed from the base of the PR and between baec347 and 58afbd3.

📒 Files selected for processing (4)
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
  • python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
  • python/cuopt_server/cuopt_server/utils/linear_programming/solver.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • docs/cuopt/source/cuopt-server/examples/lp-examples.rst
  • docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh
