
update routing tests #940

Open
Iroy30 wants to merge 2 commits into NVIDIA:main from Iroy30:update_routing_tests

Conversation

@Iroy30
Member

@Iroy30 Iroy30 commented Mar 6, 2026

Description

Issue

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

@Iroy30 Iroy30 requested a review from a team as a code owner March 6, 2026 15:57
@Iroy30 Iroy30 requested a review from tmckayus March 6, 2026 15:57
@copy-pr-bot

copy-pr-bot bot commented Mar 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

📝 Walkthrough

This pull request restructures the cuopt routing test suite by fixing a data model method to handle missing keys gracefully, removing multiple existing test files, and introducing new or modified tests focusing on specific routing features (PDPTW, prize collection, transit time matrices). It adds comprehensive API coverage documentation while consolidating test scenarios.

Changes

| Cohort | File(s) | Summary |
|---|---|---|
| Core Library Fix | python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx | Updated `get_order_service_times()` to return an empty `cudf.Series([])` when `vehicle_id` is absent, preventing `KeyError` exceptions. |
| Documentation | python/cuopt/cuopt/tests/routing/API_COVERAGE.md | Added comprehensive markdown documenting routing API coverage across the assignment and vehicle_routing modules, cataloging test coverage for enums, classes, getters, and setters, and identifying gaps. |
| Removed End-to-End Solver Tests | python/cuopt/cuopt/tests/routing/save_restore_test.py, solver_test.py, test_cvrptw.py, test_pickup_delivery.py, test_prize_collection.py, test_tsp.py, test_vehicle_routing.py | Deleted test files exercising full routing workflows, including Solomon datasets, CVRPTW, pickup-delivery, prize collection, and TSP scenarios with solver invocation and result assertions. |
| Removed Validation & Exception Tests | python/cuopt/cuopt/tests/routing/test_cuopt_exception.py, test_error_logging.py, test_validation.py, test_warnings.py | Removed unit tests validating DataModel constraints, error messages, infeasibility handling, and type-casting warnings. |
| Removed Vehicle-Specific Constraint Tests | python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py, test_vehicle_dependent_service_times.py, test_vehicle_fixed_costs.py, test_vehicle_max_cost.py, test_vehicle_max_time.py, test_vehicle_order_match.py, test_vehicle_types.py | Deleted tests for break configurations, service times, fixed costs, max cost/time constraints, vehicle-order matching, and multi-vehicle-type scenarios. |
| Enhanced Existing Tests | python/cuopt/cuopt/tests/routing/test_data_model.py | Replaced solver integration tests with direct matrix assertions; added `add_transit_time_matrix()` and `set_vehicle_types()` method coverage; introduced multi-vehicle-type validation. |
| Enhanced Existing Tests | python/cuopt/cuopt/tests/routing/test_initial_solutions.py | Updated to use `cudf.Series` for skip-first and drop-return flags; added retrieval and assertion of initial solutions via `get_initial_solutions()`. |
| New & Enhanced Solver Tests | python/cuopt/cuopt/tests/routing/test_solver.py | Added comprehensive tests for PDPTW scenarios (pickup-delivery pairs, transit times, route constraints) and prize-collection scenarios (multi-matrix setups, vehicle types, breaks, objective validation). |
| Enhanced Settings Tests | python/cuopt/cuopt/tests/routing/test_solver_settings.py | Replaced min-vehicles and max-distance tests with verbose-mode, time-limit getter/setter, and YAML config dump-and-load workflows. |
| New Validation Test File | python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py | Added a consolidated test module covering type-casting warnings, cost matrix validation, time window validation, order location bounds checking, and DataModel constructor validation. |
| Enhanced Vehicle Properties Tests | python/cuopt/cuopt/tests/routing/test_vehicle_properties.py | Added import of `ErrorStatus` from the vehicle_routing_wrapper module. |

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | The pull request description is completely empty except for a template with unchecked boxes and no substantive content, making it impossible to understand the intent or scope of the changes. | Fill in the Description section with details about what tests were changed, why they were modified, and what the changes accomplish. Explain the rationale for removing tests and adding new ones. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title 'update routing tests' is vague and generic, using non-descriptive language that doesn't convey the specific nature or scope of changes made to the routing tests. | Provide a more descriptive title that clarifies what testing changes were made, such as 'Consolidate and refactor routing test suite' or 'Remove deprecated routing tests and add new coverage'. |






@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (1)
python/cuopt/cuopt/tests/routing/test_solver.py (1)

60-253: Add at least one degenerate-case solver scenario in this new test module.

These additions are strong happy-path cases, but this file still misses explicit infeasible/empty/singleton coverage in solver-level assertions.

As per coding guidelines, **/*test*.{cpp,cu,py} should include degenerate cases (infeasible, unbounded, empty, singleton) and validate optimization correctness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 60 - 253, Add a
new test (e.g., test_degenerate_cases) alongside
test_pdptw/test_prize_collection that exercises solver degenerate scenarios: (1)
empty problem — construct a routing.DataModel with zero orders (or zero
vehicles) and call routing.Solve with routing.SolverSettings to assert
solution.get_status() is a valid code (no crash),
solution.get_vehicle_count()==0 and accepted/infeasible getters return empty
cudf.Series; (2) infeasible problem — create a DataModel where demand exceeds
all vehicle capacities or time windows are impossible, call routing.Solve and
assert solution.get_status() indicates infeasible and
solution.get_infeasible_orders() lists the problematic orders; and (3) singleton
feasible problem — one order and one vehicle, assert status==0, vehicle_count==1
and objective equals expected cost from the cost matrix; reuse
routing.SolverSettings, routing.Solve, DataModel methods (add_cost_matrix,
add_transit_time_matrix, add_capacity_dimension, set_order_time_windows, etc.)
and validate getters like get_accepted_solutions(), get_infeasible_orders(),
get_total_objective() and get_vehicle_count().

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 885022f2-de43-4f90-bae5-2d699c0381b0

📥 Commits

Reviewing files that changed from the base of the PR and between 1c6f86b and d09fae9.

📒 Files selected for processing (26)
  • python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx
  • python/cuopt/cuopt/tests/routing/API_COVERAGE.md
  • python/cuopt/cuopt/tests/routing/save_restore_test.py
  • python/cuopt/cuopt/tests/routing/solver_test.py
  • python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
  • python/cuopt/cuopt/tests/routing/test_cvrptw.py
  • python/cuopt/cuopt/tests/routing/test_data_model.py
  • python/cuopt/cuopt/tests/routing/test_error_logging.py
  • python/cuopt/cuopt/tests/routing/test_initial_solutions.py
  • python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
  • python/cuopt/cuopt/tests/routing/test_prize_collection.py
  • python/cuopt/cuopt/tests/routing/test_solver.py
  • python/cuopt/cuopt/tests/routing/test_solver_settings.py
  • python/cuopt/cuopt/tests/routing/test_tsp.py
  • python/cuopt/cuopt/tests/routing/test_validation.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_types.py
  • python/cuopt/cuopt/tests/routing/test_warnings.py
  • python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py
💤 Files with no reviewable changes (18)
  • python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_types.py
  • python/cuopt/cuopt/tests/routing/test_error_logging.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
  • python/cuopt/cuopt/tests/routing/test_prize_collection.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
  • python/cuopt/cuopt/tests/routing/test_warnings.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
  • python/cuopt/cuopt/tests/routing/test_cvrptw.py
  • python/cuopt/cuopt/tests/routing/test_validation.py
  • python/cuopt/cuopt/tests/routing/solver_test.py
  • python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
  • python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
  • python/cuopt/cuopt/tests/routing/test_tsp.py
  • python/cuopt/cuopt/tests/routing/save_restore_test.py

Comment on lines 657 to +661
 def get_order_service_times(self, vehicle_id):
-    return self.order_service_times[vehicle_id]
+    if vehicle_id in self.order_service_times:
+        return self.order_service_times[vehicle_id]
+    else:
+        return cudf.Series([])

⚠️ Potential issue | 🟠 Major

Missing fallback to default service-times entry (vehicle_id=-1).

On Line 658-661, a missing vehicle-specific key returns an empty series immediately. This skips the default service-time mapping (if set under -1) and can produce incorrect getter behavior.

🔧 Proposed fix
 def get_order_service_times(self, vehicle_id):
-        if vehicle_id in self.order_service_times:        
-            return self.order_service_times[vehicle_id]
-        else:
-            return cudf.Series([])
+        if vehicle_id in self.order_service_times:
+            return self.order_service_times[vehicle_id]
+        if -1 in self.order_service_times:
+            return self.order_service_times[-1]
+        return cudf.Series([], dtype=np.int32)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx` around lines 657 -
661, get_order_service_times currently returns an empty cudf.Series when
vehicle_id is missing, skipping any default mapping stored under key -1; update
the function (get_order_service_times) to first check self.order_service_times
for vehicle_id, then fallback to self.order_service_times.get(-1) (or check "-1"
as appropriate for the dict keys) and return that default if present, and only
return cudf.Series([]) when neither vehicle_id nor the -1 default exists.
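The lookup order the reviewer proposes can be sketched with a plain dict standing in for the wrapper's vehicle-id-to-service-times mapping; the function and variable names here are illustrative, not the actual cuopt code, and a tuple stands in for the empty `cudf.Series`:

```python
# Sketch of the proposed lookup order: vehicle-specific entry first, then the
# default stored under key -1, and only then an empty result.
def get_order_service_times(order_service_times, vehicle_id, empty=()):
    if vehicle_id in order_service_times:
        return order_service_times[vehicle_id]   # vehicle-specific mapping
    if -1 in order_service_times:
        return order_service_times[-1]           # fall back to the default
    return empty                                 # nothing configured at all

times = {-1: [5, 5, 5], 2: [10, 10]}
assert get_order_service_times(times, 2) == [10, 10]   # specific entry wins
assert get_order_service_times(times, 7) == [5, 5, 5]  # default under -1
assert get_order_service_times({}, 7) == ()            # truly empty
```

The early-return-on-miss in the PR skips the second branch, which is exactly the behavior change the review flags.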

Comment on lines +98 to +99
| `set_verbose_mode()` | **No** | — |
| `set_error_logging_mode()` | Yes | test_error_logging |

⚠️ Potential issue | 🟡 Minor

Coverage summary contradicts the table for set_verbose_mode().

Line 98 marks set_verbose_mode() as No, while Line 124 says all SolverSettings setters are covered. Please make these consistent.

Based on learnings, docs should be updated accurately for API behavior/coverage changes.

Also applies to: 124-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/API_COVERAGE.md` around lines 98 - 99, The
coverage table incorrectly marks set_verbose_mode() as "No" while the summary
states all SolverSettings setters are covered; update the table row for
`set_verbose_mode()` to reflect coverage (change "No" to "Yes" and reference the
corresponding test name if applicable) or adjust the summary to match the table;
ensure consistency between the table entries for `set_verbose_mode()` and
`set_error_logging_mode()` and the summary sentence about `SolverSettings`
setters so the API_COVERAGE.md accurately reflects current test coverage.

Comment on lines +128 to +134
    ret_initial = d.get_initial_solutions()
    assert len(ret_initial) == 4
    ret_sizes = sorted(len(x) for x in ret_initial)
    expected_sizes = sorted([
        len(vehicle_ids), len(routes), len(types), len(sol_offsets)
    ])
    assert ret_sizes == expected_sizes

⚠️ Potential issue | 🟠 Major

Initial-solution round-trip check is too weak (size-only).

The new assertions verify only lengths, so a reordered/corrupted payload could still pass. This should assert value-level equality for vehicle ids, routes, node types, and offsets.

✅ Suggested stronger assertions
-    ret_initial = d.get_initial_solutions()
-    assert len(ret_initial) == 4
-    ret_sizes = sorted(len(x) for x in ret_initial)
-    expected_sizes = sorted([
-        len(vehicle_ids), len(routes), len(types), len(sol_offsets)
-    ])
-    assert ret_sizes == expected_sizes
+    ret_vehicle_ids, ret_routes, ret_types, ret_offsets = d.get_initial_solutions()
+    assert (ret_vehicle_ids == vehicle_ids).all()
+    assert (ret_routes == routes).all()
+    assert (ret_types == types).all()
+    assert (ret_offsets == sol_offsets).all()

As per coding guidelines, tests should verify transformation/index consistency across representations, not just structural size checks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py` around lines 128
- 134, The test currently only checks lengths of get_initial_solutions()
results; change it to assert value-level equality for each component returned by
d.get_initial_solutions() against the originals (vehicle_ids, routes, types,
sol_offsets). Locate the call to d.get_initial_solutions() and the ret_initial
variable, unpack or iterate the returned items and compare the actual vehicle id
lists, route contents, node type lists, and solution offset lists/arrays to the
original variables (vehicle_ids, routes, types, sol_offsets) so ordering and
values are validated rather than just sizes.
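The size-only weakness is easy to see with plain lists standing in for the cudf columns returned by `get_initial_solutions()`; the stand-in function below is hypothetical and only illustrates the value-level comparison pattern:

```python
# Size-only checks pass for reordered or corrupted data; comparing contents
# element-wise is what actually validates the round trip.
vehicle_ids = [0, 0, 1]
routes = [1, 2, 3]
types = ["Depot", "Delivery", "Depot"]
sol_offsets = [0, 3]

def get_initial_solutions():
    # Hypothetical stand-in for d.get_initial_solutions().
    return vehicle_ids[:], routes[:], types[:], sol_offsets[:]

ret_vehicle_ids, ret_routes, ret_types, ret_offsets = get_initial_solutions()
assert ret_vehicle_ids == vehicle_ids   # values and ordering, not just length
assert ret_routes == routes
assert ret_types == types
assert ret_offsets == sol_offsets
```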

Comment on lines +51 to +71
def test_dump_config():
    """Test SolverSettings solve with config file"""
    s = routing.SolverSettings()
    config_file = "solver_cfg.yaml"
    s.dump_config_file(config_file)
    assert s.get_config_file_name() == config_file

    # Small example data model: 3 locations, 1 vehicle
    cost = cudf.DataFrame(
        [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]],
        dtype=np.float32,
    )
    dm = routing.DataModel(3, 1)
    dm.add_cost_matrix(cost)
    s.set_time_limit(2)
    routing_solution = routing.Solve(dm, s)
    assert routing_solution.get_status() == 0

    # Load from written solver_cfg.yaml and solve again
    dm_from_yaml, s_from_yaml = utils.create_data_model_from_yaml(config_file)
    solution_from_yaml = routing.Solve(dm_from_yaml, s_from_yaml)

⚠️ Potential issue | 🟠 Major

Use an isolated temp path instead of a fixed config filename.

Using "solver_cfg.yaml" in CWD can collide across parallel or repeated runs. Prefer tmp_path to keep this test hermetic.

🧪 Proposed fix
-def test_dump_config():
+def test_dump_config(tmp_path):
     """Test SolverSettings solve with config file"""
     s = routing.SolverSettings()
-    config_file = "solver_cfg.yaml"
+    config_file = str(tmp_path / "solver_cfg.yaml")
     s.dump_config_file(config_file)
     assert s.get_config_file_name() == config_file

As per coding guidelines, test isolation should prevent file/global state leakage between test cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_solver_settings.py` around lines 51 -
71, The test test_dump_config uses a fixed filename "solver_cfg.yaml" which can
collide across runs; change the test to use a unique temporary path (via pytest
tmp_path or tmp_path_factory) and pass that path to
SolverSettings.dump_config_file and to utils.create_data_model_from_yaml so the
file is written and read from an isolated location; update references in the
test for s.dump_config_file(...), s.get_config_file_name(), and
utils.create_data_model_from_yaml(...) to use the tmp path.
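The isolation pattern can be sketched with the stdlib `tempfile` module (inside a pytest test, the `tmp_path` fixture gives the same guarantee); `dump_config` and `load_config` here are illustrative stand-ins, not the cuopt calls:

```python
# Writing the config file under a per-run temporary directory means parallel
# or repeated runs can never collide on a shared filename in the CWD.
import os
import tempfile

def dump_config(path):
    # Hypothetical stand-in for SolverSettings.dump_config_file().
    with open(path, "w") as f:
        f.write("time_limit: 2\n")

def load_config(path):
    # Hypothetical stand-in for utils.create_data_model_from_yaml().
    with open(path) as f:
        return f.read()

with tempfile.TemporaryDirectory() as tmp:
    config_file = os.path.join(tmp, "solver_cfg.yaml")  # unique per run
    dump_config(config_file)
    assert load_config(config_file) == "time_limit: 2\n"

# The directory is cleaned up on exit, so no state leaks between tests.
assert not os.path.exists(config_file)
```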

Comment on lines +138 to +140
# Optional: basic objective check
total_cost = solution.get_total_objective()
assert total_cost == 13.0

⚠️ Potential issue | 🟡 Minor

Use tolerant comparisons for floating objectives.

Line 140 and Line 248-250 use exact float equality; these can be flaky with solver/runtime numeric variance.

🔧 Suggested change
-    assert total_cost == 13.0
+    assert math.isclose(total_cost, 13.0, rel_tol=1e-6, abs_tol=1e-6)

-    assert sol.get_total_objective() == -13.0
-    assert objectives[routing.Objective.PRIZE] == -26.0
-    assert objectives[routing.Objective.COST] == 13.0
+    assert math.isclose(sol.get_total_objective(), -13.0, rel_tol=1e-6, abs_tol=1e-6)
+    assert math.isclose(objectives[routing.Objective.PRIZE], -26.0, rel_tol=1e-6, abs_tol=1e-6)
+    assert math.isclose(objectives[routing.Objective.COST], 13.0, rel_tol=1e-6, abs_tol=1e-6)

Also applies to: 247-250

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 138 - 140, The
test uses exact float equality on solution.get_total_objective(), which is
brittle; change those assertions to tolerant comparisons (e.g., use
pytest.approx or math.isclose) so small numeric differences won't fail the test.
Update the assertion(s) that currently read assert total_cost == 13.0 and the
similar assertions later in the file (the block checking total objective around
the second solution) to assert total_cost == pytest.approx(13.0) or equivalent
with a defined rel/abs tolerance, keeping the same expected value but allowing a
small tolerance.
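The brittleness of exact equality is reproducible with nothing but stdlib floats; the values below are illustrative, not the solver's actual objective:

```python
# An objective accumulated in floating point can differ from the algebraic
# value by a few ulps, so exact equality is unreliable; math.isclose (or
# pytest.approx) absorbs that variance.
import math

total_cost = sum([1.1] * 10)   # algebraically 11.0, but not in binary floats
assert total_cost != 11.0      # exact comparison already fails here
assert math.isclose(total_cost, 11.0, rel_tol=1e-6, abs_tol=1e-6)
```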

Comment on lines +496 to +497
# TO DO: Check if breaks are adhered to
assert routing_solution.get_status() == 0

⚠️ Potential issue | 🟠 Major

Break tests are missing constraint-validation assertions.

Line 496 and Line 564 explicitly leave break adherence unverified, so these tests currently only check “solver runs” and not whether computed routes satisfy break windows/durations.

✅ Suggested assertion pattern
-    # TO DO: Check if breaks are adhered to
-    assert routing_solution.get_status() == 0
+    assert routing_solution.get_status() == 0
+    routes = routing_solution.get_route().to_pandas()
+    break_rows = routes[routes["type"] == "Break"]
+    assert not break_rows.empty
+    # Validate break arrival stamps are within configured windows
+    # (adapt indexing by break dimension as needed for this route schema).

As per coding guidelines, **/*test*.{cpp,cu,py} must validate numerical correctness of optimization results, not just successful execution.

Also applies to: 564-565

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 496
- 497, The test currently only checks routing_solution.get_status() == 0 and
misses validating that break constraints are respected; update the test to
iterate the returned routes/trips from the solution (use the existing
routing_solution or solution accessor methods) and assert for each scheduled
break that its start time lies within the specified break window and that its
duration matches the expected break duration (and that breaks do not overlap
work segments); use the route/vehicle identifiers and break objects already
present in the test fixtures (e.g., the route/trip list, break.window.start/end,
and break.duration fields) to locate and validate each break, and add the same
assertions for the other test instance that currently omits them to ensure
numerical correctness of break adherence.
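The shape of the break-adherence assertions can be sketched with plain tuples standing in for route stops; the field names and route schema below are illustrative, not the actual cuopt solution format:

```python
# For each Break stop, assert the break starts inside its configured window
# and that the stop's dwell equals the configured break duration.
breaks = {0: {"window": (10, 20), "duration": 5}}  # vehicle 0's break spec
# Each stop: (vehicle_id, stop_type, arrival, departure)
route = [
    (0, "Depot", 0, 0),
    (0, "Break", 12, 17),
    (0, "Delivery", 20, 25),
]

for vehicle_id, stop_type, arrival, departure in route:
    if stop_type != "Break":
        continue
    spec = breaks[vehicle_id]
    start, end = spec["window"]
    assert start <= arrival <= end                   # break within its window
    assert departure - arrival == spec["duration"]   # full duration observed
```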

Comment on lines +25 to +36
with warnings.catch_warnings(record=True) as w:
    dm.add_cost_matrix(cost_matrix)
    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)

    dm.set_order_time_windows(
        constraints["earliest"], constraints["latest"]
    )

    dm.set_order_service_times(constraints["service"])
    assert "Casting service_times from float64 to int32" in str(
        w[1].message
    )

⚠️ Potential issue | 🟠 Major

Warning assertions are order-dependent and flaky.

This test assumes exact warning positions (w[0], w[1]), but additional cast warnings can shift ordering and break the test intermittently.

🛠️ Proposed stabilization
 with warnings.catch_warnings(record=True) as w:
+    warnings.simplefilter("always")
     dm.add_cost_matrix(cost_matrix)
-    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)
+    msgs = [str(item.message) for item in w]
+    assert any("Casting cost_matrix from int64 to float32" in m for m in msgs)

     dm.set_order_time_windows(
         constraints["earliest"], constraints["latest"]
     )

     dm.set_order_service_times(constraints["service"])
-    assert "Casting service_times from float64 to int32" in str(
-        w[1].message
-    )
+    msgs = [str(item.message) for item in w]
+    assert any("Casting service_times from float64 to int32" in m for m in msgs)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines 25
- 36, The assertions in test_warnings_exceptions.py rely on exact positions
w[0]/w[1] which is flaky; update the checks to search the captured warnings list
for messages containing the expected substrings instead of indexing. For
example, after calling dm.add_cost_matrix, dm.set_order_time_windows, and
dm.set_order_service_times, replace uses of w[0].message and w[1].message with
assertions using any("Casting cost_matrix from int64 to float32" in
str(m.message) for m in w) and any("Casting service_times from float64 to int32"
in str(m.message) for m in w) so the test passes regardless of warning order.
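The order-independent pattern works with stdlib `warnings` alone; the messages below mirror the casts under test but are emitted directly for illustration:

```python
# Searching the captured list for substrings keeps the assertions stable even
# if an extra cast warning shifts positions in the recorded sequence.
import warnings

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")  # record every warning, no deduplication
    warnings.warn("Casting cost_matrix from int64 to float32")
    warnings.warn("some unrelated warning inserted in between")
    warnings.warn("Casting service_times from float64 to int32")

msgs = [str(item.message) for item in w]
assert any("Casting cost_matrix from int64 to float32" in m for m in msgs)
assert any("Casting service_times from float64 to int32" in m for m in msgs)
```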

@anandhkb anandhkb added this to the 26.04 milestone Mar 8, 2026


2 participants