📝 Walkthrough

This pull request restructures the cuopt routing test suite: it fixes a data model method to handle missing keys gracefully, removes multiple existing test files, and introduces new or modified tests focused on specific routing features (PDPTW, prize collection, transit time matrices). It also adds comprehensive API coverage documentation while consolidating test scenarios.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)
Actionable comments posted: 7
🧹 Nitpick comments (1)
python/cuopt/cuopt/tests/routing/test_solver.py (1)
60-253: Add at least one degenerate-case solver scenario in this new test module.

These additions are strong happy-path cases, but this file still misses explicit infeasible/empty/singleton coverage in solver-level assertions.
As per coding guidelines, `**/*test*.{cpp,cu,py}` should include degenerate cases (infeasible, unbounded, empty, singleton) and validate optimization correctness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 60 - 253, Add a new test (e.g., test_degenerate_cases) alongside test_pdptw/test_prize_collection that exercises solver degenerate scenarios: (1) empty problem — construct a routing.DataModel with zero orders (or zero vehicles) and call routing.Solve with routing.SolverSettings to assert solution.get_status() is a valid code (no crash), solution.get_vehicle_count()==0 and accepted/infeasible getters return empty cudf.Series; (2) infeasible problem — create a DataModel where demand exceeds all vehicle capacities or time windows are impossible, call routing.Solve and assert solution.get_status() indicates infeasible and solution.get_infeasible_orders() lists the problematic orders; and (3) singleton feasible problem — one order and one vehicle, assert status==0, vehicle_count==1 and objective equals expected cost from the cost matrix; reuse routing.SolverSettings, routing.Solve, DataModel methods (add_cost_matrix, add_transit_time_matrix, add_capacity_dimension, set_order_time_windows, etc.) and validate getters like get_accepted_solutions(), get_infeasible_orders(), get_total_objective() and get_vehicle_count().
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 885022f2-de43-4f90-bae5-2d699c0381b0
📒 Files selected for processing (26)
- python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx
- python/cuopt/cuopt/tests/routing/API_COVERAGE.md
- python/cuopt/cuopt/tests/routing/save_restore_test.py
- python/cuopt/cuopt/tests/routing/solver_test.py
- python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
- python/cuopt/cuopt/tests/routing/test_cvrptw.py
- python/cuopt/cuopt/tests/routing/test_data_model.py
- python/cuopt/cuopt/tests/routing/test_error_logging.py
- python/cuopt/cuopt/tests/routing/test_initial_solutions.py
- python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
- python/cuopt/cuopt/tests/routing/test_prize_collection.py
- python/cuopt/cuopt/tests/routing/test_solver.py
- python/cuopt/cuopt/tests/routing/test_solver_settings.py
- python/cuopt/cuopt/tests/routing/test_tsp.py
- python/cuopt/cuopt/tests/routing/test_validation.py
- python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
- python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
- python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
- python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
- python/cuopt/cuopt/tests/routing/test_vehicle_properties.py
- python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
- python/cuopt/cuopt/tests/routing/test_vehicle_types.py
- python/cuopt/cuopt/tests/routing/test_warnings.py
- python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py
💤 Files with no reviewable changes (18)
- python/cuopt/cuopt/tests/routing/test_pickup_delivery.py
- python/cuopt/cuopt/tests/routing/test_vehicle_types.py
- python/cuopt/cuopt/tests/routing/test_error_logging.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_time.py
- python/cuopt/cuopt/tests/routing/test_prize_collection.py
- python/cuopt/cuopt/tests/routing/test_vehicle_breaks.py
- python/cuopt/cuopt/tests/routing/test_vehicle_routing.py
- python/cuopt/cuopt/tests/routing/test_vehicle_dependent_service_times.py
- python/cuopt/cuopt/tests/routing/test_warnings.py
- python/cuopt/cuopt/tests/routing/test_vehicle_max_cost.py
- python/cuopt/cuopt/tests/routing/test_vehicle_fixed_costs.py
- python/cuopt/cuopt/tests/routing/test_cvrptw.py
- python/cuopt/cuopt/tests/routing/test_validation.py
- python/cuopt/cuopt/tests/routing/solver_test.py
- python/cuopt/cuopt/tests/routing/test_cuopt_exception.py
- python/cuopt/cuopt/tests/routing/test_vehicle_order_match.py
- python/cuopt/cuopt/tests/routing/test_tsp.py
- python/cuopt/cuopt/tests/routing/save_restore_test.py
```diff
 def get_order_service_times(self, vehicle_id):
-    return self.order_service_times[vehicle_id]
+    if vehicle_id in self.order_service_times:
+        return self.order_service_times[vehicle_id]
+    else:
+        return cudf.Series([])
```
Missing fallback to default service-times entry (vehicle_id=-1).
On Line 658-661, a missing vehicle-specific key returns an empty series immediately. This skips the default service-time mapping (if set under -1) and can produce incorrect getter behavior.
🔧 Proposed fix

```diff
 def get_order_service_times(self, vehicle_id):
-    if vehicle_id in self.order_service_times:
-        return self.order_service_times[vehicle_id]
-    else:
-        return cudf.Series([])
+    if vehicle_id in self.order_service_times:
+        return self.order_service_times[vehicle_id]
+    if -1 in self.order_service_times:
+        return self.order_service_times[-1]
+    return cudf.Series([], dtype=np.int32)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/routing/vehicle_routing_wrapper.pyx` around lines 657 -
661, get_order_service_times currently returns an empty cudf.Series when
vehicle_id is missing, skipping any default mapping stored under key -1; update
the function (get_order_service_times) to first check self.order_service_times
for vehicle_id, then fallback to self.order_service_times.get(-1) (or check "-1"
as appropriate for the dict keys) and return that default if present, and only
return cudf.Series([]) when neither vehicle_id nor the -1 default exists.
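The fallback chain in the proposed fix can be sketched with a plain dict standing in for the wrapper's cudf-backed mapping; the keys and service-time values below are invented for illustration:

```python
# Lookup with a per-vehicle entry first, then the default stored under -1.
# Plain lists stand in for cudf.Series so the sketch runs without cuopt.
def get_order_service_times(order_service_times, vehicle_id):
    if vehicle_id in order_service_times:
        return order_service_times[vehicle_id]
    if -1 in order_service_times:
        return order_service_times[-1]  # default mapping for all vehicles
    return []  # neither the vehicle nor a default is present

times = {-1: [10, 10, 10], 2: [5, 7, 9]}
assert get_order_service_times(times, 2) == [5, 7, 9]     # vehicle-specific
assert get_order_service_times(times, 0) == [10, 10, 10]  # falls back to -1
assert get_order_service_times({}, 0) == []               # nothing to return
```

The key point is that the empty-series return is the last resort, not the first branch taken when the vehicle-specific key is absent.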
| `set_verbose_mode()` | **No** | — |
| `set_error_logging_mode()` | Yes | test_error_logging |
Coverage summary contradicts the table for set_verbose_mode().
Line 98 marks set_verbose_mode() as No, while Line 124 says all SolverSettings setters are covered. Please make these consistent.
Based on learnings, docs should be updated accurately for API behavior/coverage changes.
Also applies to: 124-124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/API_COVERAGE.md` around lines 98 - 99, The
coverage table incorrectly marks set_verbose_mode() as "No" while the summary
states all SolverSettings setters are covered; update the table row for
`set_verbose_mode()` to reflect coverage (change "No" to "Yes" and reference the
corresponding test name if applicable) or adjust the summary to match the table;
ensure consistency between the table entries for `set_verbose_mode()` and
`set_error_logging_mode()` and the summary sentence about `SolverSettings`
setters so the API_COVERAGE.md accurately reflects current test coverage.
```python
ret_initial = d.get_initial_solutions()
assert len(ret_initial) == 4
ret_sizes = sorted(len(x) for x in ret_initial)
expected_sizes = sorted([
    len(vehicle_ids), len(routes), len(types), len(sol_offsets)
])
assert ret_sizes == expected_sizes
```
Initial-solution round-trip check is too weak (size-only).
The new assertions verify only lengths, so a reordered/corrupted payload could still pass. This should assert value-level equality for vehicle ids, routes, node types, and offsets.
✅ Suggested stronger assertions

```diff
-ret_initial = d.get_initial_solutions()
-assert len(ret_initial) == 4
-ret_sizes = sorted(len(x) for x in ret_initial)
-expected_sizes = sorted([
-    len(vehicle_ids), len(routes), len(types), len(sol_offsets)
-])
-assert ret_sizes == expected_sizes
+ret_vehicle_ids, ret_routes, ret_types, ret_offsets = d.get_initial_solutions()
+assert (ret_vehicle_ids == vehicle_ids).all()
+assert (ret_routes == routes).all()
+assert (ret_types == types).all()
+assert (ret_offsets == sol_offsets).all()
```

As per coding guidelines, tests should verify transformation/index consistency across representations, not just structural size checks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_initial_solutions.py` around lines 128
- 134, The test currently only checks lengths of get_initial_solutions()
results; change it to assert value-level equality for each component returned by
d.get_initial_solutions() against the originals (vehicle_ids, routes, types,
sol_offsets). Locate the call to d.get_initial_solutions() and the ret_initial
variable, unpack or iterate the returned items and compare the actual vehicle id
lists, route contents, node type lists, and solution offset lists/arrays to the
original variables (vehicle_ids, routes, types, sol_offsets) so ordering and
values are validated rather than just sizes.
```python
def test_dump_config():
    """Test SolverSettings solve with config file"""
    s = routing.SolverSettings()
    config_file = "solver_cfg.yaml"
    s.dump_config_file(config_file)
    assert s.get_config_file_name() == config_file

    # Small example data model: 3 locations, 1 vehicle
    cost = cudf.DataFrame(
        [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]],
        dtype=np.float32,
    )
    dm = routing.DataModel(3, 1)
    dm.add_cost_matrix(cost)
    s.set_time_limit(2)
    routing_solution = routing.Solve(dm, s)
    assert routing_solution.get_status() == 0

    # Load from written solver_cfg.yaml and solve again
    dm_from_yaml, s_from_yaml = utils.create_data_model_from_yaml(config_file)
    solution_from_yaml = routing.Solve(dm_from_yaml, s_from_yaml)
```
Use an isolated temp path instead of a fixed config filename.
Using "solver_cfg.yaml" in CWD can collide across parallel or repeated runs. Prefer tmp_path to keep this test hermetic.
🧪 Proposed fix

```diff
-def test_dump_config():
+def test_dump_config(tmp_path):
     """Test SolverSettings solve with config file"""
     s = routing.SolverSettings()
-    config_file = "solver_cfg.yaml"
+    config_file = str(tmp_path / "solver_cfg.yaml")
     s.dump_config_file(config_file)
     assert s.get_config_file_name() == config_file
```

As per coding guidelines, test isolation should prevent file/global state leakage between test cases.
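Outside pytest, the same isolation idea can be shown with the stdlib `tempfile` module; the YAML content written below is a placeholder, not a real cuopt config:

```python
import os
import tempfile

# Write the config into a throwaway directory so repeated or parallel
# runs never collide on a shared "solver_cfg.yaml" in the CWD.
with tempfile.TemporaryDirectory() as tmp_dir:
    config_file = os.path.join(tmp_dir, "solver_cfg.yaml")
    with open(config_file, "w") as f:
        f.write("time_limit: 2\n")  # placeholder content
    assert os.path.exists(config_file)
# The directory and everything in it are removed once the block exits.
assert not os.path.exists(config_file)
```

pytest's `tmp_path` fixture provides the same guarantee per test, plus a unique directory per test invocation.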
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_solver_settings.py` around lines 51 -
71, The test test_dump_config uses a fixed filename "solver_cfg.yaml" which can
collide across runs; change the test to use a unique temporary path (via pytest
tmp_path or tmp_path_factory) and pass that path to
SolverSettings.dump_config_file and to utils.create_data_model_from_yaml so the
file is written and read from an isolated location; update references in the
test for s.dump_config_file(...), s.get_config_file_name(), and
utils.create_data_model_from_yaml(...) to use the tmp path.
```python
# Optional: basic objective check
total_cost = solution.get_total_objective()
assert total_cost == 13.0
```
Use tolerant comparisons for floating objectives.
Line 140 and Line 248-250 use exact float equality; these can be flaky with solver/runtime numeric variance.
🔧 Suggested change

```diff
-assert total_cost == 13.0
+assert math.isclose(total_cost, 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

```diff
-assert sol.get_total_objective() == -13.0
-assert objectives[routing.Objective.PRIZE] == -26.0
-assert objectives[routing.Objective.COST] == 13.0
+assert math.isclose(sol.get_total_objective(), -13.0, rel_tol=1e-6, abs_tol=1e-6)
+assert math.isclose(objectives[routing.Objective.PRIZE], -26.0, rel_tol=1e-6, abs_tol=1e-6)
+assert math.isclose(objectives[routing.Objective.COST], 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

Also applies to: 247-250
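The difference between exact and tolerant float comparison is easy to demonstrate with the stdlib alone; the objective value below is invented to mimic float32 accumulation error:

```python
import math

total_cost = 12.999999046325684  # e.g. 13.0 perturbed by ~1e-6 of rounding
assert total_cost != 13.0  # exact equality fails on harmless noise
# isclose passes when |a - b| <= max(rel_tol * max(|a|, |b|), abs_tol)
assert math.isclose(total_cost, 13.0, rel_tol=1e-6, abs_tol=1e-6)
```

`pytest.approx(13.0)` is an equivalent, often more readable, spelling inside a pytest suite.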
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_solver.py` around lines 138 - 140, The
test uses exact float equality on solution.get_total_objective(), which is
brittle; change those assertions to tolerant comparisons (e.g., use
pytest.approx or math.isclose) so small numeric differences won't fail the test.
Update the assertion(s) that currently read assert total_cost == 13.0 and the
similar assertions later in the file (the block checking total objective around
the second solution) to assert total_cost == pytest.approx(13.0) or equivalent
with a defined rel/abs tolerance, keeping the same expected value but allowing a
small tolerance.
```python
# TO DO: Check if breaks are adhered to
assert routing_solution.get_status() == 0
```
Break tests are missing constraint-validation assertions.
Line 496 and Line 564 explicitly leave break adherence unverified, so these tests currently only check “solver runs” and not whether computed routes satisfy break windows/durations.
✅ Suggested assertion pattern

```diff
-# TO DO: Check if breaks are adhered to
-assert routing_solution.get_status() == 0
+assert routing_solution.get_status() == 0
+routes = routing_solution.get_route().to_pandas()
+break_rows = routes[routes["type"] == "Break"]
+assert not break_rows.empty
+# Validate break arrival stamps are within configured windows
+# (adapt indexing by break dimension as needed for this route schema).
```

As per coding guidelines, `**/*test*.{cpp,cu,py}` must validate numerical correctness of optimization results, not just successful execution.
Also applies to: 564-565
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_vehicle_properties.py` around lines 496
- 497, The test currently only checks routing_solution.get_status() == 0 and
misses validating that break constraints are respected; update the test to
iterate the returned routes/trips from the solution (use the existing
routing_solution or solution accessor methods) and assert for each scheduled
break that its start time lies within the specified break window and that its
duration matches the expected break duration (and that breaks do not overlap
work segments); use the route/vehicle identifiers and break objects already
present in the test fixtures (e.g., the route/trip list, break.window.start/end,
and break.duration fields) to locate and validate each break, and add the same
assertions for the other test instance that currently omits them to ensure
numerical correctness of break adherence.
```python
with warnings.catch_warnings(record=True) as w:
    dm.add_cost_matrix(cost_matrix)
    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)

    dm.set_order_time_windows(
        constraints["earliest"], constraints["latest"]
    )

    dm.set_order_service_times(constraints["service"])
    assert "Casting service_times from float64 to int32" in str(
        w[1].message
    )
```
Warning assertions are order-dependent and flaky.
This test assumes exact warning positions (w[0], w[1]), but additional cast warnings can shift ordering and break the test intermittently.
🛠️ Proposed stabilization

```diff
 with warnings.catch_warnings(record=True) as w:
+    warnings.simplefilter("always")
     dm.add_cost_matrix(cost_matrix)
-    assert "Casting cost_matrix from int64 to float32" in str(w[0].message)
+    msgs = [str(item.message) for item in w]
+    assert any("Casting cost_matrix from int64 to float32" in m for m in msgs)
     dm.set_order_time_windows(
         constraints["earliest"], constraints["latest"]
     )
     dm.set_order_service_times(constraints["service"])
-    assert "Casting service_times from float64 to int32" in str(
-        w[1].message
-    )
+    msgs = [str(item.message) for item in w]
+    assert any("Casting service_times from float64 to int32" in m for m in msgs)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@python/cuopt/cuopt/tests/routing/test_warnings_exceptions.py` around lines 25
- 36, The assertions in test_warnings_exceptions.py rely on exact positions
w[0]/w[1] which is flaky; update the checks to search the captured warnings list
for messages containing the expected substrings instead of indexing. For
example, after calling dm.add_cost_matrix, dm.set_order_time_windows, and
dm.set_order_service_times, replace uses of w[0].message and w[1].message with
assertions using any("Casting cost_matrix from int64 to float32" in
str(m.message) for m in w) and any("Casting service_times from float64 to int32"
in str(m.message) for m in w) so the test passes regardless of warning order.
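The order-independent pattern above is runnable with the stdlib alone; plain `warnings.warn` calls stand in for the cuopt cast warnings here:

```python
import warnings

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")  # record every warning, even duplicates
    warnings.warn("Casting cost_matrix from int64 to float32")
    warnings.warn("some unrelated cast warning")  # would shift w[0]/w[1]
    warnings.warn("Casting service_times from float64 to int32")

msgs = [str(item.message) for item in w]
# Substring search passes regardless of emission order.
assert any("Casting cost_matrix from int64 to float32" in m for m in msgs)
assert any("Casting service_times from float64 to int32" in m for m in msgs)
```

Note that `warnings.simplefilter("always")` also protects against Python's once-per-location warning deduplication silently dropping entries from `w`.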