
Refactoring min-dep tests#8561

Open
ericspod wants to merge 61 commits into Project-MONAI:dev from ericspod:cicd_test_update

Conversation


@ericspod ericspod commented Sep 8, 2025

Description

This PR refactors the CI/CD tests to reduce code volume in the YAML files. Branch rules will have to be changed temporarily so that jobs are not expected under names that are no longer used, then changed again once the PR is accepted to add the jobs back under their new names. This would be a good opportunity to move branch rules to rulesets, the newer GitHub mechanism. The PR also adds code to remove unused tools from the Linux runners to avoid running out of disk space during testing; this has increasingly become an issue as the runner image has grown and more libraries are installed.

It also removes testing support for Python 3.9, updates the tested PyTorch versions in line with our support policy, and bumps the versions of some actions and dependencies, notably black for security reasons. Testing Windows with full dependencies requires a much later version of PyTorch due to a gloo bug, and macOS requires the explicit installation of pybind11 for now. A number of formatting fixes were made to account for newer versions of black and other tools, several imports were moved to match Python 3.10 usage, and a number of type annotations were changed to remove Union and Optional.
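As an illustration of the annotation changes, the pattern is to replace `typing.Union`/`typing.Optional` with PEP 604 union syntax and to import `Callable`/`Sequence` from `collections.abc` (hypothetical function, not actual MONAI code):

```python
# Illustrative only: the Python 3.10 annotation style adopted in this PR.
from __future__ import annotations

from collections.abc import Callable, Sequence  # moved here from typing


# Before: def apply(fn: Optional[Callable[[int], int]], sizes: Union[int, Sequence[int]]) -> Optional[int]
# After (PEP 604 unions, no Union/Optional imports needed):
def apply(fn: Callable[[int], int] | None, sizes: int | Sequence[int]) -> int | None:
    total = sizes if isinstance(sizes, int) else sum(sizes)
    return None if fn is None else fn(total)
```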

Windows tests with versions of PyTorch between 2.8 and 2.10 seem to suffer from the gloo bug mentioned in pytorch/pytorch#150381.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
@Project-MONAI Project-MONAI deleted a comment from coderabbitai bot Sep 10, 2025
ericspod and others added 7 commits November 11, 2025 15:00
@Project-MONAI Project-MONAI deleted a comment from coderabbitai bot Nov 12, 2025

coderabbitai bot commented Dec 7, 2025

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6ba0a94c-59df-4525-aa57-9ac9267f4f61

📥 Commits

Reviewing files that changed from the base of the PR and between d243970 and 5675306.

📒 Files selected for processing (1)
  • monai/apps/detection/transforms/dictionary.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • monai/apps/detection/transforms/dictionary.py

📝 Walkthrough

Walkthrough

Adds new CI workflows and removes others, updates many GitHub Actions YAMLs to use Python 3.10 (and other version updates), and refactors CI matrices/steps. Raises minimum Python to 3.10 and bumps torch/dependency constraints to >=2.8.0 across pyproject, setup.cfg, requirements, and docs. Modernizes type hints (PEP 604 unions), moves Callable/Sequence imports to collections.abc, and applies numerous small edits to error messages, type casts, tests, and tooling configs (black/ruff).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning: Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Title check — ✅ Passed: The title 'Refactoring min-dep tests' accurately describes the main change: consolidating and refactoring CI/CD test workflows, particularly the minimum-dependency test configuration.
Description check — ✅ Passed: The description comprehensively covers objectives (CI/CD refactoring, Python 3.9 removal, PyTorch updates, dependency bumps), includes the required PR template sections (Description, Types of changes with checkboxes), and provides context on branch-rule changes and testing status.



ericspod and others added 14 commits April 5, 2026 20:15
@ericspod ericspod marked this pull request as ready for review April 10, 2026 09:58
@ericspod ericspod requested review from KumoLiu and Nic-Ma as code owners April 10, 2026 09:58
@ericspod ericspod self-assigned this Apr 10, 2026

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
setup.cfg (2)

165-165: ⚠️ Potential issue | 🟠 Major

Stale onnxruntime version constraint in [onnx] extra.

Line 87 in [all] specifies onnxruntime without version gating, but line 165 in the [onnx] extra gates it with python_version <= '3.10'. This is inconsistent and appears to be leftover from Python 3.9 support.

🔧 Proposed fix
 onnx =
     onnx>=1.13.0
-    onnxruntime; python_version <= '3.10'
+    onnxruntime
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@setup.cfg` at line 165, The [onnx] extra contains a stale conditional
"onnxruntime; python_version <= '3.10'" that conflicts with the un-gated
onnxruntime in the [all] extra; remove the python_version environment marker
from the onnx extra so both extras list onnxruntime consistently (i.e., change
the entry in the [onnx] extra that currently reads "onnxruntime; python_version
<= '3.10'" to just "onnxruntime"), ensuring the dependency string in the onnx
extra matches the [all] section.

136-136: ⚠️ Potential issue | 🟠 Major

Stale transformers version constraint in [transformers] extra.

Line 73 in [all] specifies transformers>=4.53.0 without version gating, but line 136 in the [transformers] extra has the old constraint transformers>=4.36.0, <4.41.0; python_version <= '3.10'. This is inconsistent and appears to be leftover from Python 3.9 support.

🔧 Proposed fix
 transformers =
-    transformers>=4.36.0, <4.41.0; python_version <= '3.10'
+    transformers>=4.53.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@setup.cfg` at line 136, Update the stale constraint in the [transformers]
extra: replace the old requirement "transformers>=4.36.0, <4.41.0;
python_version <= '3.10'" with a constraint consistent with the [all] extra
(e.g., "transformers>=4.53.0" or the same range used in [all]) and remove the
unnecessary python_version gating; ensure the [transformers] extra and the [all]
section reference the identical transformers requirement so they are consistent.
🧹 Nitpick comments (7)
monai/networks/nets/dints.py (3)

926-927: Prefer more specific type ignore or avoid reassignment.

# type: ignore is broad. Use # type: ignore[assignment] for precision, or assign to new variables instead of reassigning with different types.

♻️ More specific type suppression
-_node_in = (_node_in >= 1).astype(int)  # type: ignore
-_node_out = (_node_out >= 1).astype(int)  # type: ignore
+_node_in = (_node_in >= 1).astype(int)  # type: ignore[assignment]
+_node_out = (_node_out >= 1).astype(int)  # type: ignore[assignment]
♻️ Alternative: use new variable names
-_node_in = (_node_in >= 1).astype(int)  # type: ignore
-_node_out = (_node_out >= 1).astype(int)  # type: ignore
+_node_in_binary = (_node_in >= 1).astype(int)
+_node_out_binary = (_node_out >= 1).astype(int)
-node2in[self.node_act_dict[str(_node_out)]].append(child_idx)
-node2out[self.node_act_dict[str(_node_in)]].append(child_idx)
+node2in[self.node_act_dict[str(_node_out_binary)]].append(child_idx)
+node2out[self.node_act_dict[str(_node_in_binary)]].append(child_idx)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 926 - 927, Replace the broad "#
type: ignore" on the boolean-to-int reassignments of _node_in and _node_out by
either using a more specific suppression "# type: ignore[assignment]" or,
preferably, avoid reassigning with a different type by creating new variables
(e.g., node_in_bin = (_node_in >= 1).astype(int) and node_out_bin = (_node_out
>= 1).astype(int)) and update downstream uses to those new names; target the
statements assigning _node_in and _node_out to locate and apply this change.

1017-1017: Prefer more specific type ignore or avoid reassignment.

Same issue. Use # type: ignore[assignment] or assign to a new variable.

♻️ More specific type suppression
-node_a = (node_a >= 1).astype(int)  # type: ignore
+node_a = (node_a >= 1).astype(int)  # type: ignore[assignment]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` at line 1017, The current broad type
suppression on the reassignment of node_a should be made more specific or
avoided: either replace the comment with a targeted suppression (use "# type:
ignore[assignment]" on the line with node_a = (node_a >= 1).astype(int)) or
avoid reassigning by creating a new variable (e.g., node_a_bin or similar) for
the integer/boolean result and update subsequent uses accordingly; locate the
statement referencing node_a and apply one of these two fixes to remove the
broad "# type: ignore".

979-979: Prefer more specific type ignore or avoid reassignment.

Same issue as lines 926-927. Use # type: ignore[assignment] or assign to a new variable.

♻️ More specific type suppression
-_node_act = (_node_act >= 1).astype(int)  # type: ignore
+_node_act = (_node_act >= 1).astype(int)  # type: ignore[assignment]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` at line 979, Replace the broad type ignore on
the assignment to _node_act so mypy knows why the cast is allowed: either assign
the boolean-to-int result to a new variable (e.g., node_act_int = (_node_act >=
1).astype(int)) and use that, or keep the reassignment but narrow the
suppression to "# type: ignore[assignment]" on the line "_node_act = (_node_act
>= 1).astype(int)". Update usages to reference the new variable if you created
one; target the "_node_act" assignment site to make this change.
.github/workflows/cicd_tests.yml (3)

78-82: Quote command substitutions to prevent word splitting.

Shellcheck warns about unquoted command substitutions that could cause issues with paths containing spaces.

♻️ Proposed fix
       run: |
         # clean up temporary files
-        $(pwd)/runtests.sh --build --clean
+        "$(pwd)/runtests.sh" --build --clean
         # Github actions have multiple cores, so parallelize pytype
-        $(pwd)/runtests.sh --build --${{ matrix.opt }} -j $(nproc --all)
+        "$(pwd)/runtests.sh" --build --${{ matrix.opt }} -j "$(nproc --all)"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/cicd_tests.yml around lines 78 - 82, The shell commands
use unquoted command substitutions which can break on paths with spaces: wrap
the command substitutions used when invoking runtests.sh—specifically the $(pwd)
and $(nproc --all) substitutions in the run steps that call runtests.sh --build
--clean and runtests.sh --build --${{ matrix.opt }} -j $(nproc --all)—in double
quotes (e.g., "$(pwd)" and "$(nproc --all)") so the expanded values are treated
as single arguments and won't be word-split.

204-213: Quote environment variable expansions.

Shellcheck recommends quoting ${WIN_FULLDEP_VER}, ${WIN_FULLDEP_VISION_VER}, ${PYTORCH_VER1}, and ${TORCHVISION_VER1} to prevent potential word splitting, though unlikely with version numbers.

♻️ Proposed fix
       run: |
         # BUG: PyTorch 2.8.0 appears to have a gloo bug under Windows with failing distributed tests
-        python -m pip install torch==${WIN_FULLDEP_VER} torchvision==${WIN_FULLDEP_VISION_VER}+cpu --index-url https://download.pytorch.org/whl/cpu
+        python -m pip install "torch==${WIN_FULLDEP_VER}" "torchvision==${WIN_FULLDEP_VISION_VER}+cpu" --index-url https://download.pytorch.org/whl/cpu
       shell: bash
     - if: runner.os != 'windows'
       name: Install torch gpu
       run: |
         # install the lowest version of PyTorch supported
-        python -m pip install torch==${PYTORCH_VER1} torchvision==${TORCHVISION_VER1}
+        python -m pip install "torch==${PYTORCH_VER1}" "torchvision==${TORCHVISION_VER1}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/cicd_tests.yml around lines 204 - 213, Quote the
environment variable expansions in the workflow commands to prevent accidental
word-splitting: update the torch install lines to use "${WIN_FULLDEP_VER}" and
"${WIN_FULLDEP_VISION_VER}" in the Windows install command and "${PYTORCH_VER1}"
and "${TORCHVISION_VER1}" in the non-Windows GPU install command; locate the two
run blocks that call python -m pip install and replace the unquoted ${...}
usages with the quoted "${...}" variants in those commands.

304-308: Use safer file discovery instead of ls parsing.

Shellcheck warns against parsing ls output. Use a glob or find instead. The $name[all] is valid pip syntax but confuses shellcheck.

♻️ Proposed fix
       run: |
         # install from tar.gz
-        name=$(ls *.tar.gz | head -n1)
-        echo $name
-        python -m pip install $name[all] --extra-index-url https://download.pytorch.org/whl/cpu
+        for name in *.tar.gz; do break; done
+        echo "$name"
+        python -m pip install "${name}[all]" --extra-index-url https://download.pytorch.org/whl/cpu
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/cicd_tests.yml around lines 304 - 308, Replace the unsafe
ls parsing for the tarball and the shellcheck-confusing $name[all] usage:
instead of using `name=$(ls *.tar.gz | head -n1)`, find the archive safely
(e.g., with `find` or a nullglob-safe shell glob) and assign its full path to
the variable `name`, then call the pip install invocation using a quoted,
option-separated form so the [all] extras aren't mis-parsed (e.g. pass `--`
before the package path and quote the value when invoking `python -m pip
install`), updating the `name` variable and the `python -m pip install
$name[all] --extra-index-url ...` line accordingly.
monai/visualize/visualizer.py (1)

31-35: Add docstring and explicit types for up.

up is a definition without a Google-style docstring and explicit parameter/return typing. Please document inputs/return/raised exceptions and type it directly.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/visualize/visualizer.py` around lines 31 - 35, Add a Google-style
docstring and explicit type hints to the local function up: annotate the
parameter x as torch.Tensor and the return as torch.Tensor (e.g., def up(x:
torch.Tensor) -> torch.Tensor), and add a docstring that documents Args (x:
input tensor), Returns (upsampled tensor), and Raises (e.g., ValueError if
len(spatial_size) not in {1,2,3} or IndexError when selecting interp_mode). In
the body mention the expected external variables (spatial_size: Sequence[int],
align_corners: Optional[bool]) and that the function picks InterpolateMode based
on len(spatial_size) before calling F.interpolate; ensure the docstring follows
Google style and briefly describes behavior and exceptions.
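The real `up` is a closure over `F.interpolate`; as a torch-free illustration of the Google-style docstring shape this guideline asks for, a standalone sketch might look like:

```python
from collections.abc import Sequence


def up(x: Sequence[float], factor: int = 2) -> list[float]:
    """Nearest-neighbour upsampling of a 1-D sequence (torch-free stand-in).

    Args:
        x: input sequence of values.
        factor: integer upsampling factor, must be at least 1.

    Returns:
        A list of length ``len(x) * factor`` with each value repeated ``factor`` times.

    Raises:
        ValueError: when ``factor`` is less than 1.
    """
    if factor < 1:
        raise ValueError(f"factor must be >= 1, got {factor}")
    return [v for v in x for _ in range(factor)]
```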
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/build_docs.yml:
- Around line 9-11: The workflow uses an invalid pull_request filter key
"head_ref-ignore" which breaks parsing; update the pull_request event block to
replace "head_ref-ignore" with the supported key "branches-ignore" (i.e., modify
the pull_request configuration in the workflow YAML to use branches-ignore
instead of head_ref-ignore).

In @.github/workflows/cicd_tests.yml:
- Around line 149-154: The workflow contains an unreachable step "Print GPU
Info" gated by the condition `if: matrix.os == 'linux-gpu-runner'` which will
never be true given the matrix values; either remove this step entirely or make
it reachable by adding `linux-gpu-runner` to the job matrix (or change the `if`
to match an existing matrix value like `ubuntu-latest`), updating the
conditional `if: matrix.os == 'linux-gpu-runner'` so the "Print GPU Info" step
actually executes when intended.
- Line 244: The workflow sets the `shell` key incorrectly inside the `env`
mapping; remove the `shell` entry from the `env` section and instead set it at
`defaults.run.shell` for the job or add `shell: bash` to the individual steps
that require it (or leave per-step usage if already present); update the file by
deleting the misplaced `shell` in the `env` block and placing the key under
`defaults.run.shell` or on relevant `steps` to resolve the invalid placement.
- Around line 15-16: Replace the invalid GitHub Actions key head_ref-ignore with
the correct branches-ignore in the workflow YAML so the workflow properly
ignores the specified branch (e.g., dev); update the mapping entry under the
triggers block where head_ref-ignore is declared to use branches-ignore instead
and ensure the list of ignored branches remains the same.

In @.github/workflows/pythonapp-gpu.yml:
- Around line 54-57: The if-condition testing the matrix value uses an exact
equality to "PT230+CUDA124" so it never matches the actual matrix value
"PT230+CUDA124DOCKER"; update the conditional in
.github/workflows/pythonapp-gpu.yml (the block around the if that sets
PYVER/PYSFX/DISTUTILS and installs packages) to match the DOCKER variant —
either use a pattern comparison (e.g. double-bracket glob like [[ "${{
matrix.environment }}" == PT230+CUDA124* ]]) or explicitly include
"PT230+CUDA124DOCKER" in the test so the PYVER/PYSFX/DISTUTILS assignment and
apt-get install run for that matrix entry.

In `@monai/apps/detection/transforms/dictionary.py`:
- Around line 814-823: The ValueError messages in the checks for box_keys_tuple
and box_ref_image_keys_tuple are misleading: the condition len(...) != 1
enforces exactly one key but the messages say "at least one key"; update the
exception text for the checks around box_keys_tuple (related to box_keys) and
box_ref_image_keys_tuple (related to box_ref_image_keys) to state that exactly
one key must be provided (e.g., "Please provide exactly one key for
`box_keys`..." and "Please provide exactly one key for `box_ref_image_keys`...")
so the message semantics match the condition.

In `@monai/apps/detection/utils/anchor_utils.py`:
- Around line 173-174: The error message built in anchor_utils.py uses
len(aspect_ratios_t.shape) - 1 to describe the expected rank which can be wrong;
change it to use the required rank directly (self.spatial_dims) when
constructing the message for aspect_ratios_t in the check inside the
function/method that validates aspect_ratios (the block referencing
aspect_ratios_t and self.spatial_dims), so the message reads something like "In
{self.spatial_dims}-D image, aspect_ratios for each level should be
{self.spatial_dims}-D. Got aspect_ratios with shape {aspect_ratios_t.shape}."
ensuring the expected dimension uses self.spatial_dims rather than computing
len(shape)-1.

In `@monai/apps/reconstruction/transforms/array.py`:
- Around line 65-66: The validation error message uses literal expressions
instead of actual lengths; update the message where the check compares
center_fractions and accelerations (use the variables center_fractions and
accelerations) to include the real lengths (e.g., len(center_fractions) and
len(accelerations))—replace the current static string with a formatted message
(f-string or .format) that prints both lengths and a clear context so failures
show actual values.
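The suggested formatted message can be sketched like this (hypothetical names, not the actual transform code):

```python
def check_mask_args(center_fractions: list[float], accelerations: list[int]) -> None:
    # Sketch of the suggested change: report the actual lengths in the error
    # rather than repeating the expressions as literals.
    if len(center_fractions) != len(accelerations):
        raise ValueError(
            "center_fractions and accelerations must have the same length, got "
            f"{len(center_fractions)} and {len(accelerations)}."
        )
```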

In `@monai/apps/vista3d/inferer.py`:
- Line 117: The code currently uses assert stitched_output is not None which
fails when point_start >= len(point_coords[0]) because the loop never runs;
replace this assertion with an explicit runtime guard in the function handling
inference (referencing variables point_start, point_coords, stitched_output and
stitched_mask) — either initialize stitched_output/stitched_mask to correct
empty/default tensors before the loop or return early with a sensible empty
result when there are no new points to process, and update callers accordingly;
also add a regression test that calls the inferer path with point_start >=
len(point_coords[0]) to verify no crash and correct empty output handling.
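The early-return guard described above can be sketched as follows (hypothetical simplification, not the actual inferer code):

```python
def stitch_predictions(point_coords, point_start, predict):
    # Hypothetical sketch of the suggested guard: with no new points, return a
    # well-defined empty result up front instead of reaching an
    # `assert stitched_output is not None` after a skipped loop.
    if point_start >= len(point_coords):
        return []
    stitched_output = None
    for p in point_coords[point_start:]:
        piece = predict(p)
        stitched_output = piece if stitched_output is None else stitched_output + piece
    return stitched_output
```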

In `@monai/visualize/visualizer.py`:
- Line 35: The F.interpolate call currently uses a bare "# type: ignore" which
hides all type diagnostics; replace it by explicitly typing the interpolation
mode and annotating the nested function up: declare smode as the proper
literal/Enum-derived type (from InterpolateMode.value) or annotate up's
parameters and return type so mypy understands the argument types, then remove
the bare ignore or change it to a targeted ignore like "# type:
ignore[arg-type]" if necessary; also add a Google-style docstring to the nested
function up describing its parameters (x, spatial_size, smode, align_corners)
and return value to satisfy code guidelines.

In `@requirements-dev.txt`:
- Line 66: Remove the duplicate dependency entry "pybind11" from
requirements-dev.txt (the second occurrence at line 66); keep the original
declaration (the one at line 21) and ensure only a single "pybind11" line
remains to avoid redundancy in dependency audits.

In `@setup.cfg`:
- Line 65: The dependency marker for cucim-cu12 is inverted: change the
environment marker on the cucim-cu12 requirement (the line containing
"cucim-cu12; platform_system == \"Linux\" and python_version <= '3.10'") to
reflect the package's actual Python requirement (e.g., "python_version >=
'3.11'" or a precise range like "python_version >= '3.11' and python_version <
'3.12'"); apply the same correction to the other occurrence referenced (the
cucim-cu13-related line mentioned in the comment) so both markers match the
packages' PyPI metadata.

In `@tests/test_utils.py`:
- Around line 572-575: The loop that does p.join() then pr = results.get() can
hang if a worker exited without putting a result; update the logic around
processes, p.join(), and results.get() to avoid blocking by using a bounded wait
(e.g. results.get with a timeout) and immediately inspect the worker state
(p.exitcode) after join — if exitcode is nonzero or results.get times out,
raise/assert with diagnostic info; ensure the code checks/consumes the queue
only when a result is expected to be present to prevent indefinite blocking.
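The bounded-wait pattern described above can be sketched as follows; the worker objects here are stand-ins exposing `join()`, `name` and `exitcode` as `multiprocessing.Process` does (hypothetical interface, not the actual test code):

```python
import queue


def drain_worker_results(processes, results, timeout=5.0):
    # Sketch of the suggested pattern: join each worker, check its exit code,
    # then fetch its result with a timeout so a dead worker cannot hang us.
    collected = []
    for p in processes:
        p.join()
        if p.exitcode not in (0, None):  # worker died before queuing a result
            raise RuntimeError(f"worker {p.name} exited with code {p.exitcode}")
        try:
            collected.append(results.get(timeout=timeout))  # bounded wait, no hang
        except queue.Empty:
            raise RuntimeError(f"worker {p.name} finished but produced no result")
    return collected
```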


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 19bc9f8f-6ad8-4eee-865c-624745befa9f

📥 Commits

Reviewing files that changed from the base of the PR and between deb3f98 and d5a9182.

📒 Files selected for processing (63)
  • .github/workflows/build_docs.yml
  • .github/workflows/cicd_tests.yml
  • .github/workflows/conda.yml
  • .github/workflows/cron-ngc-bundle.yml
  • .github/workflows/cron.yml
  • .github/workflows/docker.yml
  • .github/workflows/pythonapp-gpu.yml
  • .github/workflows/pythonapp-min.yml
  • .github/workflows/pythonapp.yml
  • .github/workflows/release.yml
  • .github/workflows/setupapp.yml
  • .github/workflows/weekly-preview.yml
  • docs/.readthedocs.yaml
  • docs/requirements.txt
  • docs/source/installation.md
  • monai/_version.py
  • monai/apps/auto3dseg/auto_runner.py
  • monai/apps/detection/transforms/dictionary.py
  • monai/apps/detection/utils/anchor_utils.py
  • monai/apps/pathology/inferers/inferer.py
  • monai/apps/pathology/transforms/post/array.py
  • monai/apps/reconstruction/transforms/array.py
  • monai/apps/vista3d/inferer.py
  • monai/auto3dseg/utils.py
  • monai/bundle/scripts.py
  • monai/bundle/utils.py
  • monai/config/type_definitions.py
  • monai/data/image_reader.py
  • monai/engines/evaluator.py
  • monai/engines/trainer.py
  • monai/losses/dice.py
  • monai/losses/focal_loss.py
  • monai/losses/hausdorff_loss.py
  • monai/losses/sure_loss.py
  • monai/metrics/average_precision.py
  • monai/metrics/rocauc.py
  • monai/networks/layers/conjugate_gradient.py
  • monai/networks/layers/convutils.py
  • monai/networks/nets/daf3d.py
  • monai/networks/nets/dints.py
  • monai/networks/nets/dynunet.py
  • monai/networks/nets/vista3d.py
  • monai/optimizers/lr_finder.py
  • monai/optimizers/lr_scheduler.py
  • monai/transforms/adaptors.py
  • monai/transforms/intensity/array.py
  • monai/transforms/intensity/dictionary.py
  • monai/transforms/io/array.py
  • monai/transforms/io/dictionary.py
  • monai/transforms/spatial/array.py
  • monai/transforms/utility/array.py
  • monai/transforms/utils_pytorch_numpy_unification.py
  • monai/utils/component_store.py
  • monai/utils/decorators.py
  • monai/visualize/gradient_based.py
  • monai/visualize/visualizer.py
  • pyproject.toml
  • requirements-dev.txt
  • requirements.txt
  • runtests.sh
  • setup.cfg
  • tests/bundle/test_config_item.py
  • tests/test_utils.py
💤 Files with no reviewable changes (2)
  • .github/workflows/pythonapp.yml
  • .github/workflows/pythonapp-min.yml

ericspod and others added 2 commits April 10, 2026 12:06
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
monai/apps/vista3d/inferer.py (1)

117-119: ⚠️ Potential issue | 🟠 Major

Line 117: valid incremental path still fails when no new points are selected.

When point_start skips all points, raising unconditionally breaks the documented prev_mask reuse flow. Use prev_mask as fallback, and only raise if it is absent. Also add a regression test for point_start >= len(point_coords[0]) with prev_mask set.

Proposed fix
-    if stitched_output is None:
-        raise ValueError("Input configuration resulted in no prediction windows being selected.")
+    if stitched_output is None:
+        if prev_mask is None:
+            raise ValueError(
+                "Input configuration resulted in no prediction windows being selected; "
+                "provide `prev_mask` when `point_start` skips all points."
+            )
+        stitched_output = prev_mask.to("cpu")  # type: ignore[assignment]
+        stitched_mask = torch.ones_like(stitched_output, device="cpu")
As per coding guidelines, "Examine code for logical error or inconsistencies" and "Ensure new or modified definitions will be covered by existing or new unit tests."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/apps/vista3d/inferer.py` around lines 117 - 119, The unconditional
raise when stitched_output is None in inferer.py breaks incremental prediction
reuse: modify the logic in the function where stitched_output is checked (the
incremental/point_start handling around point_coords and prev_mask) to use
prev_mask as a fallback when stitched_output is None and point_start skips all
points, only raising ValueError if stitched_output is None AND prev_mask is also
None; update the control flow around point_start and stitched_output to select
prev_mask when available. Also add a regression unit test that calls the inferer
with point_start >= len(point_coords[0]) and prev_mask provided to assert no
exception is raised and prev_mask is used.
tests/test_utils.py (1)

574-575: ⚠️ Potential issue | 🟠 Major

Harden worker result handling after join.

results.get(block=False) can raise queue.Empty without context, and there is still no explicit p.exitcode check. This can mask the true worker failure mode.

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these."

🔧 Proposed hardening
             for p in processes:
                 p.join()
-                pr = results.get(block=False)
-                assert pr is True, f"Distributed call failed: {pr}"
+                if p.exitcode not in (0, None):
+                    raise RuntimeError(f"Distributed call failed: process exited with code {p.exitcode}.")
+                try:
+                    pr = results.get(timeout=5)
+                except queue.Empty as e:
+                    raise RuntimeError("Distributed call failed: no result returned by worker.") from e
+                assert pr is True, f"Distributed call failed: {pr}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_utils.py` around lines 574 - 575, Wrap the non-blocking retrieve
from results.get(block=False) in a try/except that catches queue.Empty and fails
with a clearer message, and after joining the worker check p.exitcode and assert
it's zero (or include it in the failure message) before asserting pr is True;
use the local symbols results.get, pr, and p.exitcode in the message so the
assertion clearly reports whether the queue was empty or the process exited with
a non-zero code.
monai/apps/detection/transforms/dictionary.py (1)

814-829: ⚠️ Potential issue | 🟠 Major

Enforce exactly one key here, not “at least one”.

This constructor now accepts multiple keys but immediately uses only the first ([0]), which can silently ignore user input.

🔧 Proposed fix
-        if len(box_keys_tuple) < 1:
+        if len(box_keys_tuple) != 1:
             raise ValueError(
-                "Please provide at least one key for `box_keys`. All `label_keys` are attached to this `box_keys`."
+                "Please provide exactly one key for `box_keys`. All `label_keys` are attached to this `box_keys`."
             )
         box_ref_image_keys_tuple = ensure_tuple(box_ref_image_keys)
-        if len(box_ref_image_keys_tuple) < 1:
+        if len(box_ref_image_keys_tuple) != 1:
             raise ValueError(
-                "Please provide at least one key for `box_ref_image_keys`. "
+                "Please provide exactly one key for `box_ref_image_keys`. "
                 "All box_keys and label_keys are attached to this `box_ref_image_keys`."
             )

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/apps/detection/transforms/dictionary.py` around lines 814 - 829, The
constructor currently accepts multiple box_keys/box_ref_image_keys but only uses
the first element; change the validation to require exactly one key by replacing
the "at least one" checks with equality checks (len(box_keys_tuple) == 1 and
len(box_ref_image_keys_tuple) == 1) and raise ValueError if not exactly one,
keeping the assignments self.box_keys = box_keys_tuple[0],
self.box_ref_image_keys = box_ref_image_keys_tuple[0] and leaving
self.label_keys and super().__init__(...) as-is so callers cannot pass multiple
keys silently to box_keys or box_ref_image_keys.
🧹 Nitpick comments (1)
tests/test_utils.py (1)

536-537: Preserve richer exception details in queue payload.

Sending only str(e) loses traceback/type context and weakens debugging for distributed failures. Prefer serializing traceback text (or structured error payload).

🛠️ Suggested tweak
         except Exception as e:
-            results.put(str(e))
+            results.put(traceback.format_exc())
             raise e

As per coding guidelines, "Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_utils.py` around lines 536 - 537, Replace the current queue
payload of just str(e) with a richer serialized error that preserves type and
traceback: when catching exception variable e, build a structured payload (e.g.,
{"type": type(e).__name__, "message": str(e), "traceback":
"".join(traceback.format_exception(type(e), e, e.__traceback__))}) and pass that
to results.put instead of str(e); ensure you import traceback and keep raising
the original exception (raise e) unchanged.
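The structured payload that prompt describes could be built by a small helper along these lines (the function name is illustrative, not part of the PR):

```python
import traceback


def error_payload(e: BaseException) -> dict[str, str]:
    """Build a picklable description of an exception for a result queue,
    preserving the type name, message, and formatted traceback."""
    return {
        "type": type(e).__name__,
        "message": str(e),
        "traceback": "".join(traceback.format_exception(type(e), e, e.__traceback__)),
    }
```

In the worker, the `except` block would then read `results.put(error_payload(e))` before re-raising, letting the parent process report the real failure.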
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/apps/detection/transforms/dictionary.py`:
- Line 312: The error message string that reads f"Key 'affine' not found in
{meta_key}. Please check it is the correct the image meta key." contains a
duplicated "the"; update the message used where the KeyError (or check) is
raised to read something like "Please check it is the correct image meta key."
Keep the f-string and {meta_key} intact so the change only removes the extra
"the" in the message associated with the affine key check.

---

Duplicate comments:
In `@monai/apps/detection/transforms/dictionary.py`:
- Around line 814-829: The constructor currently accepts multiple
box_keys/box_ref_image_keys but only uses the first element; change the
validation to require exactly one key by replacing the "at least one" checks
with equality checks (len(box_keys_tuple) == 1 and len(box_ref_image_keys_tuple)
== 1) and raise ValueError if not exactly one, keeping the assignments
self.box_keys = box_keys_tuple[0], self.box_ref_image_keys =
box_ref_image_keys_tuple[0] and leaving self.label_keys and
super().__init__(...) as-is so callers cannot pass multiple keys silently to
box_keys or box_ref_image_keys.

In `@monai/apps/vista3d/inferer.py`:
- Around line 117-119: The unconditional raise when stitched_output is None in
inferer.py breaks incremental prediction reuse: modify the logic in the function
where stitched_output is checked (the incremental/point_start handling around
point_coords and prev_mask) to use prev_mask as a fallback when stitched_output
is None and point_start skips all points, only raising ValueError if
stitched_output is None AND prev_mask is also None; update the control flow
around point_start and stitched_output to select prev_mask when available. Also
add a regression unit test that calls the inferer with point_start >=
len(point_coords[0]) and prev_mask provided to assert no exception is raised and
prev_mask is used.

In `@tests/test_utils.py`:
- Around line 574-575: Wrap the non-blocking retrieve from
results.get(block=False) in a try/except that catches queue.Empty and fails with
a clearer message, and after joining the worker check p.exitcode and assert it's
zero (or include it in the failure message) before asserting pr is True; use the
local symbols results.get, pr, and p.exitcode in the message so the assertion
clearly reports whether the queue was empty or the process exited with a
non-zero code.

---

Nitpick comments:
In `@tests/test_utils.py`:
- Around line 536-537: Replace the current queue payload of just str(e) with a
richer serialized error that preserves type and traceback: when catching
exception variable e, build a structured payload (e.g., {"type":
type(e).__name__, "message": str(e), "traceback":
"".join(traceback.format_exception(type(e), e, e.__traceback__))}) and pass that
to results.put instead of str(e); ensure you import traceback and keep raising
the original exception (raise e) unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 202b445e-20c1-40e5-931d-457c94e92eae

📥 Commits

Reviewing files that changed from the base of the PR and between d5a9182 and d243970.

📒 Files selected for processing (8)
  • .github/workflows/build_docs.yml
  • .github/workflows/cicd_tests.yml
  • monai/apps/detection/transforms/dictionary.py
  • monai/apps/detection/utils/anchor_utils.py
  • monai/apps/reconstruction/transforms/array.py
  • monai/apps/vista3d/inferer.py
  • requirements-dev.txt
  • tests/test_utils.py
✅ Files skipped from review due to trivial changes (4)
  • monai/apps/reconstruction/transforms/array.py
  • requirements-dev.txt
  • monai/apps/detection/utils/anchor_utils.py
  • .github/workflows/cicd_tests.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/build_docs.yml

Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
@ericspod
Member Author

@coderabbitai summarise the changes in this PR as a series of bullet points.

@coderabbitai
Contributor

coderabbitai bot commented Apr 10, 2026

Tip

For best results, initiate chat on the files or code changes.

@ericspod Here's a summary of the changes in this PR:

CI/CD Workflow Refactoring

  • Removed the old pythonapp.yml and pythonapp-min.yml workflows and replaced them with a single consolidated cicd_tests.yml workflow containing static-checks, min-dep, full-dep, and packaging jobs.
  • Added a new build_docs.yml workflow to separate doc builds from the main test pipeline.
  • Added Linux runner cleanup steps to remove unused tooling caches and avoid disk-space exhaustion.

Python/PyTorch Version Bumps

  • Dropped Python 3.9 support; minimum is now Python 3.10 across all CI, packaging metadata (setup.cfg), and documentation.
  • Bumped minimum torch requirement from >=2.4.1 to >=2.8.0 in requirements.txt, setup.cfg, pyproject.toml, and docs/requirements.txt.
  • Windows full-dep testing pins a newer PyTorch CPU build to work around a gloo bug affecting PyTorch 2.8–2.10.
  • macOS CI now explicitly installs pybind11.

Dependency & Tooling Updates

  • black bumped from pinned ==25.1.0 to >=26.3.1 (security fix); --skip-magic-trailing-comma removed from runtests.sh (now handled via pyproject.toml config).
  • codecov/codecov-action bumped from v5 to v6 across multiple workflows.
  • actions/upload-artifact bumped from v6 to v7.
  • pytorch-ignite unpinned; scipy version marker python_version >= '3.9' removed; onnxruntime unconditionally included; transformers bumped and uncapped; cucim-cu13 added for Python 3.11+.
  • Read the Docs Python version updated from 3.9 → 3.10.

Type Annotation Modernization

  • typing.Union[X, Y] replaced with PEP 604 X | Y syntax throughout (e.g., monai/config/type_definitions.py, monai/networks/nets/dynunet.py, monai/transforms/spatial/array.py).
  • typing.Optional[X] replaced with X | None.
  • Callable import migrated from typing to collections.abc across many modules.

Code / Logic Fixes

  • monai/apps/vista3d/inferer.py: added explicit torch.Tensor | None annotation and a ValueError guard when no windows are processed.
  • monai/apps/detection/transforms/dictionary.py: validation conditions loosened (`!= 1` → `< 1`) and error messages improved with backtick quoting.
  • tests/test_utils.py: DistTestCase now enqueues the exception string on failure and asserts it in the parent process for clearer distributed-test error reporting.
  • monai/networks/layers/convutils.py: refactored Bessel kernel list construction in gaussian_1d.
  • Various error message formatting cleanups (single-line strings, f-strings with context).

Documentation

  • Installation guide updated to reflect Python ≥ 3.10 minimum.
  • Docstring examples updated from python3.9 → python3.10.
