Conversation
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
for more information, see https://pre-commit.ci
No actionable comments were generated in the recent review. 🎉

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)

📝 Walkthrough
Adds new CI workflows and removes others, updates many GitHub Actions YAMLs to use Python 3.10 (and other version updates), and refactors CI matrices/steps. Raises minimum Python to 3.10 and bumps torch/dependency constraints to >=2.8.0 across pyproject, setup.cfg, requirements, and docs. Modernizes type hints (PEP 604 unions), moves …

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
setup.cfg (2)
165-165: ⚠️ Potential issue | 🟠 Major — Stale onnxruntime version constraint in [onnx] extra.

Line 87 in `[all]` specifies `onnxruntime` without version gating, but line 165 in the `[onnx]` extra gates it with `python_version <= '3.10'`. This is inconsistent and appears to be leftover from Python 3.9 support.

🔧 Proposed fix

    onnx =
        onnx>=1.13.0
    -    onnxruntime; python_version <= '3.10'
    +    onnxruntime

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@setup.cfg` at line 165, The [onnx] extra contains a stale conditional "onnxruntime; python_version <= '3.10'" that conflicts with the un-gated onnxruntime in the [all] extra; remove the python_version environment marker from the onnx extra so both extras list onnxruntime consistently (i.e., change the entry in the [onnx] extra that currently reads "onnxruntime; python_version <= '3.10'" to just "onnxruntime"), ensuring the dependency string in the onnx extra matches the [all] section.
136-136: ⚠️ Potential issue | 🟠 Major — Stale transformers version constraint in [transformers] extra.

Line 73 in `[all]` specifies `transformers>=4.53.0` without version gating, but line 136 in the `[transformers]` extra has the old constraint `transformers>=4.36.0, <4.41.0; python_version <= '3.10'`. This is inconsistent and appears to be leftover from Python 3.9 support.

🔧 Proposed fix

    transformers =
    -    transformers>=4.36.0, <4.41.0; python_version <= '3.10'
    +    transformers>=4.53.0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@setup.cfg` at line 136, Update the stale constraint in the [transformers] extra: replace the old requirement "transformers>=4.36.0, <4.41.0; python_version <= '3.10'" with a constraint consistent with the [all] extra (e.g., "transformers>=4.53.0" or the same range used in [all]) and remove the unnecessary python_version gating; ensure the [transformers] extra and the [all] section reference the identical transformers requirement so they are consistent.
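To see why the stale gate matters in practice, here is a small check using the `packaging` library (the same marker engine pip uses; assumed available as a setuptools/pip dependency). It shows that a `python_version <= '3.10'` marker silently drops the requirement on modern interpreters, which is exactly how the gated extra diverges from the un-gated `[all]` entry:

```python
# Sketch: evaluate a PEP 508 environment marker under two interpreter versions.
# The marker string is taken from the setup.cfg finding above; the `packaging`
# dependency is an assumption (it ships with pip/setuptools in most envs).
from packaging.markers import Marker

gated = Marker("python_version <= '3.10'")

# Under Python 3.10 the marker holds, so the gated requirement is installed...
assert gated.evaluate(environment={"python_version": "3.10"}) is True

# ...but under 3.11+ it evaluates False and the requirement is skipped,
# diverging from the un-gated entry in the [all] extra.
assert gated.evaluate(environment={"python_version": "3.11"}) is False
print("marker drops the dependency on Python >= 3.11")
```

Removing the marker, as the proposed fix does, makes both extras resolve identically on every supported interpreter.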
🧹 Nitpick comments (7)
monai/networks/nets/dints.py (3)
926-927: Prefer more specific type ignore or avoid reassignment.

`# type: ignore` is broad. Use `# type: ignore[assignment]` for precision, or assign to new variables instead of reassigning with different types.

♻️ More specific type suppression

    -_node_in = (_node_in >= 1).astype(int)  # type: ignore
    -_node_out = (_node_out >= 1).astype(int)  # type: ignore
    +_node_in = (_node_in >= 1).astype(int)  # type: ignore[assignment]
    +_node_out = (_node_out >= 1).astype(int)  # type: ignore[assignment]

♻️ Alternative: use new variable names

    -_node_in = (_node_in >= 1).astype(int)  # type: ignore
    -_node_out = (_node_out >= 1).astype(int)  # type: ignore
    +_node_in_binary = (_node_in >= 1).astype(int)
    +_node_out_binary = (_node_out >= 1).astype(int)

    -node2in[self.node_act_dict[str(_node_out)]].append(child_idx)
    -node2out[self.node_act_dict[str(_node_in)]].append(child_idx)
    +node2in[self.node_act_dict[str(_node_out_binary)]].append(child_idx)
    +node2out[self.node_act_dict[str(_node_in_binary)]].append(child_idx)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/networks/nets/dints.py` around lines 926 - 927, Replace the broad "# type: ignore" on the boolean-to-int reassignments of _node_in and _node_out by either using a more specific suppression "# type: ignore[assignment]" or, preferably, avoid reassigning with a different type by creating new variables (e.g., node_in_bin = (_node_in >= 1).astype(int) and node_out_bin = (_node_out >= 1).astype(int)) and update downstream uses to those new names; target the statements assigning _node_in and _node_out to locate and apply this change.
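As a standalone illustration of the "new variable" alternative (not MONAI's actual code — the function and names below are invented for the example), binding a converted value to a fresh name sidesteps mypy's `assignment` error code entirely, so no suppression comment is needed:

```python
# Sketch: reassigning one name with a value of a different type is what
# triggers mypy's [assignment] error; a new, explicitly typed name avoids it.

def binarize_counts(node_counts: list[float]) -> list[int]:
    """Convert raw per-node counts into 0/1 activation flags.

    Args:
        node_counts: raw per-node counts, possibly fractional.

    Returns:
        A new list of ints: 1 where the count is >= 1, else 0.
    """
    # New name, new type: no reassignment, no `# type: ignore` needed.
    node_flags: list[int] = [int(c >= 1) for c in node_counts]
    return node_flags

print(binarize_counts([0.0, 1.0, 2.5]))  # [0, 1, 1]
```

The trade-off is that downstream uses must be renamed, which is why the narrower `# type: ignore[assignment]` is the lighter-touch option when the reassignment is deliberate.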
1017-1017: Prefer more specific type ignore or avoid reassignment.

Same issue. Use `# type: ignore[assignment]` or assign to a new variable.

♻️ More specific type suppression

    -node_a = (node_a >= 1).astype(int)  # type: ignore
    +node_a = (node_a >= 1).astype(int)  # type: ignore[assignment]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/networks/nets/dints.py` at line 1017, The current broad type suppression on the reassignment of node_a should be made more specific or avoided: either replace the comment with a targeted suppression (use "# type: ignore[assignment]" on the line with node_a = (node_a >= 1).astype(int)) or avoid reassigning by creating a new variable (e.g., node_a_bin or similar) for the integer/boolean result and update subsequent uses accordingly; locate the statement referencing node_a and apply one of these two fixes to remove the broad "# type: ignore".
979-979: Prefer more specific type ignore or avoid reassignment.

Same issue as lines 926-927. Use `# type: ignore[assignment]` or assign to a new variable.

♻️ More specific type suppression

    -_node_act = (_node_act >= 1).astype(int)  # type: ignore
    +_node_act = (_node_act >= 1).astype(int)  # type: ignore[assignment]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/networks/nets/dints.py` at line 979, Replace the broad type ignore on the assignment to _node_act so mypy knows why the cast is allowed: either assign the boolean-to-int result to a new variable (e.g., node_act_int = (_node_act >= 1).astype(int)) and use that, or keep the reassignment but narrow the suppression to "# type: ignore[assignment]" on the line "_node_act = (_node_act >= 1).astype(int)". Update usages to reference the new variable if you created one; target the "_node_act" assignment site to make this change.

.github/workflows/cicd_tests.yml (3)
78-82: Quote command substitutions to prevent word splitting.

Shellcheck warns about unquoted command substitutions that could cause issues with paths containing spaces.

♻️ Proposed fix

    run: |
      # clean up temporary files
    -  $(pwd)/runtests.sh --build --clean
    +  "$(pwd)/runtests.sh" --build --clean
      # Github actions have multiple cores, so parallelize pytype
    -  $(pwd)/runtests.sh --build --${{ matrix.opt }} -j $(nproc --all)
    +  "$(pwd)/runtests.sh" --build --${{ matrix.opt }} -j "$(nproc --all)"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cicd_tests.yml around lines 78 - 82, The shell commands use unquoted command substitutions which can break on paths with spaces: wrap the command substitutions used when invoking runtests.sh—specifically the $(pwd) and $(nproc --all) substitutions in the run steps that call runtests.sh --build --clean and runtests.sh --build --${{ matrix.opt }} -j $(nproc --all)—in double quotes (e.g., "$(pwd)" and "$(nproc --all)") so the expanded values are treated as single arguments and won't be word-split.
204-213: Quote environment variable expansions.

Shellcheck recommends quoting `${WIN_FULLDEP_VER}`, `${WIN_FULLDEP_VISION_VER}`, `${PYTORCH_VER1}`, and `${TORCHVISION_VER1}` to prevent potential word splitting, though unlikely with version numbers.

♻️ Proposed fix

    run: |
      # BUG: PyTorch 2.8.0 appears to have a gloo bug under Windows with failing distributed tests
    -  python -m pip install torch==${WIN_FULLDEP_VER} torchvision==${WIN_FULLDEP_VISION_VER}+cpu --index-url https://download.pytorch.org/whl/cpu
    +  python -m pip install "torch==${WIN_FULLDEP_VER}" "torchvision==${WIN_FULLDEP_VISION_VER}+cpu" --index-url https://download.pytorch.org/whl/cpu
      shell: bash
    - if: runner.os != 'windows'
      name: Install torch gpu
      run: |
        # install the lowest version of PyTorch supported
    -    python -m pip install torch==${PYTORCH_VER1} torchvision==${TORCHVISION_VER1}
    +    python -m pip install "torch==${PYTORCH_VER1}" "torchvision==${TORCHVISION_VER1}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cicd_tests.yml around lines 204 - 213, Quote the environment variable expansions in the workflow commands to prevent accidental word-splitting: update the torch install lines to use "${WIN_FULLDEP_VER}" and "${WIN_FULLDEP_VISION_VER}" in the Windows install command and "${PYTORCH_VER1}" and "${TORCHVISION_VER1}" in the non-Windows GPU install command; locate the two run blocks that call python -m pip install and replace the unquoted ${...} usages with the quoted "${...}" variants in those commands.
304-308: Use safer file discovery instead of `ls` parsing.

Shellcheck warns against parsing `ls` output. Use a glob or `find` instead. The `$name[all]` is valid pip syntax but confuses shellcheck.

♻️ Proposed fix

    run: |
      # install from tar.gz
    -  name=$(ls *.tar.gz | head -n1)
    -  echo $name
    -  python -m pip install $name[all] --extra-index-url https://download.pytorch.org/whl/cpu
    +  for name in *.tar.gz; do break; done
    +  echo "$name"
    +  python -m pip install "${name}[all]" --extra-index-url https://download.pytorch.org/whl/cpu

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cicd_tests.yml around lines 304 - 308, Replace the unsafe ls parsing for the tarball and the shellcheck-confusing $name[all] usage: instead of using `name=$(ls *.tar.gz | head -n1)`, find the archive safely (e.g., with `find` or a nullglob-safe shell glob) and assign its full path to the variable `name`, then call the pip install invocation using a quoted, option-separated form so the [all] extras aren't mis-parsed (e.g. pass `--` before the package path and quote the value when invoking `python -m pip install`), updating the `name` variable and the `python -m pip install $name[all] --extra-index-url ...` line accordingly.

monai/visualize/visualizer.py (1)
31-35: Add docstring and explicit types for `up`.

`up` is a definition without a Google-style docstring and explicit parameter/return typing. Please document inputs/return/raised exceptions and type it directly.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/visualize/visualizer.py` around lines 31 - 35, Add a Google-style docstring and explicit type hints to the local function up: annotate the parameter x as torch.Tensor and the return as torch.Tensor (e.g., def up(x: torch.Tensor) -> torch.Tensor), and add a docstring that documents Args (x: input tensor), Returns (upsampled tensor), and Raises (e.g., ValueError if len(spatial_size) not in {1,2,3} or IndexError when selecting interp_mode). In the body mention the expected external variables (spatial_size: Sequence[int], align_corners: Optional[bool]) and that the function picks InterpolateMode based on len(spatial_size) before calling F.interpolate; ensure the docstring follows Google style and briefly describes behavior and exceptions.
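For reference, the requested style looks like the following stand-in (this is not the real MONAI helper — `pick_mode` and its mode table are invented to keep the example free of torch): explicit parameter and return annotations plus a Google-style docstring that documents the arguments, return value, and raised exceptions.

```python
# Sketch of a fully typed, fully documented helper in the style the review asks
# for. The mode names mirror common interpolation modes; the function is a
# shape-only illustration, not the actual `up` in visualizer.py.
from collections.abc import Sequence


def pick_mode(spatial_size: Sequence[int]) -> str:
    """Select an interpolation mode name for the given spatial rank.

    Args:
        spatial_size: target spatial dimensions, one entry per axis.

    Returns:
        The interpolation mode name for 1-, 2-, or 3-D inputs.

    Raises:
        ValueError: if ``len(spatial_size)`` is not 1, 2, or 3.
    """
    modes = {1: "linear", 2: "bilinear", 3: "trilinear"}
    try:
        return modes[len(spatial_size)]
    except KeyError as e:
        raise ValueError(f"unsupported spatial rank: {len(spatial_size)}") from e


print(pick_mode((64, 64)))  # bilinear
```

With the annotations in place, the bare `# type: ignore` on the `F.interpolate` call can usually be dropped or narrowed to `# type: ignore[arg-type]`, as the inline comment above suggests.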
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build_docs.yml:
- Around line 9-11: The workflow uses an invalid pull_request filter key
"head_ref-ignore" which breaks parsing; update the pull_request event block to
replace "head_ref-ignore" with the supported key "branches-ignore" (i.e., modify
the pull_request configuration in the workflow YAML to use branches-ignore
instead of head_ref-ignore).
In @.github/workflows/cicd_tests.yml:
- Around line 149-154: The workflow contains an unreachable step "Print GPU
Info" gated by the condition `if: matrix.os == 'linux-gpu-runner'` which will
never be true given the matrix values; either remove this step entirely or make
it reachable by adding `linux-gpu-runner` to the job matrix (or change the `if`
to match an existing matrix value like `ubuntu-latest`), updating the
conditional `if: matrix.os == 'linux-gpu-runner'` so the "Print GPU Info" step
actually executes when intended.
- Line 244: The workflow sets the `shell` key incorrectly inside the `env`
mapping; remove the `shell` entry from the `env` section and instead set it at
`defaults.run.shell` for the job or add `shell: bash` to the individual steps
that require it (or leave per-step usage if already present); update the file by
deleting the misplaced `shell` in the `env` block and placing the key under
`defaults.run.shell` or on relevant `steps` to resolve the invalid placement.
- Around line 15-16: Replace the invalid GitHub Actions key head_ref-ignore with
the correct branches-ignore in the workflow YAML so the workflow properly
ignores the specified branch (e.g., dev); update the mapping entry under the
triggers block where head_ref-ignore is declared to use branches-ignore instead
and ensure the list of ignored branches remains the same.
In @.github/workflows/pythonapp-gpu.yml:
- Around line 54-57: The if-condition testing the matrix value uses an exact
equality to "PT230+CUDA124" so it never matches the actual matrix value
"PT230+CUDA124DOCKER"; update the conditional in
.github/workflows/pythonapp-gpu.yml (the block around the if that sets
PYVER/PYSFX/DISTUTILS and installs packages) to match the DOCKER variant —
either use a pattern comparison (e.g. double-bracket glob like [[ "${{
matrix.environment }}" == PT230+CUDA124* ]]) or explicitly include
"PT230+CUDA124DOCKER" in the test so the PYVER/PYSFX/DISTUTILS assignment and
apt-get install run for that matrix entry.
In `@monai/apps/detection/transforms/dictionary.py`:
- Around line 814-823: The ValueError messages in the checks for box_keys_tuple
and box_ref_image_keys_tuple are misleading: the condition len(...) != 1
enforces exactly one key but the messages say "at least one key"; update the
exception text for the checks around box_keys_tuple (related to box_keys) and
box_ref_image_keys_tuple (related to box_ref_image_keys) to state that exactly
one key must be provided (e.g., "Please provide exactly one key for
`box_keys`..." and "Please provide exactly one key for `box_ref_image_keys`...")
so the message semantics match the condition.
In `@monai/apps/detection/utils/anchor_utils.py`:
- Around line 173-174: The error message built in anchor_utils.py uses
len(aspect_ratios_t.shape) - 1 to describe the expected rank which can be wrong;
change it to use the required rank directly (self.spatial_dims) when
constructing the message for aspect_ratios_t in the check inside the
function/method that validates aspect_ratios (the block referencing
aspect_ratios_t and self.spatial_dims), so the message reads something like "In
{self.spatial_dims}-D image, aspect_ratios for each level should be
{self.spatial_dims}-D. Got aspect_ratios with shape {aspect_ratios_t.shape}."
ensuring the expected dimension uses self.spatial_dims rather than computing
len(shape)-1.
In `@monai/apps/reconstruction/transforms/array.py`:
- Around line 65-66: The validation error message uses literal expressions
instead of actual lengths; update the message where the check compares
center_fractions and accelerations (use the variables center_fractions and
accelerations) to include the real lengths (e.g., len(center_fractions) and
len(accelerations))—replace the current static string with a formatted message
(f-string or .format) that prints both lengths and a clear context so failures
show actual values.
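The suggested message style can be sketched as follows (names mirror the review comment but the validator itself is illustrative, not MONAI's actual code): embedding the real lengths in the f-string means a failing run reports the mismatch directly in the traceback.

```python
# Sketch: a validation error that reports actual lengths, not literal
# expressions, so the failure is diagnosable without a debugger.

def check_lengths(center_fractions: list[float], accelerations: list[int]) -> None:
    """Validate that the two sampling parameter lists have equal length.

    Args:
        center_fractions: fraction of fully-sampled center lines per choice.
        accelerations: acceleration factor per choice.

    Raises:
        ValueError: if the lists differ in length, reporting both lengths.
    """
    if len(center_fractions) != len(accelerations):
        raise ValueError(
            "center_fractions and accelerations must have the same length, "
            f"got {len(center_fractions)} and {len(accelerations)}."
        )


try:
    check_lengths([0.08, 0.04], [4])
except ValueError as e:
    print(e)  # ... got 2 and 1.
```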
In `@monai/apps/vista3d/inferer.py`:
- Line 117: The code currently uses assert stitched_output is not None which
fails when point_start >= len(point_coords[0]) because the loop never runs;
replace this assertion with an explicit runtime guard in the function handling
inference (referencing variables point_start, point_coords, stitched_output and
stitched_mask) — either initialize stitched_output/stitched_mask to correct
empty/default tensors before the loop or return early with a sensible empty
result when there are no new points to process, and update callers accordingly;
also add a regression test that calls the inferer path with point_start >=
len(point_coords[0]) to verify no crash and correct empty output handling.
In `@monai/visualize/visualizer.py`:
- Line 35: The F.interpolate call currently uses a bare "# type: ignore" which
hides all type diagnostics; replace it by explicitly typing the interpolation
mode and annotating the nested function up: declare smode as the proper
literal/Enum-derived type (from InterpolateMode.value) or annotate up's
parameters and return type so mypy understands the argument types, then remove
the bare ignore or change it to a targeted ignore like "# type:
ignore[arg-type]" if necessary; also add a Google-style docstring to the nested
function up describing its parameters (x, spatial_size, smode, align_corners)
and return value to satisfy code guidelines.
In `@requirements-dev.txt`:
- Line 66: Remove the duplicate dependency entry "pybind11" from
requirements-dev.txt (the second occurrence at line 66); keep the original
declaration (the one at line 21) and ensure only a single "pybind11" line
remains to avoid redundancy in dependency audits.
In `@setup.cfg`:
- Line 65: The dependency marker for cucim-cu12 is inverted: change the
environment marker on the cucim-cu12 requirement (the line containing
"cucim-cu12; platform_system == \"Linux\" and python_version <= '3.10'") to
reflect the package's actual Python requirement (e.g., "python_version >=
'3.11'" or a precise range like "python_version >= '3.11' and python_version <
'3.12'"); apply the same correction to the other occurrence referenced (the
cucim-cu13-related line mentioned in the comment) so both markers match the
packages' PyPI metadata.
In `@tests/test_utils.py`:
- Around line 572-575: The loop that does p.join() then pr = results.get() can
hang if a worker exited without putting a result; update the logic around
processes, p.join(), and results.get() to avoid blocking by using a bounded wait
(e.g. results.get with a timeout) and immediately inspect the worker state
(p.exitcode) after join — if exitcode is nonzero or results.get times out,
raise/assert with diagnostic info; ensure the code checks/consumes the queue
only when a result is expected to be present to prevent indefinite blocking.
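The two failure modes described above can be demonstrated in isolation with a plain `queue.Queue` (the `get()` API is the same shape as `multiprocessing.Queue`; the worker itself is omitted, so treat this as a sketch of the queue mechanics only):

```python
# Sketch: a bounded get() raises queue.Empty instead of blocking forever when
# a worker exited without putting a result, which the caller can convert into
# a diagnostic error instead of hanging.
import queue

results: queue.Queue = queue.Queue()

# Case 1: worker died before reporting — the bounded get fails fast.
try:
    results.get(timeout=0.1)
    raise AssertionError("unreachable: queue was empty")
except queue.Empty:
    diagnosis = "no result returned by worker"

# Case 2: worker reported — the bounded get succeeds within the timeout.
results.put(True)
assert results.get(timeout=0.1) is True
print(diagnosis)
```

In the real test harness, this pairs with a `p.exitcode` check after `p.join()` so that a crashed process is reported as a crash rather than surfacing later as an empty queue.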
---
Outside diff comments:
In `@setup.cfg`:
- Line 165: The [onnx] extra contains a stale conditional "onnxruntime;
python_version <= '3.10'" that conflicts with the un-gated onnxruntime in the
[all] extra; remove the python_version environment marker from the onnx extra so
both extras list onnxruntime consistently (i.e., change the entry in the [onnx]
extra that currently reads "onnxruntime; python_version <= '3.10'" to just
"onnxruntime"), ensuring the dependency string in the onnx extra matches the
[all] section.
- Line 136: Update the stale constraint in the [transformers] extra: replace the
old requirement "transformers>=4.36.0, <4.41.0; python_version <= '3.10'" with a
constraint consistent with the [all] extra (e.g., "transformers>=4.53.0" or the
same range used in [all]) and remove the unnecessary python_version gating;
ensure the [transformers] extra and the [all] section reference the identical
transformers requirement so they are consistent.
---
Nitpick comments:
In @.github/workflows/cicd_tests.yml:
- Around line 78-82: The shell commands use unquoted command substitutions which
can break on paths with spaces: wrap the command substitutions used when
invoking runtests.sh—specifically the $(pwd) and $(nproc --all) substitutions in
the run steps that call runtests.sh --build --clean and runtests.sh --build
--${{ matrix.opt }} -j $(nproc --all)—in double quotes (e.g., "$(pwd)" and
"$(nproc --all)") so the expanded values are treated as single arguments and
won't be word-split.
- Around line 204-213: Quote the environment variable expansions in the workflow
commands to prevent accidental word-splitting: update the torch install lines to
use "${WIN_FULLDEP_VER}" and "${WIN_FULLDEP_VISION_VER}" in the Windows install
command and "${PYTORCH_VER1}" and "${TORCHVISION_VER1}" in the non-Windows GPU
install command; locate the two run blocks that call python -m pip install and
replace the unquoted ${...} usages with the quoted "${...}" variants in those
commands.
- Around line 304-308: Replace the unsafe ls parsing for the tarball and the
shellcheck-confusing $name[all] usage: instead of using `name=$(ls *.tar.gz |
head -n1)`, find the archive safely (e.g., with `find` or a nullglob-safe shell
glob) and assign its full path to the variable `name`, then call the pip install
invocation using a quoted, option-separated form so the [all] extras aren't
mis-parsed (e.g. pass `--` before the package path and quote the value when
invoking `python -m pip install`), updating the `name` variable and the `python
-m pip install $name[all] --extra-index-url ...` line accordingly.
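Outside the workflow's shell context, the same "first matching archive" lookup can be sketched in Python with `pathlib`, which illustrates why parsing `ls` output is fragile: a path containing spaces survives here as a single value with no quoting concerns. (The file name below is invented for the demonstration.)

```python
# Sketch: glob-based discovery of the first sdist archive, robust to spaces.
from pathlib import Path
import tempfile

workdir = Path(tempfile.mkdtemp())
(workdir / "monai weekly-1.0.tar.gz").write_bytes(b"")  # name with a space

archives = sorted(workdir.glob("*.tar.gz"))
if not archives:
    raise FileNotFoundError(f"no sdist found under {workdir}")
name = archives[0]
# `name` is one value end to end, never word-split by a shell.
print(f"{name.name}[all]")
```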
In `@monai/networks/nets/dints.py`:
- Around line 926-927: Replace the broad "# type: ignore" on the boolean-to-int
reassignments of _node_in and _node_out by either using a more specific
suppression "# type: ignore[assignment]" or, preferably, avoid reassigning with
a different type by creating new variables (e.g., node_in_bin = (_node_in >=
1).astype(int) and node_out_bin = (_node_out >= 1).astype(int)) and update
downstream uses to those new names; target the statements assigning _node_in and
_node_out to locate and apply this change.
- Line 1017: The current broad type suppression on the reassignment of node_a
should be made more specific or avoided: either replace the comment with a
targeted suppression (use "# type: ignore[assignment]" on the line with node_a =
(node_a >= 1).astype(int)) or avoid reassigning by creating a new variable
(e.g., node_a_bin or similar) for the integer/boolean result and update
subsequent uses accordingly; locate the statement referencing node_a and apply
one of these two fixes to remove the broad "# type: ignore".
- Line 979: Replace the broad type ignore on the assignment to _node_act so mypy
knows why the cast is allowed: either assign the boolean-to-int result to a new
variable (e.g., node_act_int = (_node_act >= 1).astype(int)) and use that, or
keep the reassignment but narrow the suppression to "# type: ignore[assignment]"
on the line "_node_act = (_node_act >= 1).astype(int)". Update usages to
reference the new variable if you created one; target the "_node_act" assignment
site to make this change.
In `@monai/visualize/visualizer.py`:
- Around line 31-35: Add a Google-style docstring and explicit type hints to the
local function up: annotate the parameter x as torch.Tensor and the return as
torch.Tensor (e.g., def up(x: torch.Tensor) -> torch.Tensor), and add a
docstring that documents Args (x: input tensor), Returns (upsampled tensor), and
Raises (e.g., ValueError if len(spatial_size) not in {1,2,3} or IndexError when
selecting interp_mode). In the body mention the expected external variables
(spatial_size: Sequence[int], align_corners: Optional[bool]) and that the
function picks InterpolateMode based on len(spatial_size) before calling
F.interpolate; ensure the docstring follows Google style and briefly describes
behavior and exceptions.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 19bc9f8f-6ad8-4eee-865c-624745befa9f
📒 Files selected for processing (63)
- .github/workflows/build_docs.yml
- .github/workflows/cicd_tests.yml
- .github/workflows/conda.yml
- .github/workflows/cron-ngc-bundle.yml
- .github/workflows/cron.yml
- .github/workflows/docker.yml
- .github/workflows/pythonapp-gpu.yml
- .github/workflows/pythonapp-min.yml
- .github/workflows/pythonapp.yml
- .github/workflows/release.yml
- .github/workflows/setupapp.yml
- .github/workflows/weekly-preview.yml
- docs/.readthedocs.yaml
- docs/requirements.txt
- docs/source/installation.md
- monai/_version.py
- monai/apps/auto3dseg/auto_runner.py
- monai/apps/detection/transforms/dictionary.py
- monai/apps/detection/utils/anchor_utils.py
- monai/apps/pathology/inferers/inferer.py
- monai/apps/pathology/transforms/post/array.py
- monai/apps/reconstruction/transforms/array.py
- monai/apps/vista3d/inferer.py
- monai/auto3dseg/utils.py
- monai/bundle/scripts.py
- monai/bundle/utils.py
- monai/config/type_definitions.py
- monai/data/image_reader.py
- monai/engines/evaluator.py
- monai/engines/trainer.py
- monai/losses/dice.py
- monai/losses/focal_loss.py
- monai/losses/hausdorff_loss.py
- monai/losses/sure_loss.py
- monai/metrics/average_precision.py
- monai/metrics/rocauc.py
- monai/networks/layers/conjugate_gradient.py
- monai/networks/layers/convutils.py
- monai/networks/nets/daf3d.py
- monai/networks/nets/dints.py
- monai/networks/nets/dynunet.py
- monai/networks/nets/vista3d.py
- monai/optimizers/lr_finder.py
- monai/optimizers/lr_scheduler.py
- monai/transforms/adaptors.py
- monai/transforms/intensity/array.py
- monai/transforms/intensity/dictionary.py
- monai/transforms/io/array.py
- monai/transforms/io/dictionary.py
- monai/transforms/spatial/array.py
- monai/transforms/utility/array.py
- monai/transforms/utils_pytorch_numpy_unification.py
- monai/utils/component_store.py
- monai/utils/decorators.py
- monai/visualize/gradient_based.py
- monai/visualize/visualizer.py
- pyproject.toml
- requirements-dev.txt
- requirements.txt
- runtests.sh
- setup.cfg
- tests/bundle/test_config_item.py
- tests/test_utils.py
💤 Files with no reviewable changes (2)
- .github/workflows/pythonapp.yml
- .github/workflows/pythonapp-min.yml
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
for more information, see https://pre-commit.ci
Actionable comments posted: 1
♻️ Duplicate comments (3)
monai/apps/vista3d/inferer.py (1)
117-119: ⚠️ Potential issue | 🟠 Major — Line 117: valid incremental path still fails when no new points are selected.

When `point_start` skips all points, raising unconditionally breaks the documented `prev_mask` reuse flow. Use `prev_mask` as fallback, and only raise if it is absent. Also add a regression test for `point_start >= len(point_coords[0])` with `prev_mask` set.

As per coding guidelines, "Examine code for logical error or inconsistencies" and "Ensure new or modified definitions will be covered by existing or new unit tests."

Proposed fix

    - if stitched_output is None:
    -     raise ValueError("Input configuration resulted in no prediction windows being selected.")
    + if stitched_output is None:
    +     if prev_mask is None:
    +         raise ValueError(
    +             "Input configuration resulted in no prediction windows being selected; "
    +             "provide `prev_mask` when `point_start` skips all points."
    +         )
    +     stitched_output = prev_mask.to("cpu")  # type: ignore[assignment]
    +     stitched_mask = torch.ones_like(stitched_output, device="cpu")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/apps/vista3d/inferer.py` around lines 117 - 119, The unconditional raise when stitched_output is None in inferer.py breaks incremental prediction reuse: modify the logic in the function where stitched_output is checked (the incremental/point_start handling around point_coords and prev_mask) to use prev_mask as a fallback when stitched_output is None and point_start skips all points, only raising ValueError if stitched_output is None AND prev_mask is also None; update the control flow around point_start and stitched_output to select prev_mask when available. Also add a regression unit test that calls the inferer with point_start >= len(point_coords[0]) and prev_mask provided to assert no exception is raised and prev_mask is used.

tests/test_utils.py (1)
574-575: ⚠️ Potential issue | 🟠 Major — Harden worker result handling after join.

`results.get(block=False)` can raise `queue.Empty` without context, and there is still no explicit `p.exitcode` check. This can mask the true worker failure mode.

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these."

🔧 Proposed hardening

    for p in processes:
        p.join()
    -    pr = results.get(block=False)
    -    assert pr is True, f"Distributed call failed: {pr}"
    +    if p.exitcode not in (0, None):
    +        raise RuntimeError(f"Distributed call failed: process exited with code {p.exitcode}.")
    +    try:
    +        pr = results.get(timeout=5)
    +    except queue.Empty as e:
    +        raise RuntimeError("Distributed call failed: no result returned by worker.") from e
    +    assert pr is True, f"Distributed call failed: {pr}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_utils.py` around lines 574 - 575, Wrap the non-blocking retrieve from results.get(block=False) in a try/except that catches queue.Empty and fails with a clearer message, and after joining the worker check p.exitcode and assert it's zero (or include it in the failure message) before asserting pr is True; use the local symbols results.get, pr, and p.exitcode in the message so the assertion clearly reports whether the queue was empty or the process exited with a non-zero code.

monai/apps/detection/transforms/dictionary.py (1)
814-829: ⚠️ Potential issue | 🟠 Major
Enforce exactly one key here, not "at least one".
This constructor now accepts multiple keys but immediately uses only the first (`[0]`), which can silently ignore user input.

🔧 Proposed fix
```diff
-        if len(box_keys_tuple) < 1:
+        if len(box_keys_tuple) != 1:
             raise ValueError(
-                "Please provide at least one key for `box_keys`. All `label_keys` are attached to this `box_keys`."
+                "Please provide exactly one key for `box_keys`. All `label_keys` are attached to this `box_keys`."
             )
         box_ref_image_keys_tuple = ensure_tuple(box_ref_image_keys)
-        if len(box_ref_image_keys_tuple) < 1:
+        if len(box_ref_image_keys_tuple) != 1:
             raise ValueError(
-                "Please provide at least one key for `box_ref_image_keys`. "
+                "Please provide exactly one key for `box_ref_image_keys`. "
                 "All box_keys and label_keys are attached to this `box_ref_image_keys`."
             )
```

As per coding guidelines, "Examine code for logical errors or inconsistencies, and suggest what may be changed to address these."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/apps/detection/transforms/dictionary.py` around lines 814 - 829, The constructor currently accepts multiple box_keys/box_ref_image_keys but only uses the first element; change the validation to require exactly one key by replacing the "at least one" checks with equality checks (len(box_keys_tuple) == 1 and len(box_ref_image_keys_tuple) == 1) and raise ValueError if not exactly one, keeping the assignments self.box_keys = box_keys_tuple[0], self.box_ref_image_keys = box_ref_image_keys_tuple[0] and leaving self.label_keys and super().__init__(...) as-is so callers cannot pass multiple keys silently to box_keys or box_ref_image_keys.
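The "exactly one key" validation described above can be reduced to a standalone sketch. The `ensure_single_key` helper name is illustrative and not part of MONAI's API; it stands in for the `ensure_tuple`-then-validate pattern in the constructor.

```python
def ensure_single_key(keys, name):
    """Normalize `keys` to a tuple, require exactly one entry, and return it."""
    keys_tuple = (keys,) if isinstance(keys, str) else tuple(keys)
    if len(keys_tuple) != 1:
        # Rejecting extra keys loudly avoids silently discarding user input.
        raise ValueError(f"Please provide exactly one key for `{name}`, got {len(keys_tuple)}.")
    return keys_tuple[0]
```

With this shape, a caller passing `["boxes", "boxes2"]` gets an immediate error instead of `boxes2` being dropped.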
🧹 Nitpick comments (1)
tests/test_utils.py (1)
536-537: Preserve richer exception details in queue payload.
Sending only `str(e)` loses traceback/type context and weakens debugging for distributed failures. Prefer serializing traceback text (or structured error payload).

🛠️ Suggested tweak
```diff
         except Exception as e:
-            results.put(str(e))
+            results.put(traceback.format_exc())
             raise e
```

As per coding guidelines, "Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_utils.py` around lines 536 - 537, Replace the current queue payload of just str(e) with a richer serialized error that preserves type and traceback: when catching exception variable e, build a structured payload (e.g., {"type": type(e).__name__, "message": str(e), "traceback": "".join(traceback.format_exception(type(e), e, e.__traceback__))}) and pass that to results.put instead of str(e); ensure you import traceback and keep raising the original exception (raise e) unchanged.
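The structured payload described in the prompt above can be sketched as a small helper (illustrative only; the `error_payload` name is not from the PR):

```python
import traceback


def error_payload(e):
    """Serialize an exception into a queue-safe dict preserving type and traceback."""
    return {
        "type": type(e).__name__,
        "message": str(e),
        # format_exception returns a list of lines; join into one traceback string.
        "traceback": "".join(traceback.format_exception(type(e), e, e.__traceback__)),
    }
```

The dict contains only strings, so it remains picklable for `multiprocessing` queues while keeping the full failure context.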
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/apps/detection/transforms/dictionary.py`:
- Line 312: The error message string that reads f"Key 'affine' not found in
{meta_key}. Please check it is the correct the image meta key." contains a
duplicated "the"; update the message used where the KeyError (or check) is
raised to read something like "Please check it is the correct image meta key."
Keep the f-string and {meta_key} intact so the change only removes the extra
"the" in the message associated with the affine key check.
---
Duplicate comments:
In `@monai/apps/detection/transforms/dictionary.py`:
- Around line 814-829: The constructor currently accepts multiple
box_keys/box_ref_image_keys but only uses the first element; change the
validation to require exactly one key by replacing the "at least one" checks
with equality checks (len(box_keys_tuple) == 1 and len(box_ref_image_keys_tuple)
== 1) and raise ValueError if not exactly one, keeping the assignments
self.box_keys = box_keys_tuple[0], self.box_ref_image_keys =
box_ref_image_keys_tuple[0] and leaving self.label_keys and
super().__init__(...) as-is so callers cannot pass multiple keys silently to
box_keys or box_ref_image_keys.
In `@monai/apps/vista3d/inferer.py`:
- Around line 117-119: The unconditional raise when stitched_output is None in
inferer.py breaks incremental prediction reuse: modify the logic in the function
where stitched_output is checked (the incremental/point_start handling around
point_coords and prev_mask) to use prev_mask as a fallback when stitched_output
is None and point_start skips all points, only raising ValueError if
stitched_output is None AND prev_mask is also None; update the control flow
around point_start and stitched_output to select prev_mask when available. Also
add a regression unit test that calls the inferer with point_start >=
len(point_coords[0]) and prev_mask provided to assert no exception is raised and
prev_mask is used.
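The fallback control flow described for the VISTA-3D inferer can be reduced to this sketch; the `select_output` function name and signature are illustrative, not MONAI's actual code:

```python
def select_output(stitched_output, prev_mask):
    """Prefer the freshly stitched output, fall back to the previous mask,
    and only fail when neither is available."""
    if stitched_output is not None:
        return stitched_output
    if prev_mask is not None:
        # point_start skipped every point: reuse the incremental prediction.
        return prev_mask
    raise ValueError("Both stitched_output and prev_mask are None; no prediction to return.")
```

The regression test the review asks for would exercise the middle branch: all points skipped, `prev_mask` provided, no exception raised.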
In `@tests/test_utils.py`:
- Around line 574-575: Wrap the non-blocking retrieve from
results.get(block=False) in a try/except that catches queue.Empty and fails with
a clearer message, and after joining the worker check p.exitcode and assert it's
zero (or include it in the failure message) before asserting pr is True; use the
local symbols results.get, pr, and p.exitcode in the message so the assertion
clearly reports whether the queue was empty or the process exited with a
non-zero code.
---
Nitpick comments:
In `@tests/test_utils.py`:
- Around line 536-537: Replace the current queue payload of just str(e) with a
richer serialized error that preserves type and traceback: when catching
exception variable e, build a structured payload (e.g., {"type":
type(e).__name__, "message": str(e), "traceback":
"".join(traceback.format_exception(type(e), e, e.__traceback__))}) and pass that
to results.put instead of str(e); ensure you import traceback and keep raising
the original exception (raise e) unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 202b445e-20c1-40e5-931d-457c94e92eae
📒 Files selected for processing (8)
- .github/workflows/build_docs.yml
- .github/workflows/cicd_tests.yml
- monai/apps/detection/transforms/dictionary.py
- monai/apps/detection/utils/anchor_utils.py
- monai/apps/reconstruction/transforms/array.py
- monai/apps/vista3d/inferer.py
- requirements-dev.txt
- tests/test_utils.py
✅ Files skipped from review due to trivial changes (4)
- monai/apps/reconstruction/transforms/array.py
- requirements-dev.txt
- monai/apps/detection/utils/anchor_utils.py
- .github/workflows/cicd_tests.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/build_docs.yml
|
@coderabbitai summarise the changes in this PR as a series of bullet points. |
|
Tip: For best results, initiate chat on the files or code changes.
- CI/CD Workflow Refactoring
- Python/PyTorch Version Bumps
- Dependency & Tooling Updates
- Type Annotation Modernization
- Code / Logic Fixes
- Documentation
|
Description
This PR is an effort at refactoring the CI/CD tests to reduce code volume in the YAML files. Branch rules will have to be temporarily changed so they no longer expect jobs by names which are no longer used, and once the PR is accepted changed again to add the jobs back under their new names. This would be a good opportunity to move branch rules to rulesets, which is the newer GitHub mechanism. This PR also adds code for removing unused tools from the Linux runners to avoid running out of disk space during testing; this has increasingly become an issue as the runner image has grown and more libraries get installed.
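The runner-cleanup idea could look something like the following Python sketch. The paths are assumptions about toolchains preinstalled on GitHub's ubuntu images and the `free_runner_space` helper is hypothetical; verify what the actual workflow removes before relying on this.

```python
import os
import shutil


def free_runner_space(paths=("/usr/share/dotnet", "/usr/local/lib/android", "/opt/ghc")):
    """Delete preinstalled toolchains the build does not need; return what was removed."""
    freed = []
    for p in paths:
        if os.path.isdir(p):
            # ignore_errors keeps cleanup best-effort rather than failing the job.
            shutil.rmtree(p, ignore_errors=True)
            freed.append(p)
    return freed
```

On a CI runner this would typically run as a dedicated early step (with elevated permissions where required) before any large installs.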
This also removes testing support for Python 3.9 and updates which versions of PyTorch are tested against, in line with our support policy. It also bumps the versions of some actions and dependencies, in particular black for security reasons. Testing Windows with full dependencies requires a much later version of PyTorch due to a gloo bug, and macOS requires the explicit installation of pybind11 for now. A number of other formatting fixes had to be made to account for newer versions of black and other tools, several imports were moved around to match Python 3.10 usage, and a number of type annotations were changed to remove Union and Optional.
Windows tests with versions of PyTorch between 2.8 and 2.10 seem to suffer from the gloo bug mentioned in pytorch/pytorch#150381.
Types of changes
- `./runtests.sh -f -u --net --coverage`
- `./runtests.sh --quick --unittests --disttests`
- `make html` command in the `docs/` folder