Conversation
📝 Walkthrough

This pull request adds WPI Turing support to the toolchain infrastructure. A new interactive menu option is exposed in the bootstrap modules selection prompt. The toolchain modules configuration introduces a new cluster group with CPU-variant modules pinning GCC/12.1.0, OpenMPI/4.1.3, and Python/3.13.5, and GPU-variant modules pinning NVHPC/24.3 and Python/3.13.5, with compiler environment variables and MPI configuration. A new Mako template generates HPC batch job scripts with conditional Slurm directives, module loading, and per-target execution, supporting both MPI and non-MPI modes with optional GPU resource allocation.

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
toolchain/templates/turing.mako (1)
Lines 17-21: Hardcoded single GPU and memory constraints may limit flexibility.

The GPU configuration hardcodes:

- `--gres=gpu:1`: only 1 GPU regardless of job requirements
- `--mem=208G`: fixed memory allocation
- `-C "A30|A100"`: specific GPU constraints

This may be appropriate for Turing's node configuration, but consider whether multi-GPU jobs are needed.
Please confirm with WPI cluster administrators that:
- Single GPU per job is the intended use pattern
- 208G is the appropriate memory allocation for GPU nodes
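If multi-GPU jobs do turn out to be needed, one option is to parameterize these directives instead of pinning them. A minimal sketch of such a batch-script header, assuming hypothetical `gpu_count` and `mem_per_node` Mako variables that are not part of this PR:

```bash
#!/usr/bin/env bash
# Hypothetical parameterized variant of the hardcoded directives.
# gpu_count and mem_per_node are illustrative template variables,
# not names from the PR; the node constraint could stay pinned.
#SBATCH --gres=gpu:${gpu_count}   # instead of the fixed gpu:1
#SBATCH --mem=${mem_per_node}     # instead of the fixed 208G
#SBATCH -C "A30|A100"             # Turing's A30/A100 GPU nodes
```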
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2454b169-22c8-4900-b9d8-ae3d855e67c8
📒 Files selected for processing (3)
- toolchain/bootstrap/modules.sh
- toolchain/modules
- toolchain/templates/turing.mako
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## master #1335 +/- ##
=======================================
Coverage 64.67% 64.67%
=======================================
Files 70 70
Lines 18251 18251
Branches 1504 1504
=======================================
Hits 11804 11804
Misses 5492 5492
Partials 955 955

☔ View full report in Codecov by Sentry.
Review Summary by Qodo

Add Turing WPI supercomputer support
Walkthrough

Description

- Add Turing supercomputer support for WPI HPC cluster
- Create CPU-only template with SLURM batch configuration
- Register Turing system in bootstrap module selection menu
- Configure environment modules for gcc, OpenMPI, and Python

Diagram

flowchart LR
A["Bootstrap System Selection"] -->|"Add Turing option"| B["modules.sh"]
C["New Turing Template"] -->|"CPU-only SLURM config"| D["turing.mako"]
E["Environment Modules"] -->|"gcc, OpenMPI, Python"| F["modules file"]
B --> G["WPI Turing System"]
D --> G
F --> G
File Changes

1. toolchain/bootstrap/modules.sh
Code Review by Qodo
1. GPU mode missing module set
    ok ":) Loading modules:\n"
    cd "${MFC_ROOT_DIR}"
    . ./mfc.sh load -c t -m ${'g' if gpu_enabled else 'c'}
1. GPU mode missing module set 🐞 Bug ≡ Correctness
toolchain/templates/turing.mako uses gpu_enabled to select -m g, but toolchain/modules defines no t-gpu entry, so mfc.sh load will omit compiler/MPI modules and MPI runs can fail due to missing runtime libraries. This is triggered whenever a user runs with --gpu on Turing (gpu_enabled=true).
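The root cause is easiest to see outside the template. A minimal shell sketch of the selection logic the rendered script executes (the variable names `mode` and `load_cmd` are illustrative; `gpu_enabled` mirrors the `--gpu` CLI flag described above):

```shell
# Sketch of the module-set selection that turing.mako renders into the
# generated batch script.
gpu_enabled=true   # driven by the --gpu CLI flag

# Mako's ${'g' if gpu_enabled else 'c'} becomes one of two flags:
if [ "$gpu_enabled" = true ]; then
  mode=g   # asks mfc.sh for module set "t-gpu" -- not defined in toolchain/modules
else
  mode=c   # asks mfc.sh for module set "t-cpu" -- defined and complete
fi
load_cmd=". ./mfc.sh load -c t -m $mode"
echo "$load_cmd"
```

With `gpu_enabled=true` this prints `. ./mfc.sh load -c t -m g`, which is exactly the command that has no matching `t-gpu` module set.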
Agent Prompt
### Issue description
`turing.mako` selects `-m g` when `gpu_enabled` is true, but there is no `t-gpu` module set in `toolchain/modules`. As a result, GPU mode on Turing will only load `t-all` (currently just `slurm`) and will miss required compiler/MPI modules.
### Issue Context
`gpu_enabled` is driven by the `--gpu` CLI flag, so users can unintentionally hit this even on a CPU-only cluster.
### Fix Focus Areas
- toolchain/templates/turing.mako[28-33]
- toolchain/modules[106-108]
- toolchain/bootstrap/modules.sh[105-114]
### Suggested fix options
1) If Turing is CPU-only: change the template to always load CPU modules (`-m c`) and optionally error if `gpu_enabled` is requested.
2) If GPU runs are intended: add `t-gpu ...` in `toolchain/modules` (either alias to `t-cpu` or add the real GPU toolchain modules) so `mfc.sh load -c t -m g` loads a complete environment.
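If option 2 is chosen, the missing entry could be as small as the sketch below. The module names are copied from the walkthrough summary; the exact `toolchain/modules` syntax is an assumption and should be checked against the file's existing cluster groups:

```bash
# Hypothetical toolchain/modules entries for Turing; syntax assumed to
# mirror the file's existing cluster groups -- verify before use.
t      Turing (WPI)
t-all  slurm
t-cpu  gcc/12.1.0 openmpi/4.1.3 python/3.13.5
t-gpu  nvhpc/24.3 python/3.13.5
```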
The template and modules target CPU-only simulations, as GPU support on Turing remains experimental and restricted to single-processor runs.
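Given that, the template could make the CPU-only intent explicit and fail fast instead of silently loading an incomplete module set. A hedged sketch of such a guard in the rendered script (the error message and structure are illustrative, not from the PR):

```shell
# Illustrative guard: reject --gpu runs on Turing while GPU support
# remains experimental, and always load the CPU module set.
gpu_enabled=${gpu_enabled:-false}
if [ "$gpu_enabled" = true ]; then
  echo "error: Turing template is CPU-only; GPU runs are not supported" >&2
  exit 1
fi
echo ". ./mfc.sh load -c t -m c"   # unconditionally the CPU module set
```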
Description
This PR adds support for the Turing supercomputer, an HPC cluster hosted by Worcester Polytechnic Institute (WPI).
The changes include:
- Registered Turing in toolchain/bootstrap/modules.sh, and included the corresponding environment modules in toolchain/modules.
- Added toolchain/templates/turing.mako, with the appropriate settings to ensure CPU-based execution.

Type of change
Testing
Ran examples/3D_performance_test, examples/3D_lagrange_bubblescreen, and examples/3D_lagrange_shbubcollapse with the new template and different numbers of processors. Confirmed no errors during module loading or job submission.

Checklist
See the developer guide for full coding standards.
GPU changes (expand if you modified src/simulation/)

AI code reviews
Reviews are not triggered automatically. To request a review, comment on the PR:
- @coderabbitai review: incremental review (new changes only)
- @coderabbitai full review: full review from scratch
- /review: Qodo review
- /improve: Qodo code suggestions
- @claude full review: Claude full review (also triggers on PR open/reopen/ready)
- claude-full-review label: Claude full review via label