
Update CIFAR10 tutorial device selection for CUDA, MPS, and CPU#3826

Open
amit-chaubey wants to merge 2 commits into pytorch:main from amit-chaubey:blitz-cifar10-device-options

Conversation


@amit-chaubey amit-chaubey commented Apr 13, 2026

Description

This PR updates beginner_source/blitz/cifar10_tutorial.py to make device selection explicit across CUDA, MPS, and CPU.

Changes

  • Select device in this order:
    • cuda:0 if CUDA is available
    • mps if Apple MPS backend is available
    • cpu otherwise
  • Update nearby tutorial comments to refer to generic device usage instead of CUDA-only wording.
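
The selection order described above can be sketched as follows (a minimal sketch of the described logic, not necessarily the exact diff in the PR; `torch.backends.mps` assumes PyTorch 1.12 or newer):

```python
import torch

# Pick a device in the order described above: CUDA first, then Apple MPS,
# then CPU as the fallback.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device)
```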

Why

The tutorial currently frames this section as CUDA-specific. This change keeps the same tutorial flow while making guidance clearer for Apple Silicon and CPU-only users.

Checklist

  • The issue that is being fixed is referred to in the description (no exact matching open issue linked for this change)
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR is fixing are added to this pull request
  • No unnecessary issues are included in this pull request.

cc @subramen @albanD @jbschlosser


pytorch-bot bot commented Apr 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3826

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please review it before merging.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the cla signed label Apr 13, 2026
@svekars svekars added intro core Tutorials of any level of difficulty related to the core pytorch functionality labels Apr 13, 2026
Contributor

svekars commented Apr 13, 2026

@claude


claude bot commented Apr 13, 2026

Claude finished @svekars's task in 2m 15s


Review of PR #3826

  • Read the changed file and PR diff
  • Review the changes for correctness, style, and consistency
  • Check for consistency with other tutorials in the repo
  • Post review feedback

Summary

The intent of this PR is good — making the CIFAR10 tutorial device-agnostic benefits Apple Silicon and CPU-only users. However, the implementation uses an older device-selection pattern rather than the modern torch.accelerator API that most other tutorials in this repo have already adopted.

Recommendation: Use torch.accelerator instead

Most beginner tutorials (including the closely related quickstart_tutorial.py, buildmodel_tutorial.py, transfer_learning_tutorial.py, nn_tutorial.py, and others) have already migrated to the torch.accelerator API:

device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"

This single line replaces the if/elif/else block in this PR and automatically handles CUDA, MPS, XPU, and other accelerator backends without needing explicit checks for each one. It's also future-proof for new hardware backends.

The current PR uses the hasattr(torch.backends, "mps") pattern, which is only found in one other file (reinforcement_q_learning.py) and is a legacy approach.

Suggested change at cifar10_tutorial.py:302-309 — replace:

# Let's first select a device. Prefer CUDA when available, otherwise use MPS
# (Apple Silicon), and fall back to CPU.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

with:

# Let's first select a device. This picks the fastest available accelerator,
# or falls back to CPU.
device = (
    torch.accelerator.current_accelerator().type
    if torch.accelerator.is_available()
    else "cpu"
)


Other comments are fine

The remaining changes — updating prose to say "accelerator device" instead of "CUDA device", "tensors on device" instead of "CUDA tensors", and "device too" instead of "GPU too" — are all good and appropriate regardless of which device selection approach is used.
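
As a sketch of what that prose describes, moving both the model and each batch to the selected device might look like this (`nn.Linear` is a hypothetical stand-in for the tutorial's Net; a simple CUDA/CPU check is used here for brevity, though any of the selection approaches above works the same way):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the tutorial's Net, which is a small CNN.
net = nn.Linear(4, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model's parameters and buffers to the selected device...
net.to(device)

# ...and remember to move the inputs and labels at every step too.
inputs = torch.randn(8, 4).to(device)
labels = torch.randint(0, 2, (8,)).to(device)

outputs = net(inputs)
print(outputs.shape)  # torch.Size([8, 2])
```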

Minor nit

The hasattr(torch.backends, "mps") guard is overly cautious for the PyTorch versions this tutorial targets. torch.backends.mps has existed since PyTorch 1.12. But this becomes moot if using torch.accelerator.



@amit-chaubey
Author

Thanks, @claude! Updated to use torch.accelerator (the same one-liner style as quickstart_tutorial.py), adjusted the surrounding prose so it stays accurate when device is "cpu", and kept num_workers=0 for direct script runs on macOS/Windows. Pushed in the latest commit.
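
A minimal sketch of the num_workers=0 choice mentioned above (TensorDataset here is a hypothetical stand-in for the tutorial's CIFAR10 trainset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the tutorial's CIFAR10 trainset:
# 16 images of shape 3x32x32 with integer class labels 0-9.
dataset = TensorDataset(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))

# num_workers=0 loads batches in the main process. On macOS/Windows, worker
# processes are started with "spawn", so num_workers > 0 requires the script's
# entry point to be guarded by `if __name__ == "__main__":`. Keeping 0 avoids
# that pitfall when the tutorial is run as a plain script.
trainloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)

images, targets = next(iter(trainloader))
print(images.shape)  # torch.Size([4, 3, 32, 32])
```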

