
sample QAD example script #933

Open
ynankani wants to merge 2 commits into main from ynankani/qad_example

Conversation


@ynankani ynankani commented Feb 25, 2026

What does this PR do?

sample QAD example script

Type of change: new example
Example script for QAD on a diffusion model such as LTX-2

Overview:

  1. Model loading
  2. NVFP4 fake-quant PTQ using mtq.quantize
  3. Distillation class wrapping using mtd.convert
  4. Training with the LTX-2 trainer code
  5. Checkpoint save in bf16; post-process the bf16 model using Comfy-kitchen to produce a real quantized model for ComfyUI inference
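The steps above can be sketched roughly as follows. This is illustrative pseudocode, not the PR's actual script: the loader, calibration loop, criterion, and trainer names are assumptions, while `mtq.quantize`, `NVFP4_DEFAULT_CFG`, and `mtd.convert` with the `"kd_loss"` mode follow the NVIDIA ModelOpt API.

```python
import modelopt.torch.quantization as mtq
import modelopt.torch.distill as mtd

# 1. Model loading (hypothetical LTX-2 loader)
student = load_ltx2_transformer(config.model_path)

# 2. NVFP4 fake-quant PTQ: calibrate quantizer amax values on a few batches
student = mtq.quantize(student, mtq.NVFP4_DEFAULT_CFG, forward_loop=calibrate_fn)

# 3. Wrap the student and a frozen full-precision teacher for distillation
distill_config = {
    "teacher_model": load_ltx2_transformer(config.model_path),
    "criterion": distillation_loss,  # hypothetical teacher/student output loss
}
model = mtd.convert(student, mode=[("kd_loss", distill_config)])

# 4. Train with the LTX-2 trainer code; 5. save the bf16 checkpoint afterwards
trainer.fit(model)
```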

Usage

accelerate launch --config_file fsdp_custom.yaml sample_example_qad_diffusers.py train  --config ltx2_qad.yaml 
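For reference, an accelerate FSDP config of the kind `fsdp_custom.yaml` might contain looks like this. The values below are illustrative assumptions, not the file shipped in the PR:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_processes: 8
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_state_dict_type: SHARDED_STATE_DICT
```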

Testing

  1. Tested VBench score improvement for the PTQ and QAD checkpoints.

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: NA
  • Did you write any new necessary tests?: NA
  • Did you add or update any necessary documentation?: NA
  • Did you update Changelog?: NA

Additional Information

Summary by CodeRabbit

  • Documentation

    • Added comprehensive documentation for a new Quantization-Aware Distillation example on Windows, including setup instructions and usage guidance.
  • New Features

    • Added a complete training example with distributed training configuration and workflow implementation for model training.
    • Added package requirements specification for the example.

Signed-off-by: ynankani <ynankani@nvidia.com>
Signed-off-by: ynankani <ynankani@nvidia.com>
@ynankani ynankani requested a review from a team as a code owner February 25, 2026 13:29

coderabbitai bot commented Feb 25, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e589ac8 and b39e963.

📒 Files selected for processing (5)
  • examples/windows/torch_onnx/diffusers/qad_example/README.md
  • examples/windows/torch_onnx/diffusers/qad_example/fsdp_custom.yaml
  • examples/windows/torch_onnx/diffusers/qad_example/ltx2_qad.yaml
  • examples/windows/torch_onnx/diffusers/qad_example/requirements.txt
  • examples/windows/torch_onnx/diffusers/qad_example/sample_example_qad_diffusers.py

📝 Walkthrough

Walkthrough

This pull request introduces a new LTX-2 QAD (Quantization-Aware Distillation) example for Windows Torch ONNX diffusers. The addition includes a complete training workflow with quantization calibration, knowledge distillation, configuration files, dependencies, and comprehensive documentation.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Configuration Files: fsdp_custom.yaml, ltx2_qad.yaml | YAML configuration for distributed training with FSDP and LTX-2 QAD training parameters including model paths, optimization settings, quantization config, validation, and checkpoint management. |
| Documentation and Dependencies: README.md, requirements.txt | Documentation describing the QAD workflow, installation steps, project layout, and usage guidance; dependency list for LTX-2 components, accelerate, and NVIDIA ModelOpt. |
| Implementation: sample_example_qad_diffusers.py | Core QAD training implementation with LtxvQADTrainer class supporting PTQ calibration and knowledge distillation, state dict utilities for format detection and loading, amax metadata extraction, and inference checkpoint creation. |
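The state-dict utilities and amax metadata extraction mentioned above can be illustrated with a small sketch. The `._amax` key suffix follows ModelOpt's quantizer naming convention; the helper name and the toy state dict are hypothetical:

```python
def split_quantized_state_dict(state_dict):
    """Separate trainable weights from quantizer amax calibration metadata."""
    weights, amax_meta = {}, {}
    for key, value in state_dict.items():
        # ModelOpt stores calibration scales under keys ending in "._amax"
        if key.endswith("._amax"):
            amax_meta[key] = value
        else:
            weights[key] = value
    return weights, amax_meta

sd = {
    "blocks.0.attn.to_q.weight": [0.1, 0.2],
    "blocks.0.attn.to_q.input_quantizer._amax": 4.0,
}
weights, amax = split_quantized_state_dict(sd)
print(sorted(weights))  # ['blocks.0.attn.to_q.weight']
print(sorted(amax))     # ['blocks.0.attn.to_q.input_quantizer._amax']
```

A split like this is what allows the bf16 weights to be saved for inference while the amax scales are carried along separately for the real-quantization post-processing step.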

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LtxvQADTrainer
    participant Quantizer as ModelOpt<br/>(Quantizer)
    participant Student as Student<br/>Model
    participant Teacher as Teacher<br/>Model
    participant Checkpoint as Checkpoint<br/>Manager

    User->>LtxvQADTrainer: Initialize trainer
    LtxvQADTrainer->>Quantizer: Run PTQ calibration
    Quantizer->>Student: Calibrate weights
    Quantizer-->>LtxvQADTrainer: Calibration complete
    
    LtxvQADTrainer->>Teacher: Load teacher model
    LtxvQADTrainer->>Student: Wrap with DistillationModel
    
    LtxvQADTrainer->>Student: Training loop
    Student->>Student: Forward pass
    Student->>Teacher: Get teacher outputs
    Student->>Student: Compute training loss
    Student->>Student: Compute distillation loss
    Student->>Student: Combined loss backward
    
    LtxvQADTrainer->>Checkpoint: Save checkpoint
    Checkpoint->>Checkpoint: Extract amax metadata
    Checkpoint->>Checkpoint: Filter keys
    Checkpoint->>Checkpoint: Save as safetensors
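The combined-loss step in the sequence diagram can be sketched as a simple weighted sum. The `kd_weight` blending scheme below is an assumption for illustration; the example's actual loss balancer may differ:

```python
def combined_loss(task_loss, distill_loss, kd_weight=0.5):
    """Blend the student's task loss with the teacher-matching distillation loss."""
    return (1.0 - kd_weight) * task_loss + kd_weight * distill_loss

# Equal weighting of a 0.8 task loss and a 0.4 distillation loss
print(round(combined_loss(0.8, 0.4), 6))  # 0.6
```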

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'sample QAD example script' directly and accurately reflects the main purpose of the changeset: introducing a complete QAD (Quantization-Aware Distillation) example for LTX-2 diffusion model training. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


