[wip] switch to transformers main again. #12976
base: main
Changes from all commits: 039324a, c152b18, f8e50fa, c5e023f, d0f279c, 96f0804, ea90a74, 37cfcee, 926db24, cec0209, 7b55da8, 3dcb97c, 084c959, 4ea43ee, 7f2cd5b, 62bf2b0, 3513163, 387befd, 2fe9f98, e1249d2, c2d8273, a21a6ac, 5274ffd, 515dd06, 4dff318, 0eaa35f, 6e8e7ba, fefd0f4, b4b707e, 7a0739c, 2bee621, f9bdc09, 079e0e3, 7eb51e9, ea815e5, 5fefef9, 8568200, c3249d7, 10ef226
```diff
@@ -17,6 +17,9 @@
 import os
 import sys
 import tempfile
 import unittest
+
+from diffusers.utils import is_transformers_version
+
 sys.path.append("..")
@@ -30,6 +33,7 @@
 logger.addHandler(stream_handler)


+@unittest.skipIf(is_transformers_version(">=", "4.57.5"), "Size mismatch")
 class CustomDiffusion(ExamplesTestsAccelerate):
     def test_custom_diffusion(self):
         with tempfile.TemporaryDirectory() as tmpdir:
```

> **Member (Author)**, on the `@unittest.skipIf` line: Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768474502541349
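The `@unittest.skipIf(is_transformers_version(...))` gate above skips a test whenever the installed transformers version crosses a threshold. A minimal stdlib-only sketch of that pattern (the version comparator here is a simplified stand-in, not the real `is_transformers_version` helper, and `INSTALLED` is a hypothetical placeholder for the detected version):

```python
# Hedged sketch of version-gated test skipping; not the diffusers implementation.
import unittest


def parse_version(v: str) -> tuple:
    """Parse a plain dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))


def version_at_least(installed: str, required: str) -> bool:
    """True when `installed` >= `required`, compared component-wise."""
    return parse_version(installed) >= parse_version(required)


INSTALLED = "4.58.0"  # hypothetical stand-in for the detected transformers version


class ExampleTests(unittest.TestCase):
    @unittest.skipIf(version_at_least(INSTALLED, "4.57.5"), "Size mismatch")
    def test_custom_diffusion(self):
        # Skipped entirely when the newer library version is installed.
        pass
```

Gating at decoration time (rather than inside the test body) keeps the skip reason visible in the test report.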
```diff
@@ -44,6 +44,7 @@
     torch.nn.ConvTranspose2d,
     torch.nn.ConvTranspose3d,
     torch.nn.Linear,
+    torch.nn.Embedding,
     # TODO(aryan): look into torch.nn.LayerNorm, torch.nn.GroupNorm later, seems to be causing some issues with CogVideoX
     # because of double invocation of the same norm layer in CogVideoXLayerNorm
 )
```

> **Member (Author)**, on the `torch.nn.Embedding` line: Happening because of the way weight loading is done in v5.
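The tuple above is an allow-list: a per-layer transform is applied only to layer types that appear in it, which is why adding `torch.nn.Embedding` changes behavior. A stdlib-only sketch of that membership-gating pattern (the classes here are illustrative stand-ins, not the real `torch.nn` modules):

```python
# Hedged sketch of allow-list gating by layer type; classes are stand-ins.
class Linear: ...
class Embedding: ...
class LayerNorm: ...  # deliberately left off the list, as in the TODO above


# isinstance() accepts a tuple, so one check covers every supported type.
SUPPORTED_LAYERS = (Linear, Embedding)


def should_transform(layer: object) -> bool:
    """Apply the per-layer transform only to allow-listed layer types."""
    return isinstance(layer, SUPPORTED_LAYERS)
```

Extending support is then a one-line change (appending to the tuple), exactly as in the diff above.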
```diff
@@ -287,6 +287,9 @@ def _get_prompt_embeds(
                 truncation=True,
                 padding="max_length",
             )
+            input_ids = (
+                input_ids["input_ids"] if not isinstance(input_ids, list) and "input_ids" in input_ids else input_ids
+            )
             input_ids = torch.LongTensor(input_ids)
             input_ids_batch.append(input_ids)
```

> **Member (Author)**, on lines +290 to +292: Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768537424692669
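The added lines normalize the tokenizer output: depending on the transformers version, the call may return a bare list of ids or a dict-like `BatchEncoding`, and the ids must be unwrapped before tensor conversion. A stdlib-only sketch of that normalization (a plain dict stands in for `BatchEncoding`; the function name is illustrative):

```python
# Hedged sketch of the input_ids unwrapping added above; dicts stand in
# for transformers' BatchEncoding, which also supports "in" and [] access.
def normalize_input_ids(input_ids):
    """Unwrap {"input_ids": [...]} into the bare id list; pass lists through."""
    if not isinstance(input_ids, list) and "input_ids" in input_ids:
        return input_ids["input_ids"]
    return input_ids
```

Either shape then feeds the same downstream `torch.LongTensor(...)` conversion unchanged.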
```diff
@@ -20,13 +20,17 @@ def test_load_from_config_diffusers_with_subfolder(self, mock_load_config):
         side_effect=[EnvironmentError("File not found"), {"model_type": "clip_text_model"}],
     )
     def test_load_from_config_transformers_with_subfolder(self, mock_load_config):
-        model = AutoModel.from_pretrained("hf-internal-testing/tiny-stable-diffusion-torch", subfolder="text_encoder")
+        model = AutoModel.from_pretrained(
+            "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="text_encoder", use_safetensors=False
+        )
         assert isinstance(model, CLIPTextModel)

     def test_load_from_config_without_subfolder(self):
         model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-longformer")
         assert isinstance(model, LongformerModel)

     def test_load_from_model_index(self):
-        model = AutoModel.from_pretrained("hf-internal-testing/tiny-stable-diffusion-torch", subfolder="text_encoder")
+        model = AutoModel.from_pretrained(
+            "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="text_encoder", use_safetensors=False
+        )
         assert isinstance(model, CLIPTextModel)
```

> **Member (Author)**, on lines +23 to +25: Internal discussion: https://huggingface.slack.com/archives/C014N4749J9/p1768462040821759
```diff
@@ -19,7 +19,7 @@
 import numpy as np
 import torch
 from huggingface_hub import hf_hub_download
-from transformers import T5EncoderModel, T5TokenizerFast
+from transformers import AutoConfig, T5EncoderModel, T5TokenizerFast

 from diffusers import (
     AutoencoderKL,
@@ -89,7 +89,8 @@ def get_dummy_components(self):
         scheduler = FlowMatchEulerDiscreteScheduler()

         torch.manual_seed(0)
-        text_encoder = T5EncoderModel.from_pretrained("hf-internal-testing/tiny-random-t5")
+        config = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-t5")
+        text_encoder = T5EncoderModel(config)
         tokenizer = T5TokenizerFast.from_pretrained("hf-internal-testing/tiny-random-t5")

         components = {
```

> **Member (Author)**, on lines +92 to +93: Temporary. For this PR.
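The last change swaps `from_pretrained` (download checkpoint, load weights) for construction from a config alone, which yields randomly initialized weights: enough for a dummy-components pipeline test and immune to checkpoint-format mismatches. A stdlib-only sketch of that pattern, with hypothetical class names standing in for the transformers config/model pair:

```python
# Hedged sketch: building a model from its config gives seeded random weights
# with no checkpoint involved. TinyConfig/TinyEncoder are illustrative names.
import random


class TinyConfig:
    def __init__(self, d_model: int = 8):
        self.d_model = d_model


class TinyEncoder:
    def __init__(self, config: TinyConfig):
        # Random init driven only by the config; nothing is downloaded.
        self.config = config
        self.weights = [random.random() for _ in range(config.d_model)]


random.seed(0)  # mirrors torch.manual_seed(0) so the dummy weights are reproducible
encoder = TinyEncoder(TinyConfig(d_model=4))
```

The tokenizer, by contrast, still comes from `from_pretrained` in the diff above, since tokenizer files carry no weights and are unaffected by the loading change.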