h1t committed on
Commit d0a7bb3
1 Parent(s): c273194
.gitignore ADDED
@@ -0,0 +1,160 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
README.md CHANGED
@@ -1,13 +1,23 @@
- ---
- title: Oms Sdxl Lcm
- emoji: 🚀
- colorFrom: gray
- colorTo: indigo
- sdk: gradio
- sdk_version: 4.7.1
- app_file: app.py
- pinned: false
- license: openrail++
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls
+
+ The One More Step (OMS) module was proposed in [One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls](https://github.com/mhh0318/OneMoreStep)
+ by *Minghui Hu, Jianbin Zheng, Chuanxia Zheng, Tat-Jen Cham et al.*
+
+ By incorporating **one minor, additional step** atop the existing sampling process, OMS addresses inherent limitations in the diffusion schedule of current diffusion models. Crucially, this augmentation requires no changes to the original parameters of the model. Furthermore, the OMS module enhances control over low-frequency elements, such as color, within the generated images.
+
+ Our model is **versatile** and allows for **seamless integration** with a broad spectrum of prevalent Stable Diffusion frameworks. It is compatible with community-favored tools and techniques, including LoRA, ControlNet, Adapter, and other foundational models, underscoring its utility and adaptability in diverse applications.
+
+ ## Usage
+
+ OMS is now supported in `diffusers` through a customized pipeline, as detailed on [GitHub](https://github.com/mhh0318/OneMoreStep). To run the model, first install the latest version of `diffusers` (especially for the `LCM` features) as well as `accelerate` and `transformers`.
+
+ ```bash
+ pip install --upgrade pip
+ pip install --upgrade diffusers transformers accelerate
+ ```
+
+ Then clone the repo:
+ ```bash
+ git clone https://github.com/mhh0318/OneMoreStep.git
+ cd OneMoreStep
+ ```
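With the repo cloned, the OMS checkpoint hosted in this Space can be wrapped around an SDXL base pipeline. The snippet below is a minimal sketch distilled from `app.py` in this commit (it reuses the same model IDs, LCM-LoRA weights, and `OMSPipeline` keyword arguments as the demo), not an authoritative API reference:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler
from diffusers_patch import OMSPipeline  # shipped with the OneMoreStep repo / this Space

# SDXL base pipeline switched to the LCM scheduler with LCM-LoRA weights
sd_pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", add_watermarker=False,
).to("cuda")
sd_scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config)
sd_pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", variant="fp16")

# Wrap the base pipeline with the OMS module
pipe = OMSPipeline.from_pretrained(
    "h1t/oms_b_openclip_xl",
    sd_pipeline=sd_pipe, sd_scheduler=sd_scheduler,
    torch_dtype=torch.float16, variant="fp16", trust_remote_code=True,
)
pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(1024)
image = pipe(
    "a cat", oms_flag=True, oms_prompt="orange car", oms_guidance_scale=1.5,
    num_inference_steps=4, guidance_scale=1.0, generator=generator,
)["images"][0]
image.save("oms_demo.png")
```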
app.py ADDED
@@ -0,0 +1,103 @@
+ import torch
+ import gradio as gr
+ from functools import partial
+
+ from diffusers_patch import OMSPipeline
+
+
+ def create_sdxl_lcm_lora_pipe(sd_pipe_name_or_path, oms_name_or_path, lora_name_or_path):
+     from diffusers import StableDiffusionXLPipeline, LCMScheduler
+     sd_pipe = StableDiffusionXLPipeline.from_pretrained(sd_pipe_name_or_path, torch_dtype=torch.float16, variant="fp16", add_watermarker=False).to('cuda')
+     print('successfully loaded pipe')
+     sd_scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config)
+     sd_pipe.load_lora_weights(lora_name_or_path, variant="fp16")
+
+     pipe = OMSPipeline.from_pretrained(oms_name_or_path, sd_pipeline=sd_pipe, torch_dtype=torch.float16, variant="fp16", trust_remote_code=True, sd_scheduler=sd_scheduler)
+     pipe.to('cuda')
+
+     return pipe
+
+
+ class GradioDemo:
+     def __init__(
+         self,
+         sd_pipe_name_or_path = "stabilityai/stable-diffusion-xl-base-1.0",
+         oms_name_or_path = 'h1t/oms_b_openclip_xl',
+         lora_name_or_path = 'latent-consistency/lcm-lora-sdxl'
+     ):
+         self.pipe = create_sdxl_lcm_lora_pipe(sd_pipe_name_or_path, oms_name_or_path, lora_name_or_path)
+
+     def _inference(
+         self,
+         prompt = None,
+         oms_prompt = None,
+         oms_guidance_scale = 1.0,
+         num_inference_steps = 4,
+         sd_pipe_guidance_scale = 1.0,
+         seed = 1024,
+     ):
+         pipe_kwargs = dict(
+             prompt = prompt,
+             num_inference_steps = num_inference_steps,
+             guidance_scale = sd_pipe_guidance_scale,
+         )
+
+         generator = torch.Generator(device=self.pipe.device).manual_seed(seed)
+         pipe_kwargs.update(oms_flag=False)
+         print(f'raw kwargs: {pipe_kwargs}')
+         image_raw = self.pipe(
+             **pipe_kwargs,
+             generator=generator
+         )['images'][0]
+
+         generator = torch.Generator(device=self.pipe.device).manual_seed(seed)
+         pipe_kwargs.update(oms_flag=True, oms_prompt=oms_prompt, oms_guidance_scale=1.0)
+         print(f'w/ oms wo/ cfg kwargs: {pipe_kwargs}')
+         image_oms = self.pipe(
+             **pipe_kwargs,
+             generator=generator
+         )['images'][0]
+
+         oms_guidance_flag = oms_guidance_scale != 1.0
+         if oms_guidance_flag:
+             generator = torch.Generator(device=self.pipe.device).manual_seed(seed)
+             pipe_kwargs.update(oms_guidance_scale=oms_guidance_scale)
+             print(f'w/ oms +cfg kwargs: {pipe_kwargs}')
+             image_oms_cfg = self.pipe(
+                 **pipe_kwargs,
+                 generator=generator
+             )['images'][0]
+         else:
+             image_oms_cfg = None
+
+         return image_raw, image_oms, image_oms_cfg, gr.update(visible=oms_guidance_flag)
+
+     def mainloop(self):
+         with gr.Blocks() as demo:
+             gr.Markdown("# One More Step Demo")
+             with gr.Row():
+                 with gr.Column():
+                     prompt = gr.Textbox(label="Prompt", value="a cat")
+                     oms_prompt = gr.Textbox(label="OMS Prompt", value="orange car")
+                     oms_guidance_scale = gr.Slider(label="OMS Guidance Scale", minimum=1.0, maximum=5.0, value=1.5, step=0.1)
+                     run_button = gr.Button(value="Generate images")
+                     with gr.Accordion("Advanced options", open=False):
+                         num_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=4, step=1)
+                         sd_guidance_scale = gr.Slider(label="SD Pipe Guidance Scale", minimum=0.1, maximum=30.0, value=1.0, step=0.1)
+                         seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=False, value=1024)
+                 with gr.Column():
+                     output_raw = gr.Image(label="SDXL w/ LCM-LoRA w/o OMS")
+                     output_oms = gr.Image(label="w/ OMS w/o OMS CFG")
+                     with gr.Column(visible=False) as oms_cfg_wd:
+                         output_oms_cfg = gr.Image(label="w/ OMS w/ OMS CFG")
+
+             ips = [prompt, oms_prompt, oms_guidance_scale, num_steps, sd_guidance_scale, seed]
+             run_button.click(fn=self._inference, inputs=ips, outputs=[output_raw, output_oms, output_oms_cfg, oms_cfg_wd])
+
+         demo.queue(max_size=20)
+         demo.launch()
+
+
+ if __name__ == "__main__":
+     gradio_demo = GradioDemo()
+     gradio_demo.mainloop()
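The `GradioDemo` class above is the entire demo: its constructor builds the SDXL + LCM-LoRA + OMS pipeline and `mainloop()` serves the UI. A minimal sketch for reusing it outside the `__main__` guard, assuming the default checkpoints above are reachable and a CUDA device is available:

```python
from app import GradioDemo

demo = GradioDemo()  # downloads/loads SDXL, LCM-LoRA and the OMS module onto CUDA
demo.mainloop()      # builds the Blocks UI, enables queuing, and launches the server
```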
diffusers_patch/__init__.py ADDED
@@ -0,0 +1 @@
+ from .pipelines.oms import OMSPipeline
diffusers_patch/models/unet_2d_condition_woct.py ADDED
@@ -0,0 +1,756 @@
1
+ # Copyright 2023 The HuggingFace Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ from dataclasses import dataclass
15
+ from typing import Any, Dict, List, Optional, Tuple, Union
16
+
17
+ import torch
18
+ import torch.nn as nn
19
+ import torch.utils.checkpoint
20
+
21
+ from diffusers.configuration_utils import ConfigMixin, register_to_config
22
+ from diffusers.loaders import UNet2DConditionLoadersMixin
23
+ from diffusers.utils import BaseOutput, logging
24
+ from diffusers.models.activations import get_activation
25
+ from diffusers.models.attention_processor import AttentionProcessor, AttnProcessor
26
+ from diffusers.models.embeddings import (
27
+ GaussianFourierProjection,
28
+ ImageHintTimeEmbedding,
29
+ ImageProjection,
30
+ ImageTimeEmbedding,
31
+ TextImageProjection,
32
+ TextImageTimeEmbedding,
33
+ TextTimeEmbedding,
34
+ TimestepEmbedding,
35
+ Timesteps,
36
+ )
37
+ from diffusers.models.modeling_utils import ModelMixin
38
+ from diffusers.models.unet_2d_blocks import (
39
+ CrossAttnDownBlock2D,
40
+ CrossAttnUpBlock2D,
41
+ DownBlock2D,
42
+ UNetMidBlock2DCrossAttn,
43
+ UNetMidBlock2DSimpleCrossAttn,
44
+ UpBlock2D,
45
+ get_down_block,
46
+ get_up_block,
47
+ )
48
+
49
+
50
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
51
+
52
+
53
+ @dataclass
54
+ class UNet2DConditionOutput(BaseOutput):
55
+ """
56
+ The output of [`UNet2DConditionModel`].
57
+
58
+ Args:
59
+ sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
60
+ The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.
61
+ """
62
+
63
+ sample: torch.FloatTensor = None
64
+
65
+
66
+ class UNet2DConditionWoCTModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin):
67
+ r"""
68
+ A conditional 2D UNet model that takes a noisy sample and a conditional state, but no timestep, and returns a sample-
69
+ shaped output.
70
+
71
+ This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
72
+ for all models (such as downloading or saving).
73
+
74
+ Parameters:
75
+ sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`):
76
+ Height and width of input/output sample.
77
+ in_channels (`int`, *optional*, defaults to 4): Number of channels in the input sample.
78
+ out_channels (`int`, *optional*, defaults to 4): Number of channels in the output.
79
+ center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample.
80
+ down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`):
81
+ The tuple of downsample blocks to use.
82
+ mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`):
83
+ Block type for middle of UNet, it can be either `UNetMidBlock2DCrossAttn` or
84
+ `UNetMidBlock2DSimpleCrossAttn`. If `None`, the mid block layer is skipped.
85
+ up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`):
86
+ The tuple of upsample blocks to use.
87
+ only_cross_attention(`bool` or `Tuple[bool]`, *optional*, default to `False`):
88
+ Whether to include self-attention in the basic transformer blocks, see
89
+ [`~models.attention.BasicTransformerBlock`].
90
+ block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`):
91
+ The tuple of output channels for each block.
92
+ layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block.
93
+ downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution.
94
+ mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block.
95
+ act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
96
+ norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization.
97
+ If `None`, normalization and activation layers is skipped in post-processing.
98
+ norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization.
99
+ cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280):
100
+ The dimension of the cross attention features.
101
+ transformer_layers_per_block (`int` or `Tuple[int]`, *optional*, defaults to 1):
102
+ The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`]. Only relevant for
103
+ [`~models.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unet_2d_blocks.CrossAttnUpBlock2D`],
104
+ [`~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`].
105
+ encoder_hid_dim (`int`, *optional*, defaults to None):
106
+ If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim`
107
+ dimension to `cross_attention_dim`.
108
+ encoder_hid_dim_type (`str`, *optional*, defaults to `None`):
109
+ If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text
110
+ embeddings of dimension `cross_attention` according to `encoder_hid_dim_type`.
111
+ attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads.
112
+ num_attention_heads (`int`, *optional*):
113
+ The number of attention heads. If not defined, defaults to `attention_head_dim`
114
+ conv_in_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_in` layer.
115
+ conv_out_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_out` layer.
116
+ mid_block_only_cross_attention (`bool`, *optional*, defaults to `None`):
117
+ Whether to use cross attention with the mid block when using the `UNetMidBlock2DSimpleCrossAttn`. If
118
+ `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the
119
+ `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Default to `False`
120
+ otherwise.
121
+ """
122
+
123
+ _supports_gradient_checkpointing = True
124
+
125
+ @register_to_config
126
+ def __init__(
127
+ self,
128
+ sample_size: Optional[int] = None,
129
+ in_channels: int = 4,
130
+ out_channels: int = 4,
131
+ center_input_sample: bool = False,
132
+ down_block_types: Tuple[str] = (
133
+ "CrossAttnDownBlock2D",
134
+ "CrossAttnDownBlock2D",
135
+ "CrossAttnDownBlock2D",
136
+ "DownBlock2D",
137
+ ),
138
+ mid_block_type: Optional[str] = "UNetMidBlock2DCrossAttn",
139
+ up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"),
140
+ only_cross_attention: Union[bool, Tuple[bool]] = False,
141
+ block_out_channels: Tuple[int] = (320, 640, 1280, 1280),
142
+ layers_per_block: Union[int, Tuple[int]] = 2,
143
+ downsample_padding: int = 1,
144
+ mid_block_scale_factor: float = 1,
145
+ act_fn: str = "silu",
146
+ norm_num_groups: Optional[int] = 32,
147
+ norm_eps: float = 1e-5,
148
+ cross_attention_dim: Union[int, Tuple[int]] = 1280,
149
+ transformer_layers_per_block: Union[int, Tuple[int]] = 1,
150
+ encoder_hid_dim: Optional[int] = None,
151
+ encoder_hid_dim_type: Optional[str] = None,
152
+ attention_head_dim: Union[int, Tuple[int]] = 8,
153
+ num_attention_heads: Optional[Union[int, Tuple[int]]] = None,
154
+ dual_cross_attention: bool = False,
155
+ use_linear_projection: bool = False,
156
+ upcast_attention: bool = False,
157
+ resnet_out_scale_factor: int = 1.0,
158
+ conv_in_kernel: int = 3,
159
+ conv_out_kernel: int = 3,
160
+ mid_block_only_cross_attention: Optional[bool] = None,
161
+ cross_attention_norm: Optional[str] = None,
162
+ ):
163
+ super().__init__()
164
+
165
+ self.sample_size = sample_size
166
+
167
+ if num_attention_heads is not None:
168
+ raise ValueError(
169
+ "At the moment it is not possible to define the number of attention heads via `num_attention_heads` because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing `num_attention_heads` will only be supported in diffusers v0.19."
170
+ )
171
+
172
+ # If `num_attention_heads` is not defined (which is the case for most models)
173
+ # it will default to `attention_head_dim`. This looks weird upon first reading it and it is.
174
+ # The reason for this behavior is to correct for incorrectly named variables that were introduced
175
+ # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131
176
+ # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking
177
+ # which is why we correct for the naming here.
178
+ num_attention_heads = num_attention_heads or attention_head_dim
179
+
180
+ # Check inputs
181
+ if len(down_block_types) != len(up_block_types):
182
+ raise ValueError(
183
+ f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}."
184
+ )
185
+
186
+ if len(block_out_channels) != len(down_block_types):
187
+ raise ValueError(
188
+ f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}."
189
+ )
190
+
191
+ if not isinstance(only_cross_attention, bool) and len(only_cross_attention) != len(down_block_types):
192
+ raise ValueError(
193
+ f"Must provide the same number of `only_cross_attention` as `down_block_types`. `only_cross_attention`: {only_cross_attention}. `down_block_types`: {down_block_types}."
194
+ )
195
+
196
+ if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types):
197
+ raise ValueError(
198
+ f"Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`: {num_attention_heads}. `down_block_types`: {down_block_types}."
199
+ )
200
+
201
+ if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types):
202
+ raise ValueError(
203
+ f"Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`: {attention_head_dim}. `down_block_types`: {down_block_types}."
204
+ )
205
+
206
+ if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types):
207
+ raise ValueError(
208
+ f"Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`: {cross_attention_dim}. `down_block_types`: {down_block_types}."
209
+ )
210
+
211
+ if not isinstance(layers_per_block, int) and len(layers_per_block) != len(down_block_types):
212
+ raise ValueError(
213
+ f"Must provide the same number of `layers_per_block` as `down_block_types`. `layers_per_block`: {layers_per_block}. `down_block_types`: {down_block_types}."
214
+ )
215
+
216
+ # input
217
+ conv_in_padding = (conv_in_kernel - 1) // 2
218
+ self.conv_in = nn.Conv2d(
219
+ in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding
220
+ )
221
+
222
+ if encoder_hid_dim_type is None and encoder_hid_dim is not None:
223
+ encoder_hid_dim_type = "text_proj"
224
+ self.register_to_config(encoder_hid_dim_type=encoder_hid_dim_type)
225
+ logger.info("encoder_hid_dim_type defaults to 'text_proj' as `encoder_hid_dim` is defined.")
226
+
227
+ if encoder_hid_dim is None and encoder_hid_dim_type is not None:
228
+ raise ValueError(
229
+ f"`encoder_hid_dim` has to be defined when `encoder_hid_dim_type` is set to {encoder_hid_dim_type}."
230
+ )
231
+
232
+ if encoder_hid_dim_type == "text_proj":
233
+ self.encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim)
234
+ elif encoder_hid_dim_type == "text_image_proj":
235
+ # image_embed_dim DOESN'T have to be `cross_attention_dim`. To not clutter the __init__ too much
236
+ # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use
237
+ # case when `addition_embed_type == "text_image_proj"` (Kandinsky 2.1)
238
+ self.encoder_hid_proj = TextImageProjection(
239
+ text_embed_dim=encoder_hid_dim,
240
+ image_embed_dim=cross_attention_dim,
241
+ cross_attention_dim=cross_attention_dim,
242
+ )
243
+ elif encoder_hid_dim_type == "image_proj":
244
+ # Kandinsky 2.2
245
+ self.encoder_hid_proj = ImageProjection(
246
+ image_embed_dim=encoder_hid_dim,
247
+ cross_attention_dim=cross_attention_dim,
248
+ )
249
+ elif encoder_hid_dim_type is not None:
250
+ raise ValueError(
251
+ f"encoder_hid_dim_type: {encoder_hid_dim_type} must be None, 'text_proj' or 'text_image_proj'."
252
+ )
253
+ else:
254
+ self.encoder_hid_proj = None
255
+
256
+ self.down_blocks = nn.ModuleList([])
257
+ self.up_blocks = nn.ModuleList([])
258
+
259
+ if isinstance(only_cross_attention, bool):
260
+ if mid_block_only_cross_attention is None:
261
+ mid_block_only_cross_attention = only_cross_attention
262
+
263
+ only_cross_attention = [only_cross_attention] * len(down_block_types)
264
+
265
+ if mid_block_only_cross_attention is None:
266
+ mid_block_only_cross_attention = False
267
+
268
+ if isinstance(num_attention_heads, int):
269
+ num_attention_heads = (num_attention_heads,) * len(down_block_types)
270
+
271
+ if isinstance(attention_head_dim, int):
272
+ attention_head_dim = (attention_head_dim,) * len(down_block_types)
273
+
274
+ if isinstance(cross_attention_dim, int):
275
+ cross_attention_dim = (cross_attention_dim,) * len(down_block_types)
276
+
277
+ if isinstance(layers_per_block, int):
278
+ layers_per_block = [layers_per_block] * len(down_block_types)
279
+
280
+ if isinstance(transformer_layers_per_block, int):
281
+ transformer_layers_per_block = [transformer_layers_per_block] * len(down_block_types)
282
+
283
+ # disable time cond
284
+ time_embed_dim = None
285
+ blocks_time_embed_dim = time_embed_dim
286
+ resnet_time_scale_shift = None
287
+ resnet_skip_time_act = False
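+ # With time_embed_dim=None the blocks are constructed with temb_channels=None and forward() passes emb=None,
+ # so this UNet is conditioned only on encoder_hidden_states and carries no timestep embedding ("WoCT").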
288
+
289
+ # down
290
+ output_channel = block_out_channels[0]
291
+ for i, down_block_type in enumerate(down_block_types):
292
+ input_channel = output_channel
293
+ output_channel = block_out_channels[i]
294
+ is_final_block = i == len(block_out_channels) - 1
295
+
296
+ down_block = get_down_block(
297
+ down_block_type,
298
+ num_layers=layers_per_block[i],
299
+ transformer_layers_per_block=transformer_layers_per_block[i],
300
+ in_channels=input_channel,
301
+ out_channels=output_channel,
302
+ temb_channels=blocks_time_embed_dim,
303
+ add_downsample=not is_final_block,
304
+ resnet_eps=norm_eps,
305
+ resnet_act_fn=act_fn,
306
+ resnet_groups=norm_num_groups,
307
+ cross_attention_dim=cross_attention_dim[i],
308
+ num_attention_heads=num_attention_heads[i],
309
+ downsample_padding=downsample_padding,
310
+ dual_cross_attention=dual_cross_attention,
311
+ use_linear_projection=use_linear_projection,
312
+ only_cross_attention=only_cross_attention[i],
313
+ upcast_attention=upcast_attention,
314
+ resnet_time_scale_shift=resnet_time_scale_shift,
315
+ resnet_skip_time_act=resnet_skip_time_act,
316
+ resnet_out_scale_factor=resnet_out_scale_factor,
317
+ cross_attention_norm=cross_attention_norm,
318
+ attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel,
319
+ )
320
+ self.down_blocks.append(down_block)
321
+
322
+ # mid
323
+ if mid_block_type == "UNetMidBlock2DCrossAttn":
324
+ self.mid_block = UNetMidBlock2DCrossAttn(
325
+ transformer_layers_per_block=transformer_layers_per_block[-1],
326
+ in_channels=block_out_channels[-1],
327
+ temb_channels=blocks_time_embed_dim,
328
+ resnet_eps=norm_eps,
329
+ resnet_act_fn=act_fn,
330
+ output_scale_factor=mid_block_scale_factor,
331
+ resnet_time_scale_shift=resnet_time_scale_shift,
332
+ cross_attention_dim=cross_attention_dim[-1],
333
+ num_attention_heads=num_attention_heads[-1],
334
+ resnet_groups=norm_num_groups,
335
+ dual_cross_attention=dual_cross_attention,
336
+ use_linear_projection=use_linear_projection,
337
+ upcast_attention=upcast_attention,
338
+ )
339
+ elif mid_block_type == "UNetMidBlock2DSimpleCrossAttn":
340
+ self.mid_block = UNetMidBlock2DSimpleCrossAttn(
341
+ in_channels=block_out_channels[-1],
342
+ temb_channels=blocks_time_embed_dim,
343
+ resnet_eps=norm_eps,
344
+ resnet_act_fn=act_fn,
345
+ output_scale_factor=mid_block_scale_factor,
346
+ cross_attention_dim=cross_attention_dim[-1],
347
+ attention_head_dim=attention_head_dim[-1],
348
+ resnet_groups=norm_num_groups,
349
+ resnet_time_scale_shift=resnet_time_scale_shift,
350
+ skip_time_act=resnet_skip_time_act,
351
+ only_cross_attention=mid_block_only_cross_attention,
352
+ cross_attention_norm=cross_attention_norm,
353
+ )
354
+ elif mid_block_type is None:
355
+ self.mid_block = None
356
+ else:
357
+ raise ValueError(f"unknown mid_block_type : {mid_block_type}")
358
+
359
+ # count how many layers upsample the images
360
+ self.num_upsamplers = 0
361
+
362
+ # up
363
+ reversed_block_out_channels = list(reversed(block_out_channels))
364
+ reversed_num_attention_heads = list(reversed(num_attention_heads))
365
+ reversed_layers_per_block = list(reversed(layers_per_block))
366
+ reversed_cross_attention_dim = list(reversed(cross_attention_dim))
367
+ reversed_transformer_layers_per_block = list(reversed(transformer_layers_per_block))
368
+ only_cross_attention = list(reversed(only_cross_attention))
369
+
370
+ output_channel = reversed_block_out_channels[0]
371
+ for i, up_block_type in enumerate(up_block_types):
372
+ is_final_block = i == len(block_out_channels) - 1
373
+
374
+ prev_output_channel = output_channel
375
+ output_channel = reversed_block_out_channels[i]
376
+ input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)]
377
+
378
+ # add upsample block for all BUT final layer
379
+ if not is_final_block:
380
+ add_upsample = True
381
+ self.num_upsamplers += 1
382
+ else:
383
+ add_upsample = False
384
+
385
+ up_block = get_up_block(
386
+ up_block_type,
387
+ num_layers=reversed_layers_per_block[i] + 1,
388
+ transformer_layers_per_block=reversed_transformer_layers_per_block[i],
389
+ in_channels=input_channel,
390
+ out_channels=output_channel,
391
+ prev_output_channel=prev_output_channel,
392
+ temb_channels=blocks_time_embed_dim,
393
+ add_upsample=add_upsample,
394
+ resnet_eps=norm_eps,
395
+ resnet_act_fn=act_fn,
396
+ resnet_groups=norm_num_groups,
397
+ cross_attention_dim=reversed_cross_attention_dim[i],
398
+ num_attention_heads=reversed_num_attention_heads[i],
399
+ dual_cross_attention=dual_cross_attention,
400
+ use_linear_projection=use_linear_projection,
401
+ only_cross_attention=only_cross_attention[i],
402
+ upcast_attention=upcast_attention,
403
+ resnet_time_scale_shift=resnet_time_scale_shift,
404
+ resnet_skip_time_act=resnet_skip_time_act,
405
+ resnet_out_scale_factor=resnet_out_scale_factor,
406
+ cross_attention_norm=cross_attention_norm,
407
+ attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel,
408
+ )
409
+ self.up_blocks.append(up_block)
410
+ prev_output_channel = output_channel
411
+
412
+ # out
413
+ if norm_num_groups is not None:
414
+ self.conv_norm_out = nn.GroupNorm(
415
+ num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps
416
+ )
417
+
418
+ self.conv_act = get_activation(act_fn)
419
+
420
+ else:
421
+ self.conv_norm_out = None
422
+ self.conv_act = None
423
+
424
+ conv_out_padding = (conv_out_kernel - 1) // 2
425
+ self.conv_out = nn.Conv2d(
426
+ block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding
427
+ )
428
+
429
+ @property
430
+ def attn_processors(self) -> Dict[str, AttentionProcessor]:
431
+ r"""
432
+ Returns:
433
+ `dict` of attention processors: A dictionary containing all attention processors used in the model with
434
+ indexed by its weight name.
435
+ """
436
+ # set recursively
437
+ processors = {}
438
+
439
+ def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]):
440
+ if hasattr(module, "set_processor"):
441
+ processors[f"{name}.processor"] = module.processor
442
+
443
+ for sub_name, child in module.named_children():
444
+ fn_recursive_add_processors(f"{name}.{sub_name}", child, processors)
445
+
446
+ return processors
447
+
448
+ for name, module in self.named_children():
449
+ fn_recursive_add_processors(name, module, processors)
450
+
451
+ return processors
452
+
453
+ def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]):
454
+ r"""
455
+ Sets the attention processor to use to compute attention.
456
+
457
+ Parameters:
458
+ processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`):
459
+ The instantiated processor class or a dictionary of processor classes that will be set as the processor
460
+ for **all** `Attention` layers.
461
+
462
+ If `processor` is a dict, the key needs to define the path to the corresponding cross attention
463
+ processor. This is strongly recommended when setting trainable attention processors.
464
+
465
+ """
466
+ count = len(self.attn_processors.keys())
467
+
468
+ if isinstance(processor, dict) and len(processor) != count:
469
+ raise ValueError(
470
+ f"A dict of processors was passed, but the number of processors {len(processor)} does not match the"
471
+ f" number of attention layers: {count}. Please make sure to pass {count} processor classes."
472
+ )
473
+
474
+ def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor):
475
+ if hasattr(module, "set_processor"):
476
+ if not isinstance(processor, dict):
477
+ module.set_processor(processor)
478
+ else:
479
+ module.set_processor(processor.pop(f"{name}.processor"))
480
+
481
+ for sub_name, child in module.named_children():
482
+ fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)
483
+
484
+ for name, module in self.named_children():
485
+ fn_recursive_attn_processor(name, module, processor)
486
+
487
+ def set_default_attn_processor(self):
488
+ """
489
+ Disables custom attention processors and sets the default attention implementation.
490
+ """
491
+ self.set_attn_processor(AttnProcessor())
492
+
493
+ def set_attention_slice(self, slice_size):
494
+ r"""
495
+ Enable sliced attention computation.
496
+
497
+ When this option is enabled, the attention module splits the input tensor in slices to compute attention in
498
+ several steps. This is useful for saving some memory in exchange for a small decrease in speed.
499
+
500
+ Args:
501
+ slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
502
+ When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
503
+ `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
504
+ provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
505
+ must be a multiple of `slice_size`.
506
+ """
507
+ sliceable_head_dims = []
508
+
509
+ def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module):
510
+ if hasattr(module, "set_attention_slice"):
511
+ sliceable_head_dims.append(module.sliceable_head_dim)
512
+
513
+ for child in module.children():
514
+ fn_recursive_retrieve_sliceable_dims(child)
515
+
516
+ # retrieve number of attention layers
517
+ for module in self.children():
518
+ fn_recursive_retrieve_sliceable_dims(module)
519
+
520
+ num_sliceable_layers = len(sliceable_head_dims)
521
+
522
+ if slice_size == "auto":
523
+ # half the attention head size is usually a good trade-off between
524
+ # speed and memory
525
+ slice_size = [dim // 2 for dim in sliceable_head_dims]
526
+ elif slice_size == "max":
527
+ # make smallest slice possible
528
+ slice_size = num_sliceable_layers * [1]
529
+
530
+ slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size
531
+
532
+ if len(slice_size) != len(sliceable_head_dims):
533
+ raise ValueError(
534
+ f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different"
535
+ f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}."
536
+ )
537
+
538
+ for i in range(len(slice_size)):
539
+ size = slice_size[i]
540
+ dim = sliceable_head_dims[i]
541
+ if size is not None and size > dim:
542
+ raise ValueError(f"size {size} has to be smaller or equal to {dim}.")
543
+
544
+ # Recursively walk through all the children.
545
+ # Any children which exposes the set_attention_slice method
546
+ # gets the message
547
+ def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]):
548
+ if hasattr(module, "set_attention_slice"):
549
+ module.set_attention_slice(slice_size.pop())
550
+
551
+ for child in module.children():
552
+ fn_recursive_set_attention_slice(child, slice_size)
553
+
554
+ reversed_slice_size = list(reversed(slice_size))
555
+ for module in self.children():
556
+ fn_recursive_set_attention_slice(module, reversed_slice_size)
557
+
558
+ def _set_gradient_checkpointing(self, module, value=False):
559
+ if isinstance(module, (CrossAttnDownBlock2D, DownBlock2D, CrossAttnUpBlock2D, UpBlock2D)):
560
+ module.gradient_checkpointing = value
561
+
562
+ def forward(
563
+ self,
564
+ sample: torch.FloatTensor,
565
+ encoder_hidden_states: torch.Tensor,
566
+ attention_mask: Optional[torch.Tensor] = None,
567
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
568
+ added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,
569
+ down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None,
570
+ mid_block_additional_residual: Optional[torch.Tensor] = None,
571
+ encoder_attention_mask: Optional[torch.Tensor] = None,
572
+ return_dict: bool = True,
573
+ ) -> Union[UNet2DConditionOutput, Tuple]:
574
+ r"""
575
+ The [`UNet2DConditionModel`] forward method.
576
+
577
+ Args:
578
+ sample (`torch.FloatTensor`):
579
+ The noisy input tensor with the following shape `(batch, channel, height, width)`.
580
+ encoder_hidden_states (`torch.FloatTensor`):
581
+ The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
582
+ encoder_attention_mask (`torch.Tensor`):
583
+ A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If
584
+ `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias,
585
+ which adds large negative values to the attention scores corresponding to "discard" tokens.
586
+ return_dict (`bool`, *optional*, defaults to `True`):
587
+ Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain
588
+ tuple.
589
+ cross_attention_kwargs (`dict`, *optional*):
590
+ A kwargs dictionary that if specified is passed along to the [`AttnProcessor`].
591
+ added_cond_kwargs: (`dict`, *optional*):
592
+ A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
593
+ are passed along to the UNet blocks.
594
+
595
+ Returns:
596
+ [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`:
597
+ If `return_dict` is True, an [`~models.unet_2d_condition.UNet2DConditionOutput`] is returned, otherwise
598
+ a `tuple` is returned where the first element is the sample tensor.
599
+ """
600
+ # By default samples have to be at least a multiple of the overall upsampling factor.
601
+ # The overall upsampling factor is equal to 2 ** (# num of upsampling layers).
602
+ # However, the upsampling interpolation output size can be forced to fit any upsampling size
603
+ # on the fly if necessary.
604
+ default_overall_up_factor = 2**self.num_upsamplers
605
+
606
+ # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor`
607
+ forward_upsample_size = False
608
+ upsample_size = None
609
+
610
+ if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
611
+ logger.info("Forward upsample size to force interpolation output size.")
612
+ forward_upsample_size = True
613
+
614
+ # ensure attention_mask is a bias, and give it a singleton query_tokens dimension
615
+ # expects mask of shape:
616
+ # [batch, key_tokens]
617
+ # adds singleton query_tokens dimension:
618
+ # [batch, 1, key_tokens]
619
+ # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes:
620
+ # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn)
621
+ # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn)
622
+ if attention_mask is not None:
623
+ # assume that mask is expressed as:
624
+ # (1 = keep, 0 = discard)
625
+ # convert mask into a bias that can be added to attention scores:
626
+ # (keep = +0, discard = -10000.0)
627
+ attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0
628
+ attention_mask = attention_mask.unsqueeze(1)
629
+
630
+ # convert encoder_attention_mask to a bias the same way we do for attention_mask
631
+ if encoder_attention_mask is not None:
632
+ encoder_attention_mask = (1 - encoder_attention_mask.to(sample.dtype)) * -10000.0
633
+ encoder_attention_mask = encoder_attention_mask.unsqueeze(1)
634
+
635
+ # 0. center input if necessary
636
+ if self.config.center_input_sample:
637
+ sample = 2 * sample - 1.0
638
+
639
+ # 1. time (skip)
640
+ emb = None
641
+
642
+ if self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_proj":
643
+ encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states)
644
+ elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_image_proj":
645
+ # Kandinsky 2.1 - style
646
+ if "image_embeds" not in added_cond_kwargs:
647
+ raise ValueError(
648
+ f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'text_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`"
649
+ )
650
+
651
+ image_embeds = added_cond_kwargs.get("image_embeds")
652
+ encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states, image_embeds)
653
+ elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "image_proj":
654
+ # Kandinsky 2.2 - style
655
+ if "image_embeds" not in added_cond_kwargs:
656
+ raise ValueError(
657
+ f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`"
658
+ )
659
+ image_embeds = added_cond_kwargs.get("image_embeds")
660
+ encoder_hidden_states = self.encoder_hid_proj(image_embeds)
661
+ # 2. pre-process
662
+ sample = self.conv_in(sample)
663
+
664
+ # 3. down
665
+
666
+ is_controlnet = mid_block_additional_residual is not None and down_block_additional_residuals is not None
667
+ is_adapter = mid_block_additional_residual is None and down_block_additional_residuals is not None
668
+
669
+ down_block_res_samples = (sample,)
670
+ for downsample_block in self.down_blocks:
671
+ if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention:
672
+ # For t2i-adapter CrossAttnDownBlock2D
673
+ additional_residuals = {}
674
+ if is_adapter and len(down_block_additional_residuals) > 0:
675
+ additional_residuals["additional_residuals"] = down_block_additional_residuals.pop(0)
676
+
677
+ sample, res_samples = downsample_block(
678
+ hidden_states=sample,
679
+ temb=emb,
680
+ encoder_hidden_states=encoder_hidden_states,
681
+ attention_mask=attention_mask,
682
+ cross_attention_kwargs=cross_attention_kwargs,
683
+ encoder_attention_mask=encoder_attention_mask,
684
+ **additional_residuals,
685
+ )
686
+ else:
687
+ sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
688
+
689
+ if is_adapter and len(down_block_additional_residuals) > 0:
690
+ sample += down_block_additional_residuals.pop(0)
691
+
692
+ down_block_res_samples += res_samples
693
+
694
+ if is_controlnet:
695
+ new_down_block_res_samples = ()
696
+
697
+ for down_block_res_sample, down_block_additional_residual in zip(
698
+ down_block_res_samples, down_block_additional_residuals
699
+ ):
700
+ down_block_res_sample = down_block_res_sample + down_block_additional_residual
701
+ new_down_block_res_samples = new_down_block_res_samples + (down_block_res_sample,)
702
+
703
+ down_block_res_samples = new_down_block_res_samples
704
+
705
+ # 4. mid
706
+ if self.mid_block is not None:
707
+ sample = self.mid_block(
708
+ sample,
709
+ emb,
710
+ encoder_hidden_states=encoder_hidden_states,
711
+ attention_mask=attention_mask,
712
+ cross_attention_kwargs=cross_attention_kwargs,
713
+ encoder_attention_mask=encoder_attention_mask,
714
+ )
715
+
716
+ if is_controlnet:
717
+ sample = sample + mid_block_additional_residual
718
+
719
+ # 5. up
720
+ for i, upsample_block in enumerate(self.up_blocks):
721
+ is_final_block = i == len(self.up_blocks) - 1
722
+
723
+ res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
724
+ down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)]
725
+
726
+ # if we have not reached the final block and need to forward the
727
+ # upsample size, we do it here
728
+ if not is_final_block and forward_upsample_size:
729
+ upsample_size = down_block_res_samples[-1].shape[2:]
730
+
731
+ if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention:
732
+ sample = upsample_block(
733
+ hidden_states=sample,
734
+ temb=emb,
735
+ res_hidden_states_tuple=res_samples,
736
+ encoder_hidden_states=encoder_hidden_states,
737
+ cross_attention_kwargs=cross_attention_kwargs,
738
+ upsample_size=upsample_size,
739
+ attention_mask=attention_mask,
740
+ encoder_attention_mask=encoder_attention_mask,
741
+ )
742
+ else:
743
+ sample = upsample_block(
744
+ hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
745
+ )
746
+
747
+ # 6. post-process
748
+ if self.conv_norm_out:
749
+ sample = self.conv_norm_out(sample)
750
+ sample = self.conv_act(sample)
751
+ sample = self.conv_out(sample)
752
+
753
+ if not return_dict:
754
+ return (sample,)
755
+
756
+ return UNet2DConditionOutput(sample=sample)
diffusers_patch/pipelines/oms/__init__.py ADDED
@@ -0,0 +1 @@
+ from .pipeline_oms import OMSPipeline
diffusers_patch/pipelines/oms/pipeline_oms.py ADDED
@@ -0,0 +1,655 @@
1
+ import json
2
+
3
+ import inspect
4
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
5
+
6
+ import torch
7
+ from transformers import CLIPTextModel, CLIPTokenizer
8
+
9
+ from diffusers.loaders import FromSingleFileMixin
10
+
11
+ from diffusers.utils import (
12
+ USE_PEFT_BACKEND,
13
+ deprecate,
14
+ logging,
15
+ )
16
+ from diffusers.utils.torch_utils import randn_tensor
17
+ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
18
+ from diffusers.pipelines.pipeline_utils import *
19
+ from diffusers.pipelines.pipeline_utils import _get_pipeline_class
20
+ from diffusers.models.modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT
21
+
22
+ from diffusers_patch.models.unet_2d_condition_woct import UNet2DConditionWoCTModel
23
+
24
+ from diffusers_patch.pipelines.oms.utils import SDXLTextEncoder, SDXLTokenizer
25
+
26
+
27
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
28
+
29
+
30
+ def load_sub_model_oms(
31
+ library_name: str,
32
+ class_name: str,
33
+ importable_classes: List[Any],
34
+ pipelines: Any,
35
+ is_pipeline_module: bool,
36
+ pipeline_class: Any,
37
+ torch_dtype: torch.dtype,
38
+ provider: Any,
39
+ sess_options: Any,
40
+ device_map: Optional[Union[Dict[str, torch.device], str]],
41
+ max_memory: Optional[Dict[Union[int, str], Union[int, str]]],
42
+ offload_folder: Optional[Union[str, os.PathLike]],
43
+ offload_state_dict: bool,
44
+ model_variants: Dict[str, str],
45
+ name: str,
46
+ from_flax: bool,
47
+ variant: str,
48
+ low_cpu_mem_usage: bool,
49
+ cached_folder: Union[str, os.PathLike],
50
+ ):
51
+ """Helper method to load the module `name` from `library_name` and `class_name`"""
52
+ # retrieve class candidates
53
+ class_obj, class_candidates = get_class_obj_and_candidates(
54
+ library_name,
55
+ class_name,
56
+ importable_classes,
57
+ pipelines,
58
+ is_pipeline_module,
59
+ component_name=name,
60
+ cache_dir=cached_folder,
61
+ )
62
+
63
+ load_method_name = None
64
+ # retrieve load method name
65
+ for class_name, class_candidate in class_candidates.items():
66
+ if class_candidate is not None and issubclass(class_obj, class_candidate):
67
+ load_method_name = importable_classes[class_name][1]
68
+
69
+ # if load method name is None, then we have a dummy module -> raise Error
70
+ if load_method_name is None:
71
+ none_module = class_obj.__module__
72
+ is_dummy_path = none_module.startswith(DUMMY_MODULES_FOLDER) or none_module.startswith(
73
+ TRANSFORMERS_DUMMY_MODULES_FOLDER
74
+ )
75
+ if is_dummy_path and "dummy" in none_module:
76
+ # call class_obj for nice error message of missing requirements
77
+ class_obj()
78
+
79
+ raise ValueError(
80
+ f"The component {class_obj} of {pipeline_class} cannot be loaded as it does not seem to have"
81
+ f" any of the loading methods defined in {ALL_IMPORTABLE_CLASSES}."
82
+ )
83
+
84
+ load_method = getattr(class_obj, load_method_name)
85
+
86
+ # add kwargs to loading method
87
+ import diffusers
88
+ loading_kwargs = {}
89
+ if issubclass(class_obj, torch.nn.Module):
90
+ loading_kwargs["torch_dtype"] = torch_dtype
91
+ if issubclass(class_obj, diffusers.OnnxRuntimeModel):
92
+ loading_kwargs["provider"] = provider
93
+ loading_kwargs["sess_options"] = sess_options
94
+
95
+ is_diffusers_model = issubclass(class_obj, diffusers.ModelMixin)
96
+
97
+ if is_transformers_available():
98
+ transformers_version = version.parse(version.parse(transformers.__version__).base_version)
99
+ else:
100
+ transformers_version = "N/A"
101
+
102
+ is_transformers_model = (
103
+ is_transformers_available()
104
+ and issubclass(class_obj, PreTrainedModel)
105
+ and transformers_version >= version.parse("4.20.0")
106
+ )
107
+
108
+ # When loading a transformers model, if the device_map is None, the weights will be initialized as opposed to diffusers.
109
+ # To make default loading faster we set the `low_cpu_mem_usage=low_cpu_mem_usage` flag which is `True` by default.
110
+ # This makes sure that the weights won't be initialized which significantly speeds up loading.
111
+ if is_diffusers_model or is_transformers_model:
112
+ loading_kwargs["device_map"] = device_map
113
+ loading_kwargs["max_memory"] = max_memory
114
+ loading_kwargs["offload_folder"] = offload_folder
115
+ loading_kwargs["offload_state_dict"] = offload_state_dict
116
+ loading_kwargs["variant"] = model_variants.pop(name, None)
117
+ if from_flax:
118
+ loading_kwargs["from_flax"] = True
119
+
120
+ # the following can be deleted once the minimum required `transformers` version
121
+ # is higher than 4.27
122
+ if (
123
+ is_transformers_model
124
+ and loading_kwargs["variant"] is not None
125
+ and transformers_version < version.parse("4.27.0")
126
+ ):
127
+ raise ImportError(
128
+ f"When passing `variant='{variant}'`, please make sure to upgrade your `transformers` version to at least 4.27.0.dev0"
129
+ )
130
+ elif is_transformers_model and loading_kwargs["variant"] is None:
131
+ loading_kwargs.pop("variant")
132
+
133
+ # if `from_flax` and model is transformer model, can currently not load with `low_cpu_mem_usage`
134
+ if not (from_flax and is_transformers_model):
135
+ loading_kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage
136
+ else:
137
+ loading_kwargs["low_cpu_mem_usage"] = False
138
+ # check if oms directory
139
+ if 'oms' in name:
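+ # For components whose name contains 'oms', the loading target is taken from the `_name_or_path`
+ # (and optional `subfolder`) recorded in that component's config.json.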
140
+ config_name = os.path.join(cached_folder, name, 'config.json')
141
+ with open(config_name, "r", encoding="utf-8") as f:
142
+ index = json.load(f)
143
+ file_path_or_name = index['_name_or_path']
144
+ if 'SDXL' in index.get('_class_name', 'CLIP'):
145
+ loaded_sub_model = load_method(file_path_or_name, **loading_kwargs)
146
+ elif 'subfolder' in index.keys():
147
+ loading_kwargs["subfolder"] = index["subfolder"]
148
+ loaded_sub_model = load_method(file_path_or_name, **loading_kwargs)
149
+ else:
150
+ # check if the module is in a subdirectory
151
+ if os.path.isdir(os.path.join(cached_folder, name)):
152
+ loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
153
+ else:
154
+ # else load from the root directory
155
+ loaded_sub_model = load_method(cached_folder, **loading_kwargs)
156
+
157
+ return loaded_sub_model
158
+
159
+ class OMSPipeline(DiffusionPipeline, FromSingleFileMixin):
160
+
161
+ def __init__(
162
+ self,
163
+ oms_module: UNet2DConditionWoCTModel,
164
+ sd_pipeline: DiffusionPipeline,
165
+ oms_text_encoder:Optional[Union[CLIPTextModel, SDXLTextEncoder]],
166
+ oms_tokenizer:Optional[Union[CLIPTokenizer, SDXLTokenizer]],
167
+ sd_scheduler = None
168
+ ):
169
+ # assert sd_pipeline is not None
170
+
171
+ if oms_tokenizer is None:
172
+ oms_tokenizer = sd_pipeline.tokenizer
173
+ if oms_text_encoder is None:
174
+ oms_text_encoder = sd_pipeline.text_encoder
175
+
176
+ # For OMS with SDXL text encoders
177
+ if 'SDXL' in oms_text_encoder.__class__.__name__:
178
+ self.is_dual_text_encoder = True
179
+ else:
180
+ self.is_dual_text_encoder = False
181
+
182
+ self.register_modules(
183
+ oms_module=oms_module,
184
+ oms_text_encoder=oms_text_encoder,
185
+ oms_tokenizer=oms_tokenizer,
186
+ sd_pipeline = sd_pipeline
187
+ )
188
+
189
+ if sd_scheduler is None:
190
+ self.scheduler = sd_pipeline.scheduler
191
+ else:
192
+ self.scheduler = sd_scheduler
193
+ sd_pipeline.scheduler = sd_scheduler
194
+
195
+ self.vae_scale_factor = 2 ** (len(sd_pipeline.vae.config.block_out_channels) - 1)
196
+ self.default_sample_size = sd_pipeline.unet.config.sample_size
197
+
198
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
199
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
200
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
201
+ if isinstance(generator, list) and len(generator) != batch_size:
202
+ raise ValueError(
203
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
204
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
205
+ )
206
+
207
+ if latents is None:
208
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
209
+ else:
210
+ latents = latents.to(device)
211
+
212
+ # scale the initial noise by the standard deviation required by the scheduler
213
+ latents = latents * self.scheduler.init_noise_sigma
214
+ return latents
215
+
216
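+ # `oms_step` applies the single "One More Step" update to the freshly drawn latents:
+ # it (optionally) applies classifier-free guidance to the v-prediction, converts it to a
+ # predicted clean sample with alpha_prod_t fixed to 0 (the latents are treated as pure noise),
+ # and then takes a DDPM-style posterior step towards `alpha_prod_t_prev`, adding fresh noise.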
+ def oms_step(self, predict_v, latents, do_classifier_free_guidance_for_oms, oms_guidance_scale, generator, alpha_prod_t_prev):
217
+ if do_classifier_free_guidance_for_oms:
218
+ pred_uncond, pred_text = predict_v.chunk(2)
219
+ predict_v = pred_uncond + oms_guidance_scale * (pred_text - pred_uncond)
220
+ # NOTE: alpha_prod_t is hard-coded to 0 here (the OMS step starts from pure noise); kept this way for now
221
+ alpha_prod_t = torch.zeros_like(alpha_prod_t_prev)
222
+ beta_prod_t = 1 - alpha_prod_t
223
+ beta_prod_t_prev = 1 - alpha_prod_t_prev
224
+ current_alpha_t = alpha_prod_t / alpha_prod_t_prev
225
+ current_beta_t = 1 - current_alpha_t
226
+ pred_original_sample = (alpha_prod_t**0.5) * latents - (beta_prod_t**0.5) * predict_v
227
+ # pred_original_sample = - predict_v
228
+ pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
229
+ current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
230
+ pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * latents
231
+
232
+ # TODO: optionally rescale the update to unit variance; it does not appear to be needed in practice
234
+
235
+ device = latents.device
236
+ variance_noise = randn_tensor(
237
+ latents.shape, generator=generator, device=device, dtype=latents.dtype
238
+ )
239
+ variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * current_beta_t
240
+ variance = torch.clamp(variance, min=1e-20) * variance_noise
241
+
242
+ latents = pred_prev_sample + variance
243
+ return latents
244
+
245
+ def oms_text_encode(self, prompt, num_images_per_prompt, device):
246
+ max_length = None if self.is_dual_text_encoder else self.oms_tokenizer.model_max_length
247
+ if self.is_dual_text_encoder:
248
+ tokenized_prompts = self.oms_tokenizer(prompt,
249
+ padding='max_length',
250
+ max_length=max_length,
251
+ truncation=True,
252
+ return_tensors='pt').input_ids
253
+ tokenized_prompts = torch.stack([tokenized_prompts[0], tokenized_prompts[1]], dim=1)
254
+ text_embeddings, _ = self.oms_text_encoder( [tokenized_prompts[:, 0, :].to(device), tokenized_prompts[:, 1, :].to(device)]) # type: ignore
255
+ elif 'clip' in self.oms_text_encoder.config_class.model_type:
256
+ tokenized_prompts = self.oms_tokenizer(prompt,
257
+ padding='max_length',
258
+ max_length=max_length,
259
+ truncation=True,
260
+ return_tensors='pt').input_ids
261
+ text_embeddings = self.oms_text_encoder(tokenized_prompts.to(device))[0] # type: ignore
262
+ else: # T5
263
+ tokenized_prompts = self.oms_tokenizer(prompt,
264
+ padding='max_length',
265
+ max_length=max_length,
266
+ truncation=True,
267
+ add_special_tokens=True,
268
+ return_tensors='pt').input_ids
269
+ # Note: the T5 text encoder outputs `None` under fp16, so force fp32 autocast for it
+ with torch.cuda.amp.autocast(dtype=torch.float32):
+ text_embeddings = self.oms_text_encoder(tokenized_prompts.to(device))[0]
272
+
273
+ # duplicate text embeddings for each generation per prompt
274
+ bs_embed, seq_len, _ = text_embeddings.shape
275
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) # type: ignore
276
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
277
+
278
+ return text_embeddings
279
+
280
+ @classmethod
281
+ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
282
+ cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
283
+ resume_download = kwargs.pop("resume_download", False)
284
+ force_download = kwargs.pop("force_download", False)
285
+ proxies = kwargs.pop("proxies", None)
286
+ local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE)
287
+ use_auth_token = kwargs.pop("use_auth_token", None)
288
+ revision = kwargs.pop("revision", None)
289
+ from_flax = kwargs.pop("from_flax", False)
290
+ torch_dtype = kwargs.pop("torch_dtype", None)
291
+ custom_pipeline = kwargs.pop("custom_pipeline", None)
292
+ custom_revision = kwargs.pop("custom_revision", None)
293
+ provider = kwargs.pop("provider", None)
294
+ sess_options = kwargs.pop("sess_options", None)
295
+ device_map = kwargs.pop("device_map", None)
296
+ max_memory = kwargs.pop("max_memory", None)
297
+ offload_folder = kwargs.pop("offload_folder", None)
298
+ offload_state_dict = kwargs.pop("offload_state_dict", False)
299
+ low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
300
+ variant = kwargs.pop("variant", None)
301
+ use_safetensors = kwargs.pop("use_safetensors", None)
302
+ load_connected_pipeline = kwargs.pop("load_connected_pipeline", False)
303
+
304
+ # 1. Download the checkpoints and configs
305
+ # use snapshot download here to get it working from from_pretrained
306
+ if not os.path.isdir(pretrained_model_name_or_path):
307
+ if pretrained_model_name_or_path.count("/") > 1:
308
+ raise ValueError(
309
+ f'The provided pretrained_model_name_or_path "{pretrained_model_name_or_path}"'
310
+ " is neither a valid local path nor a valid repo id. Please check the parameter."
311
+ )
312
+ cached_folder = cls.download(
313
+ pretrained_model_name_or_path,
314
+ cache_dir=cache_dir,
315
+ resume_download=resume_download,
316
+ force_download=force_download,
317
+ proxies=proxies,
318
+ local_files_only=local_files_only,
319
+ use_auth_token=use_auth_token,
320
+ revision=revision,
321
+ from_flax=from_flax,
322
+ use_safetensors=use_safetensors,
323
+ custom_pipeline=custom_pipeline,
324
+ custom_revision=custom_revision,
325
+ variant=variant,
326
+ load_connected_pipeline=load_connected_pipeline,
327
+ **kwargs,
328
+ )
329
+ else:
330
+ cached_folder = pretrained_model_name_or_path
331
+
332
+ config_dict = cls.load_config(cached_folder)
333
+
334
+ # pop out "_ignore_files" as it is only needed for download
335
+ config_dict.pop("_ignore_files", None)
336
+
337
+ # 2. Define which model components should load variants
338
+ # We retrieve the information by matching whether variant
339
+ # model checkpoints exist in the subfolders
340
+ model_variants = {}
341
+ if variant is not None:
342
+ for folder in os.listdir(cached_folder):
343
+ folder_path = os.path.join(cached_folder, folder)
344
+ is_folder = os.path.isdir(folder_path) and folder in config_dict
345
+ variant_exists = is_folder and any(
346
+ p.split(".")[1].startswith(variant) for p in os.listdir(folder_path)
347
+ )
348
+ if variant_exists:
349
+ model_variants[folder] = variant
350
+
351
+ # 3. Load the pipeline class, if using custom module then load it from the hub
352
+ # if we load from explicit class, let's use it
353
+ pipeline_class = _get_pipeline_class(
354
+ cls,
355
+ config_dict,
356
+ load_connected_pipeline=load_connected_pipeline,
357
+ custom_pipeline=custom_pipeline,
358
+ cache_dir=cache_dir,
359
+ revision=custom_revision,
360
+ )
361
+
362
+ # DEPRECATED: To be removed in 1.0.0
363
+ if pipeline_class.__name__ == "StableDiffusionInpaintPipeline" and version.parse(
364
+ version.parse(config_dict["_diffusers_version"]).base_version
365
+ ) <= version.parse("0.5.1"):
366
+ from diffusers import StableDiffusionInpaintPipeline, StableDiffusionInpaintPipelineLegacy
367
+
368
+ pipeline_class = StableDiffusionInpaintPipelineLegacy
369
+
370
+ deprecation_message = (
371
+ "You are using a legacy checkpoint for inpainting with Stable Diffusion, therefore we are loading the"
372
+ f" {StableDiffusionInpaintPipelineLegacy} class instead of {StableDiffusionInpaintPipeline}. For"
373
+ " better inpainting results, we strongly suggest using Stable Diffusion's official inpainting"
374
+ " checkpoint: https://huggingface.co/runwayml/stable-diffusion-inpainting instead or adapting your"
375
+ f" checkpoint {pretrained_model_name_or_path} to the format of"
376
+ " https://huggingface.co/runwayml/stable-diffusion-inpainting. Note that we do not actively maintain"
377
+ " the {StableDiffusionInpaintPipelineLegacy} class and will likely remove it in version 1.0.0."
378
+ )
379
+ deprecate("StableDiffusionInpaintPipelineLegacy", "1.0.0", deprecation_message, standard_warn=False)
380
+
381
+ # 4. Define expected modules given pipeline signature
382
+ # and define non-None initialized modules (=`init_kwargs`)
383
+
384
+ # some modules can be passed directly to the init
385
+ # in this case they are already instantiated in `kwargs`
386
+ # extract them here
387
+ expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class)
388
+ passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs}
389
+ passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs}
390
+
391
+ init_dict, unused_kwargs, _ = pipeline_class.extract_init_dict(config_dict, **kwargs)
392
+
393
+ # define init kwargs and make sure that optional component modules are filtered out
394
+ init_kwargs = {
395
+ k: init_dict.pop(k)
396
+ for k in optional_kwargs
397
+ if k in init_dict and k not in pipeline_class._optional_components
398
+ }
399
+ init_kwargs = {**init_kwargs, **passed_pipe_kwargs}
400
+
401
+ # remove `null` components
402
+ def load_module(name, value):
403
+ if value[0] is None:
404
+ return False
405
+ if name in passed_class_obj and passed_class_obj[name] is None:
406
+ return False
407
+ return True
408
+
409
+ init_dict = {k: v for k, v in init_dict.items() if load_module(k, v)}
410
+
411
+ # Special case: safety_checker must be loaded separately when using `from_flax`
412
+ if from_flax and "safety_checker" in init_dict and "safety_checker" not in passed_class_obj:
413
+ raise NotImplementedError(
414
+ "The safety checker cannot be automatically loaded when loading weights `from_flax`."
415
+ " Please, pass `safety_checker=None` to `from_pretrained`, and load the safety checker"
416
+ " separately if you need it."
417
+ )
418
+
419
+ # 5. Throw nice warnings / errors for fast accelerate loading
420
+ if len(unused_kwargs) > 0:
421
+ logger.warning(
422
+ f"Keyword arguments {unused_kwargs} are not expected by {pipeline_class.__name__} and will be ignored."
423
+ )
424
+
425
+ if low_cpu_mem_usage and not is_accelerate_available():
426
+ low_cpu_mem_usage = False
427
+ logger.warning(
428
+ "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the"
429
+ " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install"
430
+ " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip"
431
+ " install accelerate\n```\n."
432
+ )
433
+
434
+ if device_map is not None and not is_torch_version(">=", "1.9.0"):
435
+ raise NotImplementedError(
436
+ "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set"
437
+ " `device_map=None`."
438
+ )
439
+
440
+ if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"):
441
+ raise NotImplementedError(
442
+ "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set"
443
+ " `low_cpu_mem_usage=False`."
444
+ )
445
+
446
+ if low_cpu_mem_usage is False and device_map is not None:
447
+ raise ValueError(
448
+ f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and"
449
+ " dispatching. Please make sure to set `low_cpu_mem_usage=True`."
450
+ )
451
+
452
+ # import it here to avoid circular import
453
+ from diffusers import pipelines
454
+
455
+ # 6. Load each module in the pipeline
456
+ for name, (library_name, class_name) in logging.tqdm(init_dict.items(), desc="Loading pipeline components..."):
457
+ # 6.1 - now that JAX/Flax is an official framework of the library, we might load from Flax names
458
+ class_name = class_name[4:] if class_name.startswith("Flax") else class_name
459
+
460
+ # 6.2 Define all importable classes
461
+ is_pipeline_module = hasattr(pipelines, library_name)
462
+ importable_classes = ALL_IMPORTABLE_CLASSES
463
+ loaded_sub_model = None
464
+
465
+ # 6.3 Use passed sub model or load class_name from library_name
466
+ if name in passed_class_obj:
467
+ # if the model is in a pipeline module, then we load it from the pipeline
468
+ # check that passed_class_obj has correct parent class
469
+ maybe_raise_or_warn(
470
+ library_name, library, class_name, importable_classes, passed_class_obj, name, is_pipeline_module
471
+ )
472
+
473
+ loaded_sub_model = passed_class_obj[name]
474
+ else:
475
+ # load sub model
476
+ loaded_sub_model = load_sub_model_oms(
477
+ library_name=library_name,
478
+ class_name=class_name,
479
+ importable_classes=importable_classes,
480
+ pipelines=pipelines,
481
+ is_pipeline_module=is_pipeline_module,
482
+ pipeline_class=pipeline_class,
483
+ torch_dtype=torch_dtype,
484
+ provider=provider,
485
+ sess_options=sess_options,
486
+ device_map=device_map,
487
+ max_memory=max_memory,
488
+ offload_folder=offload_folder,
489
+ offload_state_dict=offload_state_dict,
490
+ model_variants=model_variants,
491
+ name=name,
492
+ from_flax=from_flax,
493
+ variant=variant,
494
+ low_cpu_mem_usage=low_cpu_mem_usage,
495
+ cached_folder=cached_folder,
496
+ )
497
+ logger.info(
498
+ f"Loaded {name} as {class_name} from `{name}` subfolder of {pretrained_model_name_or_path}."
499
+ )
500
+
501
+ init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...)
502
+
503
+ if pipeline_class._load_connected_pipes and os.path.isfile(os.path.join(cached_folder, "README.md")):
504
+ modelcard = ModelCard.load(os.path.join(cached_folder, "README.md"))
505
+ connected_pipes = {prefix: getattr(modelcard.data, prefix, [None])[0] for prefix in CONNECTED_PIPES_KEYS}
506
+ load_kwargs = {
507
+ "cache_dir": cache_dir,
508
+ "resume_download": resume_download,
509
+ "force_download": force_download,
510
+ "proxies": proxies,
511
+ "local_files_only": local_files_only,
512
+ "use_auth_token": use_auth_token,
513
+ "revision": revision,
514
+ "torch_dtype": torch_dtype,
515
+ "custom_pipeline": custom_pipeline,
516
+ "custom_revision": custom_revision,
517
+ "provider": provider,
518
+ "sess_options": sess_options,
519
+ "device_map": device_map,
520
+ "max_memory": max_memory,
521
+ "offload_folder": offload_folder,
522
+ "offload_state_dict": offload_state_dict,
523
+ "low_cpu_mem_usage": low_cpu_mem_usage,
524
+ "variant": variant,
525
+ "use_safetensors": use_safetensors,
526
+ }
527
+ connected_pipes = {
528
+ prefix: DiffusionPipeline.from_pretrained(repo_id, **load_kwargs.copy())
529
+ for prefix, repo_id in connected_pipes.items()
530
+ if repo_id is not None
531
+ }
532
+
533
+ for prefix, connected_pipe in connected_pipes.items():
534
+ # add connected pipes to `init_kwargs` with <prefix>_<component_name>, e.g. "prior_text_encoder"
535
+ init_kwargs.update(
536
+ {"_".join([prefix, name]): component for name, component in connected_pipe.components.items()}
537
+ )
538
+
539
+ # 7. Potentially add passed objects if expected
540
+ missing_modules = set(expected_modules) - set(init_kwargs.keys())
541
+ passed_modules = list(passed_class_obj.keys())
542
+ optional_modules = pipeline_class._optional_components
543
+ if len(missing_modules) > 0 and missing_modules <= set(passed_modules + optional_modules):
544
+ for module in missing_modules:
545
+ init_kwargs[module] = passed_class_obj.get(module, None)
546
+ elif len(missing_modules) > 0:
547
+ passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs
548
+ raise ValueError(
549
+ f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed."
550
+ )
551
+
552
+ # 8. Instantiate the pipeline
553
+ model = pipeline_class(**init_kwargs)
554
+
555
+ # 9. Save where the model was instantiated from
556
+ model.register_to_config(_name_or_path=pretrained_model_name_or_path)
557
+ return model
558
+
559
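+ # `__call__` runs in two phases: (1) when `oms_flag` is True, the (OMS) prompt is encoded,
+ # initial latents are drawn and refined by a single `oms_step`; (2) the resulting latents and
+ # the original prompt are handed to the wrapped `sd_pipeline`, which runs the usual sampling loop.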
+ @torch.no_grad()
560
+ # @replace_example_docstring(EXAMPLE_DOC_STRING)
561
+ def __call__(
562
+ self,
563
+ prompt: Union[str, List[str]] = None,
564
+ oms_prompt: Union[str, List[str]] = None,
565
+ height: Optional[int] = None,
566
+ width: Optional[int] = None,
567
+ num_inference_steps: int = 50,
568
+ num_images_per_prompt: Optional[int] = 1,
569
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
570
+ oms_guidance_scale: float = 1.0,
571
+ oms_flag: bool = True,
572
+ **kwargs,
573
+ ):
574
+ """Pseudo-doc for OMS"""
575
+
576
+ if oms_flag is True:
577
+ if oms_prompt is not None:
578
+ sd_prompt = prompt
579
+ prompt = oms_prompt
580
+
581
+ if prompt is not None and isinstance(prompt, str):
582
+ batch_size = 1
583
+ elif prompt is not None and isinstance(prompt, list):
584
+ batch_size = len(prompt)
585
+
586
+
587
+ height = height or self.default_sample_size * self.vae_scale_factor
588
+ width = width or self.default_sample_size * self.vae_scale_factor
589
+ device = self._execution_device
590
+ ## Guidance flag for OMS
591
+ if oms_guidance_scale is not None:
592
+ do_classifier_free_guidance_for_oms = True
593
+ else:
594
+ do_classifier_free_guidance_for_oms = False
595
+
596
+
597
+ oms_prompt_emb = self.oms_text_encode(prompt, num_images_per_prompt, device)
+ if do_classifier_free_guidance_for_oms:
+ oms_negative_prompt = [''] * (batch_size // num_images_per_prompt)
+ oms_negative_prompt_emb = self.oms_text_encode(oms_negative_prompt, num_images_per_prompt, device)
601
+
602
+ # 4. Prepare timesteps
603
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
604
+
605
+ timesteps = self.scheduler.timesteps
606
+
607
+ # 5. Prepare latent variables
608
+ num_channels_latents = self.oms_module.config.in_channels
609
+ latents = self.prepare_latents(
610
+ batch_size * num_images_per_prompt,
611
+ num_channels_latents,
612
+ height,
613
+ width,
614
+ oms_prompt_emb.dtype,
615
+ device,
616
+ generator,
617
+ latents=None,
618
+ )
619
+
620
+ ## OMS CFG
621
+ if do_classifier_free_guidance_for_oms:
622
+ oms_prompt_emb = torch.cat([oms_negative_prompt_emb, oms_prompt_emb], dim=0)
623
+
624
+
625
+ ## OMS to device
626
+ oms_prompt_emb = oms_prompt_emb.to(device)
627
+
628
+
629
+ ## Perform OMS
630
+ alphas_cumprod = self.scheduler.alphas_cumprod.to(device)
631
+ alpha_prod_t_prev = alphas_cumprod[int(timesteps[0].item())]
632
+ latent_input_oms = torch.cat([latents] * 2) if do_classifier_free_guidance_for_oms else latents
633
+ v_pred_oms = self.oms_module(latent_input_oms, oms_prompt_emb)['sample']
634
+ latents = self.oms_step(v_pred_oms, latents, do_classifier_free_guidance_for_oms, oms_guidance_scale, generator, alpha_prod_t_prev)
635
+
636
+
637
+ if oms_prompt is not None:
638
+ prompt = sd_prompt
639
+
640
+ logger.info('OMS step completed')
641
+ else:
+ logger.info('OMS step skipped (`oms_flag=False`); falling back to the plain SD pipeline')
+ latents = None
644
+ output = self.sd_pipeline(
+ prompt=prompt,
+ height=height,
+ width=width,
+ num_inference_steps=num_inference_steps,
+ num_images_per_prompt=num_images_per_prompt,
+ generator=generator,
+ latents=latents,
+ **kwargs,
+ )
654
+
655
+ return output
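A minimal usage sketch for the pipeline above. The repo id and import path are placeholders assumed for illustration; only the `OMSPipeline` signatures come from this patch:

import torch
from diffusers import StableDiffusionXLPipeline
from diffusers_patch.pipelines.oms import OMSPipeline  # assumed import path

# Base SDXL pipeline that OMS will wrap; moved to the GPU explicitly here.
sd_pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the OMS module, text encoder and tokenizer from an OMS checkpoint
# (placeholder repo id), reusing the already-built SD pipeline as a component.
oms_pipe = OMSPipeline.from_pretrained(
    "your-org/oms-checkpoint",  # placeholder: replace with the actual OMS repo id
    sd_pipeline=sd_pipe,
    torch_dtype=torch.float16,
)
oms_pipe.to("cuda")

image = oms_pipe(
    "a starry night over a calm lake",
    oms_guidance_scale=2.0,
    num_inference_steps=30,
).images[0]
image.save("oms_sample.png")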
diffusers_patch/pipelines/oms/utils.py ADDED
@@ -0,0 +1,70 @@
1
+ import torch
2
+ from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
3
+
4
+
5
+ class SDXLTextEncoder(torch.nn.Module):
6
+ """Wrapper around HuggingFace text encoders for SDXL.
7
+
8
+ Creates two text encoders (a CLIPTextModel and CLIPTextModelWithProjection) that behave like one.
9
+
10
+ Args:
11
+ model_name (str): Name of the model's text encoders to load. Defaults to 'stabilityai/stable-diffusion-xl-base-1.0'.
12
+ encode_latents_in_fp16 (bool): Whether to load the text encoders in fp16. Defaults to True.
+ torch_dtype (torch.dtype, optional): Explicit dtype for the text encoders; overrides `encode_latents_in_fp16` when set.
13
+ """
14
+
15
+ def __init__(self, model_name='stabilityai/stable-diffusion-xl-base-1.0', encode_latents_in_fp16=True, torch_dtype=None):
16
+ super().__init__()
17
+ if torch_dtype is None:
18
+ torch_dtype = torch.float16 if encode_latents_in_fp16 else None
19
+ self.text_encoder = CLIPTextModel.from_pretrained(model_name, subfolder='text_encoder', torch_dtype=torch_dtype)
20
+ self.text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(model_name,
21
+ subfolder='text_encoder_2',
22
+ torch_dtype=torch_dtype)
23
+
24
+ @property
25
+ def device(self):
26
+ return self.text_encoder.device
27
+
28
+ def forward(self, tokenized_text):
29
+ # first text encoder
30
+ conditioning = self.text_encoder(tokenized_text[0], output_hidden_states=True).hidden_states[-2]
31
+ # second text encoder
32
+ text_encoder_2_out = self.text_encoder_2(tokenized_text[1], output_hidden_states=True)
33
+ pooled_conditioning = text_encoder_2_out[0] # (batch_size, 1280)
34
+ conditioning_2 = text_encoder_2_out.hidden_states[-2] # (batch_size, 77, 1280)
35
+
36
+ conditioning = torch.concat([conditioning, conditioning_2], dim=-1)
37
+ return conditioning, pooled_conditioning
38
+
39
+
40
+ class SDXLTokenizer:
41
+ """Wrapper around HuggingFace tokenizers for SDXL.
42
+
43
+ Tokenizes prompt with two tokenizers and returns the joined output.
44
+
45
+ Args:
46
+ model_name (str): Name of the model's text encoders to load. Defaults to 'stabilityai/stable-diffusion-xl-base-1.0'.
47
+ """
48
+
49
+ def __init__(self, model_name='stabilityai/stable-diffusion-xl-base-1.0'):
50
+ self.tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder='tokenizer')
51
+ self.tokenizer_2 = CLIPTokenizer.from_pretrained(model_name, subfolder='tokenizer_2')
52
+
53
+ def __call__(self, prompt, padding, truncation, return_tensors, max_length=None):
54
+ tokenized_output = self.tokenizer(
55
+ prompt,
56
+ padding=padding,
57
+ max_length=self.tokenizer.model_max_length if max_length is None else max_length,
58
+ truncation=truncation,
59
+ return_tensors=return_tensors)
60
+ tokenized_output_2 = self.tokenizer_2(
61
+ prompt,
62
+ padding=padding,
63
+ max_length=self.tokenizer_2.model_max_length if max_length is None else max_length,
64
+ truncation=truncation,
65
+ return_tensors=return_tensors)
66
+
67
+ # Add second tokenizer output to first tokenizer
68
+ for key in tokenized_output.keys():
69
+ tokenized_output[key] = [tokenized_output[key], tokenized_output_2[key]]
70
+ return tokenized_output
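A quick standalone sketch of how the two SDXL wrappers are meant to be used together (illustrative only; the import path is an assumption):

import torch
from diffusers_patch.pipelines.oms.utils import SDXLTextEncoder, SDXLTokenizer  # assumed import path

tokenizer = SDXLTokenizer()
text_encoder = SDXLTextEncoder(encode_latents_in_fp16=False)  # fp32 keeps the sketch CPU-friendly

# The wrapper tokenizes with both SDXL tokenizers and returns [ids_1, ids_2] per key.
tokens = tokenizer(
    ["a photo of an astronaut riding a horse"],
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    conditioning, pooled = text_encoder([tokens["input_ids"][0], tokens["input_ids"][1]])

print(conditioning.shape)  # (1, 77, 2048): 768-d and 1280-d penultimate hidden states, concatenated
print(pooled.shape)        # (1, 1280): pooled projection from the second text encoder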
requirements.txt ADDED
@@ -0,0 +1,6 @@
1
+ diffusers==0.23.1
2
+ transformers==4.35.2
3
+ accelerate==0.24.1
4
+ gradio==4.7.1
5
+ pydantic==1.10.13
6
+ spacy==3.7.2