Commit: for cascade (#7)

* Update requirements.txt

Launching the UI hits a missing module, `torchvision`, and raises a `ModuleNotFoundError`; installing `torchvision` fixed it.

* Fix hiding dom widgets.

* Fix lowvram mode not working with unCLIP and Revision code.

* Fix taesd VAE in lowvram mode.

* Add title to the API workflow json. (comfyanonymous#2380)

* Add `title` to the API workflow json.

* API: Move `title` into the `_meta` dictionary to imply that it is unused by the backend.

* Only add _meta title to api prompt when dev mode is enabled in UI.
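
For reference, a hedged sketch of what one node entry in the exported API-format JSON looks like with this change when dev mode is enabled (the node id, type, inputs, and title values here are invented):

```python
# Sketch of a single node entry in the API-format workflow JSON; only the
# `_meta.title` part comes from this change, the rest is a made-up example.
node_entry = {
    "3": {
        "class_type": "KSampler",
        "inputs": {"seed": 8566257, "steps": 20},
        "_meta": {
            # Informational only: the backend ignores this dictionary.
            "title": "KSampler",
        },
    }
}
```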

* Fix clip vision lowvram mode not working.

* Cleanup.

* Load weights that can't be lowvramed to target device.

* Use function to calculate model size in model patcher.

* Fix VALIDATE_INPUTS getting called multiple times.

Allow VALIDATE_INPUTS to only validate specific inputs.
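
As a rough illustration of what this enables, a custom node can name specific inputs in VALIDATE_INPUTS and have only those passed to it for custom validation (a sketch; the node and input names are invented):

```python
class ExampleLoader:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "path": ("STRING", {"default": ""}),
            "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0}),
        }}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "load"
    CATEGORY = "example"

    @classmethod
    def VALIDATE_INPUTS(cls, path):
        # Only `path` is named here, so only it is handed to this hook;
        # `strength` keeps the default range validation.
        if not path.strip():
            return "path must not be empty"
        return True

    def load(self, path, strength):
        return (path,)
```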

* This cache timeout is pretty useless in practice.

* Add argument to run the VAE on the CPU.

* Reregister nodes when pressing refresh button.

* Add a denoise parameter to BasicScheduler node.

* Add node id and prompt id to websocket progress packet.

* Remove useless code.

* Auto detect out_channels from model.

* Fix issue when websocket is deleted when data is being sent.

* Refactor VAE code.

Replace constants with downscale_ratio and latent_channels.

* Fix regression.

* Add support for the stable diffusion x4 upscaling model.

This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.

* Fix model patches not working in custom sampling scheduler nodes.

* Implement noise augmentation for SD 4X upscale model.

* Add a /free route to unload models or free all memory.

A POST request to /free with: {"unload_models":true}
will unload models from vram.

A POST request to /free with: {"free_memory":true}
will unload models and free all cached data from the last run workflow.
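
A client could call the route roughly like this (a sketch using the standard library; the host and port are ComfyUI's usual defaults and may differ in your setup):

```python
import json
import urllib.request

def post_free(payload, host="127.0.0.1", port=8188):
    # POST a JSON body to the new /free route.
    req = urllib.request.Request(
        f"http://{host}:{port}/free",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

post_free({"unload_models": True})  # unload models from VRAM
post_free({"free_memory": True})    # also drop cached data from the last run
```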

* StableZero123_Conditioning_Batched node.

This node lets you generate a batch of images with different elevations or
azimuths by setting the elevation_batch_increment and/or
azimuth_batch_increment.

It also sets the batch index for the latents so that the same init noise is
used on each frame.
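
The per-frame arithmetic is roughly the following (a simplified sketch; the actual node builds conditioning and latents rather than returning angles):

```python
def batched_angles(elevation, azimuth, batch_size,
                   elevation_batch_increment=0.0,
                   azimuth_batch_increment=0.0):
    # Frame i gets the base angles plus i times the per-frame increments.
    return [(elevation + i * elevation_batch_increment,
             azimuth + i * azimuth_batch_increment)
            for i in range(batch_size)]

# e.g. a 4-frame turntable: 90 degree azimuth steps at a fixed elevation
print(batched_angles(0.0, 0.0, 4, azimuth_batch_increment=90.0))
```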

* Fix BasicScheduler issue with Loras.

* fix: `/free` handler function name

* Implement attention mask on xformers.

* Support attention mask in split attention.

* Add attention mask support to sub quad attention.

* Update the optimized_attention_for_device function for the new attention functions that support masked attention.

* Support properly loading images with mode I.

* Store user settings/data on the server and multi user support (comfyanonymous#2160)

* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors

* Fix issue with user manager parent dir not being created.

* Support I mode images in LoadImageMask.

* Use basic attention implementation for small inputs on old pytorch.

* Round up to nearest power of 2 in SAG node to fix some resolution issues.

* Fix issue when using multiple t2i adapters with batched images.

* Skip SAG when latent is too small.

* Add InpaintModelConditioning node.

This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on comfyanonymous#2501

* Don't round noise mask.

* Resolved crashing nodes caused by `FileNotFoundError` during directory traversal

- Implemented a `try-except` block in the `recursive_search` function to handle `FileNotFoundError` gracefully.
- When a file or directory path cannot be accessed (raising `FileNotFoundError`), the code now logs a warning and skips that path instead of crashing the node (CheckpointLoaderSimple was usually the first to break), so the rest of the directory traversal proceeds uninterrupted.
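
A minimal sketch of the pattern (not the actual folder_paths implementation):

```python
import logging
import os

def recursive_search(directory):
    # Walk the directory tree, skipping paths that raise FileNotFoundError
    # (e.g. broken symlinks) instead of letting the node crash.
    files = []
    for root, _dirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            try:
                os.path.getmtime(path)
            except FileNotFoundError:
                logging.warning("Path inaccessible, skipping: %s", path)
                continue
            files.append(os.path.relpath(path, directory))
    return files
```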

* Add error, status to /history endpoint

* Fix hypertile issue with high depths.

* Make server storage the default.

Remove --server-storage argument.

* Clear status notes on execution start.

* Rename status notes to status messages.

I think "message" describes them better.

* Fix modifiers triggering key down checks

* add setting to change control after generate to run before

* export function

* Manage group nodes (comfyanonymous#2455)

* wip group manage

* prototyping ui

* tweaks

* wip

* wip

* more wip

* fixes
add deletion

* Fix tests

* fixes

* Remove test code

* typo

* fix crash when link is invalid

* Fix crash on group render

* Adds copy image option if browser feature available (comfyanonymous#2544)

* Adds copy image option if browser feature available

* refactor

* Make unclip more deterministic.

Pass a seed argument; note that this might make old unCLIP images different.

* Add error handling to initial fix to keep cache intact

* Only auto enable bf16 VAE on nvidia GPUs that actually support it.

* Fix logging not checking onChange

* Auto queue on change (comfyanonymous#2542)

* Add toggle to enable auto queue when graph is changed

* type fix

* better

* better alignment

* Change undo/redo to not ignore inputs when auto queue is in change mode

* Fix renaming upload widget (comfyanonymous#2554)

* Fix renaming upload widget

* Allow custom name

* Jack/workflow (#3)

* modified:   web/scripts/app.js
	modified:   web/scripts/utils.js

* unformat

* fix: workflow id (#4)

* Don't use PEP 604 type hints, to stay compatible with Python<3.10.

* Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.

This node is unfinished, SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.

* Move some nodes to model_patches section.

* Remove useless import.

* Fix for the extracting issue on windows.

* Fix queue on change to respect auto queue checkbox (comfyanonymous#2608)

* Fix render on change not respecting auto queue checkbox

Fix issue where autoQueueEnabled checkbox is ignored for changes if autoQueueMode is left on `change`

* Make check more specific

* Cleanup some unused imports.

* Fix potential turbo scheduler model patching issue.

* Ability to hide menu
Responsive setting screen
Touch events for zooming/context menu

* typo fix - calculate_sigmas_scheduler (comfyanonymous#2619)

self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <[email protected]>

* Support refresh on group node combos (comfyanonymous#2625)

* Support refresh on group node combos

* fix check

* Sync litegraph with repo.

comfyanonymous/litegraph.js#4

* Jack/load custom nodes (#5)

* update custom nodes

* fix order

* Add experimental photomaker nodes.

Put the model file in models/photomaker and use PhotoMakerLoader.

Then use PhotoMakerEncode with the keyword "photomaker" to apply the image.

* Remove some unused imports.

* Cleanups.

* Add a LatentBatchSeedBehavior node.

This lets you make the latents use the same seed for the sampling of every image in the batch.

* Fix some issues with --gpu-only

* Remove useless code.

* Add node to set only the conditioning area strength.

* Make auto saved workflow stored per tab

* fix: inpaint on mask editor bottom area

* Put VAE key name in model config.

* Fix crash when no widgets on customized group node

* Fix scrolling with lots of nodes

* Litegraph node search improvements.

See: comfyanonymous/litegraph.js#5

* Update readme for new pytorch 2.2 release.

* feat: better pen support for mask editor
- alt-drag: erase
- shift-drag (up/down): zoom in/out

* use local storage

* Add an increment-wrap option to ValueControlWidget for combo values, which loops back to 0 at the end of the list
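
The wrap behaviour amounts to a modulo step over the combo entries (an illustrative sketch in Python; the widget itself is JavaScript):

```python
def increment_wrap(index, num_options):
    # Step to the next combo entry, looping back to 0 past the end.
    return (index + 1) % num_options

assert increment_wrap(2, 3) == 0
```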

* Fix frontend webp prompt handling

* changed default of LatentBatchSeedBehavior to fixed

* Always use fp16 for the text encoders.

* Mask editor: semitransparent brush, brush color modes

* Speed up SDXL on 16xx series with fp16 weights and manual cast.

* Don't use is_bf16_supported to check for fp16 support.

* Document IS_CHANGED in the example custom node.
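
For context, the documented pattern is roughly the following (a sketch based on the example custom node; hashing the referenced file is the usual idiom and the names here are illustrative):

```python
import hashlib

class LoadSomething:
    @classmethod
    def IS_CHANGED(cls, file_path):
        # Return a value that changes whenever the node should re-execute;
        # hashing the file contents is the common idiom.
        m = hashlib.sha256()
        with open(file_path, "rb") as f:
            m.update(f.read())
        return m.digest().hex()
```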

* Make minimum tile size the size of the overlap.

* Support linking converted inputs from api json

* Sync litegraph to repo.

comfyanonymous/litegraph.js#6

* Don't use numpy for calculating sigmas.

* Allow custom samplers to request discard penultimate sigma

* Add batch number to filename with %batch_num%

Allow configurable addition of batch number to output file name.
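
The substitution itself is simple (a sketch; the real SaveImage logic handles more placeholders and counters than shown here):

```python
def apply_batch_num(filename_prefix: str, batch_number: int) -> str:
    # Replace the %batch_num% token with the index of the image in the batch.
    return filename_prefix.replace("%batch_num%", str(batch_number))

apply_batch_num("ComfyUI_%batch_num%", 3)  # -> "ComfyUI_3"
```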

* Add a way to set different conditioning for the controlnet.

* Fix infinite while loop being possible in ddim_scheduler

* Add a node to give the controlnet a prompt different from the unet.

* Safari draws certain elements on the CPU; in the case of the search popup this can lock the main thread for 10+ seconds due to painting. (comfyanonymous#2763)

* Let's toggle this setting first.

* Also makes debugging easier. Honestly this is generally my preferred behavior as well, but I'm no power user.

* Attempt a trick to put the `filter: brightness` work on the GPU first, before falling back to not using the filter for large lists.

* revert litegraph.core.js changes from branch

* oops

* Prevent hideWidget being called twice for same widget

Fix for comfyanonymous#2766

* Add ImageFromBatch.

* Don't init the CLIP model when the checkpoint has no CLIP weights.

* Add a disabled SaveImageWebsocket custom node.

This node can be used to efficiently get images without saving them to
disk when using ComfyUI as a backend.

* Small refactor of is_device_* functions.

* Stable Cascade Stage A.

* Stable Cascade Stage C.

* Stable Cascade Stage B.

* StableCascade CLIP model support.

* Make --force-fp32 disable loading models in bf16.

* Support Stable Cascade Stage B lite.

* Make Stable Cascade work on old pytorch 2.0

* Fix clip attention mask issues on some hardware.

* Manual cast for bf16 on older GPUs.

* Implement shift schedule for cascade stage C.

* Properly fix attention masks in CLIP with batches.

* Fix attention mask batch size in some attention functions.

* fp8 weight support for Stable Cascade.

* Fix attention masks properly for multiple batches.

* Add ModelSamplingStableCascade to control the shift sampling parameter.

shift is 2.0 by default on Stage C and 1.0 by default on Stage B.

* Fix gligen lowvram mode.

* Support additional PNG info.

* Support loading the Stable Cascade effnet and previewer as a VAE.

The effnet can be used to encode images for img2img with Stage C.

* Forgot to commit this.

---------

Co-authored-by: Oleksiy Nehlyadyuk <[email protected]>
Co-authored-by: comfyanonymous <[email protected]>
Co-authored-by: shiimizu <[email protected]>
Co-authored-by: AYF <[email protected]>
Co-authored-by: ramyma <[email protected]>
Co-authored-by: pythongosssss <[email protected]>
Co-authored-by: TFWol <[email protected]>
Co-authored-by: Kristjan Pärt <[email protected]>
Co-authored-by: Dr.Lt.Data <[email protected]>
Co-authored-by: Lt.Dr.Data <[email protected]>
Co-authored-by: Meowu <[email protected]>
Co-authored-by: pksebben <[email protected]>
Co-authored-by: Chaoses-Ib <[email protected]>
Co-authored-by: FizzleDorf <[email protected]>
Co-authored-by: ultimabear <[email protected]>
Co-authored-by: blepping <[email protected]>
Co-authored-by: Imran Azeez <[email protected]>
Co-authored-by: Jedrzej Kosinski <[email protected]>
Co-authored-by: Steven Lu <[email protected]>
Co-authored-by: chrisgoringe <[email protected]>
21 people committed Feb 19, 2024
1 parent afad7c5 commit 1753700
Showing 98 changed files with 6,182 additions and 927 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -14,4 +14,5 @@ venv/
/web/extensions/*
!/web/extensions/logging.js.example
!/web/extensions/core/
/tests-ui/data/object_info.json
/tests-ui/data/object_info.json
/user/
9 changes: 0 additions & 9 deletions .vscode/settings.json

This file was deleted.

11 changes: 6 additions & 5 deletions README.md
@@ -77,6 +77,8 @@ There is a portable standalone build for Windows that should work for running on

Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints

If you have trouble extracting it, right click the file -> properties -> unblock

#### How do I share models between another UI and ComfyUI?

See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
@@ -93,24 +95,23 @@ Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

Put your VAE in: models/vae

Note: pytorch stable does not support python 3.12 yet. If you have python 3.12 you will have to use the nightly version of pytorch. If you run into issues you should try python 3.11 instead.

### AMD GPUs (Linux only)
AMD users can install rocm and pytorch with pip if you don't have it already installed, this is the command to install the stable version:

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6```
```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7```

This is the command to install the nightly with ROCm 5.7 which has a python 3.12 package and might have some performance improvements:
This is the command to install the nightly with ROCm 6.0 which might have some performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7```
```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.0```

### NVIDIA

Nvidia users should install stable pytorch using this command:

```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121```

This is the command to install pytorch nightly instead which has a python 3.12 package and might have performance improvements:
This is the command to install pytorch nightly instead which might have performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121```

54 changes: 54 additions & 0 deletions app/app_settings.py
@@ -0,0 +1,54 @@
import os
import json
from aiohttp import web


class AppSettings():
def __init__(self, user_manager):
self.user_manager = user_manager

def get_settings(self, request):
file = self.user_manager.get_request_user_filepath(
request, "comfy.settings.json")
if os.path.isfile(file):
with open(file) as f:
return json.load(f)
else:
return {}

def save_settings(self, request, settings):
file = self.user_manager.get_request_user_filepath(
request, "comfy.settings.json")
with open(file, "w") as f:
f.write(json.dumps(settings, indent=4))

def add_routes(self, routes):
@routes.get("/settings")
async def get_settings(request):
return web.json_response(self.get_settings(request))

@routes.get("/settings/{id}")
async def get_setting(request):
value = None
settings = self.get_settings(request)
setting_id = request.match_info.get("id", None)
if setting_id and setting_id in settings:
value = settings[setting_id]
return web.json_response(value)

@routes.post("/settings")
async def post_settings(request):
settings = self.get_settings(request)
new_settings = await request.json()
self.save_settings(request, {**settings, **new_settings})
return web.Response(status=200)

@routes.post("/settings/{id}")
async def post_setting(request):
setting_id = request.match_info.get("id", None)
if not setting_id:
return web.Response(status=400)
settings = self.get_settings(request)
settings[setting_id] = await request.json()
self.save_settings(request, settings)
return web.Response(status=200)
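
A hedged usage sketch for the routes above (the base URL and the setting id are assumptions, not part of the diff):

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8188"  # assumed default ComfyUI address

def get_settings():
    with urllib.request.urlopen(f"{BASE}/settings") as resp:
        return json.load(resp)

def set_setting(setting_id, value):
    req = urllib.request.Request(
        f"{BASE}/settings/{setting_id}",
        data=json.dumps(value).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

set_setting("Comfy.SomeSetting", True)  # setting id is a made-up example
print(get_settings())
```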
140 changes: 140 additions & 0 deletions app/user_manager.py
@@ -0,0 +1,140 @@
import json
import os
import re
import uuid
from aiohttp import web
from comfy.cli_args import args
from folder_paths import user_directory
from .app_settings import AppSettings

default_user = "default"
users_file = os.path.join(user_directory, "users.json")


class UserManager():
def __init__(self):
global user_directory

self.settings = AppSettings(self)
if not os.path.exists(user_directory):
os.mkdir(user_directory)
if not args.multi_user:
print("****** User settings have been changed to be stored on the server instead of browser storage. ******")
print("****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******")

if args.multi_user:
if os.path.isfile(users_file):
with open(users_file) as f:
self.users = json.load(f)
else:
self.users = {}
else:
self.users = {"default": "default"}

def get_request_user_id(self, request):
user = "default"
if args.multi_user and "comfy-user" in request.headers:
user = request.headers["comfy-user"]

if user not in self.users:
raise KeyError("Unknown user: " + user)

return user

def get_request_user_filepath(self, request, file, type="userdata", create_dir=True):
global user_directory

if type == "userdata":
root_dir = user_directory
else:
raise KeyError("Unknown filepath type:" + type)

user = self.get_request_user_id(request)
path = user_root = os.path.abspath(os.path.join(root_dir, user))

# prevent leaving /{type}
if os.path.commonpath((root_dir, user_root)) != root_dir:
return None

parent = user_root

if file is not None:
# prevent leaving /{type}/{user}
path = os.path.abspath(os.path.join(user_root, file))
if os.path.commonpath((user_root, path)) != user_root:
return None

if create_dir and not os.path.exists(parent):
os.mkdir(parent)

return path

def add_user(self, name):
name = name.strip()
if not name:
raise ValueError("username not provided")
user_id = re.sub("[^a-zA-Z0-9-_]+", '-', name)
user_id = user_id + "_" + str(uuid.uuid4())

self.users[user_id] = name

global users_file
with open(users_file, "w") as f:
json.dump(self.users, f)

return user_id

def add_routes(self, routes):
self.settings.add_routes(routes)

@routes.get("/users")
async def get_users(request):
if args.multi_user:
return web.json_response({"storage": "server", "users": self.users})
else:
user_dir = self.get_request_user_filepath(request, None, create_dir=False)
return web.json_response({
"storage": "server",
"migrated": os.path.exists(user_dir)
})

@routes.post("/users")
async def post_users(request):
body = await request.json()
username = body["username"]
if username in self.users.values():
return web.json_response({"error": "Duplicate username."}, status=400)

user_id = self.add_user(username)
return web.json_response(user_id)

@routes.get("/userdata/{file}")
async def getuserdata(request):
file = request.match_info.get("file", None)
if not file:
return web.Response(status=400)

path = self.get_request_user_filepath(request, file)
if not path:
return web.Response(status=403)

if not os.path.exists(path):
return web.Response(status=404)

return web.FileResponse(path)

@routes.post("/userdata/{file}")
async def post_userdata(request):
file = request.match_info.get("file", None)
if not file:
return web.Response(status=400)

path = self.get_request_user_filepath(request, file)
if not path:
return web.Response(status=403)

body = await request.read()
with open(path, "wb") as f:
f.write(body)

return web.Response(status=200)
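
A hedged sketch of how a client might exercise the user and userdata routes above (the base URL, username, and file name are invented for illustration):

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8188"  # assumed default ComfyUI address

def json_post(url, payload, headers=None):
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **(headers or {})})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# In multi-user mode, create a user and send its id back in the comfy-user header.
user_id = json_post(f"{BASE}/users", {"username": "alice"})
headers = {"comfy-user": user_id}

# Store, then fetch, a per-user file through the userdata endpoints.
req = urllib.request.Request(f"{BASE}/userdata/example.json", data=b"{}",
                             headers=headers, method="POST")
urllib.request.urlopen(req)

req = urllib.request.Request(f"{BASE}/userdata/example.json", headers=headers)
with urllib.request.urlopen(req) as resp:
    print(resp.read())
```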
4 changes: 4 additions & 0 deletions comfy/cli_args.py
@@ -66,6 +66,8 @@ def __call__(self, parser, namespace, values, option_string=None):
fpvae_group.add_argument("--fp32-vae", action="store_true", help="Run the VAE in full precision fp32.")
fpvae_group.add_argument("--bf16-vae", action="store_true", help="Run the VAE in bf16.")

parser.add_argument("--cpu-vae", action="store_true", help="Run the VAE on the CPU.")

fpte_group = parser.add_mutually_exclusive_group()
fpte_group.add_argument("--fp8_e4m3fn-text-enc", action="store_true", help="Store text encoder weights in fp8 (e4m3fn variant).")
fpte_group.add_argument("--fp8_e5m2-text-enc", action="store_true", help="Store text encoder weights in fp8 (e5m2 variant).")
@@ -110,6 +112,8 @@ class LatentPreviewMethod(enum.Enum):

parser.add_argument("--disable-metadata", action="store_true", help="Disable saving prompt metadata in files.")

parser.add_argument("--multi-user", action="store_true", help="Enables per-user storage.")

if comfy.options.args_parsing:
args = parser.parse_args()
else:
6 changes: 3 additions & 3 deletions comfy/clip_model.py
@@ -57,7 +57,7 @@ def __init__(self, num_layers, embed_dim, heads, intermediate_size, intermediate
self.layers = torch.nn.ModuleList([CLIPLayer(embed_dim, heads, intermediate_size, intermediate_activation, dtype, device, operations) for i in range(num_layers)])

def forward(self, x, mask=None, intermediate_output=None):
optimized_attention = optimized_attention_for_device(x.device, mask=mask is not None)
optimized_attention = optimized_attention_for_device(x.device, mask=mask is not None, small_input=True)

if intermediate_output is not None:
if intermediate_output < 0:
@@ -97,7 +97,7 @@ def forward(self, input_tokens, attention_mask=None, intermediate_output=None, f
x = self.embeddings(input_tokens)
mask = None
if attention_mask is not None:
mask = 1.0 - attention_mask.to(x.dtype).unsqueeze(1).unsqueeze(1).expand(attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1])
mask = 1.0 - attention_mask.to(x.dtype).reshape((attention_mask.shape[0], 1, -1, attention_mask.shape[-1])).expand(attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1])
mask = mask.masked_fill(mask.to(torch.bool), float("-inf"))

causal_mask = torch.empty(x.shape[1], x.shape[1], dtype=x.dtype, device=x.device).fill_(float("-inf")).triu_(1)
@@ -151,7 +151,7 @@ def __init__(self, embed_dim, num_channels=3, patch_size=14, image_size=224, dty

def forward(self, pixel_values):
embeds = self.patch_embedding(pixel_values).flatten(2).transpose(1, 2)
return torch.cat([self.class_embedding.expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight
return torch.cat([self.class_embedding.to(embeds.device).expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight.to(embeds.device)


class CLIPVision(torch.nn.Module):
10 changes: 8 additions & 2 deletions comfy/clip_vision.py
@@ -1,7 +1,6 @@
from .utils import load_torch_file, transformers_convert, common_upscale
from .utils import load_torch_file, transformers_convert, state_dict_prefix_replace
import os
import torch
import contextlib
import json

import comfy.ops
@@ -41,9 +40,13 @@ def __init__(self, json_config):
self.model.eval()

self.patcher = comfy.model_patcher.ModelPatcher(self.model, load_device=self.load_device, offload_device=offload_device)

def load_sd(self, sd):
return self.model.load_state_dict(sd, strict=False)

def get_sd(self):
return self.model.state_dict()

def encode_image(self, image):
comfy.model_management.load_model_gpu(self.patcher)
pixel_values = clip_preprocess(image.to(self.load_device)).float()
@@ -76,6 +79,9 @@ def convert_to_transformers(sd, prefix):
sd['visual_projection.weight'] = sd.pop("{}proj".format(prefix)).transpose(0, 1)

sd = transformers_convert(sd, prefix, "vision_model.", 48)
else:
replace_prefix = {prefix: ""}
sd = state_dict_prefix_replace(sd, replace_prefix)
return sd

def load_clipvision_from_sd(sd, prefix="", convert_keys=False):
1 change: 0 additions & 1 deletion comfy/conds.py
@@ -1,4 +1,3 @@
import enum
import torch
import math
import comfy.utils