
Auto queue on change #2542

Merged

Conversation

pythongosssss
Contributor

Provides a toggle to switch between auto-queueing as soon as the queue is empty, and auto-queueing only when the graph is changed.

[screenshot: auto-queue mode toggle]

This uses the undo/redo graph-state checking, so it will still trigger on changes that aren't reflected in the serialized prompt output, but it still massively reduces the number of prompts queued.
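The change detection described above can be sketched as follows: serialize the graph deterministically and queue a new prompt only when the result differs from the last seen state. This is a minimal sketch with hypothetical names; the real logic lives in ComfyUI's undo/redo code in `web/scripts`.

```python
import json

class ChangeTracker:
    """Minimal sketch of graph-change detection: report a change only
    when the serialized graph differs from the last seen state."""

    def __init__(self):
        self.last_state = None

    def check_changed(self, graph: dict) -> bool:
        # Serialize with sorted keys so dict ordering can't cause false positives.
        state = json.dumps(graph, sort_keys=True)
        changed = state != self.last_state
        self.last_state = state
        return changed

tracker = ChangeTracker()
tracker.check_changed({"nodes": [1]})     # first call counts as a change
tracker.check_changed({"nodes": [1]})     # unchanged -> no auto-queue
tracker.check_changed({"nodes": [1, 2]})  # changed -> auto-queue
```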

@comfyanonymous
Owner

The "change" doesn't work when using the SDXL turbo example and typing the prompt: https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/

@pythongosssss
Contributor Author

I've updated it so that in change mode, every keypress triggers the check, even when inputs/textareas are focused.

@comfyanonymous comfyanonymous merged commit 93bbe3f into comfyanonymous:master Jan 16, 2024
1 check passed
HorusElohim pushed a commit to HorusElohim/PyfyUI that referenced this pull request Feb 17, 2024
* Add toggle to enable auto queue when graph is changed

* type fix

* better

* better alignment

* Change undoredo to not ignore inputs when autoqueue in change mode
loner233 added a commit to InceptionsAI/ComfyUI that referenced this pull request Feb 19, 2024
* Update requirements.txt

The UI launched with the `torchvision` module missing and spat out a `ModuleNotFoundError`; installing `torchvision` fixed it.

* Fix hiding dom widgets.

* Fix lowvram mode not working with unCLIP and Revision code.

* Fix taesd VAE in lowvram mode.

* Add title to the API workflow json. (comfyanonymous#2380)

* Add `title` to the API workflow json.

* API: Move `title` to `_meta` dictionary, imply unused.

* Only add _meta title to api prompt when dev mode is enabled in UI.

* Fix clip vision lowvram mode not working.

* Cleanup.

* Load weights that can't be lowvramed to target device.

* Use function to calculate model size in model patcher.

* Fix VALIDATE_INPUTS getting called multiple times.

Allow VALIDATE_INPUTS to only validate specific inputs.

* This cache timeout is pretty useless in practice.

* Add argument to run the VAE on the CPU.

* Reregister nodes when pressing refresh button.

* Add a denoise parameter to BasicScheduler node.

* Add node id and prompt id to websocket progress packet.

* Remove useless code.

* Auto detect out_channels from model.

* Fix issue when websocket is deleted when data is being sent.

* Refactor VAE code.

Replace constants with downscale_ratio and latent_channels.

* Fix regression.

* Add support for the stable diffusion x4 upscaling model.

This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.

* Fix model patches not working in custom sampling scheduler nodes.

* Implement noise augmentation for SD 4X upscale model.

* Add a /free route to unload models or free all memory.

A POST request to /free with: {"unload_models":true}
will unload models from vram.

A POST request to /free with: {"free_memory":true}
will unload models and free all cached data from the last run workflow.
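The two request bodies described above can be exercised with a small helper. This is a sketch; the helper name is hypothetical, and port 8188 is assumed to be the server's default.

```python
import json
import urllib.request

def free_request(unload_models=False, free_memory=False,
                 host="127.0.0.1", port=8188):
    """Build a POST request for the /free route described above.
    Only the flags that are set are included in the JSON body."""
    flags = {"unload_models": unload_models, "free_memory": free_memory}
    body = json.dumps({k: v for k, v in flags.items() if v}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/free",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = free_request(unload_models=True)
# urllib.request.urlopen(req)  # uncomment with a running server
```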

* StableZero123_Conditioning_Batched node.

This node lets you generate a batch of images with different elevations or
azimuths by setting the elevation_batch_increment and/or
azimuth_batch_increment.

It also sets the batch index for the latents so that the same init noise is
used on each frame.
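The per-frame camera variation described above amounts to stepping the angles by the increments. A sketch with a hypothetical helper, using parameter names that mirror the node's inputs:

```python
def batched_camera_angles(elevation, azimuth, batch_size,
                          elevation_batch_increment=0.0,
                          azimuth_batch_increment=0.0):
    """Sketch: produce one (elevation, azimuth) pair per frame in the batch,
    stepping each angle by its increment."""
    return [(elevation + i * elevation_batch_increment,
             azimuth + i * azimuth_batch_increment)
            for i in range(batch_size)]

batched_camera_angles(0.0, 0.0, 3, azimuth_batch_increment=30.0)
# -> [(0.0, 0.0), (0.0, 30.0), (0.0, 60.0)]
```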

* Fix BasicScheduler issue with Loras.

* fix: `/free` handler function name

* Implement attention mask on xformers.

* Support attention mask in split attention.

* Add attention mask support to sub quad attention.

* Update optimized_attention_for_device function for new functions that
support masked attention.

* Support properly loading images with mode I.

* Store user settings/data on the server and multi user support (comfyanonymous#2160)

* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors

* Fix issue with user manager parent dir not being created.

* Support I mode images in LoadImageMask.

* Use basic attention implementation for small inputs on old pytorch.

* Round up to nearest power of 2 in SAG node to fix some resolution issues.

* Fix issue when using multiple t2i adapters with batched images.

* Skip SAG when latent is too small.

* Add InpaintModelConditioning node.

This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on comfyanonymous#2501

* Don't round noise mask.

* Resolved crashing nodes caused by `FileNotFoundError` during directory traversal

- Implemented a `try-except` block in the `recursive_search` function to handle `FileNotFoundError` gracefully.
- When encountering a file or directory path that cannot be accessed (causing `FileNotFoundError`), the code now logs a warning and skips processing for that specific path instead of crashing the node (CheckpointLoaderSimple was usually the first to break). This allows the rest of the directory traversal to proceed without interruption.
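The fix described above can be sketched as a traversal that catches `FileNotFoundError` per entry (raised, e.g., by dangling symlinks) and logs a warning instead of crashing. A minimal sketch; the real `recursive_search` in ComfyUI also tracks folders and modification times.

```python
import logging
import os

def recursive_search(directory):
    """Sketch: collect file paths, skipping entries that raise
    FileNotFoundError (e.g. dangling symlinks) with a warning."""
    files = []
    for entry in sorted(os.listdir(directory)):
        path = os.path.join(directory, entry)
        try:
            if os.path.isdir(path) and not os.path.islink(path):
                files.extend(recursive_search(path))
            else:
                os.stat(path)  # raises FileNotFoundError for dangling links
                files.append(path)
        except FileNotFoundError:
            logging.warning("Skipping inaccessible path: %s", path)
    return files
```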

* Add error, status to /history endpoint

* Fix hypertile issue with high depths.

* Make server storage the default.

Remove --server-storage argument.

* Clear status notes on execution start.

* Rename status notes to status messages.

I think message describes them better.

* Fix modifiers triggering key down checks

* add setting to change control after generate to run before

* export function

* Manage group nodes (comfyanonymous#2455)

* wip group manage

* prototyping ui

* tweaks

* wip

* wip

* more wip

* fixes
add deletion

* Fix tests

* fixes

* Remove test code

* typo

* fix crash when link is invalid

* Fix crash on group render

* Adds copy image option if browser feature available (comfyanonymous#2544)

* Adds copy image option if browser feature available

* refactor

* Make unclip more deterministic.

Pass a seed argument; note that this might make old unclip images different.

* Add error handling to initial fix to keep cache intact

* Only auto enable bf16 VAE on nvidia GPUs that actually support it.

* Fix logging not checking onChange

* Auto queue on change (comfyanonymous#2542)

* Add toggle to enable auto queue when graph is changed

* type fix

* better

* better alignment

* Change undoredo to not ignore inputs when autoqueue in change mode

* Fix renaming upload widget (comfyanonymous#2554)

* Fix renaming upload widget

* Allow custom name

* Jack/workflow (#3)

* modified:   web/scripts/app.js
	modified:   web/scripts/utils.js

* unformat

* fix: workflow id (#4)

* Don't use PEP 604 type hints, to stay compatible with Python<3.10.

* Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.

This node is unfinished, SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.

* Move some nodes to model_patches section.

* Remove useless import.

* Fix for the extracting issue on windows.

* Fix queue on change to respect auto queue checkbox (comfyanonymous#2608)

* Fix render on change not respecting auto queue checkbox

Fix issue where autoQueueEnabled checkbox is ignored for changes if autoQueueMode is left on `change`

* Make check more specific
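The gating fixed above can be sketched as: the auto-queue checkbox must be enabled regardless of mode, and `change` mode additionally requires a graph change. Names here are hypothetical ("instant" stands in for the non-change mode), loosely mirroring the UI options.

```python
def should_auto_queue(auto_queue_enabled, mode, graph_changed, queue_empty):
    """Sketch of the corrected check: the checkbox gates everything;
    'change' mode queues only on a graph change, otherwise queue when empty."""
    if not auto_queue_enabled:
        return False  # checkbox off: never auto-queue, even in change mode
    if mode == "change":
        return graph_changed
    return queue_empty

should_auto_queue(False, "change", True, True)  # -> False (checkbox respected)
```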

* Cleanup some unused imports.

* Fix potential turbo scheduler model patching issue.

* Ability to hide menu
Responsive setting screen
Touch events for zooming/context menu

* typo fix - calculate_sigmas_scheduler (comfyanonymous#2619)

self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <[email protected]>

* Support refresh on group node combos (comfyanonymous#2625)

* Support refresh on group node combos

* fix check

* Sync litegraph with repo.

comfyanonymous/litegraph.js#4

* Jack/load custom nodes (#5)

* update custom nodes

* fix order

* Add experimental photomaker nodes.

Put the model file in models/photomaker and use PhotoMakerLoader.

Then use PhotoMakerEncode with the keyword "photomaker" to apply the image

* Remove some unused imports.

* Cleanups.

* Add a LatentBatchSeedBehavior node.

This lets you set it so the latents can use the same seed for the sampling
on every image in the batch.
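The seed behavior described above can be sketched with plain `random`: in "fixed" mode every batch index seeds identically, so each image starts from the same noise; otherwise each index gets an offset seed. This is a hedged sketch with hypothetical names, not the node's actual latent-noise code.

```python
import random

def batch_noise(seed, batch_size, length=4, seed_behavior="fixed"):
    """Sketch: generate per-image noise rows for a batch. With "fixed",
    all rows use the same seed and are identical; otherwise each batch
    index offsets the seed and rows differ."""
    noises = []
    for i in range(batch_size):
        rng = random.Random(seed if seed_behavior == "fixed" else seed + i)
        noises.append([rng.random() for _ in range(length)])
    return noises
```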

* Fix some issues with --gpu-only

* Remove useless code.

* Add node to set only the conditioning area strength.

* Make auto saved workflow stored per tab

* fix: inpaint on mask editor bottom area

* Put VAE key name in model config.

* Fix crash when no widgets on customized group node

* Fix scrolling with lots of nodes

* Litegraph node search improvements.

See: comfyanonymous/litegraph.js#5

* Update readme for new pytorch 2.2 release.

* feat: better pen support for mask editor
- alt-drag: erase
- shift-drag(up/down): zoom in/out

* use local storage

* add increment-wrap as option to ValueControlWidget when isCombo, which loops back to 0 when at end of list

* Fix frontend webp prompt handling

* changed default of LatentBatchSeedBehavior to fixed

* Always use fp16 for the text encoders.

* Mask editor: semitransparent brush, brush color modes

* Speed up SDXL on 16xx series with fp16 weights and manual cast.

* Don't use is_bf16_supported to check for fp16 support.

* Document IS_CHANGED in the example custom node.

* Make minimum tile size the size of the overlap.

* Support linking converted inputs from api json

* Sync litegraph to repo.

comfyanonymous/litegraph.js#6

* Don't use numpy for calculating sigmas.

* Allow custom samplers to request discard penultimate sigma

* Add batch number to filename with %batch_num%

Allow configurable addition of batch number to output file name.
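The substitution described above can be sketched as a simple string replacement before the counter is appended. A minimal sketch with a hypothetical helper; the real logic lives in the image-saving node.

```python
def format_filename(prefix, batch_num, counter):
    """Sketch: splice the batch number into the filename prefix via
    %batch_num%, then append the zero-padded counter."""
    prefix = prefix.replace("%batch_num%", str(batch_num))
    return f"{prefix}_{counter:05}_.png"

format_filename("ComfyUI_%batch_num%", 2, 1)  # -> "ComfyUI_2_00001_.png"
```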

* Add a way to set different conditioning for the controlnet.

* Fix infinite while loop being possible in ddim_scheduler

* Add a node to give the controlnet a prompt different from the unet.

* Safari: Draws certain elements on CPU. In case of search popup, can cause 10 seconds+ main thread lock due to painting. (comfyanonymous#2763)

* lets toggle this setting first.

* also makes it easier for debug. I'll be honest this is generally preferred behavior as well for me but I ain't no power user shrug.

* attempting trick to put the work for filter: brightness on GPU as a first attempt before falling back to not using filter for large lists!

* revert litegraph.core.js changes from branch

* oops

* Prevent hideWidget being called twice for same widget

Fix for comfyanonymous#2766

* Add ImageFromBatch.

* Don't init the CLIP model when the checkpoint has no CLIP weights.

* Add a disabled SaveImageWebsocket custom node.

This node can be used to efficiently get images without saving them to
disk when using ComfyUI as a backend.

* Small refactor of is_device_* functions.

* Stable Cascade Stage A.

* Stable Cascade Stage C.

* Stable Cascade Stage B.

* StableCascade CLIP model support.

* Make --force-fp32 disable loading models in bf16.

* Support Stable Cascade Stage B lite.

* Make Stable Cascade work on old pytorch 2.0

* Fix clip attention mask issues on some hardware.

* Manual cast for bf16 on older GPUs.

* Implement shift schedule for cascade stage C.

* Properly fix attention masks in CLIP with batches.

* Fix attention mask batch size in some attention functions.

* fp8 weight support for Stable Cascade.

* Fix attention masks properly for multiple batches.

* Add ModelSamplingStableCascade to control the shift sampling parameter.

shift is 2.0 by default on Stage C and 1.0 by default on Stage B.

* Fix gligen lowvram mode.

* Support additional PNG info.

* Support loading the Stable Cascade effnet and previewer as a VAE.

The effnet can be used to encode images for img2img with Stage C.

* Forgot to commit this.

---------

Co-authored-by: Oleksiy Nehlyadyuk <[email protected]>
Co-authored-by: comfyanonymous <[email protected]>
Co-authored-by: shiimizu <[email protected]>
Co-authored-by: AYF <[email protected]>
Co-authored-by: ramyma <[email protected]>
Co-authored-by: pythongosssss <[email protected]>
Co-authored-by: TFWol <[email protected]>
Co-authored-by: Kristjan Pärt <[email protected]>
Co-authored-by: Dr.Lt.Data <[email protected]>
Co-authored-by: Lt.Dr.Data <[email protected]>
Co-authored-by: Meowu <[email protected]>
Co-authored-by: pksebben <[email protected]>
Co-authored-by: Chaoses-Ib <[email protected]>
Co-authored-by: FizzleDorf <[email protected]>
Co-authored-by: ultimabear <[email protected]>
Co-authored-by: blepping <[email protected]>
Co-authored-by: Imran Azeez <[email protected]>
Co-authored-by: Jedrzej Kosinski <[email protected]>
Co-authored-by: Steven Lu <[email protected]>
Co-authored-by: chrisgoringe <[email protected]>