
KSampler (Advanced) output not repeatable with "ancestral" samplers #4876

Open
drozbay opened this issue Sep 10, 2024 · 2 comments
Labels
Bug: Something is confirmed to not be working properly.

Comments


drozbay commented Sep 10, 2024

Expected Behavior

Running two identical workflows with the KSampler (Advanced) with the exact same parameters and inputs should always result in the same image generation.

Actual Behavior

The actual result is that running two identical workflows with KSampler (Advanced), using any "ancestral" sampler with add_noise disabled, produces a different generation each time.
[Screenshot: comfyui_issue_0]

Steps to Reproduce

  1. Set up a minimal graph with Load Checkpoint, CLIP Text Encode, Empty Latent Image, KSampler (Advanced), VAE Decode, and Preview Image.
  2. Set the sampler to euler_ancestral, euler_ancestral_cfg_pp, dpm_2_ancestral, or dpmpp_2s_ancestral.
  3. Set the scheduler to karras.
  4. Set add_noise to disable.
  5. Set noise_seed to a fixed value (e.g. noise_seed: 0) and run the workflow.
  6. Change the noise seed or any other parameter (e.g. noise_seed: 1). Run the workflow.
  7. Revert the changed parameter (e.g. noise_seed: 0). Run the workflow.

The expectation is that the images from the first and third runs should be exactly the same, because the workflows are exactly the same. But in fact they are different each time.

The attached workflow replicates this behavior: it forces two KSampler (Advanced) nodes to execute with identical sampling parameters by setting a different end_at_step for each.

comfyui_issue_ksampleradv.json

Debug Logs

H:\AppsDir\git\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-10 12:01:04.097364
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: H:\AppsDir\git\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI
** Log path: H:\AppsDir\git\ComfyUI_windows_portable\comfyui.log

Prestartup times for custom nodes:
   1.1 seconds: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 16376 MB, total RAM 32634 MB
pytorch version: 2.4.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\web
H:\AppsDir\git\ComfyUI_windows_portable\python_embeded\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2674 [81778a7f] *DETACHED | Released on '2024-09-10'

Import times for custom nodes:
   0.0 seconds: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.2 seconds: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
FETCH DATA from: H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
H:\AppsDir\git\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
H:\AppsDir\git\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 4897.0483474731445 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.93it/s]
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.67it/s]
Prompt executed in 15.33 seconds

Other

No response

drozbay added the Potential Bug label Sep 10, 2024
drozbay commented Sep 10, 2024

I should have mentioned that this scenario is encountered when using the KSampler (Advanced) node to split denoising between different models or parameter sets, such as when using a refiner.

mcmonkey4eva (Collaborator) commented

This PR will likely fix that: #4518

mcmonkey4eva added the Bug label and removed the Potential Bug label Sep 17, 2024