Error ((((( Help me please #821

Open
nofacedeepfake opened this issue Sep 12, 2024 · 0 comments

Loading Model: {'checkpoint_info': {'filename': 'D:\neuron\webui_forge_cu121_torch21\webui\models\Stable-diffusion\enjoyXLSuperRealistic_v30ModifiedVersion.safetensors', 'hash': '16bb3dda'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 1680, 'vae': 248, 'text_encoder': 197, 'text_encoder_2': 519, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 5.6s (unload existing model: 0.3s, forge model load: 5.3s).
Tiling: True
Tiling is currently under maintenance and unavailable. Sorry for the inconvenience.
WARNING:dynamicprompts.generators.magicprompt:First load of MagicPrompt may take a while.
D:\neuron\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: huggingface/transformers#31884
warnings.warn(
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
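
The two transformers warnings above are benign but avoidable. A minimal sketch of where they come from and how to silence them, assuming a stock GPT-2 pipeline (model name and prompt are placeholders, not the extension's actual code):

```python
# Minimal sketch, NOT the extension's code: shows how both warnings arise
# with a stock GPT-2 model (model/prompt are placeholder assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# GPT-2 ships without a pad token; reusing eos as pad is what triggers the
# "pad token is same as eos token" warning when no attention_mask is given.
tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer("a photo of", return_tensors="pt")
output = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],  # explicit mask -> no warning
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=20,
)
# Passing clean_up_tokenization_spaces explicitly avoids the FutureWarning.
print(tokenizer.decode(output[0], clean_up_tokenization_spaces=True))
```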
*** Error running process: D:\neuron\webui_forge_cu121_torch21\webui\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py
Traceback (most recent call last):
  File "D:\neuron\webui_forge_cu121_torch21\webui\modules\scripts.py", line 844, in process
    script.process(p, *script_args)
  File "D:\neuron\webui_forge_cu121_torch21\webui\extensions\sd-dynamic-prompts\sd_dynamic_prompts\dynamic_prompting.py", line 480, in process
    all_prompts, all_negative_prompts = generate_prompts(
  File "D:\neuron\webui_forge_cu121_torch21\webui\extensions\sd-dynamic-prompts\sd_dynamic_prompts\helpers.py", line 93, in generate_prompts
    all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [""]
  File "D:\neuron\webui_forge_cu121_torch21\system\python\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 169, in generate
    magic_prompts = self._generate_magic_prompts(prompts)
  File "D:\neuron\webui_forge_cu121_torch21\webui\extensions\sd-dynamic-prompts\sd_dynamic_prompts\magic_prompt.py", line 38, in _generate_magic_prompts
    output = self.model.generate(
  File "D:\neuron\webui_forge_cu121_torch21\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\neuron\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\generation\utils.py", line 1874, in generate
    self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
  File "D:\neuron\webui_forge_cu121_torch21\system\python\lib\site-packages\transformers\generation\utils.py", line 1266, in _validate_generated_length
    raise ValueError(
ValueError: Input length of input_ids is 109, but max_length is set to 100. This can lead to unexpected behavior. You should consider increasing max_length or, better yet, setting max_new_tokens.
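
The ValueError comes from transformers' length validation: `max_length` bounds the total sequence (prompt plus continuation), and here the tokenized prompt alone is already 109 tokens while MagicPrompt's call passes `max_length=100`. Raising the extension's "Max magic prompt length" setting above the prompt's token count works around it; in plain transformers terms, `max_new_tokens` sidesteps this class of error entirely because it bounds only the generated continuation. A hedged sketch with a stock GPT-2 model (placeholder prompt, not the extension's real call site):

```python
# Sketch of the failure and the fix, assuming a stock GPT-2 model
# (the extension's real model and prompt differ).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

long_prompt = "a detailed photo, " * 40  # tokenizes to well over 100 tokens
input_ids = tokenizer(long_prompt, return_tensors="pt").input_ids

# Fails like the traceback above: max_length counts the prompt too,
# and the prompt alone already exceeds it.
# model.generate(input_ids, max_length=100)  # -> ValueError

# Fix: bound only the *new* tokens; prompt length no longer matters.
output = model.generate(
    input_ids,
    max_new_tokens=100,
    pad_token_id=tokenizer.eos_token_id,
)
```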


[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\edam\maxsentabraMobsedem-000001.safetensors for KModel-UNet with 722 keys at weight 0.85 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\edam\maxsentabraMobsedem-000001.safetensors for KModel-CLIP with 88 keys at weight 0.85 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\Instagram\inst-000003.safetensors for KModel-UNet with 722 keys at weight 0.3 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\Instagram\inst-000003.safetensors for KModel-CLIP with 264 keys at weight 0.3 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\Artep\Artep2-000003.safetensors for KModel-UNet with 722 keys at weight 1.0 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\neuron\webui_forge_cu121_torch21\webui\models\Lora\Artep\Artep2-000003.safetensors for KModel-CLIP with 264 keys at weight 1.0 (skipped 0 keys) with on_the_fly = False
[Unload] Trying to free 3051.58 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 22430.75 MB, Model Require: 1559.68 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 19847.08 MB, All loaded to GPU.
Moving model(s) has taken 1.95 seconds
[Textual Inversion] Used Embedding [negativeXL_D] in CLIP of [clip_l]
[Textual Inversion] Used Embedding [negativeXL_D] in CLIP of [clip_g]
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 20652.55 MB ... Done.
[Unload] Trying to free 7472.08 MB for cuda:0 with 0 models keep loaded ... Current free memory is 20651.08 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 20651.08 MB, Model Require: 4897.05 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 14730.03 MB, All loaded to GPU.
Moving model(s) has taken 5.54 seconds
100%|██████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.21it/s]
[Unload] Trying to free 3882.80 MB for cuda:0 with 0 models keep loaded ... Current free memory is 15641.31 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 15641.31 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 14457.75 MB, All loaded to GPU.
Moving model(s) has taken 0.05 seconds
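
As an aside on the [Memory Management] lines: the "Remaining" figure is just free VRAM minus the model's weights and the inference headroom. A tiny illustrative check (hypothetical helper, not Forge's actual code):

```python
# Hypothetical helper (not Forge's code) showing how the "Remaining" value
# in the [Memory Management] log lines above is derived.
def remaining_mb(free: float, model_require: float, inference_require: float) -> float:
    return free - model_require - inference_require

# Values taken from the log above.
assert round(remaining_mb(20651.08, 4897.05, 1024.00), 2) == 14730.03  # KModel
assert round(remaining_mb(15641.31, 159.56, 1024.00), 2) == 14457.75   # IntegratedAutoencoderKL
```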
