Single-GPU training works on Linux, but multi-GPU training raises an error #321
Comments
Could you advise how to get this installed successfully on Linux (Ubuntu)?
I ran into this problem too.
Single-GPU training works fine, but multi-GPU training gets stuck at the point below. Here are my parameters:
Traceback (most recent call last):
File "/mnt/data/wangxi/lora-scripts/./sd-scripts/train_db.py", line 501, in
train(args)
File "/mnt/data/wangxi/lora-scripts/./sd-scripts/train_db.py", line 321, in train
encoder_hidden_states = train_util.get_hidden_states(
File "/mnt/data/wangxi/lora-scripts/sd-scripts/library/train_util.py", line 4003, in get_hidden_states
encoder_hidden_states = text_encoder.text_model.final_layer_norm(encoder_hidden_states)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DistributedDataParallel' object has no attribute 'text_model'
steps: 0%| | 0/22080 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/mnt/data/wangxi/lora-scripts/./sd-scripts/train_db.py", line 501, in
train(args)
File "/mnt/data/wangxi/lora-scripts/./sd-scripts/train_db.py", line 321, in train
encoder_hidden_states = train_util.get_hidden_states(
File "/mnt/data/wangxi/lora-scripts/sd-scripts/library/train_util.py", line 4003, in get_hidden_states
encoder_hidden_states = text_encoder.text_model.final_layer_norm(encoder_hidden_states)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DistributedDataParallel' object has no attribute 'text_model'
^CWARNING:torch.distributed.elastic.agent.server.api:Received 2 death signal, shutting down workers
Traceback (most recent call last):
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/commands/launch.py", line 996, in
main()
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/commands/launch.py", line 992, in main
launch_command(args)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/commands/launch.py", line 977, in launch_command
multi_gpu_launcher(args)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/accelerate/commands/launch.py", line 646, in multi_gpu_launcher
distrib_run.run(args)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 241, in launch_agent
result = agent.run()
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 723, in run
result = self._invoke_run(role)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/elastic/agent/server/api.py", line 864, in _invoke_run
time.sleep(monitor_interval)
File "/home/wangxi/miniconda3/envs/lora/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 62, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 35054 got signal: 2
12:51:03-224816 ERROR Training failed / 训练失败
(lora) [wangxi@v100-4 lora-scripts]$ ^C
(lora) [wangxi@v100-4 lora-scripts]$
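For context, the AttributeError appears because accelerate wraps the text encoder in torch's DistributedDataParallel for multi-GPU runs, so attributes like text_model live on the wrapped module rather than on the DDP wrapper itself. Below is a minimal sketch of the unwrapping pattern, assuming that is the cause; the helper name unwrap_ddp is hypothetical and not part of sd-scripts.

```python
# Minimal sketch (assumption, not the repository's actual fix): with multiple
# GPUs the text encoder arrives wrapped in DistributedDataParallel, so
# attributes such as `text_model` must be reached through `.module`.
import torch
from torch.nn.parallel import DistributedDataParallel


def unwrap_ddp(model: torch.nn.Module) -> torch.nn.Module:
    """Return the underlying module if `model` is wrapped in DDP (hypothetical helper)."""
    return model.module if isinstance(model, DistributedDataParallel) else model


# e.g. in library/train_util.py, get_hidden_states() could unwrap first:
# text_encoder = unwrap_ddp(text_encoder)
# encoder_hidden_states = text_encoder.text_model.final_layer_norm(encoder_hidden_states)
```

The accelerate library also provides accelerator.unwrap_model(model) for the same purpose, which may be the more idiomatic way to get the raw module back inside the training script.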