"cat_cuda" not implemented for 'Float8_e4m3fn' #1711

Open
vneznaikin opened this issue Oct 18, 2024 · 6 comments
Open

"cat_cuda" not implemented for 'Float8_e4m3fn' #1711

vneznaikin opened this issue Oct 18, 2024 · 6 comments

Comments


vneznaikin commented Oct 18, 2024

torch 2.4.0
flux1-dev-fp8-e4m3fn.safetensors
t5xxl_fp8_e4m3fn.safetensors

Settings:
--fp8_base
--split_mode

Error:
[rank0]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'

PyTorch probably doesn't support Float8_e4m3fn in torch.cat yet, but --fp8_base should still be able to handle float8_e4m3fn models.
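For reference, a minimal reproduction sketch of the failing primitive (assuming a CUDA device and PyTorch 2.4; this snippet is illustrative and not taken from sd-scripts):

```python
import torch

# torch.cat has no CUDA kernel registered for float8_e4m3fn in torch 2.4,
# so concatenating fp8 tensors on the GPU raises the same error as above.
a = torch.randn(4, device="cuda").to(torch.float8_e4m3fn)
b = torch.randn(4, device="cuda").to(torch.float8_e4m3fn)
torch.cat([a, b])  # RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
```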

[rank0]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
[rank0]:[W1018 14:07:24.358993691 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())
W1018 14:07:27.587000 138783073695552 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 136 closing signal SIGTERM
E1018 14:07:27.619000 138783073695552 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 135) of binary: /opt/conda/bin/python3.10
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1097, in launch_command
    multi_gpu_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 734, in multi_gpu_launcher
    distrib_run.run(args)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
flux_train_network.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-10-18_14:07:27
  host      : 70d34f90e91b
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 135)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
@kohya-ss (Owner)

Could you please share the part further up in the log so we can pinpoint exactly where the problem is?


zsavmak commented Oct 19, 2024

Same problem.

@geomaster1234

I have the same problem as well but only when trying to use multi-gpu. Training on a single GPU works fine. This only crops up when I reconfigure accelerate from 1 machine/1 gpu to 1 machine/3 gpus. (I have 4 installed, purposely only trying to use 3 for training.)

My rig:
OS: Ubuntu 22.04.5 LTS
CPU: 2X - Xeon E5-2680 v4
System Memory: 128GB
GPU: 4X - RTX 3060 12GB VRAM

Training Settings:
accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 --rdzv_backend=c10d flux_train_network.py --train_data_dir '/mnt/Array/LoraTraining/FluxTest/training_images' --pretrained_model_name_or_path '/mnt/Array/ComfyUI/models/unet/flux1DevFp8_v10.safetensors' --clip_l '/mnt/Array/ComfyUI/models/clip/clip_l.safetensors' --t5xxl '/mnt/Array/ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors' --ae '/mnt/Array/ComfyUI/models/vae/ae.safetensors' --cache_latents_to_disk --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --seed 42 --gradient_checkpointing --mixed_precision bf16 --save_precision bf16 --network_module networks.lora_flux --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk --fp8_base --max_train_epochs 20 --save_every_n_epochs 2 --output_dir /mnt/Array/LoraTraining/FluxTest/model --output_name FluxTest_v01 --timestep_sampling shift --discrete_flow_shift 3.1582 --model_prediction_type raw --guidance_scale 1.0 --optimizer_type adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" --split_mode --network_args "train_blocks=single" --lr_scheduler constant_with_warmup --max_grad_norm 0.0 --network_dim 32 --learning_rate 5e-5 --text_encoder_lr 5e-5 --unet_lr 5e-5 --dataset_config '/mnt/Array/LoraTraining/FluxTest/test_dataset.toml' --save_state --log_with tensorboard --logging_dir '/mnt/Array/LoraTraining/FluxTest/log'

Log Output Returned:

[rank2]: Traceback (most recent call last):                                                                                                                                                                                                                                                          
[rank2]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 519, in <module>                                                                                                                                                                                                    
[rank2]:     trainer.train(args)                                                                                                                                                                                                                                                                     
[rank2]:   File "/mnt/Array/LoraTraining/sd-scripts/train_network.py", line 354, in train                                                                                                                                                                                                            
[rank2]:     model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)                                                                                                                                                                                        
[rank2]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 79, in load_target_model                                                                                                                                                                                            
[rank2]:     model = self.prepare_split_model(model, weight_dtype, accelerator)                                                                                                                                                                                                                      
[rank2]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 129, in prepare_split_model                                                                                                                                                                                         
[rank2]:     flux_upper = accelerator.prepare(flux_upper)                                                                                                                                                                                                                                            
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1311, in prepare                                                                                                                                                                   
[rank2]:     result = tuple(                                                                                                                                                                                                                                                                         
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1312, in <genexpr>                                                                                                                                                                 
[rank2]:     self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)                                                                                                                                                                                   
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1188, in _prepare_one                                                                                                                                                              
[rank2]:     return self.prepare_model(obj, device_placement=device_placement)                                                                                                                                                                                                                       
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1452, in prepare_model                                                                                                                                                             
[rank2]:     model = torch.nn.parallel.DistributedDataParallel(                                                                                                                                                                                                                                      
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 824, in __init__                                                                                                                                                            
[rank2]:     _sync_module_states(                                                                                                                                                                                                                                                                    
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 315, in _sync_module_states                                                                                                                                                       
[rank2]:     _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)                                                                                                                                                                                                      
[rank2]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 326, in _sync_params_and_buffers                                                                                                                                                  
[rank2]:     dist._broadcast_coalesced(                                                                                                                                                                                                                                                              
[rank2]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 519, in <module>
[rank0]:     trainer.train(args)
[rank0]:   File "/mnt/Array/LoraTraining/sd-scripts/train_network.py", line 354, in train
[rank0]:     model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
[rank0]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 79, in load_target_model
[rank0]:     model = self.prepare_split_model(model, weight_dtype, accelerator)
[rank0]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 129, in prepare_split_model
[rank0]:     flux_upper = accelerator.prepare(flux_upper)
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1311, in prepare
[rank0]:     result = tuple(
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1312, in <genexpr>
[rank0]:     self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1188, in _prepare_one
[rank0]:     return self.prepare_model(obj, device_placement=device_placement)
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1452, in prepare_model
[rank0]:     model = torch.nn.parallel.DistributedDataParallel(
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 824, in __init__
[rank0]:     _sync_module_states(
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 315, in _sync_module_states
[rank0]:     _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)
[rank0]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 326, in _sync_params_and_buffers
[rank0]:     dist._broadcast_coalesced(
[rank0]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
[rank1]: Traceback (most recent call last):
[rank1]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 519, in <module>
[rank1]:     trainer.train(args)
[rank1]:   File "/mnt/Array/LoraTraining/sd-scripts/train_network.py", line 354, in train
[rank1]:     model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
[rank1]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 79, in load_target_model
[rank1]:     model = self.prepare_split_model(model, weight_dtype, accelerator)
[rank1]:   File "/mnt/Array/LoraTraining/sd-scripts/flux_train_network.py", line 129, in prepare_split_model
[rank1]:     flux_upper = accelerator.prepare(flux_upper)
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1311, in prepare
[rank1]:     result = tuple(
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1312, in <genexpr>
[rank1]:     self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1188, in _prepare_one
[rank1]:     return self.prepare_model(obj, device_placement=device_placement)
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/accelerator.py", line 1452, in prepare_model
[rank1]:     model = torch.nn.parallel.DistributedDataParallel(
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 824, in __init__
[rank1]:     _sync_module_states(
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 315, in _sync_module_states
[rank1]:     _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)
[rank1]:   File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/utils.py", line 326, in _sync_params_and_buffers
[rank1]:     dist._broadcast_coalesced(
[rank1]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
W1023 07:39:59.765905 131166916654144 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 42656 closing signal SIGTERM
W1023 07:39:59.766342 131166916654144 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 42657 closing signal SIGTERM
E1023 07:39:59.880910 131166916654144 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 42655) of binary: /mnt/Array/miniconda3/envs/kohyass/bin/python
Traceback (most recent call last):
  File "/mnt/Array/miniconda3/envs/kohyass/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1097, in launch_command
    multi_gpu_launcher(args)
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/accelerate/commands/launch.py", line 734, in multi_gpu_launcher
    distrib_run.run(args)
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/mnt/Array/miniconda3/envs/kohyass/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
flux_train_network.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-10-23_07:39:59
  host      : superserver
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 42655)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

@kohya-ss (Owner)

It seems that fp8 support is achieved by Accelerate using Transformer Engine internally, but multi-GPU training may not be supported. Further investigation is needed.
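For what it's worth, the tracebacks above fail inside DDP's initial parameter sync (`_sync_module_states` → `dist._broadcast_coalesced`), whose coalescing step flattens parameters with `torch.cat` and so hits the missing `cat_cuda` kernel for float8. One possible, untested direction would be to broadcast float8 weights through a `uint8` view of the same storage, which the collectives do support. The helper below is only a hypothetical sketch, not part of sd-scripts or Accelerate:

```python
import torch
import torch.distributed as dist

def broadcast_fp8_as_uint8(param: torch.Tensor, src: int = 0) -> None:
    # float8_e4m3fn elements are 1 byte, so the tensor can be reinterpreted
    # as uint8 without copying; broadcasting the view in place also updates
    # the original float8 tensor on non-source ranks.
    dist.broadcast(param.data.view(torch.uint8), src=src)
```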


Mikec78660 commented Oct 24, 2024

Same error when trying to run on two GPUs.

accelerate launch \
  --mixed_precision bf16 \
  --num_cpu_threads_per_process 1 \
  sd-scripts/flux_train_network.py \
  --pretrained_model_name_or_path "/app/fluxgym/models/unet/flux1-dev.sft" \
  --clip_l "/app/fluxgym/models/clip/clip_l.safetensors" \
  --t5xxl "/app/fluxgym/models/clip/t5xxl_fp16.safetensors" \
  --ae "/app/fluxgym/models/vae/ae.sft" \
  --cache_latents_to_disk \
  --save_model_as safetensors \
  --sdpa --persistent_data_loader_workers \
  --max_data_loader_n_workers 2 \
  --seed 42 \
  --gradient_checkpointing \
  --mixed_precision bf16 \
  --save_precision bf16 \
  --network_module networks.lora_flux \
  --network_dim 4 \
  --optimizer_type adafactor \
  --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" \
  --split_mode \
  --network_args "train_blocks=single" \
  --lr_scheduler constant_with_warmup \
  --max_grad_norm 0.0 \
  --learning_rate 8e-4 \
  --cache_text_encoder_outputs \
  --cache_text_encoder_outputs_to_disk \
  --fp8_base \
  --highvram \
  --max_train_epochs 2 \
  --save_every_n_epochs 4 \
  --dataset_config "/app/fluxgym/outputs/michael/dataset.toml" \
  --output_dir "/app/fluxgym/outputs/michael" \
  --output_name michael \
  --timestep_sampling shift \
  --discrete_flow_shift 3.1582 \
  --model_prediction_type raw \
  --guidance_scale 1 \
  --loss_type l2 \
[INFO] Running bash "/app/fluxgym/outputs/michael/train.sh"
[2024-10-24 13:13:42] [INFO] The following values were not passed to `accelerate launch` and had defaults used instead:
[2024-10-24 13:13:42] [INFO] `--num_processes` was set to a value of `2`
[2024-10-24 13:13:42] [INFO] More than one GPU was found, enabling multi-GPU training.
[2024-10-24 13:13:42] [INFO] If this was unintended please pass in `--num_processes=1`.
[2024-10-24 13:13:42] [INFO] `--num_machines` was set to a value of `1`
[2024-10-24 13:13:42] [INFO] `--dynamo_backend` was set to a value of `'no'`
[2024-10-24 13:13:42] [INFO] To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
[2024-10-24 13:13:45] [INFO] 2024-10-24 13:13:45 INFO     highvram is enabled /            train_util.py:4106
[2024-10-24 13:13:45] [INFO] highvramが有効です
[2024-10-24 13:13:45] [INFO] WARNING  cache_latents_to_disk is         train_util.py:4127
[2024-10-24 13:13:45] [INFO] enabled, so cache_latents is
[2024-10-24 13:13:45] [INFO] also enabled /
[2024-10-24 13:13:45] [INFO] cache_latents_to_diskが有効なた
[2024-10-24 13:13:45] [INFO] め、cache_latentsを有効にします
[2024-10-24 13:13:45] [INFO] 2024-10-24 13:13:45 INFO     Checking the state dict: Diffusers flux_utils.py:62
[2024-10-24 13:13:45] [INFO] or BFL, dev or schnell
[2024-10-24 13:13:45] [INFO] 2024-10-24 13:13:45 INFO     highvram is enabled /            train_util.py:4106
[2024-10-24 13:13:45] [INFO] highvramが有効です
[2024-10-24 13:13:45] [INFO] INFO     t5xxl_max_token_length:   flux_train_network.py:152
[2024-10-24 13:13:45] [INFO] 512
[2024-10-24 13:13:45] [INFO] WARNING  cache_latents_to_disk is         train_util.py:4127
[2024-10-24 13:13:45] [INFO] enabled, so cache_latents is
[2024-10-24 13:13:45] [INFO] also enabled /
[2024-10-24 13:13:45] [INFO] cache_latents_to_diskが有効なた
[2024-10-24 13:13:45] [INFO] め、cache_latentsを有効にします
[2024-10-24 13:13:45] [INFO] 2024-10-24 13:13:45 INFO     Checking the state dict: Diffusers flux_utils.py:62
[2024-10-24 13:13:45] [INFO] or BFL, dev or schnell
[2024-10-24 13:13:45] [INFO] INFO     t5xxl_max_token_length:   flux_train_network.py:152
[2024-10-24 13:13:45] [INFO] 512
[2024-10-24 13:13:45] [INFO] /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
[2024-10-24 13:13:45] [INFO] warnings.warn(
[2024-10-24 13:13:45] [INFO] /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
[2024-10-24 13:13:45] [INFO] warnings.warn(
[2024-10-24 13:13:45] [INFO] You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[2024-10-24 13:13:45] [INFO] You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[2024-10-24 13:13:46] [INFO] 2024-10-24 13:13:46 INFO     Loading dataset config from    train_network.py:304
[2024-10-24 13:13:46] [INFO] /app/fluxgym/outputs/michael-c
[2024-10-24 13:13:46] [INFO] iesielczyk/dataset.toml
[2024-10-24 13:13:46] [INFO] INFO     prepare images.                  train_util.py:1956
[2024-10-24 13:13:46] [INFO] INFO     get image size from name of      train_util.py:1873
[2024-10-24 13:13:46] [INFO] cache files
[2024-10-24 13:13:46] [INFO] 0%|          | 0/10 [00:00<?, ?it/s]
100%|██████████| 10/10 [00:00<00:00, 174762.67it/s]
[2024-10-24 13:13:46] [INFO] INFO     set image size from cache files: train_util.py:1901
[2024-10-24 13:13:46] [INFO] 10/10
[2024-10-24 13:13:46] [INFO] INFO     found directory                  train_util.py:1903
[2024-10-24 13:13:46] [INFO] /app/fluxgym/datasets/michael contains 10 image
[2024-10-24 13:13:46] [INFO] files
[2024-10-24 13:13:46] [INFO] read caption:   0%|          | 0/10 [00:00<?, ?it/s]
read caption: 100%|██████████| 10/10 [00:00<00:00, 32239.08it/s]
[2024-10-24 13:13:46] [INFO] INFO     40 train images with repeating.  train_util.py:1997
[2024-10-24 13:13:46] [INFO] INFO     0 reg images.                    train_util.py:2000
[2024-10-24 13:13:46] [INFO] WARNING  no regularization images /       train_util.py:2005
[2024-10-24 13:13:46] [INFO] 正則化画像が見つかりませんでした
[2024-10-24 13:13:46] [INFO] INFO     [Dataset 0]                      config_util.py:567
[2024-10-24 13:13:46] [INFO] batch_size: 1
[2024-10-24 13:13:46] [INFO] resolution: (512, 512)
[2024-10-24 13:13:46] [INFO] enable_bucket: False
[2024-10-24 13:13:46] [INFO] network_multiplier: 1.0
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] [Subset 0 of Dataset 0]
[2024-10-24 13:13:46] [INFO] image_dir:
[2024-10-24 13:13:46] [INFO] "/app/fluxgym/datasets/michael"
[2024-10-24 13:13:46] [INFO] image_count: 10
[2024-10-24 13:13:46] [INFO] num_repeats: 4
[2024-10-24 13:13:46] [INFO] shuffle_caption: False
[2024-10-24 13:13:46] [INFO] keep_tokens: 1
[2024-10-24 13:13:46] [INFO] keep_tokens_separator:
[2024-10-24 13:13:46] [INFO] caption_separator: ,
[2024-10-24 13:13:46] [INFO] secondary_separator: None
[2024-10-24 13:13:46] [INFO] enable_wildcard: False
[2024-10-24 13:13:46] [INFO] caption_dropout_rate: 0.0
[2024-10-24 13:13:46] [INFO] caption_dropout_every_n_epoc
[2024-10-24 13:13:46] [INFO] hes: 0
[2024-10-24 13:13:46] [INFO] caption_tag_dropout_rate:
[2024-10-24 13:13:46] [INFO] 0.0
[2024-10-24 13:13:46] [INFO] caption_prefix: None
[2024-10-24 13:13:46] [INFO] caption_suffix: None
[2024-10-24 13:13:46] [INFO] color_aug: False
[2024-10-24 13:13:46] [INFO] flip_aug: False
[2024-10-24 13:13:46] [INFO] face_crop_aug_range: None
[2024-10-24 13:13:46] [INFO] random_crop: False
[2024-10-24 13:13:46] [INFO] token_warmup_min: 1
[2024-10-24 13:13:46] [INFO] token_warmup_step: 0
[2024-10-24 13:13:46] [INFO] alpha_mask: False
[2024-10-24 13:13:46] [INFO] custom_attributes: {}
[2024-10-24 13:13:46] [INFO] is_reg: False
[2024-10-24 13:13:46] [INFO] class_tokens: Michael
[2024-10-24 13:13:46] [INFO] caption_extension: .txt
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] INFO     [Dataset 0]                      config_util.py:573
[2024-10-24 13:13:46] [INFO] INFO     loading image sizes.              train_util.py:923
[2024-10-24 13:13:46] [INFO] 0%|          | 0/10 [00:00<?, ?it/s]
100%|██████████| 10/10 [00:00<00:00, 537731.28it/s]
[2024-10-24 13:13:46] [INFO] INFO     prepare dataset                   train_util.py:948
[2024-10-24 13:13:46] [INFO] INFO     preparing accelerator          train_network.py:369
[2024-10-24 13:13:46] [INFO] 2024-10-24 13:13:46 INFO     Loading dataset config from    train_network.py:304
[2024-10-24 13:13:46] [INFO] /app/fluxgym/outputs/michael-c
[2024-10-24 13:13:46] [INFO] iesielczyk/dataset.toml
[2024-10-24 13:13:46] [INFO] INFO     prepare images.                  train_util.py:1956
[2024-10-24 13:13:46] [INFO] INFO     get image size from name of      train_util.py:1873
[2024-10-24 13:13:46] [INFO] cache files
[2024-10-24 13:13:46] [INFO] 0%|          | 0/10 [00:00<?, ?it/s]
100%|██████████| 10/10 [00:00<00:00, 169809.88it/s]
[2024-10-24 13:13:46] [INFO] INFO     set image size from cache files: train_util.py:1901
[2024-10-24 13:13:46] [INFO] 10/10
[2024-10-24 13:13:46] [INFO] INFO     found directory                  train_util.py:1903
[2024-10-24 13:13:46] [INFO] /app/fluxgym/datasets/michael contains 10 image
[2024-10-24 13:13:46] [INFO] files
[2024-10-24 13:13:46] [INFO] read caption:   0%|          | 0/10 [00:00<?, ?it/s]
read caption: 100%|██████████| 10/10 [00:00<00:00, 29852.70it/s]
[2024-10-24 13:13:46] [INFO] INFO     40 train images with repeating.  train_util.py:1997
[2024-10-24 13:13:46] [INFO] INFO     0 reg images.                    train_util.py:2000
[2024-10-24 13:13:46] [INFO] WARNING  no regularization images /       train_util.py:2005
[2024-10-24 13:13:46] [INFO] 正則化画像が見つかりませんでした
[2024-10-24 13:13:46] [INFO] INFO     [Dataset 0]                      config_util.py:567
[2024-10-24 13:13:46] [INFO] batch_size: 1
[2024-10-24 13:13:46] [INFO] resolution: (512, 512)
[2024-10-24 13:13:46] [INFO] enable_bucket: False
[2024-10-24 13:13:46] [INFO] network_multiplier: 1.0
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] [Subset 0 of Dataset 0]
[2024-10-24 13:13:46] [INFO] image_dir:
[2024-10-24 13:13:46] [INFO] "/app/fluxgym/datasets/michael"
[2024-10-24 13:13:46] [INFO] image_count: 10
[2024-10-24 13:13:46] [INFO] num_repeats: 4
[2024-10-24 13:13:46] [INFO] shuffle_caption: False
[2024-10-24 13:13:46] [INFO] keep_tokens: 1
[2024-10-24 13:13:46] [INFO] keep_tokens_separator:
[2024-10-24 13:13:46] [INFO] caption_separator: ,
[2024-10-24 13:13:46] [INFO] secondary_separator: None
[2024-10-24 13:13:46] [INFO] enable_wildcard: False
[2024-10-24 13:13:46] [INFO] caption_dropout_rate: 0.0
[2024-10-24 13:13:46] [INFO] caption_dropout_every_n_epoc
[2024-10-24 13:13:46] [INFO] hes: 0
[2024-10-24 13:13:46] [INFO] caption_tag_dropout_rate:
[2024-10-24 13:13:46] [INFO] 0.0
[2024-10-24 13:13:46] [INFO] caption_prefix: None
[2024-10-24 13:13:46] [INFO] caption_suffix: None
[2024-10-24 13:13:46] [INFO] color_aug: False
[2024-10-24 13:13:46] [INFO] flip_aug: False
[2024-10-24 13:13:46] [INFO] face_crop_aug_range: None
[2024-10-24 13:13:46] [INFO] random_crop: False
[2024-10-24 13:13:46] [INFO] token_warmup_min: 1
[2024-10-24 13:13:46] [INFO] token_warmup_step: 0
[2024-10-24 13:13:46] [INFO] alpha_mask: False
[2024-10-24 13:13:46] [INFO] custom_attributes: {}
[2024-10-24 13:13:46] [INFO] is_reg: False
[2024-10-24 13:13:46] [INFO] class_tokens: Michael
[2024-10-24 13:13:46] [INFO] caption_extension: .txt
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] 
[2024-10-24 13:13:46] [INFO] INFO     [Dataset 0]                      config_util.py:573
[2024-10-24 13:13:46] [INFO] INFO     loading image sizes.              train_util.py:923
[2024-10-24 13:13:46] [INFO] 0%|          | 0/10 [00:00<?, ?it/s]
100%|██████████| 10/10 [00:00<00:00, 243854.88it/s]
[2024-10-24 13:13:46] [INFO] INFO     prepare dataset                   train_util.py:948
[2024-10-24 13:13:46] [INFO] INFO     preparing accelerator          train_network.py:369
[2024-10-24 13:13:46] [INFO] accelerator device: cuda:0
[2024-10-24 13:13:46] [INFO] INFO     Checking the state dict: Diffusers flux_utils.py:62
[2024-10-24 13:13:46] [INFO] or BFL, dev or schnell
[2024-10-24 13:13:46] [INFO] INFO     Building Flux model dev from BFL  flux_utils.py:116
[2024-10-24 13:13:46] [INFO] checkpoint
[2024-10-24 13:13:46] [INFO] accelerator device: cuda:1
[2024-10-24 13:13:46] [INFO] INFO     Checking the state dict: Diffusers flux_utils.py:62
[2024-10-24 13:13:46] [INFO] or BFL, dev or schnell
[2024-10-24 13:13:46] [INFO] INFO     Building Flux model dev from BFL  flux_utils.py:116
[2024-10-24 13:13:46] [INFO] checkpoint
[2024-10-24 13:13:46] [INFO] INFO     Loading state dict from           flux_utils.py:133
[2024-10-24 13:13:46] [INFO] /app/fluxgym/models/unet/flux1-de
[2024-10-24 13:13:46] [INFO] v.sft
[2024-10-24 13:13:46] [INFO] INFO     Loading state dict from           flux_utils.py:133
[2024-10-24 13:13:46] [INFO] /app/fluxgym/models/unet/flux1-de
[2024-10-24 13:13:46] [INFO] v.sft
[2024-10-24 13:13:46] [INFO] INFO     Loaded Flux: <All keys matched    flux_utils.py:145
[2024-10-24 13:13:46] [INFO] successfully>
[2024-10-24 13:13:46] [INFO] INFO     Loaded Flux: <All keys matched    flux_utils.py:145
[2024-10-24 13:13:46] [INFO] successfully>
[2024-10-24 13:13:46] [INFO] INFO     prepare split model       flux_train_network.py:107
[2024-10-24 13:13:46] [INFO] INFO     prepare split model       flux_train_network.py:107
[2024-10-24 13:13:46] [INFO] INFO     load state dict for lower flux_train_network.py:114
[2024-10-24 13:13:46] [INFO] INFO     load state dict for lower flux_train_network.py:114
[2024-10-24 13:13:46] [INFO] INFO     load state dict for upper flux_train_network.py:119
[2024-10-24 13:13:46] [INFO] INFO     load state dict for upper flux_train_network.py:119
[2024-10-24 13:13:46] [INFO] INFO     prepare upper model       flux_train_network.py:122
[2024-10-24 13:13:46] [INFO] INFO     prepare upper model       flux_train_network.py:122
[2024-10-24 13:14:40] [INFO] [rank0]: Traceback (most recent call last):
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 564, in <module>
[2024-10-24 13:14:40] [INFO] [rank0]:     trainer.train(args)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/app/fluxgym/sd-scripts/train_network.py", line 378, in train
[2024-10-24 13:14:40] [INFO] [rank0]:     model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 79, in load_target_model
[2024-10-24 13:14:40] [INFO] [rank0]:     model = self.prepare_split_model(model, weight_dtype, accelerator)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 129, in prepare_split_model
[2024-10-24 13:14:40] [INFO] [rank0]:     flux_upper = accelerator.prepare(flux_upper)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1311, in prepare
[2024-10-24 13:14:40] [INFO] [rank0]:     result = tuple(
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1312, in <genexpr>
[2024-10-24 13:14:40] [INFO] [rank0]:     self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1188, in _prepare_one
[2024-10-24 13:14:40] [INFO] [rank0]:     return self.prepare_model(obj, device_placement=device_placement)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1452, in prepare_model
[2024-10-24 13:14:40] [INFO] [rank0]:     model = torch.nn.parallel.DistributedDataParallel(
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 827, in __init__
[2024-10-24 13:14:40] [INFO] [rank0]:     _sync_module_states(
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/utils.py", line 317, in _sync_module_states
[2024-10-24 13:14:40] [INFO] [rank0]:     _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)
[2024-10-24 13:14:40] [INFO] [rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/utils.py", line 328, in _sync_params_and_buffers
[2024-10-24 13:14:40] [INFO] [rank0]:     dist._broadcast_coalesced(
[2024-10-24 13:14:40] [INFO] [rank0]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
[2024-10-24 13:14:40] [INFO] [rank1]: Traceback (most recent call last):
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 564, in <module>
[2024-10-24 13:14:40] [INFO] [rank1]:     trainer.train(args)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/app/fluxgym/sd-scripts/train_network.py", line 378, in train
[2024-10-24 13:14:40] [INFO] [rank1]:     model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 79, in load_target_model
[2024-10-24 13:14:40] [INFO] [rank1]:     model = self.prepare_split_model(model, weight_dtype, accelerator)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/app/fluxgym/sd-scripts/flux_train_network.py", line 129, in prepare_split_model
[2024-10-24 13:14:40] [INFO] [rank1]:     flux_upper = accelerator.prepare(flux_upper)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1311, in prepare
[2024-10-24 13:14:40] [INFO] [rank1]:     result = tuple(
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1312, in <genexpr>
[2024-10-24 13:14:40] [INFO] [rank1]:     self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1188, in _prepare_one
[2024-10-24 13:14:40] [INFO] [rank1]:     return self.prepare_model(obj, device_placement=device_placement)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1452, in prepare_model
[2024-10-24 13:14:40] [INFO] [rank1]:     model = torch.nn.parallel.DistributedDataParallel(
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 827, in __init__
[2024-10-24 13:14:40] [INFO] [rank1]:     _sync_module_states(
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/utils.py", line 317, in _sync_module_states
[2024-10-24 13:14:40] [INFO] [rank1]:     _sync_params_and_buffers(process_group, module_states, broadcast_bucket_size, src)
[2024-10-24 13:14:40] [INFO] [rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/utils.py", line 328, in _sync_params_and_buffers
[2024-10-24 13:14:40] [INFO] [rank1]:     dist._broadcast_coalesced(
[2024-10-24 13:14:40] [INFO] [rank1]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
[2024-10-24 13:14:40] [INFO] [rank0]:[W1024 13:14:40.811569300 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())
[2024-10-24 13:14:41] [INFO] E1024 13:14:41.891000 25 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 36) of binary: /usr/bin/python3
[2024-10-24 13:14:41] [INFO] Traceback (most recent call last):
[2024-10-24 13:14:41] [INFO] File "/usr/local/bin/accelerate", line 8, in <module>
[2024-10-24 13:14:41] [INFO] sys.exit(main())
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
[2024-10-24 13:14:41] [INFO] args.func(args)
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1097, in launch_command
[2024-10-24 13:14:41] [INFO] multi_gpu_launcher(args)
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 734, in multi_gpu_launcher
[2024-10-24 13:14:41] [INFO] distrib_run.run(args)
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
[2024-10-24 13:14:41] [INFO] elastic_launch(
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
[2024-10-24 13:14:41] [INFO] return launch_agent(self._config, self._entrypoint, list(args))
[2024-10-24 13:14:41] [INFO] File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
[2024-10-24 13:14:41] [INFO] raise ChildFailedError(
[2024-10-24 13:14:41] [INFO] torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
[2024-10-24 13:14:41] [INFO] ============================================================
[2024-10-24 13:14:41] [INFO] sd-scripts/flux_train_network.py FAILED
[2024-10-24 13:14:41] [INFO] ------------------------------------------------------------
[2024-10-24 13:14:41] [INFO] Failures:
[2024-10-24 13:14:41] [INFO] [1]:
[2024-10-24 13:14:41] [INFO] time      : 2024-10-24_13:14:41
[2024-10-24 13:14:41] [INFO] host      : fd73707f870d
[2024-10-24 13:14:41] [INFO] rank      : 1 (local_rank: 1)
[2024-10-24 13:14:41] [INFO] exitcode  : 1 (pid: 37)
[2024-10-24 13:14:41] [INFO] error_file: <N/A>
[2024-10-24 13:14:41] [INFO] traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2024-10-24 13:14:41] [INFO] ------------------------------------------------------------
[2024-10-24 13:14:41] [INFO] Root Cause (first observed failure):
[2024-10-24 13:14:41] [INFO] [0]:
[2024-10-24 13:14:41] [INFO] time      : 2024-10-24_13:14:41
[2024-10-24 13:14:41] [INFO] host      : fd73707f870d
[2024-10-24 13:14:41] [INFO] rank      : 0 (local_rank: 0)
[2024-10-24 13:14:41] [INFO] exitcode  : 1 (pid: 36)
[2024-10-24 13:14:41] [INFO] error_file: <N/A>
[2024-10-24 13:14:41] [INFO] traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2024-10-24 13:14:41] [INFO] ============================================================
[2024-10-24 13:14:42] [ERROR] Command exited with code 1
[2024-10-24 13:14:42] [INFO] Runner: <LogsViewRunner nb_logs=276 exit_code=1>

@wanglaofei

Same problem. It occurs when training a Flux LoRA with multiple GPUs (image attached).
