IndexError: list index out of range in local_state[k] = v[grad_accum_step] (dlas/trainer/steps.py, line 242, in do_forward_backward) #206

Closed
opened 2023-04-13 15:50:40 +00:00 by chigkim · 2 comments

Something seems to be broken in the latest commit when training on Colab?

dlas/trainer/steps.py", line 242, in do_forward_backward
local_state[k] = v[grad_accum_step]
list index out of range
Spawning process:  ./train.sh ./training/test/train.yaml
[Training] [2023-04-13T15:41:52.636749] ./train.sh: line 2: ./venv/bin/activate: No such file or directory
[Training] [2023-04-13T15:41:57.356553] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/lib/python3.9/dist-packages/cv2/../../lib64')}
[Training] [2023-04-13T15:41:57.362564]   warn(msg)
[Training] [2023-04-13T15:41:57.367944] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
[Training] [2023-04-13T15:41:57.372368]   warn(msg)
[Training] [2023-04-13T15:41:57.376638] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
[Training] [2023-04-13T15:41:57.383097]   warn(msg)
[Training] [2023-04-13T15:41:57.387732] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
[Training] [2023-04-13T15:41:57.391918]   warn(msg)
[Training] [2023-04-13T15:41:57.396820] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('--listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-13ynx4ms6lmmf --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}
[Training] [2023-04-13T15:41:57.401227]   warn(msg)
[Training] [2023-04-13T15:41:57.405660] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
[Training] [2023-04-13T15:41:57.409382]   warn(msg)
[Training] [2023-04-13T15:41:57.413190] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
[Training] [2023-04-13T15:41:57.417735]   warn(msg)
[Training] [2023-04-13T15:41:57.421958] /usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
[Training] [2023-04-13T15:41:57.426432] Either way, this might cause trouble in the future:
[Training] [2023-04-13T15:41:57.430784] If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
[Training] [2023-04-13T15:41:57.435286]   warn(msg)
[Training] [2023-04-13T15:41:59.846781] 23-04-13 15:41:59.845 - INFO:   name: test
[Training] [2023-04-13T15:41:59.852550]   model: extensibletrainer
[Training] [2023-04-13T15:41:59.861353]   scale: 1
[Training] [2023-04-13T15:41:59.867198]   gpu_ids: [0]
[Training] [2023-04-13T15:41:59.876323]   start_step: 0
[Training] [2023-04-13T15:41:59.887084]   checkpointing_enabled: True
[Training] [2023-04-13T15:41:59.895789]   fp16: False
[Training] [2023-04-13T15:41:59.905360]   bitsandbytes: True
[Training] [2023-04-13T15:41:59.915364]   gpus: 1
[Training] [2023-04-13T15:41:59.926806]   datasets:[
[Training] [2023-04-13T15:41:59.935873]     train:[
[Training] [2023-04-13T15:41:59.944649]       name: training
[Training] [2023-04-13T15:41:59.952880]       n_workers: 2
[Training] [2023-04-13T15:41:59.960848]       batch_size: 119
[Training] [2023-04-13T15:41:59.973342]       mode: paired_voice_audio
[Training] [2023-04-13T15:41:59.980772]       path: ./training/test/train.txt
[Training] [2023-04-13T15:41:59.990948]       fetcher_mode: ['lj']
[Training] [2023-04-13T15:41:59.997822]       phase: train
[Training] [2023-04-13T15:42:00.003493]       max_wav_length: 255995
[Training] [2023-04-13T15:42:00.008424]       max_text_length: 200
[Training] [2023-04-13T15:42:00.021768]       sample_rate: 22050
[Training] [2023-04-13T15:42:00.027860]       load_conditioning: True
[Training] [2023-04-13T15:42:00.035127]       num_conditioning_candidates: 2
[Training] [2023-04-13T15:42:00.040935]       conditioning_length: 44000
[Training] [2023-04-13T15:42:00.047147]       use_bpe_tokenizer: True
[Training] [2023-04-13T15:42:00.058545]       tokenizer_vocab: ./modules/tortoise-tts/tortoise/data/tokenizer.json
[Training] [2023-04-13T15:42:00.070782]       load_aligned_codes: False
[Training] [2023-04-13T15:42:00.093799]       data_type: img
[Training] [2023-04-13T15:42:00.117401]     ]
[Training] [2023-04-13T15:42:00.129305]     val:[
[Training] [2023-04-13T15:42:00.144631]       name: validation
[Training] [2023-04-13T15:42:00.150280]       n_workers: 2
[Training] [2023-04-13T15:42:00.157076]       batch_size: 4
[Training] [2023-04-13T15:42:00.169120]       mode: paired_voice_audio
[Training] [2023-04-13T15:42:00.177704]       path: ./training/test/validation.txt
[Training] [2023-04-13T15:42:00.183351]       fetcher_mode: ['lj']
[Training] [2023-04-13T15:42:00.193747]       phase: val
[Training] [2023-04-13T15:42:00.206693]       max_wav_length: 255995
[Training] [2023-04-13T15:42:00.215317]       max_text_length: 200
[Training] [2023-04-13T15:42:00.221602]       sample_rate: 22050
[Training] [2023-04-13T15:42:00.234369]       load_conditioning: True
[Training] [2023-04-13T15:42:00.241870]       num_conditioning_candidates: 2
[Training] [2023-04-13T15:42:00.250862]       conditioning_length: 44000
[Training] [2023-04-13T15:42:00.255783]       use_bpe_tokenizer: True
[Training] [2023-04-13T15:42:00.260953]       tokenizer_vocab: ./modules/tortoise-tts/tortoise/data/tokenizer.json
[Training] [2023-04-13T15:42:00.268827]       load_aligned_codes: False
[Training] [2023-04-13T15:42:00.276718]       data_type: img
[Training] [2023-04-13T15:42:00.284127]     ]
[Training] [2023-04-13T15:42:00.290899]   ]
[Training] [2023-04-13T15:42:00.296092]   steps:[
[Training] [2023-04-13T15:42:00.300737]     gpt_train:[
[Training] [2023-04-13T15:42:00.306883]       training: gpt
[Training] [2023-04-13T15:42:00.312111]       loss_log_buffer: 500
[Training] [2023-04-13T15:42:00.316954]       optimizer: adamw
[Training] [2023-04-13T15:42:00.326274]       optimizer_params:[
[Training] [2023-04-13T15:42:00.331435]         lr: 1e-05
[Training] [2023-04-13T15:42:00.336478]         weight_decay: 0.01
[Training] [2023-04-13T15:42:00.341716]         beta1: 0.9
[Training] [2023-04-13T15:42:00.349091]         beta2: 0.96
[Training] [2023-04-13T15:42:00.358426]       ]
[Training] [2023-04-13T15:42:00.365601]       clip_grad_eps: 4
[Training] [2023-04-13T15:42:00.372225]       injectors:[
[Training] [2023-04-13T15:42:00.378547]         paired_to_mel:[
[Training] [2023-04-13T15:42:00.385882]           type: torch_mel_spectrogram
[Training] [2023-04-13T15:42:00.395359]           mel_norm_file: ./modules/tortoise-tts/tortoise/data/mel_norms.pth
[Training] [2023-04-13T15:42:00.401600]           in: wav
[Training] [2023-04-13T15:42:00.426385]           out: paired_mel
[Training] [2023-04-13T15:42:00.443228]         ]
[Training] [2023-04-13T15:42:00.451201]         paired_cond_to_mel:[
[Training] [2023-04-13T15:42:00.460088]           type: for_each
[Training] [2023-04-13T15:42:00.471114]           subtype: torch_mel_spectrogram
[Training] [2023-04-13T15:42:00.481096]           mel_norm_file: ./modules/tortoise-tts/tortoise/data/mel_norms.pth
[Training] [2023-04-13T15:42:00.485412]           in: conditioning
[Training] [2023-04-13T15:42:00.495391]           out: paired_conditioning_mel
[Training] [2023-04-13T15:42:00.504951]         ]
[Training] [2023-04-13T15:42:00.513923]         to_codes:[
[Training] [2023-04-13T15:42:00.524441]           type: discrete_token
[Training] [2023-04-13T15:42:00.536475]           in: paired_mel
[Training] [2023-04-13T15:42:00.546114]           out: paired_mel_codes
[Training] [2023-04-13T15:42:00.554706]           dvae_config: ./models/tortoise/train_diffusion_vocoder_22k_level.yml
[Training] [2023-04-13T15:42:00.559192]         ]
[Training] [2023-04-13T15:42:00.568721]         paired_fwd_text:[
[Training] [2023-04-13T15:42:00.579954]           type: generator
[Training] [2023-04-13T15:42:00.590043]           generator: gpt
[Training] [2023-04-13T15:42:00.600074]           in: ['paired_conditioning_mel', 'padded_text', 'text_lengths', 'paired_mel_codes', 'wav_lengths']
[Training] [2023-04-13T15:42:00.610001]           out: ['loss_text_ce', 'loss_mel_ce', 'logits']
[Training] [2023-04-13T15:42:00.618061]         ]
[Training] [2023-04-13T15:42:00.628726]       ]
[Training] [2023-04-13T15:42:00.634694]       losses:[
[Training] [2023-04-13T15:42:00.646845]         text_ce:[
[Training] [2023-04-13T15:42:00.655794]           type: direct
[Training] [2023-04-13T15:42:00.664226]           weight: 0.01
[Training] [2023-04-13T15:42:00.673668]           key: loss_text_ce
[Training] [2023-04-13T15:42:00.682828]         ]
[Training] [2023-04-13T15:42:00.689069]         mel_ce:[
[Training] [2023-04-13T15:42:00.696880]           type: direct
[Training] [2023-04-13T15:42:00.704703]           weight: 1
[Training] [2023-04-13T15:42:00.709503]           key: loss_mel_ce
[Training] [2023-04-13T15:42:00.721886]         ]
[Training] [2023-04-13T15:42:00.731887]       ]
[Training] [2023-04-13T15:42:00.739927]     ]
[Training] [2023-04-13T15:42:00.746677]   ]
[Training] [2023-04-13T15:42:00.750823]   networks:[
[Training] [2023-04-13T15:42:00.755311]     gpt:[
[Training] [2023-04-13T15:42:00.759789]       type: generator
[Training] [2023-04-13T15:42:00.766524]       which_model_G: unified_voice2
[Training] [2023-04-13T15:42:00.771137]       kwargs:[
[Training] [2023-04-13T15:42:00.776074]         layers: 30
[Training] [2023-04-13T15:42:00.780596]         model_dim: 1024
[Training] [2023-04-13T15:42:00.785389]         heads: 16
[Training] [2023-04-13T15:42:00.790175]         max_text_tokens: 402
[Training] [2023-04-13T15:42:00.794797]         max_mel_tokens: 604
[Training] [2023-04-13T15:42:00.799558]         max_conditioning_inputs: 2
[Training] [2023-04-13T15:42:00.805927]         mel_length_compression: 1024
[Training] [2023-04-13T15:42:00.811395]         number_text_tokens: 256
[Training] [2023-04-13T15:42:00.815897]         number_mel_codes: 8194
[Training] [2023-04-13T15:42:00.820584]         start_mel_token: 8192
[Training] [2023-04-13T15:42:00.825542]         stop_mel_token: 8193
[Training] [2023-04-13T15:42:00.830648]         start_text_token: 255
[Training] [2023-04-13T15:42:00.835087]         train_solo_embeddings: False
[Training] [2023-04-13T15:42:00.841909]         use_mel_codes_as_input: True
[Training] [2023-04-13T15:42:00.853354]         checkpointing: True
[Training] [2023-04-13T15:42:00.860889]         tortoise_compat: True
[Training] [2023-04-13T15:42:00.868546]       ]
[Training] [2023-04-13T15:42:00.877742]     ]
[Training] [2023-04-13T15:42:00.887919]   ]
[Training] [2023-04-13T15:42:00.893936]   path:[
[Training] [2023-04-13T15:42:00.900794]     strict_load: True
[Training] [2023-04-13T15:42:00.916217]     pretrain_model_gpt: ./models/tortoise/autoregressive.pth
[Training] [2023-04-13T15:42:00.926493]     root: ./
[Training] [2023-04-13T15:42:00.931746]     experiments_root: ./training/test/finetune
[Training] [2023-04-13T15:42:00.936646]     models: ./training/test/finetune/models
[Training] [2023-04-13T15:42:00.952195]     training_state: ./training/test/finetune/training_state
[Training] [2023-04-13T15:42:00.957018]     log: ./training/test/finetune
[Training] [2023-04-13T15:42:00.964565]     val_images: ./training/test/finetune/val_images
[Training] [2023-04-13T15:42:00.970415]   ]
[Training] [2023-04-13T15:42:00.983376]   train:[
[Training] [2023-04-13T15:42:00.991112]     niter: 1500
[Training] [2023-04-13T15:42:01.003025]     warmup_iter: -1
[Training] [2023-04-13T15:42:01.018846]     mega_batch_factor: 29
[Training] [2023-04-13T15:42:01.026797]     val_freq: 15
[Training] [2023-04-13T15:42:01.038650]     ema_enabled: False
[Training] [2023-04-13T15:42:01.048243]     default_lr_scheme: MultiStepLR
[Training] [2023-04-13T15:42:01.054150]     gen_lr_steps: [6, 12, 27, 54, 75, 99, 150]
[Training] [2023-04-13T15:42:01.060288]     lr_gamma: 0.5
[Training] [2023-04-13T15:42:01.065154]   ]
[Training] [2023-04-13T15:42:01.069906]   eval:[
[Training] [2023-04-13T15:42:01.074149]     pure: False
[Training] [2023-04-13T15:42:01.078672]     output_state: gen
[Training] [2023-04-13T15:42:01.083153]   ]
[Training] [2023-04-13T15:42:01.087901]   logger:[
[Training] [2023-04-13T15:42:01.093945]     save_checkpoint_freq: 15
[Training] [2023-04-13T15:42:01.098831]     visuals: ['gen', 'mel']
[Training] [2023-04-13T15:42:01.104228]     visual_debug_rate: 15
[Training] [2023-04-13T15:42:01.108947]     is_mel_spectrogram: True
[Training] [2023-04-13T15:42:01.113518]   ]
[Training] [2023-04-13T15:42:01.118213]   is_train: True
[Training] [2023-04-13T15:42:01.123220]   dist: False
[Training] [2023-04-13T15:42:01.127898] 
[Training] [2023-04-13T15:42:01.132417] 23-04-13 15:41:59.856 - INFO: Random seed: 5081
[Training] [2023-04-13T15:42:02.276639] 23-04-13 15:42:02.275 - INFO: Number of training data elements: 288, iters: 3
[Training] [2023-04-13T15:42:02.282825] 23-04-13 15:42:02.282 - INFO: Total epochs needed: 500 for iters 1,500
[Training] [2023-04-13T15:42:03.812921] /usr/local/lib/python3.9/dist-packages/transformers/configuration_utils.py:363: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
[Training] [2023-04-13T15:42:03.825588]   warnings.warn(
[Training] [2023-04-13T15:42:17.062878] 23-04-13 15:42:17.062 - INFO: Loading model for [./models/tortoise/autoregressive.pth]
[Training] [2023-04-13T15:42:26.212464] 23-04-13 15:42:26.202 - INFO: Start training from epoch: 0, iter: 0
[Training] [2023-04-13T15:42:26.212530] 
[Training] [2023-04-13T15:42:26.212549] ===================================BUG REPORT===================================
[Training] [2023-04-13T15:42:26.212563] Welcome to bitsandbytes. For bug reports, please run
[Training] [2023-04-13T15:42:26.212576] 
[Training] [2023-04-13T15:42:26.212589] python -m bitsandbytes
[Training] [2023-04-13T15:42:26.212601] 
[Training] [2023-04-13T15:42:26.212614]  and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
[Training] [2023-04-13T15:42:26.212627] ================================================================================
[Training] [2023-04-13T15:42:26.212640] bin /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
[Training] [2023-04-13T15:42:26.212653] CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
[Training] [2023-04-13T15:42:26.212667] CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
[Training] [2023-04-13T15:42:26.212679] CUDA SETUP: Highest compute capability among GPUs detected: 7.5
[Training] [2023-04-13T15:42:26.212692] CUDA SETUP: Detected CUDA version 118
[Training] [2023-04-13T15:42:26.212705] CUDA SETUP: Loading binary /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
[Training] [2023-04-13T15:42:26.212717] Disabled distributed training.
[Training] [2023-04-13T15:42:26.212729] Loading from ./models/tortoise/dvae.pth
[Training] [2023-04-13T15:42:29.218519] /usr/local/lib/python3.9/dist-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
[Training] [2023-04-13T15:42:29.218579]   warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[Training] [2023-04-13T15:43:09.949073] Traceback (most recent call last):
[Training] [2023-04-13T15:43:09.949119]   File "/content/ai-voice-cloning/./src/train.py", line 64, in <module>
[Training] [2023-04-13T15:43:09.949402]     train(config_path, args.launcher)
[Training] [2023-04-13T15:43:09.949439]   File "/content/ai-voice-cloning/./src/train.py", line 31, in train
[Training] [2023-04-13T15:43:09.949463]     trainer.do_training()
[Training] [2023-04-13T15:43:09.949483]   File "/content/ai-voice-cloning/modules/dlas/dlas/train.py", line 408, in do_training
[Training] [2023-04-13T15:43:09.949650]     metric = self.do_step(train_data)
[Training] [2023-04-13T15:43:09.949677]   File "/content/ai-voice-cloning/modules/dlas/dlas/train.py", line 271, in do_step
[Training] [2023-04-13T15:43:09.949756]     gradient_norms_dict = self.model.optimize_parameters(
[Training] [2023-04-13T15:43:09.949775]   File "/content/ai-voice-cloning/modules/dlas/dlas/trainer/ExtensibleTrainer.py", line 321, in optimize_parameters
[Training] [2023-04-13T15:43:09.949887]     ns = step.do_forward_backward(
[Training] [2023-04-13T15:43:09.949907]   File "/content/ai-voice-cloning/modules/dlas/dlas/trainer/steps.py", line 242, in do_forward_backward
[Training] [2023-04-13T15:43:09.950010]     local_state[k] = v[grad_accum_step]
[Training] [2023-04-13T15:43:09.950050] IndexError: list index out of range
[Training] [2023-04-13T15:43:22.858549] ./train.sh: line 4: deactivate: command not found
Owner
> mega_batch_factor: 29

https://git.ecker.tech/mrq/ai-voice-cloning/wiki/Issues#local_state-k-v-grad_accum_step-indexerror-list-index-out-of-range
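For context, a minimal sketch of the failure mode (my own illustration, not the actual `steps.py` code, and assuming the mega-batch split is done with something like `torch.chunk`): when the batch size is not evenly divisible, `torch.chunk` can return fewer chunks than requested, so indexing the chunk list with `grad_accum_step` runs past the end. With the `batch_size: 119` and `mega_batch_factor: 29` from the config above:

```python
# Hypothetical reproduction, not the repo's code: split a batch into
# mega_batch_factor gradient-accumulation chunks and index them the way
# do_forward_backward does.
import torch

batch_size = 119        # datasets.train.batch_size from the log above
mega_batch_factor = 29  # train.mega_batch_factor from the log above

batch = torch.zeros(batch_size, 10)
chunks = torch.chunk(batch, mega_batch_factor, dim=0)

# torch.chunk uses a chunk size of ceil(119/29) = 5, which yields only
# ceil(119/5) = 24 chunks, not the 29 the trainer expects.
print(len(chunks))  # 24

for grad_accum_step in range(mega_batch_factor):
    local_state = chunks[grad_accum_step]  # IndexError once grad_accum_step >= 24
```

Under that assumption, lowering `mega_batch_factor` so it divides the batch size evenly (119 = 7 × 17, so e.g. 7 or 17), or adjusting `batch_size`, keeps the chunk count equal to the number of accumulation steps; see the wiki page above for the recommended settings.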
Author

That worked!