UnboundLocalError: local variable 'opt' referenced before assignment #102

Closed
opened 2023-03-09 20:14:42 +00:00 by SyntheticVoices · 9 comments
[Training] [2023-03-09T20:12:47.231609]         tortoise_compat: True
[Training] [2023-03-09T20:12:47.234609]       ]
[Training] [2023-03-09T20:12:47.237931]     ]
[Training] [2023-03-09T20:12:47.242929]   ]
[Training] [2023-03-09T20:12:47.245439]   path:[
[Training] [2023-03-09T20:12:47.248449]     strict_load: True
[Training] [2023-03-09T20:12:47.251958]     pretrain_model_gpt: H:\ai-voice-cloning\models\tortoise\autoregressive.pth
[Training] [2023-03-09T20:12:47.254969]     root: ./
[Training] [2023-03-09T20:12:47.258247]     experiments_root: ./training\Gebbs\finetune
[Training] [2023-03-09T20:12:47.260250]     models: ./training\Gebbs\finetune\models
[Training] [2023-03-09T20:12:47.263773]     training_state: ./training\Gebbs\finetune\training_state
[Training] [2023-03-09T20:12:47.267292]     log: ./training\Gebbs\finetune
[Training] [2023-03-09T20:12:47.269294]     val_images: ./training\Gebbs\finetune\val_images
[Training] [2023-03-09T20:12:47.274812]   ]
[Training] [2023-03-09T20:12:47.277925]   train:[
[Training] [2023-03-09T20:12:47.280926]     niter: 510
[Training] [2023-03-09T20:12:47.283441]     warmup_iter: -1
[Training] [2023-03-09T20:12:47.286968]     mega_batch_factor: 16
[Training] [2023-03-09T20:12:47.289966]     val_freq: 5
[Training] [2023-03-09T20:12:47.293490]     ema_enabled: False
[Training] [2023-03-09T20:12:47.296017]     default_lr_scheme: MultiStepLR
[Training] [2023-03-09T20:12:47.299018]     gen_lr_steps: [9, 18, 25, 33]
[Training] [2023-03-09T20:12:47.302012]     lr_gamma: 0.5
[Training] [2023-03-09T20:12:47.304700]   ]
[Training] [2023-03-09T20:12:47.307708]   eval:[
[Training] [2023-03-09T20:12:47.310714]     pure: True
[Training] [2023-03-09T20:12:47.313241]     output_state: gen
[Training] [2023-03-09T20:12:47.316758]   ]
[Training] [2023-03-09T20:12:47.319759]   logger:[
[Training] [2023-03-09T20:12:47.322267]     print_freq: 5
[Training] [2023-03-09T20:12:47.325793]     save_checkpoint_freq: 5
[Training] [2023-03-09T20:12:47.328791]     visuals: ['gen', 'mel']
[Training] [2023-03-09T20:12:47.330792]     visual_debug_rate: 5
[Training] [2023-03-09T20:12:47.334812]     is_mel_spectrogram: True
[Training] [2023-03-09T20:12:47.337830]   ]
[Training] [2023-03-09T20:12:47.340831]   is_train: True
[Training] [2023-03-09T20:12:47.343350]   dist: False
[Training] [2023-03-09T20:12:47.346867]
[Training] [2023-03-09T20:12:47.349866] 23-03-09 20:12:47.042 - INFO: Random seed: 5792
[Training] [2023-03-09T20:12:48.067957] 23-03-09 20:12:48.067 - INFO: Number of training data elements: 131, iters: 2
[Training] [2023-03-09T20:12:48.071956] 23-03-09 20:12:48.067 - INFO: Total epochs needed: 255 for iters 510
[Training] [2023-03-09T20:12:48.074983] 23-03-09 20:12:48.067 - INFO: Number of val images in [validation]: 0
[Training] [2023-03-09T20:12:49.011975] H:\ai-voice-cloning\venv\lib\site-packages\transformers\configuration_utils.py:375: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
[Training] [2023-03-09T20:12:49.016493]   warnings.warn(
[Training] [2023-03-09T20:12:56.239809] Using BitsAndBytes ADAMW optimizations
[Training] [2023-03-09T20:12:56.243815] Disabled distributed training.
[Training] [2023-03-09T20:12:56.247835] Path already exists. Rename it to [./training\Gebbs\finetune_archived_230309-201246]
[Training] [2023-03-09T20:12:56.251838] Loading from ./models/tortoise/dvae.pth
[Training] [2023-03-09T20:12:56.255853] Traceback (most recent call last):
[Training] [2023-03-09T20:12:56.258862]   File "H:\ai-voice-cloning\src\train.py", line 94, in <module>
[Training] [2023-03-09T20:12:56.261863]     train(args.opt, args.launcher)
[Training] [2023-03-09T20:12:56.264860]   File "H:\ai-voice-cloning\src\train.py", line 80, in train
[Training] [2023-03-09T20:12:56.267931]     trainer.init(yaml, opt, launcher)
[Training] [2023-03-09T20:12:56.270929]   File "H:\ai-voice-cloning\./modules/dlas\codes\train.py", line 144, in init
[Training] [2023-03-09T20:12:56.273933]     self.model = ExtensibleTrainer(opt)
[Training] [2023-03-09T20:12:56.276955]   File "H:\ai-voice-cloning\./modules/dlas/codes\trainer\ExtensibleTrainer.py", line 113, in __init__
[Training] [2023-03-09T20:12:56.279955]     s.define_optimizers()
[Training] [2023-03-09T20:12:56.282955]   File "H:\ai-voice-cloning\./modules/dlas/codes\trainer\steps.py", line 186, in define_optimizers
[Training] [2023-03-09T20:12:56.286463]     opt._config = opt_config  # This is a bit seedy, but we will need these configs later.
[Training] [2023-03-09T20:12:56.289472] UnboundLocalError: local variable 'opt' referenced before assignment

I'm getting this error after pulling a new update today. It happens when I click Train.

Owner

I can't really tell exactly what the issue is, since you neglected to include the full training output (the provided configuration is very important), but it's safe to assume you need to remake your training configuration.

mrq added the insufficient info label 2023-03-09 20:20:34 +00:00

Sorry, I think this is what you're after:
Tbh I am using my usual settings that have worked before, but I will look around.

H:\ai-voice-cloning>call .\venv\Scripts\activate.bat
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
['text', 'delimiter', 'emotion', 'prompt', 'voice', 'mic_audio', 'voice_latents_chunks', 'candidates', 'seed', 'num_autoregressive_samples', 'diffusion_iterations', 'temperature', 'diffusion_sampler', 'breathing_room', 'cvvp_weight', 'top_p', 'diffusion_temperature', 'length_penalty', 'repetition_penalty', 'cond_free_k', 'experimentals']
{'text': None, 'delimiter': None, 'emotion': None, 'prompt': None, 'voice': None, 'mic_audio': None, 'voice_latents_chunks': None, 'candidates': None, 'seed': None, 'num_autoregressive_samples': 16, 'diffusion_iterations': 30, 'temperature': 0.8, 'diffusion_sampler': 'DDIM', 'breathing_room': 8, 'cvvp_weight': 0.0, 'top_p': 0.8, 'diffusion_temperature': 1.0, 'length_penalty': 1.0, 'repetition_penalty': 2.0, 'cond_free_k': 2.0, 'experimentals': None}
[None, None, None, None, None, None, None, None, None, 16, 30, 0.8, 'DDIM', 8, 0.0, 0.8, 1.0, 1.0, 2.0, 2.0, None]
Spawning process:  train.bat ./training/Gebbs/train.yaml
[Training] [2023-03-09T20:12:44.314056]
[Training] [2023-03-09T20:12:44.317576] (venv) H:\ai-voice-cloning>call .\venv\Scripts\activate.bat
[Training] [2023-03-09T20:12:46.835789] 23-03-09 20:12:46.835 - INFO:   name: Gebbs
[Training] [2023-03-09T20:12:46.840788]   model: extensibletrainer
[Training] [2023-03-09T20:12:46.843316]   scale: 1
[Training] [2023-03-09T20:12:46.847838]   gpu_ids: [0]
[Training] [2023-03-09T20:12:46.850843]   start_step: 0
[Training] [2023-03-09T20:12:46.854884]   checkpointing_enabled: True
[Training] [2023-03-09T20:12:46.857899]   fp16: False
[Training] [2023-03-09T20:12:46.860898]   bitsandbytes: True
[Training] [2023-03-09T20:12:46.863426]   gpus: 1
[Training] [2023-03-09T20:12:46.866953]   wandb: False
[Training] [2023-03-09T20:12:46.869954]   use_tb_logger: True
[Training] [2023-03-09T20:12:46.872461]   datasets:[
[Training] [2023-03-09T20:12:46.876004]     train:[
[Training] [2023-03-09T20:12:46.879003]       name: training
[Training] [2023-03-09T20:12:46.881003]       n_workers: 2
[Training] [2023-03-09T20:12:46.885040]       batch_size: 128
[Training] [2023-03-09T20:12:46.888063]       mode: paired_voice_audio
[Training] [2023-03-09T20:12:46.891065]       path: ./training/Gebbs/train.txt
[Training] [2023-03-09T20:12:46.895103]       fetcher_mode: ['lj']
[Training] [2023-03-09T20:12:46.898118]       phase: train
[Training] [2023-03-09T20:12:46.901118]       max_wav_length: 255995
[Training] [2023-03-09T20:12:46.903641]       max_text_length: 200
[Training] [2023-03-09T20:12:46.907170]       sample_rate: 22050
[Training] [2023-03-09T20:12:46.910167]       load_conditioning: True
[Training] [2023-03-09T20:12:46.912683]       num_conditioning_candidates: 2
[Training] [2023-03-09T20:12:46.916221]       conditioning_length: 44000
[Training] [2023-03-09T20:12:46.919220]       use_bpe_tokenizer: True
[Training] [2023-03-09T20:12:46.922727]       tokenizer_vocab: ./models/tortoise/bpe_lowercase_asr_256.json
[Training] [2023-03-09T20:12:46.926252]       load_aligned_codes: False
[Training] [2023-03-09T20:12:46.929254]       data_type: img
[Training] [2023-03-09T20:12:46.932260]     ]
[Training] [2023-03-09T20:12:46.935785]     val:[
[Training] [2023-03-09T20:12:46.938786]       name: validation
[Training] [2023-03-09T20:12:46.941784]       n_workers: 2
[Training] [2023-03-09T20:12:46.944808]       batch_size: 0
[Training] [2023-03-09T20:12:46.947823]       mode: paired_voice_audio
[Training] [2023-03-09T20:12:46.950827]       path: ./training/Gebbs/validation.txt
[Training] [2023-03-09T20:12:46.953329]       fetcher_mode: ['lj']
[Training] [2023-03-09T20:12:46.958857]       phase: val
[Training] [2023-03-09T20:12:46.961858]       max_wav_length: 255995
[Training] [2023-03-09T20:12:46.964881]       max_text_length: 200
[Training] [2023-03-09T20:12:46.967890]       sample_rate: 22050
[Training] [2023-03-09T20:12:46.970896]       load_conditioning: True
[Training] [2023-03-09T20:12:46.973400]       num_conditioning_candidates: 2
[Training] [2023-03-09T20:12:46.976941]       conditioning_length: 44000
[Training] [2023-03-09T20:12:46.979936]       use_bpe_tokenizer: True
[Training] [2023-03-09T20:12:46.983457]       tokenizer_vocab: ./models/tortoise/bpe_lowercase_asr_256.json
[Training] [2023-03-09T20:12:46.986958]       load_aligned_codes: False
[Training] [2023-03-09T20:12:46.989965]       data_type: img
[Training] [2023-03-09T20:12:46.992962]     ]
[Training] [2023-03-09T20:12:46.996044]   ]
[Training] [2023-03-09T20:12:46.999052]   steps:[
[Training] [2023-03-09T20:12:47.002053]     gpt_train:[
[Training] [2023-03-09T20:12:47.005067]       training: gpt
[Training] [2023-03-09T20:12:47.008082]       loss_log_buffer: 500
[Training] [2023-03-09T20:12:47.011082]       optimizer: ${optimizer}
[Training] [2023-03-09T20:12:47.014591]       optimizer_params:[
[Training] [2023-03-09T20:12:47.017118]         lr: 1e-05
[Training] [2023-03-09T20:12:47.020114]         weight_decay: 0.01
[Training] [2023-03-09T20:12:47.023114]         beta1: 0.9
[Training] [2023-03-09T20:12:47.026630]         beta2: 0.96
[Training] [2023-03-09T20:12:47.028634]       ]
[Training] [2023-03-09T20:12:47.031636]       clip_grad_eps: 4
[Training] [2023-03-09T20:12:47.034670]       injectors:[
[Training] [2023-03-09T20:12:47.037686]         paired_to_mel:[
[Training] [2023-03-09T20:12:47.040684]           type: torch_mel_spectrogram
[Training] [2023-03-09T20:12:47.044197]           mel_norm_file: ./models/tortoise/clips_mel_norms.pth
[Training] [2023-03-09T20:12:47.047738]           in: wav
[Training] [2023-03-09T20:12:47.050733]           out: paired_mel
[Training] [2023-03-09T20:12:47.052739]         ]
[Training] [2023-03-09T20:12:47.056793]         paired_cond_to_mel:[
[Training] [2023-03-09T20:12:47.059787]           type: for_each
[Training] [2023-03-09T20:12:47.062788]           subtype: torch_mel_spectrogram
[Training] [2023-03-09T20:12:47.065831]           mel_norm_file: ./models/tortoise/clips_mel_norms.pth
[Training] [2023-03-09T20:12:47.068829]           in: conditioning
[Training] [2023-03-09T20:12:47.071836]           out: paired_conditioning_mel
[Training] [2023-03-09T20:12:47.074866]         ]
[Training] [2023-03-09T20:12:47.077877]         to_codes:[
[Training] [2023-03-09T20:12:47.080874]           type: discrete_token
[Training] [2023-03-09T20:12:47.084915]           in: paired_mel
[Training] [2023-03-09T20:12:47.087929]           out: paired_mel_codes
[Training] [2023-03-09T20:12:47.089925]           dvae_config: ./models/tortoise/train_diffusion_vocoder_22k_level.yml
[Training] [2023-03-09T20:12:47.092929]         ]
[Training] [2023-03-09T20:12:47.095983]         paired_fwd_text:[
[Training] [2023-03-09T20:12:47.099983]           type: generator
[Training] [2023-03-09T20:12:47.102982]           generator: gpt
[Training] [2023-03-09T20:12:47.106037]           in: ['paired_conditioning_mel', 'padded_text', 'text_lengths', 'paired_mel_codes', 'wav_lengths']
[Training] [2023-03-09T20:12:47.110031]           out: ['loss_text_ce', 'loss_mel_ce', 'logits']
[Training] [2023-03-09T20:12:47.113032]         ]
[Training] [2023-03-09T20:12:47.116083]       ]
[Training] [2023-03-09T20:12:47.119081]       losses:[
[Training] [2023-03-09T20:12:47.122082]         text_ce:[
[Training] [2023-03-09T20:12:47.124594]           type: direct
[Training] [2023-03-09T20:12:47.127139]           weight: 0.9
[Training] [2023-03-09T20:12:47.132134]           key: loss_text_ce
[Training] [2023-03-09T20:12:47.135654]         ]
[Training] [2023-03-09T20:12:47.138653]         mel_ce:[
[Training] [2023-03-09T20:12:47.142653]           type: direct
[Training] [2023-03-09T20:12:47.150170]           weight: 1
[Training] [2023-03-09T20:12:47.153170]           key: loss_mel_ce
[Training] [2023-03-09T20:12:47.155197]         ]
[Training] [2023-03-09T20:12:47.158207]       ]
[Training] [2023-03-09T20:12:47.161207]     ]
[Training] [2023-03-09T20:12:47.164714]   ]
[Training] [2023-03-09T20:12:47.167247]   networks:[
[Training] [2023-03-09T20:12:47.170248]     gpt:[
[Training] [2023-03-09T20:12:47.173247]       type: generator
[Training] [2023-03-09T20:12:47.178348]       which_model_G: unified_voice2
[Training] [2023-03-09T20:12:47.182345]       kwargs:[
[Training] [2023-03-09T20:12:47.184869]         layers: 30
[Training] [2023-03-09T20:12:47.187385]         model_dim: 1024
[Training] [2023-03-09T20:12:47.190394]         heads: 16
[Training] [2023-03-09T20:12:47.195420]         max_text_tokens: 402
[Training] [2023-03-09T20:12:47.198438]         max_mel_tokens: 604
[Training] [2023-03-09T20:12:47.201438]         max_conditioning_inputs: 2
[Training] [2023-03-09T20:12:47.203438]         mel_length_compression: 1024
[Training] [2023-03-09T20:12:47.207483]         number_text_tokens: 256
[Training] [2023-03-09T20:12:47.210487]         number_mel_codes: 8194
[Training] [2023-03-09T20:12:47.213482]         start_mel_token: 8192
[Training] [2023-03-09T20:12:47.216537]         stop_mel_token: 8193
[Training] [2023-03-09T20:12:47.219533]         start_text_token: 255
[Training] [2023-03-09T20:12:47.222534]         train_solo_embeddings: False
[Training] [2023-03-09T20:12:47.225599]         use_mel_codes_as_input: True
[Training] [2023-03-09T20:12:47.228609]         checkpointing: True
[Training] [2023-03-09T20:12:47.231609]         tortoise_compat: True
[Training] [2023-03-09T20:12:47.234609]       ]
[Training] [2023-03-09T20:12:47.237931]     ]
[Training] [2023-03-09T20:12:47.242929]   ]
[Training] [2023-03-09T20:12:47.245439]   path:[
[Training] [2023-03-09T20:12:47.248449]     strict_load: True
[Training] [2023-03-09T20:12:47.251958]     pretrain_model_gpt: H:\ai-voice-cloning\models\tortoise\autoregressive.pth
[Training] [2023-03-09T20:12:47.254969]     root: ./
[Training] [2023-03-09T20:12:47.258247]     experiments_root: ./training\Gebbs\finetune
[Training] [2023-03-09T20:12:47.260250]     models: ./training\Gebbs\finetune\models
[Training] [2023-03-09T20:12:47.263773]     training_state: ./training\Gebbs\finetune\training_state
[Training] [2023-03-09T20:12:47.267292]     log: ./training\Gebbs\finetune
[Training] [2023-03-09T20:12:47.269294]     val_images: ./training\Gebbs\finetune\val_images
[Training] [2023-03-09T20:12:47.274812]   ]
[Training] [2023-03-09T20:12:47.277925]   train:[
[Training] [2023-03-09T20:12:47.280926]     niter: 510
[Training] [2023-03-09T20:12:47.283441]     warmup_iter: -1
[Training] [2023-03-09T20:12:47.286968]     mega_batch_factor: 16
[Training] [2023-03-09T20:12:47.289966]     val_freq: 5
[Training] [2023-03-09T20:12:47.293490]     ema_enabled: False
[Training] [2023-03-09T20:12:47.296017]     default_lr_scheme: MultiStepLR
[Training] [2023-03-09T20:12:47.299018]     gen_lr_steps: [9, 18, 25, 33]
[Training] [2023-03-09T20:12:47.302012]     lr_gamma: 0.5
[Training] [2023-03-09T20:12:47.304700]   ]
[Training] [2023-03-09T20:12:47.307708]   eval:[
[Training] [2023-03-09T20:12:47.310714]     pure: True
[Training] [2023-03-09T20:12:47.313241]     output_state: gen
[Training] [2023-03-09T20:12:47.316758]   ]
[Training] [2023-03-09T20:12:47.319759]   logger:[
[Training] [2023-03-09T20:12:47.322267]     print_freq: 5
[Training] [2023-03-09T20:12:47.325793]     save_checkpoint_freq: 5
[Training] [2023-03-09T20:12:47.328791]     visuals: ['gen', 'mel']
[Training] [2023-03-09T20:12:47.330792]     visual_debug_rate: 5
[Training] [2023-03-09T20:12:47.334812]     is_mel_spectrogram: True
[Training] [2023-03-09T20:12:47.337830]   ]
[Training] [2023-03-09T20:12:47.340831]   is_train: True
[Training] [2023-03-09T20:12:47.343350]   dist: False
[Training] [2023-03-09T20:12:47.346867]
[Training] [2023-03-09T20:12:47.349866] 23-03-09 20:12:47.042 - INFO: Random seed: 5792
[Training] [2023-03-09T20:12:48.067957] 23-03-09 20:12:48.067 - INFO: Number of training data elements: 131, iters: 2
[Training] [2023-03-09T20:12:48.071956] 23-03-09 20:12:48.067 - INFO: Total epochs needed: 255 for iters 510
[Training] [2023-03-09T20:12:48.074983] 23-03-09 20:12:48.067 - INFO: Number of val images in [validation]: 0
[Training] [2023-03-09T20:12:49.011975] H:\ai-voice-cloning\venv\lib\site-packages\transformers\configuration_utils.py:375: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
[Training] [2023-03-09T20:12:49.016493]   warnings.warn(
[Training] [2023-03-09T20:12:56.239809] Using BitsAndBytes ADAMW optimizations
[Training] [2023-03-09T20:12:56.243815] Disabled distributed training.
[Training] [2023-03-09T20:12:56.247835] Path already exists. Rename it to [./training\Gebbs\finetune_archived_230309-201246]
[Training] [2023-03-09T20:12:56.251838] Loading from ./models/tortoise/dvae.pth
[Training] [2023-03-09T20:12:56.255853] Traceback (most recent call last):
[Training] [2023-03-09T20:12:56.258862]   File "H:\ai-voice-cloning\src\train.py", line 94, in <module>
[Training] [2023-03-09T20:12:56.261863]     train(args.opt, args.launcher)
[Training] [2023-03-09T20:12:56.264860]   File "H:\ai-voice-cloning\src\train.py", line 80, in train
[Training] [2023-03-09T20:12:56.267931]     trainer.init(yaml, opt, launcher)
[Training] [2023-03-09T20:12:56.270929]   File "H:\ai-voice-cloning\./modules/dlas\codes\train.py", line 144, in init
[Training] [2023-03-09T20:12:56.273933]     self.model = ExtensibleTrainer(opt)
[Training] [2023-03-09T20:12:56.276955]   File "H:\ai-voice-cloning\./modules/dlas/codes\trainer\ExtensibleTrainer.py", line 113, in __init__
[Training] [2023-03-09T20:12:56.279955]     s.define_optimizers()
[Training] [2023-03-09T20:12:56.282955]   File "H:\ai-voice-cloning\./modules/dlas/codes\trainer\steps.py", line 186, in define_optimizers
[Training] [2023-03-09T20:12:56.286463]     opt._config = opt_config  # This is a bit seedy, but we will need these configs later.
[Training] [2023-03-09T20:12:56.289472] UnboundLocalError: local variable 'opt' referenced before assignment
Owner

> optimizer: ${optimizer}

Yeah, you'll need to regenerate your training configuration. I'd say you can manually edit that to "adamw", but I'm certain there were other minor bugs around at the time the configuration was spitting that out.

And make extra sure you actually did update with the update script and not a simple git pull (although it looks like that's not necessary, since your DLAS is in the modules folder).
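
For context, here is a minimal sketch (an assumed shape, not the actual DLAS `define_optimizers`) of why the unsubstituted `${optimizer}` placeholder ends in the UnboundLocalError from the traceback: if the optimizer name matches none of the known branches, `opt` is never assigned before the `opt._config = opt_config` line.

```python
from types import SimpleNamespace

# Minimal sketch of the failure mode (not the real DLAS code): when the YAML
# still contains the literal placeholder "${optimizer}" instead of a real
# optimizer name, no branch assigns `opt`, so the attribute write below
# raises UnboundLocalError.
def define_optimizers(opt_config):
    optimizer_name = opt_config['optimizer']

    if optimizer_name == 'adamw':
        opt = SimpleNamespace()      # stand-in for torch.optim.AdamW(...)
    elif optimizer_name == 'adamw_zero':
        opt = SimpleNamespace()      # stand-in for the distributed variant

    opt._config = opt_config         # UnboundLocalError if no branch matched
    return opt

define_optimizers({'optimizer': '${optimizer}'})  # reproduces the crash
```

Regenerating the configuration (or hand-editing the `optimizer:` line in the generated train.yaml to `adamw`) gives the function a name it actually recognizes.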


> > optimizer: ${optimizer}
>
> Yeah, you'll need to regenerate your training configuration. I'd say you can manually edit that to "adamw", but I'm certain there were other minor bugs around at the time the configuration was spitting that out.
>
> And make extra sure you actually did update with the update script and not a simple git pull (although it looks like that's not necessary, since your DLAS is in the modules folder).

Hey.

I tried a new training config and deleted the previous one, but got the same error as above. One thing I noticed is that there is a "Prepare Validation" button now; when I clicked it after transcribing I got 0 culled - not sure if that means anything.

Otherwise, I edited the training .yaml and put "adamw" as you said, and training now works :)
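
As a quick way to confirm whether a generated config still carries the placeholder before launching training, here is a small hypothetical check (the file path and the steps/gpt_train layout are taken from the log above, assuming the generated train.yaml mirrors the printed config and PyYAML is available in the project's venv):

```python
import yaml  # PyYAML; assumed to be installed in the project's venv

# Hypothetical helper: report whether the generated training config still
# carries the literal "${optimizer}" placeholder discussed above.
with open('./training/Gebbs/train.yaml', 'r', encoding='utf-8') as f:
    cfg = yaml.safe_load(f)

opt_name = cfg['steps']['gpt_train']['optimizer']
if opt_name == '${optimizer}':
    print("Placeholder not substituted - edit this to 'adamw' (single GPU).")
else:
    print(f"optimizer is set to: {opt_name}")
```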


I am also having the same issue, on a fresh forced-update and with all new data and training config.

> [Training] [2023-03-09T22:27:59.026504] File "C:\Users\nirin\Desktop\AIVoice\ai-voice-cloning\./modules/dlas/codes\trainer\steps.py", line 186, in define_optimizers
> [Training] [2023-03-09T22:27:59.027505] opt._config = opt_config # This is a bit seedy, but we will need these configs later.
> [Training] [2023-03-09T22:27:59.030013] UnboundLocalError: local variable 'opt' referenced before assignment


Do I have to manually edit

> optimizer: ${optimizer} # this should be adamw_zero if you're using distributed training

To

optimizer: adamw

or should it be adamw_zero (which the comment seems to imply)?

Owner

adamw

Do not use adamw_zero; it will keep your learning rate fixed and not decay, as I've learned the hard way. A lot of the "do this if you're distributing" (multi-GPU) comments don't seem necessary desu.
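
For reference, a rough PyTorch illustration (not the DLAS trainer itself) of the decay the config above asks for: plain AdamW with the MultiStepLR settings (gen_lr_steps: [9, 18, 25, 33], lr_gamma: 0.5) halves the learning rate as each milestone is crossed, which is the behaviour lost if the scheduler never takes effect.

```python
import torch

# Rough sketch only (not the DLAS trainer): the intended LR schedule from the
# config above - AdamW with MultiStepLR milestones [9, 18, 25, 33], gamma 0.5.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-05,
                              betas=(0.9, 0.96), weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[9, 18, 25, 33], gamma=0.5)

for step in range(35):
    optimizer.step()       # a real training step would compute a loss first
    scheduler.step()
    print(step, optimizer.param_groups[0]['lr'])  # halves at 9, 18, 25, 33
```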

> I am also having the same issue, on a fresh forced-update and with all new data and training config.

Wonder what broke then, as I haven't had issues on my three machines. I'll check a fresh install when I get the chance.


> adamw
>
> Do not use adamw_zero; it will keep your learning rate fixed and not decay, as I've learned the hard way. A lot of the "do this if you're distributing" (multi-GPU) comments don't seem necessary desu.
>
> > I am also having the same issue, on a fresh forced-update and with all new data and training config.
>
> Wonder what broke then, as I haven't had issues on my three machines. I'll check a fresh install when I get the chance.

I didn't do a complete fresh install; I did the forced update and then deleted the tortoise and dvae folders (as per the other issue message you posted about the new update).

If your fresh install has no issues, let me know, and I'll just wipe everything and do a fresh install - no big deal.

Owner

Found it. I commented out what I thought was an override. Remedied in commit eb1551ee92.

mrq closed this issue 2023-03-09 23:06:10 +00:00
Reference: mrq/ai-voice-cloning#102