Error when I click train Exception: Do not invoke this from an import #134

Closed
opened 2023-03-14 17:55:13 +00:00 by SyntheticVoices · 1 comment

H:\ai-voice-cloning>call .\venv\Scripts\activate.bat
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Spawning process:  train.bat ./training/harry/train.yaml
[Training] [2023-03-14T17:53:58.300854]
[Training] [2023-03-14T17:53:58.304382] (venv) H:\ai-voice-cloning>call .\venv\Scripts\activate.bat
[Training] [2023-03-14T17:53:59.698972] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-14T17:54:01.164938] 23-03-14 17:54:01.164 - INFO:   name: harry
[Training] [2023-03-14T17:54:01.168938]   model: extensibletrainer
[Training] [2023-03-14T17:54:01.172460]   scale: 1
[Training] [2023-03-14T17:54:01.176461]   gpu_ids: [0]
[Training] [2023-03-14T17:54:01.179458]   start_step: 0
[Training] [2023-03-14T17:54:01.182982]   checkpointing_enabled: True
[Training] [2023-03-14T17:54:01.186986]   fp16: False
[Training] [2023-03-14T17:54:01.189980]   bitsandbytes: False
[Training] [2023-03-14T17:54:01.193496]   gpus: 1
[Training] [2023-03-14T17:54:01.196496]   datasets:[
[Training] [2023-03-14T17:54:01.199497]     train:[
[Training] [2023-03-14T17:54:01.202006]       name: training
[Training] [2023-03-14T17:54:01.204014]       n_workers: 2
[Training] [2023-03-14T17:54:01.207532]       batch_size: 128
[Training] [2023-03-14T17:54:01.210530]       mode: paired_voice_audio
[Training] [2023-03-14T17:54:01.214047]       path: ./training/harry/train.txt
[Training] [2023-03-14T17:54:01.217048]       fetcher_mode: ['lj']
[Training] [2023-03-14T17:54:01.220047]       phase: train
[Training] [2023-03-14T17:54:01.222555]       max_wav_length: 255995
[Training] [2023-03-14T17:54:01.225565]       max_text_length: 200
[Training] [2023-03-14T17:54:01.228565]       sample_rate: 22050
[Training] [2023-03-14T17:54:01.231564]       load_conditioning: True
[Training] [2023-03-14T17:54:01.235083]       num_conditioning_candidates: 2
[Training] [2023-03-14T17:54:01.238083]       conditioning_length: 44000
[Training] [2023-03-14T17:54:01.241083]       use_bpe_tokenizer: True
[Training] [2023-03-14T17:54:01.243590]       tokenizer_vocab: ./models/tortoise/bpe_lowercase_asr_256.json
[Training] [2023-03-14T17:54:01.246600]       load_aligned_codes: False
[Training] [2023-03-14T17:54:01.249600]       data_type: img
[Training] [2023-03-14T17:54:01.252600]     ]
[Training] [2023-03-14T17:54:01.255118]     val:[
[Training] [2023-03-14T17:54:01.260122]       name: validation
[Training] [2023-03-14T17:54:01.263116]       n_workers: 2
[Training] [2023-03-14T17:54:01.265636]       batch_size: 8
[Training] [2023-03-14T17:54:01.268636]       mode: paired_voice_audio
[Training] [2023-03-14T17:54:01.271635]       path: ./training/harry/validation.txt
[Training] [2023-03-14T17:54:01.273636]       fetcher_mode: ['lj']
[Training] [2023-03-14T17:54:01.277156]       phase: val
[Training] [2023-03-14T17:54:01.280155]       max_wav_length: 255995
[Training] [2023-03-14T17:54:01.284159]       max_text_length: 200
[Training] [2023-03-14T17:54:01.287680]       sample_rate: 22050
[Training] [2023-03-14T17:54:01.290681]       load_conditioning: True
[Training] [2023-03-14T17:54:01.293681]       num_conditioning_candidates: 2
[Training] [2023-03-14T17:54:01.297199]       conditioning_length: 44000
[Training] [2023-03-14T17:54:01.300204]       use_bpe_tokenizer: True
[Training] [2023-03-14T17:54:01.303199]       tokenizer_vocab: ./models/tortoise/bpe_lowercase_asr_256.json
[Training] [2023-03-14T17:54:01.306230]       load_aligned_codes: False
[Training] [2023-03-14T17:54:01.309242]       data_type: img
[Training] [2023-03-14T17:54:01.312243]     ]
[Training] [2023-03-14T17:54:01.314241]   ]
[Training] [2023-03-14T17:54:01.317759]   steps:[
[Training] [2023-03-14T17:54:01.320759]     gpt_train:[
[Training] [2023-03-14T17:54:01.323760]       training: gpt
[Training] [2023-03-14T17:54:01.325759]       loss_log_buffer: 500
[Training] [2023-03-14T17:54:01.329283]       optimizer: adamw
[Training] [2023-03-14T17:54:01.332281]       optimizer_params:[
[Training] [2023-03-14T17:54:01.335279]         lr: 9e-05
[Training] [2023-03-14T17:54:01.338800]         weight_decay: 0.01
[Training] [2023-03-14T17:54:01.341804]         beta1: 0.9
[Training] [2023-03-14T17:54:01.343804]         beta2: 0.96
[Training] [2023-03-14T17:54:01.347799]       ]
[Training] [2023-03-14T17:54:01.350312]       clip_grad_eps: 4
[Training] [2023-03-14T17:54:01.354313]       injectors:[
[Training] [2023-03-14T17:54:01.356314]         paired_to_mel:[
[Training] [2023-03-14T17:54:01.360833]           type: torch_mel_spectrogram
[Training] [2023-03-14T17:54:01.362833]           mel_norm_file: ./models/tortoise/clips_mel_norms.pth
[Training] [2023-03-14T17:54:01.365832]           in: wav
[Training] [2023-03-14T17:54:01.369342]           out: paired_mel
[Training] [2023-03-14T17:54:01.372351]         ]
[Training] [2023-03-14T17:54:01.375352]         paired_cond_to_mel:[
[Training] [2023-03-14T17:54:01.378365]           type: for_each
[Training] [2023-03-14T17:54:01.380864]           subtype: torch_mel_spectrogram
[Training] [2023-03-14T17:54:01.383863]           mel_norm_file: ./models/tortoise/clips_mel_norms.pth
[Training] [2023-03-14T17:54:01.387863]           in: conditioning
[Training] [2023-03-14T17:54:01.391379]           out: paired_conditioning_mel
[Training] [2023-03-14T17:54:01.394380]         ]
[Training] [2023-03-14T17:54:01.397381]         to_codes:[
[Training] [2023-03-14T17:54:01.400889]           type: discrete_token
[Training] [2023-03-14T17:54:01.402898]           in: paired_mel
[Training] [2023-03-14T17:54:01.405900]           out: paired_mel_codes
[Training] [2023-03-14T17:54:01.408942]           dvae_config: ./models/tortoise/train_diffusion_vocoder_22k_level.yml
[Training] [2023-03-14T17:54:01.412461]         ]
[Training] [2023-03-14T17:54:01.414462]         paired_fwd_text:[
[Training] [2023-03-14T17:54:01.417465]           type: generator
[Training] [2023-03-14T17:54:01.420972]           generator: gpt
[Training] [2023-03-14T17:54:01.423980]           in: ['paired_conditioning_mel', 'padded_text', 'text_lengths', 'paired_mel_codes', 'wav_lengths']
[Training] [2023-03-14T17:54:01.426980]           out: ['loss_text_ce', 'loss_mel_ce', 'logits']
[Training] [2023-03-14T17:54:01.429979]         ]
[Training] [2023-03-14T17:54:01.432495]       ]
[Training] [2023-03-14T17:54:01.435498]       losses:[
[Training] [2023-03-14T17:54:01.438497]         text_ce:[
[Training] [2023-03-14T17:54:01.440502]           type: direct
[Training] [2023-03-14T17:54:01.445020]           weight: 0.01
[Training] [2023-03-14T17:54:01.447020]           key: loss_text_ce
[Training] [2023-03-14T17:54:01.450018]         ]
[Training] [2023-03-14T17:54:01.453020]         mel_ce:[
[Training] [2023-03-14T17:54:01.457019]           type: direct
[Training] [2023-03-14T17:54:01.460019]           weight: 1
[Training] [2023-03-14T17:54:01.463024]           key: loss_mel_ce
[Training] [2023-03-14T17:54:01.466019]         ]
[Training] [2023-03-14T17:54:01.468535]       ]
[Training] [2023-03-14T17:54:01.471545]     ]
[Training] [2023-03-14T17:54:01.474543]   ]
[Training] [2023-03-14T17:54:01.477545]   networks:[
[Training] [2023-03-14T17:54:01.481069]     gpt:[
[Training] [2023-03-14T17:54:01.485067]       type: generator
[Training] [2023-03-14T17:54:01.488064]       which_model_G: unified_voice2
[Training] [2023-03-14T17:54:01.490583]       kwargs:[
[Training] [2023-03-14T17:54:01.493584]         layers: 30
[Training] [2023-03-14T17:54:01.497583]         model_dim: 1024
[Training] [2023-03-14T17:54:01.500093]         heads: 16
[Training] [2023-03-14T17:54:01.503103]         max_text_tokens: 402
[Training] [2023-03-14T17:54:01.506610]         max_mel_tokens: 604
[Training] [2023-03-14T17:54:01.508619]         max_conditioning_inputs: 2
[Training] [2023-03-14T17:54:01.512138]         mel_length_compression: 1024
[Training] [2023-03-14T17:54:01.515135]         number_text_tokens: 256
[Training] [2023-03-14T17:54:01.518138]         number_mel_codes: 8194
[Training] [2023-03-14T17:54:01.521655]         start_mel_token: 8192
[Training] [2023-03-14T17:54:01.524656]         stop_mel_token: 8193
[Training] [2023-03-14T17:54:01.527656]         start_text_token: 255
[Training] [2023-03-14T17:54:01.529653]         train_solo_embeddings: False
[Training] [2023-03-14T17:54:01.533173]         use_mel_codes_as_input: True
[Training] [2023-03-14T17:54:01.537175]         checkpointing: True
[Training] [2023-03-14T17:54:01.540173]         tortoise_compat: True
[Training] [2023-03-14T17:54:01.543689]       ]
[Training] [2023-03-14T17:54:01.546690]     ]
[Training] [2023-03-14T17:54:01.548690]   ]
[Training] [2023-03-14T17:54:01.552198]   path:[
[Training] [2023-03-14T17:54:01.555208]     strict_load: True
[Training] [2023-03-14T17:54:01.558205]     pretrain_model_gpt: H:\ai-voice-cloning\models\tortoise\autoregressive.pth
[Training] [2023-03-14T17:54:01.561208]     root: ./
[Training] [2023-03-14T17:54:01.563724]     experiments_root: ./training\harry\finetune
[Training] [2023-03-14T17:54:01.566722]     models: ./training\harry\finetune\models
[Training] [2023-03-14T17:54:01.569723]     training_state: ./training\harry\finetune\training_state
[Training] [2023-03-14T17:54:01.573233]     log: ./training\harry\finetune
[Training] [2023-03-14T17:54:01.576243]     val_images: ./training\harry\finetune\val_images
[Training] [2023-03-14T17:54:01.579242]   ]
[Training] [2023-03-14T17:54:01.582243]   train:[
[Training] [2023-03-14T17:54:01.586762]     niter: 440
[Training] [2023-03-14T17:54:01.588762]     warmup_iter: -1
[Training] [2023-03-14T17:54:01.592760]     mega_batch_factor: 16
[Training] [2023-03-14T17:54:01.595277]     val_freq: 22
[Training] [2023-03-14T17:54:01.598277]     ema_enabled: False
[Training] [2023-03-14T17:54:01.601275]     default_lr_scheme: MultiStepLR
[Training] [2023-03-14T17:54:01.604788]     gen_lr_steps: [8, 16, 36, 72, 100, 132, 200]
[Training] [2023-03-14T17:54:01.607317]     lr_gamma: 0.5
[Training] [2023-03-14T17:54:01.610317]   ]
[Training] [2023-03-14T17:54:01.613316]   eval:[
[Training] [2023-03-14T17:54:01.616836]     pure: True
[Training] [2023-03-14T17:54:01.619835]     output_state: gen
[Training] [2023-03-14T17:54:01.621835]   ]
[Training] [2023-03-14T17:54:01.625342]   logger:[
[Training] [2023-03-14T17:54:01.628352]     save_checkpoint_freq: 22
[Training] [2023-03-14T17:54:01.631349]     visuals: ['gen', 'mel']
[Training] [2023-03-14T17:54:01.634352]     visual_debug_rate: 22
[Training] [2023-03-14T17:54:01.637870]     is_mel_spectrogram: True
[Training] [2023-03-14T17:54:01.640870]   ]
[Training] [2023-03-14T17:54:01.643874]   is_train: True
[Training] [2023-03-14T17:54:01.647395]   dist: False
[Training] [2023-03-14T17:54:01.650395]
[Training] [2023-03-14T17:54:01.653400] 23-03-14 17:54:01.164 - INFO: Random seed: 1208
[Training] [2023-03-14T17:54:02.136769] 23-03-14 17:54:02.136 - INFO: Number of training data elements: 572, iters: 5
[Training] [2023-03-14T17:54:02.140768] 23-03-14 17:54:02.136 - INFO: Total epochs needed: 88 for iters 440
[Training] [2023-03-14T17:54:02.144277] 23-03-14 17:54:02.137 - INFO: Number of val images in [validation]: 46
[Training] [2023-03-14T17:54:03.085522] H:\ai-voice-cloning\venv\lib\site-packages\transformers\configuration_utils.py:375: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
[Training] [2023-03-14T17:54:03.090522]   warnings.warn(
[Training] [2023-03-14T17:54:09.585474] 23-03-14 17:54:09.585 - INFO: Loading model for [H:\ai-voice-cloning\models\tortoise\autoregressive.pth]
[Training] [2023-03-14T17:54:10.450064] 23-03-14 17:54:10.437 - INFO: Start training from epoch: 0, iter: 0
[Training] [2023-03-14T17:54:11.793737] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-14T17:54:11.958742] Traceback (most recent call last):
[Training] [2023-03-14T17:54:11.959741]   File "<string>", line 1, in <module>
[Training] [2023-03-14T17:54:11.959741]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
[Training] [2023-03-14T17:54:11.960742]     exitcode = _main(fd, parent_sentinel)
[Training] [2023-03-14T17:54:11.960742]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 125, in _main
[Training] [2023-03-14T17:54:11.961743]     prepare(preparation_data)
[Training] [2023-03-14T17:54:11.961743]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 236, in prepare
[Training] [2023-03-14T17:54:11.961743]     _fixup_main_from_path(data['init_main_from_path'])
[Training] [2023-03-14T17:54:11.962741]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
[Training] [2023-03-14T17:54:11.962741]     main_content = runpy.run_path(main_path,
[Training] [2023-03-14T17:54:11.962741]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 289, in run_path
[Training] [2023-03-14T17:54:11.964248]     return _run_module_code(code, init_globals, run_name,
[Training] [2023-03-14T17:54:11.964248]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 96, in _run_module_code
[Training] [2023-03-14T17:54:11.965254]     _run_code(code, mod_globals, init_globals,
[Training] [2023-03-14T17:54:11.965254]   File "C:\Users\Ali\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
[Training] [2023-03-14T17:54:11.965254]     exec(code, run_globals)
[Training] [2023-03-14T17:54:11.966254]   File "H:\ai-voice-cloning\src\train.py", line 11, in <module>
[Training] [2023-03-14T17:54:11.966254]     raise Exception("Do not invoke this from an import")
[Training] [2023-03-14T17:54:11.967255] Exception: Do not invoke this from an import

I am on the latest pull
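The traceback shows what is going wrong: Windows has no `fork`, so Python's `multiprocessing` uses the `spawn` start method, which re-imports the entry script (`src/train.py`) in every worker process. That re-import trips the guard at `train.py` line 11, which raises "Do not invoke this from an import". The standard fix is to put all startup code behind an `if __name__ == "__main__":` guard. A minimal sketch of the pattern (hypothetical, not the repo's actual `train.py`):

```python
import multiprocessing as mp

def worker(n):
    # Runs in a child process. Under the Windows "spawn" start method,
    # this module is re-imported in each worker, so any unguarded
    # top-level code executes again in every child.
    return n * n

if __name__ == "__main__":
    # Without this guard, the re-import on Windows re-executes the
    # training startup code, which is exactly what the raised
    # "Do not invoke this from an import" exception is catching.
    with mp.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]
```

On Linux (and WSL2) the default `fork` start method copies the already-initialized process instead of re-importing, which is why the same commit works there.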


Can confirm latest commit is broken on Windows but working on faux Linux (WSL2 on Win10).
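That split is consistent with the start-method difference: WSL2 defaults to `fork` (no re-import), while native Windows must use `spawn`. Until the code is fixed, a commonly suggested stopgap (assuming the generated `train.yaml` feeds these values to PyTorch `DataLoader` workers, which this log suggests via `n_workers: 2`) is to disable worker subprocesses so nothing gets spawned:

```yaml
datasets:
  train:
    n_workers: 0   # load data in-process; avoids the spawn re-import on Windows
  val:
    n_workers: 0
```

This trades data-loading parallelism for compatibility, so expect slower epochs.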

mrq referenced this issue from a commit 2023-03-14 18:53:03 +00:00
mrq closed this issue 2023-03-14 18:53:03 +00:00
Reference: mrq/ai-voice-cloning#134