utf-8 codec can't decode byte 0x81 in position 2 #166

Open
opened 2023-03-22 05:48:49 +00:00 by sampleuser1 · 14 comments

Getting the following error when trying to start training:

  File "C:\Users\Dominik\ai-voice-cloning\src\utils.py", line 1050, in run_training
    for line in iter(training_state.process.stdout.readline, ""):
  File "C:\Users\Dominik\anaconda3\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 2: invalid start byte

Likely invalid UTF-8 characters in your train.txt or validation.txt files. Are you training a language other than English?
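One way to check that diagnosis is to scan each line of the dataset files in binary mode and report any byte that fails strict UTF-8 decoding. A sketch, assuming the LJSpeech-style `train.txt`/`validation.txt` files sit in the working directory:

```python
import os

# Sketch: report bytes in a dataset file that are not valid UTF-8.
# The file names follow this thread; adjust the paths as needed.
def find_invalid_utf8(path):
    """Return (line_number, byte_offset, bad_byte) for each undecodable line."""
    bad = []
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                raw.decode("utf-8")
            except UnicodeDecodeError as e:
                bad.append((lineno, e.start, raw[e.start]))
    return bad

for name in ("train.txt", "validation.txt"):
    if os.path.exists(name):
        for lineno, offset, byte in find_invalid_utf8(name):
            print(f"{name}: line {lineno}, offset {offset}: byte 0x{byte:02x}")
```

If this prints nothing for both files, the bad byte is coming from somewhere else.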

Author

Nope, currently I'm training only English; I'll try German later. However, the error appears so quickly that I don't know whether it has even started reading the train.txt or validation.txt file.

Here's the full output when starting training; maybe there's something else in there.

Spawning process:  train.bat ./training/ciritestsinglelines/train.yaml
[Training] [2023-03-23T01:40:02.630670]
[Training] [2023-03-23T01:40:02.633855] (venv) C:\Users\Dominik\ai-voice-cloning>call .\venv\Scripts\activate.bat
[Training] [2023-03-23T01:40:03.876553] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-23T01:40:04.024540] Traceback (most recent call last):
[Training] [2023-03-23T01:40:04.028554]   File "C:\Users\Dominik\ai-voice-cloning\src\train.py", line 61, in <module>
[Training] [2023-03-23T01:40:04.031057]     from dlas import train as tr
[Training] [2023-03-23T01:40:04.035070] ModuleNotFoundError: No module named 'dlas'
Traceback (most recent call last):
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 898, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\utils.py", line 549, in async_iteration
    return next(iterator)
  File "C:\Users\Dominik\ai-voice-cloning\src\utils.py", line 1050, in run_training
    for line in iter(training_state.process.stdout.readline, ""):
  File "C:\Users\Dominik\anaconda3\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 2: invalid start byte
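For what it's worth, this traceback shows the exception is raised while the web UI reads the trainer's stdout, not while reading the dataset itself: `utils.py` iterates over `process.stdout.readline` with a strict UTF-8 decoder, and a Windows subprocess can emit code-page bytes such as 0x81. A minimal sketch of a more tolerant read loop (an assumption about a possible workaround, not the project's actual code) passes `errors="replace"` to `Popen`:

```python
import subprocess
import sys

# Sketch: read a child process's stdout with a tolerant UTF-8 decoder so a
# stray byte like 0x81 becomes U+FFFD instead of raising UnicodeDecodeError.
# The command below is a placeholder for the real `train.bat <yaml>` call.
process = subprocess.Popen(
    [sys.executable, "-c", "print('training output')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    encoding="utf-8",
    errors="replace",   # the key difference from a strict reader
)
for line in iter(process.stdout.readline, ""):
    print(line, end="")
process.wait()
```

With `errors="replace"`, an undecodable byte shows up as the replacement character in the log instead of killing the read loop.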
Author

If helpful, I could also post the train.txt and validation.txt, but I can't imagine what the non-UTF-8 character would be.


If you have Notepad++, you can open those two files, go to Encoding > Convert to UTF-8, save them, and see if that makes any difference.
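The same conversion can be done without Notepad++. A small sketch, under the assumption that it's acceptable to replace any undecodable byte with U+FFFD rather than guess the original code page:

```python
import os

# Sketch: rewrite a text file as strict UTF-8, replacing any undecodable
# byte with U+FFFD so a strict UTF-8 reader no longer crashes on it.
# NOTE: this discards the original character (e.g. 0x81); it does not try
# to guess which legacy code page produced it.
def force_utf8(path):
    with open(path, "rb") as f:
        data = f.read()
    text = data.decode("utf-8", errors="replace")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

for name in ("train.txt", "validation.txt"):
    if os.path.exists(name):
        force_utf8(name)
```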

Author

Tried it, same outcome with the same error.


Somehow I missed the `[Training] [2023-03-23T01:40:04.035070] ModuleNotFoundError: No module named 'dlas'` bit above. You might need to re-run the setup script. If that doesn't fix it, I could try training with your dataset if it's small enough to upload.

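A quick sanity check before re-running setup is to ask Python whether the `dlas` package from the traceback is importable at all. A sketch (run it in the project's venv):

```python
import importlib.util

# Sketch: check whether the `dlas` package named in the traceback is
# importable in the current environment; if not, the setup script
# likely needs to be re-run.
spec = importlib.util.find_spec("dlas")
if spec is None:
    print("dlas is NOT importable -> re-run the setup script")
else:
    print("dlas found at", spec.origin)
```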
Author

Looks like it's working currently; at least a new, smaller sample worked (with only around 30 WAVs).
What I did was run the setup again (setup-cuda.bat) and then update.bat; it downloaded some new files, so maybe that's what fixed it. It's working for now; I'll write again if everything is OK.

Author

Well, it doesn't, at least not now. The first try worked, but when I tried again, the UTF-8 error now comes a bit later in the process, right before training begins:

Spawning process:  train.bat ./training/ciritestsinglelines/train.yaml
[Training] [2023-03-26T08:45:24.653273]
[Training] [2023-03-26T08:45:24.656283] (venv) C:\Users\Dominik\ai-voice-cloning>call .\venv\Scripts\activate.bat
[Training] [2023-03-26T08:45:25.886732] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-26T08:45:26.815762] 23-03-26 08:45:26.815 - INFO:   name: ciritestsinglelines
[Training] [2023-03-26T08:45:26.819776]   model: extensibletrainer
[Training] [2023-03-26T08:45:26.821783]   scale: 1
[Training] [2023-03-26T08:45:26.825797]   gpu_ids: [0]
[Training] [2023-03-26T08:45:26.827804]   start_step: 0
[Training] [2023-03-26T08:45:26.830814]   checkpointing_enabled: True
[Training] [2023-03-26T08:45:26.833825]   fp16: False
[Training] [2023-03-26T08:45:26.835832]   bitsandbytes: True
[Training] [2023-03-26T08:45:26.838842]   gpus: 1
[Training] [2023-03-26T08:45:26.841853]   datasets:[
[Training] [2023-03-26T08:45:26.844863]     train:[
[Training] [2023-03-26T08:45:26.846870]       name: training
[Training] [2023-03-26T08:45:26.849881]       n_workers: 2
[Training] [2023-03-26T08:45:26.851888]       batch_size: 128
[Training] [2023-03-26T08:45:26.854898]       mode: paired_voice_audio
[Training] [2023-03-26T08:45:26.857908]       path: ./training/ciritestsinglelines/train.txt
[Training] [2023-03-26T08:45:26.859916]       fetcher_mode: ['lj']
[Training] [2023-03-26T08:45:26.861922]       phase: train
[Training] [2023-03-26T08:45:26.863929]       max_wav_length: 255995
[Training] [2023-03-26T08:45:26.865937]       max_text_length: 200
[Training] [2023-03-26T08:45:26.867944]       sample_rate: 22050
[Training] [2023-03-26T08:45:26.869950]       load_conditioning: True
[Training] [2023-03-26T08:45:26.871958]       num_conditioning_candidates: 2
[Training] [2023-03-26T08:45:26.874968]       conditioning_length: 44000
[Training] [2023-03-26T08:45:26.876975]       use_bpe_tokenizer: True
[Training] [2023-03-26T08:45:26.878981]       tokenizer_vocab: ./modules/tortoise-tts/tortoise/data/tokenizer.json
[Training] [2023-03-26T08:45:26.880989]       load_aligned_codes: False
[Training] [2023-03-26T08:45:26.882995]       data_type: img
[Training] [2023-03-26T08:45:26.886006]     ]
[Training] [2023-03-26T08:45:26.888013]     val:[
[Training] [2023-03-26T08:45:26.890019]       name: validation
[Training] [2023-03-26T08:45:26.892027]       n_workers: 2
[Training] [2023-03-26T08:45:26.894034]       batch_size: 5
[Training] [2023-03-26T08:45:26.896041]       mode: paired_voice_audio
[Training] [2023-03-26T08:45:26.898047]       path: ./training/ciritestsinglelines/validation.txt
[Training] [2023-03-26T08:45:26.901058]       fetcher_mode: ['lj']
[Training] [2023-03-26T08:45:26.904069]       phase: val
[Training] [2023-03-26T08:45:26.906076]       max_wav_length: 255995
[Training] [2023-03-26T08:45:26.908082]       max_text_length: 200
[Training] [2023-03-26T08:45:26.910089]       sample_rate: 22050
[Training] [2023-03-26T08:45:26.912096]       load_conditioning: True
[Training] [2023-03-26T08:45:26.914104]       num_conditioning_candidates: 2
[Training] [2023-03-26T08:45:26.916111]       conditioning_length: 44000
[Training] [2023-03-26T08:45:26.919120]       use_bpe_tokenizer: True
[Training] [2023-03-26T08:45:26.921128]       tokenizer_vocab: ./modules/tortoise-tts/tortoise/data/tokenizer.json
[Training] [2023-03-26T08:45:26.923135]       load_aligned_codes: False
[Training] [2023-03-26T08:45:26.925141]       data_type: img
[Training] [2023-03-26T08:45:26.927148]     ]
[Training] [2023-03-26T08:45:26.929156]   ]
[Training] [2023-03-26T08:45:26.932166]   steps:[
[Training] [2023-03-26T08:45:26.934173]     gpt_train:[
[Training] [2023-03-26T08:45:26.936180]       training: gpt
[Training] [2023-03-26T08:45:26.938187]       loss_log_buffer: 500
[Training] [2023-03-26T08:45:26.940194]       optimizer: adamw
[Training] [2023-03-26T08:45:26.942201]       optimizer_params:[
[Training] [2023-03-26T08:45:26.944208]         lr: 1e-05
[Training] [2023-03-26T08:45:26.946215]         weight_decay: 0.01
[Training] [2023-03-26T08:45:26.948222]         beta1: 0.9
[Training] [2023-03-26T08:45:26.951233]         beta2: 0.96
[Training] [2023-03-26T08:45:26.953239]       ]
[Training] [2023-03-26T08:45:26.955246]       clip_grad_eps: 4
[Training] [2023-03-26T08:45:26.957253]       injectors:[
[Training] [2023-03-26T08:45:26.959260]         paired_to_mel:[
[Training] [2023-03-26T08:45:26.962271]           type: torch_mel_spectrogram
[Training] [2023-03-26T08:45:26.964277]           mel_norm_file: ./modules/tortoise-tts/tortoise/data/mel_norms.pth
[Training] [2023-03-26T08:45:26.966284]           in: wav
[Training] [2023-03-26T08:45:26.968291]           out: paired_mel
[Training] [2023-03-26T08:45:26.970298]         ]
[Training] [2023-03-26T08:45:26.973309]         paired_cond_to_mel:[
[Training] [2023-03-26T08:45:26.975316]           type: for_each
[Training] [2023-03-26T08:45:26.977322]           subtype: torch_mel_spectrogram
[Training] [2023-03-26T08:45:26.979330]           mel_norm_file: ./modules/tortoise-tts/tortoise/data/mel_norms.pth
[Training] [2023-03-26T08:45:26.981337]           in: conditioning
[Training] [2023-03-26T08:45:26.984347]           out: paired_conditioning_mel
[Training] [2023-03-26T08:45:26.987358]         ]
[Training] [2023-03-26T08:45:26.989364]         to_codes:[
[Training] [2023-03-26T08:45:26.991371]           type: discrete_token
[Training] [2023-03-26T08:45:26.993378]           in: paired_mel
[Training] [2023-03-26T08:45:26.995385]           out: paired_mel_codes
[Training] [2023-03-26T08:45:26.997393]           dvae_config: ./models/tortoise/train_diffusion_vocoder_22k_level.yml
[Training] [2023-03-26T08:45:27.000403]         ]
[Training] [2023-03-26T08:45:27.002410]         paired_fwd_text:[
[Training] [2023-03-26T08:45:27.004417]           type: generator
[Training] [2023-03-26T08:45:27.007427]           generator: gpt
[Training] [2023-03-26T08:45:27.009434]           in: ['paired_conditioning_mel', 'padded_text', 'text_lengths', 'paired_mel_codes', 'wav_lengths']
[Training] [2023-03-26T08:45:27.011441]           out: ['loss_text_ce', 'loss_mel_ce', 'logits']
[Training] [2023-03-26T08:45:27.014452]         ]
[Training] [2023-03-26T08:45:27.016458]       ]
[Training] [2023-03-26T08:45:27.018466]       losses:[
[Training] [2023-03-26T08:45:27.020473]         text_ce:[
[Training] [2023-03-26T08:45:27.022479]           type: direct
[Training] [2023-03-26T08:45:27.024486]           weight: 0.01
[Training] [2023-03-26T08:45:27.026493]           key: loss_text_ce
[Training] [2023-03-26T08:45:27.028500]         ]
[Training] [2023-03-26T08:45:27.030507]         mel_ce:[
[Training] [2023-03-26T08:45:27.032514]           type: direct
[Training] [2023-03-26T08:45:27.034521]           weight: 1
[Training] [2023-03-26T08:45:27.036528]           key: loss_mel_ce
[Training] [2023-03-26T08:45:27.038536]         ]
[Training] [2023-03-26T08:45:27.040542]       ]
[Training] [2023-03-26T08:45:27.042549]     ]
[Training] [2023-03-26T08:45:27.044556]   ]
[Training] [2023-03-26T08:45:27.046562]   networks:[
[Training] [2023-03-26T08:45:27.049573]     gpt:[
[Training] [2023-03-26T08:45:27.051580]       type: generator
[Training] [2023-03-26T08:45:27.053588]       which_model_G: unified_voice2
[Training] [2023-03-26T08:45:27.056598]       kwargs:[
[Training] [2023-03-26T08:45:27.058605]         layers: 30
[Training] [2023-03-26T08:45:27.060612]         model_dim: 1024
[Training] [2023-03-26T08:45:27.062618]         heads: 16
[Training] [2023-03-26T08:45:27.064625]         max_text_tokens: 402
[Training] [2023-03-26T08:45:27.067636]         max_mel_tokens: 604
[Training] [2023-03-26T08:45:27.070646]         max_conditioning_inputs: 2
[Training] [2023-03-26T08:45:27.073657]         mel_length_compression: 1024
[Training] [2023-03-26T08:45:27.076667]         number_text_tokens: 256
[Training] [2023-03-26T08:45:27.078674]         number_mel_codes: 8194
[Training] [2023-03-26T08:45:27.080681]         start_mel_token: 8192
[Training] [2023-03-26T08:45:27.082688]         stop_mel_token: 8193
[Training] [2023-03-26T08:45:27.084695]         start_text_token: 255
[Training] [2023-03-26T08:45:27.086702]         train_solo_embeddings: False
[Training] [2023-03-26T08:45:27.088709]         use_mel_codes_as_input: True
[Training] [2023-03-26T08:45:27.090716]         checkpointing: True
[Training] [2023-03-26T08:45:27.092723]         tortoise_compat: True
[Training] [2023-03-26T08:45:27.094730]       ]
[Training] [2023-03-26T08:45:27.096737]     ]
[Training] [2023-03-26T08:45:27.098744]   ]
[Training] [2023-03-26T08:45:27.100751]   path:[
[Training] [2023-03-26T08:45:27.102757]     strict_load: True
[Training] [2023-03-26T08:45:27.105768]     pretrain_model_gpt: ./models/tortoise/autoregressive.pth
[Training] [2023-03-26T08:45:27.107776]     root: ./
[Training] [2023-03-26T08:45:27.109782]     experiments_root: ./training\ciritestsinglelines\finetune
[Training] [2023-03-26T08:45:27.111789]     models: ./training\ciritestsinglelines\finetune\models
[Training] [2023-03-26T08:45:27.113796]     training_state: ./training\ciritestsinglelines\finetune\training_state
[Training] [2023-03-26T08:45:27.115803]     log: ./training\ciritestsinglelines\finetune
[Training] [2023-03-26T08:45:27.117810]     val_images: ./training\ciritestsinglelines\finetune\val_images
[Training] [2023-03-26T08:45:27.119817]   ]
[Training] [2023-03-26T08:45:27.121823]   train:[
[Training] [2023-03-26T08:45:27.123830]     niter: 5600
[Training] [2023-03-26T08:45:27.125838]     warmup_iter: -1
[Training] [2023-03-26T08:45:27.127845]     mega_batch_factor: 24
[Training] [2023-03-26T08:45:27.129852]     val_freq: 35
[Training] [2023-03-26T08:45:27.132862]     ema_enabled: False
[Training] [2023-03-26T08:45:27.134869]     default_lr_scheme: MultiStepLR
[Training] [2023-03-26T08:45:27.136876]     gen_lr_steps: [14, 28, 63, 126, 175, 231, 350]
[Training] [2023-03-26T08:45:27.138883]     lr_gamma: 0.5
[Training] [2023-03-26T08:45:27.140900]   ]
[Training] [2023-03-26T08:45:27.142897]   eval:[
[Training] [2023-03-26T08:45:27.144904]     pure: False
[Training] [2023-03-26T08:45:27.147915]     output_state: gen
[Training] [2023-03-26T08:45:27.149922]   ]
[Training] [2023-03-26T08:45:27.152932]   logger:[
[Training] [2023-03-26T08:45:27.154939]     save_checkpoint_freq: 35
[Training] [2023-03-26T08:45:27.156946]     visuals: ['gen', 'mel']
[Training] [2023-03-26T08:45:27.159956]     visual_debug_rate: 35
[Training] [2023-03-26T08:45:27.161963]     is_mel_spectrogram: True
[Training] [2023-03-26T08:45:27.163970]   ]
[Training] [2023-03-26T08:45:27.165977]   is_train: True
[Training] [2023-03-26T08:45:27.167985]   dist: False
[Training] [2023-03-26T08:45:27.169991]
[Training] [2023-03-26T08:45:27.171997] 23-03-26 08:45:26.815 - INFO: Random seed: 3174
[Training] [2023-03-26T08:45:27.396777] 23-03-26 08:45:27.395 - INFO: Number of training data elements: 778, iters: 7
[Training] [2023-03-26T08:45:27.399787] 23-03-26 08:45:27.396 - INFO: Total epochs needed: 800 for iters 5,600
[Training] [2023-03-26T08:45:27.984502] C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\transformers\configuration_utils.py:379: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
[Training] [2023-03-26T08:45:27.988516]   warnings.warn(
[Training] [2023-03-26T08:45:32.050950] 23-03-26 08:45:32.049 - INFO: Loading model for [./models/tortoise/autoregressive.pth]
[Training] [2023-03-26T08:45:32.625303] 23-03-26 08:45:32.624 - INFO: Start training from epoch: 0, iter: 0
[Training] [2023-03-26T08:45:33.809904] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-26T08:45:36.113973] NOTE: Redirects are currently not supported in Windows or MacOs.
[Training] [2023-03-26T08:45:37.667301] C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\torch\optim\lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
[Training] [2023-03-26T08:45:37.667301]   warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
[Training] [2023-03-26T08:45:43.856655] Disabled distributed training.
[Training] [2023-03-26T08:45:43.856655] Path already exists. Rename it to [./training\ciritestsinglelines\finetune_archived_230326-084526]
[Training] [2023-03-26T08:45:43.857658] Loading from ./models/tortoise/dvae.pth
[Training] [2023-03-26T08:45:43.857658] Traceback (most recent call last):
[Training] [2023-03-26T08:45:43.858662]   File "C:\Users\Dominik\ai-voice-cloning\src\train.py", line 64, in <module>
[Training] [2023-03-26T08:45:43.858662]     train(config_path, args.launcher)
[Training] [2023-03-26T08:45:43.858662]   File "C:\Users\Dominik\ai-voice-cloning\src\train.py", line 31, in train
[Training] [2023-03-26T08:45:43.858662]     trainer.do_training()
[Training] [2023-03-26T08:45:43.859665]   File "c:\users\dominik\ai-voice-cloning\modules\dlas\dlas\train.py", line 408, in do_training
[Training] [2023-03-26T08:45:43.859665]     metric = self.do_step(train_data)
[Training] [2023-03-26T08:45:43.859665]   File "c:\users\dominik\ai-voice-cloning\modules\dlas\dlas\train.py", line 271, in do_step
[Training] [2023-03-26T08:45:43.860669]     gradient_norms_dict = self.model.optimize_parameters(
[Training] [2023-03-26T08:45:43.860669]   File "c:\users\dominik\ai-voice-cloning\modules\dlas\dlas\trainer\ExtensibleTrainer.py", line 321, in optimize_parameters
[Training] [2023-03-26T08:45:43.860669]     ns = step.do_forward_backward(
[Training] [2023-03-26T08:45:43.860669]   File "c:\users\dominik\ai-voice-cloning\modules\dlas\dlas\trainer\steps.py", line 242, in do_forward_backward
[Training] [2023-03-26T08:45:43.861672]     local_state[k] = v[grad_accum_step]
[Training] [2023-03-26T08:45:43.861672] IndexError: list index out of range
Traceback (most recent call last):
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 898, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\utils.py", line 549, in async_iteration
    return next(iterator)
  File "C:\Users\Dominik\ai-voice-cloning\src\utils.py", line 1099, in run_training
    for line in iter(training_state.process.stdout.readline, ""):
  File "C:\Users\Dominik\anaconda3\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 2: invalid start byte
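The `IndexError` buried in this log is a different failure from the UTF-8 one. One plausible cause (an assumption from the config values in the log, not confirmed in this thread): the trainer splits each batch into `mega_batch_factor` gradient-accumulation chunks and then indexes chunk `grad_accum_step`, but with 778 elements at `batch_size: 128` the final partial batch has only 778 % 128 = 10 samples, which cannot be split into 24 chunks. A minimal sketch of that failure mode (an assumed reconstruction, not the actual DLAS code):

```python
# Assumed illustration of the failure mode, not the actual DLAS code:
# each batch is cut into `factor` gradient-accumulation chunks, and
# chunk number `grad_accum_step` is then indexed.
def split_into_chunks(batch, factor):
    size = len(batch) // factor  # 0 when the batch is smaller than factor
    return [batch[i * size:(i + 1) * size] for i in range(factor)] if size else []

# 778 samples at batch_size 128 leave a final partial batch of 10 samples,
# which yields no full chunks at mega_batch_factor 24:
last_batch = list(range(778 % 128))
chunks = split_into_chunks(last_batch, 24)
try:
    chunks[23]  # corresponds to v[grad_accum_step] in steps.py
except IndexError:
    print("IndexError: list index out of range")
```

If this is the cause, choosing a `batch_size` that divides evenly into the dataset and by `mega_batch_factor` would avoid the short final batch.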
"C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict output = await app.get_blocks().process_api( File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api result = await self.call_function( File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 898, in call_function prediction = await anyio.to_thread.run_sync( File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread return await future File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run result = context.run(func, *args) File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\utils.py", line 549, in async_iteration return next(iterator) File "C:\Users\Dominik\ai-voice-cloning\src\utils.py", line 1099, in run_training for line in iter(training_state.process.stdout.readline, ""): File "C:\Users\Dominik\anaconda3\lib\codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 2: invalid start byte ```
Owner

[Training] [2023-03-26T08:45:43.861672] local_state[k] = v[grad_accum_step]
[Training] [2023-03-26T08:45:43.861672] IndexError: list index out of range

#159

https://git.ecker.tech/mrq/ai-voice-cloning/wiki/Issues#local_state-k-v-grad_accum_step-indexerror-list-index-out-of-range
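For illustration, here is a minimal sketch of that failure mode (made-up values; `split_batch` is a hypothetical stand-in for the DLAS batch-chunking logic, not the repo's actual code): the batch is divided into roughly `mega_batch_factor` chunks for gradient accumulation, and when too few samples are available, fewer chunks exist than accumulation steps, so indexing by `grad_accum_step` runs off the end of the list.

```python
# Hypothetical stand-in for the batch-chunking step, just to show the
# failure mode: too few samples yield fewer chunks than accumulation steps.
def split_batch(batch, mega_batch_factor):
    chunk = max(1, len(batch) // mega_batch_factor)  # samples per chunk
    return [batch[i:i + chunk] for i in range(0, len(batch), chunk)]

chunks = split_batch(list(range(10)), mega_batch_factor=24)
print(len(chunks))  # 10 chunks, not 24
# chunks[23]  # would raise IndexError: list index out of range
```

The wiki page linked above covers the practical fix (matching the batch size and gradient-accumulation settings to the dataset size).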

Author

Training seems to run, but I got this:

[Training] [2023-03-31T06:46:34.505627] 23-03-31 06:46:34.505 - INFO: Training Metrics: {"loss_text_ce": 2.766988515853882, "loss_mel_ce": 1.5485143661499023, "loss_gpt_total": 1.5761841535568237, "lr": 7.8125e-08, "it": 3199, "step": 3, "steps": 4, "epoch": 799, "iteration_rate": 4.77694296836853}
[Training] [2023-03-31T06:46:41.700762] 23-03-31 06:46:41.699 - INFO: Saving models and training states.
[Training] [2023-03-31T06:46:41.701766] 23-03-31 06:46:41.701 - INFO: Training Metrics: {"loss_text_ce": 2.7660417556762695, "loss_mel_ce": 1.5481622219085693, "loss_gpt_total": 1.5758225917816162, "lr": 7.8125e-08, "it": 3200, "step": 4, "steps": 4, "epoch": 799, "iteration_rate": 5.182716608047485}
[Training] [2023-03-31T06:46:47.056203] 23-03-31 06:46:47.056 - INFO: Training Metrics: {"loss_text_ce": 2.761721134185791, "loss_mel_ce": 1.5438686609268188, "loss_gpt_total": 1.5714858770370483, "lr": 7.8125e-08, "it": 3201, "step": 1, "steps": 4, "epoch": 800, "iteration_rate": 4.9955222606658936}
[Training] [2023-03-31T06:46:52.188457] 23-03-31 06:46:52.188 - INFO: Training Metrics: {"loss_text_ce": 2.762148857116699, "loss_mel_ce": 1.5456483364105225, "loss_gpt_total": 1.5732698440551758, "lr": 7.8125e-08, "it": 3202, "step": 2, "steps": 4, "epoch": 800, "iteration_rate": 5.131250381469727}
[Training] [2023-03-31T06:46:57.270874] 23-03-31 06:46:57.269 - INFO: Training Metrics: {"loss_text_ce": 2.7620925903320312, "loss_mel_ce": 1.5464383363723755, "loss_gpt_total": 1.5740593671798706, "lr": 7.8125e-08, "it": 3203, "step": 3, "steps": 4, "epoch": 800, "iteration_rate": 5.080410003662109}
[Training] [2023-03-31T06:47:02.130231] 23-03-31 06:47:02.130 - INFO: Training Metrics: {"loss_text_ce": 2.761744976043701, "loss_mel_ce": 1.5469928979873657, "loss_gpt_total": 1.5746103525161743, "lr": 7.8125e-08, "it": 3204, "step": 4, "steps": 4, "epoch": 800, "iteration_rate": 4.85835337638855}
[Training] [2023-03-31T06:47:03.999666] 23-03-31 06:47:03.999 - INFO: Saving models and training states.
[Training] [2023-03-31T06:47:03.999666] 23-03-31 06:47:03.999 - INFO: Finished training!
[Training] [2023-03-31T06:47:05.104287] Disabled distributed training.
[Training] [2023-03-31T06:47:05.104287] Loading from ./models/tortoise/dvae.pth
Traceback (most recent call last):
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 898, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Dominik\ai-voice-cloning\venv\lib\site-packages\gradio\utils.py", line 549, in async_iteration
    return next(iterator)
  File "C:\Users\Dominik\ai-voice-cloning\src\utils.py", line 1176, in reconnect_training
    for line in iter(training_state.process.stdout.readline, ""):
  File "C:\Users\Dominik\anaconda3\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 2: invalid start byte

It says "Finished training!", so is it done? But why did I get the UTF-8 error again?
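For what it's worth, the traceback shows the crash is in the UI's log-monitor loop (`utils.py`, reading `training_state.process.stdout`), not in training itself, which had already logged "Finished training!". The loop decodes the trainer's stdout as strict UTF-8, so a single stray console byte kills it. A hedged sketch (assumed setup, not the repo's actual code) of reading a child process's output tolerantly:

```python
import subprocess
import sys

# Sketch: spawn a child whose output contains the invalid byte 0x81, and
# read it with errors="replace" so the bad byte becomes U+FFFD instead of
# raising UnicodeDecodeError in the reading loop.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'ok \\x81 done\\n')"],
    stdout=subprocess.PIPE,
    encoding="utf-8",
    errors="replace",  # key change: tolerate undecodable bytes
)
lines = [line for line in iter(proc.stdout.readline, "")]
proc.wait()
print(lines[0])  # the 0x81 byte shows up as U+FFFD; the loop survives
```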


Does it happen with other python packages?

Author

Does it happen with other python packages?

Don't know. How could I test it? Sadly I'm not very familiar with Python.
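One way to test it without involving any of these packages: Python's decoder alone reproduces the error on that byte. (0x81 is not a valid UTF-8 start byte, but it is a real character in some legacy code pages, e.g. 'ü' in cp850, which is one way it can leak out of a Windows console.)

```python
# Byte 0x81 on its own is enough to reproduce the reported error.
data = b"\x81"
try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0x81 in position 0: invalid start byte

print(data.decode("cp850"))                    # 'ü' -- a legacy code page reading
print(data.decode("utf-8", errors="replace"))  # U+FFFD -- tolerant decoding
```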


If you check the `training/<voice name>/finetune/models/` directory, is there an `800_gpt.pth` file there? Can you use it to generate samples?

Author

In that folder there are 129 files with that structure.
I can generate samples with the voice. However, I have around 40 minutes of clean voice samples; I transcribed and sliced everything and trained on it (with the error above). Despite playing around with the iterations, samples, and temperature, the generated samples are really bad, so I think something went wrong during the training.

Reference: mrq/ai-voice-cloning#166