ValueError when generating prompt #144

Closed
opened 2023-03-16 08:50:16 +00:00 by Greythorn · 3 comments

The error occurred after clicking the generate button while using a trained model. The same error happens with a random voice, and the microphone input produces a separate error: 'wrong file type'.

Python 3.10.8
GPU: NVIDIA 3060

Loading autoregressive model: ./training/Rylai/finetune/models//15972_gpt.pth
Loaded autoregressive model
Loading voice: Rylai with model 62def33e
Reading from latent: ./voices\Rylai\cond_latents_62def33e.pth
Generating autoregressive samples
Traceback (most recent call last):
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\gradio\routes.py", line 393, in run_predict
    output = await app.get_blocks().process_api(
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 1059, in process_api
    result = await self.call_function(
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\gradio\blocks.py", line 868, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\gradio\helpers.py", line 587, in tracked_fn
    response = fn(*args)
  File "S:\zzz TTS AI THING\ai-voice-cloning\src\webui.py", line 91, in generate_proxy
    raise e
  File "S:\zzz TTS AI THING\ai-voice-cloning\src\webui.py", line 85, in generate_proxy
    sample, outputs, stats = generate(**kwargs)
  File "S:\zzz TTS AI THING\ai-voice-cloning\src\utils.py", line 365, in generate
    gen, additionals = tts.tts(cut_text, **settings )
  File "s:\zzz tts ai thing\ai-voice-cloning\modules\tortoise-tts\tortoise\api.py", line 672, in tts
    codes = self.autoregressive.inference_speech(auto_conditioning, text_tokens,
  File "s:\zzz tts ai thing\ai-voice-cloning\modules\tortoise-tts\tortoise\models\autoregressive.py", line 513, in inference_speech
    gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token,
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\transformers\generation\utils.py", line 1213, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "S:\zzz TTS AI THING\ai-voice-cloning\venv\lib\site-packages\transformers\generation\utils.py", line 1105, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['diffusion_model', 'tokenizer_json'] (note: typos in the generate arguments will also show up in this list)
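For context on what the traceback means: recent versions of `transformers` validate, via `_validate_model_kwargs`, that every extra keyword passed to `generate()` is actually consumed by the model, and raise a `ValueError` otherwise. Here, pipeline-level settings (`diffusion_model`, `tokenizer_json`) leaked into the autoregressive model's `generate()` call. A minimal sketch of a defensive workaround, using a hypothetical `filter_generate_kwargs` helper and an illustrative settings dict (this is not the actual tortoise-tts code or the upstream fix):

```python
def filter_generate_kwargs(settings: dict) -> dict:
    """Drop keys that the autoregressive model's generate() does not accept.

    These keys are consumed by later pipeline stages (diffusion, vocoder)
    and must not be forwarded to transformers' generate().
    """
    unused = {"diffusion_model", "tokenizer_json"}
    return {k: v for k, v in settings.items() if k not in unused}


# Illustrative settings dict mixing sampling kwargs with pipeline config.
settings = {
    "temperature": 0.8,
    "diffusion_model": "./models/diffusion.pth",  # used by a later stage
    "tokenizer_json": "./tokenizer.json",         # used by a later stage
}

clean = filter_generate_kwargs(settings)
# clean now contains only {"temperature": 0.8}, safe to pass to generate()
```

The actual fix landed upstream in tortoise-tts rather than in user code, but the principle is the same: strip pipeline-only keys before they reach `generate()`.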
Owner

I'll push a fix to mrq/tortoise-tts when I get a chance. Odd it didn't turn up in my many generation tests over the past few days.

You might also just need to update tortoise itself with the update-force script.

Owner

Should be fixed in mrq/tortoise-tts commit https://git.ecker.tech/mrq/tortoise-tts/commit/e201746eeb3f5be602ae3395df8344f231a5f0d4.

I'm still not sure how it wasn't erroring out for me.

Author

Ran update.bat and it now works, thank you. Issue solved.

Reference: mrq/ai-voice-cloning#144