Error when using CVVP weight above 0% on already generated latents #21

Closed
opened 2023-02-11 19:16:03 +00:00 by gannybal · 2 comments

Slimmer computed latents are turned off.
The first generation (latents are computed just fine) works when the CVVP weight is not 0%, e.g. 85% CLVP and 15% CVVP. But when I try to generate with the already computed latents, during `Computing best candidates using CLVP 85% and CVVP 15%` I get this error:

```
Traceback (most recent call last):
  File "B:\AIVoice\tortoise-tts\webui.py", line 854, in run_generation
    sample, outputs, stats = generate(
  File "B:\AIVoice\tortoise-tts\webui.py", line 178, in generate
    gen, additionals = tts.tts(cut_text, **settings )
  File "B:\AIVoice\tortoise-tts\tortoise\api.py", line 547, in tts
    cvvp_accumulator = cvvp_accumulator + self.cvvp(auto_conds[:, cl].repeat(batch.shape[0], 1, 1), batch, return_loss=False)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\AIVoice\tortoise-tts\tortoise\models\cvvp.py", line 111, in forward
    cond_emb = self.cond_emb(mel_cond).permute(0, 2, 1)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\conv.py", line 313, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\torch\nn\modules\conv.py", line 309, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\gradio\routes.py", line 374, in run_predict
    output = await app.get_blocks().process_api(
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\gradio\blocks.py", line 1017, in process_api
    result = await self.call_function(
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\gradio\blocks.py", line 835, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "B:\AIVoice\tortoise-tts\tortoise-venv\lib\site-packages\gradio\helpers.py", line 584, in tracked_fn
    response = fn(*args)
  File "B:\AIVoice\tortoise-tts\webui.py", line 882, in run_generation
    raise gr.Error(message)
gradio.exceptions.Error: 'Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor'
```
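For context, the RuntimeError at the bottom of the first traceback is PyTorch's generic device-mismatch complaint: a CPU tensor is being fed to a module whose weights live on CUDA. A minimal sketch reproducing it outside tortoise-tts (the layer sizes and names here are illustrative, not the actual CVVP shapes):

```python
import torch
import torch.nn as nn

# Minimal reproduction of the underlying PyTorch error, independent of tortoise-tts:
# a Conv1d whose weights live on the GPU receives an input tensor still on the CPU,
# which is what happens when cached conditioning latents are loaded but never moved to CUDA.
conv = nn.Conv1d(in_channels=80, out_channels=512, kernel_size=3).cuda()

mel_cond = torch.randn(1, 80, 128)  # CPU tensor, e.g. restored via torch.load(...)
# conv(mel_cond)                    # RuntimeError: Input type (torch.FloatTensor) and weight type
#                                   # (torch.cuda.FloatTensor) should be the same ...

# Moving the cached tensor onto the module's device first avoids the mismatch.
out = conv(mel_cond.to(next(conv.parameters()).device))
print(out.shape)  # torch.Size([1, 512, 126])
```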
gannybal changed title from Error when using CVVP weight above 0% to Error when using CVVP weight above 0% on already generated latents 2023-02-11 19:33:39 +00:00

You need to manually delete the cond_latents.pth file for it to generate again now. Maybe a specific naming convention for the activated/disabled states could resolve this?

The same error occurs when trying to generate with multiple candidates for the second time.
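A small hedged sketch of that delete-and-regenerate workaround (the `./voices` root and file layout are assumptions; point it at wherever your computed latents are actually stored):

```python
from pathlib import Path

# Remove any cached cond_latents.pth so the latents get recomputed on the next generation.
# "./voices" is an assumed location; adjust it to your actual voices directory.
for cached in Path("./voices").rglob("cond_latents.pth"):
    print(f"deleting {cached}")
    cached.unlink()
```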

Owner

Fixed in commit 5f1c032312048596e1b00e836622fc7201bdf6be.

I think while I was trying to cram in DirectML support, I forgot to guarantee that `auto_conds` also got moved to the GPU; only the main conditioning latents did.
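For illustration, a sketch of the kind of fix being described (this is not the code from the commit; the helper and its usage are hypothetical):

```python
import torch

def move_latents_to_device(latents, device):
    """Recursively move every tensor in a (possibly nested) latent structure onto `device`."""
    if isinstance(latents, (list, tuple)):
        return type(latents)(move_latents_to_device(x, device) for x in latents)
    return latents.to(device) if torch.is_tensor(latents) else latents

# Hypothetical usage: ensure cached latents (including auto_conds) match the CVVP model's device.
# loaded = torch.load("cond_latents.pth", map_location="cpu")
# latents_on_gpu = move_latents_to_device(loaded, torch.device("cuda"))
```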

mrq closed this issue 2023-02-11 20:35:52 +00:00