• https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.

    XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG

  • Joined on Oct 10, 2022

mrq pushed to master at mrq/vall-e

  • 99e980d323 documentation and more better-er attribution

2023-10-10 22:14:20 +07:00

mrq pushed to master at mrq/vall-e

  • e727b6e5c1 changed dynamic temperature trigger to be a min-(n)ar-temp value between [0,(n)ar-temp), flags to set min temp, checkbox in web UI to request it

2023-10-10 22:01:45 +07:00
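
A rough sketch of the kind of mapping that commit describes: the sampling temperature scales between a minimum value and the configured (n)ar temperature depending on how uncertain the model is at each step. The function and argument names here (`dynamic_temperature_sample`, `min_temperature`) and the entropy-based scaling are illustrative assumptions, not the repo's actual implementation.

```python
import torch

def dynamic_temperature_sample(logits: torch.Tensor, temperature: float, min_temperature: float = 0.0) -> torch.Tensor:
    """Entropy-scaled ("dynamic") temperature sampling over a (batch, vocab) logits tensor."""
    probs = torch.softmax(logits, dim=-1)
    # normalized entropy in [0, 1]: low for a peaky distribution, high for a flat one
    entropy = -(probs * torch.log(probs + 1e-10)).sum(dim=-1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))
    uncertainty = entropy / max_entropy

    # interpolate between the minimum temperature and the requested one
    dyn_temp = min_temperature + (temperature - min_temperature) * uncertainty
    dyn_temp = torch.clamp(dyn_temp, min=1e-5)

    scaled = torch.softmax(logits / dyn_temp.unsqueeze(-1), dim=-1)
    return torch.multinomial(scaled, num_samples=1)
```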

mrq pushed to master at mrq/vall-e

  • ec25f56bd9 using torch.max fixes things, somehow, for dynamic temp sampling

2023-10-10 21:41:36 +07:00

mrq pushed to main at mrq/tortoise-tts

  • 95f679f4ba possible fix for when candidates >= samples

2023-10-10 15:31:50 +07:00

mrq pushed to master at mrq/tortoise-tts

  • 95f679f4ba possible fix for when candidates >= samples

2023-10-10 15:30:16 +07:00

mrq commented on issue mrq/ai-voice-cloning#406

selected index k out of range when attempting to gen more than 2 candidates

I remember it working in the past, but the only thing that comes to mind is if you have `Unsqueeze Sample Batches` checked, it might botch some things.

2023-10-10 15:24:13 +07:00
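
The "possible fix for when candidates >= samples" pushes above presumably boil down to never requesting more candidates than there are generated samples, since ranking the k-th best sample otherwise indexes past the end of the list. A minimal, hypothetical illustration (the helper and argument names are made up):

```python
def pick_candidates(samples: list, scores: list, candidates: int) -> list:
    """Return the top-`candidates` samples by score, clamped to what actually exists."""
    candidates = min(candidates, len(samples))
    # rank sample indices by score (higher is better here) and keep the top ones
    ranked = sorted(range(len(samples)), key=lambda i: scores[i], reverse=True)
    return [samples[i] for i in ranked[:candidates]]
```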

mrq pushed to master at mrq/vall-e

  • 87db03dd93 trim the input prompt to 3 seconds when training NAR tasks (marked as experimental; the paper mentions doing so, but I don't know how much this would harm the retention heads)

2023-10-10 03:03:06 +07:00
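
A loose sketch of what trimming the acoustic prompt could look like, assuming the prompt is a tensor of codec codes shaped (quantizer levels, frames) and a codec frame rate of 75 frames per second (24 kHz EnCodec); the random windowing and helper name are assumptions, not the repo's actual code.

```python
import random
import torch

def trim_prompt(codes: torch.Tensor, seconds: float = 3.0, frame_rate: int = 75) -> torch.Tensor:
    """Crop an acoustic prompt to at most `seconds` worth of codec frames."""
    max_frames = int(seconds * frame_rate)
    if codes.shape[-1] <= max_frames:
        return codes
    # take a random window so training still sees varied prompt content
    start = random.randint(0, codes.shape[-1] - max_frames)
    return codes[..., start : start + max_frames]
```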

mrq pushed to master at mrq/vall-e

  • 893a610fad cleanup, use deepspeed inferencing pathway if requested

2023-10-09 20:23:11 +07:00
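
For the optional DeepSpeed inferencing pathway, the gist is handing the module to DeepSpeed's inference engine when requested and otherwise running plain eval-mode PyTorch. A sketch under assumed names (`wrap_for_inference`, `use_deepspeed`), not the repo's actual wiring:

```python
import torch
import deepspeed

def wrap_for_inference(model: torch.nn.Module, use_deepspeed: bool = False):
    if use_deepspeed:
        # DeepSpeed's inference engine wraps the module; kernel injection
        # is optional and depends on the model architecture
        return deepspeed.init_inference(model, dtype=torch.float16, replace_with_kernel_inject=True)
    return model.eval()
```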

mrq pushed to master at mrq/vall-e

  • 26fbb92ec6 reduced dynamic temperature threshold to > 1.0, as it seems to not quite be useful for audio LMs; sped up any sampling that touches logits by copying them to CPU first, as accessing tensors on the GPU is slow as balls

2023-10-09 19:45:28 +07:00
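
The speed-up described above is mostly about avoiding per-step GPU synchronization: pull the logits to the CPU once, then do the cheap sampling arithmetic there. A minimal sketch of that idea:

```python
import torch

def sample_on_cpu(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Move logits off the GPU before sampling so repeated small ops don't keep syncing the device."""
    logits = logits.detach().to("cpu")
    probs = torch.softmax(logits / max(temperature, 1e-5), dim=-1)
    return torch.multinomial(probs, num_samples=1)
```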

mrq commented on issue mrq/ai-voice-cloning#405

Does not run in Docker

A mix of:
* documentation of versions has been outdated as PyTorch gets updates, as originally this was worked on with `torch==1.13.1+cu116`
* the Dockerfile was added in a PR, as I don't use…

2023-10-09 18:42:11 +07:00

mrq pushed to master at mrq/vall-e

  • 29873e6ded extend the max temps in the web UI to actually allow dynamic temp sampling

2023-10-09 18:29:49 +07:00
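
Since dynamic temperature only triggers above 1.0, the UI's temperature sliders need an upper bound past that. Assuming a Gradio-based web UI, the change amounts to something like the following; the slider names, defaults, and bounds here are illustrative, not the actual UI code.

```python
import gradio as gr

with gr.Blocks() as ui:
    # maximum raised past 1.0 so dynamic temperature sampling can actually be requested
    ar_temp = gr.Slider(minimum=0.0, maximum=1.5, value=0.95, step=0.05, label="AR Temperature")
    nar_temp = gr.Slider(minimum=0.0, maximum=1.5, value=0.25, step=0.05, label="NAR Temperature")
```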

mrq pushed to master at mrq/vall-e

  • 27483e56f0 disabled preparing of SpeechX tasks, added dynamic temperature testing (to-do: test it, credited in the function)

2023-10-09 18:00:45 +07:00

mrq pushed to master at mrq/vall-e

2023-10-07 01:07:39 +07:00

mrq commented on issue mrq/ai-voice-cloning#152

VALL-E Integration (and In Response To TorToiSe: a Quick Retrospective)

Besides that, and the site issues, [microsoft/torchscale](https://github.com/microsoft/torchscale/) made some commits that break compatibility with existing models using its RetNet. It messes with…

2023-10-06 22:56:01 +07:00

mrq commented on issue mrq/ai-voice-cloning#152

VALL-E Integration (and In Response To TorToiSe: a Quick Retrospective)

> but in inference.py you separately instantiate and call the ar and nar. Is that correct?

The AR/NAR/AR_NAR classes just have overloaded properties and a forward to do the sampling proper. I…

2023-10-06 22:50:30 +07:00
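
A loose illustration of the layout described in that reply: a shared base model owns the forward/sampling path, and the AR/NAR subclasses mostly just override a few properties that the shared code consults. This is only the shape of the idea, not the repo's actual classes.

```python
import torch
from torch import nn

class Base(nn.Module):
    @property
    def causal(self) -> bool:            # does this model decode token by token?
        raise NotImplementedError

    @property
    def n_resp_levels(self) -> int:      # how many quantizer levels it predicts
        raise NotImplementedError

    def forward(self, text, proms, resps):
        # shared embedding -> transformer -> sampling path would live here,
        # branching on the properties above
        ...

class AR(Base):
    @property
    def causal(self) -> bool:
        return True

    @property
    def n_resp_levels(self) -> int:
        return 1

class NAR(Base):
    @property
    def causal(self) -> bool:
        return False

    @property
    def n_resp_levels(self) -> int:
        return 7
```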

mrq pushed to master at mrq/vall-e

2023-10-06 15:12:58 +07:00

mrq pushed to master at mrq/vall-e

2023-10-06 15:09:12 +07:00

mrq pushed to master at mrq/vall-e

  • 3db7e7dea1 implicitly load checkpoint if deepspeed checkpoint not found, updated setup script to grab the diskcached dataloader things

2023-10-06 15:01:52 +07:00
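
The implicit checkpoint load presumably means: prefer DeepSpeed's own checkpoint directory, and if it isn't there, fall back to a plain `torch.load` of exported weights. A hypothetical sketch; the paths and names are not the repo's actual layout.

```python
from pathlib import Path
import torch

def load_model_state(engine, model: torch.nn.Module, ckpt_dir: Path, weights_path: Path) -> None:
    if ckpt_dir.exists():
        engine.load_checkpoint(str(ckpt_dir))          # DeepSpeed's own loader
    elif weights_path.exists():
        state = torch.load(weights_path, map_location="cpu")
        model.load_state_dict(state, strict=False)     # plain exported weights
```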

mrq pushed to master at mrq/vall-e

2023-10-06 14:25:55 +07:00

mrq commented on issue mrq/vall-e#8

Training GPU offer

Alright, after a painful day of trying to upload the gunzipped dataset twice, the latest libre dataset has been uploaded, and the setup script is good to go, so all that's needed to be done to get…

2023-10-06 13:13:24 +07:00