mrq
https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.
XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG
Joined on 2022-10-10
100b4d7e61
Added settings page, added checking for updates (disabled by default), some other things that I don't remember
92cf9e1efe
Added tab to read and copy settings from a voice clip (in the future, I'll see about embedding the latent used to generate the voice)
5affc777e0
added another (somewhat adequate) example, added metadata storage to generated files (need to add in a viewer later)
b441a84615
added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM
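A minimal sketch of what a cap like --cond-latent-max-chunk-size implies: when slicing audio into chunks for conditional-latent computation, bound each chunk's length so no single chunk exhausts VRAM. The helper name `chunk_audio` is illustrative, not tortoise-tts's actual API.

```python
# Hypothetical helper: split samples into consecutive chunks, each no longer
# than max_chunk_size, so the largest tensor sent to the GPU stays bounded.
def chunk_audio(samples: list[float], max_chunk_size: int) -> list[list[float]]:
    """Split `samples` into consecutive chunks of at most `max_chunk_size`."""
    if max_chunk_size <= 0:
        raise ValueError("max_chunk_size must be positive")
    return [samples[i:i + max_chunk_size]
            for i in range(0, len(samples), max_chunk_size)]
```

Every chunk but the last has exactly `max_chunk_size` samples; the last holds the remainder.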
do_tts.py and read.py: use the Line Delimiter input in the web UI to process your text input into pieces, similar to read.py's behavior. For example, set Line Delimiter to \n for it to process each line one by…
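The splitting behavior described above can be sketched as follows (a hypothetical helper, not the web UI's actual code): split the input text on the user-supplied delimiter and hand each non-empty piece to the synthesizer separately.

```python
# Illustrative sketch of the Line Delimiter input: break text into pieces on
# a delimiter, dropping empty or whitespace-only fragments.
def split_by_delimiter(text: str, delimiter: str = "\n") -> list[str]:
    """Split `text` on `delimiter`, keeping only non-empty, stripped pieces."""
    return [piece.strip() for piece in text.split(delimiter) if piece.strip()]

pieces = split_by_delimiter("First line.\nSecond line.\n\nThird line.")
print(pieces)  # → ['First line.', 'Second line.', 'Third line.']
```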
2cfd3bc213
updated README (before I go mad trying to nitpick and edit it while getting distracted from an iToddler sperging)
5bf21fdbe1
modified how conditional latents are computed (before, it just happened to only bother reading the first 102400/24000=4.26 seconds per audio input, now it will chunk it all to compute latents)
6e89dcb97a
modified how conditional latents are computed (before, it just happened to only bother reading the first 102400/24000=4.26 seconds per audio input, now it will chunk it all to compute latents)
f19cbda183
modified how conditional latents are computed (before, it just happened to only bother reading the first 102400/24000=4.26 seconds per audio input, now it will chunk it all to compute latents)
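Before this change, only the first 102400 samples (102400 / 24000 ≈ 4.26 s at 24 kHz) of each voice clip contributed to the conditional latent. A hedged sketch of the fix: run the encoder over every chunk of the clip and average the results, instead of truncating. `encode_chunk` here is a stand-in for the real model, not tortoise-tts's code.

```python
# Constants from the commit message: 24 kHz audio, 102400-sample chunks.
SAMPLE_RATE = 24_000
CHUNK_SIZE = 102_400

def compute_latent(samples: list[float], encode_chunk) -> float:
    """Average per-chunk latents over the whole clip.

    The old behavior was equivalent to encode_chunk(samples[:CHUNK_SIZE]),
    discarding everything past the first ~4.26 seconds.
    """
    chunks = [samples[i:i + CHUNK_SIZE]
              for i in range(0, len(samples), CHUNK_SIZE)]
    latents = [encode_chunk(chunk) for chunk in chunks]
    return sum(latents) / len(latents)
```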
1c582b5dc8
added button to refresh voice list; enabled KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/)
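For context on why KV caching gives such a speedup: in autoregressive decoding, the keys and values of already-generated tokens are stored and reused, so each step only projects the newest token instead of recomputing attention inputs for the whole prefix. A toy illustration (not tortoise-tts's implementation) with scalar-dimension vectors:

```python
import math

class KVCache:
    """Toy single-head attention cache for incremental decoding."""

    def __init__(self):
        self.keys: list[list[float]] = []
        self.values: list[list[float]] = []

    def step(self, q: list[float], k: list[float], v: list[float]) -> list[float]:
        """Append this step's key/value, then attend q over all cached keys."""
        self.keys.append(k)
        self.values.append(v)
        # Dot-product scores against every cached key, then softmax.
        scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in self.keys]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Weighted sum of cached values.
        dim = len(self.values[0])
        return [sum(w * val[d] for w, val in zip(weights, self.values))
                for d in range(dim)]
```

Without the cache, step t would recompute keys and values for all t prior tokens; with it, each step does O(1) new work plus the attention itself.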