forked from mrq/tortoise-tts
Commit Graph

258 Commits

Author SHA1 Message Date
mrq
328deeddae forgot to auto compute batch size again if set to 0 2023-02-06 23:14:17 -06:00
mrq
6475045f87 changed ROCm pip index URL from 5.2 to 5.1.1, because it's what worked for me desu 2023-02-06 22:52:40 -06:00
mrq
5d76d47a49 added shell scripts for linux, wrapped sorted() for voice list, I guess 2023-02-06 21:54:31 -06:00
mrq
3b56c437aa fixed combining audio, somehow this broke, oops 2023-02-07 00:26:22 +00:00
mrq
a3c077ba13 added setting to adjust autoregressive sample batch size 2023-02-06 22:31:06 +00:00
mrq
d8c88078f3 Added settings page, added checking for updates (disabled by default), some other things that I don't remember 2023-02-06 21:43:01 +00:00
mrq
d1172ead36 Added encoding and ripping latents used to generate the voice 2023-02-06 16:32:09 +00:00
mrq
e25ec325fe Added tab to read and copy settings from a voice clip (in the future, I'll see about embedding the latent used to generate the voice) 2023-02-06 16:00:44 +00:00
mrq
edb6a173d3 added another (somewhat adequate) example, added metadata storage to generated files (need to add in a viewer later) 2023-02-06 14:17:41 +00:00
mrq
b8b15d827d added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM 2023-02-06 05:10:07 +00:00
mrq
319e7ec0a6 fixed up the computing conditional latents 2023-02-06 03:44:34 +00:00
mrq
3c0648beaf updated README (before I go mad trying to nitpick and edit it while getting distracted from an iToddler sperging) 2023-02-06 00:56:17 +00:00
mrq
b23f583c4e Forgot to rename the cached latents to the new filename 2023-02-05 23:51:52 +00:00
mrq
c2c9b1b683 modified how conditional latents are computed (before, it just happened to only bother reading the first 102400/24000=4.26 seconds per audio input, now it will chunk it all to compute latents) 2023-02-05 23:25:41 +00:00
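The chunking described in commit c2c9b1b683 can be sketched as follows. This is a hypothetical illustration, not the repository's actual code: `chunk_audio` and its default of 102400 samples (just over four seconds at the 24 kHz rate the commit implies) are assumptions, showing only how a full clip would be split into fixed-size windows so every chunk, not just the first, feeds the latent computation.

```python
def chunk_audio(samples, chunk_size=102400):
    """Split a 1-D sequence of audio samples into fixed-size chunks.

    Previously only the first chunk_size samples (102400 / 24000 s at
    24 kHz) were read; chunking the whole clip lets conditional latents
    be computed over all of it. The final chunk may be shorter.
    """
    return [samples[i:i + chunk_size]
            for i in range(0, len(samples), chunk_size)]
```

Each chunk would then be passed through the latent encoder and the per-chunk latents averaged, which is consistent with the OOM-avoidance flag `--cond-latent-max-chunk-size` added in commit b8b15d827d to cap the chunk size on low-VRAM GPUs.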
mrq
4ea997106e oops 2023-02-05 20:10:40 +00:00
mrq
daebc6c21c added button to refresh voice list, enabling KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/) 2023-02-05 17:59:13 +00:00
mrq
7b767e1442 New tunable: pause size/breathing room (governs pause at the end of clips) 2023-02-05 14:45:51 +00:00
mrq
ec31d1763a Fix to keep prompted emotion for every split line 2023-02-05 06:55:09 +00:00
mrq
9ef7f11c0a Updated .gitignore (that does not apply to me because I have a bad habit of having a repo copy separate from a working copy) 2023-02-05 06:40:50 +00:00
mrq
98dbf56d44 Skip combining if not splitting, also avoids reading back the audio files to combine them by keeping them in memory 2023-02-05 06:35:32 +00:00
mrq
f38c479e9b Added multi-line parsing 2023-02-05 06:17:51 +00:00
mrq
3e3634f36a Fixed accidentally not passing user-provided samples/iteration values (oops), fixed error thrown when trying to write unicode because python sucks 2023-02-05 05:51:57 +00:00
mrq
26daca3dc6 Forgot to add steps=1 to Candidates slider 2023-02-05 04:27:20 +00:00
mrq
111c45b181 Set transformer and model folder to local './models/' instead of the user profile, because I'm sick of more bloat polluting my C:\ 2023-02-05 04:18:35 +00:00
mrq
d2aeadd754 cleaned up element order with Blocks, also added preset updating the samples/iterations counts 2023-02-05 03:53:46 +00:00
mrq
078dc0c6e2 Added choices to choose between diffusion samplers (p, ddim) 2023-02-05 01:28:31 +00:00
mrq
4274cce218 Added small optimization with caching latents, dropped Anaconda for just a py3.9 + pip + venv setup, added helper install scripts for such, cleaned up app.py, added flag '--low-vram' to disable minor optimizations 2023-02-04 01:50:57 +00:00
mrq
061aa65ac4 Reverted slight improvement patch, as it's just enough to OOM on GPUs with low VRAM 2023-02-03 21:45:06 +00:00
mrq
4f359bffa4 Added progress for transforming to audio, changed number inputs to sliders instead 2023-02-03 04:56:30 +00:00
mrq
ef237c70d0 forgot to copy the alleged slight performance improvement patch, added detailed progress information with passing gr.Progress, save a little more info with output 2023-02-03 04:20:01 +00:00
mrq
aafef3a140 Cleaned up the good-morning-sirs-dialect labels, fixed seed=0 not being a random seed, show seed on output 2023-02-03 01:25:03 +00:00
mrq
1eb92a1236 QoL fixes 2023-02-02 21:13:28 +00:00
mrq
5ebe587898 Quick fixes for Conda 2023-02-01 01:21:56 +00:00
James Betker
98a891e66e Merge pull request #263 from netshade/remove-ffmpeg-dep
Remove FFMPEG dep
2023-01-22 17:55:36 -07:00
chris
b0296ba528 remove ffmpeg requirement, not actually necessary 2023-01-22 16:41:25 -05:00
James Betker
aad67d0e78 Merge pull request #233 from kianmeng/fix-typos
Fix typos
2023-01-17 18:24:24 -07:00
James Betker
69738359c6 Merge pull request #245 from netshade/installation-updates
Documentation and Dependency Updates
2023-01-11 09:30:50 -07:00
chris
0793800526 add explicit requirements.txt usage for dep installation 2023-01-11 10:50:18 -05:00
chris
38d97caf48 update requirements to ensure project will build and run 2023-01-11 10:48:58 -05:00
James Betker
04068133a6 Merge pull request #234 from Wonbin-Jung/ack
Add reference of univnet implementation
2023-01-06 02:03:49 -07:00
원빈 정
092b15eded Add reference of univnet implementation 2023-01-06 15:57:02 +09:00
Kian-Meng Ang
49bbdd597e Fix typos
Found via `codespell -S *.json -L splitted,nd,ser,broadcat`
2023-01-06 11:04:36 +08:00
James Betker
a5a0907e76 Update README.md 2022-12-05 13:16:36 -08:00
James Betker
49aff9a36d Merge pull request #193 from casonclagg/main
Pin transformers version to 4.19, fixes #186, google colab crashing
2022-11-13 22:20:11 -08:00
Cason Clagg
18dbbb56b6 Pin transformers version to 4.19, fixes #186, google colab crashing 2022-11-11 17:16:56 -06:00
James Betker
dd88ad6be6 Merge pull request #122 from mogwai/fix/readme-instructions
Added keyword argument for API usage in README
2022-07-08 08:22:43 -06:00
Harry Coultas Blum
75e920438a Added keyword argument 2022-07-08 14:28:24 +01:00
James Betker
83cc5eb5b4 Update README.md 2022-06-23 15:57:50 -07:00
James Betker
e5201bf14e Get rid of checkpointing
It isn't needed in inference.
2022-06-15 22:09:15 -06:00
James Betker
1aa4e0d4b8 Merge pull request #97 from jnordberg/cpu-support
CPU support
2022-06-12 23:12:03 -06:00