Commit Graph

61 Commits

Author SHA1 Message Date
mrq
4d01bbd429 added button to recalculate voice latents, added experimental switch for computing voice latents 2023-02-12 18:11:40 +00:00
mrq
88529fda43 fixed regression with computing conditioning latents on devices other than the CPU 2023-02-12 17:44:39 +00:00
mrq
65f74692a0 fixed silently crashing when enabling kv_cache-ing with the DirectML backend, now throws an error when reading a generated audio file that has no embedded metadata, cleaned up the blocks of code that DMA/transfer tensors/models between GPU and CPU 2023-02-12 14:46:21 +00:00
mrq
94757f5b41 install python3.9, wrapped try/catch when parsing args.listen in case you somehow manage to insert garbage into that field and fuck up your config, removed a very redundant setup.py install call since that is only required if you're going to install it for use outside of the tortoise-tts folder 2023-02-12 04:35:21 +00:00
mrq
3a8ce5a110 Moved experimental settings to main tab, hidden under a check box 2023-02-11 17:21:08 +00:00
mrq
a7330164ab Added integration for "voicefixer", fixed issue where candidates>1 and lines>1 would only output the last combined candidate, numbered each step of a generation in progress, output time per generation step 2023-02-11 15:02:11 +00:00
mrq
58e2b22b0e History tab (3/10 it works) 2023-02-11 01:45:25 +00:00
mrq
52a9ed7858 Moved voices out of the tortoise folder because it kept being processed for setup.py 2023-02-10 20:11:56 +00:00
mrq
8b83c9083d Cleanup 2023-02-10 19:55:33 +00:00
mrq
7baf9e3f79 Added a link to the colab notebook 2023-02-10 16:26:13 +00:00
mrq
efa556b793 Added new options: "Output Sample Rate", "Output Volume", and documentation 2023-02-10 03:02:09 +00:00
mrq
504db0d1ac Added 'Only Load Models Locally' setting 2023-02-09 22:06:55 +00:00
mrq
460f5d6e32 Added and documented 2023-02-09 21:07:51 +00:00
mrq
3f8302a680 I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way) 2023-02-09 05:05:21 +00:00
mrq
b23d6b4b4c owari da... ("it's over...") 2023-02-09 01:53:25 +00:00
mrq
494f3c84a1 beginning to add DirectML support 2023-02-08 23:03:52 +00:00
mrq
81e4d261b7 Added two flags/settings: embed output settings, slimmer computed voice latents 2023-02-08 14:14:28 +00:00
mrq
e45e4431d1 (finally) added the CVVP model weight slider; latents now export more data for weighing against CVVP 2023-02-07 20:55:56 +00:00
mrq
5d76d47a49 added shell scripts for linux, wrapped sorted() for voice list, I guess 2023-02-06 21:54:31 -06:00
mrq
a3c077ba13 added setting to adjust autoregressive sample batch size 2023-02-06 22:31:06 +00:00
mrq
d8c88078f3 Added settings page, added checking for updates (disabled by default), some other things that I don't remember 2023-02-06 21:43:01 +00:00
mrq
edb6a173d3 added another (somewhat adequate) example, added metadata storage to generated files (need to add in a viewer later) 2023-02-06 14:17:41 +00:00
mrq
3c0648beaf updated README (before I go mad trying to nitpick and edit it while getting distracted from an iToddler sperging) 2023-02-06 00:56:17 +00:00
mrq
b23f583c4e Forgot to rename the cached latents to the new filename 2023-02-05 23:51:52 +00:00
mrq
c2c9b1b683 modified how conditioning latents are computed (before, it only read the first 102400 samples per audio input, i.e. 102400/24000 ≈ 4.27 seconds; now it chunks the whole clip to compute latents) 2023-02-05 23:25:41 +00:00
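A minimal sketch of the chunking idea described in the commit above, assuming a hypothetical `compute_latents_for_clip` callback standing in for the model's per-clip conditioning computation (the actual implementation in this repository may differ):

```python
# Sketch only: split a waveform into fixed-size windows and average the
# per-chunk conditioning latents, instead of truncating to the first
# 102400 samples. `compute_latents_for_clip` is a hypothetical stand-in
# for the model's per-clip latent computation.
import torch
import torch.nn.functional as F

CHUNK_SIZE = 102400  # samples per chunk, as referenced in the commit message

def chunked_conditioning_latents(waveform: torch.Tensor, compute_latents_for_clip):
    """waveform: 1D tensor of audio samples."""
    latents = []
    for start in range(0, waveform.shape[-1], CHUNK_SIZE):
        chunk = waveform[..., start:start + CHUNK_SIZE]
        if chunk.shape[-1] < CHUNK_SIZE:
            # zero-pad the trailing chunk so every window has the same length
            chunk = F.pad(chunk, (0, CHUNK_SIZE - chunk.shape[-1]))
        latents.append(compute_latents_for_clip(chunk))
    # collapse the per-chunk latents into a single conditioning latent
    return torch.stack(latents, dim=0).mean(dim=0)
```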
mrq
daebc6c21c added button to refresh voice list, enabling KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/) 2023-02-05 17:59:13 +00:00
mrq
7b767e1442 New tunable: pause size/breathing room (governs pause at the end of clips) 2023-02-05 14:45:51 +00:00
mrq
98dbf56d44 Skip combining if not splitting, also avoids reading back the audio files to combine them by keeping them in memory 2023-02-05 06:35:32 +00:00
mrq
d2aeadd754 cleaned up element order with Blocks, also added preset updating the samples/iterations counts 2023-02-05 03:53:46 +00:00
mrq
4274cce218 Added small optimization with caching latents, dropped Anaconda for just a py3.9 + pip + venv setup, added helper install scripts for such, cleaned up app.py, added flag '--low-vram' to disable minor optimizations 2023-02-04 01:50:57 +00:00
mrq
aafef3a140 Cleaned up the good-morning-sirs-dialect labels, fixed seed=0 not being a random seed, show seed on output 2023-02-03 01:25:03 +00:00
mrq
1eb92a1236 QoL fixes 2023-02-02 21:13:28 +00:00
James Betker
aad67d0e78 Merge pull request #233 from kianmeng/fix-typos
Fix typos
2023-01-17 18:24:24 -07:00
chris
0793800526 add explicit requirements.txt usage for dep installation 2023-01-11 10:50:18 -05:00
원빈 정
092b15eded Add reference to univnet implementation 2023-01-06 15:57:02 +09:00
Kian-Meng Ang
49bbdd597e Fix typos
Found via `codespell -S *.json -L splitted,nd,ser,broadcat`
2023-01-06 11:04:36 +08:00
James Betker
a5a0907e76 Update README.md 2022-12-05 13:16:36 -08:00
Harry Coultas Blum
75e920438a Added keyword argument 2022-07-08 14:28:24 +01:00
James Betker
83cc5eb5b4 Update README.md 2022-06-23 15:57:50 -07:00
Jai Mu
dc5b296636 Update README.md
Useless update but it was bothering me.
2022-05-22 00:56:06 +09:30
James Betker
849db260ba v2.4 2022-05-17 12:15:13 -06:00
James Betker
556172281d Release notes for 2.3 2022-05-12 20:26:24 -06:00
James Betker
ef2ce3fd05 Update README with suggestions for windows installation 2022-05-08 20:44:44 -06:00
James Betker
e18428166d v2.2 2022-05-06 00:11:10 -06:00
James Betker
4704eb1cef Update readme with prompt engineering 2022-05-03 21:32:06 -06:00
James Betker
958b939b64 Add setup 2022-05-02 21:24:34 -06:00
James Betker
19f38f454f Update README and update to version 2.1 2022-05-02 21:02:29 -06:00
James Betker
e4e8ebfc55 getting ready for 2.1 release 2022-05-02 20:20:50 -06:00
James Betker
01b783fc02 Add support for extracting and feeding conditioning latents directly into the model
- Adds a new script and API endpoints for doing this
- Reworks autoregressive and diffusion models so that the conditioning is computed separately (which will actually provide a mild performance boost)
- Updates README

This is untested. Need to do the following manual tests (and someday write unit tests for this behemoth before it becomes a problem...)
1) Does get_conditioning_latents.py work?
2) Can I feed those latents back into the model by creating a new voice?
3) Can I still mix and match voices (both with conditioning latents and normal voices) with read.py?
2022-05-01 17:25:18 -06:00
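The commit above introduces extracting conditioning latents and feeding them back in as a standalone voice. Below is a minimal sketch of that workflow through the Python API, assuming `get_conditioning_latents()` and `tts_with_preset()` accept the arguments shown; argument names and the saved-latent format may not match this exact revision, so the `get_conditioning_latents.py` script added in this commit remains the canonical reference:

```python
# Sketch of the extract-then-reuse workflow described in the commit above.
# File paths and the "myvoice" name are illustrative only.
import torch
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# 1) Extract conditioning latents from a few reference clips.
clips = [load_audio(p, 22050) for p in ["voices/source/clip1.wav", "voices/source/clip2.wav"]]
conditioning_latents = tts.get_conditioning_latents(clips)

# 2) Save the latents so they can act as a voice on their own (no raw audio needed).
torch.save(conditioning_latents, "voices/myvoice/latents.pth")

# 3) Feed the latents back into the model instead of voice samples.
latents = torch.load("voices/myvoice/latents.pth")
audio = tts.tts_with_preset(
    "Hello from a latent-only voice.",
    voice_samples=None,
    conditioning_latents=latents,
    preset="fast",
)
```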
James Betker
e8966d09cf ack 2022-04-27 23:22:55 -06:00