Commit Graph

283 Commits

Author SHA1 Message Date
mrq
2427c98333 Implemented kv_cache "fix" (from 1f3c1b5f4a); guess I should find out why it's crashing the DirectML backend 2023-02-13 13:48:31 +00:00
mrq
b383222be2 Merge pull request 'Download from Gradio' (#31) from Armored1065/tortoise-tts:main into main
Reviewed-on: mrq/tortoise-tts#31
2023-02-13 13:30:09 +00:00
Armored1065
446d643d62 Merge pull request 'Update 'README.md'' (#1) from armored1065-patch-1 into main
Reviewed-on: Armored1065/tortoise-tts#1
2023-02-13 06:21:37 +00:00
Armored1065
99f901baa9 Update 'README.md'
Updated text to reflect the download and playback options
2023-02-13 06:19:42 +00:00
mrq
37d25573ac added random voice option back because I forgot I accidentally removed it 2023-02-13 04:57:06 +00:00
mrq
a84aaa4f96 Fixed out of order settings causing other settings to flipflop 2023-02-13 03:43:08 +00:00
mrq
4ced0296a2 DirectML: fixed redaction/aligner by forcing it to stay on CPU 2023-02-12 20:52:04 +00:00
mrq
409dec98d5 fixed voicefixing not working as intended, load TTS before Gradio in the webui due to how long it takes to initialize tortoise (instead of just having a block to preload it) 2023-02-12 20:05:59 +00:00
mrq
b85c9921d7 added button to recalculate voice latents, added experimental switch for computing voice latents 2023-02-12 18:11:40 +00:00
mrq
2210b49cb6 fixed regression with computing conditioning latents outside of the CPU 2023-02-12 17:44:39 +00:00
mrq
a2d95fe208 fixed silently crashing from enabling kv_cache-ing if using the DirectML backend, throw an error when reading a generated audio file that does not have any embedded metadata in it, cleaned up the blocks of code that would DMA/transfer tensors/models between GPU and CPU 2023-02-12 14:46:21 +00:00
mrq
25e70dce1a install python3.9, wrapped try/catch when parsing args.listen in case you somehow manage to insert garbage into that field and fuck up your config, removed a very redundant setup.py install call since that is only required if you're going to install it for use outside of the tortoise-tts folder 2023-02-12 04:35:21 +00:00
mrq
6328466852 cleanup loop, save files while generating a batch in the event it crashes midway through 2023-02-12 01:15:22 +00:00
mrq
5f1c032312 fixed regression where the auto_conds did not move to the GPU, causing a problem during the CVVP compare pass 2023-02-11 20:34:12 +00:00
mrq
2f86565969 Merge pull request 'Only directories in the voice list' (#20) from lightmare/tortoise-tts:only_dirs_in_voice_list into main
Reviewed-on: mrq/tortoise-tts#20
2023-02-11 20:14:36 +00:00
lightmare
192a510ee1 Only directories in the voice list 2023-02-11 18:26:51 +00:00
mrq
84316d8f80 Moved experimental settings to main tab, hidden under a check box 2023-02-11 17:21:08 +00:00
mrq
50073e635f sloppily guarantee stop/reloading TTS actually works 2023-02-11 17:01:40 +00:00
mrq
4b3b0ead1a Added candidate selection for outputs, hide output elements (except for the main one) to only show one progress bar 2023-02-11 16:34:47 +00:00
mrq
c5337a6b51 Added integration for "voicefixer", fixed issue where candidates>1 and lines>1 would only output the last combined candidate, numbered each step for a generation in progress, output time per generation step 2023-02-11 15:02:11 +00:00
mrq
fa743e2e9b store generation time per generation rather than per entire request 2023-02-11 13:00:39 +00:00
mrq
ffb269e579 fixed using the old output dir (my autism with prefixing everything with "./" broke it), fixed incrementing filenames 2023-02-11 12:39:16 +00:00
mrq
9bf1ea5b0a History tab (3/10 it works) 2023-02-11 01:45:25 +00:00
mrq
340a89f883 Numbering predicates on input_#.json files instead of "number of wavs" 2023-02-10 22:51:56 +00:00
mrq
8641cc9906 revamped result formatting, added "kludgy" stop button 2023-02-10 22:12:37 +00:00
mrq
8f789d17b9 Slight notebook adjust 2023-02-10 20:22:12 +00:00
mrq
7471bc209c Moved voices out of the tortoise folder because it kept being processed for setup.py 2023-02-10 20:11:56 +00:00
mrq
2bce24b9dd Cleanup 2023-02-10 19:55:33 +00:00
mrq
811539b20a Added the remaining input settings 2023-02-10 16:47:57 +00:00
mrq
f5ed5499a0 Added a link to the colab notebook 2023-02-10 16:26:13 +00:00
mrq
07c54ad361 Colab notebook (part II) 2023-02-10 16:12:11 +00:00
mrq
939c89f16e Colab notebook (part 1) 2023-02-10 15:58:56 +00:00
mrq
39b81318f2 Added new options: "Output Sample Rate", "Output Volume", and documentation 2023-02-10 03:02:09 +00:00
mrq
77b39e59ac oops 2023-02-09 22:17:57 +00:00
mrq
3621e16ef9 Added 'Only Load Models Locally' setting 2023-02-09 22:06:55 +00:00
mrq
dccedc3f66 Added and documented 2023-02-09 21:07:51 +00:00
mrq
8c30cd1aa4 Oops 2023-02-09 20:49:22 +00:00
mrq
d7443dfa06 Added option: listen path 2023-02-09 20:42:38 +00:00
mrq
38ee19cd57 I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way) 2023-02-09 05:05:21 +00:00
mrq
716e227953 oops 2023-02-09 02:39:08 +00:00
mrq
a37546ad99 owari da... ("it's over...") 2023-02-09 01:53:25 +00:00
mrq
6255c98006 beginning to add DirectML support 2023-02-08 23:03:52 +00:00
mrq
d9a9fa6a82 Added two flags/settings: embed output settings, slimmer computed voice latents 2023-02-08 14:14:28 +00:00
mrq
f03b6b8d97 disable telemetry/what-have-you if not requesting a public Gradio URL 2023-02-07 21:44:16 +00:00
mrq
479f30c808 Merge pull request 'Added convert.sh' (#8) from lightmare/tortoise-tts:convert_sh into main
Reviewed-on: mrq/tortoise-tts#8
2023-02-07 21:09:00 +00:00
lightmare
40f52fa8d1 Added convert.sh 2023-02-07 21:09:00 +00:00
mrq
6ebdde58f0 (finally) added the CVVP model weight slider, latents also export more data for weighting against CVVP 2023-02-07 20:55:56 +00:00
mrq
793515772a un-hardcoded input/output sampling rates (changing them "works" but leads to wrong audio, naturally) 2023-02-07 18:34:29 +00:00
mrq
5f934c5feb (maybe) fixed an issue with using prompt redactions (emotions) on CPU causing a crash, because for some reason the wav2vec_alignment assumed CUDA was always available 2023-02-07 07:51:05 -06:00
mrq
d6b5d67f79 forgot to auto compute batch size again if set to 0 2023-02-06 23:14:17 -06:00