Commit Graph

293 Commits

Author SHA1 Message Date
mrq
c533b1b391 oops 2023-02-15 05:55:01 +00:00
mrq
88535f192d voicefixed files do not overwrite, as my autism wants to hear the difference between them; incrementing filename format fixed for real 2023-02-15 05:49:28 +00:00
mrq
f4d2d0d7f8 added option: force cpu for conditioning latents, for when you want low chunk counts but your GPU keeps OOMing because fuck fragmentation 2023-02-15 05:01:40 +00:00
mrq
defa460028 modified conversion scripts to not give a shit about bitrate and formats since torchaudio.load handles all of that anyways, and it all gets resampled anyways 2023-02-15 04:44:14 +00:00
mrq
2ee6068f98 done away with kludgy shit code, just have the user decide how many chunks to slice concat'd samples into (since it actually does improve voice replicability) 2023-02-15 04:39:31 +00:00
mrq
c12ada600b added reset generation settings to default button; revamped utilities tab to double as plain jane voice importer (and runs through voicefixer despite it not really doing anything if your voice samples are already of decent quality anyways); ditched load_wav_to_torch or whatever it was called because it literally exists as torchaudio.load; sample voice is now a combined waveform of all your samples and will always return even if using a latents file 2023-02-14 21:20:04 +00:00
mrq
15924bd3ec updates chunk size to the chunked tensor length, just in case 2023-02-14 17:13:34 +00:00
mrq
b4ca260de9 added flag to enable/disable voicefixer using CUDA because I'll OOM on my 2060, changed from naively subdividing evenly (2,4,8,16 pieces) to just incrementing by 1 (1,2,3,4) when trying to subdivide within constraints of the max chunk size for computing voice latents 2023-02-14 16:47:34 +00:00
mrq
b16eb99538 history tab doesn't naively reuse the voice dir for results, experimental "divide total sound size until it fits under the requested max chunk size" doesn't have a +1 to mess things up (need to re-evaluate how I want to calculate sizes of best fits eventually) 2023-02-14 16:23:04 +00:00
mrq
5e843fe29d voicefixer uses CUDA if exposed 2023-02-13 15:30:49 +00:00
mrq
2427c98333 Implemented kv_cache "fix" (from 1f3c1b5f4a); guess I should find out why it's crashing DirectML backend 2023-02-13 13:48:31 +00:00
mrq
b383222be2 Merge pull request 'Download from Gradio' (#31) from Armored1065/tortoise-tts:main into main
Reviewed-on: mrq/tortoise-tts#31
2023-02-13 13:30:09 +00:00
Armored1065
446d643d62 Merge pull request 'Update 'README.md'' (#1) from armored1065-patch-1 into main
Reviewed-on: Armored1065/tortoise-tts#1
2023-02-13 06:21:37 +00:00
Armored1065
99f901baa9 Update 'README.md'
Updated text to reflect the download and playback options
2023-02-13 06:19:42 +00:00
mrq
37d25573ac added random voice option back because I forgot I accidentally removed it 2023-02-13 04:57:06 +00:00
mrq
a84aaa4f96 Fixed out of order settings causing other settings to flipflop 2023-02-13 03:43:08 +00:00
mrq
4ced0296a2 DirectML: fixed redaction/aligner by forcing it to stay on CPU 2023-02-12 20:52:04 +00:00
mrq
409dec98d5 fixed voicefixing not working as intended, load TTS before Gradio in the webui due to how long it takes to initialize tortoise (instead of just having a block to preload it) 2023-02-12 20:05:59 +00:00
mrq
b85c9921d7 added button to recalculate voice latents, added experimental switch for computing voice latents 2023-02-12 18:11:40 +00:00
mrq
2210b49cb6 fixed regression with computing conditioning latents outside of the CPU 2023-02-12 17:44:39 +00:00
mrq
a2d95fe208 fixed silently crashing from enabling kv_cache-ing if using the DirectML backend, throw an error when reading a generated audio file that does not have any embedded metadata in it, cleaned up the blocks of code that would DMA/transfer tensors/models between GPU and CPU 2023-02-12 14:46:21 +00:00
mrq
25e70dce1a install python3.9, wrapped try/catch when parsing args.listen in case you somehow manage to insert garbage into that field and fuck up your config, removed a very redundant setup.py install call since that's only required if you're going to install it for use outside of the tortoise-tts folder 2023-02-12 04:35:21 +00:00
mrq
6328466852 cleanup loop, save files while generating a batch in the event it crashes midway through 2023-02-12 01:15:22 +00:00
mrq
5f1c032312 fixed regression where the auto_conds do not move to the GPU, causing a problem during the CVVP compare pass 2023-02-11 20:34:12 +00:00
mrq
2f86565969 Merge pull request 'Only directories in the voice list' (#20) from lightmare/tortoise-tts:only_dirs_in_voice_list into main
Reviewed-on: mrq/tortoise-tts#20
2023-02-11 20:14:36 +00:00
lightmare
192a510ee1 Only directories in the voice list 2023-02-11 18:26:51 +00:00
mrq
84316d8f80 Moved experimental settings to main tab, hidden under a check box 2023-02-11 17:21:08 +00:00
mrq
50073e635f sloppily guarantee stop/reloading TTS actually works 2023-02-11 17:01:40 +00:00
mrq
4b3b0ead1a Added candidate selection for outputs, hide output elements (except for the main one) to only show one progress bar 2023-02-11 16:34:47 +00:00
mrq
c5337a6b51 Added integration for "voicefixer", fixed issue where candidates>1 and lines>1 only outputs the last combined candidate, numbered step for each generation in progress, output time per generation step 2023-02-11 15:02:11 +00:00
mrq
fa743e2e9b store generation time per generation rather than per entire request 2023-02-11 13:00:39 +00:00
mrq
ffb269e579 fixed using the old output dir because my autism with prefixing everything with "./" broke it, fixed incrementing filenames 2023-02-11 12:39:16 +00:00
mrq
9bf1ea5b0a History tab (3/10 it works) 2023-02-11 01:45:25 +00:00
mrq
340a89f883 Numbering predicates on input_#.json files instead of "number of wavs" 2023-02-10 22:51:56 +00:00
mrq
8641cc9906 revamped result formatting, added "kludgy" stop button 2023-02-10 22:12:37 +00:00
mrq
8f789d17b9 Slight notebook adjust 2023-02-10 20:22:12 +00:00
mrq
7471bc209c Moved voices out of the tortoise folder because it kept being processed for setup.py 2023-02-10 20:11:56 +00:00
mrq
2bce24b9dd Cleanup 2023-02-10 19:55:33 +00:00
mrq
811539b20a Added the remaining input settings 2023-02-10 16:47:57 +00:00
mrq
f5ed5499a0 Added a link to the colab notebook 2023-02-10 16:26:13 +00:00
mrq
07c54ad361 Colab notebook (part II) 2023-02-10 16:12:11 +00:00
mrq
939c89f16e Colab notebook (part 1) 2023-02-10 15:58:56 +00:00
mrq
39b81318f2 Added new options: "Output Sample Rate", "Output Volume", and documentation 2023-02-10 03:02:09 +00:00
mrq
77b39e59ac oops 2023-02-09 22:17:57 +00:00
mrq
3621e16ef9 Added 'Only Load Models Locally' setting 2023-02-09 22:06:55 +00:00
mrq
dccedc3f66 Added and documented 2023-02-09 21:07:51 +00:00
mrq
8c30cd1aa4 Oops 2023-02-09 20:49:22 +00:00
mrq
d7443dfa06 Added option: listen path 2023-02-09 20:42:38 +00:00
mrq
38ee19cd57 I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way) 2023-02-09 05:05:21 +00:00
mrq
716e227953 oops 2023-02-09 02:39:08 +00:00
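
Several commits above (2ee6068f98, b4ca260de9, b16eb99538) revolve around splitting a concatenated voice sample into chunks small enough to compute conditioning latents without OOMing, with the strategy changing from doubling the piece count (2, 4, 8, 16) to incrementing it by 1 (1, 2, 3, 4). The snippet below is a minimal sketch of that increment-by-1 search, not the repository's actual code; the names pick_chunk_count, split_into_chunks, and max_chunk_size are invented for illustration.

```python
import torch

def pick_chunk_count(total_samples: int, max_chunk_size: int) -> int:
    # Increment the chunk count by 1 (1, 2, 3, ...) until every chunk
    # fits under max_chunk_size, instead of doubling (2, 4, 8, 16).
    chunks = 1
    while total_samples / chunks > max_chunk_size:
        chunks += 1
    return chunks

def split_into_chunks(concatenated: torch.Tensor, max_chunk_size: int) -> list:
    # Split a concatenated waveform into roughly equal pieces along the time axis.
    count = pick_chunk_count(concatenated.shape[-1], max_chunk_size)
    return list(torch.chunk(concatenated, count, dim=-1))

# Example (hypothetical numbers): a 10-second mono waveform at 22.05 kHz,
# capped at roughly 4 seconds of audio per chunk.
waveform = torch.randn(1, 22050 * 10)
chunks = split_into_chunks(waveform, max_chunk_size=22050 * 4)
print(len(chunks), [chunk.shape[-1] for chunk in chunks])
```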