88529fda43 | fixed regression with computing conditional latents outside of the CPU | 2023-02-12 17:44:39 +00:00
65f74692a0 | fixed a silent crash when enabling kv_cache-ing with the DirectML backend, throw an error when reading a generated audio file that has no embedded metadata, cleaned up the blocks of code that DMA/transfer tensors/models between GPU and CPU | 2023-02-12 14:46:21 +00:00
94757f5b41 | install python3.9, wrapped a try/catch around parsing args.listen in case you somehow manage to insert garbage into that field and break your config, removed a redundant setup.py install call since that is only required if you intend to install it for use outside of the tortoise-tts folder | 2023-02-12 04:35:21 +00:00
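The args.listen guard described above could look something like this minimal sketch; the `parse_listen` helper, the default host/port values, and the URL-style listen string are assumptions for illustration, not the project's actual code:

```python
from urllib.parse import urlparse

DEFAULT_HOST, DEFAULT_PORT = "127.0.0.1", 7860

def parse_listen(listen: str):
    """Parse a listen setting like 'http://0.0.0.0:8000/' into (host, port),
    falling back to sane defaults if the field contains garbage."""
    try:
        parsed = urlparse(listen)
        host = parsed.hostname or DEFAULT_HOST
        port = parsed.port or DEFAULT_PORT  # .port raises ValueError on junk
        return host, port
    except ValueError:
        return DEFAULT_HOST, DEFAULT_PORT
```

With this shape, a malformed value degrades to the defaults instead of crashing at startup.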
ddd0c4ccf8 | cleaned up the generation loop, save files while generating a batch in the event it crashes midway through | 2023-02-12 01:15:22 +00:00
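Saving each output as soon as it is generated, rather than once at the end of the batch, can be sketched as follows; `generate_batch`, the `synthesize` callable, and the `result_{i}.json` naming are hypothetical stand-ins, not the repository's actual functions:

```python
import json
from pathlib import Path

def generate_batch(prompts, synthesize, outdir="results"):
    """Write each result to disk as soon as it is produced, so a crash
    midway through the batch does not lose the items already finished."""
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    saved = []
    for i, prompt in enumerate(prompts):
        result = synthesize(prompt)               # may raise partway through
        path = out / f"result_{i}.json"
        path.write_text(json.dumps({"prompt": prompt, "result": result}))
        saved.append(path)                        # persisted before the next item runs
    return saved
```

If `synthesize` throws on item N, items 0..N-1 are already on disk.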
1b55730e67 | fixed regression where the auto_conds were not moved to the GPU, causing a problem during the CVVP compare pass | 2023-02-11 20:34:12 +00:00
mrq | 3d69274a46 | Merge pull request 'Only directories in the voice list' (#20) from lightmare/tortoise-tts:only_dirs_in_voice_list into main (Reviewed-on: mrq/tortoise-tts#20) | 2023-02-11 20:14:36 +00:00
lightmare | 13b60db29c | Only directories in the voice list | 2023-02-11 18:26:51 +00:00
3a8ce5a110 | Moved experimental settings to the main tab, hidden under a checkbox | 2023-02-11 17:21:08 +00:00
126f1a0afe | sloppily guarantee that stopping/reloading TTS actually works | 2023-02-11 17:01:40 +00:00
6d06bcce05 | Added candidate selection for outputs, hide output elements (except for the main one) to only show one progress bar | 2023-02-11 16:34:47 +00:00
a7330164ab | Added integration for "voicefixer", fixed issue where candidates>1 and lines>1 only output the last combined candidate, numbered each generation step in progress, output the time per generation step | 2023-02-11 15:02:11 +00:00
841754602e | store generation time per generation rather than per entire request | 2023-02-11 13:00:39 +00:00
44eba62dc8 | fixed the output dir reverting to the old path (my habit of prefixing everything with "./" broke it), fixed incrementing filenames | 2023-02-11 12:39:16 +00:00
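Incrementing output filenames so new generations never clobber earlier ones can be sketched roughly like this; `next_filename` and the `{voice}_{n}.wav` pattern are illustrative assumptions, not the actual naming scheme:

```python
from pathlib import Path

def next_filename(outdir, voice, ext=".wav"):
    """Return the first {voice}_{n}{ext} path that does not exist yet, so
    repeated generations append a fresh index instead of overwriting."""
    outdir = Path(outdir)
    n = 0
    while (outdir / f"{voice}_{n}{ext}").exists():
        n += 1
    return outdir / f"{voice}_{n}{ext}"
```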
58e2b22b0e | History tab (3/10, it works) | 2023-02-11 01:45:25 +00:00
c924ebd034 | Numbering is now predicated on input_#.json files instead of the "number of wavs" | 2023-02-10 22:51:56 +00:00
4f903159ee | revamped result formatting, added "kludgy" stop button | 2023-02-10 22:12:37 +00:00
9e0fbff545 | Slight notebook adjustment | 2023-02-10 20:22:12 +00:00
52a9ed7858 | Moved voices out of the tortoise folder because it kept being processed by setup.py | 2023-02-10 20:11:56 +00:00
8b83c9083d | Cleanup | 2023-02-10 19:55:33 +00:00
a09eff5d9c | Added the remaining input settings | 2023-02-10 16:47:57 +00:00
7baf9e3f79 | Added a link to the colab notebook | 2023-02-10 16:26:13 +00:00
5b852da720 | Colab notebook (part II) | 2023-02-10 16:12:11 +00:00
3d6ac3afaa | Colab notebook (part I) | 2023-02-10 15:58:56 +00:00
efa556b793 | Added new options: "Output Sample Rate", "Output Volume", and documentation | 2023-02-10 03:02:09 +00:00
57af25c6c0 | oops | 2023-02-09 22:17:57 +00:00
504db0d1ac | Added 'Only Load Models Locally' setting | 2023-02-09 22:06:55 +00:00
460f5d6e32 | Added and documented | 2023-02-09 21:07:51 +00:00
145298b766 | Oops | 2023-02-09 20:49:22 +00:00
729be135ef | Added option: listen path | 2023-02-09 20:42:38 +00:00
3f8302a680 | I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way) | 2023-02-09 05:05:21 +00:00
50b4e2c458 | oops | 2023-02-09 02:39:08 +00:00
b23d6b4b4c | it's over... ("owari da...") | 2023-02-09 01:53:25 +00:00
494f3c84a1 | beginning to add DirectML support | 2023-02-08 23:03:52 +00:00
81e4d261b7 | Added two flags/settings: embed output settings, slimmer computed voice latents | 2023-02-08 14:14:28 +00:00
94eab20529 | disable telemetry/what-have-you when not requesting a public Gradio URL | 2023-02-07 21:44:16 +00:00
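One way to express "no telemetry unless a public URL was requested" is via the `GRADIO_ANALYTICS_ENABLED` environment variable that Gradio consults; the `configure_telemetry` wrapper and the `share` flag below are illustrative assumptions, not the project's actual code:

```python
import os

def configure_telemetry(share: bool) -> None:
    """Opt out of Gradio analytics unless the user explicitly asked for a
    public share URL (assumption: telemetry is acceptable only then)."""
    if not share:
        # Gradio reads this env var to decide whether to send analytics.
        os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

# e.g. configure_telemetry(args.share) before calling webui.launch(share=args.share)
```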
mrq | 0bf4fefd42 | Merge pull request 'Added convert.sh' (#8) from lightmare/tortoise-tts:convert_sh into main (Reviewed-on: mrq/tortoise-tts#8) | 2023-02-07 21:09:00 +00:00
lightmare | f62a3675aa | Added convert.sh | 2023-02-07 21:09:00 +00:00
e45e4431d1 | (finally) added the CVVP model weight slider; latents now export more data for weighing against CVVP | 2023-02-07 20:55:56 +00:00
f7274112c3 | un-hardcoded input/output sampling rates (changing them "works" but naturally leads to wrong audio) | 2023-02-07 18:34:29 +00:00
55058675d2 | (maybe) fixed an issue where using prompt redactions (emotions) on CPU caused a crash, because for some reason wav2vec_alignment assumed CUDA was always available | 2023-02-07 07:51:05 -06:00
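The class of bug above, code assuming CUDA is always present, is typically fixed by selecting the device at runtime; `pick_device` is a hypothetical helper sketching the idea, not the actual wav2vec_alignment code:

```python
def pick_device(cuda_available: bool) -> str:
    """Return the torch device string to run on, falling back to CPU
    instead of assuming CUDA exists."""
    return "cuda" if cuda_available else "cpu"

# In practice this would be driven by torch.cuda.is_available():
#   device = pick_device(torch.cuda.is_available())
#   model = model.to(device)
```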
328deeddae | forgot to auto-compute the batch size again if it is set to 0 | 2023-02-06 23:14:17 -06:00
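Treating 0 as "compute the batch size for me" can be sketched as below; `resolve_batch_size` and its default/cap heuristic are assumptions for illustration, not the repository's actual logic:

```python
def resolve_batch_size(requested: int, total_samples: int, default: int = 16) -> int:
    """A value of 0 means 'auto': fall back to a default batch size,
    capped by how many samples there actually are."""
    if requested > 0:
        return requested
    return min(default, max(total_samples, 1))
```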
6475045f87 | changed the ROCm pip index URL from 5.2 to 5.1.1, because that's what worked for me, desu | 2023-02-06 22:52:40 -06:00
5d76d47a49 | added shell scripts for linux, wrapped the voice list in sorted(), I guess | 2023-02-06 21:54:31 -06:00
3b56c437aa | fixed combining audio; somehow this broke, oops | 2023-02-07 00:26:22 +00:00
a3c077ba13 | added a setting to adjust the autoregressive sample batch size | 2023-02-06 22:31:06 +00:00
d8c88078f3 | Added settings page, added checking for updates (disabled by default), and some other things I don't remember | 2023-02-06 21:43:01 +00:00
d1172ead36 | Added encoding and ripping the latents used to generate the voice | 2023-02-06 16:32:09 +00:00
e25ec325fe | Added a tab to read and copy settings from a voice clip (in the future, I'll look into embedding the latents used to generate the voice) | 2023-02-06 16:00:44 +00:00
edb6a173d3 | added another (somewhat adequate) example, added metadata storage to generated files (need to add a viewer later) | 2023-02-06 14:17:41 +00:00
b8b15d827d | added a flag (--cond-latent-max-chunk-size) that restricts the maximum chunk size when chunking to calculate conditional latents, to avoid OOMing on VRAM | 2023-02-06 05:10:07 +00:00
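The chunking behind --cond-latent-max-chunk-size amounts to splitting the conditioning clips into bounded slices that can be encoded one at a time; `chunk_clips` below is a simplified stand-in for the actual latent computation:

```python
def chunk_clips(clips, max_chunk_size):
    """Split the conditioning clips into chunks of at most max_chunk_size,
    so each chunk can be encoded separately without exhausting VRAM.
    A non-positive limit means 'no chunking'."""
    if max_chunk_size <= 0:
        return [clips]
    return [clips[i:i + max_chunk_size]
            for i in range(0, len(clips), max_chunk_size)]
```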