7cc0250a1a
added more kill checks, since they were only applied on the first iteration of a loop
2023-02-24 23:10:04 +00:00

de46cf7831
adding magically deleted files back (might have a hunch on what happened)
2023-02-24 19:30:04 +00:00

2c7c02eb5c
moved the old readme back, to align with how DLAS is set up, sorta
2023-02-19 17:37:36 +00:00

34b232927e
Oops
2023-02-19 01:54:21 +00:00

d8c6739820
added constructor argument and function to load a user-specified autoregressive model
2023-02-18 14:08:45 +00:00

00cb19b6cf
added arg to skip voice latents when grabbing voice lists (for preparing datasets)
2023-02-17 04:50:02 +00:00

b255a77a05
updated notebooks to use the new "main" setup
2023-02-17 03:31:19 +00:00

150138860c
oops
2023-02-17 01:46:38 +00:00

6ad3477bfd
one more update
2023-02-16 23:18:02 +00:00

413703b572
fixed colab to use the new repo; reordered loading tortoise before the web UI for people who don't wait
2023-02-16 22:12:13 +00:00

30298b9ca3
fixing brain worms
2023-02-16 21:36:49 +00:00

d53edf540e
pip-ifying things
2023-02-16 19:48:06 +00:00

d159346572
oops
2023-02-16 13:23:07 +00:00

eca61af016
actually fixed incrementing filenames for real; the regex only worked when candidates or lines > 1. CUDA now takes priority over DML if you're a nut with both of them installed, since you can just specify an override anyway
2023-02-16 01:06:32 +00:00
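The filename fix above boils down to scanning existing outputs for a numeric suffix on every file, not just multi-candidate or multi-line runs. A minimal stdlib sketch; the function name and naming scheme here are hypothetical, not the repo's actual code:

```python
import re

def next_file_index(existing_names, voice):
    """Return the next numeric suffix for outputs named '<voice>_<n>.wav'.

    Matches every existing file, so even a lone '<voice>_0.wav'
    (one candidate, one line) bumps the counter to 1.
    """
    pattern = re.compile(rf"^{re.escape(voice)}_(\d+)\.wav$")
    indices = [int(m.group(1)) for n in existing_names if (m := pattern.match(n))]
    return max(indices, default=-1) + 1
```

Taking the max rather than counting files also survives gaps left by deleted outputs.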

ec80ca632b
added setting "device-override", less naively decide the number to use for results, some other thing
2023-02-15 21:51:22 +00:00
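The device-override setting, combined with the CUDA-over-DML priority from the entry above, amounts to a small selection function. A hypothetical sketch (the real code would query torch / torch-directml for availability):

```python
def pick_device(override=None, has_cuda=False, has_dml=False):
    """Pick the compute device: an explicit user override always wins;
    otherwise CUDA takes priority over DirectML; otherwise fall back to CPU."""
    if override:
        return override
    if has_cuda:
        return "cuda"
    if has_dml:
        return "dml"
    return "cpu"
```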

dcc5c140e6
fixes
2023-02-15 15:33:08 +00:00

729b292515
oops x2
2023-02-15 05:57:42 +00:00

5bf98de301
oops
2023-02-15 05:55:01 +00:00

3e8365fdec
voicefixed files no longer overwrite, as my autism wants to hear the difference between them; incrementing filename format fixed for real
2023-02-15 05:49:28 +00:00

ea1bc770aa
added option: force CPU for conditioning latents, for when you want low chunk counts but your GPU keeps OOMing because fuck fragmentation
2023-02-15 05:01:40 +00:00

b721e395b5
modified conversion scripts to not give a shit about bitrates and formats, since torchaudio.load handles all of that and it all gets resampled anyway
2023-02-15 04:44:14 +00:00

2e777e8a67
done away with the kludgy shit code; just have the user decide how many chunks to slice concat'd samples into (since it actually does improve voice replicability)
2023-02-15 04:39:31 +00:00
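Letting the user pick the chunk count just means splitting the concatenated samples into that many roughly equal slices. A stdlib-only sketch under the assumption the samples are a flat sequence (the real code operates on torch tensors):

```python
import math

def slice_into_chunks(samples, chunks):
    """Split a concatenated sample sequence into `chunks` roughly equal pieces."""
    size = math.ceil(len(samples) / chunks)
    return [samples[i:i + size] for i in range(0, len(samples), size)]
```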

314feaeea1
added a "reset generation settings to default" button; revamped the utilities tab to double as a plain jane voice importer (it runs through voicefixer, despite that not really doing anything if your voice samples are already of decent quality); ditched load_wav_to_torch or whatever it was called, because it literally exists as torchaudio.load; sample voice is now a combined waveform of all your samples and will always be returned, even when using a latents file
2023-02-14 21:20:04 +00:00

0bc2c1f540
updates chunk size to the chunked tensor length, just in case
2023-02-14 17:13:34 +00:00

48275899e8
added flag to enable/disable voicefixer using CUDA because I'll OOM on my 2060; changed from naively subdividing evenly (2, 4, 8, 16 pieces) to just incrementing by 1 (1, 2, 3, 4) when trying to subdivide within the constraints of the max chunk size for computing voice latents
2023-02-14 16:47:34 +00:00
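The switch from power-of-two subdivision to incrementing the divisor by 1 can be sketched as follows (names hypothetical); the finer-grained search finds the smallest chunk count whose pieces fit under the limit, instead of overshooting by jumping straight from, say, 2 pieces to 4:

```python
def pick_chunk_count(total_size, max_chunk_size):
    """Smallest divisor n (1, 2, 3, ...) such that each of the n chunks
    fits under max_chunk_size; replaces doubling (2, 4, 8, 16)."""
    n = 1
    while total_size / n > max_chunk_size:
        n += 1
    return n
```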

b648186691
history tab no longer naively reuses the voice dir for results; the experimental "divide total sound size until it fits under the requested max chunk size" no longer has a +1 to mess things up (need to re-evaluate how I want to calculate best-fit sizes eventually)
2023-02-14 16:23:04 +00:00

47f4b5bf81
voicefixer uses CUDA if exposed
2023-02-13 15:30:49 +00:00

8250a79b23
Implemented kv_cache "fix" (from 1f3c1b5f4a); guess I should find out why it's crashing the DirectML backend
2023-02-13 13:48:31 +00:00

mrq

80eeef01fb
Merge pull request 'Download from Gradio' (#31) from Armored1065/tortoise-tts:main into main
Reviewed-on: #31
2023-02-13 13:30:09 +00:00

Armored1065

8c96aa02c5
Merge pull request 'Update 'README.md'' (#1) from armored1065-patch-1 into main
Reviewed-on: Armored1065/tortoise-tts#1
2023-02-13 06:21:37 +00:00

Armored1065

d458e932be
Update 'README.md'
Updated text to reflect the download and playback options
2023-02-13 06:19:42 +00:00

f92e432c8d
added random voice option back because I forgot I accidentally removed it
2023-02-13 04:57:06 +00:00

a2bac3fb2c
Fixed out-of-order settings causing other settings to flip-flop
2023-02-13 03:43:08 +00:00

5b5e32338c
DirectML: fixed redaction/aligner by forcing it to stay on the CPU
2023-02-12 20:52:04 +00:00

824ad38cca
fixed voicefixing not working as intended; load TTS before Gradio in the web UI because of how long tortoise takes to initialize (instead of just having a block to preload it)
2023-02-12 20:05:59 +00:00

4d01bbd429
added button to recalculate voice latents, added experimental switch for computing voice latents
2023-02-12 18:11:40 +00:00

88529fda43
fixed regression with computing conditioning latents outside of the CPU
2023-02-12 17:44:39 +00:00

65f74692a0
fixed silently crashing from enabling kv_cache-ing when using the DirectML backend; throw an error when reading a generated audio file that has no embedded metadata; cleaned up the blocks of code that DMA/transfer tensors and models between GPU and CPU
2023-02-12 14:46:21 +00:00
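The kv_cache fix reads like a backend guard: honor the user's setting everywhere except DirectML, where it crashes. A hypothetical sketch of that guard:

```python
def effective_kv_cache(requested, backend):
    """kv_cache-ing crashes under the DirectML backend, so force it off
    there; honor the user's setting on every other backend."""
    return False if backend == "dml" else requested
```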

94757f5b41
install python3.9; wrapped parsing args.listen in a try/catch in case you somehow manage to insert garbage into that field and fuck up your config; removed a very redundant setup.py install call, since that's only required if you're going to use it outside of the tortoise-tts folder
2023-02-12 04:35:21 +00:00

ddd0c4ccf8
cleaned up loop, save files while generating a batch in case it crashes midway through
2023-02-12 01:15:22 +00:00

1b55730e67
fixed regression where the auto_conds do not move to the GPU, causing a problem during the CVVP compare pass
2023-02-11 20:34:12 +00:00

mrq

3d69274a46
Merge pull request 'Only directories in the voice list' (#20) from lightmare/tortoise-tts:only_dirs_in_voice_list into main
Reviewed-on: #20
2023-02-11 20:14:36 +00:00

lightmare

13b60db29c
Only directories in the voice list
2023-02-11 18:26:51 +00:00

3a8ce5a110
Moved experimental settings to the main tab, hidden under a checkbox
2023-02-11 17:21:08 +00:00

126f1a0afe
sloppily guarantee that stopping/reloading TTS actually works
2023-02-11 17:01:40 +00:00

6d06bcce05
Added candidate selection for outputs; hide output elements (except the main one) to only show one progress bar
2023-02-11 16:34:47 +00:00

a7330164ab
Added integration for "voicefixer"; fixed issue where candidates > 1 and lines > 1 only output the last combined candidate; numbered step for each generation in progress; output time per generation step
2023-02-11 15:02:11 +00:00

841754602e
store generation time per generation rather than per entire request
2023-02-11 13:00:39 +00:00

44eba62dc8
fixed using the old output dir (my autism with prefixing everything with "./" broke the comparison), fixed incrementing filenames
2023-02-11 12:39:16 +00:00
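The "./" prefix bug is a classic path-comparison pitfall: "./results" and "results" name the same directory but differ as strings. Normalizing before comparing avoids it; an illustrative stdlib sketch (the function name is hypothetical):

```python
import os

def same_path(a, b):
    """Compare paths after normalization, so './results', 'results/',
    and 'results' all compare equal."""
    return os.path.normpath(a) == os.path.normpath(b)
```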

58e2b22b0e
History tab (3/10 it works)
2023-02-11 01:45:25 +00:00