729be135ef | Added option: listen path | 2023-02-09 20:42:38 +00:00
3f8302a680 | I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way) | 2023-02-09 05:05:21 +00:00
50b4e2c458 | oops | 2023-02-09 02:39:08 +00:00
b23d6b4b4c | owari da... (it's over...) | 2023-02-09 01:53:25 +00:00
494f3c84a1 | beginning to add DirectML support | 2023-02-08 23:03:52 +00:00
81e4d261b7 | Added two flags/settings: embed output settings, slimmer computed voice latents | 2023-02-08 14:14:28 +00:00
94eab20529 | disable telemetry/what-have-you if not requesting a public Gradio URL | 2023-02-07 21:44:16 +00:00
mrq | 0bf4fefd42 | Merge pull request 'Added convert.sh' (#8) from lightmare/tortoise-tts:convert_sh into main (Reviewed-on: mrq/tortoise-tts#8) | 2023-02-07 21:09:00 +00:00
lightmare | f62a3675aa | Added convert.sh | 2023-02-07 21:09:00 +00:00
e45e4431d1 | (finally) added the CVVP model weight slider; latents also export more data for weighing against CVVP | 2023-02-07 20:55:56 +00:00
f7274112c3 | un-hardcoded input/output sampling rates (changing them "works" but naturally leads to wrong audio) | 2023-02-07 18:34:29 +00:00
55058675d2 | (maybe) fixed an issue with using prompt redactions (emotions) on CPU causing a crash, because for some reason the wav2vec_alignment assumed CUDA was always available | 2023-02-07 07:51:05 -06:00
328deeddae | forgot to auto-compute the batch size again if it's set to 0 | 2023-02-06 23:14:17 -06:00
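Note: for the auto-computed batch size above, one plausible heuristic (an assumption for illustration, not necessarily what the repo does) is to derive the autoregressive sample batch size from free VRAM when the user leaves it at 0, falling back to a small default on CPU:

```python
import torch

def auto_batch_size(requested: int, samples: int) -> int:
    """If the user requests batch size 0, pick one automatically.

    Assumed heuristic: roughly one autoregressive sample per GB of free
    VRAM, clamped to the total sample count.
    """
    if requested > 0:
        return min(requested, samples)
    if not torch.cuda.is_available():
        return min(4, samples)                 # conservative CPU default (assumed)
    free_bytes, _total = torch.cuda.mem_get_info()
    per_gb = free_bytes / (1024 ** 3)
    return max(1, min(int(per_gb), samples))

# Example: 0 means "figure it out for me"
print(auto_batch_size(0, samples=16))
```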
6475045f87 | changed ROCm pip index URL from 5.2 to 5.1.1, because it's what worked for me desu | 2023-02-06 22:52:40 -06:00
5d76d47a49 | added shell scripts for linux, wrapped sorted() for voice list, I guess | 2023-02-06 21:54:31 -06:00
3b56c437aa | fixed combining audio, somehow this broke, oops | 2023-02-07 00:26:22 +00:00
a3c077ba13 | added setting to adjust autoregressive sample batch size | 2023-02-06 22:31:06 +00:00
d8c88078f3 | Added settings page, added checking for updates (disabled by default), some other things that I don't remember | 2023-02-06 21:43:01 +00:00
d1172ead36 | Added encoding and ripping latents used to generate the voice | 2023-02-06 16:32:09 +00:00
e25ec325fe | Added tab to read and copy settings from a voice clip (in the future, I'll see about embedding the latent used to generate the voice) | 2023-02-06 16:00:44 +00:00
edb6a173d3 | added another (somewhat adequate) example, added metadata storage to generated files (need to add in a viewer later) | 2023-02-06 14:17:41 +00:00
b8b15d827d | added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM | 2023-02-06 05:10:07 +00:00
319e7ec0a6 | fixed up computing conditional latents | 2023-02-06 03:44:34 +00:00
3c0648beaf | updated README (before I go mad trying to nitpick and edit it while getting distracted from an iToddler sperging) | 2023-02-06 00:56:17 +00:00
b23f583c4e | Forgot to rename the cached latents to the new filename | 2023-02-05 23:51:52 +00:00
c2c9b1b683 | modified how conditional latents are computed (before, it just happened to only bother reading the first 102400 samples, i.e. 102400/24000 ≈ 4.27 seconds, of each audio input; now it chunks the whole clip to compute latents) | 2023-02-05 23:25:41 +00:00
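Note: the chunked-latents change above (c2c9b1b683, together with the --cond-latent-max-chunk-size flag from b8b15d827d) amounts to computing conditioning latents over every chunk of a reference clip rather than just its first ~4.27 seconds, with the chunk size capped so long clips don't OOM on VRAM. A minimal sketch of that idea, where `compute_latent` is a stand-in for the model's conditioning encoder (names are illustrative, not the repo's actual API):

```python
import torch

def chunked_conditional_latents(wav: torch.Tensor,
                                compute_latent,
                                max_chunk_size: int = 102400) -> torch.Tensor:
    """Compute conditioning latents over the whole clip instead of only
    the first `max_chunk_size` samples.

    `wav` is a (channels, samples) tensor; `max_chunk_size` mirrors the
    --cond-latent-max-chunk-size flag. `compute_latent` is assumed to
    return a fixed-size latent regardless of chunk length.
    """
    latents = []
    for start in range(0, wav.shape[-1], max_chunk_size):
        chunk = wav[..., start:start + max_chunk_size]
        with torch.no_grad():
            latents.append(compute_latent(chunk))
    # Merge per-chunk latents into a single conditioning latent.
    return torch.stack(latents, dim=0).mean(dim=0)
```

Averaging the per-chunk latents is just one simple way to merge them; the actual implementation may combine them differently.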
4ea997106e | oops | 2023-02-05 20:10:40 +00:00
daebc6c21c | added button to refresh voice list, enabling KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/) | 2023-02-05 17:59:13 +00:00
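Note: the KV caching credited above speeds up autoregressive sampling by storing each layer's key/value projections for tokens that have already been generated, so every new step only projects the newest token and attends over the cached tensors instead of reprocessing the whole prefix. A toy single-head sketch of the mechanism in plain PyTorch (not the repo's actual GPT-2 code):

```python
import torch

d = 64
wq, wk, wv = (torch.randn(d, d) for _ in range(3))

def step(x_new, cache):
    """One decode step: project only the newest token, append its K/V
    to the cache, and attend over all cached keys/values."""
    q = x_new @ wq                       # query for the new token only
    k_new, v_new = x_new @ wk, x_new @ wv
    ks = torch.cat([cache["k"], k_new]) if cache else k_new
    vs = torch.cat([cache["v"], v_new]) if cache else v_new
    attn = torch.softmax(q @ ks.T / d ** 0.5, dim=-1)
    return attn @ vs, {"k": ks, "v": vs}

cache = {}
for t in range(8):                        # pretend each step yields a new token embedding
    x_new = torch.randn(1, d)
    out, cache = step(x_new, cache)
# Without the cache, every step would recompute K/V for all previous tokens.
```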
7b767e1442 | New tunable: pause size/breathing room (governs pause at the end of clips) | 2023-02-05 14:45:51 +00:00
ec31d1763a | Fix to keep prompted emotion for every split line | 2023-02-05 06:55:09 +00:00
9ef7f11c0a | Updated .gitignore (which doesn't apply to me, because I have a bad habit of keeping a repo copy separate from a working copy) | 2023-02-05 06:40:50 +00:00
98dbf56d44 | Skip combining if not splitting, also avoids reading back the audio files to combine them by keeping them in memory | 2023-02-05 06:35:32 +00:00
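Note: the change above keeps the per-line clips in memory and concatenates them directly, rather than reading back the files that were just written. A minimal sketch with torchaudio (file names and the dummy waveforms are illustrative):

```python
import torch
import torchaudio

sample_rate = 24000                                            # TorToiSe outputs 24 kHz audio
generated = [torch.zeros(1, sample_rate) for _ in range(3)]    # stand-ins for per-line TTS output

clips = []
for i, wav in enumerate(generated):
    torchaudio.save(f"result_{i}.wav", wav, sample_rate)
    clips.append(wav)                                          # keep the tensor instead of re-loading the file

if len(clips) > 1:                                             # only combine when the text was actually split
    combined = torch.cat(clips, dim=-1)                        # concatenate along the time axis
    torchaudio.save("combined.wav", combined, sample_rate)
```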
f38c479e9b | Added multi-line parsing | 2023-02-05 06:17:51 +00:00
3e3634f36a | Fixed accidentally not passing user-provided samples/iteration values (oops), fixed error thrown when trying to write unicode because python sucks | 2023-02-05 05:51:57 +00:00
26daca3dc6 | Forgot to add steps=1 to Candidates slider | 2023-02-05 04:27:20 +00:00
111c45b181 | Set the transformer and model folder to a local './models/' instead of the user profile, because I'm sick of more bloat polluting my C:\ | 2023-02-05 04:18:35 +00:00
d2aeadd754 | cleaned up element order with Blocks, also added preset updating the samples/iterations counts | 2023-02-05 03:53:46 +00:00
078dc0c6e2 | Added choice between diffusion samplers (p, ddim) | 2023-02-05 01:28:31 +00:00
4274cce218 | Added small optimization with caching latents, dropped Anaconda for just a py3.9 + pip + venv setup, added helper install scripts for such, cleaned up app.py, added flag '--low-vram' to disable minor optimizations | 2023-02-04 01:50:57 +00:00
061aa65ac4 | Reverted slight improvement patch, as it's just enough to OOM on GPUs with low VRAM | 2023-02-03 21:45:06 +00:00
4f359bffa4 | Added progress for transforming to audio, changed number inputs to sliders instead | 2023-02-03 04:56:30 +00:00
ef237c70d0 | forgot to copy the alleged slight performance improvement patch, added detailed progress information with passing gr.Progress, save a little more info with output | 2023-02-03 04:20:01 +00:00
aafef3a140 | Cleaned up the good-morning-sirs-dialect labels, fixed seed=0 not being a random seed, show seed on output | 2023-02-03 01:25:03 +00:00
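Note: for the seed fix above, the usual pattern is to treat 0 as "pick a random seed" and then report the seed that was actually used, so a result can be reproduced later. A minimal sketch with illustrative names, not the repo's actual code:

```python
import random
import torch

def resolve_seed(seed: int = 0) -> int:
    """Treat 0 as 'random'; otherwise use the given seed. Always return
    the seed actually used so it can be shown alongside the output."""
    if not seed:
        seed = random.randint(1, 2**31 - 1)
    random.seed(seed)
    torch.manual_seed(seed)
    return seed

used = resolve_seed(0)        # 0 now means "random", but the chosen seed is still reported
print(f"seed: {used}")
```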
1eb92a1236 | QoL fixes | 2023-02-02 21:13:28 +00:00
5ebe587898 | Quick fixes for Conda | 2023-02-01 01:21:56 +00:00
James Betker | 98a891e66e | Merge pull request #263 from netshade/remove-ffmpeg-dep (Remove FFMPEG dep) | 2023-01-22 17:55:36 -07:00
chris | b0296ba528 | remove ffmpeg requirement, not actually necessary | 2023-01-22 16:41:25 -05:00
James Betker | aad67d0e78 | Merge pull request #233 from kianmeng/fix-typos (Fix typos) | 2023-01-17 18:24:24 -07:00
James Betker | 69738359c6 | Merge pull request #245 from netshade/installation-updates (Documentation and Dependency Updates) | 2023-01-11 09:30:50 -07:00
chris | 0793800526 | add explicit requirements.txt usage for dep installation | 2023-01-11 10:50:18 -05:00