be6fab9dcb | added setting to adjust autoregressive sample batch size | 2023-02-06 22:31:06 +00:00
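
The batch-size setting above controls how many autoregressive samples are generated per forward pass, so a large sample count doesn't have to fit on the GPU at once. A minimal sketch of that idea; the function names (`generate_samples`, `generate_batch`) are hypothetical, not taken from the repository:

```python
def generate_samples(num_samples: int, batch_size: int):
    """Generate the requested autoregressive samples in batches so peak
    VRAM use is bounded by batch_size rather than num_samples."""
    samples = []
    for start in range(0, num_samples, batch_size):
        count = min(batch_size, num_samples - start)
        samples.extend(generate_batch(count))  # hypothetical per-batch sampling call
    return samples


def generate_batch(count: int):
    # Placeholder standing in for the model's real batched sampling call.
    return [f"sample-{i}" for i in range(count)]
```
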
100b4d7e61 | Added settings page, added checking for updates (disabled by default), some other things that I don't remember | 2023-02-06 21:43:01 +00:00
240858487f | Added encoding and ripping latents used to generate the voice | 2023-02-06 16:32:09 +00:00
92cf9e1efe | Added tab to read and copy settings from a voice clip (in the future, I'll see about embedding the latent used to generate the voice) | 2023-02-06 16:00:44 +00:00
5affc777e0 | added another (somewhat adequate) example, added metadata storage to generated files (need to add in a viewer later) | 2023-02-06 14:17:41 +00:00
b441a84615 | added flag (--cond-latent-max-chunk-size) that restricts the maximum chunk size when chunking audio to compute conditional latents, to avoid OOMing on VRAM | 2023-02-06 05:10:07 +00:00
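
A rough sketch of how a flag like `--cond-latent-max-chunk-size` could be wired in and applied as a ceiling on chunk length; the default value and helper name here are assumptions, not the repository's actual implementation:

```python
import argparse

parser = argparse.ArgumentParser()
# The default value here is an assumption, not the repository's actual default.
parser.add_argument("--cond-latent-max-chunk-size", type=int, default=1_000_000,
                    help="maximum samples per chunk when computing conditional latents")
args = parser.parse_args()


def clamp_chunk_size(requested: int) -> int:
    """Never let a chunk grow past the user-provided ceiling."""
    return min(requested, args.cond_latent_max_chunk_size)
```
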
a1f3b6a4da | fixed up computing the conditional latents | 2023-02-06 03:44:34 +00:00
2cfd3bc213 | updated README (before I go mad trying to nitpick and edit it while getting distracted from an iToddler sperging) | 2023-02-06 00:56:17 +00:00
945136330c | Forgot to rename the cached latents to the new filename | 2023-02-05 23:51:52 +00:00
5bf21fdbe1 | modified how conditional latents are computed (before, it only bothered reading the first 102400/24000 = ~4.27 seconds of each audio input; now it chunks the whole clip to compute latents) | 2023-02-05 23:25:41 +00:00
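
The chunking change above replaces a fixed 102400-sample read (102400 samples / 24000 Hz ≈ 4.27 s) with latents computed over every chunk of the clip and then averaged. A minimal sketch of the idea, assuming a mono waveform tensor and a hypothetical `compute_latent_for_chunk` helper in place of the real conditioning encoder:

```python
import torch

SAMPLE_RATE = 24000   # conditioning audio is resampled to 24 kHz
CHUNK_SIZE = 102400   # the old code read only this many samples (~4.27 s)


def compute_conditional_latents(wav: torch.Tensor) -> torch.Tensor:
    """Chunk the whole waveform instead of truncating it, then average
    the per-chunk latents into a single conditioning latent."""
    latents = []
    for start in range(0, wav.shape[-1], CHUNK_SIZE):
        chunk = wav[..., start:start + CHUNK_SIZE]
        latents.append(compute_latent_for_chunk(chunk))  # hypothetical model call
    return torch.stack(latents).mean(dim=0)


def compute_latent_for_chunk(chunk: torch.Tensor) -> torch.Tensor:
    # Placeholder: the real code runs the chunk through the conditioning encoder.
    return chunk.float().mean().unsqueeze(0)
```
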
f66754b557 | oops | 2023-02-05 20:10:40 +00:00
1c582b5dc8 | added button to refresh voice list, enabling KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/) | 2023-02-05 17:59:13 +00:00
8831522de9 | New tunable: pause size/breathing room (governs pause at the end of clips) | 2023-02-05 14:45:51 +00:00
c7f85dbba2 | Fix to keep prompted emotion for every split line | 2023-02-05 06:55:09 +00:00
79e0b85602 | Updated .gitignore (that does not apply to me because I have a bad habit of having a repo copy separate from a working copy) | 2023-02-05 06:40:50 +00:00
bc567d7263 | Skip combining if not splitting, also avoids reading back the audio files to combine them by keeping them in memory | 2023-02-05 06:35:32 +00:00
bf32efe503 | Added multi-line parsing | 2023-02-05 06:17:51 +00:00
cd94cc8459 | Fixed accidentally not passing user-provided samples/iteration values (oops), fixed error thrown when trying to write unicode because python sucks | 2023-02-05 05:51:57 +00:00
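
The unicode error in cd94cc8459 is the classic pitfall where `open()` falls back to the platform's default codec (often cp1252 on Windows) and raises `UnicodeEncodeError` on characters it can't represent; passing an explicit encoding avoids it. A small sketch of the likely shape of the fix, with an invented function and path:

```python
import json


def save_metadata(path: str, metadata: dict) -> None:
    # Explicit UTF-8 avoids UnicodeEncodeError when the platform's default
    # codec (e.g. cp1252 on Windows) can't encode the text being written.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(metadata, f, ensure_ascii=False, indent=4)
```
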
cab32e1f45 | Forgot to add steps=1 to Candidates slider | 2023-02-05 04:27:20 +00:00
84a9758ab9 | Set transformer and model folder to local './models/' instead of the user profile folder, because I'm sick of more bloat polluting my C:\ | 2023-02-05 04:18:35 +00:00
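
Redirecting downloads into a local './models/' folder is usually done by pointing the relevant cache paths there before the libraries that read them are imported. A rough sketch of the idea; `TRANSFORMERS_CACHE` is the Hugging Face cache variable, while the tortoise-specific variable name below is an assumption rather than something confirmed by this log:

```python
import os

# Keep downloaded weights next to the project instead of in the user profile.
os.environ["TRANSFORMERS_CACHE"] = os.path.abspath("./models/transformers/")
# Assumed variable name for the tortoise model directory; the fork may use another mechanism.
os.environ["TORTOISE_MODELS_DIR"] = os.path.abspath("./models/tortoise/")

# Any imports that read these variables must come after they are set.
```
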
d29ba75dd6 | cleaned up element order with Blocks, also made presets update the samples/iterations counts | 2023-02-05 03:53:46 +00:00
ed33e34fcc | Added option to choose between diffusion samplers (p, ddim) | 2023-02-05 01:28:31 +00:00
5c876b81f3 | Added small optimization with caching latents, dropped Anaconda for just a py3.9 + pip + venv setup, added helper install scripts for such, cleaned up app.py, added flag '--low-vram' to disable minor optimizations | 2023-02-04 01:50:57 +00:00
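
The latent-caching optimization in 5c876b81f3 amounts to saving computed conditional latents to disk and reloading them instead of recomputing them on every generation. A minimal sketch under that assumption; the directory layout and file name are invented for illustration:

```python
import os
import torch


def get_cached_latents(voice: str, cache_dir: str = "./voices"):
    """Load cached conditional latents for a voice if present, else return None."""
    path = os.path.join(cache_dir, voice, "cond_latents.pth")
    if os.path.exists(path):
        return torch.load(path, map_location="cpu")
    return None


def store_latents(voice: str, latents, cache_dir: str = "./voices") -> None:
    """Persist freshly computed latents so later runs can skip the encode step."""
    path = os.path.join(cache_dir, voice, "cond_latents.pth")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    torch.save(latents, path)
```
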
8f20afc18f | Reverted slight improvement patch, as it's just enough to OOM on GPUs with low VRAM | 2023-02-03 21:45:06 +00:00
e8d4a4f89c | Added progress for transforming to audio, changed number inputs to sliders instead | 2023-02-03 04:56:30 +00:00
ea751d7b6c | forgot to copy the alleged slight performance improvement patch, added detailed progress information by passing gr.Progress, save a little more info with the output | 2023-02-03 04:20:01 +00:00
43f45274dd | Cleaned up the good-morning-sirs-dialect labels, fixed seed=0 not being a random seed, show seed on output | 2023-02-03 01:25:03 +00:00
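
The seed fix in 43f45274dd reads as the common falsy-value bug: a seed of 0 is meant to request a fresh random seed, and the chosen seed is then reported with the output. A minimal sketch of that behavior, assuming the "0 means random" convention stated in the commit message and a hypothetical `resolve_seed` helper:

```python
import random

import torch


def resolve_seed(requested: int) -> int:
    """Treat a requested seed of 0 as 'pick a random seed', seed the RNG,
    and return the seed actually used so it can be shown with the output."""
    seed = random.randint(1, 2**32 - 1) if requested == 0 else requested
    torch.manual_seed(seed)
    return seed
```
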
74f447e5d0 | QoL fixes | 2023-02-02 21:13:28 +00:00
f6be2a3ee8 | Quick fixes for Conda | 2023-02-01 01:21:56 +00:00
James Betker | 8d342cfbc0 | Merge pull request #263 from netshade/remove-ffmpeg-dep: Remove FFMPEG dep | 2023-01-22 17:55:36 -07:00
chris | e55b498239 | remove ffmpeg requirement, not actually necessary | 2023-01-22 16:41:25 -05:00
James Betker | 5dc3e269b3 | Merge pull request #233 from kianmeng/fix-typos: Fix typos | 2023-01-17 18:24:24 -07:00
James Betker | b5eec7aba3 | Merge pull request #245 from netshade/installation-updates: Documentation and Dependency Updates | 2023-01-11 09:30:50 -07:00
chris | 7ce3dc7bf1 | add explicit requirements.txt usage for dep installation | 2023-01-11 10:50:18 -05:00
chris | d999f55841 | update requirements to ensure project will build and run | 2023-01-11 10:48:58 -05:00
James Betker | 217dc09d5f | Merge pull request #234 from Wonbin-Jung/ack: Add reference of univnet implementation | 2023-01-06 02:03:49 -07:00
원빈 정 | b3d67dcc6b | Add reference of univnet implementation | 2023-01-06 15:57:02 +09:00
Kian-Meng Ang | 551fe655ff | Fix typos (found via `codespell -S *.json -L splitted,nd,ser,broadcat`) | 2023-01-06 11:04:36 +08:00
James Betker | 2c0d8d71e0 | Merge pull request #229 from Livshitz/patch-1: Update tortoise_v2_examples.html | 2023-01-02 13:05:34 -07:00
Elya Livshitz | 7bc068ca5a | Update tortoise_v2_examples.html | 2023-01-02 19:45:11 +02:00
James Betker | f28a116b48 | Update README.md | 2022-12-05 13:16:36 -08:00
James Betker | 121b0e9e9c | Merge pull request #193 from casonclagg/main: Pin transformers version to 4.19, fixes #186, google colab crashing | 2022-11-13 22:20:11 -08:00
Cason Clagg | 6587d1934e | Pin transformers version to 4.19, fixes #186, google colab crashing | 2022-11-11 17:16:56 -06:00
James Betker | 122d92d491 | Merge pull request #122 from mogwai/fix/readme-instructions: Added keyword argument for API usage in README | 2022-07-08 08:22:43 -06:00
Harry Coultas Blum | 2efc5a3e50 | Added keyword argument | 2022-07-08 14:28:24 +01:00
James Betker | 00f8bc5e78 | Update README.md | 2022-06-23 15:57:50 -07:00
James Betker | 958c6d2f73 | Get rid of checkpointing (it isn't needed in inference) | 2022-06-15 22:09:15 -06:00
James Betker | 29c1d9e561 | Merge pull request #97 from jnordberg/cpu-support: CPU support | 2022-06-12 23:12:03 -06:00
Johan Nordberg | de7c5ddec3 | Typofix | 2022-06-11 21:19:07 +09:00
Johan Nordberg | fc4a31028a | Expose batch size and device settings in CLI | 2022-06-11 20:46:23 +09:00