ec80ca632b
added setting "device-override", less naively decide the number to use for results, some other thing
2023-02-15 21:51:22 +00:00
ea1bc770aa
added option: force cpu for conditioning latents, for when you want low chunk counts but your GPU keeps OOMing because fuck fragmentation
2023-02-15 05:01:40 +00:00
2e777e8a67
done away with kludgy shit code, just have the user decide how many chunks to slice concat'd samples to (since it actually does improve voice replicability)
2023-02-15 04:39:31 +00:00
314feaeea1
added reset-generation-settings-to-default button; revamped utilities tab to double as a plain-Jane voice importer (and runs through voicefixer, despite it not really doing anything if your voice samples are already of decent quality anyways); ditched load_wav_to_torch or whatever it was called because it literally exists as torchaudio.load; sample voice is now a combined waveform of all your samples and will always return even if using a latents file
2023-02-14 21:20:04 +00:00
0bc2c1f540
updates chunk size to the chunked tensor length, just in case
2023-02-14 17:13:34 +00:00
48275899e8
added flag to enable/disable voicefixer using CUDA because I'll OOM on my 2060, changed from naively subdividing evenly (2, 4, 8, 16 pieces) to just incrementing by 1 (1, 2, 3, 4) when trying to subdivide within constraints of the max chunk size for computing voice latents
2023-02-14 16:47:34 +00:00
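The subdivision change above amounts to a simple linear search: rather than doubling the piece count, increment a divisor until every chunk fits under the cap. A minimal sketch (hypothetical code, not the repo's actual implementation; `pick_chunk_count` is an invented name):

```python
def pick_chunk_count(total_samples: int, max_chunk_size: int) -> int:
    """Smallest number of pieces such that each piece fits under max_chunk_size."""
    divisions = 1
    # increment by 1 (1, 2, 3, 4, ...) instead of doubling (2, 4, 8, 16)
    while (total_samples + divisions - 1) // divisions > max_chunk_size:
        divisions += 1
    return divisions

# e.g. 100000 samples with a 30000-sample cap -> 4 pieces of <= 25000 samples
```

Compared to power-of-two subdivision, this finds tighter fits (3 pieces instead of jumping from 2 to 4) at a negligible cost.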
b648186691
history tab doesn't naively reuse the voice dir for results; experimental "divide total sound size until it fits under the requested max chunk size" doesn't have a +1 to mess things up (need to re-evaluate how I want to calculate sizes of best fits eventually)
2023-02-14 16:23:04 +00:00
8250a79b23
Implemented kv_cache "fix" (from 1f3c1b5f4a); guess I should find out why it's crashing the DirectML backend
2023-02-13 13:48:31 +00:00
5b5e32338c
DirectML: fixed redaction/aligner by forcing it to stay on CPU
2023-02-12 20:52:04 +00:00
4d01bbd429
added button to recalculate voice latents, added experimental switch for computing voice latents
2023-02-12 18:11:40 +00:00
88529fda43
fixed regression with computing conditional latents outside of the CPU
2023-02-12 17:44:39 +00:00
65f74692a0
fixed silently crashing from enabling kv_cache-ing if using the DirectML backend, throw an error when reading a generated audio file that does not have any embedded metadata in it, cleaned up the blocks of code that would DMA/transfer tensors/models between GPU and CPU
2023-02-12 14:46:21 +00:00
1b55730e67
fixed regression where the auto_conds do not move to the GPU and causes a problem during CVVP compare pass
2023-02-11 20:34:12 +00:00
a7330164ab
Added integration for "voicefixer", fixed issue where candidates>1 and lines>1 only outputs the last combined candidate, numbered step for each generation in progress, output time per generation step
2023-02-11 15:02:11 +00:00
4f903159ee
revamped result formatting, added "kludgy" stop button
2023-02-10 22:12:37 +00:00
52a9ed7858
Moved voices out of the tortoise folder because it kept being processed for setup.py
2023-02-10 20:11:56 +00:00
efa556b793
Added new options: "Output Sample Rate", "Output Volume", and documentation
2023-02-10 03:02:09 +00:00
57af25c6c0
oops
2023-02-09 22:17:57 +00:00
504db0d1ac
Added 'Only Load Models Locally' setting
2023-02-09 22:06:55 +00:00
729be135ef
Added option: listen path
2023-02-09 20:42:38 +00:00
3f8302a680
I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way)
2023-02-09 05:05:21 +00:00
b23d6b4b4c
owari da... (it's over...)
2023-02-09 01:53:25 +00:00
494f3c84a1
beginning to add DirectML support
2023-02-08 23:03:52 +00:00
e45e4431d1
(finally) added the CVVP model weigh slider, latents export more data too for weighing against CVVP
2023-02-07 20:55:56 +00:00
f7274112c3
un-hardcoded input/output sampling rates (changing them "works" but leads to wrong audio, naturally)
2023-02-07 18:34:29 +00:00
55058675d2
(maybe) fixed an issue with using prompt redactions (emotions) on CPU causing a crash, because for some reason the wav2vec_alignment assumed CUDA was always available
2023-02-07 07:51:05 -06:00
328deeddae
forgot to auto-compute batch size again if set to 0
2023-02-06 23:14:17 -06:00
a3c077ba13
added setting to adjust autoregressive sample batch size
2023-02-06 22:31:06 +00:00
b8b15d827d
added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM
2023-02-06 05:10:07 +00:00
319e7ec0a6
fixed up the computing conditional latents
2023-02-06 03:44:34 +00:00
b23f583c4e
Forgot to rename the cached latents to the new filename
2023-02-05 23:51:52 +00:00
c2c9b1b683
modified how conditional latents are computed (before, it just happened to only bother reading the first 102400/24000 ≈ 4.27 seconds per audio input; now it will chunk it all to compute latents)
2023-02-05 23:25:41 +00:00
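The chunking described above can be sketched as follows (illustrative only; `chunk_waveform` is an invented helper, and the real code operates on tensors): instead of reading just the first 102400 samples of each clip, split the whole waveform into consecutive chunks so every part contributes to the conditioning latents.

```python
def chunk_waveform(samples, chunk_size=102400):
    """Split a sample sequence into consecutive chunks of at most chunk_size."""
    return [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]

# a 250000-sample clip (~10.4 s at 24 kHz) yields chunks of
# 102400, 102400, and 45200 samples
chunks = chunk_waveform(list(range(250000)))
```

With torch tensors the same slicing pattern applies, or `torch.split(wav, chunk_size)` does the equivalent in one call.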
4ea997106e
oops
2023-02-05 20:10:40 +00:00
daebc6c21c
added button to refresh voice list, enabling KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/ )
2023-02-05 17:59:13 +00:00
7b767e1442
New tunable: pause size/breathing room (governs pause at the end of clips)
2023-02-05 14:45:51 +00:00
f38c479e9b
Added multi-line parsing
2023-02-05 06:17:51 +00:00
111c45b181
Set transformer and model folder to local './models/' instead of the user profile, because I'm sick of more bloat polluting my C:\
2023-02-05 04:18:35 +00:00
078dc0c6e2
Added choice between diffusion samplers (p, ddim)
2023-02-05 01:28:31 +00:00
4274cce218
Added small optimization with caching latents, dropped Anaconda for just a py3.9 + pip + venv setup, added helper install scripts for such, cleaned up app.py, added flag '--low-vram' to disable minor optimizations
2023-02-04 01:50:57 +00:00
061aa65ac4
Reverted slight improvement patch, as it's just enough to OOM on GPUs with low VRAM
2023-02-03 21:45:06 +00:00
4f359bffa4
Added progress for transforming to audio, changed number inputs to sliders instead
2023-02-03 04:56:30 +00:00
ef237c70d0
forgot to copy the alleged slight performance improvement patch, added detailed progress information with passing gr.Progress, save a little more info with output
2023-02-03 04:20:01 +00:00
1eb92a1236
QoL fixes
2023-02-02 21:13:28 +00:00
James Betker
aad67d0e78
Merge pull request #233 from kianmeng/fix-typos
...
Fix typos
2023-01-17 18:24:24 -07:00
원빈 정
092b15eded
Add reference to univnet implementation
2023-01-06 15:57:02 +09:00
Kian-Meng Ang
49bbdd597e
Fix typos
...
Found via `codespell -S *.json -L splitted,nd,ser,broadcat`
2023-01-06 11:04:36 +08:00
James Betker
e5201bf14e
Get rid of checkpointing
...
It isn't needed in inference.
2022-06-15 22:09:15 -06:00
Johan Nordberg
dba14650cb
Typofix
2022-06-11 21:19:07 +09:00
Johan Nordberg
5c7a50820c
Allow running on CPU
2022-06-11 20:03:14 +09:00
Marcus Llewellyn
0e08760896
Fixed silly lack of EOF blank line, indentation
2022-06-06 15:13:29 -05:00
Marcus Llewellyn
5a74461c1e
read.py combines all candidates
...
If candidates were greater than 1 in read.py, only the first candidate's clips would be combined. This adds a bit of code to make a combined file for every candidate.
2022-06-04 17:47:29 -05:00
James Betker
ce30b5bbe5
Merge pull request #74 from jnordberg/improved-cli
...
Add CLI tool
2022-05-28 21:33:53 -06:00
Johan Nordberg
491fe7f6d3
Remove some assumptions about working directory
...
This allows the CLI tool to run when not standing in the repository dir
2022-05-29 01:10:19 +00:00
Johan Nordberg
a641d8f29b
Add tortoise_cli.py
2022-05-28 05:25:23 +00:00
Johan Nordberg
821be4171b
Typofix
2022-05-28 01:29:34 +00:00
Johan Nordberg
069e7001ad
Improve splitting on text that has many quotes
2022-05-28 01:22:21 +00:00
Johan Nordberg
cf26074fa5
Add riding hood test
...
Also fix a bug discovered by the test that would seek past the text end if it ended in a boundary
2022-05-27 23:08:53 +00:00
Johan Nordberg
acc0891e85
Improve sentence boundary detection
2022-05-27 05:58:09 +00:00
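A naive baseline for sentence boundary detection like the above can be sketched with a regex split (illustrative only, not this repo's implementation; quoted text and abbreviations need extra care, which is what these improvement commits address):

```python
import re

def split_sentences(text: str):
    """Naively split text at '.', '!', or '?' followed by whitespace."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

# split_sentences("Hello there. How are you? Fine!")
# -> ["Hello there.", "How are you?", "Fine!"]
```

The lookbehind keeps the punctuation attached to its sentence; a robust splitter additionally has to handle closing quotes after the punctuation and avoid splitting on abbreviations like "Mr.".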
Josh Ziegler
53f6563e3e
avoid mutable default in aligner
2022-05-26 16:20:09 -04:00
Johan Nordberg
f396dcc023
Skip CLVP if cvvp_amount is 1
...
Also fixes formatting bug in log message
2022-05-25 11:12:53 +00:00
Johan Nordberg
0ca4d8f291
Revive CVVP model
2022-05-25 10:22:50 +00:00
James Betker
e0be49f02f
Fix bug
2022-05-22 05:50:26 -06:00
James Betker
42a3bc9cfd
Support combining voices in do_tts
2022-05-22 05:28:15 -06:00
James Betker
412315ab7d
Update read.py to support multiple candidates
2022-05-22 05:26:01 -06:00
James Betker
d96d55a8b4
Fix faulty merge
2022-05-19 10:37:57 -06:00
James Betker
a1c131bde9
Merge remote-tracking branch 'origin/main'
...
# Conflicts:
# tortoise/read.py
2022-05-19 10:34:54 -06:00
Johan Nordberg
b4fa8c86b9
Allow passing additional voice directories when loading voices
2022-05-19 21:02:11 +09:00
Johan Nordberg
00730d2786
Allow setting models path from environment variable
2022-05-19 21:02:09 +09:00
James Betker
8fdf516e62
Remove CVVP
...
After training a similar model for a different purpose, I realized that
this model is faulty: the contrastive loss it uses only pays attention
to high-frequency details which do not contribute meaningfully to
output quality. I validated this by comparing a no-CVVP output with
a baseline using tts-scores and found no differences.
2022-05-17 12:21:25 -06:00
James Betker
a1ae84c49d
Add a way to get deterministic behavior from tortoise and add debug states for reporting
2022-05-17 12:11:18 -06:00
James Betker
93d0ce60d3
Merge remote-tracking branch 'origin/main'
2022-05-17 11:22:40 -06:00
James Betker
5d611aff8c
Add chapter 1 of GoT for read.py demos
2022-05-17 11:21:57 -06:00
Danila Berezin
ef5fb5f5fc
Fix bug in load_voices in audio.py
...
The read.py script did not work with pth latents, so I fixed a bug in audio.py. It seems that in the elif statement, instead of voice, voices it should be clip, clips. And torch.stack doesn't work with tuples, so I had to split this operation.
2022-05-17 18:34:54 +03:00
James Betker
e0329de2c2
Merge pull request #42 from jnordberg/main
...
Improve sentence splitting
2022-05-14 08:52:46 -06:00
James Betker
0570034eda
Automatically pick batch size based on available GPU memory
2022-05-13 10:30:02 -06:00
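Automatic batch-size selection of this sort typically maps available VRAM to a batch size through a few thresholds. A hypothetical sketch (the thresholds and the `pick_batch_size` name are assumptions, not the repo's exact values; in practice the free-memory figure would come from something like `torch.cuda.mem_get_info()`):

```python
def pick_batch_size(free_vram_gb: float) -> int:
    """Map free VRAM (in GiB) to an autoregressive batch size (illustrative thresholds)."""
    if free_vram_gb > 14:
        return 16
    if free_vram_gb > 10:
        return 8
    if free_vram_gb > 7:
        return 4
    return 1  # conservative fallback for low-VRAM cards
```

The point of the heuristic is simply that larger batches amortize the autoregressive sampling cost, but only when the card can hold them without OOMing.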
Johan Nordberg
a8fa71b82d
Improve sentence splitting
2022-05-13 11:02:17 +00:00
James Betker
b3b36c0041
update model paths (including clvp2!)
2022-05-12 20:18:11 -06:00
James Betker
f5ebd14d09
Add error message
2022-05-12 20:15:40 -06:00
James Betker
c4a5a23985
add eval script for testing
2022-05-12 20:15:22 -06:00
James Betker
44a4419348
CLVP2!
2022-05-12 13:23:03 -06:00
James Betker
fc7b308e3b
Add support for multiple output candidates in do_tts.
2022-05-12 11:25:35 -06:00
James Betker
33d4226a7d
read.py: allow user-specified splits
2022-05-12 11:24:55 -06:00
Mark Baushenko
cc38333249
Optimizing graphics card memory
...
During inference it does not store gradients, which take up most of the video memory
2022-05-11 16:35:11 +03:00
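The optimization described above, assuming PyTorch, amounts to wrapping inference in `torch.no_grad()` so no autograd graph (and the activations it would retain) is kept:

```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the real TTS model

# inside no_grad, forward passes record no graph, so intermediate
# activations are freed immediately instead of being held for backprop
with torch.no_grad():
    out = model(torch.ones(1, 4))

# the output carries no gradient metadata
```

`torch.inference_mode()` is a stricter, slightly faster alternative in newer PyTorch versions.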
James Betker
e4e9523900
re-enable redaction
2022-05-06 09:36:42 -06:00
James Betker
9151650559
temporarily disable redaction
2022-05-06 09:06:20 -06:00
James Betker
e18428166d
v2.2
2022-05-06 00:11:10 -06:00
James Betker
4704eb1cef
Update readme with prompt engineering
2022-05-03 21:32:06 -06:00
James Betker
b11f6ddd60
Enable redaction by default
2022-05-03 21:21:52 -06:00
James Betker
53cb3299d4
change quality presets
2022-05-03 21:01:26 -06:00
James Betker
e23e6f6696
Use librosa for loading mp3s
2022-05-03 20:44:31 -06:00
James Betker
dc0390ade1
Remove entmax dep
2022-05-02 21:43:14 -06:00
James Betker
12acac6f77
Fix default output path
2022-05-02 21:37:39 -06:00
James Betker
022d330300
k I think this works..
2022-05-02 21:31:31 -06:00
James Betker
00e84bbd86
fix paths
2022-05-02 20:56:28 -06:00
James Betker
e4e8ebfc55
getting ready for 2.1 release
2022-05-02 20:20:50 -06:00
James Betker
5663e98904
misc fixes
2022-05-02 18:00:57 -06:00
James Betker
e00606a601
Fix bug with k>1
2022-05-02 18:00:22 -06:00
James Betker
ccf16f978e
more fixes
2022-05-02 16:44:47 -06:00
James Betker
4836e1f792
fix warning
2022-05-02 16:36:02 -06:00
James Betker
ee24d3ee4b
Support totally random voices (and make fixes to previous changes)
2022-05-02 15:40:03 -06:00