a05faf0dfa  backport  (2024-07-22 20:56:45 -05:00)
f25e765682  maybe backported some weird fixes for LoRA loading from mrq/vall-e?  (2024-07-22 20:48:06 -05:00)
90ecf3da7d  more backporting  (2024-06-28 22:44:42 -05:00)
43d85d97aa  backported additions from e-c-k-e-r/vall-e (paths sorted by duration and batched sampling)  (2024-06-28 22:29:42 -05:00)
e0a93a6400  README tweaks  (2024-06-28 21:02:40 -05:00)
80d6494973  might help to resample to the right sample rate for the AR / dvae  (2024-06-25 19:48:45 -05:00)
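The resampling entry above (80d6494973) hinges on feeding the AR / dvae audio at the sample rate they expect. A minimal sketch of that idea, with a hypothetical helper and naive linear interpolation — the project itself would use a proper resampler such as torchaudio.functional.resample:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naively resample a 1-D signal to dst_rate via linear interpolation."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(round(len(samples) * dst_rate / src_rate))
    out = []
    for i in range(n_out):
        # map the output index back onto the source timeline
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# e.g. stretch a 4-sample ramp onto an 8-sample timeline
upsampled = resample_linear([0.0, 1.0, 2.0, 3.0], 4, 8)
```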
20789a0b8a  I swear it worked before and now it didn't  (2024-06-25 19:17:14 -05:00)
6ee5f21ddc  oops, needed some fixes  (2024-06-25 13:40:39 -05:00)
286681c87c  oops  (2024-06-21 00:20:53 -05:00)
79fc406c78  calm_token was set wrong, somehow  (2024-06-19 22:20:06 -05:00)
e2c9b0465f  set seed on inference, since it seems to be set to 0 every time  (2024-06-19 22:10:59 -05:00)
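Commit e2c9b0465f above fixes the seed being stuck at 0 by explicitly (re)seeding at inference time. A hedged sketch of the pattern — the function name is hypothetical, and the actual code would also seed torch / CUDA:

```python
import random
import time

def set_inference_seed(seed=None):
    """Seed the RNG before inference; draw a fresh seed when none is given."""
    if seed is None:
        seed = int(time.time() * 1000) % (2 ** 32)
    random.seed(seed)
    # a torch-based project would also call torch.manual_seed(seed) here
    return seed

set_inference_seed(1234)  # same seed -> reproducible sampling
```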
0b1a71430c  added BigVGAN and HiFiGAN (from https://git.ecker.tech/jarod/tortoise-tts); vocoder selectable in webUI  (2024-06-19 21:43:29 -05:00)
a5c21d65d2  added automatically loading the default YAML if --yaml is not provided (although I think it already does this by using defaults); the default YAML will use the local backend + DeepSpeed inferencing for speedups  (2024-06-19 18:49:39 -05:00)
f4fcc35aa8  fixed it breaking on subsequent utterances through the web UI from latents being on the CPU  (2024-06-19 18:26:15 -05:00)
96b74f38ef  sampler and cond_free selectable in webUI; re-enabled cond_free as default (somehow it's working again)  (2024-06-19 17:12:28 -05:00)
73f271fb8a  added automagic offloading of models to GPU then back to CPU when they're done during inference  (2024-06-19 17:01:05 -05:00)
5d24631bfb  don't pad output mel tokens, to speed up diffusion (despite copying it exactly from tortoise)  (2024-06-19 15:27:11 -05:00)
849de13f27  added tqdm bar for the AR  (2024-06-19 15:00:14 -05:00)
99be487482  backported old fork features (kv_cache (which, looking back, seems like a spook), DDIM sampling, etc.)  (2024-06-19 14:49:24 -05:00)
268ba17485  crammed in HF attention-selection mechanisms for the AR  (2024-06-19 10:21:43 -05:00)
e5136613f5  semblance of documentation, automagic model downloading, a slightly saner inference results folder  (2024-06-19 10:08:14 -05:00)
6c2e00ce2a  load exported LoRA weights if they exist (to-do: make a better LoRA loading mechanism)  (2024-06-18 21:46:42 -05:00)
7c9144ff22  working webUI  (2024-06-18 21:03:25 -05:00)
fb313d7ef4  working; the vocoder was just loading wrong  (2024-06-18 20:55:50 -05:00)
b5570f1b86  progress  (2024-06-18 17:09:50 -05:00)
7aae9d48ab  training + LoRA training works? (keeps OOMing after a step)  (2024-06-18 13:28:50 -05:00)
d7b63d2f70  encoding mel tokens + dataset preparation  (2024-06-18 10:30:54 -05:00)
37ec9f1b79  initial "refractoring"  (2024-06-17 22:48:34 -05:00)
98a891e66e  Merge pull request #263 from netshade/remove-ffmpeg-dep: Remove FFMPEG dep  [James Betker]  (2023-01-22 17:55:36 -07:00)
b0296ba528  remove ffmpeg requirement, not actually necessary  [chris]  (2023-01-22 16:41:25 -05:00)
aad67d0e78  Merge pull request #233 from kianmeng/fix-typos: Fix typos  [James Betker]  (2023-01-17 18:24:24 -07:00)
69738359c6  Merge pull request #245 from netshade/installation-updates: Documentation and Dependency Updates  [James Betker]  (2023-01-11 09:30:50 -07:00)
0793800526  add explicit requirements.txt usage for dep installation  [chris]  (2023-01-11 10:50:18 -05:00)
38d97caf48  update requirements to ensure the project will build and run  [chris]  (2023-01-11 10:48:58 -05:00)
04068133a6  Merge pull request #234 from Wonbin-Jung/ack: Add reference of univnet implementation  [James Betker]  (2023-01-06 02:03:49 -07:00)
092b15eded  Add reference of univnet implementation  [원빈 정]  (2023-01-06 15:57:02 +09:00)
49bbdd597e  Fix typos (found via `codespell -S *.json -L splitted,nd,ser,broadcat`)  [Kian-Meng Ang]  (2023-01-06 11:04:36 +08:00)
a5a0907e76  Update README.md  [James Betker]  (2022-12-05 13:16:36 -08:00)
49aff9a36d  Merge pull request #193 from casonclagg/main: Pin transformers version to 4.19; fixes #186 (Google Colab crashing)  [James Betker]  (2022-11-13 22:20:11 -08:00)
18dbbb56b6  Pin transformers version to 4.19; fixes #186 (Google Colab crashing)  [Cason Clagg]  (2022-11-11 17:16:56 -06:00)
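The two pinning commits above boil down to constraining the dependency so the transformers APIs that tortoise-tts relies on don't shift underneath it. The exact file contents aren't shown in this log, but the requirements.txt line would look something like (version pin per the commit message; the patch suffix is an assumption):

```text
transformers==4.19.0
```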
dd88ad6be6  Merge pull request #122 from mogwai/fix/readme-instructions: Added keyword argument for API usage in README  [James Betker]  (2022-07-08 08:22:43 -06:00)
75e920438a  Added keyword argument  [Harry Coultas Blum]  (2022-07-08 14:28:24 +01:00)
83cc5eb5b4  Update README.md  [James Betker]  (2022-06-23 15:57:50 -07:00)
e5201bf14e  Get rid of checkpointing (it isn't needed in inference)  [James Betker]  (2022-06-15 22:09:15 -06:00)
1aa4e0d4b8  Merge pull request #97 from jnordberg/cpu-support: CPU support  [James Betker]  (2022-06-12 23:12:03 -06:00)
dba14650cb  Typofix  [Johan Nordberg]  (2022-06-11 21:19:07 +09:00)
3791eb7267  Expose batch size and device settings in CLI  [Johan Nordberg]  (2022-06-11 20:46:23 +09:00)
5c7a50820c  Allow running on CPU  [Johan Nordberg]  (2022-06-11 20:03:14 +09:00)
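The CPU-support commits above amount to choosing the device at runtime instead of assuming CUDA. A small sketch of the fallback logic — the helper name is hypothetical; the actual change threads a device setting through the CLI:

```python
def pick_device(prefer_cuda=True):
    """Return "cuda" when available and wanted, otherwise fall back to "cpu"."""
    if prefer_cuda:
        try:
            import torch  # treated as an optional dependency in this sketch
            if torch.cuda.is_available():
                return "cuda"
        except ImportError:
            pass
    return "cpu"

device = pick_device()  # models and latents would then get .to(device)
```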
5d96b486fb  Merge pull request #90 from MarcusLlewellyn/read_combine: read.py combines all candidates  [James Betker]  (2022-06-06 14:59:35 -06:00)
0e08760896  Fixed silly lack of EOF blank line, indentation  [Marcus Llewellyn]  (2022-06-06 15:13:29 -05:00)