Commit Graph

197 Commits

Author  SHA1  Message  Date

mrq  20789a0b8a  i swear it worked before and now it didnt  2024-06-25 19:17:14 -05:00
mrq  6ee5f21ddc  oops, needed some fixes  2024-06-25 13:40:39 -05:00
mrq  286681c87c  oops  2024-06-21 00:20:53 -05:00
mrq  79fc406c78  calm_token was set wrong, somehow  2024-06-19 22:20:06 -05:00
mrq  e2c9b0465f  set seed on inference, since it seems to be set to 0 every time  2024-06-19 22:10:59 -05:00
mrq  0b1a71430c  added BigVGAN and HiFiGAN (from https://git.ecker.tech/jarod/tortoise-tts), vocoder selectable in webUI  2024-06-19 21:43:29 -05:00
mrq  a5c21d65d2  added automatically loading default YAML if --yaml is not profided (although I think it already does this by using defaults), default YAML will use local backend + deepspeed inferencing for speedups  2024-06-19 18:49:39 -05:00
mrq  f4fcc35aa8  fixed it breaking on subsequent utterances through the web UI from latents being on the CPU  2024-06-19 18:26:15 -05:00
mrq  96b74f38ef  sampler and cond_free selectable in webUI, re-enabled cond_free as default (somehow it's working again)  2024-06-19 17:12:28 -05:00
mrq  73f271fb8a  added automagic offloading models to GPU then CPU when theyre done during inference  2024-06-19 17:01:05 -05:00
mrq  5d24631bfb  don't pad output mel tokens to speed up diffusion (despite copying it exactly from tortoise)  2024-06-19 15:27:11 -05:00
mrq  849de13f27  added tqdm bar for AR  2024-06-19 15:00:14 -05:00
mrq  99be487482  backported old fork features (kv_cache (which looking back seems like a spook), ddim sampling, etc)  2024-06-19 14:49:24 -05:00
mrq  268ba17485  crammed in HF attention selection mechanisms for the AR  2024-06-19 10:21:43 -05:00
mrq  e5136613f5  semblance of documentation, automagic model downloading, a little saner inference results folder  2024-06-19 10:08:14 -05:00
mrq  6c2e00ce2a  load exported LoRA weights if exists (to-do: make a better LoRA loading mechanism)  2024-06-18 21:46:42 -05:00
mrq  7c9144ff22  working webui  2024-06-18 21:03:25 -05:00
mrq  fb313d7ef4  working, the vocoder was just loading wrong  2024-06-18 20:55:50 -05:00
mrq  b5570f1b86  progress  2024-06-18 17:09:50 -05:00
mrq  7aae9d48ab  training + LoRA training works? (keeps OOMing after a step)  2024-06-18 13:28:50 -05:00
mrq  d7b63d2f70  encoding mel tokens + dataset preparation  2024-06-18 10:30:54 -05:00
mrq  37ec9f1b79  initial "refractoring"  2024-06-17 22:48:34 -05:00
James Betker  98a891e66e  Merge pull request #263 from netshade/remove-ffmpeg-dep  2023-01-22 17:55:36 -07:00
    Remove FFMPEG dep
chris  b0296ba528  remove ffmpeg requirement, not actually necessary  2023-01-22 16:41:25 -05:00
James Betker  aad67d0e78  Merge pull request #233 from kianmeng/fix-typos  2023-01-17 18:24:24 -07:00
    Fix typos
James Betker  69738359c6  Merge pull request #245 from netshade/installation-updates  2023-01-11 09:30:50 -07:00
    Documentation and Dependency Updates
chris  0793800526  add explicit requirements.txt usage for dep installation  2023-01-11 10:50:18 -05:00
chris  38d97caf48  update requirements to ensure project will build and run  2023-01-11 10:48:58 -05:00
James Betker  04068133a6  Merge pull request #234 from Wonbin-Jung/ack  2023-01-06 02:03:49 -07:00
    Add reference of univnet implementation
원빈 정 (Wonbin Jung)  092b15eded  Add reference of univnet implementation  2023-01-06 15:57:02 +09:00
Kian-Meng Ang  49bbdd597e  Fix typos  2023-01-06 11:04:36 +08:00
    Found via `codespell -S *.json -L splitted,nd,ser,broadcat`
James Betker  a5a0907e76  Update README.md  2022-12-05 13:16:36 -08:00
James Betker  49aff9a36d  Merge pull request #193 from casonclagg/main  2022-11-13 22:20:11 -08:00
    Pin transformers version to 4.19, fixes #186, google colab crashing
Cason Clagg  18dbbb56b6  Pin transformers version to 4.19, fixes #186, google colab crashing  2022-11-11 17:16:56 -06:00
James Betker  dd88ad6be6  Merge pull request #122 from mogwai/fix/readme-instructions  2022-07-08 08:22:43 -06:00
    Added keyword argument for API usage in README
Harry Coultas Blum  75e920438a  Added keyword argument  2022-07-08 14:28:24 +01:00
James Betker  83cc5eb5b4  Update README.md  2022-06-23 15:57:50 -07:00
James Betker  e5201bf14e  Get rid of checkpointing  2022-06-15 22:09:15 -06:00
    It isn't needed in inference.
James Betker  1aa4e0d4b8  Merge pull request #97 from jnordberg/cpu-support  2022-06-12 23:12:03 -06:00
    CPU support
Johan Nordberg  dba14650cb  Typofix  2022-06-11 21:19:07 +09:00
Johan Nordberg  3791eb7267  Expose batch size and device settings in CLI  2022-06-11 20:46:23 +09:00
Johan Nordberg  5c7a50820c  Allow running on CPU  2022-06-11 20:03:14 +09:00
James Betker  5d96b486fb  Merge pull request #90 from MarcusLlewellyn/read_combine  2022-06-06 14:59:35 -06:00
    read.py combines all candidates
Marcus Llewellyn  0e08760896  Fixed silly lack of EOF blank line, indentation  2022-06-06 15:13:29 -05:00
Marcus Llewellyn  5a74461c1e  read.py combines all candidates  2022-06-04 17:47:29 -05:00
    If candidates where greater than 1 on in read.py, only the fist candidate clips would be combined. This adds a bit of code to make a combined file for every candidate.
James Betker  e574f19fc9  Also include voices in the manifest  2022-05-31 10:31:50 -06:00
James Betker  eda44cd9ab  Include data in manifest  2022-05-31 09:10:06 -06:00
James Betker  855268ba4e  Merge pull request #78 from jnordberg/cli-typo-fix  2022-05-28 22:30:41 -06:00
    Typofix in CLI
Johan Nordberg  3f641c2beb  Typofix  2022-05-29 04:26:11 +00:00
James Betker  ce30b5bbe5  Merge pull request #74 from jnordberg/improved-cli  2022-05-28 21:33:53 -06:00
    Add CLI tool