Commit Graph

299 Commits (showing the 50 most recent)

Author  SHA1  Message  Date
mrq  a6daf289bc  when the sanitizer thingy works in testing but it doesn't outside of testing, and you have to retranscribe for the fourth time today  2023-03-23 02:37:44 +00:00
mrq  86589fff91  why does this keep happening to me  2023-03-23 01:55:16 +00:00
mrq  0ea93a7f40  more cleanup, use 24KHz for preparing for VALL-E (encodec will resample to 24Khz anyways, makes audio a little nicer), some other things  2023-03-23 01:52:26 +00:00
mrq  d2a9ab9e41  remove redundant phonemize for vall-e (oops), quantize all files and then phonemize all files for cope optimization, load alignment model once instead of for every transcription (speedup with whisperx)  2023-03-23 00:22:25 +00:00
mrq  19c0854e6a  do not write current whisper.json if there's no changes  2023-03-22 22:24:07 +00:00
mrq  932eaccdf5  added whisper transcription 'sanitizing' (collapse very short transcriptions to the previous segment) (I really have to stop having several copies spanning several machines for AIVC, I keep reverting shit)  2023-03-22 22:10:01 +00:00
mrq  736cdc8926  disable diarization for whisperx as it's just a useless performance hit (I don't have anything that's multispeaker within the same audio file at the moment)  2023-03-22 20:38:58 +00:00
mrq  aa5bdafb06  ugh  2023-03-22 20:26:28 +00:00
mrq  13605f980c  now whisperx should output json that aligns with what's expected  2023-03-22 20:01:30 +00:00
mrq  8877960062  fixes for whisperx batching  2023-03-22 19:53:42 +00:00
mrq  4056a27bcb  begrudgingly added back whisperx integration (VAD/Diarization testing, I really, really need accurate timestamps before dumping mondo amounts of time on training a dataset)  2023-03-22 19:24:53 +00:00
mrq  b8c3c4cfe2  Fixed #167  2023-03-22 18:21:37 +00:00
mrq  da96161aaa  oops  2023-03-22 18:07:46 +00:00
mrq  f822c87344  cleanups, realigning vall-e training  2023-03-22 17:47:23 +00:00
mrq  909325bb5a  ugh  2023-03-21 22:18:57 +00:00
mrq  5a5fd9ca87  Added option to unsqueeze sample batches after sampling  2023-03-21 21:34:26 +00:00
mrq  9657c1d4ce  oops  2023-03-21 20:31:01 +00:00
mrq  0c2a9168f8  DLAS is PIPified (but I'm still cloning it as a submodule to make updating it easier)  2023-03-21 15:46:53 +00:00
mrq  34ef0467b9  VALL-E config edits  2023-03-20 01:22:53 +00:00
mrq  2e33bf071a  forgot to not require it to be relative  2023-03-19 22:05:33 +00:00
mrq  5cb86106ce  option to set results folder location  2023-03-19 22:03:41 +00:00
mrq  74510e8623  doing what I do best: sourcing other configs and banging until it works (it doesnt work)  2023-03-18 15:16:15 +00:00
mrq  da9b4b5fb5  tweaks  2023-03-18 15:14:22 +00:00
mrq  f44895978d  brain worms  2023-03-17 20:08:08 +00:00
mrq  b17260cddf  added japanese tokenizer (experimental)  2023-03-17 20:04:40 +00:00
mrq  f34cc382c5  yammed  2023-03-17 18:57:36 +00:00
mrq  96b7f9d2cc  yammed  2023-03-17 13:08:34 +00:00
mrq  249c6019af  cleanup, metrics are grabbed for vall-e trainer  2023-03-17 05:33:49 +00:00
mrq  1b72d0bba0  forgot to separate phonemes by spaces for [redacted]  2023-03-17 02:08:07 +00:00
mrq  d4c50967a6  cleaned up some prepare dataset code  2023-03-17 01:24:02 +00:00
mrq  0b62ccc112  setup bnb on windows as needed  2023-03-16 20:48:48 +00:00
mrq  c4edfb7d5e  unbump rocm5.4.2 because it does not work for me desu  2023-03-16 15:33:23 +00:00
mrq  520fbcd163  bumped torch up (CUDA: 11.8, ROCm: 5.4.2)  2023-03-16 15:09:11 +00:00
mrq  1a8c5de517  unk hunting  2023-03-16 14:59:12 +00:00
mrq  46ff3c476a  fixes v2  2023-03-16 14:41:40 +00:00
mrq  0408d44602  fixed reload tts being broken due to being as untouched as I am  2023-03-16 14:24:44 +00:00
mrq  aeb904a800  yammed  2023-03-16 14:23:47 +00:00
mrq  f9154c4db1  fixes  2023-03-16 14:19:56 +00:00
mrq  54f2fc792a  ops  2023-03-16 05:14:15 +00:00
mrq  0a7d6f02a7  ops  2023-03-16 04:54:17 +00:00
mrq  4ac43fa3a3  I forgot I undid the thing in DLAS  2023-03-16 04:51:35 +00:00
mrq  da4f92681e  oops  2023-03-16 04:35:12 +00:00
mrq  ee8270bdfb  preparations for training an IPA-based finetune  2023-03-16 04:25:33 +00:00
mrq  7b80f7a42f  fixed not cleaning up states while training (oops)  2023-03-15 02:48:05 +00:00
mrq  b31bf1206e  oops  2023-03-15 01:51:04 +00:00
mrq  d752a22331  print a warning if automatically deduced batch size returns 1  2023-03-15 01:20:15 +00:00
mrq  f6d34e1dd3  and maybe I should have actually tested with ./models/tokenizers/ made  2023-03-15 01:09:20 +00:00
mrq  5e4f6808ce  I guess I didn't test on a blank-ish slate  2023-03-15 00:54:27 +00:00
mrq  363d0b09b1  added options to pick tokenizer json and diffusion model (so I don't have to add it in later when I get bored and add in diffusion training)  2023-03-15 00:37:38 +00:00
mrq  07b684c4e7  removed redundant training data (they exist within tortoise itself anyways), added utility: view tokenized text  2023-03-14 21:51:27 +00:00
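
A few of the commits above describe their change concretely enough to sketch. The Python below is illustrative only, keyed to the relevant SHA; none of it is the repository's actual code. First, 0ea93a7f40 prepares audio at 24 kHz because EnCodec resamples to 24 kHz anyway; a minimal sketch using torchaudio, with `prepare_audio` and `TARGET_SR` as hypothetical names:

```python
import torchaudio

TARGET_SR = 24_000  # EnCodec operates on 24 kHz audio, so resample during dataset prep

def prepare_audio(path: str):
    """Load a clip and resample it to 24 kHz up front (hypothetical helper)."""
    waveform, sample_rate = torchaudio.load(path)
    if sample_rate != TARGET_SR:
        waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=TARGET_SR)
    return waveform, TARGET_SR
```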
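
The whisperx speedup in d2a9ab9e41 comes from loading the alignment model once instead of once per transcription. A sketch of that hoist, following whisperx's documented API from around this time (exact signatures may differ by version; paths and model size are placeholders):

```python
import whisperx

device = "cuda"
audio_files = ["./voices/speaker/clip_0.wav", "./voices/speaker/clip_1.wav"]  # placeholder paths

model = whisperx.load_model("base", device)

# Hoisted out of the loop: the alignment model now loads exactly once,
# instead of being reloaded for every transcription.
align_model, align_metadata = whisperx.load_align_model(language_code="en", device=device)

for audio_file in audio_files:
    result = model.transcribe(audio_file)
    aligned = whisperx.align(result["segments"], align_model, align_metadata, audio_file, device)
```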
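
19c0854e6a stops rewriting whisper.json when nothing changed. One way to express that check, as a sketch (the function name and serialization settings are assumptions):

```python
import json
import os

def write_whisper_json(path: str, data: dict) -> bool:
    """Serialize `data` and write it only when it differs from what is already on disk."""
    serialized = json.dumps(data, indent=4)
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            if f.read() == serialized:
                return False  # identical content, so skip the write
    with open(path, "w", encoding="utf-8") as f:
        f.write(serialized)
    return True
```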
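
The transcription "sanitizing" from 932eaccdf5 collapses very short transcriptions into the previous segment. A minimal sketch of that idea, assuming whisper-style segment dicts with start/end/text keys; the threshold and function name are hypothetical:

```python
MIN_CHARS = 4  # hypothetical cutoff for a "very short" transcription

def sanitize_segments(segments: list[dict]) -> list[dict]:
    """Collapse very short segments into the previous one."""
    sanitized = []
    for segment in segments:
        if sanitized and len(segment["text"].strip()) < MIN_CHARS:
            # Fold the fragment into the preceding segment and extend its end time.
            sanitized[-1]["text"] += " " + segment["text"].strip()
            sanitized[-1]["end"] = segment["end"]
        else:
            sanitized.append(dict(segment))
    return sanitized
```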
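
Finally, d752a22331's warning is just a guard after batch-size deduction; the deduction heuristic shown here is hypothetical, and only the warning pattern is the point:

```python
def deduce_batch_size(dataset_size: int, max_batch: int = 128) -> int:
    """Hypothetical deduction: largest batch size up to max_batch that divides the dataset evenly."""
    for candidate in range(min(dataset_size, max_batch), 0, -1):
        if dataset_size % candidate == 0:
            return candidate
    return 1

batch_size = deduce_batch_size(dataset_size=251)  # 251 is prime, so this falls through to 1
if batch_size == 1:
    print("Warning: automatically deduced batch size is 1; training will be slow. Consider setting it manually.")
```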