c4ca04cc92 | added reported training accuracy and eval/validation metrics to the graph | 2023-03-26 04:08:45 +00:00
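
A rough sketch of what that metrics graphing might look like; the metrics layout, series names, and the x_lim/y_lim handling (added in fd9b2e082c below) are assumptions, not the repo's actual code:

    # Hedged sketch: plot training accuracy alongside eval/validation metrics.
    # The `metrics` mapping of name -> [(step, value), ...] is hypothetical.
    import matplotlib.pyplot as plt

    def plot_metrics(metrics, x_lim=None, y_lim=None):
        fig, ax = plt.subplots()
        for name, points in metrics.items():
            steps, values = zip(*points)
            ax.plot(steps, values, label=name)
        if x_lim is not None:
            ax.set_xlim(x_lim)
        if y_lim is not None:
            ax.set_ylim(y_lim)
        ax.set_xlabel("step")
        ax.legend()
        return fig

    plot_metrics({
        "training accuracy": [(0, 0.10), (500, 0.62), (1000, 0.81)],
        "validation loss": [(0, 4.2), (500, 2.3), (1000, 1.7)],
    }, x_lim=(0, 1000))
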
8c647c889d | now there should be feature parity between trainers | 2023-03-25 04:12:03 +00:00
fd9b2e082c | x_lim and y_lim for graph | 2023-03-25 02:34:14 +00:00
9856db5900 | actually make parsing VALL-E metrics work | 2023-03-23 15:42:51 +00:00
69d84bb9e0 | I forget | 2023-03-23 04:53:31 +00:00
444bcdaf62 | my sanitizer actually did work, it was just batch sizes leading to problems when transcribing | 2023-03-23 04:41:56 +00:00
a6daf289bc | when the sanitizer thingy works in testing but it doesn't outside of testing, and you have to retranscribe for the fourth time today | 2023-03-23 02:37:44 +00:00
86589fff91 | why does this keep happening to me | 2023-03-23 01:55:16 +00:00
0ea93a7f40 | more cleanup, use 24KHz for preparing for VALL-E (EnCodec will resample to 24KHz anyways, makes audio a little nicer), some other things | 2023-03-23 01:52:26 +00:00
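
A minimal sketch of that 24KHz preparation step, assuming torchaudio for loading and resampling (the repo may use a different audio stack, and the function name is illustrative):

    # Hedged sketch: resample source audio to 24KHz up front, since EnCodec
    # operates at 24KHz anyway.
    import torchaudio

    def prepare_audio_for_valle(path, target_sr=24_000):
        waveform, sample_rate = torchaudio.load(path)
        if sample_rate != target_sr:
            waveform = torchaudio.functional.resample(waveform, sample_rate, target_sr)
        return waveform, target_sr
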
d2a9ab9e41 | remove redundant phonemize for vall-e (oops), quantize all files and then phonemize all files for cope optimization, load alignment model once instead of for every transcription (speedup with whisperx) | 2023-03-23 00:22:25 +00:00
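
The alignment-model speedup boils down to caching the model instead of reloading it per transcription; a sketch of that idea, assuming whisperx's early-2023 load_align_model/align interface, which may have changed since:

    # Hedged sketch of "load the alignment model once instead of per file".
    import whisperx

    _align_models = {}  # language code -> (model, metadata)

    def get_align_model(language, device="cuda"):
        if language not in _align_models:
            _align_models[language] = whisperx.load_align_model(
                language_code=language, device=device
            )
        return _align_models[language]

    def align_transcription(segments, audio_path, language, device="cuda"):
        model, metadata = get_align_model(language, device)
        return whisperx.align(segments, model, metadata, audio_path, device)
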
19c0854e6a | do not write current whisper.json if there are no changes | 2023-03-22 22:24:07 +00:00
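
Skipping the write when nothing changed can be as simple as comparing the serialized JSON against what's already on disk; a hypothetical sketch, not the repo's actual helper:

    # Hypothetical sketch: only rewrite whisper.json when the serialized
    # contents actually differ from the copy on disk.
    import json
    import os

    def write_if_changed(path, data):
        serialized = json.dumps(data, indent=4)
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                if f.read() == serialized:
                    return False  # unchanged, skip the write
        with open(path, "w", encoding="utf-8") as f:
            f.write(serialized)
        return True
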
932eaccdf5 | added whisper transcription 'sanitizing' (collapse very short transcriptions to the previous segment) (I really have to stop having several copies spanning several machines for AIVC, I keep reverting shit) | 2023-03-22 22:10:01 +00:00
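
The 'sanitizing' described here, collapsing very short transcriptions into the previous segment, might look like the sketch below; the segment dict shape and the duration threshold are assumptions:

    # Hedged sketch of the sanitizer: fold segments shorter than a threshold
    # into the previous segment. The 0.5s cutoff is an assumed value.
    def sanitize_segments(segments, min_duration=0.5):
        sanitized = []
        for segment in segments:
            too_short = (segment["end"] - segment["start"]) < min_duration
            if too_short and sanitized:
                # merge into the previous segment instead of keeping a sliver
                sanitized[-1]["end"] = segment["end"]
                sanitized[-1]["text"] += " " + segment["text"].strip()
            else:
                sanitized.append(dict(segment))
        return sanitized
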
736cdc8926 | disable diarization for whisperx as it's just a useless performance hit (I don't have anything that's multispeaker within the same audio file at the moment) | 2023-03-22 20:38:58 +00:00
aa5bdafb06 | ugh | 2023-03-22 20:26:28 +00:00
13605f980c | now whisperx should output json that aligns with what's expected | 2023-03-22 20:01:30 +00:00
8877960062 | fixes for whisperx batching | 2023-03-22 19:53:42 +00:00
4056a27bcb | begrudgingly added back whisperx integration (VAD/Diarization testing, I really, really need accurate timestamps before dumping mondo amounts of time on training a dataset) | 2023-03-22 19:24:53 +00:00
b8c3c4cfe2 | Fixed #167 | 2023-03-22 18:21:37 +00:00
f822c87344 | cleanups, realigning vall-e training | 2023-03-22 17:47:23 +00:00
909325bb5a | ugh | 2023-03-21 22:18:57 +00:00
5a5fd9ca87 | Added option to unsqueeze sample batches after sampling | 2023-03-21 21:34:26 +00:00
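
For context, 'unsqueezing' a batch presumably means giving the sampled tensor an explicit leading batch dimension; a tiny illustrative torch example, not the repo's code:

    import torch

    sample = torch.randn(16000)     # e.g. a raw audio clip
    batched = sample.unsqueeze(0)   # shape (1, 16000): adds the batch dim
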
9657c1d4ce | oops | 2023-03-21 20:31:01 +00:00
0c2a9168f8 | DLAS is PIPified (but I'm still cloning it as a submodule to make updating it easier) | 2023-03-21 15:46:53 +00:00
34ef0467b9 | VALL-E config edits | 2023-03-20 01:22:53 +00:00
2e33bf071a | forgot to not require it to be relative | 2023-03-19 22:05:33 +00:00
5cb86106ce | option to set results folder location | 2023-03-19 22:03:41 +00:00
da9b4b5fb5 | tweaks | 2023-03-18 15:14:22 +00:00
f44895978d | brain worms | 2023-03-17 20:08:08 +00:00
f34cc382c5 | yammed | 2023-03-17 18:57:36 +00:00
96b7f9d2cc | yammed | 2023-03-17 13:08:34 +00:00
249c6019af | cleanup, metrics are grabbed for vall-e trainer | 2023-03-17 05:33:49 +00:00
1b72d0bba0 | forgot to separate phonemes by spaces for [redacted] | 2023-03-17 02:08:07 +00:00
d4c50967a6 | cleaned up some prepare dataset code | 2023-03-17 01:24:02 +00:00
0b62ccc112 | setup bnb on windows as needed | 2023-03-16 20:48:48 +00:00
1a8c5de517 | unk hunting | 2023-03-16 14:59:12 +00:00
46ff3c476a | fixes v2 | 2023-03-16 14:41:40 +00:00
0408d44602 | fixed reload tts being broken due to being as untouched as I am | 2023-03-16 14:24:44 +00:00
aeb904a800 | yammed | 2023-03-16 14:23:47 +00:00
f9154c4db1 | fixes | 2023-03-16 14:19:56 +00:00
54f2fc792a | ops | 2023-03-16 05:14:15 +00:00
0a7d6f02a7 | ops | 2023-03-16 04:54:17 +00:00
4ac43fa3a3 | I forgot I undid the thing in DLAS | 2023-03-16 04:51:35 +00:00
da4f92681e | oops | 2023-03-16 04:35:12 +00:00
ee8270bdfb | preparations for training an IPA-based finetune | 2023-03-16 04:25:33 +00:00
7b80f7a42f | fixed not cleaning up states while training (oops) | 2023-03-15 02:48:05 +00:00
b31bf1206e | oops | 2023-03-15 01:51:04 +00:00
d752a22331 | print a warning if the automatically deduced batch size comes out to 1 | 2023-03-15 01:20:15 +00:00
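
That warning is presumably just a guard after the deduction; a hypothetical sketch where only the warning mirrors the commit and the deduction logic is a stand-in:

    # Hypothetical: warn when the deduced batch size bottoms out at 1, which
    # usually points at a too-small dataset or divisor settings.
    def deduce_batch_size(dataset_size, gradient_accumulation):
        batch_size = max(1, dataset_size // gradient_accumulation)
        if batch_size == 1:
            print("Warning: automatically deduced batch size is 1; "
                  "check dataset size and gradient accumulation settings.")
        return batch_size
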
f6d34e1dd3 | and maybe I should have actually tested with ./models/tokenizers/ made | 2023-03-15 01:09:20 +00:00
5e4f6808ce | I guess I didn't test on a blank-ish slate | 2023-03-15 00:54:27 +00:00
363d0b09b1 | added options to pick tokenizer json and diffusion model (so I don't have to add it in later when I get bored and add in diffusion training) | 2023-03-15 00:37:38 +00:00