|
0d809561c6
|
accuracy at k=1 and k=80, because I'm probably dumb for picking k=10 as the default since it does not represent any use case
|
2025-03-05 16:35:34 -06:00 |
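A minimal sketch of what that metric looks like, assuming the usual top-k definition; the function name and tensor shapes are illustrative, not vall_e's actual eval code:

```python
import torch

def topk_accuracy(logits: torch.Tensor, targets: torch.Tensor, k: int = 1) -> float:
    # logits: [N, vocab], targets: [N]; fraction of targets that land in the top-k logits
    topk = logits.topk(k, dim=-1).indices                  # [N, k]
    hits = (topk == targets.unsqueeze(-1)).any(dim=-1)     # [N]
    return hits.float().mean().item()

# k=1 is strict next-token accuracy; a loose k (e.g. 80) only asks whether the
# target token is at least among the plausible candidates.
```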
|
|
2fb2b732fc
|
wow that was fast
|
2025-03-04 23:17:18 -06:00 |
|
|
462f71e2f7
|
ugh
|
2025-03-04 14:57:00 -06:00 |
|
|
1cd24f3381
|
a birdie tells me I should probably use a different optimizer (also preliminary support for native sparse attention, but I don't know if I'll use it)
|
2025-03-04 14:53:02 -06:00 |
|
|
0451f75e33
|
now that the new model seems a little more promising, I can re-document things non-cynically
|
2025-03-03 13:21:41 -06:00 |
|
|
3f1070f575
|
tweaks
|
2025-03-02 22:36:25 -06:00 |
|
|
4afa4ccce5
|
at wit's end (perhaps the semantic token approach is the toughest pill to swallow)
|
2025-03-01 21:03:25 -06:00 |
|
|
1d3290b023
|
could have sworn this worked before; might have broken it when I decoupled from omegaconf
|
2025-03-01 19:30:26 -06:00 |
|
|
17094b8002
|
reticulating splines
|
2025-03-01 17:48:51 -06:00 |
|
|
56f8be4d62
|
lol
|
2025-02-28 22:15:37 -06:00 |
|
|
ddc49c89c5
|
the learning rate scheduler is a tough pill to swallow
|
2025-02-28 22:12:19 -06:00 |
|
|
b97faa8173
|
fixes...
|
2025-02-28 18:53:07 -06:00 |
|
|
4e7d885542
|
lol
|
2025-02-28 18:06:41 -06:00 |
|
|
a174c33db6
|
a gorillionth time's the charm (aka: the encoder/decoder approach is a tough pill to swallow)
|
2025-02-28 17:56:50 -06:00 |
|
|
09d82a26fe
|
ugh
|
2025-02-28 01:06:38 -06:00 |
|
|
93feb5660f
|
do not like that
|
2025-02-27 23:59:56 -06:00 |
|
|
f4f435d7f5
|
when you already had these ideas to stabilize training but you just ignored them
|
2025-02-27 23:39:20 -06:00 |
|
|
0a45c9c042
|
fix attention backend not being used
|
2025-02-27 21:38:38 -06:00 |
|
|
b8e9f3d785
|
maybe this will work
|
2025-02-27 20:42:12 -06:00 |
|
|
01e96bafc9
|
ugh
|
2025-02-27 19:05:32 -06:00 |
|
|
eff180248c
|
decoupled the llama backend to avoid any funny changes from transformers; removed the other backends since I don't think I'll ever bother using them
|
2025-02-27 19:00:37 -06:00 |
|
|
ceecac6ffe
|
I think I made resp_parallel_training=True faster with loss factoring?
|
2025-02-26 23:13:32 -06:00 |
|
|
06ef3daf3c
|
require a minimum duration of 1 second for training because of my slop code auto-transposing that I don't wanna fix right now
|
2025-02-26 22:00:33 -06:00 |
|
|
cbd4d7d7f4
|
ugh
|
2025-02-26 21:31:10 -06:00 |
|
|
2ea387c08a
|
segregated the experimental changes into their own streamlined file to avoid breaking the existing model; it can pivot to the cleaned-up code if it actually works (nothing is working)
|
2025-02-26 21:26:13 -06:00 |
|
|
7d2e64630c
|
lol
|
2025-02-26 10:49:06 -06:00 |
|
|
95da4e9405
|
made muon actually work by properly utilizing param groups (thanks APOLLO for reminding me this is the sane way to handle this split)
|
2025-02-26 10:39:13 -06:00 |
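A minimal sketch of the param-group split being described, assuming the usual convention that Muon only handles the 2D hidden weight matrices while AdamW keeps everything else; the helper name and exclusion rules are illustrative, not the trainer's actual code:

```python
import torch

def split_param_groups(model: torch.nn.Module):
    muon_params, adamw_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Muon's orthogonalized update only makes sense for 2D weight matrices;
        # embeddings, norms, biases, and the output head stay on AdamW.
        if param.ndim >= 2 and "embed" not in name and "head" not in name:
            muon_params.append(param)
        else:
            adamw_params.append(param)
    return muon_params, adamw_params
```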
|
|
de27115bb7
|
there's something wrong with it on my 4xV100 rig......
|
2025-02-25 15:14:08 -06:00 |
|
|
db181f8e88
|
only do auto=equal for nemo as it's an FSQ
|
2025-02-24 21:07:44 -06:00 |
|
|
a5a04c39ef
|
when the
|
2025-02-24 21:03:23 -06:00 |
|
|
918e0dbac1
|
small slop cleanup
|
2025-02-24 19:03:53 -06:00 |
|
|
3330b5bb00
|
maybe fix NaNs being thrown for immature models at fp16 for training evals
|
2025-02-24 18:25:54 -06:00 |
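Presumably the guard is along these lines: upcast the eval loss to fp32 and drop non-finite values so an immature fp16 model can't poison the reported metrics. A hedged sketch only; the actual fix lives in the trainer:

```python
import torch
import torch.nn.functional as F

def safe_eval_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # upcast before cross-entropy so fp16 overflows don't turn into NaN
    loss = F.cross_entropy(logits.float(), targets)
    if not torch.isfinite(loss):
        # zero out this batch rather than propagating NaN into the eval stats
        return torch.zeros_like(loss)
    return loss
```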
|
|
0f39f4d7a1
|
lol
|
2025-02-24 17:51:35 -06:00 |
|
|
33d5a7109a
|
it's a miracle I was able to get a semblance of audio with the naive AudioEncoder (now it interleaves properly)
|
2025-02-24 14:39:12 -06:00 |
|
|
6e7b269147
|
ugh
|
2025-02-24 13:54:21 -06:00 |
|
|
8f5a3997bd
|
another experimental flag
|
2025-02-24 13:50:41 -06:00 |
|
|
f593ee98fc
|
ugh
|
2025-02-23 21:20:36 -06:00 |
|
|
cbf6b84e27
|
fixed grad norm and loss scale not being reported for the local trainer
|
2025-02-23 19:08:26 -06:00 |
|
|
b640fabab5
|
borrowed muon since it might work better under deepspeed and not require cruft (even though it really does not like the masked-NAR); also make the masked-NAR faux-causal since it might help out for cfg.model.version >= 7
|
2025-02-23 17:23:24 -06:00 |
|
|
d33ccd188a
|
ugh
|
2025-02-23 12:31:07 -06:00 |
|
|
8f3c3e01ee
|
oops
|
2025-02-23 12:09:56 -06:00 |
|
|
b39aaacd77
|
oops
|
2025-02-23 11:55:43 -06:00 |
|
|
3019c88799
|
use a separate mask token and stop token, because sharing one might cause issues
|
2025-02-23 11:36:32 -06:00 |
|
|
6634d07576
|
added the muon optimizer through kludge hacks because it necessitates a second optimizer in tandem, which seems to only sometimes work with deepspeed
|
2025-02-23 11:22:13 -06:00 |
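The "second optimizer in tandem" amounts to a loop like the one below, where each optimizer steps its own slice of the parameters every iteration; this toy version uses SGD as a stand-in for Muon and skips the deepspeed wrapping where the kludge actually lives:

```python
import torch

# toy stand-in model; the real trainer wires this through deepspeed
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
weights = [p for p in model.parameters() if p.ndim >= 2]
others  = [p for p in model.parameters() if p.ndim < 2]

muon_like_opt = torch.optim.SGD(weights, lr=1e-2, momentum=0.95)  # stand-in for Muon
adamw_opt     = torch.optim.AdamW(others, lr=1e-4)

for _ in range(3):
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    for opt in (muon_like_opt, adamw_opt):  # two optimizers, stepped in tandem
        opt.step()
        opt.zero_grad()
```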
|
|
67a6009555
|
(finally) added parallel AR for cfg.model.version >= 7 (nvidia/audio-codec-44khz is being a pain and it might require training purely AR first......)
|
2025-02-23 08:31:03 -06:00 |
|
|
15b3c20e19
|
also throw an exception for zeroed-out tensors during training (I am very paranoid now)
|
2025-02-22 14:09:41 -06:00 |
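A sketch of the kind of guard implied here (a hypothetical helper, not the actual dataloader code): refuse to train on an encoded utterance whose tensor is entirely zero, since that is the signature of the bad preprocessing run mentioned in the commit below:

```python
import torch

def assert_not_zeroed(codes: torch.Tensor, path: str) -> None:
    # an all-zero code tensor means the audio was silently mangled during processing
    if not bool(codes.any()):
        raise ValueError(f"zeroed-out tensor loaded from {path}")
```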
|
|
ab0abd2b12
|
fixes fixes fixes (a quarter of my recently processed audio returned zeroed tensors......)
|
2025-02-22 09:07:33 -06:00 |
|
|
50506e5ebc
|
oops
|
2025-02-20 20:55:58 -06:00 |
|
|
fc1ec2019d
|
added an option to buffer process jobs across multiple speakers to maybe squeeze out some throughput for vall_e.emb.process (in the event of lots of speakers with low file counts, such as Emilia)
|
2025-02-20 14:56:32 -06:00 |
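The idea, roughly: instead of encoding each speaker's handful of files as its own tiny batch, accumulate files across speakers until a batch is full. A hedged sketch with a hypothetical helper, not the real vall_e.emb.process code:

```python
from typing import Iterable, Iterator

def buffered_jobs(
    speakers: Iterable[tuple[str, list[str]]],  # (speaker_id, [audio paths])
    batch_size: int = 32,
) -> Iterator[list[tuple[str, str]]]:
    buffer: list[tuple[str, str]] = []
    for speaker, paths in speakers:
        for path in paths:
            buffer.append((speaker, path))
            if len(buffer) >= batch_size:
                yield buffer   # one full batch, possibly spanning many speakers
                buffer = []
    if buffer:
        yield buffer           # flush the remainder
```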
|
|
ce1ca0124a
|
lol...
|
2025-02-20 13:40:36 -06:00 |
|