9def34cd66 | lol | 2024-11-10 12:48:41 -06:00
9cb0b6901b | unified nar.py into ar_nar.py | 2024-11-10 12:19:48 -06:00
a9d2faf2d7 | all I can do for now while I wait for the model to (re)train for pure NAR | 2024-11-09 22:57:34 -06:00
ad7e290a5e | ugh (ROCm seems to silently clamp any token value >= logits.shape[-1] for loss calculation, while CUDA will throw an assert, making it hard to find this dumb fuckup) | 2024-11-09 19:40:02 -06:00
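For context on that bug: CUDA's cross-entropy kernel raises a device-side assert when a target index is >= logits.shape[-1], while ROCm has been observed to clamp it silently. A minimal sketch of an explicit guard that surfaces the problem on either backend (function name and the ignore index are illustrative, not the repo's code):

```python
import torch
import torch.nn.functional as F

def checked_cross_entropy(logits: torch.Tensor, targets: torch.Tensor, ignore_index: int = -100):
    # logits: [N, vocab_size], targets: [N]
    vocab_size = logits.shape[-1]
    invalid = (targets != ignore_index) & ((targets < 0) | (targets >= vocab_size))
    if invalid.any():
        # fail loudly instead of letting the backend clamp or assert deep in the kernel
        raise ValueError(f"{int(invalid.sum())} target token(s) fall outside [0, {vocab_size})")
    return F.cross_entropy(logits, targets, ignore_index=ignore_index)
```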
943fe70c10 | I don't know why this fixes an assert thrown but it does | 2024-11-09 19:04:13 -06:00
f50d92ba6c | Almost made a mistake | 2024-11-09 18:12:54 -06:00
c6a38693a2 | This better work | 2024-11-09 18:04:59 -06:00
8b3d1cf70a | Something's Wrong | 2024-11-09 15:07:43 -06:00
dcd5fecff3 | some cleanup while I wait for the NAR-len to train to an acceptable state (currently it performs okay, but only on audio after 3 seconds or so) | 2024-11-09 12:12:46 -06:00
69b0b3b854 | set timestep tensor to whatever the time embedding's dtype is because it'll gripe under AMP | 2024-11-09 00:11:16 -06:00
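A rough sketch of what that dtype fix looks like in practice (the module and tensor names are assumptions, not the repo's actual code): build the timestep tensor in the same dtype as the time-embedding weights so autocast doesn't complain about mixed float32/half-precision inputs.

```python
import torch
import torch.nn as nn

def make_timesteps(t: torch.Tensor, time_emb: nn.Module) -> torch.Tensor:
    # match the time embedding's parameter dtype so AMP doesn't gripe
    # about mixing float32 timesteps with float16/bfloat16 weights
    dtype = next(time_emb.parameters()).dtype
    return t.to(dtype=dtype)
```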
5a09a5f6e9 | I forgot about the time embedding... | 2024-11-08 22:46:26 -06:00
811b15d280 | I suppose I just have a shit training method since the sampler is as solid as I can get it... | 2024-11-08 22:05:41 -06:00
13b54953bd | agony | 2024-11-08 13:34:39 -06:00
c127c4e488 | 'borrowed' a sampling scheduler for NAR-len's RVQ level 0 (better than before, but still not good enough) | 2024-11-07 21:19:14 -06:00
e108c54daf | new NAR-len training paradigm... | 2024-11-07 11:32:11 -06:00
ed174c589e | ugh | 2024-11-07 09:19:21 -06:00
d13ab00ad8 | one more note | 2024-11-07 09:11:21 -06:00
5698188824 | I really am an idiot | 2024-11-07 09:10:18 -06:00
77ff23e319 | repeat extend the prom to fill the initial tokens for nar-len (it somewhat works, the model just needs to train more) | 2024-11-06 23:29:53 -06:00
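The "repeat extend" trick above, sketched with illustrative names (not the repo's actual helper): tile the acoustic prompt along the time axis until it covers the target length, then trim, so the NAR-len pass starts from repeated prompt tokens rather than an empty canvas.

```python
import torch

def repeat_extend(prom: torch.Tensor, target_len: int) -> torch.Tensor:
    # prom: [T, n_levels] of codec tokens for the reference prompt
    reps = -(-target_len // prom.shape[0])   # ceiling division
    return prom.repeat(reps, 1)[:target_len]
```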
a3bc26f7ec | ugh | 2024-11-06 23:16:28 -06:00
d606a693ff | eval fix for nar-len | 2024-11-06 23:14:16 -06:00
105ed51159 | I guess I'll fall for the NAR-len meme again (I don't know where my previous weights are, so I need to train it again to test something) | 2024-11-06 19:17:12 -06:00
bcabde3454 | more notes | 2024-11-06 13:51:28 -06:00
bfc5e1d723 | agony | 2024-11-05 22:30:49 -06:00
aefe8fcdad | UGH | 2024-11-05 22:13:58 -06:00
556d9db0d5 | web UI support for HF ZeroGPU | 2024-11-05 21:38:02 -06:00
e58a9469a3 | move layerskip to experimental settings... | 2024-11-05 20:37:06 -06:00
d5aa8186f0 | more doc | 2024-11-05 16:53:00 -06:00
9901c4f8ca | documentation under ./docs/ | 2024-11-05 16:11:01 -06:00
bbc2de3713 | ugh | 2024-11-05 11:50:05 -06:00
9e65e05e83 | more Windows-specific fixes, limit gradio to <5.0.0 on Linux (it works on Windows, but not on my Linux machine (tm)) | 2024-11-04 18:00:33 -06:00
c83670c38c | Windows-specific fixes (to-do: find libespeak-ng.dll automatically because it cannot be trusted to do it by default) | 2024-11-03 19:19:15 -06:00
d229725c76 | more adjustments (early-exit entropy/varentropy thresholds, default rep pen now 1.5, experimental refine-on-stop, etc.) | 2024-11-03 18:31:28 -06:00
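The entropy/varentropy statistics those thresholds gate on are the standard ones; a small sketch of how they can be computed from a step's logits (the threshold values themselves live in the sampler settings):

```python
import torch
import torch.nn.functional as F

def entropy_stats(logits: torch.Tensor):
    # logits: [..., vocab_size]
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)                                    # H = -sum p * log p
    varentropy = (probs * (log_probs + entropy.unsqueeze(-1)) ** 2).sum(dim=-1)   # Var[-log p]
    return entropy, varentropy
```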
aee08b7307 | changed layerskip float16 training warning (since it didn't seem to fry on my 4xV100 system) | 2024-11-03 09:58:29 -06:00
3826f9bae4 | saner mask creation? (it doesn't matter, kv cache won't work) | 2024-11-02 21:00:21 -05:00
ded746e157 | very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough) | 2024-11-02 11:49:05 -05:00
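"Checks if the current layer's state is good enough" roughly amounts to something like the following sketch (names and the entropy threshold are assumptions, not the repo's implementation): project each intermediate hidden state through the shared LM head and exit once the prediction looks confident.

```python
import torch
import torch.nn.functional as F

def naive_early_exit(hidden_states, lm_head, entropy_threshold: float = 1.0):
    # hidden_states: list of [B, T, D] tensors, one per transformer layer
    logits, layer_idx = None, -1
    for layer_idx, h in enumerate(hidden_states):
        logits = lm_head(h[:, -1, :])                     # predict from this layer's last position
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        if entropy.max() < entropy_threshold:             # "good enough": stop here
            break
    return logits, layer_idx
```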
62fe5b0943 | ughh | 2024-11-01 22:36:48 -05:00
ec79230965 | shuffled web UI options hidden by cfg.experimental into their own tab, exposed early exit selection to inferencing (it kinda works naively, still need to implement self-speculation) | 2024-11-01 21:30:06 -05:00
ef1c17430f | skip step on NaN loss (ironically I have not had a NaN loss after adding this), throw exception on invalid cfg.dataset.sample_type and sample_order combination (because I was tricked by this in my yaml and had inconsistent VRAM usage) | 2024-11-01 20:54:53 -05:00
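The NaN-loss guard amounts to something like this minimal sketch (the training-loop names are assumed; it presumes model(batch) returns a scalar loss tensor):

```python
import torch

def training_step(model, batch, optimizer):
    loss = model(batch)
    if not torch.isfinite(loss):
        # drop the step entirely rather than propagate NaN/Inf gradients
        optimizer.zero_grad(set_to_none=True)
        return None
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return loss.item()
```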
fb8faa295b | actually, float16(+AMP) and layerskip are bad and will kill the model... | 2024-11-01 18:36:44 -05:00
edf1e66bf9 | layerskip_r=6 fries the model so hard the loss is sub-1... | 2024-11-01 17:06:07 -05:00
9b6c57bc57 | third time's the charm (for some reason it escaped me that I should treat early exit loss as an aux_loss to be used with the normal loss, as if I were training a MoE's router) | 2024-11-01 12:50:37 -05:00
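The aux_loss framing above, sketched with assumed names (weights included): keep the normal loss from the final layer and add the averaged per-layer early-exit losses as a scaled auxiliary term, much like a MoE router's auxiliary loss.

```python
import torch
import torch.nn.functional as F

def layerskip_loss(final_logits, early_exit_logits, targets, aux_weight: float = 0.1):
    # final_logits: [N, vocab], early_exit_logits: list of [N, vocab], targets: [N]
    loss = F.cross_entropy(final_logits, targets)
    if early_exit_logits:
        # average the per-layer early-exit losses and fold them in as an auxiliary term
        aux = sum(F.cross_entropy(l, targets) for l in early_exit_logits) / len(early_exit_logits)
        loss = loss + aux_weight * aux
    return loss
```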
76ebef45dc | off-by-one... | 2024-10-31 13:24:48 -05:00
b63293cbbe | ugh | 2024-10-30 22:49:11 -05:00
a22534e8f4 | layer skip training implemented (need to gut the inferencing from the repo, and to actually see if the model can benefit from this) | 2024-10-30 20:05:45 -05:00
4049f51ba9 | added option to load lora directly from the model file itself with --lora | 2024-10-26 00:13:10 -05:00
023c3af331 | updated readme to reflect changes | 2024-10-25 22:17:05 -05:00
ccf71dc1b6 | added option to load from a model state dict directly instead of a yaml (to-do: do this for LoRAs too), automatically download the default model if none is provided | 2024-10-25 22:15:15 -05:00
a96f5aee32 | adjusted how I want to pass eval kwargs | 2024-10-25 20:38:09 -05:00
92e6bff6dc | actually, AR temp 0.5 with rep pen 1.125 seems to keep the benefit of better outputs without the degradation that otherwise shows up some (but not all) of the time | 2024-10-23 00:03:35 -05:00
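For reference, a generic sketch of how those two settings interact at sampling time (this is the textbook repetition-penalty-plus-temperature formulation, not necessarily the repo's exact sampler):

```python
import torch

def sample(logits: torch.Tensor, prev_tokens, temperature: float = 0.5, rep_pen: float = 1.125):
    # logits: [vocab_size] for the current step; prev_tokens: iterable of already-emitted token ids
    logits = logits.clone()
    for t in set(prev_tokens):
        # push already-seen tokens away from being re-picked
        logits[t] = logits[t] / rep_pen if logits[t] > 0 else logits[t] * rep_pen
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```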