mrq / vall-e
vall_e / models
Latest commit aee08b7307 by mrq (2024-11-03 09:58:29 -06:00): changed layerskip float16 training warning (since it didn't seem to fry on my 4xV100 system)
arch
very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough)
2024-11-02 11:49:05 -05:00
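
The "naive layerskip speculative sampling" note above boils down to: after each decoder layer, project that layer's state through the LM head and exit as soon as the prediction looks confident enough. A minimal sketch of that idea, where `layers`, `norm`, `lm_head`, and `threshold` are assumed stand-ins rather than the repo's actual attribute names:

```python
import torch.nn.functional as F

def naive_early_exit(x, layers, norm, lm_head, threshold=0.9):
    """Run decoder layers one at a time; exit as soon as the current
    layer's state is "good enough", i.e. the LM head is already
    confident about the next token. All names here are assumptions."""
    for i, layer in enumerate(layers):
        x = layer(x)
        # project the *current* layer's (normed) state to vocab logits
        logits = lm_head(norm(x))
        # confidence of the top token at the final position
        # (collapsed across the batch for simplicity)
        confidence = F.softmax(logits[:, -1, :], dim=-1).max()
        if confidence >= threshold:
            return logits, i  # this layer's state suffices; exit early
    return logits, len(layers) - 1  # fell through: used every layer
```

This matches the "just checks if the current layer's state is good enough" wording; proper self-speculation (verifying against the skipped layers) is what the later commit messages flag as still to-do.
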
__init__.py
added option to load from a model state dict directly instead of a yaml (to-do: do this for LoRAs too), automatically download the default model if none is provided
2024-10-25 22:15:15 -05:00
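
Loading from a raw state dict with an automatic fallback download is roughly the following; the URL, cache path, and function name are placeholders for illustration, not the repo's real values:

```python
from pathlib import Path
import urllib.request

import torch

# placeholder URL and cache path, purely illustrative
DEFAULT_MODEL_URL = "https://example.com/vall_e/model.sd.pth"
DEFAULT_CACHE = Path.home() / ".cache" / "vall_e" / "model.sd.pth"

def load_weights(model, path=None, device="cpu"):
    """Load a raw state dict into `model`, downloading the default
    weights if no path was provided and nothing is cached yet."""
    if path is None:
        path = DEFAULT_CACHE
        if not path.exists():
            path.parent.mkdir(parents=True, exist_ok=True)
            urllib.request.urlretrieve(DEFAULT_MODEL_URL, str(path))
    state_dict = torch.load(path, map_location=device)
    model.load_state_dict(state_dict)
    return model.to(device)
```
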
ar_nar.py
changed layerskip float16 training warning (since it didn't seem to fry on my 4xV100 system)
2024-11-03 09:58:29 -06:00
ar.py
shuffled web UI options hidden by cfg.experimental into their own tab, exposed early-exit selection to inferencing (it kinda works naively, still need to implement self-speculation)
2024-11-01 21:30:06 -05:00
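
Exposing early-exit selection to inference then amounts to letting the caller cap how many layers run, sketched here with the same assumed names as above:

```python
def forward_with_exit(x, layers, norm, lm_head, exit_layer=None):
    # exit_layer=None runs the full stack; an integer stops the
    # forward pass after that many layers (the naive early exit)
    stop = len(layers) if exit_layer is None else exit_layer
    for layer in layers[:stop]:
        x = layer(x)
    return lm_head(norm(x))
```
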
base.py
changed layerskip float16 training warning (since it didn't seem to fry on my 4xV100 system)
2024-11-03 09:58:29 -06:00
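
The float16 warning presumably softens what was a harder guard into a log line; a guess at its shape, with assumed argument names:

```python
import logging

import torch

_logger = logging.getLogger(__name__)

def check_layerskip_dtype(layerskip: bool, dtype: torch.dtype) -> None:
    # softened from an error to a warning, since float16 + LayerSkip
    # "didn't seem to fry" on the author's 4x V100 system
    if layerskip and dtype == torch.float16:
        _logger.warning(
            "Training with LayerSkip under float16; bfloat16 is the "
            "safer choice if your hardware supports it."
        )
```
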
experimental.py
moved prints to use the logger, edited the README (fused_attn doesn't seem stable for training)
2024-08-29 13:27:16 -05:00
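
The print-to-logger change follows the standard module-level logger pattern:

```python
import logging

_logger = logging.getLogger(__name__)

def report(path: str) -> None:
    # before: print(f"loaded model from {path}")
    _logger.info("loaded model from %s", path)
```
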
lora.py
naive model offloading support (automatically splits parts of the model across requested devices per memory constraints, either inferred or requested in the YAML; input tensors are automatically migrated to the right device; it SEEMS to work for training under the test trainer when split between GPU and CPU) (this was specifically only because that Flux image-gen model released, so I can test it there)
2024-08-01 20:12:06 -05:00
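
The offloading described above greedily assigns submodules to the fast device until a memory budget runs out, spills the rest to CPU, and migrates inputs per submodule. A simplified sketch under those assumptions (not the repo's actual implementation; keyword arguments and buffer sizes are ignored):

```python
import torch

def parameter_bytes(module: torch.nn.Module) -> int:
    return sum(p.numel() * p.element_size() for p in module.parameters())

def naive_offload(model, budget_bytes, fast="cuda", slow="cpu"):
    """Greedily place top-level children on `fast` until the byte
    budget is exhausted, spill the rest to `slow`, and migrate each
    submodule's positional inputs to wherever that submodule lives."""
    used = 0
    for child in model.children():
        size = parameter_bytes(child)
        device = fast if used + size <= budget_bytes else slow
        if device == fast:
            used += size
        child.to(device)

        def move_inputs(module, args, _device=device):
            # pre-forward hook: returning a tuple replaces the inputs,
            # so tensors are moved to this submodule's device on the fly
            return tuple(
                a.to(_device) if torch.is_tensor(a) else a for a in args
            )

        child.register_forward_pre_hook(move_inputs)
    return model
```
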
nar.py
shuffled web UI options hidden by cfg.experimental into their own tab, exposed early-exit selection to inferencing (it kinda works naively, still need to implement self-speculation)
2024-11-01 21:30:06 -05:00