mrq / vall-e
vall-e / vall_e / models at commit ccb14c06ef
Latest commit ccb14c06ef by mrq (2024-06-14 19:42:17 -05:00):
mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (training with my 4xV100s are not happy with mamba2 because of triton)
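The switch above trades the Triton-backed Mamba2 kernels for vasqu/mamba2-torch, a pure-PyTorch implementation with a HuggingFace-style interface, so the model can train on GPUs that Triton does not support well (such as V100s). Below is a minimal sketch of what building such a backbone could look like; the import path, the Mamba2Config/Mamba2ForCausalLM names, and their fields are assumptions about that package, not code taken from this repository.

```python
# Minimal sketch: a Mamba2 backbone from a pure-PyTorch implementation
# (e.g. vasqu/mamba2-torch), so no Triton kernels are needed at train time.
# NOTE: the import path, class names, and config fields are assumed, not
# taken from this repository or verified against that package.
import torch
from mamba2_torch import Mamba2Config, Mamba2ForCausalLM  # assumed API

config = Mamba2Config(
    vocab_size=1024,        # placeholder token space (e.g. audio codec tokens)
    hidden_size=1024,
    num_hidden_layers=12,   # plain transformer-like depth, per the ar_nar.py note
)
model = Mamba2ForCausalLM(config)

# Pure torch ops only, so this also runs on pre-Ampere GPUs such as V100s.
tokens = torch.randint(0, config.vocab_size, (1, 256))
logits = model(input_ids=tokens).logits   # (batch, seq_len, vocab_size)
```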
arch
mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (training with my 4xV100s are not happy with mamba2 because of triton)
2024-06-14 19:42:17 -05:00
__init__.py
ugh
2024-06-11 23:59:28 -05:00
ar_nar.py
actually going for the suggested "2x layers, no intermediate scaling" is wrong for VALL-E, directly copying the normal transformer structure fixes mamba2 performance in the test trainer
2024-06-13 20:08:22 -05:00
base.py
mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (training with my 4xV100s are not happy with mamba2 because of triton)
2024-06-14 19:42:17 -05:00
experimental.py
actually going for the suggested "2x layers, no intermediate scaling" is wrong for VALL-E, directly copying the normal transformer structure fixes mamba2 performance in the test trainer
2024-06-13 20:08:22 -05:00
nar.py
the NAR only dream is dead (it just won't work)
2024-06-12 19:49:47 -05:00