mrq / vall-e
vall-e / vall_e / models / arch (at 31f71fa134)
Latest commit 83eab4fa59 by mrq (2024-06-13 20:08:22 -05:00):
    actually going for the suggested "2x layers, no intermediate scaling" is wrong for VALL-E, directly copying the normal transformer structure fixes mamba2 performance in the test trainer
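One reading of that message: the referenced suggestion stacks twice as many Mamba2 layers at constant width (since a plain Mamba block has no separate feed-forward), while "directly copying the normal transformer structure" keeps the transformer's depth and its usual mixer-plus-feed-forward block layout, with Mamba2 standing in for attention. The sketch below contrasts the two layouts under assumed sizes, using mamba_ssm's Mamba2 mixer; it is an illustration, not the repository's mamba.py.

# Rough sketch of the two layouts named in the commit message above; the layer
# count, width, and the mamba_ssm import are assumptions for illustration only.
import torch
from torch import nn
from mamba_ssm import Mamba2  # assumes the mamba-ssm package (2.x) is installed

D_MODEL = 1024   # hypothetical model width
N_LAYERS = 12    # hypothetical depth of the reference transformer

class ResidualMamba2(nn.Module):
    """Plain pre-norm Mamba2 layer: x + Mamba2(norm(x))."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mixer = Mamba2(d_model=d_model)
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mixer(self.norm(x))

class Mamba2TransformerLayer(nn.Module):
    """Transformer-style block with Mamba2 standing in for self-attention."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mixer = Mamba2(d_model=d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))   # token mixing (replaces attention)
        x = x + self.ffn(self.norm2(x))     # per-token feed-forward, as in a transformer
        return x

# "2x layers, no intermediate scaling": a doubled stack of plain Mamba2 layers.
doubled_stack = nn.Sequential(*[ResidualMamba2(D_MODEL) for _ in range(2 * N_LAYERS)])

# "Directly copying the normal transformer structure": the transformer's own depth,
# each layer keeping the usual mixer + feed-forward residual pair.
transformer_like_stack = nn.Sequential(*[Mamba2TransformerLayer(D_MODEL) for _ in range(N_LAYERS)])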
retnet_syncdoth/
__init__.py (last touched by the commit above, 2024-06-13 20:08:22 -05:00)
bitnet.py
llama.py
mamba.py
mixtral.py
mmfreelm.py (2024-06-11 22:28:59 -05:00)
    option to split classifier per-level instead of sharing one (at this point I'm just scrambling to try and cope with training a DAC model, the NAR is being a pain); a sketch of this option follows the listing
retnet.py
transformer.py
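The mmfreelm.py message refers to giving each residual-quantizer level its own output head rather than one classifier shared across all levels. Below is a minimal sketch of such an option; the class name, flag, and sizes are hypothetical and not the repository's actual interface.

# Minimal sketch of a per-level vs. shared output classifier; names, sizes, and
# the `split` flag are assumptions for illustration, not the repository's API.
import torch
from torch import nn

class AudioTokenClassifier(nn.Module):
    def __init__(self, d_model: int, n_tokens: int, n_levels: int, split: bool):
        super().__init__()
        self.split = split
        if split:
            # one projection head per RVQ level (e.g. 8+ levels for a DAC model)
            self.heads = nn.ModuleList(nn.Linear(d_model, n_tokens) for _ in range(n_levels))
        else:
            # a single projection shared by every level
            self.head = nn.Linear(d_model, n_tokens)

    def forward(self, x: torch.Tensor, level: int) -> torch.Tensor:
        # x: (batch, seq_len, d_model) hidden states being decoded at `level`
        return self.heads[level](x) if self.split else self.head(x)

# Usage: logits over a 1024-entry codebook for RVQ level 3 of a hypothetical 8-level setup.
clf = AudioTokenClassifier(d_model=1024, n_tokens=1024, n_levels=8, split=True)
logits = clf(torch.randn(2, 250, 1024), level=3)    # -> (2, 250, 1024)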