vall-e/vall_e/models/arch
Latest commit: 2024-06-13 20:08:22 -05:00
retnet_syncdoth/
__init__.py - actually, going for the suggested "2x layers, no intermediate scaling" is wrong for VALL-E; directly copying the normal transformer structure fixes mamba2 performance in the test trainer (see the layer-count sketch after this listing). 2024-06-13 20:08:22 -05:00
bitnet.py
llama.py
mamba.py
mixtral.py
mmfreelm.py - option to split the classifier per-level instead of sharing one (at this point I'm just scrambling to try and cope with training a DAC model, the NAR is being a pain); see the classifier sketch after this listing. 2024-06-11 22:28:59 -05:00
retnet.py
transformer.py
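The __init__.py commit above concerns how many layers a Mamba-2 stack should use when standing in for a transformer: the commonly suggested mapping roughly doubles the layer count (one SSM block per attention/MLP sublayer), while the commit found that mirroring the reference transformer's depth works better for VALL-E. The sketch below only illustrates that comparison; `Mamba2Block`, `build_stack`, and all sizes are hypothetical placeholders, not the repo's actual code or settings.

```python
# Minimal sketch (not the repo's code) contrasting the two layer-count choices.
import torch.nn as nn

class Mamba2Block(nn.Module):
    """Placeholder for a real Mamba-2 mixer block (e.g. from mamba_ssm)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mixer = nn.Linear(d_model, d_model)  # stand-in for the SSM mixer

    def forward(self, x):
        # pre-norm residual, matching the usual transformer layer layout
        return x + self.mixer(self.norm(x))

def build_stack(d_model: int, n_transformer_layers: int, doubled: bool) -> nn.Sequential:
    # doubled=True : the suggested "2x layers, no intermediate scaling" mapping
    # doubled=False: directly copy the normal transformer structure (same depth)
    n_layers = 2 * n_transformer_layers if doubled else n_transformer_layers
    return nn.Sequential(*[Mamba2Block(d_model) for _ in range(n_layers)])

# Illustrative numbers only: a 12-layer transformer baseline.
suggested = build_stack(d_model=1024, n_transformer_layers=12, doubled=True)   # 24 blocks
matched   = build_stack(d_model=1024, n_transformer_layers=12, doubled=False)  # 12 blocks, what the commit settled on
```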
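The mmfreelm.py commit adds an option to give each RVQ level its own output classifier instead of one projection shared across levels. The following is a hedged sketch of that split-vs-shared choice only; the class name, arguments (`n_levels`, `n_tokens`, `split`), and sizes are hypothetical and do not reflect the repo's actual implementation.

```python
# Hedged sketch (hypothetical names) of per-level vs. shared classifier heads.
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, d_model: int, n_tokens: int, n_levels: int, split: bool):
        super().__init__()
        self.split = split
        if split:
            # one output projection per RVQ level
            self.heads = nn.ModuleList(nn.Linear(d_model, n_tokens) for _ in range(n_levels))
        else:
            # a single projection shared by every level
            self.head = nn.Linear(d_model, n_tokens)

    def forward(self, hidden: torch.Tensor, level: int) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) -> logits: (batch, seq_len, n_tokens)
        return self.heads[level](hidden) if self.split else self.head(hidden)

# Illustrative usage with made-up sizes (8 RVQ levels, 1024-entry codebook).
logits = Classifier(d_model=1024, n_tokens=1024, n_levels=8, split=True)(
    torch.randn(1, 16, 1024), level=3,
)
```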