mrq/vall-e
vall-e/vall_e/ext/retnet_hf (at commit 4f5c9e518a)
Latest commit: aa1e25fbf5 by mrq: backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNet's torchscale retnet for shits and grins, etc. (2024-04-16 10:02:31 -05:00)
__init__.py
    added FP8 support through NVIDIA/TransformerEngine, added RetNet_HF through syncdoth/RetNet (as an alternative to branch away from torchscale)
    2024-04-08 20:14:51 -05:00

configuration_retnet.py
    added FP8 support through NVIDIA/TransformerEngine, added RetNet_HF through syncdoth/RetNet (as an alternative to branch away from torchscale)
    2024-04-08 20:14:51 -05:00

modeling_retnet.py
    backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNet's torchscale retnet for shits and grins, etc.
    2024-04-16 10:02:31 -05:00