mrq / vall-e
vall-e / vall_e / models

Latest commit: 0841f366e8 by mrq (2025-01-28 21:55:05 -06:00)
"I should really just grab modelling_llama wholesale (fix for the adapted attention class)"
..
arch/         I should really just grab modelling_llama wholesale (fix for the adapted attention class)   2025-01-28 21:55:05 -06:00
__init__.py
ar_nar.py     updated mixtral backend (need this for something else)                                      2025-01-20 21:50:56 -06:00
base.py       updated mixtral backend (need this for something else)                                      2025-01-20 21:50:56 -06:00
lora.py