vall-e/vall_e/models
arch added flash_attn LlamaAttention (including flash_attn==1.0.9) 2024-08-18 20:51:14 -05:00
__init__.py
ar_nar.py ughghghhhh 2024-08-09 21:15:01 -05:00
ar.py fix issue with sft and shared tensors... 2024-08-04 19:56:21 -05:00
base.py added flash_attn LlamaAttention (including flash_attn==1.0.9) 2024-08-18 20:51:14 -05:00
experimental.py ughghghhhh 2024-08-09 21:15:01 -05:00
lora.py
nar.py changed torch.Tensor().to(device, dtype) to just torch.tensor(..., device, dtype) because it's been bothering my autism that I'm creating tensors then converting rather than creating with the right device/dtype; some 'optimization' to compile the model, but it doesn't seem to do anything useful 2024-08-03 22:10:21 -05:00
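
The nar.py commit above swaps a create-then-convert pattern for direct construction. A minimal sketch of the difference, with illustrative values that are not taken from the repository:

```python
import torch

# Assumed target placement for this sketch; the repo resolves these elsewhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.long

# Before: allocate with the default device/dtype, then copy/convert,
# paying for an extra allocation (and a host-to-device transfer on GPU).
stop = torch.Tensor([1024]).to(device, dtype)

# After: construct directly with the desired device and dtype in one step.
stop = torch.tensor([1024], device=device, dtype=dtype)
```

Note that torch.Tensor(...) is the legacy constructor, which always produces float32 on the default device, while torch.tensor(...) is the factory function that accepts device= and dtype= keywords at creation time.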