vall-e/vall_e/models (last commit: 2024-08-26 19:13:34 -05:00)
| File | Last commit message | Date |
| --- | --- | --- |
| arch/ | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm (see the backend-probe sketch below) | 2024-08-26 19:13:34 -05:00 |
| __init__.py | | |
| ar_nar.py | ughghghhhh | 2024-08-09 21:15:01 -05:00 |
| ar.py | fix issue with sft and shared tensors... (see the safetensors sketch below) | 2024-08-04 19:56:21 -05:00 |
| base.py | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm | 2024-08-26 19:13:34 -05:00 |
| experimental.py | ughghghhhh | 2024-08-09 21:15:01 -05:00 |
| lora.py | | |
| nar.py | changed torch.Tensor().to(device, dtype) to just torch.tensor(..., device, dtype) because it's been bothering my autism that I'm creating tensors then converting rather than creating with the right device/dtype; some 'optimization' to compile the model but it doesn't seem to do anything useful (see the tensor-construction sketch below) | 2024-08-03 22:10:21 -05:00 |
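The arch/ and base.py commits describe attention-backend selection: a Triton-based fused-attention kernel was added, and under ROCm the code now simply queries whether flash_attn is importable. A minimal sketch of that kind of probe, assuming nothing about the repo's actual helpers (pick_attention_backend is a hypothetical name):

```python
import torch

def pick_attention_backend() -> str:
    # Hypothetical helper, not the repo's actual API.
    # ROCm builds of PyTorch expose a version string in torch.version.hip;
    # it is None on CUDA builds.
    if torch.version.hip is not None:
        try:
            # Under ROCm, just probe whether a flash_attn build is importable.
            from flash_attn import flash_attn_func  # noqa: F401
            return "flash_attn"
        except ImportError:
            pass
    # Fall back to PyTorch's built-in scaled-dot-product attention.
    return "sdpa"
```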
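The ar.py fix mentions "sft and shared tensors"; sft presumably refers to safetensors checkpoints, which refuse to serialize tied (storage-sharing) weights through save_file. A sketch of the failure mode and the usual workaround, using a toy tied-weight model (the Tied class is illustrative, not from the repo):

```python
import torch
from safetensors.torch import save_model

class Tied(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(10, 4)
        self.head = torch.nn.Linear(4, 10, bias=False)
        self.head.weight = self.embed.weight  # two modules share one storage

model = Tied()
# safetensors.torch.save_file(model.state_dict(), "model.sft") would raise,
# because safetensors rejects aliased storage; save_model dedupes it first.
save_model(model, "model.sft")
```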
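The nar.py commit swaps torch.Tensor().to(device, dtype) for constructing tensors with the target device and dtype up front, avoiding a default-device allocation followed by a copy. A minimal illustration (the values are arbitrary):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Before: allocate on the default device with the default dtype,
# then move and cast, which costs an extra allocation and copy.
before = torch.tensor([1, 2, 3]).to(device=device, dtype=torch.int64)

# After: build the tensor with the right device and dtype from the start.
after = torch.tensor([1, 2, 3], device=device, dtype=torch.int64)

assert before.device == after.device and before.dtype == after.dtype
```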