• https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.

    XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG

  • Joined on 2022-10-10
mrq pushed to master at mrq/vall-e 2024-10-18 14:36:10 +00:00
0dfab973e7 oops
mrq pushed to master at mrq/vall-e 2024-10-18 14:33:19 +00:00
90654766a8 oops
mrq pushed to master at mrq/vall-e 2024-10-17 22:02:55 +00:00
75b90be325 cleaned up unused config flags, allow less strict yaml by pruning missing keys, renamed some dataset configs to be more unified
mrq pushed to master at mrq/vall-e 2024-10-17 19:33:32 +00:00
8b6095f681 saner defaults, maybe
mrq pushed to master at mrq/vall-e 2024-10-16 19:23:58 +00:00
f88097ccf6 add config option to set the rate of sampling randomly vs similar speakers during training
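The commit above describes a training-time choice between drawing the input prompt from the same speaker or from the dataset at random, at a configurable rate. A minimal sketch of that idea, with hypothetical names (the actual vall-e config flag and dataloader code may differ):

```python
import random

def pick_prompt(speaker_utts, all_utts, similar_rate=0.5, rng=random):
    """Choose the input prompt from the same speaker with probability
    `similar_rate`, otherwise from the whole dataset.
    Hypothetical helper; not the actual vall-e implementation."""
    pool = speaker_utts if rng.random() < similar_rate else all_utts
    return rng.choice(pool)
```

At `similar_rate=1.0` every prompt comes from the same speaker; at `0.0` prompts are fully random, which trades speaker consistency for broader coverage.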
mrq pushed to master at mrq/vall-e 2024-10-16 00:26:40 +00:00
mrq pushed to master at mrq/vall-e 2024-10-16 00:21:03 +00:00
eea70f5698 kludge fix for an oversight in the model when trying to train for longer input prompt durations......
mrq pushed to master at mrq/vall-e 2024-10-13 16:57:30 +00:00
84005c5b00 entropix apparently processes the entire sequence of logits but it falls apart when doing that
mrq pushed to master at mrq/vall-e 2024-10-13 15:58:42 +00:00
c800d28bb8 respect attention defined in the yaml for web UI (which might explain why there's been a discrepancy in outputs for me)
mrq pushed to master at mrq/vall-e 2024-10-13 05:22:43 +00:00
ed6b7a690f ugh.........
mrq pushed to master at mrq/vall-e 2024-10-13 04:49:39 +00:00
d405f243d4 at wits end in trying to output the right attention scores
mrq pushed to master at mrq/vall-e 2024-10-12 17:05:18 +00:00
70cf694cfd output attention scores for SDPA/flash, since naive attention seems broken
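Context for the commits above: fused SDPA/flash-attention kernels return only the attention output, not the per-position score matrix, so getting scores out requires recomputing attention naively. A self-contained sketch of scaled dot-product attention that also returns the scores (plain Python over row vectors; the actual vall-e code operates on PyTorch tensors and may differ):

```python
import math

def naive_attention(q, k, v):
    """Scaled dot-product attention that also returns the softmax score
    matrix, which fused SDPA/flash kernels do not expose.
    q, k, v: lists of row vectors (seq_len x dim). Illustrative sketch only."""
    d = len(q[0])
    scale = 1.0 / math.sqrt(d)
    scores = []
    for qi in q:
        # raw scaled dot products against every key
        row = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        # numerically stable softmax over the row
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        scores.append([e / z for e in exps])
    # weighted sum of value rows per query position
    out = [[sum(s * vj[t] for s, vj in zip(srow, v)) for t in range(len(v[0]))]
           for srow in scores]
    return out, scores
```

An entropy-based sampler needs those score rows, which is presumably why the commits below wrestle with overriding the naive attention path to produce them.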
mrq pushed to master at mrq/vall-e 2024-10-12 16:25:13 +00:00
mrq pushed to master at mrq/vall-e 2024-10-12 16:23:56 +00:00
04e983b86b modified demo page to be more modular with demoing comparisons, actually provide a path to use modified naive attention, entropix sampling is not tied to an experimental yaml flag now
mrq pushed to master at mrq/vall-e 2024-10-12 15:37:35 +00:00
mrq pushed to master at mrq/vall-e 2024-10-12 15:01:44 +00:00
3d6ef9666b overridden naive llama attention to get the right score values that entropix needs
mrq pushed to master at mrq/vall-e 2024-10-12 14:53:31 +00:00
mrq pushed to master at mrq/vall-e 2024-10-12 14:42:16 +00:00
d6f7c86a5c entropix tweaks (it doesn't output garbage but it loves to go for silence)
mrq pushed to master at mrq/vall-e 2024-10-12 03:32:10 +00:00
d0ab7d755a added min-p (really does not seem useful since it's very sensitive), more tweaks to entropix
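Min-p, mentioned in the commit above, filters the sampling distribution by keeping only tokens whose probability is at least `min_p` times the top token's probability. A minimal sketch in plain Python (the actual vall-e implementation may differ):

```python
import math

def min_p_filter(logits, min_p=0.1):
    """Keep tokens whose probability is >= min_p * max probability,
    then renormalize. Illustrative sketch, not the vall-e code."""
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # threshold scales with the most likely token's probability
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    z = sum(kept)
    return [p / z for p in kept]
```

The sensitivity the commit complains about follows from the threshold being multiplicative: when the distribution is peaked, even a small `min_p` can prune everything but the argmax.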
mrq pushed to master at mrq/vall-e 2024-10-12 02:14:29 +00:00
bef43a0c18 added experimental entropix sampling support
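Entropix-style samplers, referenced throughout the commits above, branch on two statistics of the next-token distribution: the entropy and the varentropy (variance of per-token surprisal), switching between greedy decoding, plain sampling, and more aggressive exploration depending on how confident and how uneven the distribution is. A sketch of computing those two statistics (illustrative only; the actual entropix heuristics and the vall-e integration may differ):

```python
import math

def entropy_varentropy(logits):
    """Entropy and varentropy of the softmax distribution over logits.
    Entropix-style samplers pick a decoding strategy from these two
    numbers; this is a sketch, not the vall-e implementation."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # entropy is the mean surprisal -log p; varentropy its variance
    ent = -sum(p * math.log(p) for p in probs if p > 0)
    var = sum(p * (-math.log(p) - ent) ** 2 for p in probs if p > 0)
    return ent, var
```

Low entropy with low varentropy suggests a confidently correct argmax; high values of either suggest sampling more broadly. The earlier commit noting that entropix "falls apart" when fed the entire logit sequence is consistent with these statistics being meaningful only for the single next-token distribution.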