• https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.

    XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG

  • Joined on 2022-10-10
mrq pushed to master at mrq/vall-e 2025-02-09 18:26:10 +00:00
mrq pushed to master at mrq/vall-e 2025-02-09 18:20:26 +00:00
mrq pushed to master at mrq/vall-e 2025-02-08 02:44:43 +00:00
mrq pushed to master at mrq/vall-e 2025-02-08 02:41:03 +00:00
mrq pushed to master at mrq/vall-e 2025-02-08 00:47:42 +00:00
ed94b261dc could have sworn I had 'vall_e.emb.process --dtype' working; also a possible RAM optimization so I can stop locking up my server when firing four encoding processes
mrq pushed to master at mrq/vall-e 2025-02-07 05:21:42 +00:00
47eb498046 more tweaks
mrq pushed to master at mrq/vall-e 2025-02-06 21:09:41 +00:00
67a9401cce oops
mrq pushed to master at mrq/vall-e 2025-02-06 18:33:11 +00:00
712ce4af5d maybe fixed errors with DAC backend, added option to limit by duration in emb.process (because I only really need short utterances right now and I'm not ready to spend a week on processing everything again)
mrq pushed to master at mrq/vall-e 2025-02-06 03:50:28 +00:00
299cc88821 re-added amp encoding/decoding for audio, possible bad idea to ignore using amp instead if requested
mrq pushed to master at mrq/vall-e 2025-02-06 03:08:37 +00:00
7592befc53 updated vall_e.emb.process to allow for batched processing, some typo fixes (it's painfully slow on my 7900XTX...)
mrq pushed to master at mrq/vall-e 2025-02-06 02:49:46 +00:00
79c504c278 cleaned up encode/decode functions to make them a little more coherent, added option to batch encode/decode (would have been very nice in the past, but this should speed things up for me when I fall for the latest meme codec)
mrq pushed to master at mrq/vall-e 2025-02-05 16:20:22 +00:00
84174c1c1b oops
mrq pushed to master at mrq/vall-e 2025-02-05 02:25:15 +00:00
bb2ebe1ca2 fixed issues that may arise from updating transformers with attention, added nvidia/audio-codec-44khz backend support (by gutting everything necessary because I do NOT want to install more dependencies)
mrq pushed to master at mrq/vall-e 2025-02-05 02:24:42 +00:00
d8ee56f769 fixed issues that may arise from updating transformers with attention, added nvidia/audio-codec-44khz backend support (by gutting everything necessary because I do NOT want to install more dependencies)
mrq pushed to master at mrq/vall-e 2025-01-29 03:50:11 +00:00
0841f366e8 I should really just grab modelling_llama wholesale (fix for the adapted attention class)
mrq pushed to master at mrq/vall-e 2025-01-21 17:54:31 +00:00
e5f9da2221 oops
mrq pushed to master at mrq/vall-e 2025-01-21 03:52:06 +00:00
69c1d2991f updated mixtral backend (need this for something else)
mrq pushed to master at mrq/vall-e 2025-01-13 03:48:22 +00:00
1a26f789a5 added option to playback audio directly, removed no-phonemize option since I swear it worked in testing but it doesn't actually work
mrq pushed to master at mrq/vee-speedrun-ratings 2025-01-12 20:27:33 +00:00
d7e79d078f Forgot to update tweaks to chart.py (namely not doubling up on marker counts when used multiple times in a run)
mrq pushed to master at mrq/vee-speedrun-ratings 2025-01-12 07:32:39 +00:00
97d59d9a34 Good.