• https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.

    XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG

  • Joined on 2022-10-10
mrq pushed to master at mrq/vall-e 2025-02-21 02:50:59 +00:00
50506e5ebc oops
mrq pushed to master at mrq/vall-e 2025-02-21 02:50:20 +00:00
fb867dbb21 oops
mrq pushed to master at mrq/vall-e 2025-02-20 20:51:35 +00:00
fc1ec2019d added option to buffer process jobs across multiple speakers to maybe squeeze out some throughput speeds for vall_e.emb.process (in the event of lots of speakers with low file counts, such as Emilia)
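The buffering idea described in that commit — accumulating utterances from many low-file-count speakers into one shared batch so the processor isn't starved by tiny per-speaker batches — can be sketched generically. All names below are illustrative; this is not the actual `vall_e.emb.process` API.

```python
# Sketch: buffer utterances across speakers into fixed-size batches so the
# encoder sees full batches even when each speaker has only a file or two
# (the Emilia case mentioned in the commit). Names are hypothetical.

def buffered_batches(speakers, batch_size):
    """speakers: dict of speaker -> list of utterance paths.
    Yields lists of (speaker, utterance) pairs of length batch_size."""
    buffer = []
    for speaker, utterances in speakers.items():
        for utt in utterances:
            buffer.append((speaker, utt))
            if len(buffer) == batch_size:
                yield buffer
                buffer = []
    if buffer:  # flush whatever remains at the end
        yield buffer

# Three speakers with 1-2 files each still produce one full batch of 4.
speakers = {"a": ["a0", "a1"], "b": ["b0"], "c": ["c0", "c1"]}
batches = list(buffered_batches(speakers, 4))
```

Without the cross-speaker buffer, the same data would yield three batches of size 2, 1, and 2; with it, one full batch plus a single leftover.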
mrq pushed to master at mrq/vall-e 2025-02-20 19:35:36 +00:00
ce1ca0124a lol...
mrq pushed to master at mrq/vall-e 2025-02-20 19:34:28 +00:00
4a4a46c14f lol...
mrq pushed to master at mrq/vall-e 2025-02-20 19:32:51 +00:00
5cce684a62 lol...
mrq pushed to master at mrq/vall-e 2025-02-19 01:51:43 +00:00
92139b6da9 additional cruft, added a note in documentation to be aware of NUMA node topology when running vall_e.emb.process with more than one process
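The NUMA note can be illustrated with a small sketch: when running more than one process, pinning each worker to the CPUs of a single NUMA node keeps its memory accesses local instead of crossing the interconnect. The two-node topology below is a made-up example, not read from any machine; on Linux the real layout is visible via `lscpu` or `/sys/devices/system/node/`.

```python
# Sketch: assign each worker process the CPU set of one NUMA node,
# round-robin. The topology list is a hypothetical 2-node, 8-CPU box;
# query the real layout with lscpu or /sys/devices/system/node on Linux.

def plan_affinity(numa_nodes, num_workers):
    """numa_nodes: list of CPU-id sets, one per NUMA node.
    Returns a list mapping worker index -> CPU set to pin that worker to."""
    return [numa_nodes[i % len(numa_nodes)] for i in range(num_workers)]

# Hypothetical layout: CPUs 0-3 on node 0, CPUs 4-7 on node 1.
nodes = [{0, 1, 2, 3}, {4, 5, 6, 7}]
plan = plan_affinity(nodes, 4)
# Each worker i would then pin itself, e.g. with the Linux-only
# os.sched_setaffinity(0, plan[i]), or by launching under numactl.
```

The point is only that no worker's CPU set should straddle two nodes; how the pinning is actually applied (`numactl --cpunodebind`, `os.sched_setaffinity`, taskset) is up to the launcher.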
mrq pushed to master at mrq/vall-e 2025-02-18 16:44:38 +00:00
596c2df11c added arg to skip processing speakers without enough utterances, for whenever I get around to processing my subset of Emilia for nvidia/audio-codec-44khz (because Emilia has a ton of low-utterance speakers, and right now my focus with the nemo model is on getting it to actually speak without much problem rather than feeding it a gorillion speakers)
mrq pushed to master at mrq/vall-e 2025-02-18 16:14:33 +00:00
8331eee6fa added arg to limit vall_e.emb.process batch size, since there are some speaker groups in LibriLight/Speech/whatever that have 10K utterances and I'm growing impatient
mrq pushed to master at mrq/vall-e 2025-02-18 16:08:02 +00:00
dbcab9e570 added arg to limit vall_e.emb.process batch size, since there are some speaker groups in LibriLight/Speech/whatever that have 10K utterances and I'm growing impatient
mrq pushed to master at mrq/vall-e 2025-02-18 16:06:43 +00:00
6bc61c1a0f added arg to limit vall_e.emb.process batch size, since there are some speaker groups in LibriLight/Speech/whatever that have 10K utterances and I'm growing impatient
mrq pushed to master at mrq/vall-e 2025-02-16 17:30:01 +00:00
8f86cf0e4e possible logic optimization so I don't spend another 15 minutes simply iterating back to the point I was at in vall_e.emb.process
mrq pushed to master at mrq/vall-e 2025-02-15 23:37:12 +00:00
0dc49ef4d5 documentation update while I wait for more audio (between 4 and 8 seconds per utterance) to quantize for nvidia/audio-codec-44khz (I was foolish to think I could get something serviceable with just 4 seconds max per utterance)
mrq pushed to master at mrq/vall-e 2025-02-14 22:32:20 +00:00
13c3a08853 never mind, that's slow
mrq pushed to master at mrq/vall-e 2025-02-14 22:19:35 +00:00
285e493b12 ugh..........
mrq pushed to master at mrq/vall-e 2025-02-14 00:33:58 +00:00
a65c8144f4 with the amount of tweaks I keep making I could have probably had the nvidia/audio-codec-44khz model realized already......
mrq pushed to master at mrq/vall-e 2025-02-13 23:19:53 +00:00
mrq pushed to master at mrq/vall-e 2025-02-13 22:06:40 +00:00
mrq pushed to master at mrq/vall-e 2025-02-13 22:02:40 +00:00
54d65cf37d what has science done
mrq pushed to master at mrq/vall-e 2025-02-13 22:00:02 +00:00
133e01a25b what has science done