https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.
XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG
Joined on Oct 10, 2022
mrq commented on issue mrq/ai-voice-cloning#403
Very Bad Training Time & Results.
This might be related to #399, where using the "latest" drivers is actually a detriment when training close to max VRAM usage. I'd suggest either:
- downgrading your drivers
- reducing your…
mrq pushed to master at mrq/vall-e
- 63cc9cf37a added compat flags for torchscale because the maintainer for torchscale broke compat for existing models
mrq pushed to main at mrq/torchscale
- 008f1b6d18 added compat flags because I guess the maintainer assumed no one was actually using the retnet and thinks they can change things willy nilly
- ce77afe916 added arg to change RelPos's base
- 881d03079d Merge pull request #70 from sunyt32/retnet-official
- 50174a3078 fix fairseq example
- ab1d9d677a Merge pull request #69 from sunyt32/retnet-official
- Compare 11 commits »
mrq pushed to main at mrq/torchscale
- 02740c874d added arg to change RelPos's base
mrq pushed to main at mrq/torchscale
- ce77afe916 added arg to change RelPos's base
- 881d03079d Merge pull request #70 from sunyt32/retnet-official
- 50174a3078 fix fairseq example
- ab1d9d677a Merge pull request #69 from sunyt32/retnet-official
- 05a9628309 fix bug
- Compare 10 commits »
mrq commented on issue mrq/vall-e#8
Training GPU offer
> I might want some guidance on tweaking LR

Adjusting the LR is as simple as entering, for example, `lr 0.05` into the training window. The only caveat is having to remember to edit the…
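A console command like the `lr 0.05` one described above could be handled by a small dispatcher along these lines. This is a hypothetical sketch: the function name `handle_console_command` and the `param_groups` parameter are illustrative assumptions, not the actual ai-voice-cloning code.

```python
# Hypothetical sketch of a training-console command handler like the one
# described above. Names are illustrative; this is NOT the actual
# ai-voice-cloning implementation.
def handle_console_command(line: str, param_groups: list) -> bool:
    """Parse a command such as 'lr 0.05' and apply the new learning rate
    to every optimizer parameter group. Returns True if handled."""
    parts = line.strip().split()
    if len(parts) == 2 and parts[0] == "lr":
        try:
            new_lr = float(parts[1])
        except ValueError:
            return False  # malformed value, e.g. 'lr abc'
        for group in param_groups:
            group["lr"] = new_lr
        return True
    return False  # not an 'lr' command

# Usage with a stand-in for a PyTorch optimizer's param_groups:
groups = [{"lr": 0.1}, {"lr": 0.1}]
handle_console_command("lr 0.05", groups)
print(groups[0]["lr"])  # 0.05
```

Mutating the optimizer's `param_groups` dicts in place is the standard way to change the learning rate mid-run in PyTorch, which is presumably why a one-line console command suffices here.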
mrq pushed to master at mrq/vall-e
- 12cfc9e502 added prodigyopt as a dependency because I keep forgetting
mrq pushed to master at mrq/vall-e
- 153f8b293c added min-x and min-y arguments to plot.py, helper script to download from my existing checkpoint
mrq commented on issue mrq/vall-e#8
Training GPU offer
desu, I would first see if:
- [VALL-E X](https://huggingface.co/spaces/Plachta/VALL-E-X/) is serviceable enough for you (I personally have my issues with it, but that's neither here nor there),…
mrq commented on issue mrq/ai-voice-cloning#152
VALL-E Integration (and In Response To TorToiSe: a Quick Retrospective)
> I'm asking about the accuracies and losses you see once it turns into human sounding (just trying to debug inference for my custom dataset). E.g. is it 50% acc, 60%, 70%, 80%? Since losses and…
mrq commented on issue mrq/ai-voice-cloning#152
VALL-E Integration (and In Response To TorToiSe: a Quick Retrospective)
Also I just realized the issue is working again. I'm not sure why it broke, or how it resolved itself. There wasn't really anything noteworthy outside of:
- I added mirostat sampling, but it's…
mrq commented on issue mrq/ai-voice-cloning#152
VALL-E Integration (and In Response To TorToiSe: a Quick Retrospective)
> So, I'm trying to overfit on just 3 speakers just to ensure I have things set up correctly.

Right, I never went back to try and test training on much narrower datasets, as I was doing things…
mrq pushed to master at mrq/vall-e
- d12877ee09 added option to set probability of selecting the AR during training under a monolithic AR+NAR, added some more to-dos while I have them in mind
mrq commented on issue mrq/ai-voice-cloning#400
Upscaled output creates bad quality?
Yeah, by default the generated outputs will be resampled to 44K (for some reason I didn't have it set at 44.1K, but I think that's because Voicefixer resamples to 44K anyways). I can't remember…
mrq pushed to master at mrq/vall-e
- e85b798fbf set default NAR levels to max for the web UI
mrq commented on issue mrq/ai-voice-cloning#398
Issue #152 Inaccessable
I'm not too sure what happened as I've been "away" for a few days, and there hasn't been anything to note for a while now. I'll see about looking into it when (if) I get the chance, but I would…
mrq commented on issue mrq/ai-voice-cloning#399
Nvidia Driver Woes - Super slow training
I remember reading in the LLaMA sphere to stick with older Nvidia driver versions due to newer drivers happily spilling over-committed VRAM allocations onto system RAM. I haven't encountered it…
mrq pushed to master at mrq/vall-e
- c7fb740d41 do not specify a default dtype for the web UI, let it implicitly load from the yaml instead