0e3bbc55f8 | 2023-03-06 05:21:33 +00:00
added api_name for generation, added whisperx backend, relocated the 'use whispercpp' option to the whisper backend list

788a957f79 | 2023-03-06 00:44:29 +00:00
stretch loss plot to target iteration just so it's not so misleading with the scale

5be14abc21 | 2023-03-05 23:55:27 +00:00
UI cleanup, actually fix syncing the epoch counter (I hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration length, signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model)

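As an aside, a minimal sketch of the kind of NaN check described above; the dict layout and names are my own illustration, not the repo's actual code:

    import math

    def has_nan_metric(metrics: dict) -> bool:
        # flag any reported training metric (loss, lr, etc.) that parsed as NaN
        # so the UI can warn that the run is likely ruined
        return any(isinstance(v, float) and math.isnan(v) for v in metrics.values())
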
287738a338 | 2023-03-05 20:42:45 +00:00
(should) fix reported epoch metric desyncing from the de facto metric, fixed finding the next milestone from the wrong sign because of 2AM brain

206a14fdbe | 2023-03-05 20:30:27 +00:00
brainworms

b82961ba8a | 2023-03-05 20:13:39 +00:00
typo

b2e89d8da3 | 2023-03-05 19:58:15 +00:00
oops

8094401a6d | 2023-03-05 19:48:24 +00:00
print the LR in e-notation

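For illustration, printing the learning rate in e-notation is just a formatting change along these lines (the variable name is illustrative):

    current_lr = 1e-05
    print(f"Learning rate: {current_lr:.3e}")  # -> Learning rate: 1.000e-05
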
8b9c9e1bbf | 2023-03-05 18:53:12 +00:00
remove redundant stats, add showing LR

0231550287 | 2023-03-05 18:27:16 +00:00
forgot to remove a debug print

d97639e138 | 2023-03-05 17:54:36 +00:00
whispercpp actually works now (language loading was weird, slicing needed to divide the time by 100), transcribing audio now checks for silent segments and discards them

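A rough sketch of the slicing fix described above, assuming whispercpp-style segments whose offsets are in centiseconds; the field names and the silence test are illustrative, not the repo's actual code:

    def slice_segments(segments, waveform, sample_rate):
        for seg in segments:
            start = seg["start"] / 100.0  # whispercpp offsets are 1/100ths of a second
            end = seg["end"] / 100.0
            chunk = waveform[int(start * sample_rate):int(end * sample_rate)]
            if not seg["text"].strip():   # treat empty transcriptions as silence and discard
                continue
            yield seg["text"].strip(), chunk
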
b8a620e8d7 | 2023-03-05 14:39:24 +00:00
actually accumulate derivatives when estimating milestones and final loss by using half of the log

35225a35da | 2023-03-05 14:19:41 +00:00
oops v2

b5e9899bbf | 2023-03-05 13:37:05 +00:00
5 hour sleep brained

cd8702ab0d | 2023-03-05 13:24:07 +00:00
oops

d312019d05 | 2023-03-05 07:37:27 +00:00
reordered things so it uses fresh data and not last-updated data

ce3866d0cd | 2023-03-05 06:45:07 +00:00
added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1]) and final lr; very, very inaccurate because it uses the instantaneous delta lr, I'll need to do a Riemann sum later

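A minimal sketch of that kind of estimate, extrapolating from the instantaneous slope of only the last two samples (which is exactly why it is so inaccurate; averaging over the history Riemann-sum style would smooth it out). Names are my own, not the repo's:

    def estimate_iterations_to(target, values, iterations):
        # slope from the last two samples only (the instantaneous delta)
        d_val = values[-1] - values[-2]
        d_iter = iterations[-1] - iterations[-2]
        if d_val >= 0 or d_iter <= 0:
            return None  # not decreasing, no sensible estimate
        slope = d_val / d_iter
        return iterations[-1] + (target - values[-1]) / slope
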
1316331be3 | 2023-03-05 05:22:35 +00:00
forgot to have it auto-detect the language for openai/whisper when no language is specified

3e220ed306 | 2023-03-05 05:17:19 +00:00
added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if it exists (for now, only English); output the transcription to the web UI when done transcribing

37cab14272 | 2023-03-04 20:53:00 +00:00
use torchrun instead for multi-GPU

5026d93ecd | 2023-03-04 20:42:54 +00:00
sloppy fix to actually kill child processes when using multi-GPU distributed training, set the GPU training count automatically based on what CUDA exposes so I don't have to keep setting it to 2

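Together, the two entries above amount to launching via torchrun with one process per GPU that CUDA actually reports; a sketch, with placeholder script and config paths:

    import subprocess
    import torch

    gpu_count = torch.cuda.device_count() if torch.cuda.is_available() else 1
    subprocess.run([
        "torchrun", f"--nproc_per_node={gpu_count}",
        "./src/train.py",                    # placeholder training entry point
        "--yaml", "./training/config.yaml",  # placeholder training config
    ], check=True)
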
1a9d159b2a | 2023-03-04 17:37:08 +00:00
forgot to add the 'bs / gradient accum < 2' clamp validation logic

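The clamp being referred to is roughly this shape; a sketch under my own reading of 'bs / gradient accum < 2', not the repo's exact logic:

    def clamp_gradient_accumulation(batch_size: int, grad_accum: int) -> int:
        grad_accum = max(1, grad_accum)
        if batch_size / grad_accum < 2:
            # keep at least 2 samples per accumulation step
            grad_accum = max(1, batch_size // 2)
        return grad_accum
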
df24827b9a | 2023-03-04 15:55:06 +00:00
renamed mega batch factor to an actual real term: gradient accumulation factor; fixed halting training not actually killing the training process and freeing up resources; some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details)

6d5e1e1a80 | 2023-03-04 04:41:56 +00:00
fixed the user-inputted LR schedule not actually getting used (oops)

6d8c2dd459 | 2023-03-03 21:13:48 +00:00
auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds; added a setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider

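The suggestion rule above boils down to a bit of arithmetic like the following (function and argument names are my own):

    def suggest_voice_chunks(durations_seconds, division_factor=10.0):
        # total duration of the voice files divided by the target segment length
        total = sum(durations_seconds)
        return max(1, round(total / division_factor))
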
07163644dd | mrq | 2023-03-03 19:32:38 +00:00
Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master
Reviewed-on: mrq/ai-voice-cloning#57

e1f3ffa08c | 2023-03-03 18:51:33 +00:00
oops

5487c28683 | lightmare | 2023-03-03 18:34:49 +00:00
Added optional whispercpp update functionality

9fb4aa7917 | 2023-03-03 07:23:10 +00:00
validated whispercpp working, fixed args.listen not being saved due to brainworms

740b5587df | 2023-03-03 06:39:37 +00:00
added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts

68f4858ce9 | 2023-03-03 05:51:17 +00:00
oops

e859a7c01d | 2023-03-03 04:37:18 +00:00
experimental multi-GPU training (Linux only, because I can't into batch files)

e205322c8d | 2023-03-03 02:58:34 +00:00
added setup script for bitsandbytes-rocm (soon: multi-GPU testing, because I am finally making use of my mispurchased second 6800XT)

59773a7637 | 2023-03-02 03:04:11 +00:00
just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow

c956d81baf | 2023-03-02 01:35:12 +00:00
added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh

534a761e49 | 2023-03-02 00:46:52 +00:00
added loading/saving of voice latents by model hash, so no more needing to manually regenerate them every time you change models

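A sketch of what keying cached latents by model hash looks like; the paths, file names, and short-hash length are illustrative assumptions, not the repo's actual layout:

    import hashlib
    import os

    def model_hash(model_path: str) -> str:
        h = hashlib.sha256()
        with open(model_path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()[:8]

    def latents_path(voice_dir: str, model_path: str) -> str:
        # the cache file is implicitly invalidated whenever the model changes
        return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
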
5a41db978e | 2023-03-01 19:39:43 +00:00
oops

b989123bd4 | 2023-03-01 19:32:11 +00:00
leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data; need to see about reading it directly while training)

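Reading tb_logger event files can be done with tensorboard's own event reader; a sketch, where the log directory and scalar tag are placeholders for whatever the trainer actually writes:

    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    acc = EventAccumulator("./training/finetune/tb_logger")  # placeholder log dir
    acc.Reload()
    points = [(e.step, e.value) for e in acc.Scalars("loss_gpt_total")]  # placeholder tag
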
c2726fa0d4 | 2023-03-01 01:17:38 +00:00
added new training tunable: loss_text_ce_loss weight; added option to specify a source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice; need to truly validate if it produces usable output); some bug fixes that came up for some reason now and not earlier

5037752059 | 2023-02-28 22:13:21 +00:00
oops

787b44807a | 2023-02-28 15:36:06 +00:00
added to embedded metadata: datetime, model path, model hash

81eb58f0d6 | 2023-02-28 06:18:18 +00:00
show different losses, rewordings

fda47156ec | 2023-02-28 01:08:07 +00:00
oops

bc0d9ab3ed | 2023-02-28 01:01:50 +00:00
added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else

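Pruning previous models/states is essentially "sort by age, keep the newest N"; a sketch with hypothetical paths and extension:

    import os

    def prune_checkpoints(folder: str, keep: int = 2) -> None:
        paths = sorted(
            (os.path.join(folder, name) for name in os.listdir(folder) if name.endswith(".pth")),
            key=os.path.getmtime,
        )
        for path in paths[:-keep]:  # everything but the newest `keep` files
            os.remove(path)
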
6925ec731b | 2023-02-27 19:20:06 +00:00
I don't remember.

47abde224c | 2023-02-26 17:46:57 +00:00
compat with Python 3.10+ finally (and maybe a small perf uplift from using cu117)

92553973be | 2023-02-26 01:57:56 +00:00
Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card); the use of float16 and bitsandbytes for training is now saved into the config JSON

aafeb9f96a | 2023-02-25 16:44:25 +00:00
actually fixed the training output text parser

65329dba31 | 2023-02-25 15:31:18 +00:00
oops, epoch increments twice

8b4da29d5f | 2023-02-25 13:55:25 +00:00
some adjustments to the training output parser; it now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420)