Commit Graph

  • 9cbb7b2acf Update update.sh dev camenduru 2023-06-24 10:45:12 +0000
  • 76ed34ddd2 added CLI script (python ./src/cli.py --text=TEXT --voice=VOICE etc.) master mrq 2023-06-11 04:46:22 +0000
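
The CLI entry point referenced above suggests a thin argparse wrapper around the generation pipeline. A minimal sketch of what such a script might look like; the `generate` import and its parameters are assumptions for illustration, not the repo's actual API:

```python
#!/usr/bin/env python
# hypothetical sketch of src/cli.py; the utils module and generate()
# signature are assumed for illustration, not taken from the repo
import argparse

def main():
    parser = argparse.ArgumentParser(description="ai-voice-cloning CLI")
    parser.add_argument("--text", required=True, help="text to synthesize")
    parser.add_argument("--voice", required=True, help="voice folder under ./voices/")
    args = parser.parse_args()

    from utils import generate  # assumed helper wrapping the TTS pipeline
    generate(text=args.text, voice=args.voice)

if __name__ == "__main__":
    main()
```
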
  • e227ab8e08 updated whisperX integration for use with the latest version (v3) (NOTE: you WILL need to also update whisperx if you pull this commit) mrq 2023-06-09 02:41:29 +0000
  • 805d7d35e8 the power of a separate setup for testing mrq 2023-05-22 17:36:28 +0000
  • 2f5486a8d5 oops mrq 2023-05-21 23:24:13 +0000
  • baa6b76b85 added gradio API for changing AR model mrq 2023-05-21 23:20:39 +0000
  • 31da215c5f added checkboxes to use the original method for calculating latents (ignores the voice chunk field) mrq 2023-05-21 01:47:48 +0000
  • 9e3eca2261 freeze gradio because I forgot to do it last week when it broke mrq 2023-05-18 14:45:49 +0000
  • cbe21745df I am very smart (need to validate) mrq 2023-05-12 17:41:26 +0000
  • 74bd0f0cdc revert local change that made its way upstream (showing graphs by it instead of epoch) mrq 2023-05-11 03:30:54 +0000
  • 149aaca554 fixed the whisperx "has no attribute named load_model" error, because I guess whisperx has as stable of an API as I do mrq 2023-05-06 10:45:17 +0000
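
Since whisperx has reshuffled its public API across releases, a defensive way to handle the missing attribute is to probe for it and fail loudly. A minimal sketch, assuming only that some release exposes `whisperx.load_model`:

```python
import whisperx

def load_whisper_model(name: str, device: str = "cuda"):
    # probe for load_model instead of assuming this release has it;
    # whisperx versions have moved the entry point around
    if hasattr(whisperx, "load_model"):
        return whisperx.load_model(name, device)
    raise RuntimeError(
        "this whisperx release has no load_model; pin a version known to work"
    )
```
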
  • e416b0fe6f oops mrq 2023-05-05 12:36:48 +0000
  • 5003bc89d3 cleaned up brain worms with wrapping around gradio progress by instead just using tqdm directly (slight regressions with some messages not getting pushed) mrq 2023-05-04 23:40:33 +0000
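
Driving progress through tqdm directly, rather than a hand-rolled gradio wrapper, works because gradio can hook tqdm on its own. A sketch of the pattern; `gr.Progress(track_tqdm=True)` is real gradio API, the worker function is a placeholder:

```python
import gradio as gr
from tqdm import tqdm

def do_transcribe(path):
    # placeholder for the real whisper call
    return path

def transcribe_all(files, progress=gr.Progress(track_tqdm=True)):
    # plain tqdm loop; gradio mirrors it into the web UI because
    # track_tqdm=True intercepts tqdm updates inside this handler
    return [do_transcribe(f) for f in tqdm(files, desc="transcribing")]
```
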
  • 09d849a78f quick hotfix if it actually is a problem in the repo itself mrq 2023-05-04 23:01:47 +0000
  • 853c7fdccf forgot to uncomment the block to transcribe and slice when using transcribe all because I was piece-processing a huge batch of LibriTTS and somehow that leaked over to the repo mrq 2023-05-03 21:31:37 +0000
  • fd306d850d updated setup-directml.bat to not hard require torch version because it's updated to torch2 now mrq 2023-04-29 00:50:16 +0000
  • eddb8aaa9a indentation fix mrq 2023-04-28 15:56:57 +0000
  • 99387920e1 backported caching of phonemizer backend from mrq/vall-e mrq 2023-04-28 15:31:45 +0000
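
Caching the phonemizer backend matters because constructing it (espeak initialization) is far slower than phonemizing a single line. A minimal sketch using phonemizer's real `EspeakBackend`; the language-keyed cache is the assumed shape of the backport:

```python
from functools import lru_cache
from phonemizer.backend import EspeakBackend

@lru_cache(maxsize=None)
def get_phonemizer(language: str) -> EspeakBackend:
    # backend construction is the expensive part, so build one per
    # language and reuse it for every line of text
    return EspeakBackend(language, preserve_punctuation=True, with_stress=True)
```
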
  • c5e9b407fa boolean oops mrq 2023-04-27 14:40:22 +0000
  • 3978921e71 forgot to make the transcription tab visible with the bark backend (god the code is a mess now, I'll suck you off if you clean this up for me (not really)) mrq 2023-04-26 04:55:10 +0000
  • b6440091fb Very, very, VERY, barebones integration with Bark (documentation soon) mrq 2023-04-26 04:48:09 +0000
  • faa8da12d7 modified logic to determine valid voice folders, also allows subdirs within the folder (for example: ./voices/SH/james/ will be named SH/james) mrq 2023-04-13 21:10:38 +0000
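
The subdirectory behavior described here (./voices/SH/james/ surfacing as SH/james) implies walking the voices tree and naming each audio-bearing directory by its path relative to the root. A sketch under that assumption:

```python
import os

AUDIO_EXTS = (".wav", ".mp3", ".flac", ".ogg")

def list_voices(root: str = "./voices"):
    # any directory (nested or not) containing audio counts as a voice,
    # named by its path relative to the root, e.g. "SH/james"
    voices = []
    for dirpath, _, filenames in os.walk(root):
        if any(f.lower().endswith(AUDIO_EXTS) for f in filenames):
            voices.append(os.path.relpath(dirpath, root).replace(os.sep, "/"))
    return sorted(voices)
```
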
  • 02beb1dd8e should fix #203 mrq 2023-04-13 03:14:06 +0000
  • 8f3e9447ba disable diarize button mrq 2023-04-12 20:03:54 +0000
  • d8b996911c a bunch of shit I had uncommitted over the past while pertaining to VALL-E mrq 2023-04-12 20:02:46 +0000
  • b785192dfc Merge pull request 'Make convenient to use with Docker' (#191) from psr/ai-voice-cloning:docker into master mrq 2023-04-08 14:04:45 +0000
  • 9afafc69c1 docker: add training script psr 2023-04-07 23:15:13 +0000
  • c018bfca9c docker: add ffmpeg for whisper and general cleanup psr 2023-04-07 23:14:05 +0000
  • d64cba667f docker support psr 2023-04-05 22:38:53 +0000
  • 0440eac2bc #185 mrq 2023-03-31 06:55:52 +0000
  • 9f64153a28 fixes #185 mrq 2023-03-31 06:03:56 +0000
  • 4744120be2 added VALL-E inference support (very rudimentary, gimped, but it will load a model trained on a config generated through the web UI) mrq 2023-03-31 03:26:00 +0000
  • 9b01377667 only include auto in the list of models under setting, nothing else mrq 2023-03-29 19:53:23 +0000
  • f66281f10c added mixing models (shamelessly inspired from voldy's web ui) mrq 2023-03-29 19:29:13 +0000
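
Model mixing in the voldy (stable-diffusion-webui) style is usually a weighted sum over two checkpoints' state dicts. A hedged sketch of that technique; whether this repo mixes exactly this way is an assumption:

```python
import torch

def mix_models(path_a: str, path_b: str, alpha: float = 0.5) -> dict:
    # weighted-sum merge: alpha=0 keeps model A, alpha=1 keeps model B;
    # only tensors present in both checkpoints are blended
    a = torch.load(path_a, map_location="cpu")
    b = torch.load(path_b, map_location="cpu")
    return {k: v * (1.0 - alpha) + b[k] * alpha for k, v in a.items() if k in b}
```
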
  • c89c648b4a fixes #176 mrq 2023-03-26 11:05:50 +0000
  • 41d47c7c2a for real this time show those new vall-e metrics mrq 2023-03-26 04:31:50 +0000
  • c4ca04cc92 added showing reported training accuracy and eval/validation metrics to graph mrq 2023-03-26 04:08:45 +0000
  • 8c647c889d now there should be feature parity between trainers mrq 2023-03-25 04:12:03 +0000
  • fd9b2e082c x_lim and y_lim for graph mrq 2023-03-25 02:34:14 +0000
  • 9856db5900 actually make parsing VALL-E metrics work mrq 2023-03-23 15:42:51 +0000
  • 69d84bb9e0 I forget mrq 2023-03-23 04:53:31 +0000
  • 444bcdaf62 my sanitizer actually did work, it was just batch sizes leading to problems when transcribing mrq 2023-03-23 04:41:56 +0000
  • a6daf289bc when the sanitizer thingy works in testing but it doesn't outside of testing, and you have to retranscribe for the fourth time today mrq 2023-03-23 02:37:44 +0000
  • 86589fff91 why does this keep happening to me mrq 2023-03-23 01:55:16 +0000
  • 0ea93a7f40 more cleanup, use 24kHz for preparing for VALL-E (encodec will resample to 24kHz anyways, makes audio a little nicer), some other things mrq 2023-03-23 01:52:26 +0000
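
Resampling once at load time matches what encodec's 24kHz model would do internally anyway. A minimal sketch using torchaudio; the function name and flow are illustrative, not the repo's actual code:

```python
import torchaudio

def load_for_valle(path: str, target_sr: int = 24_000):
    # encodec's 24kHz model resamples on its own, so doing it once up
    # front keeps the prepared dataset consistent
    wav, sr = torchaudio.load(path)
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, sr, target_sr)
    return wav, target_sr
```
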
  • d2a9ab9e41 remove redundant phonemize for vall-e (oops), quantize all files and then phonemize all files for cope optimization, load alignment model once instead of for every transcription (speedup with whisperx) mrq 2023-03-23 00:22:25 +0000
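
Loading whisperx's alignment model once per language, rather than per file, is the speedup this commit describes. `whisperx.load_align_model` is real API (it returns a model/metadata pair); the cache around it is the sketch:

```python
import whisperx

_align_cache = {}

def get_align_model(language_code: str, device: str = "cuda"):
    # whisperx.load_align_model returns (model, metadata); cache the pair
    # per language so repeated transcriptions skip the reload
    if language_code not in _align_cache:
        _align_cache[language_code] = whisperx.load_align_model(
            language_code=language_code, device=device
        )
    return _align_cache[language_code]
```
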
  • 19c0854e6a do not write current whisper.json if there's no changes mrq 2023-03-22 22:24:07 +0000
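
Skipping the write when whisper.json hasn't changed can be done by comparing the freshly serialized payload against what's on disk; a minimal sketch, with the file layout assumed:

```python
import json
import os

def save_whisper_json(path: str, data: dict) -> bool:
    # serialize once, compare to the existing file, and only write on
    # change; returns True when a write actually happened
    payload = json.dumps(data, indent=4, ensure_ascii=False)
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            if f.read() == payload:
                return False
    with open(path, "w", encoding="utf-8") as f:
        f.write(payload)
    return True
```
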
  • 932eaccdf5 added whisper transcription 'sanitizing' (collapse very short transcriptions to the previous segment) (I really have to stop having several copies spanning several machines for AIVC, I keep reverting shit) mrq 2023-03-22 22:10:01 +0000
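
The 'sanitizing' pass described here collapses very short segments into their predecessor. A sketch of that merge, assuming whisper-style segment dicts with start/end/text keys and a hypothetical duration threshold:

```python
def sanitize_segments(segments: list, min_duration: float = 0.5) -> list:
    # merge any segment shorter than min_duration into the previous one,
    # extending its end time and appending its text
    out = []
    for seg in segments:
        if out and (seg["end"] - seg["start"]) < min_duration:
            out[-1]["end"] = seg["end"]
            out[-1]["text"] = (out[-1]["text"] + " " + seg["text"]).strip()
        else:
            out.append(dict(seg))
    return out
```
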
  • 736cdc8926 disable diarization for whisperx as it's just a useless performance hit (I don't have anything that's multispeaker within the same audio file at the moment) mrq 2023-03-22 20:38:58 +0000
  • aa5bdafb06 ugh mrq 2023-03-22 20:26:28 +0000
  • 13605f980c now whisperx should output json that aligns with what's expected mrq 2023-03-22 20:01:30 +0000
  • 8877960062 fixes for whisperx batching mrq 2023-03-22 19:53:42 +0000
  • 4056a27bcb begrudgingly added back whisperx integration (VAD/Diarization testing, I really, really need accurate timestamps before dumping mondo amounts of time on training a dataset) mrq 2023-03-22 19:24:53 +0000
  • b8c3c4cfe2 Fixed #167 mrq 2023-03-22 18:21:37 +0000
  • da96161aaa oops mrq 2023-03-22 18:07:46 +0000
  • f822c87344 cleanups, realigning vall-e training mrq 2023-03-22 17:47:23 +0000
  • 909325bb5a ugh mrq 2023-03-21 22:18:57 +0000
  • 5a5fd9ca87 Added option to unsqueeze sample batches after sampling mrq 2023-03-21 21:34:26 +0000
  • 9657c1d4ce oops mrq 2023-03-21 20:31:01 +0000
  • 0c2a9168f8 DLAS is PIPified (but I'm still cloning it as a submodule to make updating it easier) mrq 2023-03-21 15:46:53 +0000
  • 34ef0467b9 VALL-E config edits mrq 2023-03-20 01:22:53 +0000
  • 2e33bf071a forgot to not require it to be relative mrq 2023-03-19 22:05:33 +0000
  • 5cb86106ce option to set results folder location mrq 2023-03-19 22:03:41 +0000
  • 74510e8623 doing what I do best: sourcing other configs and banging until it works (it doesn't work) mrq 2023-03-18 15:16:15 +0000
  • da9b4b5fb5 tweaks mrq 2023-03-18 15:14:22 +0000
  • f44895978d brain worms mrq 2023-03-17 20:08:08 +0000
  • b17260cddf added japanese tokenizer (experimental) mrq 2023-03-17 20:04:40 +0000
  • f34cc382c5 yammed mrq 2023-03-17 18:57:36 +0000
  • 96b7f9d2cc yammed mrq 2023-03-17 13:08:34 +0000
  • 249c6019af cleanup, metrics are grabbed for vall-e trainer mrq 2023-03-17 05:33:49 +0000
  • 1b72d0bba0 forgot to separate phonemes by spaces for [redacted] mrq 2023-03-17 02:08:07 +0000
  • d4c50967a6 cleaned up some prepare dataset code mrq 2023-03-17 01:24:02 +0000
  • 0b62ccc112 setup bnb on windows as needed mrq 2023-03-16 20:48:48 +0000
  • c4edfb7d5e unbump rocm5.4.2 because it does not work for me desu mrq 2023-03-16 15:33:23 +0000
  • 520fbcd163 bumped torch up (CUDA: 11.8, ROCm, 5.4.2) mrq 2023-03-16 15:09:11 +0000
  • 1a8c5de517 unk hunting mrq 2023-03-16 14:59:12 +0000
  • 46ff3c476a fixes v2 mrq 2023-03-16 14:41:40 +0000
  • 0408d44602 fixed reload tts being broken due to being as untouched as I am mrq 2023-03-16 14:24:44 +0000
  • aeb904a800 yammed mrq 2023-03-16 14:23:47 +0000
  • f9154c4db1 fixes mrq 2023-03-16 14:19:56 +0000
  • 54f2fc792a ops mrq 2023-03-16 05:14:15 +0000
  • 0a7d6f02a7 ops mrq 2023-03-16 04:54:17 +0000
  • 4ac43fa3a3 I forgot I undid the thing in DLAS mrq 2023-03-16 04:51:35 +0000
  • da4f92681e oops mrq 2023-03-16 04:35:12 +0000
  • ee8270bdfb preparations for training an IPA-based finetune mrq 2023-03-16 04:25:33 +0000
  • 7b80f7a42f fixed not cleaning up states while training (oops) mrq 2023-03-15 02:48:05 +0000
  • b31bf1206e oops mrq 2023-03-15 01:51:04 +0000
  • d752a22331 print a warning if automatically deduced batch size returns 1 mrq 2023-03-15 01:20:15 +0000
  • f6d34e1dd3 and maybe I should have actually tested with ./models/tokenizers/ made mrq 2023-03-15 01:09:20 +0000
  • 5e4f6808ce I guess I didn't test on a blank-ish slate mrq 2023-03-15 00:54:27 +0000
  • 363d0b09b1 added options to pick tokenizer json and diffusion model (so I don't have to add it in later when I get bored and add in diffusion training) mrq 2023-03-15 00:37:38 +0000
  • 07b684c4e7 removed redundant training data (they exist within tortoise itself anyways), added utility: view tokenized text mrq 2023-03-14 21:51:27 +0000
  • 469dd47a44 fixes #131 mrq 2023-03-14 18:58:03 +0000
  • 84b7383428 fixes #134 mrq 2023-03-14 18:52:56 +0000
  • 4b952ea52a fixes #132 mrq 2023-03-14 18:46:20 +0000
  • fe03ae5839 fixes mrq 2023-03-14 17:42:42 +0000
  • 9d2c7fb942 cleanup mrq 2023-03-14 16:23:29 +0000
  • 65fe304267 fixed broken graph displaying mrq 2023-03-14 16:04:56 +0000
  • 7b16b3e88a ;) mrq 2023-03-14 15:48:09 +0000
  • c85e32ff53 (: mrq 2023-03-14 14:08:35 +0000