Commit Graph

  • a507b769a1 sped up inferencing by not doing .tolist() for rep pen / length pen (and a bug fix in the web UI from prev commit) mrq 2024-10-04 22:18:20 -0500
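The speed-up above comes from keeping the repetition-penalty math on-device instead of round-tripping token ids through `.tolist()`. A minimal sketch of the idea (the function name and the CTRL-style penalty formulation are assumptions, not the repo's actual code):

```python
import torch

def apply_rep_pen(logits: torch.Tensor, prev_ids: torch.Tensor, penalty: float = 1.25) -> torch.Tensor:
    """Apply a repetition penalty in-place on a 1D logits tensor.

    Works directly on tensors (no .tolist() round-trip): positive logits for
    previously seen tokens are divided by the penalty, negative ones multiplied.
    Duplicate ids in prev_ids scatter the same value, so no dedupe is needed.
    """
    seen = logits[prev_ids]  # gather logits of already-generated tokens
    logits[prev_ids] = torch.where(seen > 0, seen / penalty, seen * penalty)
    return logits
```

Indexing with a tensor of ids gathers and scatters in single calls, so there is no Python-level loop over a list at all.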
  • 4a8e3ccf06 README tweaks, added --input-prompt-prefix as an experiment (it's literally better to just not do this, but i'll retain it in case i have a revelation on how to improve it) mrq 2024-10-04 18:57:19 -0500
  • a9fa0898a9 tweaked demo page script to sample speakers instead mrq 2024-09-28 10:50:26 -0500
  • 2f1dca3089 added language selection in web UI, tweaked demo script mrq 2024-09-28 09:49:45 -0500
  • 10df2ef5f3 fixed oversight where input audio does not resample (lol...) mrq 2024-09-27 20:27:53 -0500
  • 039482a48e don't do eval on stt because it's so slow and I don't even bother doing any metrics against it anyways (to-do: make this a flag) mrq 2024-09-26 18:56:57 -0500
  • ff7a1b4163 coerce into path for other sampler_types (it's required for sampling for similar utterances) mrq 2024-09-26 18:37:56 -0500
  • f24547ad4e add top_k sampling / offset for prompt similar utterance sampling mrq 2024-09-26 16:26:40 -0500
  • 9da630f73a swap order of demo entries, as the model prioritizes adhering to the speaker prompt more (instead of trying to match the ground truth magically) mrq 2024-09-25 23:31:24 -0500
  • e84d466261 vall_e.plot tweaks mrq 2024-09-24 20:05:10 -0500
  • 2266d34818 oops mrq 2024-09-21 16:06:01 -0500
  • c5e9142863 added option to retokenize phonemes for hdf5 (to save having to remake my hdf5 file) mrq 2024-09-21 13:08:01 -0500
  • 536c11c4ac actually validated and fixed sampling similar utterances for the prompt (hopefully nothing else is needed) mrq 2024-09-21 12:59:51 -0500
  • d31f27119a regex replace out the (lang) markers in espeak, updated tokenizer vocab as lazily as possible to not have unk tokens mrq 2024-09-21 12:29:28 -0500
  • 769f67dcfe actually fix validation of phonemes in the symmap mrq 2024-09-21 12:19:34 -0500
  • c8d4716a9f ugh mrq 2024-09-18 21:40:57 -0500
  • fe241f6a99 support for wildcard in training/validation/noise dataset array (to-do: a better way to query between metadata folder and data folder) mrq 2024-09-18 21:34:43 -0500
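Wildcard entries in the dataset arrays can be expanded against the data folder with stdlib `fnmatch`; a hypothetical sketch (the entry format and function name are assumptions, not the repo's actual query logic):

```python
from fnmatch import fnmatch
from pathlib import Path

def expand_dataset_entries(entries: list[str], root: Path) -> list[str]:
    """Expand shell-style wildcard entries (e.g. "LibriTTS/*") in a dataset
    list against the directories actually present under the data folder."""
    available = sorted(p.name for p in root.iterdir() if p.is_dir())
    out = []
    for entry in entries:
        if any(c in entry for c in "*?["):
            out.extend(name for name in available if fnmatch(name, entry))
        else:
            out.append(entry)  # literal entries pass through unchanged
    return out
```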
  • b5bec0c9ce oops, turns out these are not split by speaker names already........ (also added sampling the dataset in the webui for easy viewing) mrq 2024-09-18 20:19:46 -0500
  • fa9d3f6c06 lang fixes / reworked phoneme symmap validation mrq 2024-09-18 19:36:03 -0500
  • 84647f588a more tweaks mrq 2024-09-18 16:43:57 -0500
  • ebac1db16c maybe final tweaks; I really needed to unify my json read/write, and orjson has proven fast enough for me to rely on it more mrq 2024-09-17 22:57:04 -0500
  • 6ceed866b5 *faster* mrq 2024-09-17 22:44:36 -0500
  • f00283440c faster mrq 2024-09-17 22:26:31 -0500
  • be22b65300 solved my problem mrq 2024-09-17 21:58:44 -0500
  • 8f41d1b324 more tweaks mrq 2024-09-17 16:26:30 -0500
  • 804ddb5182 optimizations (6 hours to do cosine similarities on a speaker set of just 17k utterances................) mrq 2024-09-17 15:51:45 -0500
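Six hours for 17k utterances suggests a Python-side pairwise loop; the usual optimization is to normalize the embeddings once and compute all pairs in a single matmul. A sketch of that idea (not the repo's actual vall_e.emb.similar code):

```python
import torch
import torch.nn.functional as F

def pairwise_cosine(embeddings: torch.Tensor) -> torch.Tensor:
    """All-pairs cosine similarity for (N, D) embeddings, returning (N, N).

    A nested Python loop makes O(N^2) separate kernel calls; normalizing once
    and doing one matmul (chunked if N is huge) keeps the work on-device.
    """
    normed = F.normalize(embeddings, dim=-1)
    return normed @ normed.T
```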
  • a9fbe81f98 oops mrq 2024-09-17 15:25:12 -0500
  • c440c4fe7e relegated processing similarity data into vall_e.emb.similarity since it's easier, seems to work? mrq 2024-09-17 14:37:21 -0500
  • 56f25f7a9b more stuff for similar-speaker prompt sampling (to-do: actually test if this works...) mrq 2024-09-16 23:10:29 -0500
  • 69f140ba45 fix oversight with phonemizing french because espeak defines french as fr-fr instead of fr (even though spain spanish is es and not es-sp or some shit, but portugal portuguese is pt-pt) mrq 2024-09-13 12:53:36 -0500
  • 4f3c7a37c8 also do text similarities (don't know what use I'll have for this) mrq 2024-09-10 16:45:59 -0500
  • 1c615a0f52 helper script (vall_e.emb.similar) to figure out the best way to compute similarity scores for audio (not sure how best to go about it) mrq 2024-09-10 16:34:23 -0500
  • 17487ad70a weird quirk in process_emilia.py where language gets mutated, somehow (I hate python) mrq 2024-09-10 14:00:27 -0500
  • d059f6f56d added helper script to process Emilia (amphion/Emilia-Dataset), clean up espeak phonemes for non-English transcriptions with English words (because for some reason espeak injects (en){word}(lang) markers and it's annoying) mrq 2024-09-09 09:57:32 -0500
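The `(en){word}(fr)`-style markers espeak injects on language switches can be stripped with a small regex; a sketch (the exact marker grammar is assumed from the message above, not taken from the repo):

```python
import re

# espeak-ng wraps language switches as "(en)word(fr)" in its phoneme output;
# strip any "(xx)" or "(xx-yy)" language tag while leaving the phonemes alone.
LANG_MARKER = re.compile(r"\([a-z]{2,3}(?:-[a-z]{2,3})?\)")

def strip_lang_markers(phonemes: str) -> str:
    return LANG_MARKER.sub("", phonemes)
```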
  • 31e8b7edb8 tweaks and fixes for lora stuffs mrq 2024-09-08 18:05:21 -0500
  • 54203c059d validated rep pen for STT (sometimes needed to wrangle the model) mrq 2024-09-08 08:30:30 -0500
  • 6a967f91b9 oops mrq 2024-09-07 22:13:49 -0500
  • 5d66a7db52 webui cleanup, more tweaks, default to safetensors in config mrq 2024-09-07 21:45:05 -0500
  • a6ad0577b8 cleanup the resultant text from STT mrq 2024-09-06 18:44:25 -0500
  • fa93061b3e more fixes, moved sampler state dict to a better place, eval works again mrq 2024-09-06 16:59:56 -0500
  • 4bd9bb39c8 webui for STT (still need to bake the model to handle it better, a few hours so far has it generate what looks like a normal transcription but does not correlate to the audio right now) mrq 2024-09-06 15:13:04 -0500
  • d33a906119 cleanup for AR_NAR inferencing to allow both TTS and STT tasks simultaneously (need to have training eval do this too, though) mrq 2024-09-06 14:30:12 -0500
  • 341e19162b fixes, again mrq 2024-09-06 11:41:41 -0500
  • 94cf81d38c tweak mrq 2024-09-05 23:21:18 -0500
  • 413097f5f7 fixes mrq 2024-09-05 21:42:59 -0500
  • 54547b74d8 experimental implementation of STT (need to actually test on a model, test trainer seems to work) mrq 2024-09-05 20:43:20 -0500
  • d319d33368 haha mrq 2024-09-04 14:52:26 -0500
  • 619369236b ugh mrq 2024-08-30 21:10:57 -0500
  • 168e203942 ugh mrq 2024-08-30 14:39:07 -0500
  • 685f4faec0 ugh mrq 2024-08-30 10:46:26 -0500
  • 32287710a2 moved prints to use logger, edited readme (fused_attn doesn't seem stable for training) mrq 2024-08-29 13:27:16 -0500
  • d423bc03c2 fixed attentions for MoE mrq 2024-08-27 17:02:42 -0500
  • b7b99a25f1 added ability to specify attention backend for CLI and webui (because I'm tired of editing the yaml) mrq 2024-08-26 19:33:51 -0500
  • 0d706ec6a1 added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm mrq 2024-08-26 19:13:34 -0500
  • 6b0891448c pain (some shit to try and get some flash attention for ROCm (gfx1100) through triton fused attention but no good) mrq 2024-08-25 20:07:27 -0500
  • 40e1799adc fixed xformers and flash_attn to actually work now mrq 2024-08-19 01:03:35 -0500
  • 29c35528e5 the sooner I accept there's no FA for V100s the sooner I'll go to bed mrq 2024-08-18 23:54:33 -0500
  • d636edd3a2 added flash_attn LlamaAttention (including flash_attn==1.0.9) mrq 2024-08-18 20:51:14 -0500
  • 054d28573a my DAC dataset again managed to only have some utterances with only 8 of 9 RVQ levels, this fixes an oversight from it mrq 2024-08-09 21:18:01 -0500
  • 2a1794c084 ughghghhhh mrq 2024-08-09 21:15:01 -0500
  • ed373957e2 maybe not mrq 2024-08-09 11:38:08 -0500
  • c658a7b440 make loss scaling opt-in rather than automatically determined (because it seems a DAC-based model really doesnt like loss scaling) mrq 2024-08-09 10:51:36 -0500
  • d04f6911b4 oops mrq 2024-08-08 19:38:55 -0500
  • 0aa59e6f3f uncommented block that writes the metadata on HDF5 creation mrq 2024-08-08 19:21:29 -0500
  • 79a6781c9e fix vall_e.data --action=hdf5 actually transcribing, because past me completely forgot the transcribe/process dataset scripts had already been moved into the module mrq 2024-08-08 07:51:42 -0500
  • 949339a3fa do not include SDPA attention if there's no available SDPA backends mrq 2024-08-06 20:42:39 -0500
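Guarding the SDPA attention path can be as simple as a feature check on `torch.nn.functional` before registering it; a minimal sketch (the helper name is hypothetical):

```python
import torch
import torch.nn.functional as F

def sdpa_available() -> bool:
    """Only offer the SDPA attention path when torch provides it (torch >= 2.0)."""
    return hasattr(F, "scaled_dot_product_attention")

if sdpa_available():
    # (batch, heads, seq, head_dim) layout expected by scaled_dot_product_attention
    q = torch.randn(1, 4, 8, 16)
    out = F.scaled_dot_product_attention(q, q, q, is_causal=True)
```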
  • 613024ec0d ugh mrq 2024-08-06 20:35:15 -0500
  • eac353cd0b busy work and cleanup while I wait for 1TB of audio to quantize... again. mrq 2024-08-06 20:23:33 -0500
  • f284c7ea9c do mixed-precision for AMP inside the compress function itself, because the loudness function gripes when using a float16 (non-power of 2 lengths) or bfloat16 (something about views for bfloat16) mrq 2024-08-06 15:08:37 -0500
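Running AMP inside the compress call itself keeps fp32-only preprocessing (like a loudness measurement) out of the autocast region. A hedged sketch of the pattern (the function name and codec are stand-ins, not the repo's actual code):

```python
import torch

def compress(audio: torch.Tensor, model: torch.nn.Module, device: str = "cpu") -> torch.Tensor:
    """Run the heavy encode under autocast, but only after any fp32-only work.

    Hypothetical sketch: loudness-style ops that reject float16/bfloat16 run
    on the fp32 tensor outside autocast; only the model call is mixed-precision.
    """
    audio = audio.float()  # fp32-only preprocessing would happen here
    dtype = torch.bfloat16 if device == "cpu" else torch.float16
    with torch.autocast(device_type=device, dtype=dtype):
        return model(audio)
```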
  • b6ba2cc8e7 tweaked vall_e.emb.process to instead process audio one file at a time instead of all the files for a given speaker to avoid OOMing on less-memory-filled systems with --low-memory mrq 2024-08-06 14:24:40 -0500
  • 9710b06b74 tweaks and things mrq 2024-08-06 08:17:25 -0500
  • 8bac8fe902 oops mrq 2024-08-05 20:38:29 -0500
  • 134dac8c2b re-adapted process_libritts.py to a 'better' way (better because it processed without needing to shuffle a bunch of things and adapt to cope or something) mrq 2024-08-05 20:34:58 -0500
  • 3f73fcca29 oops mrq 2024-08-05 20:12:13 -0500
  • 597441e48b moved transcribe and process dataset scripts to vall_e/emb within the module itself, argparse-ified transcription script mrq 2024-08-05 19:40:50 -0500
  • 7cdfa3dc0c updated process_datasets.py, added argparsing so I can mostly stop manually editing things, and some other cleanup mrq 2024-08-05 15:59:25 -0500
  • debcc93e7e add adapted MixtralAttention for when I make a bad decision to actually train a MoE mrq 2024-08-04 22:03:22 -0500
  • 10aaf840e7 added export option to convert Llama to MixtralMoE for another dumb experiment mrq 2024-08-04 20:25:06 -0500
  • 3a65cc4b22 fix issue with sft and shared tensors... mrq 2024-08-04 19:56:21 -0500
  • 23f3b56fda oops mrq 2024-08-04 08:18:57 -0500
  • d19f93a2c0 documentation update mrq 2024-08-04 00:14:49 -0500
  • 2cb465018b implicitly load either normal pickled weights or safetensors on loading the model mrq 2024-08-03 23:34:18 -0500
  • c09133d00f added safetensors support (with metadata) and feed whatever torch.load/torch.save into it mrq 2024-08-03 23:15:20 -0500
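Implicitly picking between pickled checkpoints and safetensors on load can hinge on the file extension; a sketch (the `.sft` alias and function name are assumptions):

```python
from pathlib import Path
import torch

def load_state_dict(path):
    """Dispatch on extension: safetensors for .sft/.safetensors files,
    otherwise fall back to torch's pickled checkpoint format."""
    path = Path(path)
    if path.suffix in (".sft", ".safetensors"):
        from safetensors.torch import load_file  # only needed for this branch
        return load_file(path)
    return torch.load(path, map_location="cpu")
```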
  • 6a733eb2ed changed torch.Tensor().to(device, dtype) to just torch.tensor(..., device, dtype) because it's been bothering my autism that I'm creating tensors then converting rather than creating with the right device/dtype; also some 'optimization' to compile the model, but it doesn't seem to do anything useful mrq 2024-08-03 22:10:21 -0500
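For illustration, the difference between the two tensor-creation styles:

```python
import torch

# Creating the tensor with the right device/dtype up front does one allocation;
# the .to() style allocates a float32 tensor first and then casts/copies it.
a = torch.tensor([1, 2, 3], device="cpu", dtype=torch.int16)   # direct
b = torch.Tensor([1, 2, 3]).to("cpu", torch.int16)             # alloc, then convert
```

Both produce the same values; the direct form just skips the intermediate fp32 tensor.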
  • ab673e0426 add cap for NAR-len training, to avoid any weird cases in early training where it'll just mess up and generate long lengths mrq 2024-08-03 21:00:32 -0500
  • 4d2b88b164 throw exception if training, but no model is set to train (because i ran into this wondering what the hell was happening) mrq 2024-08-03 20:51:23 -0500
  • d0a5c7eca2 more coping with the NAR len mrq 2024-08-03 20:23:36 -0500
  • 11fa3da665 some cleanup, fixed the wrapper attention to explicitly use other sdpa backends mrq 2024-08-03 19:51:00 -0500
  • 9564ecda43 wrapper attention class for other sdpa backends + xformers seems to have broke... mrq 2024-08-03 15:12:11 -0500
  • 9e1989be1b tweaked initial NAR pass's initial token embeddings to use a different value, or something mrq 2024-08-03 09:01:37 -0500
  • 26f74c5739 somehow fixed non-unified position IDs for the NAR-len mrq 2024-08-03 08:43:42 -0500
  • 66407e5bdb tweaks for the NAR-len model, maybe mrq 2024-08-03 08:40:39 -0500
  • 97c5241bef fixes, throw an exception when using NAR only model with non-unified position IDs, since for some reason it outputs garbage for the NAR mrq 2024-08-02 22:25:49 -0500
  • 4456d3172b that's what I get for testing without hdf5 on my previous machine.... mrq 2024-08-02 20:44:01 -0500
  • 7a77978096 oversight with using resize_modules mrq 2024-08-02 20:28:49 -0500
  • 808a79ebaf oops mrq 2024-08-01 22:56:04 -0500
  • 443422ecb5 ugh, finally got some form of offloading working (need to test if it works on different GPUs, but GPU and CPU offloading seems to work in the test trainer) mrq 2024-08-01 22:43:39 -0500
  • c9ec6b28ef it actually wasn't working because Engines.__init__() automatically moves the entire module to the requested device, which was being called after offloading the model in the test trainer (and it seems I can't do it without injecting a bunch of shit in modeling_llama.py) mrq 2024-08-01 20:56:28 -0500
  • b4c895114c naive model offloading support (automatically splits parts of the model across requested devices per memory constraints, either inferred or requested in the yaml; input tensors are automatically migrated to the right device; it SEEMS to work for training under the test trainer when split between GPU and CPU) (this was specifically only because that Flux imagegen model released, so I could test it there) mrq 2024-08-01 20:12:06 -0500
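Naive offloading of this sort can be sketched with per-layer device assignments and a forward pre-hook that migrates the activation between devices; this toy version (memory-budget splitting omitted, names hypothetical) illustrates the approach, not the repo's implementation:

```python
import torch
import torch.nn as nn

def offload(model: nn.Sequential, device_map: dict[int, str]) -> nn.Sequential:
    """Pin each layer of a Sequential to a device from device_map and move the
    incoming activation onto that device as it flows through the model."""
    for idx, layer in enumerate(model):
        device = device_map.get(idx, "cpu")
        layer.to(device)
        # a pre-hook that returns a tuple replaces the layer's positional args
        layer.register_forward_pre_hook(
            lambda module, args, dev=device: tuple(a.to(dev) for a in args)
        )
    return model
```

In practice the split would be chosen from measured per-layer memory against each device's budget; the hook mechanism is what makes the migration transparent to the caller.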
  • 387358bc8a fixes for the NAR-len model, and documentation some config options, and a better way to handle resizing modules on state_dict load mrq 2024-07-31 20:35:09 -0500