c24682c668  2022-02-07 15:45:38 -07:00  James Betker  Record load times in fast_paired_dataset
5ae816bead  2022-02-05 15:59:53 -07:00  James Betker  ctc gen checkin
8fb147e8ab  2022-02-04 11:00:15 -07:00  James Betker  add an autoregressive ctc code generator
4249681c4b  2022-02-03 19:58:54 -07:00  James Betker  Mods to support a autoregressive CTC code generator
8c255811ad  2022-01-25 17:57:16 -07:00  James Betker  more fixes
91b4b240ac  2022-01-21 00:02:06 -07:00  James Betker  dont pickle unique files
7fef7fb9ff  2022-01-20 21:49:38 -07:00  James Betker  Update fast_paired_dataset to report how many audio files it is actually using
20312211e0  2022-01-20 11:28:12 -07:00  James Betker  Fix bug in code alignment
bcd8cc51e1  2022-01-19 00:35:08 -07:00  James Betker  Enable collated data for diffusion purposes
b6190e96b2  2022-01-17 15:46:02 -07:00  James Betker  fast_paired
1d30d79e34  2022-01-16 21:20:00 -07:00  James Betker  De-specify fast-paired-dataset
2b36ca5f8e  2022-01-16 21:10:46 -07:00  James Betker  Revert paired back
ad3e7df086  2022-01-16 21:10:11 -07:00  James Betker  Split the fast random into its own new dataset
7331862755  2022-01-16 21:09:22 -07:00  James Betker  Updated paired to randomly index data, offsetting memory costs and speeding up initialization
37e4e737b5  2022-01-16 15:17:17 -07:00  James Betker  a few fixes
35db5ebf41  2022-01-15 17:38:26 -07:00  James Betker  paired_voice_audio_dataset - aligned codes support
6706591d3d  2022-01-06 15:24:37 -07:00  James Betker  Fix dataset
f4484fd155  2022-01-06 12:38:20 -07:00  James Betker  Add "dataset_debugger" support
            This allows the datasets themselves compile statistics and report them
            via tensorboard and wandb.
f3cab45658  2022-01-06 11:15:16 -07:00  James Betker  Revise audio datasets to include interesting statistics in batch
            Stats include:
            - How many indices were skipped to retrieve a given index
            - Whether or not a conditioning input was actually the file itself
06c1093090  2022-01-06 10:29:39 -07:00  James Betker  Remove collating from paired_voice_audio_dataset
            This will now be done at the model level, which is more efficient
5e1d1da2e9  2022-01-06 10:26:53 -07:00  James Betker  Clean paired_voice
0fe34f57d1  2022-01-05 15:47:22 -07:00  James Betker  Use torch resampler
d4a6298658  2022-01-01 14:25:27 -07:00  James Betker  more debugging
35abefd038  2022-01-01 10:31:03 -07:00  James Betker  More fix
d5a5111890  2022-01-01 10:30:15 -07:00  James Betker  Fix collating on by default on grand_conjoined
4d9ba4a48a  2022-01-01 00:48:27 -07:00  James Betker  can i has fix now
56752f1dbc  2022-01-01 00:33:31 -07:00  James Betker  Fix collator bug
c28d8770c7  2022-01-01 00:23:46 -07:00  James Betker  fix tensor lengths
bbacffb790  2022-01-01 00:16:30 -07:00  James Betker  dataset improvements and fix to unified_voice_Bilevel
17fb934575  2021-12-31 16:21:39 -07:00  James Betker  wer update
f0c4cd6317  2021-12-30 13:41:24 -07:00  James Betker  Taking another stab at a BPE tokenizer
f2cd6a7f08  2021-12-30 09:10:14 -07:00  James Betker  For loading conditional clips, default to falling back to loading the clip itself
51ce1b5007  2021-12-29 14:44:32 -07:00  James Betker  Add conditioning clips features to grand_conjoined
c6ef0eef0b  2021-12-29 10:07:39 -07:00  James Betker  asdf
53784ec806  2021-12-29 09:44:37 -07:00  James Betker  grand conjoined dataset: support collating
07c2b9907c  2021-12-28 16:18:12 -07:00  James Betker  Add voice2voice clip model
746392f35c  2021-12-25 15:28:59 -07:00  James Betker  Fix DS
736c2626ee  2021-12-25 15:21:01 -07:00  James Betker  build in character tokenizer
52410fd9d9  2021-12-25 08:52:08 -07:00  James Betker  256-bpe tokenizer
ead2a74bf0  2021-12-23 16:12:16 -07:00  James Betker  Add debug_failures flag
9677f7084c  2021-12-23 15:21:30 -07:00  James Betker  dataset mod
8b19c37409  2021-12-23 15:20:26 -07:00  James Betker  UnifiedGptVoice!
5bc9772cb0  2021-12-23 15:03:20 -07:00  James Betker  grand: support validation mode
e55d949855  2021-12-23 14:32:33 -07:00  James Betker  GrandConjoinedDataset
b9de8a8eda  2021-12-22 19:21:29 -07:00  James Betker  More fixes
191e0130ee  2021-12-22 18:30:50 -07:00  James Betker  Another fix
6c6daa5795  2021-12-22 17:46:18 -07:00  James Betker  Build a bigger, better tokenizer
c737632eae  2021-12-22 15:06:14 -07:00  James Betker  Train and use a bespoke tokenizer
a9629f7022  2021-12-22 14:03:18 -07:00  James Betker  Try out using the GPT tokenizer rather than nv_tacotron
            This results in a significant compression of the text domain, I'm curious what the
            effect on speech quality will be.
ced81a760b  2021-12-22 13:48:53 -07:00  James Betker  restore nv_tacotron