ee58db746f  actually make the evaluation dataset shuffled for sample_type=speaker  (2023-08-17 15:04:45 -05:00)
18403a3523  maybe fixes eval dataloader not shuffling under distributed  (2023-08-17 13:41:53 -05:00)
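A minimal sketch (assumed, not this repo's actual code) of what the two shuffle commits above are likely about: once a DistributedSampler is passed to the DataLoader, `shuffle=True` on the DataLoader itself is not allowed, so the eval split has to be shuffled through the sampler.

```python
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler

def build_eval_dataloader(dataset, batch_size, epoch=0):
    """Eval loader that actually shuffles, in both single-GPU and distributed runs."""
    sampler = None
    if dist.is_available() and dist.is_initialized():
        # Shuffling must be requested on the sampler; DataLoader(shuffle=True)
        # cannot be combined with an explicit sampler.
        sampler = DistributedSampler(dataset, shuffle=True, drop_last=False)
        sampler.set_epoch(epoch)  # reseed the per-epoch permutation
    return DataLoader(
        dataset,
        batch_size=batch_size,
        sampler=sampler,
        shuffle=(sampler is None),  # plain shuffle only in the non-distributed path
    )
```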
b5f247aa11  just nuked about 9 hours of progress because I didn't make sure it pruned only on the global leader  (2023-08-16 23:37:52 -05:00)
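The pruning incident above is the usual multi-rank race: every process (or every node's local rank 0) deletes old checkpoints instead of only the single global leader. A hedged sketch of the guard, with hypothetical helper and file names:

```python
import os
import torch.distributed as dist

def is_global_leader() -> bool:
    # Global rank 0, not the per-node local rank 0.
    return not (dist.is_available() and dist.is_initialized()) or dist.get_rank() == 0

def prune_checkpoints(ckpt_dir: str, keep: int = 4) -> None:
    if not is_global_leader():
        return  # exactly one process in the whole job may delete files
    ckpts = sorted(
        (os.path.join(ckpt_dir, f) for f in os.listdir(ckpt_dir) if f.endswith(".pt")),
        key=os.path.getmtime,
    )
    for path in ckpts[:-keep]:  # drop everything but the newest `keep` checkpoints
        os.remove(path)
```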
44c08d828e  added a sample_type that samples from speakers, so an epoch is truly balanced by speaker rather than over the entire dataset, plus a sampler that tries to balance by speakers  (2023-08-16 19:39:21 -05:00)
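A rough sketch of the speaker-balanced sampling described above, under an assumed interface (the class name and arguments are illustrative, not the repo's): pick a speaker uniformly at random, then one of that speaker's utterances, so prolific speakers no longer dominate an epoch.

```python
import random
from collections import defaultdict

class SpeakerBalancedSampler:
    """Yields dataset indices; usable as DataLoader(sampler=...)."""

    def __init__(self, speaker_ids, num_samples=None):
        # speaker_ids[i] is the speaker of dataset item i.
        self.by_speaker = defaultdict(list)
        for idx, spk in enumerate(speaker_ids):
            self.by_speaker[spk].append(idx)
        self.speakers = list(self.by_speaker)
        self.num_samples = num_samples or len(speaker_ids)

    def __iter__(self):
        for _ in range(self.num_samples):
            spk = random.choice(self.speakers)         # uniform over speakers
            yield random.choice(self.by_speaker[spk])  # then uniform over that speaker's clips

    def __len__(self):
        return self.num_samples
```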
277c759ab1  fixed issue with non-distributed training, oops  (2023-08-14 21:42:35 -05:00)
5fa86182b5  oops  (2023-08-14 10:50:40 -05:00)
d7deaf6def  distributed training works now (hopefully)  (2023-08-13 22:07:45 -05:00)
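For context, a generic sketch of the kind of DDP wiring the commit above refers to, assuming a torchrun-style launch; the repo's actual setup (and the non-distributed fallback patched in the two commits above it) may differ.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_distributed(model: torch.nn.Module) -> torch.nn.Module:
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    if world_size <= 1:
        return model  # single-process run: no process group, no wrapping
    dist.init_process_group(backend="nccl")  # env:// rendezvous provided by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return DDP(model.to(local_rank), device_ids=[local_rank])
```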
bf8cedc9dd  Rewrite init  (2023-08-02 21:53:35 +00:00)