2d1a9f10c0
nightmare of spaghetti that might break compat; mechanism to increase the RVQ bins of an existing model without retraining: keeps sampled proms/resps at the max RVQ level and trims off excess levels according to which model receives them; some other things I already forgot (I really hope no one else has weights being baked right now)
2023-08-19 15:06:33 -05:00

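A minimal sketch of the trimming described above, assuming a (frames, levels) layout for the quantized codes; `trim_rvq_levels` and the shapes are illustrative, not this repo's actual code:

```python
import torch

def trim_rvq_levels(codes: torch.Tensor, n_levels: int) -> torch.Tensor:
    # keep only the first `n_levels` RVQ codebooks of a (frames, levels) tensor
    return codes[:, :n_levels]

# a prompt sampled at the max of 8 RVQ levels, handed to a 4-level model
proms = torch.randint(0, 1024, (600, 8))  # (frames, rvq_levels)
assert trim_rvq_levels(proms, 4).shape == (600, 4)
```
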
6ca347e1e1
literally had a eureka moment before going to bed with a way to implement the cse/nse tasks
2023-08-19 01:16:46 -05:00

508677fcd5
repaired the auraloss loss calculation during eval/val
2023-08-18 21:19:47 -05:00

fb4e816823
oops
2023-08-18 21:11:19 -05:00

2a71486cb6
preparing for SpeechX extensions
2023-08-18 20:58:07 -05:00

8e7f900210
forgot the =
2023-08-17 19:07:59 -05:00

3ff7cf8341
maybe fix the evaluation dataset not being capped to cfg.evaluation.size
2023-08-17 18:56:37 -05:00

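A hedged sketch of that cap; `cfg.evaluation.size` is the repo's config field, while `cap_dataset` and the use of `torch.utils.data.Subset` are assumptions for illustration:

```python
from torch.utils.data import Dataset, Subset

def cap_dataset(dataset: Dataset, size: int) -> Dataset:
    # return at most `size` samples; a no-op when the cap is unset or already met
    if size and len(dataset) > size:
        return Subset(dataset, range(size))
    return dataset
```
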
d7deaf6def
distributed training works now (hopefully)
2023-08-13 22:07:45 -05:00

2af09d0bef
fixed that mysterious discrepancy between the reported losses (I am so freaking mad, my piss is boiling, I had to interrupt training halfway through an epoch)
2023-08-05 15:25:41 -05:00

d1b9770d41
set the model to eval mode when inferencing (very important)
2023-08-05 04:29:05 +00:00

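Why this is "very important": a module left in train mode keeps dropout active and keeps updating normalization statistics, which subtly corrupts generated output. This is the generic PyTorch idiom, with `infer` as a hypothetical wrapper rather than this repo's actual entry point:

```python
import torch

@torch.inference_mode()  # skip autograd bookkeeping during generation
def infer(model: torch.nn.Module, *inputs):
    model.eval()  # disable dropout; norm layers use frozen running statistics
    return model(*inputs)
```
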
d89568a96e
some fixes for the local framework
2023-08-05 03:22:15 +00:00

012f54b7f1
another classic commit so I can copy it to another machine, gut things out, and use the trainer bits for a side project that I should really get around to working on sooner rather than later
2023-08-04 14:21:30 -05:00

c85101403f
big cleanup
2023-08-03 20:26:36 -05:00

2e03e5ac93
Fixed an issue where having fairseq installed at all would brick logging
2023-08-02 22:57:10 -05:00

bf8cedc9dd
Rewrite init
2023-08-02 21:53:35 +00:00