James Betker
9c6f776980
Add univnet vocoder
2022-03-15 11:34:51 -06:00
James Betker
7929fd89de
Refactor audio-style models into the audio folder
2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82
move waveglow to audio/vocoders
2022-03-15 11:03:07 -06:00
James Betker
0419a64107
misc
2022-03-15 10:36:34 -06:00
James Betker
bb03cbb9fc
composable initial checkin
2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9
tts8 (incomplete, may be removed)
2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678
Use wider spectrogram when asked
2022-03-15 10:35:11 -06:00
James Betker
9767260c6c
tacotron stft - loosen bounds restrictions and clip
2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7
Updates to support inputting MELs into the conditioning encoder
2022-03-14 17:31:42 -06:00
James Betker
e045fb0ad7
fix clip grad norm with scaler
2022-03-13 16:28:23 -06:00
James Betker
22c67ce8d3
tts9 mods
2022-03-13 10:25:55 -06:00
James Betker
08599b4c75
fix random_audio_crop injector
2022-03-12 20:42:29 -07:00
James Betker
8f130e2b3f
add scale_shift_norm back to tts9
2022-03-12 20:42:13 -07:00
James Betker
9bbbe26012
update audio_with_noise
2022-03-12 20:41:47 -07:00
James Betker
e754c4fbbc
sweep update
2022-03-12 15:33:00 -07:00
James Betker
73bfd4a86d
another tts9 update
2022-03-12 15:17:06 -07:00
James Betker
0523777ff7
add efficient config to tts9
2022-03-12 15:10:35 -07:00
James Betker
896accb71f
data and prep improvements
2022-03-12 15:10:11 -07:00
James Betker
1e87b934db
potentially average conditioning inputs
2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11
Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
2022-03-09 09:43:38 -07:00
James Betker
726e30c4f7
Update noise augmentation dataset to include voices that are appended at the end of another clip.
2022-03-09 09:43:10 -07:00
James Betker
c4e4cf91a0
add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric
2022-03-08 15:53:27 -07:00
James Betker
3e5da71b16
add grad scaler scale to metrics
2022-03-08 15:52:42 -07:00
James Betker
d2bdeb6f20
misc audio support
2022-03-08 15:52:26 -07:00
James Betker
d553808d24
misc
2022-03-08 15:52:16 -07:00
James Betker
7dabc17626
phase2 filter initial commit
2022-03-08 15:51:55 -07:00
James Betker
f56edb2122
minicoder with classifier head: spread out probability mass for 0 predictions
2022-03-08 15:51:31 -07:00
James Betker
29b2921222
move diffusion vocoder
2022-03-08 15:51:05 -07:00
James Betker
94222b0216
tts9 initial commit
2022-03-08 15:50:45 -07:00
James Betker
38fd9fc985
Improve efficiency of audio_with_noise_dataset
2022-03-08 15:50:13 -07:00
James Betker
b3def182de
move processing pipeline to "phase_1"
2022-03-08 15:49:51 -07:00
James Betker
30ddac69aa
lots of bad entries
2022-03-05 23:15:59 -07:00
James Betker
dcf98df0c2
++
2022-03-05 23:12:34 -07:00
James Betker
64d764ccd7
fml
2022-03-05 23:11:10 -07:00
James Betker
ef63ff84e2
pvd2
2022-03-05 23:08:39 -07:00
James Betker
1a05712764
pvd
2022-03-05 23:05:29 -07:00
James Betker
d1dc8dbb35
Support tts9
2022-03-05 20:14:36 -07:00
James Betker
93a3302819
Push training_state data to CPU memory before saving it
For whatever reason, keeping this on GPU memory just doesn't work.
When you load it, it consumes a large amount of GPU memory and that
utilization doesn't go away. Saving to CPU should fix this.
2022-03-04 17:57:33 -07:00
James Betker
6000580e2e
df
2022-03-04 13:47:00 -07:00
James Betker
382681a35d
Load diffusion_fid DVAE into the correct cuda device
2022-03-04 13:42:14 -07:00
James Betker
e1052a5e32
Move log consensus to train for efficiency
2022-03-04 13:41:32 -07:00
James Betker
ce6dfdf255
Distributed "fixes"
2022-03-04 12:46:41 -07:00
James Betker
3ff878ae85
Accumulate loss & grad_norm metrics from all entities within a distributed graph
2022-03-04 12:01:16 -07:00
James Betker
79e5692388
Fix distributed bug
2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3
Cap grad booster
2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
James Betker
f490eaeba7
Shuffle optimizer states back and forth between cpu memory during steps
2022-03-04 10:38:51 -07:00
James Betker
3c242403f5
adjust location of pre-optimizer step so I can visualize the new grad norms
2022-03-04 08:56:42 -07:00
James Betker
58019a2ce3
audio diffusion fid updates
2022-03-03 21:53:32 -07:00