James Betker
bb3d1ab03d
More cleanup
2022-02-04 11:06:17 -07:00
James Betker
5cc342de66
Clean up
2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab
add an autoregressive ctc code generator
2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344
Update SR model
2022-02-03 21:42:53 -07:00
James Betker
de1a1d501a
Move audio injectors into their own file
2022-02-03 21:42:37 -07:00
James Betker
687393de59
Add a better split_on_silence (processing_pipeline)
...
Going to extend this a bit more going forwards to support the entire pipeline.
2022-02-03 20:00:26 -07:00
James Betker
1d29999648
Updates to the TTS production scripts
2022-02-03 20:00:01 -07:00
James Betker
bc506d4bcd
Mods to unet_diffusion_tts6 to support super resolution mode
2022-02-03 19:59:39 -07:00
James Betker
4249681c4b
Mods to support an autoregressive CTC code generator
2022-02-03 19:58:54 -07:00
James Betker
8132766d38
tts6
2022-01-31 20:15:06 -07:00
James Betker
fbea6e8eac
Adjustments to diffusion networks
2022-01-30 16:14:06 -07:00
James Betker
e58dab14c3
new diffusion updates from testing
2022-01-29 11:01:01 -07:00
James Betker
935a4e853e
get rid of nil tokens in <2>
2022-01-27 22:45:57 -07:00
James Betker
0152174c0e
Add wandb_step_factor argument
2022-01-27 19:58:58 -07:00
James Betker
e0e36ed98c
Update use_diffuse_tts
2022-01-27 19:57:28 -07:00
James Betker
a77d376ad2
rename unet diffusion tts and add 3
2022-01-27 19:56:24 -07:00
James Betker
7badbf1b4d
update usage scripts
2022-01-25 17:57:26 -07:00
James Betker
8c255811ad
more fixes
2022-01-25 17:57:16 -07:00
James Betker
0f3ca28e39
Allow diffusion model to be trained with masking tokens
2022-01-25 14:26:21 -07:00
James Betker
798ed7730a
i like wasting time
2022-01-24 18:12:08 -07:00
James Betker
fc09cff4b3
angry
2022-01-24 18:09:29 -07:00
James Betker
cc0d9f7216
Fix
2022-01-24 18:05:45 -07:00
James Betker
3a9e3a9db3
consolidate state
2022-01-24 17:59:31 -07:00
James Betker
dfef34ba39
Load ema to cpu memory if specified
2022-01-24 15:08:29 -07:00
James Betker
49edffb6ad
Revise device mapping
2022-01-24 15:08:13 -07:00
James Betker
33511243d5
load model state dicts into the correct device
...
it's not clear to me that this will make a huge difference, but it's a good idea anyways
2022-01-24 14:40:09 -07:00
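A minimal illustration of the idea above using torch.load's map_location argument; the checkpoint path and model variable are placeholders, not the trainer's actual wiring:

```python
import torch

# Load the checkpoint straight onto the device that will own the weights,
# instead of loading to the default location and copying afterwards.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
state = torch.load('model.pth', map_location=device)  # 'model.pth' is a placeholder path
model.load_state_dict(state)  # `model` is assumed to be an already-constructed nn.Module
```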
James Betker
3e16c509f6
Misc fixes
2022-01-24 14:31:43 -07:00
James Betker
e2ed0adbd8
use_diffuse_tts updates
2022-01-24 14:31:28 -07:00
James Betker
e420df479f
Allow steps to specify which state keys to carry forward (reducing memory utilization)
2022-01-24 11:01:27 -07:00
James Betker
62475005e4
Sort data items in descending order, which I suspect will improve performance because we will hit GC less
2022-01-23 19:05:32 -07:00
James Betker
d18aec793a
Revert "(re) attempt diffusion checkpointing logic"
...
This reverts commit b22eec8fe3.
2022-01-22 09:14:50 -07:00
James Betker
b22eec8fe3
(re) attempt diffusion checkpointing logic
2022-01-22 08:34:40 -07:00
James Betker
8f48848f91
misc
2022-01-22 08:23:29 -07:00
James Betker
851070075a
text<->cond clip
...
I need that universal clip..
2022-01-22 08:23:14 -07:00
James Betker
8ada52ccdc
Update LR layers to checkpoint better
2022-01-22 08:22:57 -07:00
James Betker
ce929a6b3f
Allow grad scaler to be enabled even in fp32 mode
2022-01-21 23:13:24 -07:00
James Betker
91b4b240ac
don't pickle unique files
2022-01-21 00:02:06 -07:00
James Betker
7fef7fb9ff
Update fast_paired_dataset to report how many audio files it is actually using
2022-01-20 21:49:38 -07:00
James Betker
ed35cfe393
Update inference scripts
2022-01-20 11:28:50 -07:00
James Betker
20312211e0
Fix bug in code alignment
2022-01-20 11:28:12 -07:00
James Betker
8e2439f50d
Decrease resolution requirements to 2048
2022-01-20 11:27:49 -07:00
James Betker
4af8525dc3
Adjust diffusion vocoder to allow training individual levels
2022-01-19 13:37:59 -07:00
James Betker
ac13bfefe8
use_diffuse_tts
2022-01-19 00:35:24 -07:00
James Betker
bcd8cc51e1
Enable collated data for diffusion purposes
2022-01-19 00:35:08 -07:00
James Betker
dc9cd8c206
Update use_gpt_tts to be usable with unified_voice2
2022-01-18 21:14:17 -07:00
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
b6190e96b2
fast_paired
2022-01-17 15:46:02 -07:00
James Betker
1d30d79e34
De-specify fast-paired-dataset
2022-01-16 21:20:00 -07:00
James Betker
2b36ca5f8e
Revert paired back
2022-01-16 21:10:46 -07:00
James Betker
ad3e7df086
Split the fast random into its own new dataset
2022-01-16 21:10:11 -07:00
James Betker
7331862755
Updated paired to randomly index data, offsetting memory costs and speeding up initialization
2022-01-16 21:09:22 -07:00
James Betker
37e4e737b5
a few fixes
2022-01-16 15:17:17 -07:00
James Betker
35db5ebf41
paired_voice_audio_dataset - aligned codes support
2022-01-15 17:38:26 -07:00
James Betker
3f177cd2b3
requirements
2022-01-15 17:28:59 -07:00
James Betker
b398ecca01
wer fix
2022-01-15 17:28:17 -07:00
James Betker
9100e7fa9b
Add a diffusion network that takes aligned text instead of MELs
2022-01-15 17:28:02 -07:00
James Betker
87c83e4957
update wer script
2022-01-13 17:08:49 -07:00
James Betker
009a1e8404
Add a new diffusion_vocoder that should be trainable faster
...
This new one has a "cheating" top layer that does not feed down into the unet encoder,
but does consume the outputs of the unet. This cheater only operates on half of the input,
while the rest of the unet operates on the full input. This limits the dimensionality of this last
layer, on the assumption that these last layers consume by far the most computation and memory,
but do not require the full input context.
Losses are only computed on half of the aggregate input.
2022-01-11 17:26:07 -07:00
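A rough sketch of the layout described above; the module and tensor names are assumptions for illustration, not the actual diffusion_vocoder code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CheaterTop(nn.Module):
    """Top layer that consumes the unet's output but does not feed back into
    the unet encoder, and only operates on half of the positions so the
    most expensive (highest-resolution) layer sees a reduced input."""
    def __init__(self, unet_channels, out_channels):
        super().__init__()
        self.proj = nn.Conv1d(unet_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, unet_out):
        half = unet_out.shape[-1] // 2
        return self.proj(unet_out[..., :half])   # predict only the first half

def half_loss(pred_half, target):
    # the loss is computed against the matching half of the target only
    return F.mse_loss(pred_half, target[..., :pred_half.shape[-1]])
```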
James Betker
d4e27ccf62
misc updates
2022-01-11 16:25:40 -07:00
James Betker
91f28580e2
fix unified_voice
2022-01-10 16:17:31 -07:00
James Betker
136744dc1d
Fixes
2022-01-10 14:32:04 -07:00
James Betker
ee3dfac2ae
unified_voice2: decouple positional embeddings and token embeddings from underlying gpt model
2022-01-10 08:14:41 -07:00
James Betker
f503d8d96b
Partially implement performers in transformer_builders
2022-01-09 22:35:03 -07:00
James Betker
ec456b6733
Revert unified_voice back to beginning
...
I'll be doing my work within unified_voice2
2022-01-09 22:34:30 -07:00
James Betker
432073c5ca
Make performer code functional
2022-01-09 22:32:50 -07:00
James Betker
f474a7ac65
unified_voice2
2022-01-09 22:32:34 -07:00
James Betker
c075fe72e2
import performer repo
2022-01-09 22:10:07 -07:00
James Betker
7de3874f15
Make dalle transformer checkpointable
2022-01-09 19:14:35 -07:00
James Betker
70b17da193
Alter unified_voice to use extensible transformer (still WIP)
2022-01-08 22:18:25 -07:00
James Betker
15d9517e26
Allow bi-directional clipping
2022-01-08 22:18:04 -07:00
James Betker
894d245062
More zero_grad fixes
2022-01-08 20:31:19 -07:00
James Betker
8bade38180
Add generic CLIP model based off of x_clip
2022-01-08 19:08:01 -07:00
James Betker
2a9a25e6e7
Fix likely defective nan grad recovery
2022-01-08 18:24:58 -07:00
James Betker
438dd9ed33
fix text-voice-clip bug
2022-01-08 08:55:00 -07:00
James Betker
34774f9948
unified_voice: begin decoupling from HF GPT
...
I'd like to try some different (newer) transformer variants. The way to get
there is softly decoupling the transformer portion of this architecture
from GPT. This actually should be fairly easy.
2022-01-07 22:51:24 -07:00
James Betker
1f6a5310b8
More fixes to use_gpt_tts
2022-01-07 22:30:55 -07:00
James Betker
68090ac3e9
Finish up the text->voice clip model
2022-01-07 22:28:45 -07:00
James Betker
65ffe38fce
misc
2022-01-06 22:16:17 -07:00
James Betker
6706591d3d
Fix dataset
2022-01-06 15:24:37 -07:00
James Betker
f4484fd155
Add "dataset_debugger" support
...
This allows the datasets themselves to compile statistics and report them
via tensorboard and wandb.
2022-01-06 12:38:20 -07:00
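A minimal sketch of what such a dataset-side debugger could look like; the interface names are assumptions:

```python
class DatasetDebugger:
    """Accumulates counters inside the dataset; the training loop periodically
    fetches the map and writes it to tensorboard/wandb."""
    def __init__(self):
        self.counters = {}

    def increment(self, key, n=1):
        self.counters[key] = self.counters.get(key, 0) + n

    def get_debugging_map(self):
        # the returned dict is logged verbatim by the training loop
        return dict(self.counters)

# usage inside a dataset's __getitem__ (illustrative):
#   self.debugger.increment('skipped_indices', skipped)
```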
James Betker
f3cab45658
Revise audio datasets to include interesting statistics in batch
...
Stats include:
- How many indices were skipped to retrieve a given index
- Whether or not a conditioning input was actually the file itself
2022-01-06 11:15:16 -07:00
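Illustratively, the two statistics listed above can simply ride along in the returned batch dict; the keys and the load_with_retries helper are assumptions, and this is a fragment of a dataset's __getitem__:

```python
def __getitem__(self, index):
    # `load_with_retries` and the item keys are assumed for illustration.
    item, skipped = self.load_with_retries(index)
    return {
        'wav': item['wav'],
        'text': item['text'],
        'skipped_items': skipped,                      # indices skipped to reach this index
        'conditioning_is_self': item['cond_is_self'],  # conditioning clip == the file itself
    }
```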
James Betker
06c1093090
Remove collating from paired_voice_audio_dataset
...
This will now be done at the model level, which is more efficient
2022-01-06 10:29:39 -07:00
James Betker
e7a705fe6e
Make gpt_asr_hf2 more efficient at inference
2022-01-06 10:27:10 -07:00
James Betker
5e1d1da2e9
Clean paired_voice
2022-01-06 10:26:53 -07:00
James Betker
525addffab
Unified: automatically clip inputs according to specified max length to improve inference time
2022-01-06 10:13:45 -07:00
James Betker
61cd351b71
update unified
2022-01-06 09:48:11 -07:00
James Betker
10fd1110be
Fix (?) use_gpt_tts for unified_voice
2022-01-05 20:09:31 -07:00
James Betker
3c4301f085
Remove dvae_arch_playground
2022-01-05 17:06:45 -07:00
James Betker
a63a17e48f
Remove deepspeech models
2022-01-05 17:05:13 -07:00
James Betker
c584ba05ee
unified_voice improvements
...
- Rename max_symbols_per_phrase to max_text_tokens
- Remove max_total_tokens (no longer necessary)
- Fix integration with MelEncoder
2022-01-05 17:03:53 -07:00
James Betker
50d267ab1a
misc
2022-01-05 17:01:22 -07:00
James Betker
0fe34f57d1
Use torch resampler
2022-01-05 15:47:22 -07:00
James Betker
38aba6f88d
Another dumdum fix
2022-01-04 15:18:25 -07:00
James Betker
963c6072bb
Add mel_encoder and solo embeddings to unified_voice
2022-01-04 15:15:58 -07:00
James Betker
2165124f19
Add GPT documentation
2022-01-01 21:00:07 -07:00
James Betker
2635412291
doh
2022-01-01 14:29:59 -07:00
James Betker
d4a6298658
more debugging
2022-01-01 14:25:27 -07:00
James Betker
d8111e0477
misc
2022-01-01 14:05:33 -07:00
James Betker
dc535b5358
better bounds
2022-01-01 14:05:22 -07:00
James Betker
fe9ea4e01a
auto-fix text_inputs too big
2022-01-01 13:25:47 -07:00
James Betker
35abefd038
More fix
2022-01-01 10:31:03 -07:00
James Betker
d5a5111890
Fix collating on by default on grand_conjoined
2022-01-01 10:30:15 -07:00
James Betker
4d9ba4a48a
can i has fix now
2022-01-01 00:48:27 -07:00
James Betker
56752f1dbc
Fix collator bug
2022-01-01 00:33:31 -07:00
James Betker
c28d8770c7
fix tensor lengths
2022-01-01 00:23:46 -07:00
James Betker
bbacffb790
dataset improvements and fix to unified_voice_Bilevel
2022-01-01 00:16:30 -07:00
James Betker
eda753e776
Allow conditioning shuffling to be disabled
2021-12-31 23:32:08 -07:00
James Betker
17fb934575
wer update
2021-12-31 16:21:39 -07:00
James Betker
f0c4cd6317
Taking another stab at a BPE tokenizer
2021-12-30 13:41:24 -07:00
James Betker
9aa06542cd
Further reduce the complexity of the MEL encoder in GptAsrHf
2021-12-30 09:10:40 -07:00
James Betker
f2cd6a7f08
For loading conditional clips, default to falling back to loading the clip itself
2021-12-30 09:10:14 -07:00
James Betker
5ae7e0d9b0
Fix gapping bug in voice2voice clip
2021-12-29 14:44:46 -07:00
James Betker
51ce1b5007
Add conditioning clips features to grand_conjoined
2021-12-29 14:44:32 -07:00
James Betker
b12f47b36d
Add some noise to voice_voice_clip
2021-12-29 13:56:30 -07:00
James Betker
c6ef0eef0b
asdf
2021-12-29 10:07:39 -07:00
James Betker
53784ec806
grand conjoined dataset: support collating
2021-12-29 09:44:37 -07:00
James Betker
8a02ba5935
Transit s2s clips back to CPU memory after processing
2021-12-29 08:54:07 -07:00
James Betker
af6d5cd526
Add resume into speech-speech
2021-12-29 08:50:49 -07:00
James Betker
0e4bcc33ab
Additional debugging
2021-12-29 00:23:27 -07:00
James Betker
b24a51f0aa
Check in speech2speech CLIP inference tool
2021-12-29 00:19:44 -07:00
James Betker
c1bef01dfa
GptAsrHf2 checkin
2021-12-28 20:48:38 -07:00
James Betker
07c2b9907c
Add voice2voice clip model
2021-12-28 16:18:12 -07:00
James Betker
a9ee5b624f
Simplify and conform gpt_asr_hf2
2021-12-28 11:54:33 -07:00
James Betker
a5b4bee719
Improve asr_eval
2021-12-28 11:45:15 -07:00
James Betker
312f631c5b
gpt_asr_hf2: remove dual positional embeddings
2021-12-28 10:57:45 -07:00
James Betker
93624fa4b2
Don't use tqdm in ranks!=0
2021-12-28 10:06:54 -07:00
James Betker
a12042ea99
Allow multi-embeddings to be disabled
2021-12-28 09:00:53 -07:00
James Betker
4a32949b0e
update inference mode for unified
2021-12-26 15:33:21 -07:00
James Betker
a698d3f525
unified_voice: introduce paired embeddings
2021-12-26 15:33:05 -07:00
James Betker
6996dfd9d5
asr_hf2: add independent position embedders
2021-12-26 15:17:24 -07:00
James Betker
5b5cbc057c
Work checkpoint for gpt asr hf2
2021-12-26 10:29:12 -07:00
James Betker
cd89e6b42e
Initialize our embeddings the same way GPT-2 initializes theirs.
2021-12-26 00:20:30 -07:00
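For reference, GPT-2 draws its token and position embeddings from a normal distribution with std 0.02; a one-line equivalent (sizes are placeholders):

```python
import torch.nn as nn

emb = nn.Embedding(8192, 1024)                   # sizes are placeholders
nn.init.normal_(emb.weight, mean=0.0, std=0.02)  # GPT-2's embedding init
```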
James Betker
8d01f7685c
Get rid of absolute positional embeddings in unifiedvoice
2021-12-26 00:10:24 -07:00
James Betker
6700f8851d
moar verbosity
2021-12-25 23:23:21 -07:00
James Betker
8acf3b3097
Better dimensional asserting
2021-12-25 23:18:25 -07:00
James Betker
e959541494
Add position embeddings back into unified_voice
...
I think this may be the solution to the day's problems.
2021-12-25 23:10:56 -07:00
James Betker
64cb4a92db
Support adamw_zero
2021-12-25 21:32:01 -07:00
James Betker
776a7abfcc
Support torch DDP _set_static_graph
2021-12-25 21:20:06 -07:00
James Betker
746392f35c
Fix DS
2021-12-25 15:28:59 -07:00
James Betker
736c2626ee
build in character tokenizer
2021-12-25 15:21:01 -07:00
James Betker
b595c62893
One way decoder for decoding from mel codes
2021-12-25 12:18:00 -07:00
James Betker
ab9cafa572
Make tokenization configs more configurable
2021-12-25 12:17:50 -07:00
James Betker
52410fd9d9
256-bpe tokenizer
2021-12-25 08:52:08 -07:00
James Betker
8e26400ce2
Add inference for unified gpt
2021-12-24 13:27:06 -07:00
James Betker
ead2a74bf0
Add debug_failures flag
2021-12-23 16:12:16 -07:00
James Betker
9677f7084c
dataset mod
2021-12-23 15:21:30 -07:00
James Betker
8b19c37409
UnifiedGptVoice!
2021-12-23 15:20:26 -07:00
James Betker
5bc9772cb0
grand: support validation mode
2021-12-23 15:03:20 -07:00
James Betker
e55d949855
GrandConjoinedDataset
2021-12-23 14:32:33 -07:00
James Betker
b9de8a8eda
More fixes
2021-12-22 19:21:29 -07:00
James Betker
191e0130ee
Another fix
2021-12-22 18:30:50 -07:00
James Betker
6c6daa5795
Build a bigger, better tokenizer
2021-12-22 17:46:18 -07:00
James Betker
c737632eae
Train and use a bespoke tokenizer
2021-12-22 15:06:14 -07:00
James Betker
66bc60aeff
Re-add start_text_token
2021-12-22 14:10:35 -07:00
James Betker
a9629f7022
Try out using the GPT tokenizer rather than nv_tacotron
...
This results in a significant compression of the text domain; I'm curious what the
effect on speech quality will be.
2021-12-22 14:03:18 -07:00
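Swapping to the GPT-2 BPE tokenizer is a one-liner with huggingface transformers; the compression comes from multi-character BPE merges versus character-level tokens:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained('gpt2')
text = "the quick brown fox jumps over the lazy dog"
ids = tok.encode(text)
print(len(text), len(ids))   # far fewer BPE ids than characters
```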
James Betker
ced81a760b
restore nv_tacotron
2021-12-22 13:48:53 -07:00
James Betker
7bf4f9f580
duplicate nvtacotron
2021-12-22 13:48:30 -07:00
James Betker
7ae7d423af
VoiceCLIP model
2021-12-22 13:44:11 -07:00
James Betker
09f7f3e615
Remove obsolete lucidrains DALLE stuff, re-create it in a dedicated folder
2021-12-22 13:44:02 -07:00
James Betker
a42b94ab72
gpt_tts_hf inference fixes
2021-12-22 13:22:15 -07:00
James Betker
48e3ee9a5b
Shuffle conditioning inputs along the positional axis to reduce fitting on prosody and other positional information
...
The mels should still retain some short-range positional information the model can use
for tone and frequencies, for example.
2021-12-20 19:05:56 -07:00
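A sketch of the kind of chunk-wise shuffle described above; the chunk size and (batch, channels, time) layout are assumptions:

```python
import torch

def shuffle_conditioning(mel, chunk=16):
    """Permute fixed-size chunks of a conditioning MEL along the time axis:
    long-range order (prosody) is destroyed, while short-range structure
    (tone and frequency content) inside each chunk is kept."""
    b, c, t = mel.shape
    t = (t // chunk) * chunk
    chunks = mel[..., :t].reshape(b, c, -1, chunk)
    perm = torch.randperm(chunks.shape[2])
    return chunks[:, :, perm].reshape(b, c, t)
```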
James Betker
53858b2055
Fix gpt_tts_hf inference
2021-12-20 17:45:26 -07:00
James Betker
712d746e9b
gpt_tts: format conditioning inputs more for contextual voice clues and less for prosody
...
also support single conditional inputs
2021-12-19 17:42:29 -07:00
James Betker
c813befd53
Remove dedicated positioning embeddings
2021-12-19 09:01:31 -07:00
James Betker
b4ddcd7111
More inference improvements
2021-12-19 09:01:19 -07:00
James Betker
f9c45d70f0
Fix mel terminator
2021-12-18 17:18:06 -07:00
James Betker
937045cb63
Fixes
2021-12-18 16:45:38 -07:00
James Betker
9b9f7ea61b
GptTtsHf: Make the input/target placement easier to reason about
2021-12-17 10:24:14 -07:00
James Betker
2fb4213a3e
More lossy fixes
2021-12-17 10:01:42 -07:00
James Betker
dee34f096c
Add use_gpt_tts script
2021-12-16 23:28:54 -07:00
James Betker
9e8a9bf6ca
Various fixes to gpt_tts_hf
2021-12-16 23:28:44 -07:00
James Betker
62c8ed9a29
move speech utils
2021-12-16 20:47:37 -07:00
James Betker
e7957e4897
Make loss accumulator for logs accumulate better
2021-12-12 22:23:17 -07:00
James Betker
4f8c4d130c
gpt_tts_hf: pad mel tokens with an <end_of_sequence> token.
2021-12-12 20:04:50 -07:00
James Betker
76f86c0e47
gaussian_diffusion: support fp16
2021-12-12 19:52:21 -07:00
James Betker
aa7cfd1edf
Add support for mel norms across the channel dim
2021-12-12 19:52:08 -07:00
James Betker
8917c02a4d
gpt_tts_hf inference first pass
2021-12-12 19:51:44 -07:00
James Betker
63bf135b93
Support norms
2021-12-11 08:30:49 -07:00
James Betker
959979086d
fix
2021-12-11 08:18:00 -07:00
James Betker
5a664aa56e
misc
2021-12-11 08:17:26 -07:00
James Betker
d610540ce5
mel norm computation script
2021-12-11 08:16:50 -07:00
James Betker
306274245b
Also do dynamic range compression across mel
2021-12-10 20:06:24 -07:00
James Betker
faf55684b8
Use slaney norm in the mel filterbank computation
2021-12-10 20:04:52 -07:00
James Betker
b2d8fbcfc0
build a better speech synthesis toolset
2021-12-09 22:59:56 -07:00
James Betker
32cfcf3684
Turn off optimization in find_faulty_files
2021-12-09 09:02:09 -07:00
James Betker
a66a2bf91b
Update find_faulty_files
2021-12-09 09:00:00 -07:00
James Betker
9191201f05
asd
2021-12-07 09:55:39 -07:00
James Betker
ef15a39841
fix gdi bug?
2021-12-07 09:53:48 -07:00
James Betker
6ccff3f49f
Record codes more often
2021-12-07 09:22:45 -07:00
James Betker
d0b2f931bf
Add feature to diffusion vocoder where the spectrogram conditioning layers can be re-trained apart from the rest of the model
2021-12-07 09:22:30 -07:00
James Betker
662920bde3
Log codes when simply fetching codebook_indices
2021-12-06 09:21:43 -07:00
James Betker
380a5d5475
gdi..
2021-12-03 08:53:09 -07:00
James Betker
101a01f744
Fix dvae codes issue
2021-12-02 23:28:36 -07:00
James Betker
31fc693a8a
dafsdf
2021-12-02 22:55:36 -07:00
James Betker
040d998922
maasd
2021-12-02 22:53:48 -07:00
James Betker
cc10e7e7e8
Add tsv loader
2021-12-02 22:43:07 -07:00
James Betker
702607556d
nv_tacotron_dataset: allow it to load conditioning signals
2021-12-02 22:14:44 -07:00
James Betker
07b0124712
GptTtsHf!
2021-12-02 21:48:42 -07:00
James Betker
85542ec547
One last fix for gpt_asr_hf2
2021-12-02 21:19:28 -07:00
James Betker
68e9db12b5
Add interleaving and direct injectors
2021-12-02 21:04:49 -07:00
James Betker
04454ee63a
Add evaluation logic for gpt_asr_hf2
2021-12-02 21:04:36 -07:00
James Betker
47fe032a3d
Try to make diffusion validator more reproducible
2021-11-24 09:38:10 -07:00
James Betker
5956eb757c
ffffff
2021-11-24 00:19:47 -07:00
James Betker
f1ed0588e3
another fix
2021-11-24 00:11:21 -07:00
James Betker
7a3c4a4fc6
Fix lr quantizer decode
2021-11-24 00:01:26 -07:00
James Betker
3f6ecfe0db
q fix
2021-11-23 23:50:27 -07:00
James Betker
d9747fe623
Integrate with lr_quantizer
2021-11-23 19:48:22 -07:00
James Betker
82d0e7720e
Add choke to lucidrains_dvae
2021-11-23 18:53:37 -07:00
James Betker
934395d4b8
A few fixes for gpt_asr_hf2
2021-11-23 09:29:29 -07:00
James Betker
3b5c3d85d8
Allow specification of wandb run name
2021-11-22 17:31:29 -07:00
James Betker
01e635168b
whoops
2021-11-22 17:24:13 -07:00
James Betker
973f47c525
misc nonfunctional
2021-11-22 17:16:39 -07:00
James Betker
3125ca38f5
Further wandb logs
2021-11-22 16:40:19 -07:00
James Betker
19c80bf7a7
Improve wandb logging
2021-11-22 16:40:05 -07:00
James Betker
0604060580
Finish up mods for next version of GptAsrHf
2021-11-20 21:33:49 -07:00
James Betker
14f3155ec4
misc
2021-11-20 17:45:14 -07:00
James Betker
687e0746b3
Add Torch-derived MelSpectrogramInjector
2021-11-18 20:02:45 -07:00
James Betker
555b7e52ad
Add rev2 of GptAsrHf
2021-11-18 20:02:24 -07:00
James Betker
c30a38cdf1
Undo baseline GDI changes
2021-11-18 20:02:09 -07:00
James Betker
1287915f3c
Fix dvae test failure
2021-11-18 00:58:36 -07:00
James Betker
019acfa4c5
Allow flat dvae
2021-11-18 00:53:42 -07:00
James Betker
f3db41f125
Fix code logging
2021-11-18 00:34:37 -07:00
James Betker
f36bab95dd
Audio resample injector
2021-11-10 20:06:33 -07:00
James Betker
79367f753d
Fix error & add nonfinite warning
2021-11-09 23:58:41 -07:00
James Betker
5d5558893a
Merge remote-tracking branch 'origin/master'
2021-11-08 20:10:49 -07:00
James Betker
d43f25cc20
Update losses
2021-11-08 20:10:07 -07:00
James Betker
c584320cf3
Fix gpt_asr_hf distillation
2021-11-07 21:53:21 -07:00
James Betker
9b3c3b1227
use sets instead of list ops
2021-11-07 20:45:57 -07:00
James Betker
722d3dbdc2
f
2021-11-07 18:52:05 -07:00
James Betker
18b1de9b2c
Add exclusion_lists to unsupervised_audio_dataset
2021-11-07 18:46:47 -07:00
James Betker
9b693b0a54
Fixes to filter_clips_hifreq
2021-11-07 18:42:22 -07:00
James Betker
a367ea3fda
Add script for computing attention for gpt_asr
2021-11-07 18:42:06 -07:00
James Betker
3c0f2fbb21
Add filtration script for finding resampled clips (or phone calls)
2021-11-07 14:16:11 -07:00
James Betker
756b4dad09
Working gpt_asr_hf inference - and it's a beast!
2021-11-06 21:47:15 -06:00
James Betker
596a62fe01
Apply fix to gpt_asr_hf and prep it for inference
...
The fix is that we were predicting two characters in advance, not the next character
2021-11-04 10:09:24 -06:00
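The standard next-token setup the fix restores looks like this; `model` and `tokens` are illustrative and the model is assumed to return logits of shape (batch, seq, vocab):

```python
import torch.nn.functional as F

# inputs are tokens t_0..t_{n-1}; targets are tokens t_1..t_n (a one-step shift)
logits = model(tokens[:, :-1])                        # assumed shape: (batch, seq, vocab)
targets = tokens[:, 1:]
loss = F.cross_entropy(logits.transpose(1, 2), targets)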
James Betker
fd14746bf8
badtimes
2021-11-03 00:33:38 -06:00
James Betker
2fa80486de
tacotron_dataset: recover gracefully
2021-11-03 00:31:50 -06:00
James Betker
af51d00dee
Load wav files from voxpopuli instead of oggs
2021-11-02 09:32:26 -06:00
James Betker
3b65241b6b
Get rid of printing grad names (didn't work very well..)
2021-11-01 18:44:05 -06:00
James Betker
993bd52d42
Add spec_augment injector
2021-11-01 18:43:11 -06:00
James Betker
4cff774b0e
Reduce complexity of the encoder for gpt_asr_hf
2021-11-01 17:02:28 -06:00
James Betker
da55ca0438
gpt_asr using the huggingfaces transformer
2021-11-01 17:00:22 -06:00
James Betker
ee9b199d2b
Build in capacity to revert & resume networks that encounter a NaN
...
I'm increasingly seeing issues where something like this can be useful. In many (most?)
cases it's just a waste of compute, though. Still, better than a cold computer for a whole
night.
2021-11-01 16:14:59 -06:00
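A bare-bones sketch of the revert-and-resume idea; the checkpoint handling and the assumption that the model returns its loss are illustrative, not the trainer's actual mechanism:

```python
import torch

def guarded_step(model, optimizer, batch, last_good_ckpt):
    loss = model(batch)                     # assumed: the model returns its loss
    if not torch.isfinite(loss):
        # roll back to the last known-good weights and skip this step
        model.load_state_dict(torch.load(last_good_ckpt, map_location='cpu'))
        optimizer.zero_grad(set_to_none=True)
        return None
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```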
James Betker
87364b890f
Add custom clip_grad_norm that prints out the param names in error.
2021-11-01 11:12:20 -06:00
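Roughly, such a helper can mirror torch.nn.utils.clip_grad_norm_ while naming the offending parameters; this is a sketch, not the repo's implementation:

```python
import torch

def clip_grad_norm_verbose(named_params, max_norm):
    named_params = [(n, p) for n, p in named_params if p.grad is not None]
    norms = {n: p.grad.detach().norm() for n, p in named_params}
    bad = [n for n, v in norms.items() if not torch.isfinite(v)]
    if bad:
        raise RuntimeError(f'Non-finite gradient norm in parameters: {bad}')
    total = torch.norm(torch.stack(list(norms.values())))
    scale = (max_norm / (total + 1e-6)).clamp(max=1.0)
    for _, p in named_params:
        p.grad.detach().mul_(scale)
    return total
```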
James Betker
f7d0901ce6
Decouple MEL from nv_tacotron_dataset
2021-10-31 15:01:38 -06:00
James Betker
b8b268b5f6
Misc
2021-10-31 14:29:23 -06:00
James Betker
b404a3b747
Revert recent changes to extr
2021-10-30 20:48:06 -06:00
James Betker
83cccef9d8
Condition on full signal
2021-10-30 19:58:34 -06:00
James Betker
e9dc37f19c
Mod trainer to copy config file into experiments root
2021-10-30 17:00:24 -06:00
James Betker
36ed28913a
Fix two scripts
2021-10-30 17:00:06 -06:00
James Betker
df45a9dec2
Fix inference mode for lucidrains_gpt
2021-10-30 16:59:18 -06:00
James Betker
466b9fbcaa
classify
2021-10-29 20:22:40 -06:00
James Betker
92fe8b4dd9
ffffpt2
2021-10-29 17:29:49 -06:00
James Betker
95ca88efce
Fix feedforward
2021-10-29 17:27:51 -06:00
James Betker
b476516340
Check in backing changes (which may have broken something?)
2021-10-29 17:22:33 -06:00
James Betker
986fc9628d
Check in GPT with new inference methods (but not the backing code..)
2021-10-29 17:21:40 -06:00
James Betker
0822792d79
Fix options.py bug
2021-10-29 14:47:31 -06:00
James Betker
928e7026c2
Mod STFT injector to be specifiable
2021-10-28 22:34:12 -06:00
James Betker
579f0a70ee
Move UnsupervisedAudioDataset to use my new mp3 loader
2021-10-28 22:33:12 -06:00
James Betker
2afea126d7
mod trainer to be very explicit about the fact that loading models and state together don't work, but allow it
2021-10-28 22:32:42 -06:00
James Betker
bb0a0c8264
classify_into_folders script
2021-10-27 14:56:16 -06:00
James Betker
d91dcbd404
Make classifier inference script more open
2021-10-27 13:18:54 -06:00
James Betker
58494b0888
Add support for distilling gpt_asr
2021-10-27 13:10:07 -06:00
James Betker
5d714bc566
Add deepspeech model and support for decoding with it
2021-10-27 13:09:46 -06:00
James Betker
15437b2fc3
WER script
2021-10-26 13:30:29 -06:00
James Betker
3a9d1c53ea
Rework conditioning inputs provided
2021-10-26 10:46:33 -06:00
James Betker
21b6daa0ed
Introduce clip resampling
2021-10-26 10:42:23 -06:00
James Betker
43e389aac6
Add time_embed_dim_multiplier
2021-10-26 08:55:55 -06:00
James Betker
ba6e46c02a
Further simplify diffusion_vocoder and make noise_surfer work
2021-10-26 08:54:30 -06:00
James Betker
c3421b7f6d
Dataset work for audio quality processor
2021-10-24 09:09:34 -06:00
James Betker
0ee1c67ce5
Rework how conditioning inputs are applied to DiffusionVocoder
2021-10-24 09:08:58 -06:00
James Betker
b1248e7114
Get rid of filter_urbansounds
2021-10-21 16:46:04 -06:00
James Betker
06ea6191a9
Initial implementation of audio_with_noise dataset
2021-10-21 16:45:19 -06:00
James Betker
9a3e89ec53
Force LR fix
2021-10-21 12:01:01 -06:00
James Betker
40cb25292a
Fix force_lr logic
2021-10-21 11:51:30 -06:00
James Betker
0dee15f875
base DVAE & vector_quantizer
2021-10-20 21:19:38 -06:00
James Betker
f2a31702b5
Clean stuff up, move more things into arch_util
2021-10-20 21:19:25 -06:00
James Betker
a6f0f854b9
Fix codes when inferring from dvae
2021-10-17 22:51:17 -06:00
James Betker
d016a2fbad
Go back to vanilla flavor of diffusion
2021-10-17 17:32:46 -06:00
James Betker
23da073037
Norm decoder outputs now
2021-10-16 09:07:10 -06:00
James Betker
0edc98f6c4
Throw out the idea of conditioning on discrete codes. Oh well :(
2021-10-16 09:02:01 -06:00
James Betker
62c8c5d93e
Zero out spectrogram code inputs initially.
2021-10-15 12:10:11 -06:00
James Betker
1d0b44ebc2
More tweaks to diffusion-vocoder
2021-10-15 11:51:17 -06:00
James Betker
3b19581f9a
Allow num_resblocks to specified per-level
2021-10-14 11:26:04 -06:00
James Betker
83798887a8
Mods to support unet diffusion vocoder with conditioning
2021-10-13 21:23:18 -06:00
James Betker
c861054218
Restore spleeter_splitter
...
The mods don't help - in TF mode, everything is done on the GPU anyways. Something else
is going to have to be done to fix this.
2021-10-09 23:55:42 -06:00
James Betker
32ba496632
More fixes
2021-10-09 23:27:14 -06:00
James Betker
932ea29a83
Add multiprocessing to the spleeter splitter script to try and improve performance further
2021-10-09 23:15:36 -06:00
James Betker
b94e587f46
Improvements to spleeter_filter_noisy_clips
2021-10-07 21:28:00 -06:00
James Betker
33120cb35c
Add norming to discretization_loss
2021-10-06 17:10:50 -06:00
James Betker
bb891a3a53
Add partitioning and improved resuming to the spleeter filtering
2021-10-06 17:10:12 -06:00
James Betker
f2977d360c
Allow attention_dim in channel attention to be specified, add converter
2021-10-05 17:29:38 -06:00
James Betker
9c0d7288ea
Discretization loss attempt
2021-10-04 20:59:21 -06:00
James Betker
66f99a159c
Rev2
2021-10-03 15:20:50 -06:00
James Betker
09f373e3b1
Add dvae with channel attention
2021-10-03 10:52:01 -06:00
James Betker
0396a9d2ca
Increase baseline codes recording across all dvae models
2021-09-30 08:09:07 -06:00
James Betker
f84ccbdfb2
Fix quantizer with balancing_heuristic
2021-09-29 14:46:05 -06:00
James Betker
4914c526dc
More cleanup
2021-09-29 14:24:49 -06:00
James Betker
6e550edfe3
Attentive dvae
2021-09-29 14:17:29 -06:00
James Betker
fc8ae4679a
Work on spleeter filtering script
2021-09-29 09:24:56 -06:00
James Betker
55b58fb67f
Clean up codebase
...
Remove stuff that I'm likely not going to use again (or generally failed experiments)
2021-09-29 09:21:44 -06:00
James Betker
4d1a42e944
Add switchnorm to gumbel_quantizer
2021-09-24 18:49:25 -06:00
James Betker
ac57cdc794
Add scheduling to quantizer, enable cudnn_benchmarking to be disabled
2021-09-24 17:01:36 -06:00
James Betker
3e64e847c2
Gumbel quantizer
2021-09-23 23:32:03 -06:00
James Betker
c5297ccec6
Add dvae balancing heuristic
2021-09-23 21:19:36 -06:00
James Betker
e24c619387
Fix
2021-09-23 16:07:58 -06:00
James Betker
6833048bf7
Alterations to diffusion_dvae so it can be used directly on spectrograms
2021-09-23 15:56:25 -06:00
James Betker
97ea329a59
Make spleeter filter simpler (and hopefully much faster)
2021-09-17 15:29:42 -06:00
James Betker
359e9e27a7
unsupervised_audio_dataset: try to recover from failures of audio2numpy
2021-09-17 15:25:57 -06:00
James Betker
5c8d266d4f
chk
2021-09-17 09:15:36 -06:00
James Betker
a6544f1684
More checkpointing fixes
2021-09-16 23:12:43 -06:00
James Betker
94899d88f3
Fix overuse of checkpointing
2021-09-16 23:00:28 -06:00
James Betker
f78ce9d924
Get diffusion_dvae ready for prime time!
2021-09-16 22:43:10 -06:00
James Betker
1197ae1928
Misc
2021-09-16 10:53:56 -06:00
James Betker
6f48674647
Support diffusion models with extra return values & inference in diffusion_dvae
2021-09-16 10:53:46 -06:00
James Betker
8d9857f33d
More fixes
2021-09-14 20:45:05 -06:00
James Betker
9a9c90660f
Fixes
2021-09-14 18:29:17 -06:00
James Betker
4334a67924
Spleeter mods
2021-09-14 17:43:40 -06:00
James Betker
0382660159
Get diffusion_dvae functional
2021-09-14 17:43:31 -06:00
James Betker
e513052fca
Add unsupervised_audio_dataset
2021-09-14 17:43:16 -06:00
James Betker
bc603c3231
Script adjustments and fixes
2021-09-12 21:26:45 -06:00
James Betker
76e2c497f7
Improvements to splitter
2021-09-09 23:34:56 -06:00
James Betker
742f9b4010
Batch spleeter cleaner using GPU
2021-09-09 23:14:32 -06:00
James Betker
73b930c0f6
Add diffusion_dvae
...
Increase split_on_silence interval
2021-09-09 16:22:05 -06:00
James Betker
b8f2e0f452
mydvae
2021-09-06 17:45:30 -06:00
James Betker
92e7e57f81
Update diffusion_noise_surfer to support audio
2021-09-01 08:34:47 -06:00
James Betker
3e073cff85
Set kernel_size in diffusion_vocoder
2021-09-01 08:33:46 -06:00
James Betker
30cd33fe44
another fix
2021-08-31 14:46:46 -06:00
James Betker
8810d3de97
fix wavfile_dataset
2021-08-31 14:45:29 -06:00
James Betker
dabd87246d
Add unet_diffusion_vocoder
2021-08-31 14:38:33 -06:00
James Betker
274d352e6f
dug
2021-08-30 21:45:58 -06:00
James Betker
f1a0c21fb2
asr_eval
2021-08-30 21:41:34 -06:00
James Betker
ed6eae407f
More scripts for splitting and formatting audio
2021-08-30 21:20:52 -06:00
James Betker
909754cc27
Add find_faulty_files.py
2021-08-25 18:00:43 -06:00
James Betker
08b33c8e3a
Support silu activation
2021-08-25 09:03:14 -06:00
James Betker
67bf7f5219
dvae mods
...
Trying to squeeze as much performance out of this net as possible
2021-08-25 08:55:13 -06:00
James Betker
d05cc1f46c
Misc
2021-08-24 17:12:04 -06:00
James Betker
9dfe936c16
Fix ddp for sampler
2021-08-19 16:45:34 -06:00
James Betker
b521d94b01
Make gpt-asr more configurable
2021-08-19 16:33:41 -06:00
James Betker
570ed327ed
Stop dataset - attempt #2
2021-08-18 18:29:38 -06:00
James Betker
17453ccbe8
Revert mods to lrdvae
...
They didn't really change anything
2021-08-17 09:09:29 -06:00
James Betker
8332923f5c
Two more tools to test the audio segmentor
2021-08-17 09:09:11 -06:00
James Betker
7c086d0c2c
libritts - only write on successful check
2021-08-16 22:52:55 -06:00
James Betker
93e903af15
Rework wavfile dataset to be usable for things other than augments
2021-08-16 22:52:35 -06:00
James Betker
d7f30232c3
Oh yeah
2021-08-16 22:52:15 -06:00
James Betker
4c01d82265
Fix for voxpopuli
2021-08-16 22:52:05 -06:00
James Betker
1fede41b7b
Audio segmentor
2021-08-16 22:51:53 -06:00
James Betker
2d3372054d
Add support for voxpopuli to nv_tacotron_dataset
2021-08-16 17:13:40 -06:00
James Betker
729c1fd5a9
Fix up max lengths to save memory
2021-08-15 21:29:28 -06:00
James Betker
9e47e64d5a
Add gpt_segmentor model
...
The idea is to specifically train a model that extracts phrases from
audio clips.
2021-08-15 21:23:07 -06:00
James Betker
a826d5f658
Mods to dvae
...
- Add resblock to each layer
- Increase filter size for each layer
- Use SiLU
2021-08-15 20:54:10 -06:00
James Betker
b8bec22f1a
Fix gpt_asr inference bug
2021-08-15 20:53:42 -06:00
James Betker
3580c52eac
Fix up wavfile_dataset to be able to provide a full clip
2021-08-15 20:53:26 -06:00
James Betker
a523c4f932
Auto-normalize wav files by data type
2021-08-15 09:09:51 -06:00
James Betker
98057b6516
Make lrdvae use quantized mode in eval()
2021-08-14 23:43:01 -06:00
James Betker
c28f657ab8
Allow usage of pre-rendered mels saved to npy files
2021-08-14 23:38:15 -06:00
James Betker
ad3391bd96
Fix nan issue when interpolating audio
2021-08-14 20:42:01 -06:00
James Betker
769f0acc53
Moar fix
2021-08-14 17:23:15 -06:00
James Betker
3d2e724083
Fix audio ranging problem
2021-08-14 17:18:55 -06:00
James Betker
d6a73acaed
Allow processing of multiple audio sources at once from nv_tacotron_dataset
2021-08-14 16:04:05 -06:00
James Betker
007976082b
GPT_asr for inference
2021-08-14 14:37:17 -06:00
James Betker
e1bdd3f7c7
Fix gpt_asr bug. Initial implementation of beam search
2021-08-13 22:47:00 -06:00
James Betker
72622b4d61
Allow saving mel strips as files from the dataset implementation
2021-08-13 22:46:41 -06:00
James Betker
cfd284f425
Fix up some stuff that allows the MEL to be computed on-GPU
2021-08-13 18:35:55 -06:00
James Betker
cdee31c60b
GPT_ASR
2021-08-13 15:02:18 -06:00
James Betker
81e91c99de
Misc
2021-08-13 13:58:59 -06:00
James Betker
fff1a59e08
max/min mel invalid fix
2021-08-13 09:36:31 -06:00
James Betker
4b2946e581
More fix
2021-08-12 15:51:23 -06:00
James Betker
4c76257c71
Don't require collation for nv_tacotron
2021-08-12 15:44:55 -06:00
James Betker
5b07d3b623
Found error that I was trying to fix with reload=True
2021-08-12 15:22:34 -06:00
James Betker
430b650a34
......
2021-08-12 10:31:10 -06:00
James Betker
b35d6ae028
Print some metrics from tacotron dataset when it croaks
2021-08-12 09:21:12 -06:00
James Betker
0c4d6b1916
Just offer generic re-load for nv-tacotron
2021-08-12 09:09:12 -06:00
James Betker
154f5aa73c
Fix annoying warning and add to requirements
2021-08-11 17:32:06 -06:00
James Betker
f5a9b88ef6
tacotron cleaners: remove quotation marks
...
these don't really have relevance for tts or asr
2021-08-11 16:18:44 -06:00
James Betker
20586a8edc
Fix LRDVAE bug with quantizer integration
2021-08-11 16:17:22 -06:00
James Betker
f04a7bdf63
Bug fixes for tacotron dataset on mozilla cv
...
- Support a max mel length (mozilla cv has some tracks that are basically unbounded..)
- Don't fail on low sample rates (mozilla cv has some of those)
2021-08-11 16:17:03 -06:00
James Betker
2d3f0cc33c
nv_tacotron_dataset - Allow training on mozilla cv
2021-08-11 13:34:31 -06:00
James Betker
d0c74278bf
Enable multiple wavfile paths to be specified, fix eps bug in mp3 splitter
2021-08-11 08:46:02 -06:00
James Betker
e19c00398e
More improvements to random_mp3_splitter
2021-08-09 21:31:12 -06:00
James Betker
04d14b3acc
No batch factors for eval
2021-08-09 16:02:01 -06:00
James Betker
82fc69abfa
Add "pure" evaluator
...
Which simply computes the training loss against an eval dataset
2021-08-09 14:58:35 -06:00
James Betker
080bea2f19
No, really
2021-08-09 12:02:31 -06:00
James Betker
e1ce4671e4
Apply dropout to gpt_tts, get rid of min_gpt implementation
2021-08-09 12:01:10 -06:00
James Betker
74342b860b
Revert "Undo forced text padding"
...
This reverts commit 83ab5e6a00.
2021-08-09 11:56:34 -06:00
James Betker
1068f53b78
Add a sampling beam search
2021-08-09 11:56:06 -06:00
James Betker
d4e33bf15f
Fixes to the mp3 splitter
2021-08-09 11:55:46 -06:00
James Betker
4100469902
Add a tool to split mp3 files into arbitrary chunks of wav files
2021-08-08 23:23:13 -06:00
James Betker
01cfae28d8
Beam search implementation in one pass? Dayyyum
2021-08-08 23:22:42 -06:00
James Betker
83ab5e6a00
Undo forced text padding
2021-08-08 11:42:20 -06:00
James Betker
690d7e86d3
Fix nv_tacotron_dataset bug which incorrectly mapped filenames
...
dammit..
2021-08-08 11:38:52 -06:00
James Betker
a2afb25e42
Fix inference, always flow full text tokens through transformer
2021-08-07 20:11:10 -06:00
James Betker
4c678172d6
ugh
2021-08-06 22:10:18 -06:00
James Betker
e723137273
Make gpttts more configurable
2021-08-06 22:08:51 -06:00
James Betker
a7496b661c
combined dvae ftw
2021-08-06 22:01:06 -06:00
James Betker
0237e96b34
Fix dvae bug
2021-08-06 14:17:01 -06:00
James Betker
0799d95af5
Use quantizer from rosinality/vqvae with openai dvae
2021-08-06 14:06:26 -06:00
James Betker
d3ace153af
Add logic for performing inference using gpt_tts with dual-encoder modes
2021-08-06 12:04:12 -06:00
James Betker
b43683b772
Add lucidrains_dvae
2021-08-06 12:03:46 -06:00
James Betker
62c7570512
Constrain wav_aug a bit more
2021-08-06 08:19:38 -06:00
James Betker
f126040da2
Undo noise first
2021-08-05 23:24:38 -06:00
James Betker
908ef5495f
Add noise first to audio_aug
2021-08-05 23:22:44 -06:00
James Betker
d6007c6de1
dataset fixes
2021-08-05 23:12:59 -06:00
James Betker
3ca51e80b2
Only fix weird path bug in windows
2021-08-05 22:21:25 -06:00
James Betker
70dcd1107f
Fix byol_model_wrapper to function with audio inputs
2021-08-05 22:20:22 -06:00
James Betker
f86df53ce0
Export extract_byol_model as a function
2021-08-05 22:15:26 -06:00
James Betker
89d15c9e74
Move gpt-tts back to lucidrains implementation
...
Much better performance.
2021-08-05 22:15:13 -06:00
James Betker
d120e1aa99
Add audio augmentation to wavfile_dataset, utility to test audio similary
2021-08-05 22:14:49 -06:00
James Betker
c0f61a2e15
Rework how DVAE tokens are ordered
...
It might make more sense to have top tokens, then bottom tokens
with top tokens having different discretized values.
2021-08-05 07:07:17 -06:00
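The reordering described above could look roughly like this, with the top codes offset into their own id range; the shapes and the offset scheme are assumptions:

```python
import torch

def order_dvae_tokens(top_codes, bottom_codes, codebook_size):
    """Emit top-level codes first, then bottom-level codes, giving the top
    codes a disjoint set of discretized values via an offset."""
    top = top_codes.flatten(1) + codebook_size
    bottom = bottom_codes.flatten(1)
    return torch.cat([top, bottom], dim=1)
```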
James Betker
4017236ba9
Fix up inference for gpt_tts
2021-08-05 06:46:30 -06:00
James Betker
5037220ac7
Mods to support contrastive learning on audio files
2021-08-05 05:57:04 -06:00
James Betker
341f28dd82
It works!
2021-08-04 20:07:51 -06:00
James Betker
36c7c1fbdb
Fix training flow for NEXT TOKEN prediction instead of same token prediction
...
doh
2021-08-04 10:28:09 -06:00
James Betker
d9936df363
Add gpt_tts dataset and implement inference
...
- Adds a script which preprocesses quantized mels given a DVAE
- Adds a dataset which can consume preprocessed qmels
- Reworks GPT TTS to consume the outputs of that dataset (removes logic to add padding and start/end tokens)
- Adds inference to gpt_tts
2021-08-04 00:44:04 -06:00
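A minimal sketch of a dataset consuming pre-quantized mels saved as .npy files; the file layout and the pairing with text tokens are assumptions:

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset

class PreprocessedQmelDataset(Dataset):
    def __init__(self, qmel_dir, text_tokens):
        self.paths = sorted(os.path.join(qmel_dir, f)
                            for f in os.listdir(qmel_dir) if f.endswith('.npy'))
        self.text_tokens = text_tokens   # assumed: aligned with self.paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        qmel = torch.from_numpy(np.load(self.paths[i])).long()
        return {'text': self.text_tokens[i], 'qmel': qmel}
```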
James Betker
4c98b9703f
Get dalle-style TTS to "work"
2021-08-03 21:08:27 -06:00
James Betker
2814307eee
Alterations to support VQVAE on mel spectrograms
2021-08-01 07:54:21 -06:00
James Betker
965f6e6b52
Fixes to weight_decay in adamw
2021-07-31 15:58:41 -06:00
James Betker
0c9e75bc69
Improvements to GptTts
2021-07-31 15:57:57 -06:00
James Betker
31ee9ae262
Checkin
2021-07-30 23:07:35 -06:00
James Betker
dadc54795c
Add gpt_tts
2021-07-27 20:33:30 -06:00
James Betker
398185e109
More work on wave-diffusion
2021-07-27 05:36:17 -06:00
James Betker
49e3b310ea
Allow audio sample rate interpolation for faster training
2021-07-26 17:44:06 -06:00
James Betker
96e90e7047
Add support for a gaussian-diffusion-based wave tacotron
2021-07-26 16:27:31 -06:00
James Betker
97d7cbbc34
Additional work for audio xformer (which doesn't really do a great job)
2021-07-23 10:58:14 -06:00
James Betker
2325e7a88c
Allow inference for vqvae
2021-07-20 10:40:05 -06:00
James Betker
d81386c1be
Mods to support vqvae in audio mode (1d)
2021-07-20 08:36:46 -06:00
James Betker
5584cfcc7a
tacotron2 work
2021-07-14 21:41:57 -06:00
James Betker
fe0c699ced
Various fixes
2021-07-14 00:08:42 -06:00
James Betker
be2745f42d
Add waveglow & inference capabilities to audio generator
2021-07-08 23:07:36 -06:00
James Betker
1ff434218e
tacotron2, ready for prime time!
2021-07-08 22:13:44 -06:00
James Betker
86fd3ad7fd
Initial checkin of nvidia tacotron model & dataset
...
These two are tested, full support for training to come.
2021-07-06 11:11:35 -06:00
James Betker
3801d5d55e
diffusion surfin'
2021-07-06 09:36:52 -06:00
James Betker
afa41f1804
Allow hq color jittering and corruptions that are not included in the corruption factor
2021-06-30 09:44:46 -06:00
James Betker
6fd16ea9c8
Add meta-anomaly detection, colorjitter augmentation
2021-06-29 13:41:55 -06:00
James Betker
46e9f62be0
Add unet with latent guide
...
This is a diffusion network that uses both an LQ image
and a reference sample HQ image that is compressed into
a latent vector to perform upsampling.
The hope is that we can steer the upsampling network
with sample images.
2021-06-26 11:02:58 -06:00
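Schematically, the reference-image guidance can be a small encoder whose latent is added to the unet's features; the names and shapes here are assumptions, not the actual network:

```python
import torch
import torch.nn as nn

class LatentGuide(nn.Module):
    def __init__(self, channels, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.proj = nn.Linear(latent_dim, channels)

    def forward(self, unet_feat, ref_hq):
        latent = self.encoder(ref_hq)                        # (b, latent_dim)
        return unet_feat + self.proj(latent)[..., None, None]
```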
James Betker
0ded106562
Merge remote-tracking branch 'origin/master'
2021-06-25 13:16:28 -06:00
James Betker
a57ed8e960
Various mods to support better jpeg image filtering
2021-06-25 13:16:15 -06:00
James Betker
61e7ca39cd
Update image_folder_dataset.py
2021-06-25 11:48:31 -06:00
James Betker
a0ef07ddb8
Create unet_latent_guide.py
2021-06-25 11:25:14 -06:00
James Betker
e7890dc0ba
Misc fixes for diffusion nets
2021-06-21 10:38:07 -06:00
James Betker
8e3a33e001
Fix a bug where non-rank-0 is computing FID before all images are saved.
2021-06-16 16:27:09 -06:00
James Betker
68cbbed886
Add some cool diffusion testing scripts
2021-06-16 16:26:36 -06:00
James Betker
ae8de0cb9d
fid saving images across all rank fix
2021-06-15 10:31:07 -06:00
James Betker
6a75bd0777
Another fix
2021-06-14 09:51:44 -06:00
James Betker
54bff35171
Fix issue where eval was not being used by all ddp processes
2021-06-14 09:50:04 -06:00
James Betker
60079a1572
Fix saver in distributed mode
2021-06-14 09:41:06 -06:00
James Betker
545f2db170
Distributed FID dataset across processes
2021-06-14 09:33:44 -06:00
James Betker
6b32c87dcb
Try to make diffusion fid more deterministic
2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f
Add FID evaluator for diffusion models
2021-06-14 09:14:30 -06:00
James Betker
9cfe840872
Attempt to fix syncing multiple times when doing gradient accumulation
2021-06-13 14:30:30 -06:00
James Betker
1cd75dfd33
Fix ddp bug
2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f
Add support for training an EMA network alongside the main networks
2021-06-12 21:01:41 -06:00
James Betker
696f320820
Get rid of feature networks
2021-06-11 20:50:07 -06:00
James Betker
65c474eecf
Various changes to fix testing
2021-06-11 15:31:10 -06:00
James Betker
220f11a5e4
Half channel sizes in cifar_resnet
2021-06-09 17:06:37 -06:00
James Betker
aea12e1b9c
Fix cat eval hack
2021-06-09 17:05:11 -06:00
James Betker
9b5f4abb91
Add fade in for hard switch
2021-06-07 18:15:09 -06:00
James Betker
108c5d829c
Fix dropout norm
2021-06-07 16:13:23 -06:00
James Betker
438217094c
Also debug distribution of switch
2021-06-07 15:36:07 -06:00
James Betker
44b09e5f20
Amplify dropout rate
2021-06-07 15:20:53 -06:00
James Betker
f0d4eb9182
Fixor
2021-06-07 11:58:36 -06:00
James Betker
c456a60466
Another go at fixing nan
2021-06-07 11:51:43 -06:00
James Betker
1c574c5bd1
Attempt to fix nan
2021-06-07 11:43:42 -06:00
James Betker
eda796985b
Try out dropout norm
2021-06-07 11:33:33 -06:00
James Betker
6c6e82406e
Pass a corruption factor through the dataset into the upsampling network
...
The intuition is this will help guide the network to make better informed decisions
about how it performs upsampling based on how it perceives the underlying content.
(I'm giving up on letting networks detect their own quality - I'm not convinced it is
actually feasible)
2021-06-07 09:13:54 -06:00
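One plausible way to wire this up is to embed the scalar corruption factor and add it to intermediate features; a sketch under assumed shapes, not the actual network:

```python
import torch
import torch.nn as nn

class CorruptionConditioning(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(1, channels), nn.SiLU(), nn.Linear(channels, channels))

    def forward(self, feat, corruption_factor):
        # corruption_factor: (batch,) scalar per item, supplied by the dataset
        cond = self.embed(corruption_factor.view(-1, 1))
        return feat + cond.view(feat.shape[0], -1, 1, 1)
```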
James Betker
2ad2b56438
Don't do wandb except on rank 0
2021-06-06 16:52:07 -06:00
James Betker
7c5478bc2c
Formatting issue with gdi
2021-06-06 16:35:37 -06:00
James Betker
061dbcd458
Another fix to anorm
2021-06-06 15:09:49 -06:00
James Betker
9a6991e461
Fix switch norm average
2021-06-06 15:04:28 -06:00
James Betker
57e1a6a0f2
cifar: add hard routing
...
Also mods switched_routing to support non-pixular inputs
2021-06-06 14:53:43 -06:00
James Betker
692e9c417b
Support diffusion unet
2021-06-06 13:57:22 -06:00
James Betker
a0158ebc69
Simplify cifar resnet further for faster training
2021-06-06 10:02:24 -06:00
James Betker
75567a9814
Only head norm removed
2021-06-05 23:29:11 -06:00
James Betker
65d0376b90
Re-add normalization at the tail of the RRDB
2021-06-05 23:04:05 -06:00
James Betker
184e887122
Remove rrdb normalization
2021-06-05 21:39:19 -06:00
James Betker
f5e75602b9
Add regular attention to cifar_resnet
2021-06-05 21:34:07 -06:00
James Betker
16cd92acd5
hack
2021-06-05 14:23:41 -06:00
James Betker
af52751d6b
Fix device error
2021-06-05 14:21:32 -06:00
James Betker
5f0cc65f3b
Register branched resnet properly
2021-06-05 14:19:03 -06:00
James Betker
fb405d9ef1
CIFAR stuff
...
- Extract coarse labels for the CIFAR dataset
- Add simple resnet that branches lower layers based on coarse labels
- Some other cleanup
2021-06-05 14:16:02 -06:00
James Betker
80d4404367
A few fixes:
...
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
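For context, the standard DDPM identity for recovering x_0 from an eps prediction is the following (alpha_bar_t is assumed to be a tensor broadcastable against x_t):

```python
# x_0 = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t)
def predict_xstart_from_eps(x_t, eps, alpha_bar_t):
    return (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
```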
James Betker
fa908a6a15
Fix wandb import issue
2021-06-04 23:27:15 -06:00
James Betker
103a88506e
Log eval to wandb
2021-06-04 23:23:20 -06:00
James Betker
7d45132f60
fdsa
2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5
asdf
2021-06-04 21:24:48 -06:00
James Betker
e6c537824a
Allow validation for ce
2021-06-04 21:21:04 -06:00
James Betker
7c251af7a8
Support cifar100 with resnet
2021-06-04 17:29:07 -06:00
James Betker
bf811f80c1
GD mods & fixes
...
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
James Betker
6084915af8
Support gaussian diffusion models
...
Adds support for GD models, courtesy of some maths from openai.
Also:
- Fixes requirement for eval{} even when it isn't being used
- Adds support for denormalizing an imagenet norm
2021-06-02 21:47:32 -06:00
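Denormalizing an ImageNet-normalized tensor just inverts the usual torchvision transform:

```python
import torch

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def denormalize_imagenet(x):
    # inverts x_norm = (x - mean) / std
    return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)
```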
James Betker
45bc76ba92
Fixes and mods to support training classifiers on imagenet
2021-06-01 17:25:24 -06:00
James Betker
f129eaa39e
Clean up byol a bit
...
- Remove option to aug in dataset (there's really no reason for this now that kornia works on GPU on windows)
- Other stufff
2021-05-24 21:35:46 -06:00
James Betker
6649ef2dae
Add zipfilesdataset
2021-05-24 21:35:00 -06:00
James Betker
1a2b9fa130
Get rid of old byol net wrapping
...
Simplifies and makes this usable with DLAS' multi-gpu trainer
2021-04-27 12:48:34 -06:00
James Betker
119f17c808
Add testing capabilities for segformer & contrastive feature
2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e
Get segformer to a trainable state
2021-04-25 11:45:20 -06:00
James Betker
23e01314d4
Add dataset, ui for labeling and evaluator for pointwise classification
2021-04-23 17:17:13 -06:00
James Betker
fc623d4b5a
Add segformer model. Start work on BYOL adaptation that will support training it.
2021-04-23 17:16:46 -06:00
James Betker
17555e7d07
misc adjustments for stylegan
2021-04-21 18:14:17 -06:00
James Betker
b687ef4cd0
Misc
2021-04-21 18:09:46 -06:00
James Betker
94e069bced
Misc changes
2021-03-13 10:45:26 -07:00
James Betker
9fc3df3f5b
Switched conv: add conversion function with allowlist
2021-03-13 10:44:56 -07:00
James Betker
cf9a6da889
Fix some bugs, checkin work on vqvae3
2021-03-02 20:56:19 -07:00
James Betker
f89ea5f1c6
Mods to support lightweight_gan model
2021-03-02 20:51:48 -07:00
James Betker
543d459b4e
extract_temporal_squares script
...
For extracting related patches across a video
2021-02-08 08:10:24 -07:00
James Betker
39fd755baa
New benchmark numbers
2021-02-08 08:09:41 -07:00
James Betker
784b96c059
Misc options to add support for training stylegan2-rosinality models:
...
- Allow image_folder_dataset to normalize inbound images
- ExtensibleTrainer can denormalize images on the output path
- Support .webp - an output from LSUN
- Support logistic GAN divergence loss
- Support stylegan2 TF weight extraction for discriminator
- New injector that produces latent noise (with separated paths)
- Modify FID evaluator to be operable with rosinality-style GANs
2021-02-08 08:09:21 -07:00
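The logistic GAN divergence mentioned above is the non-saturating logistic loss used by StyleGAN2; in code:

```python
import torch.nn.functional as F

def logistic_d_loss(real_scores, fake_scores):
    return F.softplus(-real_scores).mean() + F.softplus(fake_scores).mean()

def logistic_g_loss(fake_scores):
    return F.softplus(-fake_scores).mean()
```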
James Betker
e7be4bdff3
Revert
2021-02-05 08:43:07 -07:00
James Betker
6dec1f5968
Back to groupnorm
2021-02-05 08:42:11 -07:00
James Betker
336f807c8e
lambda2
2021-02-05 00:00:24 -07:00
James Betker
025a5867c4
Use syncbatchnorm instead
2021-02-04 22:26:36 -07:00
James Betker
bb79fafb89
Fix groupnorm specification
2021-02-04 22:15:38 -07:00
James Betker
43da1f9c4b
Convert lambda coupler to use groupnorm instead of batchnorm
2021-02-04 21:59:44 -07:00
James Betker
7070142805
Make vqvae3_hard more configurable
2021-02-04 09:03:22 -07:00
James Betker
b980028ca8
Add get_debug_values for vqvae_3_hardswitch
2021-02-03 14:12:24 -07:00
James Betker
1405ff06b8
Fix SwitchedConvHardRoutingFunction for current cuda router
2021-02-03 14:11:55 -07:00
James Betker
d7bec392dd
...
2021-02-02 23:50:25 -07:00
James Betker
b0a8fa00bc
Visual dbg in vqvae3hs
2021-02-02 23:50:01 -07:00
James Betker
f5f91850fd
hardswitch variant of vqvae3
2021-02-02 21:00:04 -07:00
James Betker
320edbaa3c
Move switched_conv logic around a bit
2021-02-02 20:41:24 -07:00
James Betker
0dca36946f
Hard Routing mods
...
- Turns out my custom convolution was RIDDLED with backwards bugs, which is
why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
for testing purposes.
2021-02-02 20:35:58 -07:00
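The Switch Transformers half of that logic is top-1 routing: each item picks a single expert, and that expert's output is scaled by the gate probability so the router still receives gradient. A compact sketch, not the repo's switched_conv code:

```python
import torch
import torch.nn.functional as F

def switch_route(x, router):
    """x: (batch, dim); router: nn.Linear(dim, n_experts). Returns the chosen
    expert index and the differentiable gate to scale that expert's output."""
    probs = F.softmax(router(x), dim=-1)
    gate, expert_idx = probs.max(dim=-1)
    return expert_idx, gate   # output = gate.unsqueeze(-1) * experts[expert_idx](x)
```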
James Betker
29c1c3bede
Register vqvae3
2021-01-29 15:26:28 -07:00
James Betker
bc20b4739e
vqvae3
...
Changes VQVAE as so:
- Reverts back to smaller codebook
- Adds an additional conv layer at the highest resolution for both the encoder & decoder
- Uses LeakyReLU on trunk
2021-01-29 15:24:26 -07:00
James Betker
96bc80313c
Add switch norm, up dropout rate, detach selector
2021-01-26 09:31:53 -07:00
James Betker
97d895aebe
Add SrPixLoss, which focuses pixel-based losses on high-frequency regions
...
of the image.
2021-01-25 08:26:14 -07:00
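One way to realize that focus is to weight an L1 loss by how much each pixel of the target deviates from a blurred copy of itself; a sketch of the idea, not the repo's exact weighting:

```python
import torch.nn.functional as F

def high_freq_weighted_l1(pred, target, kernel=5, eps=1e-2):
    pad = kernel // 2
    blurred = F.avg_pool2d(target, kernel, stride=1, padding=pad)
    weight = (target - blurred).abs().mean(dim=1, keepdim=True) + eps
    weight = weight / weight.mean()
    return (weight * (pred - target).abs()).mean()
```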
James Betker
2cdac6bd09
Add PWCNet for human optical flow
2021-01-25 08:25:44 -07:00
James Betker
51b63b2aa6
Add switched_conv with hard routing and make vqvae use it.
2021-01-25 08:25:29 -07:00
James Betker
ae4ff4a1e7
Enable lambda visualization
2021-01-23 15:53:27 -07:00
James Betker
10ec6bda1d
lambda nets in switched_conv and a vqvae to use it
2021-01-23 14:57:57 -07:00
James Betker
b374dcdd46
update vqvae to double codebook size for bottom quantizer
2021-01-23 13:47:07 -07:00
James Betker
dac7d768fa
test uresnet playground mods
2021-01-23 13:46:43 -07:00
James Betker
1b8a26db93
New switched_conv
2021-01-23 13:46:30 -07:00
James Betker
557cdec116
misc
2021-01-23 13:45:17 -07:00
James Betker
d919ae7148
Add VQVAE with no Conv2dTranspose
2021-01-18 08:49:59 -07:00
James Betker
587a4f4050
resnet_unet_3
...
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of a direction I want to go for testing these pixpro networks.
2021-01-15 14:51:03 -07:00
James Betker
038b8654b6
Pixpro: unwrap losses
2021-01-13 11:54:25 -07:00
James Betker
8990801a3f
Fix pixpro stochastic sampling bugs
2021-01-13 11:34:24 -07:00
James Betker
19475a072f
Pixpro: Rather than using a latent square for pixpro, use an entirely stochastic sampling of the pixels
2021-01-13 11:26:51 -07:00
James Betker
d1007ccfe7
Adjustments to pixpro to allow training against networks with arbitrarily large structural latents
...
- The pixpro latent now rescales the latent space instead of using a "coordinate vector", which
**might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
out of the entire latent, allowing further memory/computational discounts. Since the loss
computation does not have a receptive field, this should not alter the loss.
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
2021-01-12 09:17:45 -07:00
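The random patch sampling mentioned above can be as simple as cropping the latent before the pixel loss; a sketch, and since the loss has no receptive field only its variance changes:

```python
import torch

def sample_latent_patch(latent, patch=8):
    b, c, h, w = latent.shape
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    return latent[:, :, y:y + patch, x:x + patch]
```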
James Betker
34f8c8641f
Support training imagenet classifier
2021-01-11 20:09:16 -07:00
James Betker
f3db381fa1
Allow uresnet to use pretrained resnet50
2021-01-10 12:57:31 -07:00
James Betker
4119cd6240
Fix to image_folder_dataset to accommodate images with mismatched dimensions
2021-01-10 12:57:21 -07:00
James Betker
48f0d8964b
Allow dist_backend to be specified in options
2021-01-09 20:54:32 -07:00
James Betker
14a868e8e6
byol playground updates
2021-01-09 20:54:21 -07:00
James Betker
7c6c7a8014
Fix process_video
2021-01-09 20:53:46 -07:00
James Betker
07168ecfb4
Enable vqvae to use a switched_conv variant
2021-01-09 20:53:14 -07:00
James Betker
41b7d50944
Update extract_square_images
2021-01-08 13:16:34 -07:00
James Betker
5a8156026a
Did anyone ask for k-means clustering?
...
This is so cool...
2021-01-07 22:37:41 -07:00
James Betker
acf1535b14
Fix for randomresizedcrop injector
2021-01-07 16:31:43 -07:00
James Betker
659814c20f
BYOL script updates
2021-01-07 16:31:28 -07:00
James Betker
de10c7246a
Add injected noise into bypass maps
2021-01-07 16:31:12 -07:00
James Betker
04961b91cf
Add random-crop injector
2021-01-07 12:14:55 -07:00
James Betker
61a86a3c1e
VQVAE
2021-01-07 10:20:15 -07:00
James Betker
01a589e712
Adjustments to pixpro & resnet-unet
...
I'm not really satisfied with what I got out of these networks on round 1.
Lets try again..
2021-01-06 15:00:46 -07:00
James Betker
9680294430
Move byol scripts around
2021-01-06 14:52:17 -07:00
James Betker
2f2f87bbea
Styled SR fixes
2021-01-05 20:14:39 -07:00
James Betker
9fed90393f
Add lucidrains pixpro trainer
2021-01-05 20:14:22 -07:00
James Betker
39a94c74b5
Allow BYOL resnet playground to produce a latent dict
2021-01-04 20:11:29 -07:00
James Betker
ade2732c82
Transfer learning for styleSR
...
This is a concept from "Lifelong Learning GAN", although I'm skeptical of its novelty -
basically you scale and shift the weights for the generator and discriminator of a pretrained
GAN to "shift" into new modalities, e.g. faces->birds or whatever. There are some interesting
applications of this that I would like to try out.
2021-01-04 20:10:48 -07:00
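The scale-and-shift idea in miniature: freeze a pretrained conv's weight and learn a per-output-channel scale and shift over it. A sketch of the concept, not the styledsr code (and it ignores non-default groups/dilation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleShiftConv(nn.Module):
    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        self.conv = pretrained_conv
        self.conv.weight.requires_grad_(False)
        c_out = self.conv.out_channels
        self.scale = nn.Parameter(torch.ones(c_out, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(c_out, 1, 1, 1))

    def forward(self, x):
        w = self.conv.weight * self.scale + self.shift
        return F.conv2d(x, w, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding)
```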
James Betker
2c65b6b28e
More mods to support styledsr
2021-01-04 11:32:28 -07:00
James Betker
2225fe6ac2
Undo lucidrains changes for new discriminator
...
This "new" code will live in the styledsr directory from now on.
2021-01-04 10:57:09 -07:00
James Betker
40ec71da81
Move styled_sr into its own folder
2021-01-04 10:54:34 -07:00
James Betker
5916f5f7d4
Misc fixes
2021-01-04 10:53:53 -07:00
James Betker
4d8064c32c
Modifications to allow partially trained stylegan discriminators to be used
2021-01-03 16:37:18 -07:00
James Betker
5e7ade0114
ImageFolderDataset - corrupt lq images alongside each other
2021-01-03 16:36:38 -07:00
James Betker
ce6524184c
Do the last commit but in a better way
2021-01-02 22:24:12 -07:00
James Betker
edf9c38198
Make ExtensibleTrainer set the starting step for the LR scheduler
2021-01-02 22:22:34 -07:00
James Betker
bdbab65082
Allow optimizers to train separate param groups, add higher dimensional VGG discriminator
...
Did this to support training 512x512px networks off of a pretrained 256x256 network.
2021-01-02 15:10:06 -07:00
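Separate param groups are plain torch API; for example, a gentler learning rate for the pretrained 256x256 trunk than for the newly added 512x512 layers (the variable names are placeholders):

```python
import torch

optimizer = torch.optim.Adam([
    {'params': new_layers.parameters(), 'lr': 1e-4},
    {'params': pretrained_trunk.parameters(), 'lr': 1e-5},
])
```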
James Betker
193cdc6636
Move discriminators to the create_model paradigm
...
Also cleans up a lot of old discriminator models that I have no intention
of using again.
2021-01-01 15:56:09 -07:00
James Betker
7976a5825d
srfid is incorrectly labeled
2021-01-01 13:00:59 -07:00
James Betker
f39179e85a
styled_sr: fix bug when using initial_stride
2021-01-01 12:13:21 -07:00
James Betker
913fc3b75e
Need init to pick up styled_sr
2021-01-01 12:10:32 -07:00
James Betker
aae65e6ed8
Mods to byol_resnet_playground for large batches
2021-01-01 11:59:54 -07:00
James Betker
e992e18767
Add initial_stride term to style_sr
...
Also fix fid and a networks.py issue.
2021-01-01 11:59:36 -07:00
James Betker
9864fe4c04
Fix for train.py
2021-01-01 11:59:00 -07:00
James Betker
e214e6ce33
Styled SR model
2020-12-31 20:54:18 -07:00
James Betker
0eb1f4dd67
Revert "Get rid of CUDA_VISIBLE_DEVICES"
...
It is actually necessary for training in distributed mode. Only
do it then.
2020-12-31 10:31:40 -07:00
James Betker
8de5a02a48
byol_resnet_playground
...
Similar to the spinenet playground, but tinkers with resnet instead
2020-12-31 10:15:04 -07:00
James Betker
8f18b2709e
Get rid of CUDA_VISIBLE_DEVICES
...
It is not clear to me what the purpose of this is, but it has recently
started causing failures.
2020-12-31 10:13:58 -07:00
James Betker
1de1fa30ac
Disable refs and centers altogether in single_image_dataset
...
I suspect that this might be a cause of failures on parallel datasets.
Plus it is unnecessary computation.
2020-12-31 10:13:24 -07:00
James Betker
8f0984cacf
Add sr_fid evaluator
2020-12-30 20:18:58 -07:00
James Betker
b1fb82476b
Add gp debug (fix)
2020-12-30 15:26:54 -07:00
James Betker
9c53314ea2
Add gradient penalty visual debug
2020-12-30 09:51:59 -07:00
James Betker
63cf3d3126
Injector auto-registration
...
I love it!
2020-12-29 20:58:02 -07:00
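The auto-registration presumably amounts to a decorator-populated registry; here is a generic sketch with hypothetical names, not the actual DL-Art-School injector API:

```python
INJECTOR_REGISTRY = {}

def register_injector(cls):
    """Class decorator: make the injector creatable by name from a config file."""
    INJECTOR_REGISTRY[cls.__name__] = cls
    return cls

@register_injector
class NoiseInjector:
    def __init__(self, opt):
        self.opt = opt

    def forward(self, state):
        return state  # a real injector would add its outputs to the training state

def create_injector(name, opt):
    return INJECTOR_REGISTRY[name](opt)
```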
James Betker
a777c1e4f9
Misc script fixes
2020-12-29 20:25:09 -07:00
James Betker
9dc3c8f0ff
Script updates
2020-12-29 20:24:41 -07:00
James Betker
ba543d1152
Glean mods
...
- Fixes issues with the fixed upscale factor
- Refines a few ops to decrease computation & parameterization
2020-12-27 12:25:06 -07:00
James Betker
5e2e605a50
Merge remote-tracking branch 'origin/gan_lab' into gan_lab
2020-12-26 13:51:19 -07:00
James Betker
f9be049adb
GLEAN mod to support custom initial strides
2020-12-26 13:51:14 -07:00
James Betker
2706a84f15
Merge remote-tracking branch 'origin/gan_lab' into gan_lab
2020-12-26 13:50:34 -07:00
James Betker
90e2362c00
Fix bug with full_image_dataset
2020-12-26 13:50:27 -07:00
James Betker
3fd627fc62
Mods to support image classification & filtering
2020-12-26 13:49:27 -07:00
James Betker
10fdfa1563
Migrate generators to dynamic model registration
2020-12-24 23:02:10 -07:00
James Betker
29db7c7a02
Further mods to BYOL
2020-12-24 09:28:41 -07:00
James Betker
036684893e
Add LARS optimizer & support for BYOL idiosyncrasies
...
- Added LARS and SGD optimizer variants that support turning off certain
features for BN and bias layers
- Added a variant of pytorch's resnet model that supports gradient checkpointing.
- Modify the trainer infrastructure to support above
- Fix bug with BYOL (should have been nonfunctional)
2020-12-23 20:33:43 -07:00
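A sketch of the BN/bias handling mentioned above: BYOL-style recipes usually skip weight decay (and LARS trust-ratio adaptation) for norm and bias parameters. The 'lars_exclude' group flag here is a hypothetical optimizer option, not necessarily the one used in this repo:

```python
def byol_param_groups(model, weight_decay=1.5e-6):
    """Split parameters so 1-D tensors (norm scales/biases and all biases)
    get no weight decay and are excluded from LARS adaptation."""
    decay, no_decay = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if p.ndim == 1 or name.endswith('.bias'):
            no_decay.append(p)
        else:
            decay.append(p)
    return [
        {'params': decay, 'weight_decay': weight_decay, 'lars_exclude': False},
        {'params': no_decay, 'weight_decay': 0.0, 'lars_exclude': True},
    ]
```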
James Betker
1bbcb96ee8
Implement a few changes to support training BYOL networks
2020-12-23 10:50:23 -07:00
James Betker
2437b33e74
Fix srflow_latent_space_playground bug
2020-12-22 15:42:38 -07:00
James Betker
e7aeb17404
ImageFolder dataset: allow intermediary downscale before corrupt
...
For massive upscales (ex: 8x), corruption does almost nothing when applied
at the HQ level. This patch adds support to perform corruption at a specified
intermediary scale. The dataset downscales to this level, performs the corruption,
then downscales the rest of the way to get the LQ image.
2020-12-22 15:42:21 -07:00
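A rough sketch of that pipeline; the corrupt callable stands in for whatever degradation (JPEG, blur, noise) the dataset is configured with, and the names are illustrative:

```python
from PIL import Image

def make_lq(hq: Image.Image, scale=8, corrupt_scale=2, corrupt=lambda im: im):
    """Downscale to an intermediary size, corrupt there, then finish
    downscaling to the final LQ size."""
    w, h = hq.size
    inter = hq.resize((w // corrupt_scale, h // corrupt_scale), Image.BICUBIC)
    inter = corrupt(inter)  # degradation now operates at a scale where it is visible
    return inter.resize((w // scale, h // scale), Image.BICUBIC)
```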
James Betker
7938f9f50b
Fix bug with single_image_dataset which prevented it from working on multiple directories
2020-12-19 15:13:46 -07:00
James Betker
ae666dc520
Fix bugs with srflow after refactor
2020-12-19 10:28:23 -07:00
James Betker
4328c2f713
Change default ReLU slope to .2 BREAKS COMPATIBILITY
...
This conforms my ConvGnLelu implementation to the generally accepted negative_slope=.2. I have no idea where I got .1. This will break backwards compatibility with some older models but will likely improve their performance when freshly trained. I did some auditing to find which models these might be, and I am not actively using any of them, so it's probably OK.
2020-12-19 08:28:03 -07:00
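For illustration, a rough stand-in for a ConvGnLelu-style block (not the exact repo code); the commit above only changes the activation's negative_slope from 0.1 to 0.2:

```python
import torch.nn as nn

def conv_gn_lelu(in_ch, out_ch, groups=8, negative_slope=0.2):
    # Conv -> GroupNorm -> LeakyReLU; 0.2 is the widely used default slope.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.GroupNorm(groups, out_ch),
        nn.LeakyReLU(negative_slope=negative_slope, inplace=True),
    )
```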
James Betker
9377d34ac3
glean mods
2020-12-19 08:26:07 -07:00
James Betker
f35c034fa5
Add trainer readme
2020-12-18 16:52:16 -07:00
James Betker
e82f4552db
Update other docs with dumb config options
2020-12-18 16:21:28 -07:00
James Betker
92f9a129f7
GLEAN!
2020-12-18 16:04:19 -07:00
James Betker
c717765bcb
Notes for lucidrains converter.
2020-12-18 09:55:38 -07:00
James Betker
b4720ea377
Move stylegan to new location
2020-12-18 09:52:36 -07:00
James Betker
1708136b55
Commit my attempt at "conforming" the lucidrains stylegan implementation to the reference spec. Not working; will probably be abandoned.
2020-12-18 09:51:48 -07:00
James Betker
209332292a
Rosinality stylegan fix
2020-12-18 09:50:41 -07:00
James Betker
d875ca8342
More refactor changes
2020-12-18 09:24:31 -07:00
James Betker
5640e4efe4
More refactoring
2020-12-18 09:18:34 -07:00
James Betker
b905b108da
Large cleanup
...
Removed a lot of old code that I won't be touching again. Refactored some
code elements into more logical places.
2020-12-18 09:10:44 -07:00
James Betker
2f0a52b7db
misc changes
2020-12-18 08:53:45 -07:00
James Betker
a8179ff53c
Image label work
2020-12-18 08:53:18 -07:00
James Betker
3074f41877
Get rosinality model converter to work
...
Mostly, I just needed to remove the custom CUDA ops, which don't play well on Windows.
2020-12-17 16:03:39 -07:00
James Betker
e838c6e75b
Rosinality stylegan2 port
2020-12-17 14:18:46 -07:00
James Betker
12cf052889
Add an image patch labeling UI
2020-12-17 10:16:21 -07:00
James Betker
49327b99fe
SRFlow outputs RRDB output
2020-12-16 10:28:02 -07:00
James Betker
c25b49bb12
Clean up of SRFlowNet_arch
2020-12-16 10:27:38 -07:00
James Betker
42ac8e3eeb
Remove unnecessary comment from SRFlowNet
2020-12-16 09:43:07 -07:00
James Betker
fb2cfc795b
Update requirements, add image_patch_classifier tool
2020-12-16 09:42:50 -07:00
James Betker
09de3052ac
Add softmax to spinenet classification head
2020-12-16 09:42:15 -07:00
James Betker
4310e66848
Fix bug in 'corrupt_before_downsize=true'
2020-12-16 09:41:59 -07:00
James Betker
8661207d57
Merge branch 'gan_lab' of https://github.com/neonbjb/DL-Art-School into gan_lab
2020-12-15 17:16:48 -07:00
James Betker
fc376d34b2
Spinenet with logits head
2020-12-15 17:16:19 -07:00
James Betker
8e0e883050
Mods to support labeled datasets & random augs for those datasets
2020-12-15 17:15:56 -07:00
James Betker
e5a3e6b9b5
srflow latent space misc
2020-12-14 23:59:49 -07:00
James Betker
1e14635d88
Add exclusions to extract_subimages_with_ref
2020-12-14 23:59:41 -07:00
James Betker
0a19e53df0
BYOL mods
2020-12-14 23:59:11 -07:00
James Betker
ef7eabf457
Allow RRDB to upscale 8x
2020-12-14 23:58:52 -07:00
James Betker
087e9280ed
Add labeling feature to image_folder_dataset
2020-12-14 23:58:37 -07:00
James Betker
ec0ee25f4b
Structural latents checkpoint
2020-12-11 12:01:09 -07:00
James Betker
26ceca68c0
BYOL with structure!
2020-12-10 15:07:35 -07:00
James Betker
9c5e272a22
Script to extract models from a wrapped BYOL model
2020-12-10 09:57:52 -07:00
James Betker
a5630d282f
Get rid of 2nd trainer
2020-12-10 09:57:38 -07:00
James Betker
8e4b9f42fd
New BYOL dataset which uses a form of RandomCrop that lends itself to
...
providing structural guidance to the latents.
2020-12-10 09:57:18 -07:00
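A sketch of the overlapping-crop idea, assuming the goal is two views whose per-location latents can be matched via a known offset; this is illustrative, not the repo's dataset code:

```python
import random
from PIL import Image

def paired_structural_crops(img: Image.Image, crop=256, max_shift=32):
    """Return two mostly-overlapping crops plus the offset between them,
    so latents from the two views can be aligned spatially."""
    w, h = img.size
    x = random.randint(0, w - crop - max_shift)
    y = random.randint(0, h - crop - max_shift)
    dx, dy = random.randint(0, max_shift), random.randint(0, max_shift)
    view_a = img.crop((x, y, x + crop, y + crop))
    view_b = img.crop((x + dx, y + dy, x + dx + crop, y + dy + crop))
    return view_a, view_b, (dx, dy)
```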
James Betker
c203cee31e
Allow swapping to torch DDP as needed in code
2020-12-09 15:03:59 -07:00
James Betker
66cbae8731
Add random_dataset for testing
2020-12-09 14:55:05 -07:00
James Betker
97ff25a086
BYOL!
...
Man, is there anything ExtensibleTrainer can't train? :)
2020-12-08 13:07:53 -07:00
James Betker
5369cba8ed
Stage
2020-12-08 00:33:07 -07:00
James Betker
bca59ed98a
Merge remote-tracking branch 'origin/gan_lab' into gan_lab
2020-12-07 12:51:04 -07:00
James Betker
ea56eb61f0
Fix DDP errors for discriminator
...
- Don't define training_net in define_optimizers - this drops the shell and leads to problems downstream
- Get rid of support for multiple training nets per opt. This was half-baked and needs a better solution if it's needed downstream.
2020-12-07 12:50:57 -07:00
James Betker
c0aeaabc31
Spinenet playground
2020-12-07 12:49:32 -07:00
James Betker
88fc049c8d
spinenet latent playground!
2020-12-05 20:30:36 -07:00
James Betker
11155aead4
Directly use dataset keys
...
This has been a long time coming. Cleans up the messy "GT" nomenclature and simplifies ExtensibleTrainer.feed_data.
2020-12-04 20:14:53 -07:00
James Betker
8a83b1c716
Go back to apex DDP, fix distributed bugs
2020-12-04 16:39:21 -07:00
James Betker
7a81d4e2f4
Revert gaussian loss changes
2020-12-04 12:49:20 -07:00
James Betker
711780126e
Cleanup
2020-12-03 23:42:51 -07:00
James Betker
ac7256d4a3
Do tqdm reporting when calculating flow_gaussian_nll
2020-12-03 23:42:29 -07:00
James Betker
dc9ff8e05b
Allow the majority of the srflow steps to checkpoint
2020-12-03 23:41:57 -07:00
James Betker
06d1c62c5a
iGPT support!
...
Sweeeeet
2020-12-03 15:32:21 -07:00