I'd like to try some different (newer) transformer variants. The way to get there is to softly decouple the transformer portion of this architecture from GPT. This should actually be fairly easy.
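A minimal sketch of what that decoupling could look like (the class and parameter names here are hypothetical, not the ones used in this repo): the GPT wrapper depends only on a small backbone interface, so a newer transformer variant can be dropped in without touching the embedding, head, or training code.

```python
# Sketch only: hypothetical names, assuming a standard PyTorch setup.
from abc import ABC, abstractmethod
import torch
import torch.nn as nn


class TransformerBackbone(nn.Module, ABC):
    """Interface any transformer variant must satisfy."""

    @abstractmethod
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Map a (batch, seq_len, d_model) tensor to the same shape."""


class VanillaBackbone(TransformerBackbone):
    """Default backbone built from stock PyTorch encoder layers."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)


class GPTStyleModel(nn.Module):
    """GPT-style wrapper that only sees the backbone interface."""

    def __init__(self, vocab_size: int, backbone: TransformerBackbone, d_model: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.backbone = backbone  # any TransformerBackbone can be swapped in here
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(self.embed(tokens)))


# Swapping in a different transformer variant is then a one-line change:
model = GPTStyleModel(vocab_size=50257, backbone=VanillaBackbone())
```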
Repository layout:

- `.idea/`
- `data/`
- `models/`
- `scripts/`
- `trainer/`
- `utils/`
- `multi_modal_train.py`
- `process_video.py`
- `requirements.txt`
- `test.py`
- `train.py`
- `use_discriminator_as_filter.py`