forked from ecker/DL-Art-School
Found out that batch norm is causing the switches to initialize really poorly: only a small number of transforms end up being used. This might be a good time to reconsider using the attention norm, but for now just re-enable it.
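The utilization problem described above (switches not using a significant number of transforms) can be checked with a small diagnostic. This is a minimal sketch, not code from this repo: it assumes the switches pick among their transforms via a softmax over selector logits, and the `transform_utilization` helper and its 0.5x-uniform threshold are hypothetical choices for illustration.

```python
import numpy as np

def transform_utilization(selector_logits):
    """Hypothetical diagnostic: given selector logits of shape
    (batch, num_transforms), return the fraction of transforms that
    receive a meaningful share of the softmax mass."""
    logits = np.asarray(selector_logits, dtype=np.float64)
    # Softmax over the transform dimension (numerically stabilized).
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    # Average selection probability per transform across the batch.
    usage = probs.mean(axis=0)
    # Count a transform as "used" if its average share exceeds half
    # of the uniform share (the threshold is an arbitrary choice).
    num_transforms = logits.shape[-1]
    return float((usage > 0.5 / num_transforms).mean())

# Uniform logits: every transform gets an equal share.
print(transform_utilization(np.zeros((8, 4))))              # → 1.0
# One dominant transform, as in the poor-init case: 1 of 4 used.
print(transform_utilization(np.array([[10.0, 0, 0, 0]] * 8)))  # → 0.25
```

A value well below 1.0 after initialization would indicate the collapse the commit message describes.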
| Name |
|---|
| .idea |
| data |
| data_scripts |
| metrics |
| models |
| options |
| scripts |
| temp |
| utils |
| distill_torchscript.py |
| onnx_inference.py |
| process_video.py |
| requirements.txt |
| run_scripts.sh |
| test.py |
| train.py |