Commit Graph

14 Commits

Mark Baushenko
cbccc5e953 Optimize graphics card memory
During inference, gradients are not stored; they take up most of the video memory
2022-05-11 16:35:11 +03:00
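The memory optimization above amounts to running the forward pass without autograd bookkeeping. A minimal sketch of the idea, assuming the usual PyTorch inference path (the `synthesize`, `model`, and `inputs` names here are placeholders, not identifiers from the repository):

```python
import torch

def synthesize(model, inputs):
    model.eval()           # disable dropout / batch-norm updates for inference
    with torch.no_grad():  # do not record operations, so no gradient buffers
                           # are kept on the GPU and memory usage drops
        return model(inputs)
```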
James Betker
317d55c252 re-enable redaction 2022-05-06 09:36:42 -06:00
James Betker
8672075914 temporarily disable redaction 2022-05-06 09:06:20 -06:00
James Betker
ddb19f6b0f Enable redaction by default 2022-05-03 21:21:52 -06:00
James Betker
c1d004aeb0 change quality presets 2022-05-03 21:01:26 -06:00
James Betker
a4cda68ddf getting ready for 2.1 release 2022-05-02 20:20:50 -06:00
James Betker
f499d66493 misc fixes 2022-05-02 18:00:57 -06:00
James Betker
2888ae0337 Fix bug with k>1 2022-05-02 18:00:22 -06:00
James Betker
cdf44d7506 more fixes 2022-05-02 16:44:47 -06:00
James Betker
39ec1b0db5 Support totally random voices (and make fixes to previous changes) 2022-05-02 15:40:03 -06:00
James Betker
9007955d88 Add redaction support 2022-05-02 14:57:29 -06:00
James Betker
cd2d4229bf Better error messages when inputs are out of bounds. 2022-05-01 17:39:36 -06:00
James Betker
0ffc191408 Add support for extracting and feeding conditioning latents directly into the model
- Adds a new script and API endpoints for doing this
- Reworks autoregressive and diffusion models so that the conditioning is computed separately (which will actually provide a mild performance boost)
- Updates README

This is untested. Need to do the following manual tests (and someday write unit tests for this behemoth before
it becomes a problem...):
1) Does get_conditioning_latents.py work?
2) Can I feed those latents back into the model by creating a new voice?
3) Can I still mix and match voices (both with conditioning latents and normal voices) with read.py?
2022-05-01 17:25:18 -06:00
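A rough sketch of the workflow this commit describes: extract conditioning latents from a few reference clips, save them, and later feed them back in place of raw voice samples. The method and argument names below (`get_conditioning_latents`, `tts_with_preset`, `conditioning_latents=`) and the file paths are assumptions based on the commit description and the public tortoise API; treat this as an illustration, not the exact script added here.

```python
import torch
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# 1) Extract conditioning latents from a few reference clips
#    (paths are placeholders).
clips = [load_audio(p, 22050) for p in ["ref1.wav", "ref2.wav"]]
conditioning_latents = tts.get_conditioning_latents(clips)

# Persist the latents so they can act as a "voice" without raw audio.
torch.save(conditioning_latents, "myvoice.pth")

# 2) Feed the saved latents back into the model instead of voice samples.
latents = torch.load("myvoice.pth")
gen = tts.tts_with_preset("Hello world.",
                          conditioning_latents=latents,
                          preset="fast")
```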
James Betker
f7c8decfdb Move everything into the tortoise/ subdirectory
For eventual packaging.
2022-05-01 16:24:24 -06:00