forked from mrq/tortoise-tts
Commit Graph

14 Commits

Author SHA1 Message Date
James Betker  0570034eda  Automatically pick batch size based on available GPU memory  2022-05-13 10:30:02 -06:00
James Betker  f5ebd14d09  Add error message  2022-05-12 20:15:40 -06:00
James Betker  e4e9523900  re-enable redaction  2022-05-06 09:36:42 -06:00
James Betker  e18428166d  v2.2  2022-05-06 00:11:10 -06:00
James Betker  b11f6ddd60  Enable redaction by default  2022-05-03 21:21:52 -06:00
James Betker  e23e6f6696  Use librosa for loading mp3s  2022-05-03 20:44:31 -06:00
James Betker  00e84bbd86  fix paths  2022-05-02 20:56:28 -06:00
James Betker  5663e98904  misc fixes  2022-05-02 18:00:57 -06:00
James Betker  ccf16f978e  more fixes  2022-05-02 16:44:47 -06:00
James Betker  4836e1f792  fix warning  2022-05-02 16:36:02 -06:00
James Betker  ee24d3ee4b  Support totally random voices (and make fixes to previous changes)  2022-05-02 15:40:03 -06:00
James Betker  f631123264  Add redaction support  2022-05-02 14:57:29 -06:00
James Betker  01b783fc02  Add support for extracting and feeding conditioning latents directly into the model  2022-05-01 17:25:18 -06:00
    - Adds a new script and API endpoints for doing this
    - Reworks autoregressive and diffusion models so that the conditioning is computed separately (which will actually provide a mild performance boost)
    - Updates README

    This is untested. Need to do the following manual tests (and someday write unit tests for this behemoth before it becomes a problem..)
    1) Does get_conditioning_latents.py work?
    2) Can I feed those latents back into the model by creating a new voice?
    3) Can I still mix and match voices (both with conditioning latents and normal voices) with read.py?
James Betker  23a3d5d00b  Move everything into the tortoise/ subdirectory  2022-05-01 16:24:24 -06:00
    For eventual packaging.
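
Commit 01b783fc02 above describes the conditioning-latent workflow this fork inherits: latents are computed separately from generation and can stand in for reference audio. The snippet below is a minimal sketch of that round trip, assuming the tortoise.api.TextToSpeech interface from this repository; the voice paths are placeholders, and method names or defaults may differ slightly between versions.

```python
# Sketch of the conditioning-latent round trip described in commit 01b783fc02.
# Assumes the package layout from this repo (tortoise/api.py, tortoise/utils/audio.py);
# all file paths below are hypothetical examples.
import torch
import torchaudio

from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# 1) Extract conditioning latents from a few reference clips (loaded as 22.05 kHz mono).
clips = [load_audio(p, 22050) for p in ["voices/myvoice/1.wav", "voices/myvoice/2.wav"]]
conditioning_latents = tts.get_conditioning_latents(clips)

# 2) "Create a new voice" from the latents alone by saving them (assumed convention:
#    a .pth of the latent tuple inside a voice folder, no original audio required).
torch.save(conditioning_latents, "voices/myvoice-latents/myvoice.pth")

# 3) Feed the latents straight back into generation instead of raw voice samples.
gen = tts.tts_with_preset(
    "Conditioning latents were computed separately from this text.",
    voice_samples=None,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
torchaudio.save("generated.wav", gen.squeeze(0).cpu(), 24000)
```

Saving the latent tuple as a .pth inside a voice directory appears to be what tortoise/get_conditioning_latents.py automates, and read.py can then treat such a directory like any other voice, which covers the manual tests listed in the commit message.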