Commit Graph

31 Commits

Author SHA1 Message Date
mrq
fb467b19ba exposed rolling resp context to the web UI, added passing in a language to the inferencing command line 2023-10-12 23:21:01 -05:00
mrq
65f500083d tweaks to try and get deepspeed quantized inferencing, validating bitsandbytes and deepspeed quantization; nothing seems to work 2023-10-12 22:21:43 -05:00
mrq
8740cdefc6 added initial support for languages (still testing, marked as model version 3), added experimental 'context extend by limiting the resp context' (untested) 2023-10-11 20:38:40 -05:00
mrq
100dd164e6 apply phoneme cleanup in inferencing as well 2023-10-10 19:21:19 -05:00
mrq
e727b6e5c1 changed the dynamic temperature trigger to be a min-(n)ar-temp value between [0,(n)ar-temp); added flags to set the min temp and a checkbox in the web UI to request it 2023-10-10 17:02:33 -05:00
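For context, an entropy-scaled ("dynamic") temperature with a minimum-temperature floor usually looks roughly like the sketch below. The function and parameter names (min_temp, max_temp, exponent) are placeholders for illustration, not this repo's actual flags:

```python
import torch

def dynamic_temperature(logits: torch.Tensor, min_temp: float = 0.0,
                        max_temp: float = 1.2, exponent: float = 1.0) -> torch.Tensor:
    """Scale the sampling temperature between min_temp and max_temp according to
    the entropy of the next-token distribution: confident (low-entropy) steps get
    a cooler temperature, uncertain steps a hotter one."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    max_entropy = torch.log(torch.tensor(float(logits.numel())))
    temp = min_temp + (max_temp - min_temp) * (entropy / max_entropy) ** exponent
    return logits / temp.clamp_min(1e-5)
```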
mrq
893a610fad cleanup, use deepspeed inferencing pathway if requested 2023-10-09 15:24:04 -05:00
mrq
26fbb92ec6 reduced the dynamic temperature threshold to > 1.0, as it doesn't seem all that useful for audio LMs; sped up any sampling that touches logits by copying them to the CPU first, as accessing tensors on the GPU is slow as balls 2023-10-09 14:46:17 -05:00
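The speed-up described here is the usual pattern of moving logits off the accelerator before running Python-level sampling transforms; a minimal sketch, where sample_fn stands in for whatever sampler is applied next:

```python
import torch

def sample_on_cpu(logits: torch.Tensor, sample_fn):
    """Sampling transforms that index tokens one at a time in Python (repetition
    penalties, filtering loops) force a device sync per access on a CUDA tensor,
    so it is typically much faster to copy the logits to host memory once."""
    return sample_fn(logits.detach().cpu())
```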
mrq
c0b25541e3 restructured some things with the model to remove dead weights 2023-09-20 19:10:59 -05:00
mrq
a6bfe43590 added mirostat sampling (given a partially trained model, it got far more decent output than I expected; need to test on a better-trained model) 2023-09-18 18:55:41 -05:00
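In broad strokes, Mirostat-style sampling (v2 flavor) targets a fixed "surprise" tau by truncating candidates above a running threshold mu and nudging mu after every pick. A compact sketch, with parameter names taken from the paper rather than from this repo's implementation:

```python
import torch

def mirostat_step(logits: torch.Tensor, state: dict, tau: float = 3.0, eta: float = 0.1) -> int:
    """One sampling step: drop tokens whose surprise (-log2 p) exceeds the running
    threshold mu, sample from what is left, then move mu toward the target tau."""
    mu = state.setdefault("mu", 2.0 * tau)
    sorted_probs, sorted_idx = torch.softmax(logits, dim=-1).sort(descending=True)
    surprise = -sorted_probs.log2()
    keep = surprise < mu
    keep[0] = True                                   # always keep the most likely token
    candidates = sorted_probs * keep
    pick = torch.multinomial(candidates / candidates.sum(), 1)
    state["mu"] = mu - eta * (surprise[pick].item() - tau)
    return sorted_idx[pick].item()
```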
mrq
23a5fdd645 implemented a naive beam search (I really should be taking a break) 2023-09-12 21:28:07 -05:00
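A "naive" beam search of the kind referenced here amounts to the following; step, bos, and eos are placeholder names for a next-token-logits callable and the start/stop token ids, not identifiers from this codebase:

```python
import torch

def naive_beam_search(step, bos: int, eos: int, beam_width: int = 4, max_len: int = 64):
    """Keep the beam_width best partial sequences, scored by summed log-probability,
    expanding each by its top-k next tokens every iteration."""
    beams = [([bos], 0.0)]
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == eos:
                candidates.append((tokens, score))
                continue
            log_probs = torch.log_softmax(step(tokens), dim=-1)
            top = torch.topk(log_probs, beam_width)
            for lp, tok in zip(top.values.tolist(), top.indices.tolist()):
                candidates.append((tokens + [tok], score + lp))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
        if all(tokens[-1] == eos for tokens, _ in beams):
            break
    return beams[0][0]
```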
mrq
ba71020318 added option to limit (or exceed) inferenced RVQ-bin levels through the NAR 2023-09-10 13:50:13 -05:00
mrq
4f61f5c889 added option to set the trim length for an input prompt 2023-09-09 18:04:44 -05:00
mrq
5ac119a6e7 added light web UI (need to port the telemetry disabling bandaids from aivc) 2023-09-09 16:17:20 -05:00
mrq
10c34c5b98 added a length-based decay factor for repetition penalty 2023-09-08 21:02:00 -05:00
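A hedged sketch of a repetition penalty whose strength decays with how far back the repeated token occurred; the penalty/decay names and the exact decay curve here are assumptions for illustration, not the repo's implementation:

```python
import torch

def decayed_repetition_penalty(logits: torch.Tensor, prev_tokens: list,
                               penalty: float = 1.25, decay: float = 0.95) -> torch.Tensor:
    """Penalize previously emitted tokens, weighting recent occurrences more
    heavily: the effective factor shrinks toward 1.0 as the occurrence ages."""
    logits = logits.clone()
    for age, token in enumerate(reversed(prev_tokens)):
        factor = 1.0 + (penalty - 1.0) * (decay ** age)
        # Divide positive logits / multiply negative ones, as in the usual CTRL-style penalty.
        logits[token] = logits[token] / factor if logits[token] > 0 else logits[token] * factor
    return logits
```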
mrq
14c78bae39 added lots of sampling options (top-k/top-p, repetition penalty, length penalty) 2023-09-08 20:30:54 -05:00
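The top-k/top-p part of that commit is the standard filtering trick; a sketch in the usual style, masking removed tokens with -inf:

```python
import torch

def top_k_top_p_filter(logits: torch.Tensor, top_k: int = 0, top_p: float = 1.0) -> torch.Tensor:
    """Mask everything outside the top_k highest logits and outside the smallest
    set of tokens whose cumulative probability exceeds top_p."""
    if top_k > 0:
        kth = torch.topk(logits, min(top_k, logits.numel())).values[-1]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    if top_p < 1.0:
        sorted_logits, sorted_idx = logits.sort(descending=True)
        cumulative = sorted_logits.softmax(dim=-1).cumsum(dim=-1)
        remove = cumulative > top_p
        remove[1:] = remove[:-1].clone()   # shift right so the first token past the threshold survives
        remove[0] = False
        logits = logits.masked_fill(remove.scatter(0, sorted_idx, remove), float("-inf"))
    return logits
```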
mrq
b2907ae7e0 seems that my PromEmbedding/RespEmbedding doesn't actually work all that well; naively using dedicated MultiEmbeddings for AR/NAR in the monolithic model is the best way to go 2023-09-08 01:03:24 -05:00
mrq
ab5134f385 tweaks and fixes 2023-09-07 17:08:38 -05:00
mrq
e40c0d34a0 somewhat got recurrent forward working (it's as accurate as chunkwise forward: it's not accurate at all); added option to use AMP instead of blanket-setting the weights' dtype 2023-09-01 20:58:29 -05:00
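Using AMP instead of casting the weights themselves usually just means wrapping the forward pass in autocast, so the parameters stay in full precision and only eligible ops run in half precision; a minimal sketch under that assumption:

```python
import torch

def forward_with_amp(model: torch.nn.Module, *inputs, dtype=torch.float16):
    """Run the forward pass under autocast rather than blanket-casting the
    model's weights to a lower-precision dtype."""
    with torch.autocast(device_type="cuda", dtype=dtype):
        return model(*inputs)
```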
mrq
6455a2f9d7 I think I fixed a bug? 2023-08-24 23:33:36 -05:00
mrq
4585824cd3 tweaks, including exporting on save/quit 2023-08-23 16:43:03 -05:00
mrq
524d289c9c Forgot to re-add setting the weights' dtype on model load 2023-08-22 22:57:23 -05:00
mrq
7b1b82e0e5 inferencing cleanup 2023-08-20 21:36:02 -05:00
mrq
a47029065b I don't know if the lack of start/stop tokens being added was causing my inference tests to fail, but it seems better now 2023-08-20 19:21:54 -05:00
mrq
bbb0563b3d pseudocode/polyfill/stub, some other flavor of working on adding the tasks 2023-08-18 22:22:13 -05:00
mrq
ced31fd9b7 removed the sampler as it's very misleading 2023-08-18 14:47:48 -05:00
mrq
1e3e1d9315 tweaks 2023-08-15 21:58:16 -05:00
mrq
13571380be made exporter make more sense 2023-08-13 22:56:28 -05:00
mrq
d7deaf6def distributed training works now (hopefully) 2023-08-13 22:07:45 -05:00
mrq
d1b9770d41 set model to eval when inferencing (very important) 2023-08-05 04:29:05 +00:00
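This fix matters because layers like dropout stay active unless the module is switched to eval mode, which silently degrades generation quality; a generic sketch of an inference call:

```python
import torch

def infer(model: torch.nn.Module, *inputs):
    """eval() disables dropout and freezes norm-layer statistics; inference_mode()
    skips autograd bookkeeping, which is both faster and lighter on memory."""
    model.eval()
    with torch.inference_mode():
        return model(*inputs)
```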
mrq
c85101403f big cleanup 2023-08-03 20:26:36 -05:00
mrq
bf8cedc9dd Rewrite init 2023-08-02 21:53:35 +00:00