0093a70c51 | slight rewrite, mostly as functional as before | 2023-05-22 15:40:11 +00:00
6fa2c18fb1 | reworked saving/loading agents: save the memory documents themselves and re-add them on load, rather than serializing the entire memory object (which broke between systems) / wizard prompt tune / more tunings | 2023-05-15 07:28:04 +00:00
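The save/load rework above can be sketched as follows. This is a minimal illustration of the idea, not the repo's actual API: the `Memory`, `save_agent`, and `load_agent` names are hypothetical stand-ins, assuming memories are plain text documents with metadata.

```python
import json

class Memory:
    """Hypothetical stand-in for the agent's memory object."""

    def __init__(self):
        # Each document is plain data: {"content": ..., "metadata": ...}
        self.documents = []

    def add_document(self, content, metadata=None):
        self.documents.append({"content": content, "metadata": metadata or {}})

def save_agent(memory, path):
    # Persist only the raw documents, not the whole object graph,
    # so the file stays portable across systems and library versions.
    with open(path, "w") as f:
        json.dump(memory.documents, f)

def load_agent(path):
    # Construct a fresh memory object on this system, then re-add
    # each saved document instead of unpickling a serialized object.
    memory = Memory()
    with open(path) as f:
        for doc in json.load(f):
            memory.add_document(doc["content"], doc["metadata"])
    return memory
```

The design choice mirrors the commit message: serializing the full memory object ties the save file to one machine's class layout, while re-adding plain documents rebuilds any derived state (embeddings, indices) from scratch on load.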
287406e7ba | tunings | 2023-05-09 23:57:54 +00:00
f13d05dbb2 | more tuning | 2023-05-03 01:46:55 +00:00
8eaecaf643 | updating for new langchain, more tunes | 2023-05-03 01:01:58 +00:00
e152cd98a4 | updated requirements because I had installed this in WSL2 | 2023-05-03 00:26:37 +00:00
41e48497cd | more tunes | 2023-05-01 05:47:35 +00:00
f10ea1ec2a | added prompt tuning for SuperCOT (the 33B variant of which seems to be the best option for a local LLM) | 2023-04-30 22:56:02 +00:00
089b7043b9 | more tuning | 2023-04-30 20:00:46 +00:00
e9abd9e73f | swapped to a much simpler way of formatting prompts for a given finetune: record prompts as system/user/assistant dicts, then combine them according to the provided finetune | 2023-04-30 17:57:53 +00:00
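The role-dict approach above can be sketched like this. The template strings (Vicuna/Alpaca-style) are illustrative guesses at what a finetune format table might contain, not the repo's actual definitions:

```python
# Each finetune maps roles to a per-message template; swapping models
# means swapping a table entry instead of rewriting every prompt.
FINETUNE_FORMATS = {
    "vicuna": {
        "system": "{content}\n",
        "user": "USER: {content}\n",
        "assistant": "ASSISTANT: {content}",
    },
    "alpaca": {
        "system": "{content}\n\n",
        "user": "### Instruction:\n{content}\n\n",
        "assistant": "### Response:\n{content}",
    },
}

def format_prompt(messages, finetune="vicuna"):
    # Combine role-tagged messages according to the chosen finetune's format.
    fmt = FINETUNE_FORMATS[finetune]
    return "".join(fmt[m["role"]].format(content=m["content"]) for m in messages)

# Prompts are recorded once as role dicts, independent of any model:
prompt = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "Summarize your day."},
    {"role": "assistant", "content": ""},  # left empty for the model to complete
]
```

Calling `format_prompt(prompt, "alpaca")` renders the same recorded prompt in Alpaca style, which is the simplification the commit describes.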
0964f48fc0 | ui fixes | 2023-04-30 03:30:46 +00:00
f9cfd1667f | I think I tuned the prompts better for Vicuna, but I forget (also added the licenses for the LangChain code, since I can't be bothered to inject a bunch of functions now) | 2023-04-30 01:27:34 +00:00
a1cb43da5e | speedups: terminate generation early so short observations don't take forever because of rambling output | 2023-04-29 22:04:18 +00:00
9e0fd8d79c | more changes to make LLaMAs cooperate better with better-tuned prompts, just in time for todd to be put down | 2023-04-29 18:48:33 +00:00
b553ffbc5f | added rudimentary web UI | 2023-04-29 05:54:55 +00:00
b35f94d319 | an amazing commit | 2023-04-29 04:14:56 +00:00
10cf7746d8 | Initial commit | 2023-04-29 03:37:11 +00:00 | mrq