# Generative Agents
This serves as yet another cobbled-together application of generative agents, using LangChain as the core dependency and a "proxy" standing in for GPT-4.

In short, by using a language model to summarize, rank, and query against information, immersive agents can be attained.
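The memory side of this pattern can be sketched with LangChain's time-weighted retriever, which scores stored observations by recency, relevance, and extra keys such as importance. This is only an illustration, assuming a FAISS index and HuggingFace embeddings; it is not necessarily what `src/main.py` does:

```python
# Illustrative sketch of time-weighted memory retrieval (not this repo's code).
# Assumes faiss-cpu, sentence-transformers, and langchain are installed.
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings()   # default sentence-transformers model
index = faiss.IndexFlatL2(768)         # 768 = embedding size of that default model
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

# Scores each memory by recency and relevance, plus any extra metadata keys listed.
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,
    other_score_keys=["importance"],
    k=5,
)

retriever.add_documents([
    Document(page_content="Alice greeted Bob in the garden.",
             metadata={"importance": 0.5}),
])
memories = retriever.get_relevant_documents("What did Alice do today?")
```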
## Installation

```sh
pip install -r requirements.txt
```
## Usage

Set your environment variables accordingly (a sketch of how they might be consumed follows the list):

* `LLM_TYPE`: (`oai`, `llamacpp`): the LLM backend to use in LangChain. OpenAI requires some additional environment variables:
  * `OPENAI_API_BASE`: URL for your target OpenAI
  * `OPENAI_API_KEY`: authentication key for OpenAI
  * `OPENAI_API_MODEL`: target model
* `LLM_MODEL`: (`./path/to/your/llama/model.bin`): path to your GGML-formatted LLaMA model, if using `llamacpp` as the LLM backend
* `LLM_EMBEDDING_TYPE`: (`oai`, `llamacpp`, `hf`): the embedding model to use for similarity computing
To run:

```sh
python .\src\main.py
```
## Plans

I do not plan on making this uber user-friendly like mrq/ai-voice-cloning, as this is just a stepping stone toward a bigger project integrating generative agents.
I do, however, plan on adding a simple Gradio web UI to interface with this better.