
Generative Agents

This serves as yet another cobbled-together application of generative agents, utilizing LangChain as the core dependency and substituting a "proxy" for GPT-4.

In short, by utilizing a language model to summarize, rank, and query against stored information with natural-language queries/instructions, fairly immersive agents can be attained.
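The retrieval side of this loop can be sketched roughly as follows. The `Memory` structure, weights, and decay factor here are illustrative assumptions, not the exact scheme this repo or LangChain uses:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float   # 0..1, as ranked by the LLM
    relevance: float    # 0..1, embedding similarity to the current query
    age_hours: float    # hours since the memory was last accessed

def retrieval_score(m: Memory, decay: float = 0.99) -> float:
    # Recency decays exponentially with time since last access.
    recency = decay ** m.age_hours
    # Equal-weight sum of the three signals (an illustrative choice).
    return recency + m.importance + m.relevance

memories = [
    Memory("ate breakfast", importance=0.1, relevance=0.2, age_hours=1.0),
    Memory("met the mayor", importance=0.9, relevance=0.8, age_hours=24.0),
]
# Pick the memories most worth feeding back into the agent's prompt.
top = sorted(memories, key=retrieval_score, reverse=True)
```

The important/relevant memory wins here despite being older, which is the point of mixing recency with the other two signals.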

Features

  • gradio web UI
  • saving and loading of agents
  • works with non-OpenAI LLMs and embeddings (tested with llamacpp)
  • modified prompts for use with vicuna

Installation

pip install -r requirements.txt

Usage

Set your environment variables accordingly:

  • LLM_TYPE: (oai, llamacpp): the LLM backend to use in LangChain. OpenAI requires some additional environment variables:
    • OPENAI_API_BASE: URL of your target OpenAI-compatible endpoint
    • OPENAI_API_KEY: authentication key for the OpenAI API
    • OPENAI_API_MODEL: name of the target model
  • LLM_MODEL: (./path/to/your/llama/model.bin): path to your GGML-formatted LLaMA model, if using llamacpp as the LLM backend
  • LLM_EMBEDDING_TYPE: (oai, llamacpp, hf): the embedding backend to use for computing similarity
  • LLM_PROMPT_TUNE: (oai, vicuna): prompt formatting to use, for model variants finetuned on specific instruction formats
  • LLM_CONTEXT: the maximum context size, in tokens
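For example, a llamacpp setup might look like the following (the model path and values are placeholders, not files shipped with this repo):

```shell
# Use llamacpp for both generation and embeddings (example values).
export LLM_TYPE=llamacpp
export LLM_MODEL=./models/ggml-vicuna-13b.bin
export LLM_EMBEDDING_TYPE=llamacpp
export LLM_PROMPT_TUNE=vicuna
export LLM_CONTEXT=2048
```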

To run:

python .\src\main.py

Plans

I do not plan on making this uber-user-friendly like mrq/ai-voice-cloning, as this is just a stepping stone for a bigger project integrating generative agents.

Caveats

A local LM is quite slow. Even one that is more instruction-tuned, like Vicuna (with its SYSTEM:\nUSER:\nASSISTANT: prompt structure), is still inconsistent.
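That prompt structure amounts to something like the following. `build_vicuna_prompt` is a hypothetical helper for illustration; the actual templates live in this repo's prompt-tuning code and may differ:

```python
def build_vicuna_prompt(system: str, user: str) -> str:
    # Format a single-turn Vicuna-style prompt:
    # system context, user message, then an open ASSISTANT: slot
    # for the model to complete.
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

prompt = build_vicuna_prompt(
    "You are an agent named Klaus.",
    "What did you do this morning?",
)
```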

GPT-4 seems to Just Work, unfortunately.