vall_e.cpp

This is an implementation that makes use of llama.cpp and encodec.cpp.

At the moment it's very much a work in progress.

Build

Populate ./include/ with the llama.cpp and encodec.cpp headers.

Populate ./libs/ with the compiled libraries of llama.cpp and encodec.cpp.

Run make.
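
For reference, the build boils down to a Makefile along these lines; the compiler flags and library names (-lllama, -lencodec, -lggml) are assumptions about how your llama.cpp and encodec.cpp builds name their artifacts:

```make
# Minimal sketch of the build; flags and library names are assumptions.
CXX      ?= g++
CXXFLAGS += -std=c++17 -I./include
LDFLAGS  += -L./libs -lllama -lencodec -lggml

vall_e: vall_e.cpp
	$(CXX) $(CXXFLAGS) $< -o $@ $(LDFLAGS)
```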

Required Modifications

encodec.cpp requires its GGML copy to be updated to the latest version, which takes a few extra lines to get the CPU backend working (per my fork).

The only modification llama.cpp may need is ensuring that a non-causal attention mask is used; everything else necessary can be hacked together with clever tricks (see the sketch below).
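
Stock llama.cpp already exposes a per-context toggle for this; a minimal sketch follows (whether flipping it alone suffices for this model is an assumption):

```cpp
#include "llama.h"

// Sketch: request a non-causal attention mask on an existing context.
// llama_set_causal_attn is part of the stock llama.h API.
void make_non_causal( struct llama_context * ctx ) {
	llama_set_causal_attn( ctx, false );
}
```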

To-Do

  • converted model to GGUF
    • convert it without modifying any of the existing code, as the tokenizer requires some care
  • basic framework
    • load the quantized model (see the loading sketch after this list)
    • orchestrate the required embeddings
    • juggle the output head / classifier properly
  • phonemize text
    • with the help of espeak-ng (see the espeak-ng sketch after this list)
  • tokenize phonemes
    • the tokenizer is a huge thorn for actual sequences
  • load audio from disk
  • encode audio (see the encodec.cpp sketch after this list)
  • sum embeddings for the prom and prior resps
  • working AR output
    • AR sampling
  • working NAR-len output
    • NAR-len sampling
  • working NAR output
    • NAR sampling
  • decode audio to disk
  • a functional CLI
  • actually make it work
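
For the model-loading step above, a minimal sketch with the stock llama.cpp API (the GGUF filename is a placeholder):

```cpp
#include "llama.h"

int main() {
	llama_backend_init();

	// load the quantized GGUF ("vall_e.gguf" is a placeholder path)
	llama_model_params mparams = llama_model_default_params();
	struct llama_model * model = llama_load_model_from_file( "vall_e.gguf", mparams );
	if ( !model ) return 1;

	llama_context_params cparams = llama_context_default_params();
	struct llama_context * ctx = llama_new_context_with_model( model, cparams );
	if ( !ctx ) return 1;

	// ... orchestrate embeddings, sampling, etc. here ...

	llama_free( ctx );
	llama_free_model( model );
	llama_backend_free();
	return 0;
}
```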
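
For phonemization, a minimal sketch against the espeak-ng C API; the voice name and IPA output mode are assumptions about what the tokenizer expects:

```cpp
#include <espeak-ng/speak_lib.h>
#include <cstdio>

int main() {
	// no audio output is needed just to phonemize
	espeak_Initialize( AUDIO_OUTPUT_SYNCHRONOUS, 0, NULL, 0 );
	espeak_SetVoiceByName( "en" ); // assumed voice

	const char * text = "Hello world.";
	const void * ptr = text;
	// espeakPHONEMES_IPA yields IPA symbols rather than eSpeak's own notation;
	// espeak_TextToPhonemes advances ptr one sentence at a time
	const char * phonemes = espeak_TextToPhonemes( &ptr, espeakCHARS_UTF8, espeakPHONEMES_IPA );
	printf( "%s\n", phonemes );

	espeak_Terminate();
	return 0;
}
```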
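
And for audio encoding, a sketch against encodec.cpp; the function names follow its public header as I understand it, but treat the exact signatures (and the model path) as assumptions against your checkout:

```cpp
#include "encodec.h"

#include <cstdint>
#include <vector>

// Encode 24kHz mono float PCM into discrete EnCodec codebook tokens.
std::vector<int32_t> encode_audio( const std::vector<float> & pcm ) {
	// "encodec.bin" is a placeholder path; 0 offset, 0 GPU layers (CPU backend)
	struct encodec_context * ectx = encodec_load_model( "encodec.bin", 0, 0 );

	encodec_set_target_bandwidth( ectx, 6 );
	encodec_compress_audio( ectx, pcm.data(), pcm.size(), 4 );

	const int32_t * codes = encodec_get_codes( ectx );
	std::vector<int32_t> out( codes, codes + encodec_get_codes_size( ectx ) );

	encodec_free( ectx );
	return out;
}
```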