Training multiple finetunes on a single dataset in GUI? #433

Open
opened 2023-10-30 07:31:14 +00:00 by scghtnz · 1 comment

Right now, we can finetune with a dataset just fine, but if I want to use different settings but with the same dataset, I'd need to duplicate the dataset files and rename the folder.

Is there a supported method in the GUI for testing out different training settings and switching between the final finetune?

Also, another question about https://git.ecker.tech/mrq/ai-voice-cloning/wiki/Training#changing-base-model: where exactly can we find another base model? Or is this like finetuning a finetuned model?


> Is there a supported method in the GUI for testing out different training settings and switching between the final finetune?

No, not really. It would be nice to have a learning-rate finder, but it's kinda hard to implement because there are multiple models packed into one and the code doesn't use the default PyTorch DataLoader...
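For context, a learning-rate range test with a vanilla PyTorch setup usually looks something like the sketch below. Everything in it (`model`, `optimizer`, `loss_fn`, `loader`) is a placeholder, not anything from this repo; the actual training code bundles several models and uses its own data pipeline, which is exactly why this doesn't drop in cleanly.

```python
# Minimal sketch of a classic LR range test with a plain PyTorch setup.
# All names here are placeholders supplied by the caller.
import math
import torch

def lr_range_test(model, optimizer, loss_fn, loader,
                  lr_start=1e-7, lr_end=1.0, num_steps=200):
    """Sweep the learning rate exponentially and record the loss at each step."""
    gamma = (lr_end / lr_start) ** (1.0 / num_steps)
    lr = lr_start
    history = []
    data_iter = iter(loader)
    for _ in range(num_steps):
        try:
            inputs, targets = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            inputs, targets = next(data_iter)
        for group in optimizer.param_groups:
            group["lr"] = lr
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        history.append((lr, loss.item()))
        if not math.isfinite(loss.item()):
            break  # loss diverged; stop the sweep
        lr *= gamma
    return history  # plot loss vs. lr and pick a value just before the minimum
```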

There is a quirk in the code: it is written in a way that it only 'sees' the latest model written to the finetune folder. That means that when you start another finetuning run with different parameters, the script renames the original finetune folder to something like finetune-231021-etc, and the models in that original folder are no longer available in the GUI.

The simplest solution is to rename the folder back to finetune.
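If you'd rather script the swap than rename by hand, something like the sketch below works. This is only an illustration: the `training/my-voice` path and the `finetune-swapped-out` name are hypothetical, not anything the script itself uses.

```python
# Swap a timestamped finetune run back into place so the GUI picks it up again.
from pathlib import Path

voice_dir = Path("training/my-voice")            # hypothetical dataset folder
current = voice_dir / "finetune"                 # the folder the GUI looks at
backups = sorted(voice_dir.glob("finetune-*"))   # timestamped runs, e.g. finetune-231021-...

if backups:
    old_run = backups[-1]                        # pick whichever run you want back
    if current.exists():
        # move the newer run out of the way first so nothing is overwritten
        current.rename(voice_dir / "finetune-swapped-out")
    old_run.rename(current)
```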

Otherwise, you can select the models in the Settings tab. Again, this selector only sees the finetune directories for all datasets in the training directory.
