Different sources often offer contradictory arguments, which reflects the complexity of the problem we are tackling here. Going through several articles and fine-tuning a few models yourself should help you form your own opinion.
There are so many tutorials for this first step, in both text and video formats, that you can easily find one by searching on Google, Civitai, Reddit, or YouTube. We list a few of them here.
- LoRA Training Guide
- Make your own Loras, easy and free
- RFKTR's in-depth guide to Training high quality models
- How to create near-perfect character and style LoRa's for SDXL 1.0
Resources with detailed explanations of the effects of training parameters are much scarcer. Suggestions for this block are welcome.
- bmaltais/LoRA training parameters: Simple explanation of what each argument in kohya/sd-scripts does
- THE OTHER LoRA TRAINING RENTRY: A deep dive into different aspects of training
- LyCORIS-experiments: Investigation on transfer between base models and impacts of various training decisions
- lora training logs & notes - crude science: On U-Net and TextEncoder learning rate
- A simple guide for VEHICLE LORA: Captioning for vehicle LoRA
- followfox blog: This blog contains many interesting experiments on Stable Diffusion fine-tuning
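To give a concrete anchor for the parameters these resources discuss (network dimension, U-Net vs. TextEncoder learning rates, scheduler, etc.), here is a sketch of a typical kohya/sd-scripts LoRA training invocation. All paths are placeholders and every numeric value is illustrative only, not a recommendation; consult the guides above before choosing your own settings.

```shell
# Illustrative kohya/sd-scripts LoRA run; paths and values are placeholders.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/base_model.safetensors" \
  --train_data_dir="/path/to/dataset" \
  --output_dir="/path/to/output" \
  --network_module=networks.lora \
  --network_dim=32 --network_alpha=16 \
  --unet_lr=1e-4 --text_encoder_lr=5e-5 \
  --lr_scheduler="cosine" \
  --optimizer_type="AdamW8bit" \
  --resolution=512 --train_batch_size=2 \
  --max_train_epochs=10 \
  --mixed_precision="fp16" \
  --save_model_as=safetensors
```

Note the separate `--unet_lr` and `--text_encoder_lr` flags: several of the resources above (notably the "crude science" training logs) focus on how these two learning rates interact.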