You can use the `mlx-lm` package to fine-tune an LLM with low-rank adaptation (LoRA) for a target task.[^1] The example also supports quantized LoRA (QLoRA).[^2] LoRA fine-tuning works with the following model families:
- Mistral
- Llama
- Phi2
- Mixtral
- Qwen2
- Gemma
- OLMo
- MiniCPM
- InternLM2

The main command is `mlx_lm.lora`. To see a full list of command-line options, run:

```shell
mlx_lm.lora --help
```
Note: in the following, the `--model` argument can be any compatible Hugging Face repo or a local path to a converted model.

You can also specify a YAML config with `-c`/`--config`. For more on the format, see the example YAML. For example:

```shell
mlx_lm.lora --config /path/to/config.yaml
```
If command-line flags are also used, they will override the corresponding values in the config.
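
For example, a flag given on the command line takes precedence over the same setting in the YAML file (the paths and values here are placeholders):

```shell
# The --iters value given here overrides any corresponding setting in config.yaml
mlx_lm.lora --config /path/to/config.yaml --iters 1000
```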
To fine-tune a model, use:

```shell
mlx_lm.lora \
    --model <path_to_model> \
    --train \
    --data <path_to_data> \
    --iters 600
```
To fine-tune the full model weights, add the `--fine-tune-type full` flag. Currently supported fine-tuning types are `lora` (default), `dora`, and `full`.
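
For example, to fine-tune with DoRA instead of LoRA, add the corresponding flag to the training command:

```shell
mlx_lm.lora \
    --model <path_to_model> \
    --train \
    --data <path_to_data> \
    --iters 600 \
    --fine-tune-type dora
```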
The `--data` argument must specify a path to a `train.jsonl` and a `valid.jsonl` when using `--train`, and a path to a `test.jsonl` when using `--test`. For more details on the data format, see the section on Data.

For example, to fine-tune Mistral 7B you can use `--model mistralai/Mistral-7B-v0.1`.

If `--model` points to a quantized model, then the training will use QLoRA; otherwise it will use regular LoRA.
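
For example, one way to produce a quantized model is the package's convert command with the `-q` flag (this assumes the `mlx_lm.convert` entry point; see the Setup section for details). Pointing `--model` at the resulting quantized model then makes training use QLoRA:

```shell
# Quantize the base model; fine-tuning the quantized output uses QLoRA
mlx_lm.convert --hf-path mistralai/Mistral-7B-v0.1 -q
```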
By default, the adapter config and learned weights are saved in `adapters/`. You can specify the output location with `--adapter-path`.

You can resume fine-tuning from an existing adapter with `--resume-adapter-file <path_to_adapters.safetensors>`.
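
For example, to continue training from a previously saved adapter file:

```shell
mlx_lm.lora \
    --model <path_to_model> \
    --train \
    --data <path_to_data> \
    --resume-adapter-file <path_to_adapters.safetensors>
```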
To compute test set perplexity, use:

```shell
mlx_lm.lora \
    --model <path_to_model> \
    --adapter-path <path_to_adapters> \
    --data <path_to_data> \
    --test
```
For generation, use `mlx_lm.generate`:

```shell
mlx_lm.generate \
    --model <path_to_model> \
    --adapter-path <path_to_adapters> \
    --prompt "<your_model_prompt>"
```
You can generate a model fused with the low-rank adapters using the `mlx_lm.fuse` command. This command also allows you to optionally:
- Upload the fused model to the Hugging Face Hub.
- Export the fused model to GGUF. Note that GGUF support is limited to Mistral, Mixtral, and Llama-style models in fp16 precision.

To see supported options, run:

```shell
mlx_lm.fuse --help
```
To generate the fused model, run:

```shell
mlx_lm.fuse --model <path_to_model>
```

By default, this loads the adapters from `adapters/` and saves the fused model in `fused_model/`. Both paths are configurable.

To upload a fused model, supply the `--upload-repo` and `--hf-path` arguments to `mlx_lm.fuse`. The latter is the repo name of the original model, which is useful for attribution and model versioning.

For example, to fuse and upload a model derived from Mistral-7B-v0.1, run:
```shell
mlx_lm.fuse \
    --model mistralai/Mistral-7B-v0.1 \
    --upload-repo mlx-community/my-lora-mistral-7b \
    --hf-path mistralai/Mistral-7B-v0.1
```
To export a fused model to GGUF, run:

```shell
mlx_lm.fuse \
    --model mistralai/Mistral-7B-v0.1 \
    --export-gguf
```
This will save the GGUF model in `fused_model/ggml-model-f16.gguf`. You can specify the file name with `--gguf-path`.
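
For example, to write the GGUF file under a different name (the name below is only an illustration):

```shell
mlx_lm.fuse \
    --model mistralai/Mistral-7B-v0.1 \
    --export-gguf \
    --gguf-path my-fused-model-f16.gguf
```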
The LoRA command expects you to provide a dataset with `--data`. The MLX Examples GitHub repo has an example of the WikiSQL data in the correct format.

Datasets can be specified in `*.jsonl` files locally or loaded from Hugging Face.

For fine-tuning (`--train`), the data loader expects a `train.jsonl` and a `valid.jsonl` to be in the data directory. For evaluation (`--test`), the data loader expects a `test.jsonl` in the data directory.
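
For example, you might keep the files in a directory laid out as follows (the directory name is up to you; the file names are what the loader looks for) and pass it with `--data ./data`:

```
data/
├── train.jsonl
├── valid.jsonl
└── test.jsonl
```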
Currently, `*.jsonl` files support the `chat`, `tools`, `completions`, and `text` data formats. Here are examples of these formats:

`chat`:

```jsonl
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello."}, {"role": "assistant", "content": "How can I assist you today?"}]}
```

`tools`:

```jsonl
{"messages":[{"role":"user","content":"What is the weather in San Francisco?"},{"role":"assistant","tool_calls":[{"id":"call_id","type":"function","function":{"name":"get_current_weather","arguments":"{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}]}],"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and country, eg. San Francisco, USA"},"format":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location","format"]}}}]}
```
The same `tools` example, expanded for readability:

```json
{
  "messages": [
    { "role": "user", "content": "What is the weather in San Francisco?" },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "call_id",
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"
          }
        }
      ]
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and country, eg. San Francisco, USA"
            },
            "format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location", "format"]
        }
      }
    }
  ]
}
```
The format of the `arguments` field in a function call varies across models. Common formats include JSON strings and dictionaries. The example above follows the format used by OpenAI and Mistral AI. A dictionary format is used in Hugging Face's chat templates. Refer to the documentation for the model you are fine-tuning for more details.

`completions`:

```jsonl
{"prompt": "What is the capital of France?", "completion": "Paris."}
```
For the `completions` data format, different keys can be used for the prompt and completion by specifying the following in the YAML config:

```yaml
prompt_feature: "input"
completion_feature: "output"
```
Here, `"input"` is the expected key instead of the default `"prompt"`, and `"output"` is the expected key instead of `"completion"`.
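
With that config, a line in the `*.jsonl` files would use the remapped keys, for example:

```jsonl
{"input": "What is the capital of France?", "output": "Paris."}
```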

`text`:

```jsonl
{"text": "This is an example for the model."}
```
The format is determined automatically from the keys in the dataset. Keys in each line that are not expected by the loader are ignored.

Note: each example in the dataset must be on a single line. Do not put more than one example per line and do not split an example across multiple lines.

To use Hugging Face datasets, first install the `datasets` package:

```shell
pip install datasets
```
If the Hugging Face dataset is already in a supported format, you can specify it on the command line. For example, pass `--data mlx-community/wikisql` to train on the pre-formatted WikiSQL data.
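
For example, reusing the placeholders from the fine-tuning command above:

```shell
mlx_lm.lora \
    --model <path_to_model> \
    --train \
    --data mlx-community/wikisql
```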
Otherwise, provide a mapping of keys in the dataset to the features MLX LM expects. Use a YAML config to specify the Hugging Face dataset arguments. For example:

```yaml
hf_dataset:
  name: "billsum"
  prompt_feature: "text"
  completion_feature: "summary"
```
- Use `prompt_feature` and `completion_feature` to specify keys for a `completions` dataset. Use `text_feature` to specify the key for a `text` dataset.
- To specify the train, valid, or test splits, set the corresponding `{train,valid,test}_split` argument.
- Arguments specified in `config` will be passed as keyword arguments to `datasets.load_dataset`.
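
For example, a config that also selects splits might look like the following sketch. This assumes the split keys sit alongside the feature keys in the `hf_dataset` mapping; the slice values are standard Hugging Face `datasets` split notation:

```yaml
hf_dataset:
  name: "billsum"
  prompt_feature: "text"
  completion_feature: "summary"
  train_split: "train[:1000]"
  valid_split: "train[-100:]"
```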
In general, for the `chat`, `tools`, and `completions` formats, Hugging Face chat templates are used. This applies the model's chat template by default. If the model does not have a chat template, then Hugging Face will use a default one. For example, the final text in the `chat` example above with Hugging Face's default template becomes:
```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello.<|im_end|>
<|im_start|>assistant
How can I assist you today?<|im_end|>
```
If you are unsure which format to use, `chat` or `completions` are good starting points. For custom requirements on the format of the dataset, use the `text` format to assemble the content yourself.

Fine-tuning a large model with LoRA requires a machine with a decent amount of memory. Here are some tips to reduce memory use should you need to do so:

- Try quantization (QLoRA). You can use QLoRA by generating a quantized model with `convert.py` and the `-q` flag. See the Setup section for more details.
- Try using a smaller batch size with `--batch-size`. The default is `4`, so setting this to `2` or `1` will reduce memory consumption. This may slow things down a little, but will also reduce the memory use.
- Reduce the number of layers to fine-tune with `--num-layers`. The default is `16`, so you can try `8` or `4`. This reduces the amount of memory needed for back-propagation. It may also reduce the quality of the fine-tuned model if you are fine-tuning with a lot of data.
- Longer examples require more memory. If it makes sense for your data, one thing you can do is break your examples into smaller sequences when making the `{train, valid, test}.jsonl` files.
- Gradient checkpointing lets you trade off memory use (less) for computation (more) by recomputing instead of storing intermediate values needed by the backward pass. You can use gradient checkpointing by passing the `--grad-checkpoint` flag. Gradient checkpointing is more helpful for larger batch sizes or sequence lengths with smaller or quantized models.

For example, for a machine with 32 GB of memory the following should run reasonably fast:

```shell
mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --batch-size 1 \
    --num-layers 4 \
    --data wikisql
```
The above command on an M1 Max with 32 GB runs at about 250 tokens per second, using the MLX Examples `wikisql` data set.

[^1]: Refer to the arXiv paper for more details on LoRA.
[^2]: Refer to the paper QLoRA: Efficient Finetuning of Quantized LLMs.