
Commit

remove ignore patterns none
Felipe Mello committed Nov 21, 2024
1 parent 2a1d0c2 commit b05689b
Showing 40 changed files with 44 additions and 44 deletions.
6 changes: 3 additions & 3 deletions docs/source/api_ref_models.rst
@@ -217,7 +217,7 @@ To download the Qwen2.5 1.5B model, for example:

.. code-block:: bash

-    tune download Qwen/Qwen2.5-1.5B-Instruct --output-dir /tmp/Qwen2_5-1_5B-Instruct --ignore-patterns None
+    tune download Qwen/Qwen2.5-1.5B-Instruct --output-dir /tmp/Qwen2_5-1_5B-Instruct

.. autosummary::
    :toctree: generated/
@@ -258,7 +258,7 @@ To download the Qwen2 1.5B model, for example:

.. code-block:: bash

-    tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+    tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct

.. autosummary::
    :toctree: generated/
@@ -283,7 +283,7 @@ To download the Phi-3 Mini 4k instruct model:

.. code-block:: bash

-    tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+    tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>

.. autosummary::
    :toctree: generated/
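For reference, a minimal sketch of the end-to-end workflow these docs assume, using commands taken verbatim from the files in this diff (<HF_TOKEN> is a placeholder for a Hugging Face access token):

    # download the Phi-3 Mini 4k instruct weights, then launch the single-device LoRA recipe
    tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
    tune run lora_finetune_single_device --config phi3/mini_lora_single_device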
2 changes: 1 addition & 1 deletion recipes/configs/mistral/7B_full_ppo_low_memory.yaml
@@ -7,7 +7,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download weqweasdas/RM-Mistral-7B --output-dir /tmp/RM-Mistral-7B/ --ignore-patterns None
+# tune download weqweasdas/RM-Mistral-7B --output-dir /tmp/RM-Mistral-7B/
# tune download mistralai/Mistral-7B-Instruct-v0.2 --output-dir /tmp/Mistral-7B-Instruct-v0.2/ --ignore-patterns "*.safetensors" --hf-token <HF_TOKEN>
#
# You'll also need to ensure that {output_dir} exists beforehand, as checkpoints for policy and value models are saved in sub-folders.
2 changes: 1 addition & 1 deletion recipes/configs/phi3/mini_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
#
# Run this config on 4 GPUs using the following:
# tune run --nproc_per_node 4 full_finetune_distributed --config phi3/mini_full
2 changes: 1 addition & 1 deletion recipes/configs/phi3/mini_full_low_memory.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
2 changes: 1 addition & 1 deletion recipes/configs/phi3/mini_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config phi3/mini_lora
2 changes: 1 addition & 1 deletion recipes/configs/phi3/mini_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config phi3/mini_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/phi3/mini_qlora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --ignore-patterns None --hf-token <HF_TOKEN>
+# tune download microsoft/Phi-3-mini-4k-instruct --output-dir /tmp/Phi-3-mini-4k-instruct --hf-token <HF_TOKEN>
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config phi3/mini_qlora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/0.5B_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
#
# To launch on 4 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 4 full_finetune_distributed --config qwen2/0.5B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/0.5B_full_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run full_finetune_single_device --config qwen2/0.5B_full_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/0.5B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2/0.5B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/0.5B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2/0.5B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/1.5B_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# To launch on 4 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 4 full_finetune_distributed --config qwen2/1.5B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/1.5B_full_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/1.5B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2/1.5B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/1.5B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2/1.5B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/7B_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct
#
# To launch on 4 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 4 full_finetune_distributed --config qwen2/7B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/7B_full_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/7B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2/7B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2/7B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-7B-Instruct --output-dir /tmp/Qwen2-7B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2/7B_lora_single_device
4 changes: 2 additions & 2 deletions recipes/configs/qwen2/knowledge_distillation_distributed.yaml
@@ -3,8 +3,8 @@
#
# This config assumes that you've ran the following commands before launching KD:
# First download the student and teacher models
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# You get better results using KD if the teacher model has already been fine-tuned on the target dataset:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2/1.5B_lora
@@ -3,8 +3,8 @@
#
# This config assumes that you've ran the following commands before launching KD:
# First download the student and teacher models
-# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct --ignore-patterns None
-# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
+# tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
#
# You get better results using KD if the teacher model has already been fine-tuned on the target dataset:
# tune run lora_finetune_single_device --config qwen2/1.5B_lora_single_device
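The knowledge-distillation configs above assume both models are present locally before KD is launched; a minimal sketch under that assumption, using only commands that appear in these files (the KD recipe is then pointed at the downloaded, and optionally fine-tuned, teacher checkpoint):

    # download the student (0.5B) and teacher (1.5B) models
    tune download Qwen/Qwen2-0.5B-Instruct --output-dir /tmp/Qwen2-0.5B-Instruct
    tune download Qwen/Qwen2-1.5B-Instruct --output-dir /tmp/Qwen2-1.5B-Instruct
    # optional: fine-tune the teacher on the target dataset first for better KD results
    tune run lora_finetune_single_device --config qwen2/1.5B_lora_single_device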
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/0.5B_full.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-0.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nproc_per_node 2 full_finetune_distributed --config qwen2_5/0.5B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/0.5B_full_single_device.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 0.5B
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-0.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run full_finetune_single_device --config qwen2_5/0.5B_full_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/0.5B_lora.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-0.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/0.5B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/0.5B_lora_single_device.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching
-# tune download Qwen/Qwen2.5-0.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-0.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/0.5B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/1.5B_full.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 1.5B model
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-1.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nproc_per_node 2 full_finetune_distributed --config qwen2_5/1.5B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/1.5B_full_single_device.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 1.5B
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-1.5B-Instruct
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with:
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/1.5B_lora.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 1.5B model
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-1.5B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/1.5B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/1.5B_lora_single_device.yaml
@@ -2,7 +2,7 @@
# using a Qwen2.5 1.5B model
#
# This config assumes that you've run the following command before launching:
-# tune download Qwen/Qwen2.5-1.5B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-1.5B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/1.5B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/14B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-14B-Instruct --output-dir /tmp/Qwen2_5-14B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-14B-Instruct --output-dir /tmp/Qwen2_5-14B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/14B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/32B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-32B-Instruct --output-dir /tmp/Qwen2_5-32B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-32B-Instruct --output-dir /tmp/Qwen2_5-32B-Instruct
#
# To launch on 8 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 8 lora_finetune_distributed --config qwen2_5/32B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/3B_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 full_finetune_distributed --config qwen2_5/3B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/3B_full_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/3B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/3B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/3B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-3B-Instruct --output-dir /tmp/Qwen2_5-3B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/3B_lora_single_device
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/72B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-72B-Instruct --output-dir /tmp/Qwen2_5-72B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-72B-Instruct --output-dir /tmp/Qwen2_5-72B-Instruct
#
# To launch on 8 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 8 lora_finetune_distributed --config qwen2_5/72B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/7B_full.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 full_finetune_distributed --config qwen2_5/7B_full
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/7B_full_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/7B_lora.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/7B_lora
2 changes: 1 addition & 1 deletion recipes/configs/qwen2_5/7B_lora_single_device.yaml
@@ -3,7 +3,7 @@
#
# This config assumes that you've run the following command before launching
# this run:
-# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct --ignore-patterns None
+# tune download Qwen/Qwen2.5-7B-Instruct --output-dir /tmp/Qwen2_5-7B-Instruct
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/7B_lora_single_device
