Commit
[Bugfix] Fix Weight Loading Multiple GPU Test - Large Models (vllm-project#9213)

Signed-off-by: Amit Garg <[email protected]>
mgoin authored and garg-amit committed Oct 28, 2024
1 parent 1324d4b commit 456df95
Showing 2 changed files with 1 addition and 1 deletion.
1 change: 0 additions & 1 deletion tests/weight_loading/models-large.txt
@@ -1,6 +1,5 @@
compressed-tensors, nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-quantized, main
compressed-tensors, nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-channel-quantized, main
compressed-tensors, nm-testing/Mixtral-8x7B-Instruct-v0.1-W8A16-quantized, main
-compressed-tensors, mgoin/DeepSeek-Coder-V2-Lite-Instruct-FP8, main
gptq_marlin, TheBloke/Mixtral-8x7B-v0.1-GPTQ, main
awq_marlin, casperhansen/deepseek-coder-v2-instruct-awq, main
1 change: 1 addition & 0 deletions tests/weight_loading/models.txt
@@ -20,6 +20,7 @@ compressed-tensors, nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test, main
compressed-tensors, nm-testing/Phi-3-mini-128k-instruct-FP8, main
compressed-tensors, neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16, main
compressed-tensors, nm-testing/TinyLlama-1.1B-Chat-v1.0-actorder-group, main
+compressed-tensors, mgoin/DeepSeek-Coder-V2-Lite-Instruct-FP8, main
awq, casperhansen/mixtral-instruct-awq, main
awq_marlin, casperhansen/mixtral-instruct-awq, main
fp8, neuralmagic/Meta-Llama-3-8B-Instruct-FP8-KV, main
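
Each entry in these test manifests appears to follow a comma-separated "<quantization>, <model>, <revision>" layout. The sketch below is a hypothetical parser for that layout, included only to illustrate the file format; parse_manifest and WeightLoadingCase are made-up names and are not part of vLLM's actual test harness.

# Illustrative sketch only: parses the apparent
# "<quantization>, <model>, <revision>" manifest layout.
from typing import List, NamedTuple


class WeightLoadingCase(NamedTuple):
    quantization: str  # e.g. "compressed-tensors", "gptq_marlin", "awq_marlin"
    model: str         # e.g. "mgoin/DeepSeek-Coder-V2-Lite-Instruct-FP8"
    revision: str      # e.g. "main"


def parse_manifest(path: str) -> List[WeightLoadingCase]:
    cases = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            quantization, model, revision = [p.strip() for p in line.split(",")]
            cases.append(WeightLoadingCase(quantization, model, revision))
    return cases


if __name__ == "__main__":
    for case in parse_manifest("tests/weight_loading/models.txt"):
        print(case.quantization, case.model, case.revision)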