[CI] Fix failing FP8 cpu offload test (vllm-project#13170)
Signed-off-by: mgoin <[email protected]>
mgoin authored Feb 12, 2025
1 parent 09972e7 commit 14b7899
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions tests/quantization/test_cpu_offload.py
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: Apache-2.0
 
 # Expanded quantized model tests for CPU offloading
 # Base tests: tests/basic_correctness/test_cpu_offload.py
 
@@ -14,13 +14,13 @@
                     reason="fp8 is not supported on this GPU type.")
 def test_cpu_offload_fp8():
     # Test quantization of an unquantized checkpoint
-    compare_two_settings("meta-llama/Meta-Llama-3-8B-Instruct",
+    compare_two_settings("meta-llama/Llama-3.2-1B-Instruct",
                          ["--quantization", "fp8"],
-                         ["--quantization", "fp8", "--cpu-offload-gb", "2"],
+                         ["--quantization", "fp8", "--cpu-offload-gb", "1"],
                          max_wait_seconds=480)
     # Test loading a quantized checkpoint
-    compare_two_settings("neuralmagic/Meta-Llama-3-8B-Instruct-FP8", [],
-                         ["--cpu-offload-gb", "2"],
+    compare_two_settings("neuralmagic/Qwen2-1.5B-Instruct-FP8", [],
+                         ["--cpu-offload-gb", "1"],
                          max_wait_seconds=480)
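
To reproduce the new test configuration outside the CI harness, here is a minimal sketch (not part of the commit; it assumes a local vLLM install and a compatible CUDA GPU). It loads the smaller pre-quantized checkpoint used by the updated test and offloads 1 GiB of weights to CPU, mirroring the "--cpu-offload-gb", "1" setting:

    # Minimal sketch mirroring the updated test settings
    # (assumes vLLM is installed and a CUDA GPU is available).
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="neuralmagic/Qwen2-1.5B-Instruct-FP8",  # pre-quantized checkpoint from the test
        cpu_offload_gb=1,  # offload up to 1 GiB of weights to CPU, as in --cpu-offload-gb 1
    )
    out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
    print(out[0].outputs[0].text)

The test itself can be run directly with: pytest tests/quantization/test_cpu_offload.py -k test_cpu_offload_fp8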


