Step 1: create the Ray cluster
On the head node: ray start --head --port=40000 --num-gpus=1
On each worker node: ray start --address=21.12.9.115:40000 --num-gpus=1
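Before launching vLLM, it can help to confirm that every worker actually joined and that Ray sees all of the GPUs. This is a minimal sketch using Ray's standard cluster_resources() API; the expected count of 32 is an assumption based on the pipeline_parallel_size=32 used in step 2.

import ray

# Connect to the cluster started above (run this on the head node).
ray.init(address="auto")

# cluster_resources() aggregates resources across every node that has joined.
resources = ray.cluster_resources()
print(resources)

# With 32 single-GPU nodes attached, the "GPU" entry should report 32.0;
# a smaller number means some worker nodes did not connect to the head.
assert resources.get("GPU", 0) >= 32, "not all GPU nodes have joined the Ray cluster"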
Step 2: run vLLM as:
import ray
from vllm import LLM

ray.init(address="auto")
llm = LLM(
    model="/data/code/models/671B/",
    tensor_parallel_size=1,
    pipeline_parallel_size=32,
    trust_remote_code=True,
)
I then hit the error below. Does anyone know what the problem is?
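For context while reading the log: the failure is a CUDA out-of-memory error on one worker while it allocates the fp8 MoE expert weights. Below is a hedged sketch of the memory-related LLM() parameters that are commonly lowered in that situation; the parameter names come from vLLM's public API, but the values are illustrative and not a verified fix for this cluster.

from vllm import LLM

# Illustrative values only -- they trade capability for per-GPU memory headroom.
llm = LLM(
    model="/data/code/models/671B/",
    tensor_parallel_size=1,
    pipeline_parallel_size=32,
    trust_remote_code=True,
    max_model_len=32768,          # cap the context length instead of the default 163840
    gpu_memory_utilization=0.90,  # fraction of each GPU's memory vLLM may claim
    enforce_eager=True,           # skip CUDA graph capture, saving some extra memory
)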
INFO 02-11 15:09:33 init.py:190] Automatically detected platform cuda.
2025-02-11 15:09:33,647 INFO worker.py:1654 -- Connecting to existing Ray cluster at address: 21.12.9.115:40000...
2025-02-11 15:09:33,657 INFO worker.py:1832 -- Connected to Ray cluster. View the dashboard at http://127.0.0.1:8265
INFO 02-11 15:09:33 config.py:137] Replacing legacy 'type' key with 'rope_type'
INFO 02-11 15:09:37 config.py:542] This model supports multiple tasks: {'embed', 'reward', 'generate', 'score', 'classify'}. Defaulting to 'generate'.
INFO 02-11 15:09:38 config.py:1401] Defaulting to use ray for distributed inference
WARNING 02-11 15:09:38 arg_utils.py:1135] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 02-11 15:09:38 config.py:1556] Chunked prefill is enabled with max_num_batched_tokens=2048.
WARNING 02-11 15:09:38 config.py:669] Async output processing can not be enabled with pipeline parallel
WARNING 02-11 15:09:38 fp8.py:52] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
INFO 02-11 15:09:38 config.py:3275] MLA is enabled; forcing chunked prefill and prefix caching to be disabled.
INFO 02-11 15:09:38 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.2) with config: model='/data/code/models/671B/', speculative_config=None, tokenizer='/data/code/models/671B/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=163840, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=32, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/data/code/models/671B/, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
2025-02-11 15:09:38,509 INFO worker.py:1654 -- Connecting to existing Ray cluster at address: 21.12.9.115:40000...
2025-02-11 15:09:38,509 INFO worker.py:1672 -- Calling ray.init() again after it has already been called.
INFO 02-11 15:09:38 ray_distributed_executor.py:149] use_ray_spmd_worker: False
(pid=9278) INFO 02-11 15:09:40 init.py:190] Automatically detected platform cuda.
INFO 02-11 15:09:42 cuda.py:161] Using Triton MLA backend.
WARNING 02-11 15:09:42 triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(RayWorkerWrapper pid=1869, ip=21.12.9.156) INFO 02-11 15:09:42 cuda.py:161] Using Triton MLA backend.
(RayWorkerWrapper pid=1898, ip=21.12.9.192) WARNING 02-11 15:09:42 triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
INFO 02-11 15:09:43 utils.py:950] Found nccl from library libnccl.so.2
INFO 02-11 15:09:43 pynccl.py:69] vLLM is using nccl==2.21.5
(RayWorkerWrapper pid=1869, ip=21.12.9.140) INFO 02-11 15:09:43 utils.py:950] Found nccl from library libnccl.so.2
(RayWorkerWrapper pid=1869, ip=21.12.9.140) INFO 02-11 15:09:43 pynccl.py:69] vLLM is using nccl==2.21.5
INFO 02-11 15:09:43 model_runner.py:1110] Starting to load model /data/code/models/671B/...
(RayWorkerWrapper pid=1873, ip=30.183.51.186) INFO 02-11 15:09:43 model_runner.py:1110] Starting to load model /data/code/models/671B/...
(RayWorkerWrapper pid=1873, ip=21.12.9.163) WARNING 02-11 15:09:43 utils.py:159] The model class DeepseekV3ForCausalLM has not defined packed_modules_mapping, this may lead to incorrect mapping of quantized or ignored modules
WARNING 02-11 15:09:43 utils.py:159] The model class DeepseekV3ForCausalLM has not defined packed_modules_mapping, this may lead to incorrect mapping of quantized or ignored modules
INFO 02-11 15:09:43 cuda.py:161] Using Triton MLA backend.
Loading safetensors checkpoint shards: 0% Completed | 0/163 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 2% Completed | 4/163 [00:00<00:04, 33.91it/s]
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] Error executing method 'load_model'. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] Traceback (most recent call last):
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] return run_method(target, method, args, kwargs)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] return func(*args, **kwargs)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.model_runner.load_model()
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.model = get_model(vllm_config=self.vllm_config)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] return loader.load_model(vllm_config=vllm_config)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 383, in load_model
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] model = _initialize_model(vllm_config=vllm_config)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] return model_class(vllm_config=vllm_config, prefix=prefix)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 665, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.model = DeepseekV2Model(vllm_config=vllm_config,
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 599, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.start_layer, self.end_layer, self.layers = make_layers(
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 557, in make_layers
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] [PPMissingLayer() for _ in range(start_layer)] + [
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 558, in
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 601, in
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] lambda prefix: DeepseekV2DecoderLayer(
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 528, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.mlp = DeepseekV2MoE(
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 129, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.experts = FusedMoE(
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 309, in init
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] self.quant_method.create_weights(layer=self, **moe_quant_params)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/layers/quantization/fp8.py", line 428, in create_weights
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] w13_weight = torch.nn.Parameter(torch.empty(
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] File "/opt/conda/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in torch_function
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] return func(*args, **kwargs)
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] ^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=2550, ip=30.183.51.226) ERROR 02-11 15:09:43 worker_base.py:574] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.00 GiB. GPU 0 has a total capacity of 44.53 GiB of which 999.94 MiB is free. Process 937741 has 43.54 GiB memory in use. Of the allocated memory 43.09 GiB is allocated by PyTorch, and 24.09 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Loading safetensors checkpoint shards: 5% Completed | 8/163 [00:00<00:04, 33.39it/s]
Loading safetensors checkpoint shards: 7% Completed | 12/163 [00:00<00:04, 33.95it/s]
Loading safetensors checkpoint shards: 10% Completed | 16/163 [00:00<00:04, 33.57it/s]
Loading safetensors checkpoint shards: 12% Completed | 20/163 [00:00<00:04, 33.77it/s]
Loading safetensors checkpoint shards: 15% Completed | 24/163 [00:00<00:06, 20.23it/s]
Loading safetensors checkpoint shards: 17% Completed | 28/163 [00:01<00:05, 23.23it/s]
Loading safetensors checkpoint shards: 20% Completed | 32/163 [00:01<00:05, 25.71it/s]
Loading safetensors checkpoint shards: 22% Completed | 36/163 [00:01<00:04, 28.90it/s]
Loading safetensors checkpoint shards: 25% Completed | 40/163 [00:01<00:04, 30.23it/s]
Loading safetensors checkpoint shards: 27% Completed | 44/163 [00:01<00:03, 31.05it/s]
Loading safetensors checkpoint shards: 29% Completed | 48/163 [00:01<00:03, 31.71it/s]
Loading safetensors checkpoint shards: 32% Completed | 52/163 [00:01<00:03, 31.98it/s]
Loading safetensors checkpoint shards: 34% Completed | 56/163 [00:01<00:03, 32.49it/s]
Loading safetensors checkpoint shards: 37% Completed | 61/163 [00:02<00:02, 34.54it/s]
Loading safetensors checkpoint shards: 40% Completed | 65/163 [00:02<00:02, 35.74it/s]
Loading safetensors checkpoint shards: 42% Completed | 69/163 [00:02<00:02, 35.13it/s]
Loading safetensors checkpoint shards: 45% Completed | 73/163 [00:02<00:02, 36.13it/s]
Loading safetensors checkpoint shards: 47% Completed | 77/163 [00:02<00:02, 35.19it/s]
Loading safetensors checkpoint shards: 50% Completed | 81/163 [00:02<00:02, 34.39it/s]
Loading safetensors checkpoint shards: 52% Completed | 85/163 [00:02<00:02, 33.53it/s]
Loading safetensors checkpoint shards: 55% Completed | 89/163 [00:02<00:02, 33.01it/s]
Loading safetensors checkpoint shards: 57% Completed | 93/163 [00:02<00:02, 32.57it/s]
Loading safetensors checkpoint shards: 60% Completed | 97/163 [00:03<00:02, 32.37it/s]
Loading safetensors checkpoint shards: 62% Completed | 101/163 [00:03<00:01, 32.17it/s]
Loading safetensors checkpoint shards: 64% Completed | 105/163 [00:03<00:01, 32.11it/s]
Loading safetensors checkpoint shards: 67% Completed | 109/163 [00:03<00:01, 31.87it/s]
Loading safetensors checkpoint shards: 69% Completed | 113/163 [00:03<00:01, 31.83it/s]
Loading safetensors checkpoint shards: 72% Completed | 117/163 [00:03<00:01, 31.81it/s]
Loading safetensors checkpoint shards: 74% Completed | 121/163 [00:03<00:01, 31.75it/s]
Loading safetensors checkpoint shards: 77% Completed | 125/163 [00:03<00:01, 31.60it/s]
Loading safetensors checkpoint shards: 79% Completed | 129/163 [00:04<00:01, 31.61it/s]
Loading safetensors checkpoint shards: 82% Completed | 133/163 [00:04<00:00, 31.60it/s]
Loading safetensors checkpoint shards: 84% Completed | 137/163 [00:04<00:01, 23.26it/s]
Loading safetensors checkpoint shards: 86% Completed | 140/163 [00:04<00:00, 23.75it/s]
Loading safetensors checkpoint shards: 89% Completed | 145/163 [00:04<00:00, 28.78it/s]
Loading safetensors checkpoint shards: 91% Completed | 149/163 [00:04<00:00, 31.23it/s]
Loading safetensors checkpoint shards: 94% Completed | 154/163 [00:04<00:00, 34.63it/s]
Loading safetensors checkpoint shards: 97% Completed | 158/163 [00:05<00:00, 34.18it/s]
(RayWorkerWrapper pid=2407, ip=21.12.9.119) INFO 02-11 15:09:48 model_runner.py:1115] Loading model weights took 2.9150 GB
(pid=1873, ip=21.12.9.154) INFO 02-11 15:09:41 init.py:190] Automatically detected platform cuda. [repeated 31x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(RayWorkerWrapper pid=2201, ip=21.12.9.204) INFO 02-11 15:09:43 cuda.py:161] Using Triton MLA backend. [repeated 61x across cluster]
(RayWorkerWrapper pid=1869, ip=21.12.9.128) WARNING 02-11 15:09:42 triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored. [repeated 30x across cluster]
(RayWorkerWrapper pid=1873, ip=30.183.51.186) INFO 02-11 15:09:43 utils.py:950] Found nccl from library libnccl.so.2 [repeated 30x across cluster]
(RayWorkerWrapper pid=1873, ip=30.183.51.186) INFO 02-11 15:09:43 pynccl.py:69] vLLM is using nccl==2.21.5 [repeated 30x across cluster]
(RayWorkerWrapper pid=1873, ip=30.183.51.183) INFO 02-11 15:09:43 model_runner.py:1110] Starting to load model /data/code/models/671B/... [repeated 30x across cluster]
(RayWorkerWrapper pid=1873, ip=30.183.51.186) WARNING 02-11 15:09:43 utils.py:159] The model class DeepseekV3ForCausalLM has not defined packed_modules_mapping, this may lead to incorrect mapping of quantized or ignored modules [repeated 30x across cluster]
Loading safetensors checkpoint shards: 99% Completed | 162/163 [00:05<00:00, 33.63it/s]
Loading safetensors checkpoint shards: 100% Completed | 163/163 [00:05<00:00, 31.21it/s]
INFO 02-11 15:09:49 model_runner.py:1115] Loading model weights took 4.6415 GB
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/code/models/deploy.py", line 6, in
[rank0]: llm = LLM(
[rank0]: ^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/utils.py", line 1051, in inner
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 242, in init
[rank0]: self.llm_engine = self.engine_class.from_engine_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
[rank0]: engine = cls(
[rank0]: ^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 273, in init
[rank0]: self.model_executor = executor_class(vllm_config=vllm_config, )
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 262, in init
[rank0]: super().init(*args, **kwargs)
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 51, in init
[rank0]: self._init_executor()
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/executor/ray_distributed_executor.py", line 90, in _init_executor
[rank0]: self._init_workers_ray(placement_group)
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/executor/ray_distributed_executor.py", line 356, in _init_workers_ray
[rank0]: self._run_workers("load_model",
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/executor/ray_distributed_executor.py", line 481, in _run_workers
[rank0]: ray_worker_outputs = ray.get(ray_worker_outputs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/ray/_private/worker.py", line 2772, in get
[rank0]: values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/ray/_private/worker.py", line 919, in get_objects
[rank0]: raise value.as_instanceof_cause()
[rank0]: ray.exceptions.RayTaskError(OutOfMemoryError): ray::RayWorkerWrapper.execute_method() (pid=2550, ip=30.183.51.226, actor_id=055fc7c8272a64b6f203111c06000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7fe7a811b110>)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 575, in execute_method
[rank0]: raise e
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
[rank0]: return run_method(target, method, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/utils.py", line 2220, in run_method
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker.py", line 183, in load_model
[rank0]: self.model_runner.load_model()
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
[rank0]: self.model = get_model(vllm_config=self.vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model
[rank0]: return loader.load_model(vllm_config=vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 383, in load_model
[rank0]: model = _initialize_model(vllm_config=vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model
[rank0]: return model_class(vllm_config=vllm_config, prefix=prefix)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 665, in init
[rank0]: self.model = DeepseekV2Model(vllm_config=vllm_config,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in init
[rank0]: old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 599, in init
[rank0]: self.start_layer, self.end_layer, self.layers = make_layers(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 557, in make_layers
[rank0]: [PPMissingLayer() for _ in range(start_layer)] + [
[rank0]: ^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 558, in
[rank0]: maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 601, in
[rank0]: lambda prefix: DeepseekV2DecoderLayer(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 528, in init
[rank0]: self.mlp = DeepseekV2MoE(
[rank0]: ^^^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 129, in init
[rank0]: self.experts = FusedMoE(
[rank0]: ^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 309, in init
[rank0]: self.quant_method.create_weights(layer=self, **moe_quant_params)
[rank0]: File "/opt/conda/lib/python3.11/site-packages/vllm/model_executor/layers/quantization/fp8.py", line 428, in create_weights
[rank0]: w13_weight = torch.nn.Parameter(torch.empty(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/opt/conda/lib/python3.11/site-packages/torch/utils/_device.py", line 106, in torch_function
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.00 GiB. GPU 0 has a total capacity of 44.53 GiB of which 999.94 MiB is free. Process 937741 has 43.54 GiB memory in use. Of the allocated memory 43.09 GiB is allocated by PyTorch, and 24.09 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(RayWorkerWrapper pid=1927, ip=21.12.9.124) INFO 02-11 15:09:48 model_runner.py:1115] Loading model weights took 2.9150 GB
[rank0]:[W211 15:09:50.139271596 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
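The OOM message itself suggests trying the expandable-segments allocator option. A minimal sketch of applying it is below; the variable name and value are quoted from the error text. It only addresses fragmentation, it has to be set before any CUDA memory is allocated in the process, and it would also need to be exported on the worker nodes, since Ray does not forward the driver's environment variables automatically.

import os

# From the error message: PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# Set this before torch touches the GPU, i.e. before constructing vLLM's LLM.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"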