
[Bug]: RuntimeError: No suitable kernel. h_in=16 h_out=3424 dtype=Float out_dtype=BFloat16 #3793

Closed
Edisonwei54 opened this issue Apr 2, 2024 · 35 comments
Labels: bug (Something isn't working)

Comments

@Edisonwei54

Your current environment

Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-100-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090

Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             88
On-line CPU(s) list:                0-87
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz
CPU family:                         6
Model:                              79
Thread(s) per core:                 2
Core(s) per socket:                 22
Socket(s):                          2
Stepping:                           1
CPU max MHz:                        3700.0000
CPU min MHz:                        1200.0000
BogoMIPS:                           4399.70
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization:                     VT-x
L1d cache:                          1.4 MiB (44 instances)
L1i cache:                          1.4 MiB (44 instances)
L2 cache:                           11 MiB (44 instances)
L3 cache:                           110 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-21,44-65
NUMA node1 CPU(s):                  22-43,66-87
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        KVM: Mitigation: VMX disabled
Vulnerability L1tf:                 Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; Clear CPU buffers; SMT vulnerable

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.1.2
[pip3] triton==2.1.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.1.2                    pypi_0    pypi
[conda] triton                    2.1.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     SYS     SYS     0-21,44-65      0               N/A
GPU1    PHB      X      SYS     SYS     0-21,44-65      0               N/A
GPU2    SYS     SYS      X      PHB     22-43,66-87     1               N/A
GPU3    SYS     SYS     PHB      X      22-43,66-87     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server \
    --model /home/greatwall/app/edison/models/Qwen1.5-14B-Chat \
    --served-model-name FUX14B2 \
    --enable-lora \
    --lora-modules lora_1=/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-300 lora_2=/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-270 lora_3=/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-240 \
    --gpu-memory-utilization 1 \
    --tensor-parallel-size 4 \
    --host 0.0.0.0 \
    --port 8001
INFO 04-02 09:47:24 api_server.py:148] vLLM API server version 0.4.0
INFO 04-02 09:47:24 api_server.py:149] args: Namespace(host='0.0.0.0', port=8001, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, served_model_name='FUX14B2', lora_modules=[LoRA(name='lora_1', local_path='/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-300'), LoRA(name='lora_2', local_path='/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-270'), LoRA(name='lora_3', local_path='/home/greatwall/app/edison/output/qwen1half-14b-chat/v25-20240330-131708/checkpoint-240')], chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/greatwall/app/edison/models/Qwen1.5-14B-Chat', tokenizer=None, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', max_model_len=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=4, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=1.0, forced_num_gpu_blocks=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=False, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=True, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, scheduler_delay_factor=0.0, enable_chunked_prefill=False, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
2024-04-02 09:47:26,188 INFO worker.py:1752 -- Started a local Ray instance.
INFO 04-02 09:47:27 llm_engine.py:75] Initializing an LLM engine (v0.4.0) with config: model='/home/greatwall/app/edison/models/Qwen1.5-14B-Chat', tokenizer='/home/greatwall/app/edison/models/Qwen1.5-14B-Chat', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=4, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, seed=0)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 04-02 09:47:45 selector.py:51] Cannot use FlashAttention because the package is not found. Please install it for better performance.
INFO 04-02 09:47:45 selector.py:25] Using XFormers backend.
(RayWorkerVllm pid=43202) INFO 04-02 09:47:45 selector.py:51] Cannot use FlashAttention because the package is not found. Please install it for better performance.
(RayWorkerVllm pid=43202) INFO 04-02 09:47:45 selector.py:25] Using XFormers backend.
(RayWorkerVllm pid=43317) INFO 04-02 09:47:46 pynccl_utils.py:45] vLLM is using nccl==2.18.1
INFO 04-02 09:47:46 pynccl_utils.py:45] vLLM is using nccl==2.18.1
WARNING 04-02 09:47:48 custom_all_reduce.py:45] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(RayWorkerVllm pid=43202) WARNING 04-02 09:47:48 custom_all_reduce.py:45] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 04-02 09:47:54 model_runner.py:104] Loading model weights took 6.6485 GB
(RayWorkerVllm pid=43202) INFO 04-02 09:47:56 model_runner.py:104] Loading model weights took 6.6485 GB
(RayWorkerVllm pid=43395) INFO 04-02 09:47:45 selector.py:51] Cannot use FlashAttention because the package is not found. Please install it for better performance. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)
(RayWorkerVllm pid=43395) INFO 04-02 09:47:45 selector.py:25] Using XFormers backend. [repeated 2x across cluster]
(RayWorkerVllm pid=43202) INFO 04-02 09:47:46 pynccl_utils.py:45] vLLM is using nccl==2.18.1 [repeated 2x across cluster]
(RayWorkerVllm pid=43395) WARNING 04-02 09:47:48 custom_all_reduce.py:45] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44] Error executing method profile_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44] Traceback (most recent call last):
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/engine/ray_utils.py", line 37, in execute_method
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return executor(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return func(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/worker/worker.py", line 131, in profile_num_available_blocks
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     self.model_runner.profile_run()
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return func(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/worker/model_runner.py", line 742, in profile_run
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     self.execute_model(seqs, kv_caches)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return func(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/worker/model_runner.py", line 663, in execute_model
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     hidden_states = model_executable(**execute_model_kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return self._call_impl(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return forward_call(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/model_executor/models/qwen2.py", line 317, in forward
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     hidden_states = self.model(input_ids, positions, kv_caches,
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return self._call_impl(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return forward_call(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/model_executor/models/qwen2.py", line 254, in forward
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     hidden_states, residual = layer(
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return self._call_impl(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return forward_call(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/model_executor/models/qwen2.py", line 217, in forward
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     hidden_states = self.mlp(hidden_states)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return self._call_impl(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return forward_call(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/model_executor/models/qwen2.py", line 76, in forward
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     gate_up, _ = self.gate_up_proj(x)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return self._call_impl(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     return forward_call(*args, **kwargs)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/lora/layers.py", line 395, in forward
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     output_parallel = self.apply_weights(input_, bias)
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/lora/layers.py", line 509, in apply_weights
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     _apply_lora_packed_nslice(
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/lora/layers.py", line 97, in _apply_lora_packed_nslice
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     add_lora_slice(output, x, lora_a_stacked[slice_idx],
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]   File "/home/greatwall/app/edison/vllm/vllm/lora/punica.py", line 160, in add_lora_slice
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44]     punica_kernels.dispatch_bgmv_low_level(
(RayWorkerVllm pid=43395) ERROR 04-02 09:47:57 ray_utils.py:44] RuntimeError: No suitable kernel. h_in=16 h_out=3424 dtype=Float out_dtype=BFloat16
Edisonwei54 added the bug label Apr 2, 2024
@Edisonwei54 (Author)

@WoosukKwon How can I solve this problem?

@jeejeelee (Collaborator)

The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error
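For intuition about why the tensor-parallel size changes this: the failing h_out is the per-rank shard of the gate/up projection, so it shrinks as --tensor-parallel-size grows. Below is a minimal sketch of the arithmetic, assuming an intermediate_size of 13696 for Qwen1.5-14B and a divisible-by-64 constraint on h_out (both assumptions are flagged in the comments, not taken from this reply):

#include <cstdio>

// Sketch: per-rank h_out of a column-parallel projection, and whether it
// meets the kernels' divisible-by-64 constraint. intermediate_size=13696
// is an assumed Qwen1.5-14B config value, not confirmed in this thread.
int main() {
    const int intermediate_size = 13696;
    const int tp_sizes[] = {4, 2};
    for (int tp : tp_sizes) {
        int h_out = intermediate_size / tp;  // shard held by each TP rank
        std::printf("tp=%d -> h_out=%d, h_out %% 64 = %d (%s)\n",
                    tp, h_out, h_out % 64,
                    h_out % 64 == 0 ? "kernel available" : "no suitable kernel");
    }
    return 0;
}

With tp=4 this gives h_out=3424 (remainder 32), matching the reported error; with tp=2 the shard is 6848, a clean multiple of 64.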

@Edisonwei54 (Author)

> The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error

Thanks, it works now, but I still want to use all the GPUs, because otherwise the memory isn't enough...

@nlp-learner

Hello, I hit a similar problem loading baichuan2-13b, in both 0.3.3 and 0.4: RuntimeError: No suitable kernel. h_in=32 h_out=15360 dtype=Float out_dtype=BFloat16

@jeejeelee (Collaborator)

> Hello, I hit a similar problem loading baichuan2-13b, in both 0.3.3 and 0.4: RuntimeError: No suitable kernel. h_in=32 h_out=15360 dtype=Float out_dtype=BFloat16

The Punica kernels in the current vLLM release don't support 15360; my earlier PR missed that, sorry.
You can add

f(in_T, out_T, W_T, narrow, 15360) \

at https://github.com/vllm-project/vllm/blob/main/csrc/punica/bgmv/bgmv_config.h#L48
and then rebuild vLLM (version 0.4.0).
If it tests out fine, you are also welcome to open a PR to fix this bug.
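For anyone unsure where that line goes: the supported sizes are enumerated in a preprocessor macro list in csrc/punica/bgmv/bgmv_config.h, where each f(...) entry instantiates BGMV kernels for one output width. A rough sketch of the shape of that list follows; the macro name and the neighboring size entries are illustrative, only the 15360 line is the actual addition:

// csrc/punica/bgmv/bgmv_config.h -- sketch, not a verbatim excerpt.
// Each f(...) entry expands into kernel instantiations for one width.
#define FOR_BGMV_WIDE(f, in_T, out_T, W_T, narrow) \
    /* ...existing entries elided... */            \
    f(in_T, out_T, W_T, narrow, 14336)             \
    f(in_T, out_T, W_T, narrow, 15360)             \
    f(in_T, out_T, W_T, narrow, 16384)             \
    /* ...existing entries elided... */

After editing, vLLM has to be rebuilt from source so the new instantiation is actually compiled (see the build commands quoted near the end of this thread).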

@nlp-learner

Yes, that's how I added it too; it tested fine, with no issues in either 0.3.3 or 0.4.

@jeejeelee (Collaborator)

> Yes, that's how I added it too; it tested fine, with no issues in either 0.3.3 or 0.4.

Hi, could you open a PR to fix this?

@nlp-learner commented Apr 11, 2024

I see you keep submitting PRs; could you fold this change into your next one, so I don't have to open a PR just for it? Also, besides your earlier QKV PR that was merged, what else changed in 0.4? For example I noticed the snippet below. I'm deciding whether I need to move to 0.4, since I rewrote the ModelRunner, worker, and LLMEngine parts on top of 0.3.3.

@classmethod
def can_replace_layer(cls, source_layer: nn.Module,
                      lora_config: LoRAConfig,
                      packed_modules_list: List,
                      model_config: Optional[PretrainedConfig]) -> bool:

@jeejeelee (Collaborator)

@nlp-learner OK.
Also, to see the changes and diffs between versions, you can append compare to the repository URL: https://github.com/vllm-project/vllm/compare

@kingljl (Contributor) commented Apr 11, 2024

@jeejeelee Bro, I've run into a bug.
[screenshot]
I don't know how to fix it.
Environment:
A10 GPU
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

@kingljl (Contributor) commented Apr 11, 2024

@jeejeelee Sorted it out: I added an entry for the 640 size.

@AJAXLONG commented May 7, 2024

I hit this problem with chinese-alpaca-llama2-7B. Is there a way around it that doesn't require recompiling?

@Xingxiangrui

h_in=16 h_out=3424
+1 for this: Qwen-14B with LoRA rank=16.
Still not fixed in vLLM 0.4.2.

@heurainbow

> The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error

@jeejeelee Can you support this on 8 GPUs?

@AJAXLONG

@jeejeelee
Could you adapt the kernels for these sizes in the next release?
No suitable kernel. h_in=16 h_out=76032 dtype=Float out_dtype=Half
No suitable kernel. h_in=16 h_out=55552 dtype=Float out_dtype=BFloat16
f(in_T, out_T, W_T, narrow, 55552) \
f(in_T, out_T, W_T, narrow, 76032) \

@jeejeelee (Collaborator)

> Could you adapt the kernels for these sizes in the next release? No suitable kernel. h_in=16 h_out=76032 dtype=Float out_dtype=Half / No suitable kernel. h_in=16 h_out=55552 dtype=Float out_dtype=BFloat16

Which model do these sizes come from? Also, you can open a PR yourself to add support for them.

@jeejeelee (Collaborator) commented May 21, 2024

> The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error
>
> @jeejeelee Can you support this on 8 GPUs?

I'm trying to solve this issue.

@AJAXLONG

@jeejeelee
Bro, the model sizes are:
f(in_T, out_T, W_T, narrow, 55552) \ is for chinese-alpaca-llama2-7b
f(in_T, out_T, W_T, narrow, 76032) \ is for a self-pretrained 34B model
Could you help adapt these? If I open a PR myself, which development branch should I work on?

@jeejeelee (Collaborator)

> Could you help adapt these? If I open a PR myself, which development branch should I work on?

You can look up how to submit a PR to a project on GitHub.

@ghost commented May 22, 2024

@Edisonwei54 I see from your command that you're running qwen1.5-14b with two LoRAs. May I ask whether you've seen this error: cannot access local variable 'lora_b_k' where it is not associated with a value?

@1149722739

> The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error
>
> @jeejeelee Can you support this on 8 GPUs?
>
> I'm trying to solve this issue.

Has this been resolved yet?

@jeejeelee (Collaborator)

> Has this been resolved yet?

Hi, I've already opened a related PR, see #5036, but it's still under development.

@1149722739

> Hi, I've already opened a related PR, see #5036, but it's still under development.

OK.

@zhanghanweii

> You can add f(in_T, out_T, W_T, narrow, 15360) \ at https://github.com/vllm-project/vllm/blob/main/csrc/punica/bgmv/bgmv_config.h#L48, then rebuild vLLM (version 0.4.0).

Hi, I installed the vllm package with pip install vllm, and I can't find the vllm/blob/main/csrc/punica/bgmv/bgmv_config.h file you mentioned.

@jeejeelee (Collaborator)

> Hi, I installed the vllm package with pip install vllm, and I can't find the vllm/blob/main/csrc/punica/bgmv/bgmv_config.h file you mentioned.

Hi, clone the source and you'll find it at vllm/csrc/punica/bgmv/bgmv_config.h.
If your problem is specifically the 15360 size, just install the latest release; it has already been fixed there.

@liangxiao777

Hello, I hit the same problem when deploying Qwen2-7B after fine-tuning:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 168, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 366, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 324, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 442, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 172, in __init__
[rank0]:     self._initialize_kv_caches()
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 249, in _initialize_kv_caches
[rank0]:     self.model_executor.determine_num_available_blocks())
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 106, in determine_num_available_blocks
[rank0]:     return self.driver_worker.determine_num_available_blocks()
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/worker/worker.py", line 139, in determine_num_available_blocks
[rank0]:     self.model_runner.profile_run()
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 888, in profile_run
[rank0]:     self.execute_model(seqs, kv_caches)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 808, in execute_model
[rank0]:     hidden_states = model_executable(**execute_model_kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 316, in forward
[rank0]:     hidden_states = self.model(input_ids, positions, kv_caches,
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 253, in forward
[rank0]:     hidden_states, residual = layer(
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 216, in forward
[rank0]:     hidden_states = self.mlp(hidden_states)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 75, in forward
[rank0]:     gate_up, _ = self.gate_up_proj(x)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/lora/layers.py", line 461, in forward
[rank0]:     output_parallel = self.apply(input_, bias)
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/lora/layers.py", line 585, in apply
[rank0]:     _apply_lora_packed_nslice(
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/lora/layers.py", line 127, in _apply_lora_packed_nslice
[rank0]:     add_lora_slice(output, x, lora_a_stacked[slice_idx],
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/lora/punica.py", line 203, in add_lora_slice
[rank0]:     punica_kernels.dispatch_bgmv_low_level(
[rank0]: RuntimeError: No suitable kernel. h_in=16 h_out=18944 dtype=Float out_dtype=BFloat16

> (quoting the earlier exchange about adding f(in_T, out_T, W_T, narrow, 15360) \ to csrc/punica/bgmv/bgmv_config.h, rebuilding, and finding the file in a source checkout)

@jeejeelee (Collaborator)

> (quoting the Qwen2-7B traceback and the 15360 exchange above)

The 18944 size isn't supported either; you can fix it the same way as 15360.

@jeejeelee (Collaborator)

@liangxiao777 FYI #5441

@NiuBlibing (Contributor)

Same error with a Qwen-72B-Instruct LoRA:

RuntimeError: No suitable kernel. h_in=16 h_out=3696 dtype=Float out_dtype=BFloat16

@gtpgg1013

Hi, thanks for your help. I ran into a similar situation after fine-tuning.
My error:
RuntimeError: No suitable kernel. h_in=16 h_out=19200 dtype=Float out_dtype=BFloat16

Could you say how I can handle this? And how does the --tensor-parallel-size option avoid this error?
Thanks a lot in advance!

> The current Punica kernel can't process h_out=3424; you can set --tensor-parallel-size 2 to avoid this error

@jeejeelee (Collaborator)

> Hi, thanks for your help. I ran into a similar situation after fine-tuning. My error: RuntimeError: No suitable kernel. h_in=16 h_out=19200 dtype=Float out_dtype=BFloat16
>
> Could you say how I can handle this? And how does the --tensor-parallel-size option avoid this error? Thanks a lot in advance!

Hi, although the error messages are similar, your situation is not the same. You can refer to #5441 to add support for 19200.

3424 is not divisible by 64, so the Punica kernel can't process h_out=3424. However, 19200 is divisible by 64 and can be supported.
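To make the two cases concrete, here is a quick classification of the h_out values reported across this thread (a sketch; the multiple-of-64 rule is the one described in the comment above):

#include <cstdio>

// h_out values reported in this issue. Multiples of 64 only need a size
// entry compiled into bgmv_config.h; the rest come from tensor-parallel
// shard sizes the punica kernels cannot address at all.
int main() {
    const int reported[] = {3424, 3696, 15360, 18944, 19200, 55552, 65280, 76032};
    for (int h : reported) {
        if (h % 64 == 0)
            std::printf("h_out=%5d: multiple of 64 -> add f(in_T, out_T, W_T, narrow, %d) and rebuild\n", h, h);
        else
            std::printf("h_out=%5d: not a multiple of 64 -> change --tensor-parallel-size\n", h);
    }
    return 0;
}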

@gtpgg1013

> Hi, although the error messages are similar, your situation is not the same. You can refer to #5441 to add support for 19200.
>
> 3424 is not divisible by 64, so the Punica kernel can't process h_out=3424. However, 19200 is divisible by 64 and can be supported.

Thanks for your help! You mean referring to #5441, changing the two files (csrc/punica/bgmv/bgmv_config.h, tests/lora/test_punica.py), and then building and installing again. I am going to try that! Thanks!

@jeejeelee (Collaborator)

> Thanks for your help! You mean referring to #5441, changing the two files (csrc/punica/bgmv/bgmv_config.h, tests/lora/test_punica.py), and then building and installing again. I am going to try that! Thanks!

Yes, you can try it by:

export VLLM_INSTALL_PUNICA_KERNELS=1  # build for multi-LoRA capability
pip install -e .  # This may take 5-10 minutes.

@mgoin (Member) commented Aug 2, 2024

This should be resolved with the newly landed Triton kernels in #5036.

mgoin closed this as completed Aug 2, 2024
@4daJKong

Dear all,

I'm on vLLM 0.4.3 and face a similar error: RuntimeError: No suitable kernel. h_in=32 h_out=65280 dtype=Float out_dtype=Half
I installed vLLM with pip install vllm==0.4.3. I know the latest version has solved this problem, but it's difficult for me to upgrade because of compatibility issues on my server (e.g. transformers and pytorch).

Besides, I couldn't find vllm/csrc/punica/bgmv/bgmv_config.h after git clone https://github.com/vllm-project/vllm.git, and I still don't know how to compile vLLM after editing it.

For now I have to merge the LoRA into the base model for inference, but the merged model takes a lot of disk space. Is there any other way I can solve this issue?

@zhanghanweii @jeejeelee
