
[Bug]: AttributeError: 'MiniCPMVConfig' object has no attribute 'version' #6814

Closed
FrankYoungchen opened this issue Jul 26, 2024 · 7 comments · Fixed by #6939
Labels
bug Something isn't working

Comments

@FrankYoungchen

Your current environment

/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700K
Stepping: 1
CPU MHz: 976.504
CPU max MHz: 6900.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 30720K
NUMA node0 CPU(s): 0-23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.41.2
[pip3] triton==2.3.1
[conda] blas 1.0 mkl defaults
[conda] mkl 2023.1.0 h213fc3f_46344 defaults
[conda] mkl-service 2.4.0 py310h5eee18b_1 defaults
[conda] mkl_fft 1.3.8 py310h5eee18b_0 defaults
[conda] mkl_random 1.2.4 py310hdb19cb5_0 defaults
[conda] numpy 1.26.4 py310h5f9d8c6_0 defaults
[conda] numpy-base 1.26.4 py310hb5e798b_0 defaults
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
[conda] transformers 4.44.0.dev0 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-23 0 N/A

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

How you are installing vllm

git clone https://github.com/vllm-project/vllm.git
cd vllm

export VLLM_INSTALL_PUNICA_KERNELS=1 # optionally build for multi-LoRA capability

pip install -e . # This may take 5-10 minutes.

@FrankYoungchen FrankYoungchen added the installation Installation problems label Jul 26, 2024
@FrankYoungchen
Author

python examples/minicpmv_example.py

The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'PreTrainedTokenizerFastWrapper'.
The class this function is called from is 'MiniCPMVTokenizerFast'.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 07-26 11:26:19 llm_engine.py:176] Initializing an LLM engine (v0.5.3.post1) with config: model='openbmb/MiniCPM-Llama3-V-2_5', speculative_config=None, tokenizer='openbmb/MiniCPM-Llama3-V-2_5', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=openbmb/MiniCPM-Llama3-V-2_5, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 07-26 11:26:23 model_runner.py:720] Starting to load model openbmb/MiniCPM-Llama3-V-2_5...
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/examples/minicpmv_example.py", line 14, in
[rank0]: llm = LLM(model=MODEL_NAME,
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/entrypoints/llm.py", line 155, in init
[rank0]: self.llm_engine = LLMEngine.from_engine_args(
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/engine/llm_engine.py", line 441, in from_engine_args
[rank0]: engine = cls(
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/engine/llm_engine.py", line 251, in init
[rank0]: self.model_executor = executor_class(
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/executor/executor_base.py", line 47, in init
[rank0]: self._init_executor()
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/executor/gpu_executor.py", line 36, in _init_executor
[rank0]: self.driver_worker.load_model()
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/worker/worker.py", line 139, in load_model
[rank0]: self.model_runner.load_model()
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/worker/model_runner.py", line 722, in load_model
[rank0]: self.model = get_model(model_config=self.model_config,
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/model_executor/model_loader/init.py", line 21, in get_model
[rank0]: return loader.load_model(model_config=model_config,
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/model_executor/model_loader/loader.py", line 280, in load_model
[rank0]: model = _initialize_model(model_config, self.load_config,
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/model_executor/model_loader/loader.py", line 111, in _initialize_model
[rank0]: return model_class(config=model_config.hf_config,
[rank0]: File "/home/clientadmin/cyk/code/llm/vllm/vllm/model_executor/models/minicpmv.py", line 393, in init
[rank0]: self.version = float(self.config.version)
[rank0]: File "/home/clientadmin/anaconda3/envs/vllm/lib/python3.10/site-packages/transformers/configuration_utils.py", line 264, in getattribute
[rank0]: return super().getattribute(key)
[rank0]: AttributeError: 'MiniCPMVConfig' object has no attribute 'version'
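The failing line is `self.version = float(self.config.version)` in `minicpmv.py`: when the checkpoint's config.json carries no `version` key, the attribute lookup itself raises before `float()` is ever called. A minimal sketch of the failure mode, using a plain stub in place of the real `MiniCPMVConfig` (the stub class and its fields are illustrative, not vLLM's):

```python
class ConfigStub:
    """Hypothetical stand-in for a HF config built from a config.json
    that predates the "version" field."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

cfg = ConfigStub(hidden_size=4096)  # note: no "version" field

try:
    version = float(cfg.version)  # mirrors minicpmv.py line 393
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```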

@HwwwwwwwH
Contributor

Did you pull the latest code?
In config.json, we updated the version field to let MiniCPMV (vLLM) know which model (MiniCPM-V-2 or MiniCPM-Llama3-V-2_5) you are using. So you can try updating your code and running again.

@LSC527
LSC527 commented Jul 26, 2024

@HwwwwwwwH This may need to be updated too: https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/blob/main/config.json

@HwwwwwwwH
Contributor

Right! I'll do it as soon as possible.

@DarkLight1337
Member

Perhaps we can infer the version automatically based on the vision config?
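One way to sketch that idea (purely illustrative: the function, the `hidden_size` heuristic, and the cutoff are assumptions, not necessarily the fix that landed in #6939):

```python
def infer_minicpmv_version(config) -> float:
    """Hypothetical fallback: prefer an explicit "version" field, else
    guess from the language-model width. MiniCPM-Llama3-V-2_5 wraps a
    Llama-3-8B backbone (hidden_size 4096), while MiniCPM-V-2 wraps the
    much smaller MiniCPM backbone, so the width separates the two."""
    explicit = getattr(config, "version", None)
    if explicit is not None:
        return float(explicit)
    return 2.5 if getattr(config, "hidden_size", 0) >= 4096 else 2.0
```

This keeps old checkpoints loadable without editing config.json, at the cost of a heuristic that must be revisited whenever a new model version ships.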

@DarkLight1337 DarkLight1337 added bug Something isn't working and removed installation Installation problems labels Jul 26, 2024
@DarkLight1337 DarkLight1337 changed the title [Installation]: AttributeError: 'MiniCPMVConfig' object has no attribute 'version' [Bug]: AttributeError: 'MiniCPMVConfig' object has no attribute 'version' Jul 26, 2024
@kenvix

kenvix commented Jul 28, 2024

@HwwwwwwwH It looks like the repository on the ModelScope site needs to be updated as well. I pulled MiniCPM-V from ModelScope and got this error.

@iceflame89

@kenvix OpenBMB/MiniCPM-Llama3-V-2_5 on ModelScope is updated now.
