mPlug-Owl3 inference bug report #2172

Closed
goodstudent9 opened this issue Sep 30, 2024 · 0 comments · Fixed by #2175
Describe the bug

When running inference with mPLUG-Owl3 as in the provided best-practice example, the following bug occurs whenever the task includes images.
I remember it ran fine before I updated the ms-swift repository; I don't know what happened.
Thank you for your help!

The inference code and error message follow.

Code:

```python
import sys
sys.path.append('/data1/mplug_owl3/mPLUG-Owl3-7B-240728')
from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType, get_default_template_type
)
from swift.tuners import Swift
from typing import List

ckpt_dir = '/output/mplug-owl3-7b-chat/v7-20240927-205109/checkpoint-300'
# ckpt_dir = '/output/mplug-owl3-7b-chat/v46-20240929-161511/saved-1350'
model_id_or_path = 'mplug_owl3/mPLUG-Owl3-7B-240728'


class mplug_owl3_agent():
    def __init__(self, ckpt_dir, model_id_or_path, cuda_device) -> None:
        # Load the base model and tokenizer, then attach the fine-tuned checkpoint.
        model_type = ModelType.mplug_owl3_7b_chat
        template_type = get_default_template_type(model_type)
        model, tokenizer = get_model_tokenizer(
            model_type, model_id_or_path=model_id_or_path,
            model_kwargs={'device_map': f'cuda:{cuda_device}'})
        self.model = Swift.from_pretrained(model, ckpt_dir, inference_mode=True)
        self.model.generation_config.max_new_tokens = 1024
        self.template = get_template(template_type, tokenizer)
        self.system_prompt = ""

    def do_inference(self, input: str, image_list: List[str]):
        # Run a single multimodal inference call through ms-swift.
        response, history = inference(self.model, self.template, input, images=image_list)
        return response, history


if __name__ == '__main__':
    agent = mplug_owl3_agent(ckpt_dir, model_id_or_path, '1')
    for i in range(10):
        response, history = agent.do_inference("Describe this image<image>", ['demo.png'])
        print(response)
        print(history)
```

Error:

```
Traceback (most recent call last):
  ...
    response, history = inference(self.model, self.template, input, images=image_list)
  File "/data1/miniconda3/envs/owl3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data1/ms-swift/swift/llm/utils/utils.py", line 869, in inference
    generate_ids = model.generate(streamer=streamer, generation_config=generation_config, **inputs)
  File "/data1/miniconda3/envs/owl3/lib/python3.9/site-packages/peft/peft_model.py", line 1638, in generate
    outputs = self.base_model.generate(*args, **kwargs)
  File "/data1/ms-swift/swift/llm/utils/model.py", line 4647, in _new_func
    res = _old_func(submodel, *args, **kwargs)
  File "/data1/miniconda3/envs/owl3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data1/miniconda3/envs/owl3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1689, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/data1/miniconda3/envs/owl3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1243, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['num_images'] (note: typos in the generate arguments will also show up in this list)
```
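
Until the fix in #2175 lands, rolling ms-swift back to the pre-update version avoids the error, as noted above. Alternatively, here is a sketch of a possible stopgap, under the untested assumption that the stray `num_images` key injected into the generate kwargs is safe to drop for this model; the wrapper name `_generate_without_num_images` is illustrative, not part of ms-swift:

```python
# Sketch of a stopgap, not a verified fix: wrap model.generate so the stray
# 'num_images' kwarg is removed before transformers validates model_kwargs.
# Assumes dropping 'num_images' does not change the model's behavior.
agent = mplug_owl3_agent(ckpt_dir, model_id_or_path, '1')

_orig_generate = agent.model.generate

def _generate_without_num_images(*args, **kwargs):
    kwargs.pop('num_images', None)  # drop the kwarg that generate() rejects
    return _orig_generate(*args, **kwargs)

agent.model.generate = _generate_without_num_images  # patch this instance only
```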