
Commit

update
li-plus committed Jul 25, 2024
1 parent 1ad261c commit 1ad8694
Showing 2 changed files with 15 additions and 10 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -486,9 +486,11 @@ python3 examples/openai_client.py --base_url http://127.0.0.1:8000/v1 --tool_cal
Request GLM4V with image inputs:
```sh
 # request with local image file
-python3 examples/openai_client.py --base_url http://127.0.0.1:8000/v1 --tool_call --prompt 描述这张图片 --image examples/03-Confusing-Pictures.jpg
+python3 examples/openai_client.py --base_url http://127.0.0.1:8000/v1 --prompt "描述这张图片" \
+    --image examples/03-Confusing-Pictures.jpg --temp 0
 # request with image url
-python3 examples/openai_client.py --base_url http://127.0.0.1:8000/v1 --tool_call --prompt 描述这张图片 --image https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg
+python3 examples/openai_client.py --base_url http://127.0.0.1:8000/v1 --prompt "描述这张图片" \
+    --image https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg --temp 0
```
With this API server as backend, ChatGLM.cpp models can be seamlessly integrated into any frontend that uses OpenAI-style API, including [mckaywrigley/chatbot-ui](https://github.com/mckaywrigley/chatbot-ui), [fuergaosi233/wechat-chatgpt](https://github.com/fuergaosi233/wechat-chatgpt), [Yidadaa/ChatGPT-Next-Web](https://github.com/Yidadaa/ChatGPT-Next-Web), and more.
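The commands above (whose prompt 描述这张图片 means "Describe this image") send an OpenAI-style multimodal chat request. As a minimal sketch of the payload shape such a client submits, the following builds the message list for a text-plus-image request without contacting any server; the `build_image_messages` helper name is an illustration, not part of the repository:

```python
# Sketch of the OpenAI-style multimodal message payload for an image request.
# No network call is made here; a real client would pass this `messages` list
# to client.chat.completions.create().

def build_image_messages(prompt: str, image_url: str) -> list:
    """Build an OpenAI-style message list combining text and an image URL."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_image_messages(
    "Describe this image",
    "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg",
)
```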
19 changes: 11 additions & 8 deletions examples/openai_client.py
@@ -5,12 +5,14 @@
from openai import OpenAI

parser = argparse.ArgumentParser()
-parser.add_argument("--api_key", default="Bearer chatglm-cpp-example", type=str)
-parser.add_argument("--base_url", default=None, type=str)
-parser.add_argument("--stream", action="store_true")
-parser.add_argument("--prompt", default="你好", type=str)
-parser.add_argument("--tool_call", action="store_true")
-parser.add_argument("--image", default=None, type=str)
+parser.add_argument("--api_key", default="Bearer chatglm-cpp-example", type=str, help="API key of OpenAI api server")
+parser.add_argument("--base_url", default=None, type=str, help="base url of OpenAI api server")
+parser.add_argument("--stream", action="store_true", help="enable stream generation")
+parser.add_argument("-p", "--prompt", default="你好", type=str, help="prompt to start generation with")
+parser.add_argument("--tool_call", action="store_true", help="enable function call")
+parser.add_argument("--image", default=None, type=str, help="path to the input image for visual language models")
+parser.add_argument("--temp", default=0.95, type=float, help="temperature")
+parser.add_argument("--top_p", default=0.7, type=float, help="top-p sampling")
args = parser.parse_args()

client = OpenAI(api_key=args.api_key, base_url=args.base_url)
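The commit's new `--temp` and `--top_p` flags can be exercised in isolation. A minimal sketch that reproduces just those parser options (defaults taken from the diff) and parses an explicit argument list, so `sys.argv` is untouched:

```python
import argparse

# Reproduce only the sampling-related options from the extended parser;
# defaults (0.95 and 0.7) match the values added in this commit.
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--prompt", default="你好", type=str)
parser.add_argument("--temp", default=0.95, type=float)
parser.add_argument("--top_p", default=0.7, type=float)

# Parse an explicit list, mirroring the README's `--temp 0` greedy-style request.
args = parser.parse_args(["--temp", "0", "-p", "Describe this image"])
```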
@@ -48,13 +50,14 @@
user_content = args.prompt

messages = [{"role": "user", "content": user_content}]
+response = client.chat.completions.create(
+    model="default-model", messages=messages, stream=args.stream, temperature=args.temp, top_p=args.top_p, tools=tools
+)
 if args.stream:
-    response = client.chat.completions.create(model="default-model", messages=messages, stream=True, tools=tools)
     for chunk in response:
         content = chunk.choices[0].delta.content
         if content is not None:
             print(content, end="", flush=True)
     print()
 else:
-    response = client.chat.completions.create(model="default-model", messages=messages, tools=tools)
     print(response.choices[0].message.content)
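The streaming branch above accumulates `delta.content` from each chunk, skipping the `None` deltas a server may emit. That pattern can be run offline with stand-in chunk objects in place of a live response; the `fake_stream` generator below is an illustration, not part of the repository:

```python
from types import SimpleNamespace

def fake_stream(text: str):
    """Yield chunk-like objects mimicking an OpenAI streaming response."""
    for ch in text:
        delta = SimpleNamespace(content=ch)
        yield SimpleNamespace(choices=[SimpleNamespace(delta=delta)])
    # Servers often send a final chunk whose delta carries no content.
    yield SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=None))])

# Same consumption logic as the example client, minus the printing.
pieces = []
for chunk in fake_stream("hello"):
    content = chunk.choices[0].delta.content
    if content is not None:
        pieces.append(content)
result = "".join(pieces)
```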
