
webui startup error; dialogue page reports errors #3603

Closed
huitao2018 opened this issue Apr 1, 2024 · 3 comments
Labels: bug (Something isn't working)
@huitao2018

Operating system: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17
Python version: 3.9.10 (main, Mar 28 2024, 11:41:54)
Project version: v0.2.10
langchain version: 0.1.5, fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently loaded LLM models: ['chatglm3-6b', 'openai-api'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/numax/langchain-Chatchat/Langchain-Chatchat/model/chatglm3-6b-int4',
'model_path_exists': True,
'port': 20002}
{'api_base_url': 'https://api.openai.com/v1',
'api_key': '',
'device': 'auto',
'host': '0.0.0.0',
'infer_turbo': False,
'model_name': 'gpt-4',
'online_api': True,
'openai_proxy': '',
'port': 20002}
Current embeddings model: bge-large-zh @ cuda

Server runtime information:
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

Git integration is disabled.

Streamlit requires Git 2.7.0 or later, but you have 1.8.3.1.
Git is used by Streamlit Cloud (https://streamlit.io/cloud).
To enable this feature, please update Git.

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

When opening the UI, the following errors appear:

/usr/local/python3/lib/python3.9/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.chat_models import ChatOpenAI.

To install langchain-community run pip install -U langchain-community.
warnings.warn(
/usr/local/python3/lib/python3.9/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.llms import OpenAI.

To install langchain-community run pip install -U langchain-community.
warnings.warn(
/usr/local/python3/lib/python3.9/site-packages/pydantic/_internal/_config.py:322: UserWarning: Valid config keys have changed in V2:

  • 'schema_extra' has been renamed to 'json_schema_extra'
    warnings.warn(message, UserWarning)
/usr/local/python3/lib/python3.9/site-packages/langchain/document_loaders/__init__.py:36: LangChainDeprecationWarning: Importing document loaders from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.document_loaders import JSONLoader.

To install langchain-community run pip install -U langchain-community.
warnings.warn(
2024-04-01 17:55:44,829 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44,830 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44,830 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44,937 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44,938 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44,940 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [Errno 111] Connection refused
2024-04-01 17:55:44.940 Uncaught app exception
Traceback (most recent call last):
  File "/usr/local/python3/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "/home/numax/langchain-Chatchat/Langchain-Chatchat/webui.py", line 61, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "/home/numax/langchain-Chatchat/Langchain-Chatchat/webui_pages/dialogue/dialogue.py", line 163, in dialogue_page
    running_models = list(api.list_running_models())
TypeError: 'NoneType' object is not iterable
2024-04-01 17:56:29,561 - utils.py[line:95] - ERROR: ConnectError: error when post /knowledge_base/search_docs: [Errno 111] Connection refused
2024-04-01 17:56:29,561 - utils.py[line:95] - ERROR: ConnectError: error when post /knowledge_base/search_docs: [Errno 111] Connection refused
2024-04-01 17:56:29,561 - utils.py[line:95] - ERROR: ConnectError: error when post /knowledge_base/search_docs: [Errno 111] Connection refused
2024-04-01 17:56:29.561 Uncaught app exception
Traceback (most recent call last):
  File "/usr/local/python3/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "/home/numax/langchain-Chatchat/Langchain-Chatchat/webui.py", line 61, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "/home/numax/langchain-Chatchat/Langchain-Chatchat/webui_pages/knowledge_base/knowledge_base.py", line 328, in knowledge_base_page
    data = [
  File "/home/numax/langchain-Chatchat/Langchain-Chatchat/webui_pages/knowledge_base/knowledge_base.py", line 329, in <listcomp>
    {"seq": i + 1, "id": x["id"], "page_content": x["page_content"], "source": x["metadata"].get("source"),
TypeError: string indices must be integers

(screenshots of the errors attached)

@huitao2018 added the bug label Apr 1, 2024

dosubot bot commented Apr 1, 2024

Hey @huitao2018, great to see you back! Hope all's been well on your end since our last interaction. 🚀

Based on the issues you're encountering, here are some steps to resolve them:

  1. Connection Refused Error: This error usually indicates that the service you're trying to connect to is not running. Ensure that all necessary backend services Langchain depends on are up and running. If you're running services in Docker containers or similar, ensure they're correctly started.

  2. TypeError: 'NoneType' object is not iterable: This error can occur if a function expects a list or a similar iterable object but gets None instead. Check the stack trace to find where this occurs and ensure that the variable in question is correctly initialized or checked for None before iterating.

  3. Deprecation Warnings from Langchain: Deprecation warnings are common when libraries are updated and certain functions or features are phased out. To address these, check the Langchain documentation or GitHub repository for information on what has been deprecated and the recommended alternatives. Updating your code to use the new alternatives will resolve these warnings.

  4. Compatibility with Latest Versions and Dependencies: Ensure that you have the correct versions of all dependencies installed. You can do this by creating a virtual environment and installing the specific versions required by Langchain-Chatchat v0.2.10. Use the pip install -r requirements.txt command, ensuring that the requirements.txt file specifies the correct versions. If specific versions are not listed, check the project's documentation or GitHub issues for compatibility notes.

  5. Git Version Incompatibility: If you're facing issues with Git, ensure that your Git version is compatible with the operations you're trying to perform. If Langchain requires features from a newer version of Git, you may need to update Git. Use your Linux distribution's package manager to update Git.

  6. ImportError Related to Numpy: This error can occur if Numpy is not installed or if there's a version mismatch. First, try reinstalling Numpy with pip install numpy --upgrade. If Langchain requires a specific version of Numpy, install that version directly using pip install numpy==<version_number>.

  7. Launching the Web UI: Before launching the web UI, ensure all the above steps are correctly addressed. Then, follow the instructions provided in the Langchain-Chatchat documentation for starting the web server. This usually involves running a Python script, such as python webui.py.

Remember to activate your virtual environment whenever working on the project to ensure that the correct dependencies are used. If any errors occur during these steps, the error messages and stack traces will provide valuable information for troubleshooting.

To debug the controller_address + "/list_models" endpoint and the client.post(controller_address + "/list_models") request in the Langchain-Chatchat codebase, follow these steps:

  1. Verify Endpoint Registration: Ensure that the endpoint "/list_models" is correctly registered in the FastAPI application. However, in the provided code snippet, there is no endpoint directly matching "/list_models". The closest matches are "/llm_model/list_running_models" and "/llm_model/list_config_models". Verify you are using the correct endpoint path in your request.

  2. Check Method Type: Ensure that the request method type matches the endpoint's method type. The closest matching endpoints "/llm_model/list_running_models" and "/llm_model/list_config_models" are registered with the POST method, which matches the method used in the request (client.post).

  3. Review Function Definitions: For the endpoints "/llm_model/list_running_models" and "/llm_model/list_config_models", ensure their corresponding functions list_running_models and list_config_models are implemented correctly. Unfortunately, the implementations of these functions are not provided in the context, so you would need to review them in the actual codebase.

  4. Check Request Payload and Headers: If the endpoint expects a specific payload or headers, ensure that your request includes these. Since the provided code does not detail the expected request body for the endpoints, verify this against the actual function definitions or API documentation.

  5. Logging and Error Handling: Add logging within the endpoint functions (list_running_models, list_config_models) to capture incoming requests and any errors or processing logic. This can help identify issues with the request data or internal processing.

  6. Test with Tools: Use API testing tools like Postman or curl to manually test the endpoint with known good data to ensure it behaves as expected.

  7. Review FastAPI Documentation: If you're unfamiliar with FastAPI or need a refresher on how to define and work with endpoints, reviewing the FastAPI documentation can be helpful.

  8. Run the Application in Debug Mode: Running the FastAPI application in debug mode can provide more verbose output, which can be helpful for identifying issues.

Given the provided context, it's important to clarify that there is no direct "/list_models" endpoint. You should verify the correct endpoint path and follow the debugging steps accordingly for the closest matching endpoints.
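The manual test in step 6 can also be done from the standard library, without Postman or curl. The base URL below is an assumption; substitute the host and port from your API_SERVER configuration:

```python
import json
import urllib.error
import urllib.request


def probe_running_models(base_url):
    """POST to the registered endpoint and report whether it is reachable.
    Returns (ok, payload_or_error)."""
    req = urllib.request.Request(
        f"{base_url}/llm_model/list_running_models",  # note: not /list_models
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return True, json.load(resp)
    except (urllib.error.URLError, OSError) as exc:
        return False, str(exc)


# Assumed host/port; replace with your API_SERVER settings.
ok, detail = probe_running_models("http://127.0.0.1:7861")
print("reachable:", ok, "-", detail)
```

A "Connection refused" result here confirms the API server itself is down, which matches the errors in the logs above.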

To start the necessary backend services that Langchain depends on and verify that they are running correctly, follow these specific steps:

  1. Start the Controller Service:

    • Use the run_controller function. This function initializes and runs the FastAPI app for the controller service. It uses Uvicorn to serve the app on the host and port specified in the FSCHAT_CONTROLLER configuration.
  2. Start the Model Worker Service:

    • Use the run_model_worker function for each model you want to run. This function takes a model name and other parameters, initializes the FastAPI app for the model worker, and runs it using Uvicorn. The model worker configuration is retrieved using get_model_worker_config.
  3. Start the OpenAI API Service:

    • Use the run_openai_api function. This function initializes and runs the FastAPI app for the OpenAI API service, serving it on the host and port specified in the FSCHAT_OPENAI_API configuration.
  4. Start the API Server:

    • Use the run_api_server function. This function initializes and runs the main API server using Uvicorn on the host and port specified in the API_SERVER configuration.
  5. Start the Web UI:

    • Use the run_webui function. This function starts the Streamlit server for the Web UI on the host and port specified in the WEBUI_SERVER configuration.

To verify that these services are running correctly, you can:

  • Check the logs for any startup errors.
  • Use tools like curl or Postman to send requests to the services' endpoints and verify the responses.
  • For the Web UI, open the specified host and port in a web browser to see if the UI loads correctly.

Each service logs its status and any errors encountered during startup, so monitoring the log files specified in the LOG_PATH configuration can also help verify that the services are running as expected.
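A quick way to run these checks is to probe each service's TCP port. The ports below are illustrative defaults only; take the real values from your FSCHAT_CONTROLLER, FSCHAT_OPENAI_API, API_SERVER, and WEBUI_SERVER configurations:

```python
import socket

# Illustrative default ports -- substitute the values from your configs.
SERVICES = {
    "controller": ("127.0.0.1", 20001),
    "openai_api": ("127.0.0.1", 20000),
    "api_server": ("127.0.0.1", 7861),
    "webui": ("127.0.0.1", 8501),
}


def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for name, (host, port) in SERVICES.items():
    status = "up" if check_port(host, port) else "DOWN (connection refused?)"
    print(f"{name:12s} {host}:{port} -> {status}")
```

Any service reported as DOWN here will produce exactly the "[Errno 111] Connection refused" errors seen in the web UI logs.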

I hope this helps! If you have any further questions, feel free to ask.

@zRzRzRzRzRzRzR self-assigned this Apr 3, 2024
@zRzRzRzRzRzRzR
Collaborator

Are the dependencies correct? It feels like they weren't installed properly.

@luli-criminal

Hello, has this been resolved?

@zRzRzRzRzRzRzR closed this as not planned May 7, 2024