
bug: Doesn't load local models using unicode paths? #3199

Closed
r57zone opened this issue Jul 25, 2024 · 2 comments
Labels
type: bug Something isn't working

Comments


r57zone commented Jul 25, 2024

  • I have searched the existing issues

Current behavior

If I try to import the file "D:\Софт\Windows\Базы данных\LM Studio - AI чаты общения\Модели\lmstudio-community\Meta-Llama-3.1-8B-Instruct-GGUF\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf", it is imported, but the model does not run.

If I import "D:\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf", it works fine.
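The failing path differs from the working one only in its non-ASCII (Cyrillic) folder names. One plausible mechanism — an assumption, not confirmed by the logs — is that the UTF-8 path bytes are interpreted in a legacy ANSI codepage somewhere before the file is opened, producing a garbled name that does not exist on disk. A minimal Python sketch of that mojibake:

```python
# Hypothetical illustration: the UTF-8 bytes of a Cyrillic path segment,
# if misread in the Windows-1251 ANSI codepage before reaching fopen(),
# yield a garbled name, so the file lookup fails with "not found".
segment = "Софт"
garbled = segment.encode("utf-8").decode("cp1251")
print(garbled)             # mojibake: "РЎРѕС„С‚"
print(garbled == segment)  # False: no such file on disk
```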

Minimum reproduction step

  1. Settings -> Import model
  2. Drag and drop "D:\Софт\Windows\Базы данных\LM Studio - AI чаты общения\Модели\lmstudio-community\Meta-Llama-3.1-8B-Instruct-GGUF\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf".
  3. Start the model from the list.

Expected behavior

The model should load from any path.

Screenshots / Logs

2024-07-25T09:42:15.846Z [CORTEX]::Debug: Killing PID 13700
2024-07-25T09:42:15.846Z [CORTEX]::Debug: Request to kill cortex
2024-07-25T09:42:15.982Z [CORTEX]::Debug: cortex process is terminated
2024-07-25T09:42:16.054Z [CORTEX]::CPU information - 7
2024-07-25T09:42:16.055Z [CORTEX]::Debug: Killing PID 13700
2024-07-25T09:42:16.054Z [CORTEX]::Debug: Request to kill cortex
2024-07-25T09:42:16.159Z [CORTEX]::Failed to kill PID - sending request to kill
2024-07-25T09:42:16.161Z [CORTEX]::Debug: cortex process is terminated
2024-07-25T09:42:16.161Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-07-25T09:42:16.161Z [APP]::C:\Users\r57zone\jan\extensions\@janhq\inference-cortex-extension\dist\bin\win-cpu
2024-07-25T09:42:16.161Z [CORTEX]::Debug: Spawn cortex at path: C:\Users\r57zone\jan\extensions\@janhq\inference-cortex-extension\dist\bin\win-cpu\cortex-cpp.exe, and args: 1,127.0.0.1,3928
2024-07-25T09:42:16.261Z [CORTEX]::Debug: cortex exited with code: 1
2024-07-25T09:42:16.265Z [CORTEX]::Debug: Loading model with params {"cpu_threads":7,"ctx_len":2048,"embedding":false,"prompt_template":"{system_message}\n### Instruction: {prompt}\n### Response:","llama_model_path":"D:\\Софт\\Windows\\Базы данных\\LM Studio - AI чаты общения\\Модели\\lmstudio-community\\Meta-Llama-3.1-8B-Instruct-GGUF\\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf","system_prompt":"","user_prompt":"\n### Instruction: ","ai_prompt":"\n### Response:","model":"Meta-Llama-3.1-8B-Instruct-Q4_K_M-1","ngl":100}
2024-07-25T09:42:16.265Z [CORTEX]::Debug: cortex is ready
2024-07-25T09:42:16.268Z [CORTEX]::Debug: 20240725 09:42:16.181000 UTC 924 INFO  cortex-cpp version: default_version - main.cc:73
20240725 09:42:16.181000 UTC 924 INFO  cortex.llamacpp version: 0.1.20-30.06.24 - main.cc:78
20240725 09:42:16.181000 UTC 924 INFO  Server started, listening at: 127.0.0.1:3928 - main.cc:81
20240725 09:42:16.181000 UTC 924 INFO  Please load your model - main.cc:82
20240725 09:42:16.181000 UTC 924 INFO  Number of thread is:12 - main.cc:89
20240725 09:42:16.265000 UTC 11376 INFO  CPU instruction set: fpu = 1| mmx = 1| sse = 1| sse2 = 1| sse3 = 1| ssse3 = 1| sse4_1 = 1| sse4_2 = 1| pclmulqdq = 1| avx = 1| avx2 = 1| avx512_f = 0| avx512_dq = 0| avx512_ifma = 0| avx512_pf = 0| avx512_er = 0| avx512_cd = 0| avx512_bw = 0| has_avx512_vl = 0| has_avx512_vbmi = 0| has_avx512_vbmi2 = 0| avx512_vnni = 0| avx512_bitalg = 0| avx512_vpopcntdq = 0| avx512_4vnniw = 0| avx512_4fmaps = 0| avx512_vp2intersect = 0| aes = 1| f16c = 1| - server.cc:277
20240725 09:42:16.265000 UTC 11376 INFO  Loaded engine: cortex.llamacpp - server.cc:305
20240725 09:42:16.265000 UTC 11376 INFO  cortex.llamacpp version: default_version - llama_engine.cc:159
20240725 09:42:16.265000 UTC 11376 ERROR Could not find model in path D:\Софт\Windows\Базы данных\LM Studio - AI чаты общения\Модели\lmstudio-community\Meta-Llama-3.1-8B-Instruct-GGUF\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf - llama_engine.cc:332

2024-07-25T09:42:16.270Z [CORTEX]::Debug: 20240725 09:42:16.265000 UTC 11376 DEBUG [LoadModelImpl] cache_type: f16 - llama_engine.cc:356
20240725 09:42:16.265000 UTC 11376 DEBUG [LoadModelImpl] Enabled Flash Attention - llama_engine.cc:365
20240725 09:42:16.265000 UTC 11376 DEBUG [LoadModelImpl] stop: null
 - llama_engine.cc:386
{"timestamp":1721900536,"level":"INFO","function":"LoadModelImpl","line":409,"message":"system info","n_threads":7,"total_threads":12,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | "}

2024-07-25T09:42:16.271Z [CORTEX]::Debug: Load model success with response {}
2024-07-25T09:42:16.271Z [CORTEX]::Debug: {"timestamp":1721900536,"level":"ERROR","function":"LoadModel","line":183,"message":"llama.cpp unable to load model","model":""}
20240725 09:42:16.265000 UTC 11376 ERROR Error loading the model - llama_engine.cc:414

2024-07-25T09:42:16.271Z [CORTEX]::Error: llama_model_load: error loading model: llama_model_loader: failed to load model from 

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model ''

2024-07-25T09:42:16.272Z [CORTEX]::Debug: Validate model state with response 409
2024-07-25T09:42:16.272Z [CORTEX]::Debug: Validate model state failed with response "Conflict"
2024-07-25T09:42:16.272Z [CORTEX]::Error: Validate model status failed

Jan version

0.5.2

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Environment details

No response

@r57zone added the type: bug label on Jul 25, 2024

r57zone commented Jul 25, 2024

If there are Russian letters in the folder names, the model does not start.

Van-QA (Contributor) commented Jul 28, 2024

@r57zone,
Sorry for the inconvenience,
This issue is related to #1713. The dev team did spend time investigating a solution, but we are working on a refactored version of Jan x Cortex, so it might take a while for this to be resolved.
Feel free to follow up via the related issue.
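Until that refactor lands, a practical workaround is to keep model files under an ASCII-only path. A minimal pre-import check could look like this (`has_non_ascii` is a hypothetical helper for illustration, not part of Jan or Cortex):

```python
def has_non_ascii(path: str) -> bool:
    """Return True if the path contains characters outside ASCII,
    which this issue suggests can break model loading on Windows."""
    return not path.isascii()

# The working path from this issue passes the check;
# any path with Cyrillic folder names does not.
print(has_non_ascii(r"D:\Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf"))  # False
print(has_non_ascii(r"D:\Софт\Модели\model.gguf"))                  # True
```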

@Van-QA closed this as not planned on Jul 28, 2024
github-project-automation bot moved this to Done in Menlo on Jul 28, 2024