Android: Failed to lookup symbol 'llama_backend_init': undefined symbol: llama_backend_init #27

Open
riverzhou opened this issue Feb 27, 2024 · 6 comments

riverzhou commented Feb 27, 2024

Version:
llama_cpp_dart 0.0.6
llama.cpp tag: b2277

logcat:

02-28 00:21:29.079  5839  8926 E flutter : [ERROR:flutter/runtime/dart_isolate.cc(1107)] Unhandled exception:
02-28 00:21:29.079  5839  8926 E flutter : Invalid argument(s): Failed to lookup symbol 'llama_backend_init': undefined symbol: llama_backend_init
02-28 00:21:29.079  5839  8926 E flutter : #0      DynamicLibrary.lookup (dart:ffi-patch/ffi_dynamic_library_patch.dart:33)
02-28 00:21:29.079  5839  8926 E flutter : #1      llama_cpp._llama_backend_initPtr (package:llama_cpp_dart/src/llama_cpp.dart:10187)
02-28 00:21:29.079  5839  8926 E flutter : #2      llama_cpp._llama_backend_init (package:llama_cpp_dart/src/llama_cpp.dart)
02-28 00:21:29.079  5839  8926 E flutter : #3      llama_cpp.llama_backend_init (package:llama_cpp_dart/src/llama_cpp.dart)
02-28 00:21:29.079  5839  8926 E flutter : #4      new Llama (package:llama_cpp_dart/src/llama.dart:74)
02-28 00:21:29.079  5839  8926 E flutter : #5      LlamaProcessor._modelIsolateEntryPoint.<anonymous closure> (package:llama_cpp_dart/src/llama_processor.dart:96)
02-28 00:21:29.079  5839  8926 E flutter : #6      _RootZone.runUnaryGuarded (dart:async/zone.dart:1594)
02-28 00:21:29.079  5839  8926 E flutter : #7      _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339)
02-28 00:21:29.079  5839  8926 E flutter : #8      _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271)
02-28 00:21:29.079  5839  8926 E flutter : #9      _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:784)
02-28 00:21:29.079  5839  8926 E flutter : #10     _StreamController._add (dart:async/stream_controller.dart:658)
02-28 00:21:29.079  5839  8926 E flutter : #11     _StreamController.add (dart:async/stream_controller.dart:606)
02-28 00:21:29.079  5839  8926 E flutter : #12     _RawReceivePort._handleMessage (dart:isolate-patch/isolate_patch.dart:184)
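
A quick way to confirm the symbol is actually missing from the bundled library is to probe it from Dart before binding anything (sketch; libllama.so stands for whatever shared object your build packages):

  import 'dart:ffi';

  void main() {
    // Open the packaged native library and probe for the symbol
    // before creating any bindings.
    final lib = DynamicLibrary.open('libllama.so');
    // Prints false when the shipped binary does not export it.
    print(lib.providesSymbol('llama_backend_init'));
  }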
netdur (Owner) commented Feb 27, 2024

@riverzhou the latest llama.cpp changed llama_backend_init to take a bool argument; version 0.0.7 is updated to match it.

riverzhou (Author) commented Feb 28, 2024

> @riverzhou the latest llama.cpp changed llama_backend_init to take a bool argument; version 0.0.7 is updated to match it.

They removed the numa argument from llama_backend_init on Feb 16. In my test, b2277 does not have this argument.

commit f486f6e1e5e9d01603d9325ab3e05f1edb362a95
Author: bmwl <[email protected]>
Date:   Fri Feb 16 01:31:07 2024 -0800

    ggml : add numa options (#5377)

diff --git a/llama.h b/llama.h
index 4a26bd61..f4ec6ea6 100644
--- a/llama.h
+++ b/llama.h
@@ -312,7 +312,10 @@ extern "C" {
     // Initialize the llama + ggml backend
     // If numa is true, use NUMA optimizations
     // Call once at the start of the program
-    LLAMA_API void llama_backend_init(bool numa);
+    LLAMA_API void llama_backend_init(void);
+
+    //optional:
+    LLAMA_API void llama_numa_init(enum ggml_numa_strategy numa);

     // Call once at the end of the program - currently only used for MPI

I checked your source code. Both 0.0.6 and 0.0.7 still have the numa argument, so they cannot work with upstream llama.cpp after Feb 16:

  void llama_backend_init(
    bool numa,
  ) {
    return _llama_backend_init(
      numa,
    );
  }
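
A binding regenerated against the new header should drop the argument, roughly like this (sketch only; the actual ffigen output may differ, and _llama_backend_init / _llama_numa_init stand for the regenerated private lookups):

  void llama_backend_init() {
    return _llama_backend_init();
  }

  // NUMA tuning is now a separate, optional call upstream.
  void llama_numa_init(int numa) {
    return _llama_numa_init(numa);
  }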

netdur (Owner) commented Feb 28, 2024

That's weird, I will double-check.

netdur (Owner) commented Feb 29, 2024

@riverzhou you are correct. It turns out my git pull did not update the llama.cpp code; I had to hard reset. Please try the latest update.
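
With the refreshed llama.cpp sources, initializing the backend from Dart should reduce to a no-argument call, along these lines (sketch; the llama_cpp constructor and library path follow the usual ffigen pattern and are assumptions here):

  // Bind to the packaged library and initialize; no numa argument anymore.
  final llama = llama_cpp(DynamicLibrary.open('libllama.so'));
  llama.llama_backend_init();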

riverzhou (Author) commented

> @riverzhou you are correct. It turns out my git pull did not update the llama.cpp code; I had to hard reset. Please try the latest update.

Great! Thanks!
