Clean up llama library/binary dependency
Summary: The llama binary actually depends on llama_lib, so there is no need to duplicate the dependency list on both sides.

Reviewed By: billmguo

Differential Revision: D69942904
cccclai authored and facebook-github-bot committed Feb 20, 2025
1 parent fd318cc commit dc13935
Showing 1 changed file with 1 addition and 12 deletions.
13 changes: 1 addition & 12 deletions examples/qualcomm/oss_scripts/llama/TARGETS
@@ -35,23 +35,12 @@ python_library(

 python_binary(
     name = "llama",
     srcs = ["llama.py"],
     main_function = "executorch.examples.qualcomm.oss_scripts.llama.llama.main",
     preload_deps = [
         "//executorch/extension/llm/custom_ops:model_sharding_py",
     ],
     deps = [
-        "//executorch/examples/qualcomm/oss_scripts/llama:static_llama",
-        "//caffe2:torch",
-        "//executorch/extension/pybindings:aten_lib",
-        "//executorch/backends/qualcomm/partition:partition",
-        "//executorch/backends/qualcomm/quantizer:quantizer",
-        "//executorch/devtools/backend_debug:delegation_info",
-        "//executorch/devtools:lib",
-        "//executorch/examples/models:models",
-        "//executorch/examples/qualcomm:utils",
-        "//executorch/extension/export_util:export_util",
-        "//executorch/extension/llm/export:export_lib",
+        ":llama_lib",
     ],
 )

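The pattern this commit applies can be sketched as follows: the library target owns the full dependency list, and the binary target depends only on the library. This is a hypothetical sketch — the actual `llama_lib` definition is not shown in this diff, so its `srcs` and the exact set of deps it carries are assumptions based on the list removed from the binary.

```starlark
# Hypothetical sketch of the resulting TARGETS structure (not the actual
# llama_lib definition, which is outside this diff hunk).
python_library(
    name = "llama_lib",
    srcs = ["llama.py"],  # assumed; sources may differ
    deps = [
        # The dependencies formerly duplicated on the binary live here once,
        # e.g. (subset shown):
        "//caffe2:torch",
        "//executorch/examples/qualcomm:utils",
    ],
)

python_binary(
    name = "llama",
    main_function = "executorch.examples.qualcomm.oss_scripts.llama.llama.main",
    deps = [
        ":llama_lib",  # single dependency; no duplicated list to keep in sync
    ],
)
```

With this shape, adding or removing a dependency touches only `llama_lib`, and the binary picks it up transitively.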
