ANE-friendly static llama #30598

Triggered via pull request February 20, 2025 22:12
Status: Failure
Total duration: 11m 26s
lint.yml

on: pull_request
lintrunner / linux-job (11m 18s)
android-java-format / linux-job (4m 36s)

Annotations

1 error and 3 warnings
lintrunner / linux-job
Process completed with exit code 1.
FLAKE8 F401: pytorch/executorch/examples/apple/coreml/llama/run.py#L3
'multiprocessing.process' imported but unused. See https://www.flake8rules.com/rules/F401.html.
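The F401 fix is to delete the import that nothing references. A minimal sketch of the pattern (the function below is hypothetical, not the contents of the flagged run.py; only the flagged import path comes from the annotation):

```python
import multiprocessing  # kept: actually referenced below

# F401 flags imports that are never used. Here, a leftover line like
# `import multiprocessing.process` would be reported; the fix is simply
# to remove that line rather than to add a fake use of it.

def worker_count() -> int:
    # This call is what justifies keeping the import above.
    return multiprocessing.cpu_count()

print(worker_count() >= 1)  # → True on any machine with at least one CPU
```

Most linters (flake8 included) only report the unused import; autofixers such as `ruff --fix` or an IDE "organize imports" action can remove it automatically.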
UFMT format: pytorch/executorch/examples/apple/coreml/llama/llama_transformer.py#L1
Run `lintrunner -a` to apply this patch.
FLAKE8 TOR102: pytorch/executorch/examples/apple/coreml/llama/export.py#L165
`torch.load` without `weights_only` parameter is unsafe. Explicitly set `weights_only` to False only if you trust the data you load and full pickle functionality is needed; otherwise set `weights_only=True`.
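The TOR102 remediation is a one-argument change at the flagged call site. A self-contained sketch, assuming a recent PyTorch (`weights_only` was added in torch 1.13); the checkpoint path below is hypothetical and stands in for whatever export.py actually loads:

```python
import torch

# Hypothetical checkpoint path for illustration only.
ckpt = "/tmp/demo_checkpoint.pt"
torch.save({"w": torch.zeros(2, 2)}, ckpt)

# TOR102 fix: weights_only=True restricts unpickling to tensor data and
# plain containers, so a malicious checkpoint cannot run arbitrary code
# the way a full pickle payload could.
state = torch.load(ckpt, weights_only=True)
print(sorted(state))  # → ['w']
```

If the checkpoint legitimately contains non-tensor Python objects, `weights_only=True` will raise; only then, and only for trusted files, is `weights_only=False` appropriate.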