Commit 179b7a0

Whadup authored and pytorchmergebot committed
Do not crash when compiling quantized LORA models (pytorch#148435)
Fixes pytorch#148072

Pull Request resolved: pytorch#148435
Approved by: https://github.com/Valentine233, https://github.com/leslie-fang-intel
1 parent 24085db commit 179b7a0

File tree

1 file changed: +4 -0 lines changed


torch/_inductor/fx_passes/quantization.py

@@ -1071,6 +1071,10 @@ def _register_quantization_reshape():
 def _is_valid_woq_optimization_pattern():
     def fn(match):
         assert all(k in match.kwargs for k in ("x", "weight", "scales"))
+        if not all(
+            hasattr(match.kwargs[key], "meta") for key in ["x", "weight", "scales"]
+        ):
+            return False
         x = match.kwargs["x"].meta["val"]
         weight = match.kwargs["weight"].meta["val"]
         scales = match.kwargs["scales"].meta["val"]

0 commit comments
