
Grouped conv2d: Use MLIR Op which matches memory layout of weight dimensions #2623

Merged 4 commits into llvm:main on Dec 8, 2023

Conversation

@ubfx (Member) commented on Dec 8, 2023

The linalg op linalg.conv_2d_ngchw_fgchw had a bug where:

  1. Weights were accessed as G,F,C,H,W instead of as F,G,C,H,W
  2. Output was accessed as N,F,G,H,W instead of as N,G,F,H,W

This has now been fixed in llvm/llvm-project#73855, which broke the torch-mlir lowering to that op.

This patch switches the torch-mlir lowering to the newly introduced linalg.conv_2d_ngchw_gfchw op, which accesses weights in an order compatible with PyTorch's memory layout.
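For illustration, here is a minimal PyTorch sketch (not part of this patch; the shape values are hypothetical) of why the g,f,c,h,w weight order matches PyTorch's layout for free, while the f,g,c,h,w order would require moving data:

```python
# Sketch: PyTorch grouped-conv weight layout vs. the two linalg op orders.
import torch

N, G = 2, 4          # batch size, groups (hypothetical example values)
C_in, C_out = 8, 16  # total input / output channels
kH = kW = 3

conv = torch.nn.Conv2d(C_in, C_out, kernel_size=3, groups=G)

# PyTorch stores grouped conv2d weights as (C_out, C_in // G, kH, kW),
# with each group's filters contiguous along the leading dimension.
assert conv.weight.shape == (C_out, C_in // G, kH, kW)

# Splitting the leading dimension therefore yields g,f,c,h,w in place
# (a pure view, no data movement) -- the order conv_2d_ngchw_gfchw expects:
gfchw = conv.weight.view(G, C_out // G, C_in // G, kH, kW)

# The f,g,c,h,w order of (the fixed) conv_2d_ngchw_fgchw would instead
# need a transpose, i.e. an actual copy of the weight data:
fgchw = gfchw.transpose(0, 1).contiguous()
```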

Fixes #2622

@ubfx changed the title from "Grouped conv2d: Expand weight dimensions in accordance with MLIR Op" to "Grouped conv2d: Use MLIR Op which matches memory layout of weight dimensions" on Dec 8, 2023
@ubfx ubfx marked this pull request as ready for review December 8, 2023 10:42
@vivekkhandelwal1 (Collaborator) left a comment


LGTM

@ubfx ubfx merged commit fb21a85 into llvm:main Dec 8, 2023
5 checks passed
@ubfx ubfx deleted the conv2d-dimorder branch December 8, 2023 13:18
Development

Successfully merging this pull request may close these issues:
Some linalg tests failing on Conv2d grouped/dilation for PyTorch nightly after LLVM bump (#2622)