
PeepholeOpt: Fix looking for def of current copy to coalesce #125533

Conversation


@arsenm arsenm commented Feb 3, 2025

This fixes the handling of subregister extract copies. This
will allow AMDGPU to remove its implementation of
shouldRewriteCopySrc, which exists as a 10-year-old workaround
for this bug. peephole-opt-fold-reg-sequence-subreg.mir will
show the expected improvement once the custom implementation
is removed.

The copy coalescing processing here is overly abstracted
from what is actually happening. Previously, when visiting
coalescable copy-like instructions, we would parse the
sources one at a time and then pass the def of the root
instruction into findNextSource. This meant the first thing
the newly constructed ValueTracker would do is call
getVRegDef to find the instruction we were already
processing. That added an unnecessary step, placed
a useless entry in the RewriteMap, and required skipping
the no-op case where getNewSource would return the original
source operand. This was a problem because, for a
subregister extract, shouldRewriteCopySrc would always
report the rewrite as useful, so the use-def chain walk
would abort and return the original operand. Instead, start
the walk at the source operand to begin with.

This does not fix the confused handling in the uncoalescable
copy case, which is proving more difficult. Some currently
handled cases have multiple defs from a single source, and
others have zero input operands. It would be simpler if this
were implemented with isCopyLikeInstr, rather than guessing
at the operand structure as it does now.

There are some improvements and some regressions. The
regressions appear to be mostly downstream issues. One
of the uglier regressions is in PPC, where a sequence of insert_subregs
is used to build registers. I opened #125502 to use reg_sequence instead,
which may help.

The worst regression is an absurd SPARC testcase using a <251 x fp128>,
which uses a very long chain of insert_subregs.

We need improved subregister handling locally in PeepholeOptimizer,
and in other passes like MachineCSE, to fix some of the other
regressions. We should handle subregister composes and fold more
indexes into insert_subreg and reg_sequence.



llvmbot commented Feb 3, 2025

@llvm/pr-subscribers-backend-arm
@llvm/pr-subscribers-llvm-globalisel
@llvm/pr-subscribers-backend-powerpc
@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-backend-systemz
@llvm/pr-subscribers-backend-x86

@llvm/pr-subscribers-llvm-regalloc

Author: Matt Arsenault (arsenm)

Changes


Patch is 475.60 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125533.diff

105 Files Affected:

  • (modified) llvm/lib/CodeGen/PeepholeOptimizer.cpp (+28-12)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-lse2.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc3.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-v8a.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64_be-atomicrmw-lse2.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64_be-atomicrmw-rcpc.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64_be-atomicrmw-rcpc3.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/Atomics/aarch64_be-atomicrmw-v8a.ll (+30-30)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll (-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/arm64-pcsections.ll (+34-40)
  • (modified) llvm/test/CodeGen/AArch64/addsub_ext.ll (+4-22)
  • (modified) llvm/test/CodeGen/AArch64/and-mask-removal.ll (-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-ldxr-stxr.ll (-4)
  • (modified) llvm/test/CodeGen/AArch64/arm64-vaddv.ll (-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64_32-addrs.ll (-1)
  • (modified) llvm/test/CodeGen/AArch64/atomic-ops-msvc.ll (+4-7)
  • (modified) llvm/test/CodeGen/AArch64/atomic-ops.ll (-4)
  • (modified) llvm/test/CodeGen/AArch64/atomicrmw-fadd.ll (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/atomicrmw-fmax.ll (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/atomicrmw-fmin.ll (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/atomicrmw-fsub.ll (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/atomicrmw-xchg-fp.ll (+5-5)
  • (modified) llvm/test/CodeGen/AArch64/cmp-to-cmn.ll (-8)
  • (modified) llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll (-1)
  • (modified) llvm/test/CodeGen/AArch64/extract-bits.ll (-6)
  • (modified) llvm/test/CodeGen/AArch64/fold-int-pow2-with-fmul-or-fdiv.ll (-2)
  • (modified) llvm/test/CodeGen/AArch64/fsh.ll (-2)
  • (modified) llvm/test/CodeGen/AArch64/funnel-shift.ll (-4)
  • (modified) llvm/test/CodeGen/AArch64/hoist-and-by-const-from-lshr-in-eqcmp-zero.ll (-8)
  • (modified) llvm/test/CodeGen/AArch64/hoist-and-by-const-from-shl-in-eqcmp-zero.ll (+5-14)
  • (modified) llvm/test/CodeGen/AArch64/logic-shift.ll (-9)
  • (modified) llvm/test/CodeGen/AArch64/neon-insextbitcast.ll (-2)
  • (modified) llvm/test/CodeGen/AArch64/shift-by-signext.ll (-2)
  • (modified) llvm/test/CodeGen/AArch64/shift.ll (-6)
  • (modified) llvm/test/CodeGen/AArch64/sink-and-fold.ll (-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-fixed-length-subvector.ll (+24-24)
  • (modified) llvm/test/CodeGen/AArch64/sve-streaming-mode-fixed-length-fp-to-int.ll (+76-76)
  • (modified) llvm/test/CodeGen/AArch64/sve-streaming-mode-fixed-length-permute-zip-uzp-trn.ll (+15-15)
  • (modified) llvm/test/CodeGen/AArch64/vec-combine-compare-to-bitmask.ll (+14-14)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.init.whole.wave-w32.ll (+3-4)
  • (modified) llvm/test/CodeGen/AMDGPU/mul_int24.ll (+19-23)
  • (added) llvm/test/CodeGen/AMDGPU/peephole-opt-fold-reg-sequence-subreg.mir (+189)
  • (modified) llvm/test/CodeGen/ARM/aes-erratum-fix.ll (+36-34)
  • (modified) llvm/test/CodeGen/ARM/arm-bf16-dotprod-intrinsics.ll (-3)
  • (modified) llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll (-2)
  • (modified) llvm/test/CodeGen/ARM/bf16-create-get-set-dup.ll (-1)
  • (modified) llvm/test/CodeGen/ARM/bf16-shuffle.ll (-1)
  • (modified) llvm/test/CodeGen/ARM/minnum-maxnum-intrinsics.ll (+11-11)
  • (modified) llvm/test/CodeGen/ARM/neon-copy.ll (-3)
  • (modified) llvm/test/CodeGen/ARM/neon-v8.1a.ll (-8)
  • (modified) llvm/test/CodeGen/ARM/vdup.ll (-4)
  • (modified) llvm/test/CodeGen/ARM/vext.ll (+6-6)
  • (modified) llvm/test/CodeGen/ARM/vmul.ll (-3)
  • (modified) llvm/test/CodeGen/ARM/vpadd.ll (+4-4)
  • (modified) llvm/test/CodeGen/ARM/vuzp.ll (+26-26)
  • (modified) llvm/test/CodeGen/ARM/vzip.ll (+23-23)
  • (modified) llvm/test/CodeGen/AVR/return.ll (+36-36)
  • (modified) llvm/test/CodeGen/BPF/is_trunc_free.ll (-1)
  • (modified) llvm/test/CodeGen/PowerPC/aggressive-anti-dep-breaker-subreg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/mma-acc-copy-hints.ll (+11-7)
  • (modified) llvm/test/CodeGen/PowerPC/mma-acc-memops.ll (+12-12)
  • (modified) llvm/test/CodeGen/PowerPC/paired-vector-intrinsics.ll (+10-10)
  • (modified) llvm/test/CodeGen/PowerPC/peephole-subreg-def.mir (+11-10)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll (+70-56)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll (+299-250)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-scatter.ll (+30-30)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll (+26-26)
  • (modified) llvm/test/CodeGen/SPARC/fmuladd-soft-float.ll (+8-6)
  • (modified) llvm/test/CodeGen/SPARC/fp128.ll (+12-12)
  • (modified) llvm/test/CodeGen/SPARC/fp16-promote.ll (+16-16)
  • (modified) llvm/test/CodeGen/SystemZ/int-uadd-01.ll (+8-8)
  • (modified) llvm/test/CodeGen/SystemZ/int-uadd-02.ll (+8-8)
  • (modified) llvm/test/CodeGen/SystemZ/pr60413.ll (+20-20)
  • (modified) llvm/test/CodeGen/Thumb2/mve-complex-deinterleaving-uniform-cases.ll (+40-40)
  • (modified) llvm/test/CodeGen/Thumb2/mve-float32regloops.ll (+28-28)
  • (modified) llvm/test/CodeGen/Thumb2/mve-masked-ldst.ll (+24-24)
  • (modified) llvm/test/CodeGen/Thumb2/mve-shuffle.ll (+28-28)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vcvt16.ll (+4-3)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vld2.ll (+4-3)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vld3.ll (+214-338)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vld4.ll (+56-56)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vldst4.ll (+86-90)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vst2.ll (+19-16)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vst3.ll (+263-278)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vst4.ll (+75-77)
  • (modified) llvm/test/CodeGen/VE/Scalar/select.ll (+2-2)
  • (modified) llvm/test/CodeGen/VE/Scalar/va_caller.ll (+12-9)
  • (modified) llvm/test/CodeGen/X86/AMX/amx-ldtilecfg-insert.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/avx512-calling-conv.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/avx512-ext.ll (+195-197)
  • (modified) llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll (+5-6)
  • (modified) llvm/test/CodeGen/X86/fminimum-fmaximum.ll (+35-35)
  • (modified) llvm/test/CodeGen/X86/fminimumnum-fmaximumnum.ll (+144-144)
  • (modified) llvm/test/CodeGen/X86/half.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/smax.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/smin.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/test-shrink.ll (-1)
  • (modified) llvm/test/CodeGen/X86/umax.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/umin.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/vector-compress.ll (+52-46)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-256.ll (-1)
  • (modified) llvm/test/CodeGen/X86/wide-scalar-shift-legalization.ll (+24-24)
  • (modified) llvm/test/CodeGen/X86/widen-load-of-small-alloca-with-zero-upper-half.ll (+2-1)
  • (modified) llvm/test/CodeGen/X86/widen-load-of-small-alloca.ll (+2-1)
diff --git a/llvm/lib/CodeGen/PeepholeOptimizer.cpp b/llvm/lib/CodeGen/PeepholeOptimizer.cpp
index e0053fb243369c..6739199a802231 100644
--- a/llvm/lib/CodeGen/PeepholeOptimizer.cpp
+++ b/llvm/lib/CodeGen/PeepholeOptimizer.cpp
@@ -465,7 +465,8 @@ class PeepholeOptimizer : private MachineFunction::Delegate {
   bool optimizeUncoalescableCopy(MachineInstr &MI,
                                  SmallPtrSetImpl<MachineInstr *> &LocalMIs);
   bool optimizeRecurrence(MachineInstr &PHI);
-  bool findNextSource(RegSubRegPair RegSubReg, RewriteMapTy &RewriteMap);
+  bool findNextSource(const TargetRegisterClass *DefRC, unsigned DefSubReg,
+                      RegSubRegPair RegSubReg, RewriteMapTy &RewriteMap);
   bool isMoveImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
                        DenseMap<Register, MachineInstr *> &ImmDefMIs);
   bool foldImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
@@ -1002,17 +1003,15 @@ bool PeepholeOptimizer::optimizeCondBranch(MachineInstr &MI) {
 /// share the same register file as \p Reg and \p SubReg. The client should
 /// then be capable to rewrite all intermediate PHIs to get the next source.
 /// \return False if no alternative sources are available. True otherwise.
-bool PeepholeOptimizer::findNextSource(RegSubRegPair RegSubReg,
+bool PeepholeOptimizer::findNextSource(const TargetRegisterClass *DefRC,
+                                       unsigned DefSubReg,
+                                       RegSubRegPair RegSubReg,
                                        RewriteMapTy &RewriteMap) {
   // Do not try to find a new source for a physical register.
   // So far we do not have any motivating example for doing that.
   // Thus, instead of maintaining untested code, we will revisit that if
   // that changes at some point.
   Register Reg = RegSubReg.Reg;
-  if (Reg.isPhysical())
-    return false;
-  const TargetRegisterClass *DefRC = MRI->getRegClass(Reg);
-
   SmallVector<RegSubRegPair, 4> SrcToLook;
   RegSubRegPair CurSrcPair = RegSubReg;
   SrcToLook.push_back(CurSrcPair);
@@ -1076,7 +1075,7 @@ bool PeepholeOptimizer::findNextSource(RegSubRegPair RegSubReg,
 
       // Keep following the chain if the value isn't any better yet.
       const TargetRegisterClass *SrcRC = MRI->getRegClass(CurSrcPair.Reg);
-      if (!TRI->shouldRewriteCopySrc(DefRC, RegSubReg.SubReg, SrcRC,
+      if (!TRI->shouldRewriteCopySrc(DefRC, DefSubReg, SrcRC,
                                      CurSrcPair.SubReg))
         continue;
 
@@ -1184,21 +1183,33 @@ bool PeepholeOptimizer::optimizeCoalescableCopyImpl(Rewriter &&CpyRewriter) {
   bool Changed = false;
   // Get the right rewriter for the current copy.
   // Rewrite each rewritable source.
-  RegSubRegPair Src;
+  RegSubRegPair Dst;
   RegSubRegPair TrackPair;
-  while (CpyRewriter.getNextRewritableSource(Src, TrackPair)) {
+  while (CpyRewriter.getNextRewritableSource(TrackPair, Dst)) {
+    if (Dst.Reg.isPhysical()) {
+      // Do not try to find a new source for a physical register.
+      // So far we do not have any motivating example for doing that.
+      // Thus, instead of maintaining untested code, we will revisit that if
+      // that changes at some point.
+      continue;
+    }
+
+    const TargetRegisterClass *DefRC = MRI->getRegClass(Dst.Reg);
+
     // Keep track of PHI nodes and its incoming edges when looking for sources.
     RewriteMapTy RewriteMap;
     // Try to find a more suitable source. If we failed to do so, or get the
     // actual source, move to the next source.
-    if (!findNextSource(TrackPair, RewriteMap))
+    if (!findNextSource(DefRC, Dst.SubReg, TrackPair, RewriteMap))
       continue;
 
     // Get the new source to rewrite. TODO: Only enable handling of multiple
     // sources (PHIs) once we have a motivating example and testcases for it.
     RegSubRegPair NewSrc = getNewSource(MRI, TII, TrackPair, RewriteMap,
                                         /*HandleMultipleSources=*/false);
-    if (Src.Reg == NewSrc.Reg || NewSrc.Reg == 0)
+    assert(TrackPair.Reg != NewSrc.Reg &&
+           "should not rewrite source to original value");
+    if (!NewSrc.Reg)
       continue;
 
     // Rewrite source.
@@ -1325,9 +1336,14 @@ bool PeepholeOptimizer::optimizeUncoalescableCopy(
     if (Def.Reg.isPhysical())
       return false;
 
+    // FIXME: Uncoalescable copies are treated differently by
+    // UncoalescableRewriter, and this probably should not share
+    // API. getNextRewritableSource really finds rewritable defs.
+    const TargetRegisterClass *DefRC = MRI->getRegClass(Def.Reg);
+
     // If we do not know how to rewrite this definition, there is no point
     // in trying to kill this instruction.
-    if (!findNextSource(Def, RewriteMap))
+    if (!findNextSource(DefRC, Def.SubReg, Def, RewriteMap))
       return false;
 
     RewritePairs.push_back(Def);
diff --git a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-lse2.ll b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-lse2.ll
index d93ef6f8b2869b..94d46148f37e3a 100644
--- a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-lse2.ll
+++ b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-lse2.ll
@@ -12,8 +12,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_monotonic(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_monotonic:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value monotonic, align 1
     ret i8 %r
 }
@@ -27,8 +27,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acquire(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acquire:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acquire, align 1
     ret i8 %r
 }
@@ -42,8 +42,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_release(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_release:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value release, align 1
     ret i8 %r
 }
@@ -57,8 +57,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acq_rel(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acq_rel:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acq_rel, align 1
     ret i8 %r
 }
@@ -72,8 +72,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_seq_cst(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_seq_cst:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value seq_cst, align 1
     ret i8 %r
 }
@@ -86,8 +86,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_monotonic(ptr %ptr, i16 %value)
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_monotonic:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value monotonic, align 2
     ret i16 %r
 }
@@ -100,8 +100,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acquire(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_acquire:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value acquire, align 2
     ret i16 %r
 }
@@ -114,8 +114,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_release(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_release:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value release, align 2
     ret i16 %r
 }
@@ -128,8 +128,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acq_rel(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_acq_rel:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value acq_rel, align 2
     ret i16 %r
 }
@@ -142,8 +142,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_seq_cst(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_seq_cst:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value seq_cst, align 2
     ret i16 %r
 }
@@ -392,8 +392,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_monotonic(ptr %ptr, i8 %value)
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_monotonic:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value monotonic, align 1
     ret i8 %r
 }
@@ -407,8 +407,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_acquire(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_acquire:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acquire, align 1
     ret i8 %r
 }
@@ -422,8 +422,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_release(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_release:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value release, align 1
     ret i8 %r
 }
@@ -437,8 +437,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_acq_rel(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_acq_rel:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acq_rel, align 1
     ret i8 %r
 }
@@ -452,8 +452,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_seq_cst(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_seq_cst:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value seq_cst, align 1
     ret i8 %r
 }
diff --git a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc.ll b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc.ll
index 912d87dcd2b9b6..57cfeb78b69802 100644
--- a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc.ll
+++ b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc.ll
@@ -12,8 +12,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_monotonic(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_monotonic:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value monotonic, align 1
     ret i8 %r
 }
@@ -27,8 +27,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acquire(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acquire:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acquire, align 1
     ret i8 %r
 }
@@ -42,8 +42,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_release(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_release:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value release, align 1
     ret i8 %r
 }
@@ -57,8 +57,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acq_rel(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acq_rel:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acq_rel, align 1
     ret i8 %r
 }
@@ -72,8 +72,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_seq_cst(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_seq_cst:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value seq_cst, align 1
     ret i8 %r
 }
@@ -86,8 +86,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_monotonic(ptr %ptr, i16 %value)
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_monotonic:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value monotonic, align 2
     ret i16 %r
 }
@@ -100,8 +100,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acquire(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_acquire:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value acquire, align 2
     ret i16 %r
 }
@@ -114,8 +114,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_release(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_release:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value release, align 2
     ret i16 %r
 }
@@ -128,8 +128,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acq_rel(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_acq_rel:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value acq_rel, align 2
     ret i16 %r
 }
@@ -142,8 +142,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_seq_cst(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_seq_cst:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value seq_cst, align 2
     ret i16 %r
 }
@@ -392,8 +392,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_monotonic(ptr %ptr, i8 %value)
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_monotonic:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value monotonic, align 1
     ret i8 %r
 }
@@ -407,8 +407,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_acquire(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_acquire:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acquire, align 1
     ret i8 %r
 }
@@ -422,8 +422,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_release(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_release:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value release, align 1
     ret i8 %r
 }
@@ -437,8 +437,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_acq_rel(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_acq_rel:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acq_rel, align 1
     ret i8 %r
 }
@@ -452,8 +452,8 @@ define dso_local i8 @atomicrmw_xchg_i8_unaligned_seq_cst(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_unaligned_seq_cst:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value seq_cst, align 1
     ret i8 %r
 }
diff --git a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc3.ll b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc3.ll
index 725558f2dcf727..28ee1a2a70c4d7 100644
--- a/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc3.ll
+++ b/llvm/test/CodeGen/AArch64/Atomics/aarch64-atomicrmw-rcpc3.ll
@@ -12,8 +12,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_monotonic(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_monotonic:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value monotonic, align 1
     ret i8 %r
 }
@@ -27,8 +27,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acquire(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acquire:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acquire, align 1
     ret i8 %r
 }
@@ -42,8 +42,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_release(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_release:
-; -O1:    ldxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value release, align 1
     ret i8 %r
 }
@@ -57,8 +57,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_acq_rel(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_acq_rel:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value acq_rel, align 1
     ret i8 %r
 }
@@ -72,8 +72,8 @@ define dso_local i8 @atomicrmw_xchg_i8_aligned_seq_cst(ptr %ptr, i8 %value) {
 ; -O0:    subs w8, w8, w10, uxtb
 ;
 ; -O1-LABEL: atomicrmw_xchg_i8_aligned_seq_cst:
-; -O1:    ldaxrb w8, [x0]
-; -O1:    stlxrb w9, w1, [x0]
+; -O1:    ldaxrb w0, [x8]
+; -O1:    stlxrb w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i8 %value seq_cst, align 1
     ret i8 %r
 }
@@ -86,8 +86,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_monotonic(ptr %ptr, i16 %value)
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_monotonic:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value monotonic, align 2
     ret i16 %r
 }
@@ -100,8 +100,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acquire(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_acquire:
-; -O1:    ldaxrh w8, [x0]
-; -O1:    stxrh w9, w1, [x0]
+; -O1:    ldaxrh w0, [x8]
+; -O1:    stxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value acquire, align 2
     ret i16 %r
 }
@@ -114,8 +114,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_release(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atomicrmw_xchg_i16_aligned_release:
-; -O1:    ldxrh w8, [x0]
-; -O1:    stlxrh w9, w1, [x0]
+; -O1:    ldxrh w0, [x8]
+; -O1:    stlxrh w9, w1, [x8]
     %r = atomicrmw xchg ptr %ptr, i16 %value release, align 2
     ret i16 %r
 }
@@ -128,8 +128,8 @@ define dso_local i16 @atomicrmw_xchg_i16_aligned_acq_rel(ptr %ptr, i16 %value) {
 ; -O0:    subs w8, w8, w9, uxth
 ;
 ; -O1-LABEL: atom...
[truncated]

@qcolombet
Collaborator

How bad are the regressions you mentioned?

@arsenm arsenm force-pushed the users/arsenm/peephole-opt/fix-looking-for-def-of-current-instruction branch from a897dcf to 84db705 Compare February 4, 2025 06:04
@arsenm arsenm force-pushed the users/arsenm/x86/remove-shouldRewriteCopySrc-hack branch from bd710cb to 4b852e6 Compare February 4, 2025 06:04
@arsenm
Contributor Author

arsenm commented Feb 4, 2025

How bad are the regressions you mentioned?

Not that bad. Most of the diff is shuffling around register numbers and instruction placement. The differences are 1-2 instructions in most cases. The worst are PPC adding 5+ and one RVV case that happens to hit a worse spill later

Collaborator

@qcolombet qcolombet left a comment

Let's roll with it then!

; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vand.vv v16, v24, v8, v0.t
; RV32-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
Contributor

For RISC-V, this is a regression since it requires more spills/reloads.

Contributor Author

#120524 and similar may help here

; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
Contributor

Ditto.

; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: li a3, 76
; RV32-NEXT: li a3, 72
Contributor

This looks like an improvement.

Contributor Author

arsenm commented Feb 5, 2025

Merge activity

  • Feb 5, 11:06 AM EST: A user started a stack merge that includes this pull request via Graphite.
  • Feb 5, 11:25 AM EST: Graphite rebased this pull request as part of a merge.
  • Feb 5, 11:29 AM EST: A user merged this pull request with Graphite.

@arsenm arsenm force-pushed the users/arsenm/x86/remove-shouldRewriteCopySrc-hack branch 3 times, most recently from d5712fd to 0abe405 Compare February 5, 2025 16:22
Base automatically changed from users/arsenm/x86/remove-shouldRewriteCopySrc-hack to main February 5, 2025 16:25
This fixes the handling of subregister extract copies. This
will allow AMDGPU to remove its implementation of
shouldRewriteCopySrc, which exists as a 10 year old workaround
to this bug. peephole-opt-fold-reg-sequence-subreg.mir will
show the expected improvement once the custom implementation
is removed.
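As a rough sketch of the pattern involved (a toy model; the dictionary and function names are illustrative, not the pass's actual data structures): a copy that extracts a subregister of a value assembled by a REG_SEQUENCE can have its source rewritten to the original input for that subregister.

```python
# Toy model of folding a subregister extract copy (illustrative only).
# Models: %2 = REG_SEQUENCE %0, %subreg.sub0, %1, %subreg.sub1
reg_sequence_inputs = {"sub0": "%0", "sub1": "%1"}

def fold_subreg_extract(seq_inputs, subreg):
    """Rewrite `%3 = COPY %2.subreg` to copy the original input directly."""
    return seq_inputs[subreg]

print(fold_subreg_extract(reg_sequence_inputs, "sub0"))  # → %0
```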

The copy coalescing processing here is overly abstracted
from what's actually happening. Previously when visiting
coalescable copy-like instructions, we would parse the
sources one at a time and then pass the def of the root
instruction into findNextSource. This means that the
first thing the new ValueTracker constructed would do
is getVRegDef to find the instruction we are currently
processing. This adds an unnecessary step, placing
a useless entry in the RewriteMap, and required skipping
the no-op case where getNewSource would return the original
source operand. This was a problem since, in the case
of a subregister extract, shouldRewriteCopySrc would always
say that it is useful to rewrite, and the use-def chain walk
would abort, returning the original operand. Move the processing
to start directly from the source operand instead.
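A minimal sketch of the difference (a toy model assuming a simple copy-chain map; `find_next_source`, `def_to_src`, and the register numbers are hypothetical stand-ins for the optimizer's real interfaces):

```python
# Toy model: walk a use-def chain of copies to find the furthest source.
# def_to_src maps each copy's destination vreg to its source vreg.
def find_next_source(def_to_src, reg, rewrite_map):
    """Walk the copy chain from `reg`, recording every hop visited."""
    while reg in def_to_src:
        rewrite_map.append(reg)
        reg = def_to_src[reg]
    return reg

def_to_src = {3: 2, 2: 1}  # %3 = COPY %2 ; %2 = COPY %1

# Old behavior: start from the def of the copy being processed (%3),
# so the first step just re-derives the instruction already being visited,
# leaving a useless entry in the rewrite map.
old_map = []
old_src = find_next_source(def_to_src, 3, old_map)

# New behavior: start directly from the source operand (%2).
new_map = []
new_src = find_next_source(def_to_src, 2, new_map)

print(old_src, new_src)  # → 1 1 (same final source either way)
print(old_map, new_map)  # → [3, 2] [2] (one fewer useless entry)
```

Either starting point reaches the same final source; the change only removes the redundant first hop that previously confused the subregister-extract case.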

This does not fix the confused handling in the uncoalescable
copy case which is proving to be more difficult. Some currently
handled cases have multiple defs from a single source, and other
handled cases have 0 input operands. It would be simpler if
this were implemented with isCopyLikeInstr, rather than guessing
at the operand structure as it does now.

There are some improvements and some regressions. The
regressions appear to be downstream issues for the most part. One
of the uglier regressions is in PPC, where a sequence of insert_subregs
is used to build registers. I opened #125502 to use reg_sequence instead,
which may help.

The worst regression is an absurd SPARC testcase using a <251 x fp128>,
which uses a very long chain of insert_subregs.

We need improved subregister handling locally in PeepholeOptimizer,
and in other passes like MachineCSE, to fix some of the other regressions.
We should handle subregister composes and fold more indexes
into insert_subreg and reg_sequence.
@arsenm arsenm force-pushed the users/arsenm/peephole-opt/fix-looking-for-def-of-current-instruction branch from 84db705 to 0417e7f Compare February 5, 2025 16:25
@arsenm arsenm merged commit 58a8800 into main Feb 5, 2025
5 of 6 checks passed
@arsenm arsenm deleted the users/arsenm/peephole-opt/fix-looking-for-def-of-current-instruction branch February 5, 2025 16:29
Icohedron pushed a commit to Icohedron/llvm-project that referenced this pull request Feb 11, 2025
Comment on lines +37 to +40
; CHECK-NEXT: xxlor vs0, vs4, vs4
; CHECK-NEXT: xxlor vs1, vs5, vs5
; CHECK-NEXT: xxlor vs2, vs6, vs6
; CHECK-NEXT: xxlor vs3, vs7, vs7
Contributor

These extra copies are a result of using acc1 vs acc0 on line 29. Any idea as to why this patch caused it to use acc1 vs acc0?

Contributor Author

I looked into it a bit and gave up because PPC is using a lot of complex nested custom instruction patterns that will never be handled as nicely as the generic operators. I opened #125502 to start using reg_sequence
