[mir-opt] GVN some more transmute cases #133324

Merged: 5 commits, Jan 9, 2025
Changes from all commits
179 changes: 139 additions & 40 deletions compiler/rustc_mir_transform/src/gvn.rs
Original file line number Diff line number Diff line change
@@ -1366,57 +1366,108 @@ impl<'body, 'tcx> VnState<'body, 'tcx> {
return self.new_opaque();
}

let mut was_updated = false;

// If that cast just casts away the metadata again,
if let PtrToPtr = kind
&& let Value::Aggregate(AggregateTy::RawPtr { data_pointer_ty, .. }, _, fields) =
self.get(value)
&& let ty::RawPtr(to_pointee, _) = to.kind()
&& to_pointee.is_sized(self.tcx, self.typing_env())
{
from = *data_pointer_ty;
value = fields[0];
was_updated = true;
if *data_pointer_ty == to {
return Some(fields[0]);
let mut was_ever_updated = false;
loop {
let mut was_updated_this_iteration = false;

// Transmuting between raw pointers is just a pointer cast so long as
// they have the same metadata type (like `*const i32` <=> `*mut u64`
// or `*mut [i32]` <=> `*const [u64]`), including the common special
// case of `*const T` <=> `*mut T`.
if let Transmute = kind
&& from.is_unsafe_ptr()
&& to.is_unsafe_ptr()
&& self.pointers_have_same_metadata(from, to)
{
*kind = PtrToPtr;
was_updated_this_iteration = true;
}
}

// PtrToPtr-then-PtrToPtr can skip the intermediate step
if let PtrToPtr = kind
&& let Value::Cast { kind: inner_kind, value: inner_value, from: inner_from, to: _ } =
*self.get(value)
&& let PtrToPtr = inner_kind
{
from = inner_from;
value = inner_value;
was_updated = true;
if inner_from == to {
return Some(inner_value);
// If a cast just casts away the metadata again, then we can get it by
// casting the original thin pointer passed to `from_raw_parts`
if let PtrToPtr = kind
&& let Value::Aggregate(AggregateTy::RawPtr { data_pointer_ty, .. }, _, fields) =
self.get(value)
&& let ty::RawPtr(to_pointee, _) = to.kind()
&& to_pointee.is_sized(self.tcx, self.typing_env())
{
from = *data_pointer_ty;
value = fields[0];
was_updated_this_iteration = true;
if *data_pointer_ty == to {
return Some(fields[0]);
}
}
}

// PtrToPtr-then-Transmute can just transmute the original, so long as the
// PtrToPtr didn't change metadata (and thus the size of the pointer)
if let Transmute = kind
&& let Value::Cast {
kind: PtrToPtr,
// Aggregate-then-Transmute can just transmute the original field value,
// so long as the bytes of the value come only from a single field.
if let Transmute = kind
&& let Value::Aggregate(_aggregate_ty, variant_idx, field_values) = self.get(value)
&& let Some((field_idx, field_ty)) =
self.value_is_all_in_one_field(from, *variant_idx)
Review comment (Contributor):
Should we gate this on self.type_may_have_niche_of_interest_to_backend(from)? It seems we lose information when transmuting from a NonNull aggregate.

Reply from @scottmcm (Member, Author), Jan 7, 2025:

Interesting point!

  1. You're right that this does conceptually lose information.
  2. It turns out that we actually don't use this information in the backend today at all: https://rust.godbolt.org/z/1jWYeW4rv
  3. I was going to blame that on my PR "Avoid allocas in codegen for simple mir::Aggregate statements" (#123886), but even before that, when we had the alloca, there still wasn't any !nonnull metadata anywhere: https://rust.godbolt.org/z/xoj1G4WWj
  4. I don't think we want to block this optimization just because the struct has a niche, because that niche would normally come from a field of the struct -- like blocking the optimization on #[repr(transparent)] struct MyBool(bool); isn't something we need to do despite that type having an interesting niche in its Scalar.
  5. So I think what I'd propose here is to leave this optimization as-is, and say that if people want to actually take advantage of the niche for this, there should be another proposal in the style of "Ban field-projecting into [rustc_layout_scalar_valid_range_*] types" (compiler-team#807) to also ban `Aggregate`-ing a rustc_layout_scalar_valid_range_* type, and require `Transmute`-ing to it instead. That way the niche would be asserted on construction too, not just on read. (Of course, the library could also choose to use transmute without such an MCP requiring it.)

{
from = field_ty;
value = field_values[field_idx.as_usize()];
was_updated_this_iteration = true;
if field_ty == to {
return Some(value);
}
}

// Various cast-then-cast cases can be simplified.
if let Value::Cast {
kind: inner_kind,
value: inner_value,
from: inner_from,
to: inner_to,
} = *self.get(value)
&& self.pointers_have_same_metadata(inner_from, inner_to)
{
from = inner_from;
value = inner_value;
was_updated = true;
if inner_from == to {
return Some(inner_value);
{
let new_kind = match (inner_kind, *kind) {
// Even if there's a narrowing cast in here that's fine, because
// things like `*mut [i32] -> *mut i32 -> *const i32` and
// `*mut [i32] -> *const [i32] -> *const i32` can skip the middle in MIR.
(PtrToPtr, PtrToPtr) => Some(PtrToPtr),
// PtrToPtr-then-Transmute is fine so long as the pointer cast is identity:
// `*const T -> *mut T -> NonNull<T>` is fine, but we need to check for narrowing
// to skip things like `*const [i32] -> *const i32 -> NonNull<T>`.
(PtrToPtr, Transmute)
if self.pointers_have_same_metadata(inner_from, inner_to) =>
{
Some(Transmute)
}
// Similarly, for Transmute-then-PtrToPtr. Note that we need to check different
// variables for their metadata, and thus this can't merge with the previous arm.
(Transmute, PtrToPtr) if self.pointers_have_same_metadata(from, to) => {
Some(Transmute)
}
// It would be legal to always do this, but we don't want to hide information
// from the backend that it'd otherwise be able to use for optimizations.
(Transmute, Transmute)
if !self.type_may_have_niche_of_interest_to_backend(inner_to) =>
{
Some(Transmute)
}
_ => None,
};
if let Some(new_kind) = new_kind {
*kind = new_kind;
from = inner_from;
value = inner_value;
was_updated_this_iteration = true;
if inner_from == to {
return Some(inner_value);
}
}
}

if was_updated_this_iteration {
was_ever_updated = true;
} else {
break;
}
}

if was_updated && let Some(op) = self.try_as_operand(value, location) {
if was_ever_updated && let Some(op) = self.try_as_operand(value, location) {
*operand = op;
}
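As a sanity check of the rules this loop encodes, here is a small standalone Rust sketch (not part of the PR): a transmute between raw pointers with matching metadata behaves exactly like a plain `as` cast, and chained pointer casts collapse to a single cast.

```rust
use std::mem::transmute;

fn main() {
    let x = 5_i32;
    let p: *const i32 = &x;

    // Transmuting between two thin raw pointers is just a pointer cast:
    // the address is unchanged, only the static type differs.
    let q: *mut u64 = unsafe { transmute(p) };
    assert_eq!(q as usize, p as usize);

    // PtrToPtr-then-PtrToPtr collapses: routing `*const [i32]` through the
    // intermediate `*const i32` (which drops the length metadata) produces
    // the same pointer as casting directly.
    let arr = [1_i32, 2, 3];
    let wide: *const [i32] = &arr;
    let step1 = wide as *const i32;
    let step2 = step1 as *const u8;
    let direct = wide as *const u8;
    assert_eq!(step2, direct);
}
```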

@@ -1438,6 +1489,54 @@ impl<'body, 'tcx> VnState<'body, 'tcx> {
false
}
}

/// Returns `false` if we know for sure that this type has no interesting niche,
/// and thus we can skip transmuting through it without worrying.
///
/// The backend will emit `assume`s when transmuting between types with niches,
/// so we want to preserve `i32 -> char -> u32` so that that data is around,
/// but it's fine to skip whole-range-is-value steps like `A -> u32 -> B`.
fn type_may_have_niche_of_interest_to_backend(&self, ty: Ty<'tcx>) -> bool {
let Ok(layout) = self.ecx.layout_of(ty) else {
// If it's too generic or something, then assume it might be interesting later.
return true;
};

match layout.backend_repr {
BackendRepr::Uninhabited => true,
BackendRepr::Scalar(a) => !a.is_always_valid(&self.ecx),
BackendRepr::ScalarPair(a, b) => {
!a.is_always_valid(&self.ecx) || !b.is_always_valid(&self.ecx)
}
BackendRepr::Vector { .. } | BackendRepr::Memory { .. } => false,
}
}
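The niche distinction this helper draws can be observed from ordinary Rust through `Option`'s layout optimization. A standalone sketch, not part of the PR (the `NonZeroU32` size guarantee is documented in std; the `Option<u32>` size is how rustc lays it out today):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // `u32` is always-valid: every bit pattern is a `u32`, so `Option<u32>`
    // needs extra space for its discriminant.
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(size_of::<Option<u32>>(), 8);

    // `NonZeroU32` has a niche (zero is invalid) that `Option` reuses for
    // `None`; this is the kind of validity information the backend's
    // `assume`s are meant to preserve.
    assert_eq!(size_of::<NonZeroU32>(), 4);
    assert_eq!(size_of::<Option<NonZeroU32>>(), 4);
}
```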

fn value_is_all_in_one_field(
&self,
ty: Ty<'tcx>,
variant: VariantIdx,
) -> Option<(FieldIdx, Ty<'tcx>)> {
if let Ok(layout) = self.ecx.layout_of(ty)
&& let abi::Variants::Single { index } = layout.variants
&& index == variant
&& let Some((field_idx, field_layout)) = layout.non_1zst_field(&self.ecx)
&& layout.size == field_layout.size
{
// We needed to check the variant to avoid trying to read the tag
// field from an enum where no variants have fields, since that tag
// field isn't in the `Aggregate` from which we're getting values.
Some((FieldIdx::from_usize(field_idx), field_layout.ty))
} else if let ty::Adt(adt, args) = ty.kind()
&& adt.is_struct()
&& adt.repr().transparent()
&& let [single_field] = adt.non_enum_variant().fields.raw.as_slice()
{
Some((FieldIdx::ZERO, single_field.ty(self.tcx, args)))
} else {
None
}
}
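A standalone sketch (not part of the PR) of the layout situation this helper detects: a struct whose single non-1-ZST field supplies every byte of the value, so transmuting the aggregate is the same as taking that field's value.

```rust
use std::marker::PhantomData;
use std::mem::{size_of, transmute};

// All of `Tagged`'s bytes come from its single non-1-ZST field `ptr`;
// the `PhantomData` marker is a 1-ZST and occupies no bytes.
#[repr(transparent)]
struct Tagged<T> {
    ptr: *const T,
    _marker: PhantomData<T>,
}

fn main() {
    assert_eq!(size_of::<Tagged<u8>>(), size_of::<*const u8>());

    let x = 7_u8;
    let t = Tagged { ptr: &x, _marker: PhantomData };
    let field = t.ptr;
    // Transmuting the whole aggregate yields the single field's value.
    let p: *const u8 = unsafe { transmute(t) };
    assert_eq!(p, field);
}
```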
}

fn op_to_prop_const<'tcx>(
23 changes: 0 additions & 23 deletions compiler/rustc_mir_transform/src/instsimplify.rs
@@ -173,29 +173,6 @@ impl<'tcx> InstSimplifyContext<'_, 'tcx> {
*kind = CastKind::IntToInt;
return;
}

// Transmuting a transparent struct/union to a field's type is a projection
if let ty::Adt(adt_def, args) = operand_ty.kind()
&& adt_def.repr().transparent()
&& (adt_def.is_struct() || adt_def.is_union())
&& let Some(place) = operand.place()
{
let variant = adt_def.non_enum_variant();
for (i, field) in variant.fields.iter_enumerated() {
let field_ty = field.ty(self.tcx, args);
if field_ty == *cast_ty {
let place = place
.project_deeper(&[ProjectionElem::Field(i, *cast_ty)], self.tcx);
let operand = if operand.is_move() {
Operand::Move(place)
} else {
Operand::Copy(place)
};
*rvalue = Rvalue::Use(operand);
return;
}
}
}
}
}
}
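The special case removed from InstSimplify is now subsumed by GVN's Aggregate-then-Transmute handling earlier in this PR. As a standalone sketch (not part of the PR), the equivalence it relied on is:

```rust
use std::mem::transmute;

#[repr(transparent)]
struct Meters(u32);

fn main() {
    let m = Meters(42);
    // Transmuting a transparent wrapper to its field's type...
    let a: u32 = unsafe { transmute(Meters(42)) };
    // ...yields exactly the field projection.
    let b: u32 = m.0;
    assert_eq!(a, b);
    assert_eq!(a, 42);
}
```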
@@ -51,7 +51,6 @@
StorageLive(_7);
_7 = const 1_usize;
_6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -79,6 +78,7 @@
_11 = const {0x1 as *const [bool; 0]};
_5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
_4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
StorageDead(_5);
@@ -51,7 +51,6 @@
StorageLive(_7);
_7 = const 1_usize;
_6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -83,6 +82,7 @@
_11 = const {0x1 as *const [bool; 0]};
_5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
_4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
StorageDead(_5);
@@ -51,7 +51,6 @@
StorageLive(_7);
_7 = const 1_usize;
_6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -79,6 +78,7 @@
_11 = const {0x1 as *const [bool; 0]};
_5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
_4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
StorageDead(_5);
@@ -51,7 +51,6 @@
StorageLive(_7);
_7 = const 1_usize;
_6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -83,6 +82,7 @@
_11 = const {0x1 as *const [bool; 0]};
_5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
_4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
StorageDead(_5);
@@ -53,7 +53,6 @@
- _6 = copy _7 as *mut [bool; 0] (Transmute);
+ _7 = const 1_usize;
+ _6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -67,7 +66,7 @@

bb2: {
StorageLive(_10);
- _10 = copy _6 as *mut () (PtrToPtr);
- _10 = copy _7 as *mut () (Transmute);
- _9 = NonNull::<T>::new_unchecked::precondition_check(move _10) -> [return: bb3, unwind unreachable];
+ _10 = const {0x1 as *mut ()};
+ _9 = NonNull::<T>::new_unchecked::precondition_check(const {0x1 as *mut ()}) -> [return: bb3, unwind unreachable];
@@ -80,11 +79,12 @@

bb4: {
StorageDead(_8);
- _11 = copy _6 as *const [bool; 0] (PtrToPtr);
- _11 = copy _7 as *const [bool; 0] (Transmute);
- _5 = NonNull::<[bool; 0]> { pointer: copy _11 };
+ _11 = const {0x1 as *const [bool; 0]};
+ _5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
- _4 = Unique::<[bool; 0]> { pointer: move _5, _marker: const PhantomData::<[bool; 0]> };
+ _4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
@@ -53,7 +53,6 @@
- _6 = copy _7 as *mut [bool; 0] (Transmute);
+ _7 = const 1_usize;
+ _6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -71,7 +70,7 @@

bb3: {
StorageLive(_10);
- _10 = copy _6 as *mut () (PtrToPtr);
- _10 = copy _7 as *mut () (Transmute);
- _9 = NonNull::<T>::new_unchecked::precondition_check(move _10) -> [return: bb4, unwind unreachable];
+ _10 = const {0x1 as *mut ()};
+ _9 = NonNull::<T>::new_unchecked::precondition_check(const {0x1 as *mut ()}) -> [return: bb4, unwind unreachable];
@@ -84,11 +83,12 @@

bb5: {
StorageDead(_8);
- _11 = copy _6 as *const [bool; 0] (PtrToPtr);
- _11 = copy _7 as *const [bool; 0] (Transmute);
- _5 = NonNull::<[bool; 0]> { pointer: copy _11 };
+ _11 = const {0x1 as *const [bool; 0]};
+ _5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
- _4 = Unique::<[bool; 0]> { pointer: move _5, _marker: const PhantomData::<[bool; 0]> };
+ _4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};
@@ -53,7 +53,6 @@
- _6 = copy _7 as *mut [bool; 0] (Transmute);
+ _7 = const 1_usize;
+ _6 = const {0x1 as *mut [bool; 0]};
StorageDead(_7);
StorageLive(_11);
StorageLive(_8);
_8 = UbChecks();
@@ -67,7 +66,7 @@

bb2: {
StorageLive(_10);
- _10 = copy _6 as *mut () (PtrToPtr);
- _10 = copy _7 as *mut () (Transmute);
- _9 = NonNull::<T>::new_unchecked::precondition_check(move _10) -> [return: bb3, unwind unreachable];
+ _10 = const {0x1 as *mut ()};
+ _9 = NonNull::<T>::new_unchecked::precondition_check(const {0x1 as *mut ()}) -> [return: bb3, unwind unreachable];
@@ -80,11 +79,12 @@

bb4: {
StorageDead(_8);
- _11 = copy _6 as *const [bool; 0] (PtrToPtr);
- _11 = copy _7 as *const [bool; 0] (Transmute);
- _5 = NonNull::<[bool; 0]> { pointer: copy _11 };
+ _11 = const {0x1 as *const [bool; 0]};
+ _5 = const NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }};
StorageDead(_11);
StorageDead(_7);
StorageDead(_6);
- _4 = Unique::<[bool; 0]> { pointer: move _5, _marker: const PhantomData::<[bool; 0]> };
+ _4 = const Unique::<[bool; 0]> {{ pointer: NonNull::<[bool; 0]> {{ pointer: {0x1 as *const [bool; 0]} }}, _marker: PhantomData::<[bool; 0]> }};