Speed up compilation of large constant arrays #51833

Merged · 9 commits · Jul 1, 2018
12 changes: 8 additions & 4 deletions src/librustc_mir/interpret/eval_context.rs

@@ -591,10 +591,14 @@ impl<'a, 'mir, 'tcx: 'mir, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M

         let (dest, dest_align) = self.force_allocation(dest)?.to_ptr_align();
 
-        // FIXME: speed up repeat filling
-        for i in 0..length {
-            let elem_dest = dest.ptr_offset(elem_size * i as u64, &self)?;
-            self.write_value_to_ptr(value, elem_dest, dest_align, elem_ty)?;
+        if length > 0 {
+            //write the first value
+            self.write_value_to_ptr(value, dest, dest_align, elem_ty)?;
+
+            if length > 1 {
+                let rest = dest.ptr_offset(elem_size * 1 as u64, &self)?;
+                self.memory.copy_repeatedly(dest, dest_align, rest, dest_align, elem_size, length - 1, false)?;
+            }
         }
     }
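The hunk above swaps `length` interpreter-level writes for one write plus `length - 1` raw copies. A standalone sketch of that strategy over a plain byte buffer (`fill_repeat` and its signature are illustrative, not the interpreter's API):

```rust
// Sketch: fill `dest` with `length` copies of `elem` by writing the
// element once, then bulk-copying it into each remaining slot.
fn fill_repeat(dest: &mut [u8], elem: &[u8], length: usize) {
    let n = elem.len();
    assert_eq!(dest.len(), n * length);
    if length == 0 || n == 0 {
        return;
    }
    // Write the first value...
    dest[..n].copy_from_slice(elem);
    // ...then copy it into the remaining `length - 1` slots.
    for i in 1..length {
        dest.copy_within(0..n, i * n);
    }
}

fn main() {
    let mut buf = [0u8; 12];
    fill_repeat(&mut buf, &[1, 2, 3], 4);
    assert_eq!(buf, [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]);
}
```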

44 changes: 31 additions & 13 deletions src/librustc_mir/interpret/memory.rs

@@ -594,6 +594,19 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
         dest_align: Align,
         size: Size,
         nonoverlapping: bool,
+    ) -> EvalResult<'tcx> {
+        self.copy_repeatedly(src, src_align, dest, dest_align, size, 1, nonoverlapping)
+    }
+
+    pub fn copy_repeatedly(
+        &mut self,
+        src: Scalar,
+        src_align: Align,
+        dest: Scalar,
+        dest_align: Align,
+        size: Size,
+        length: u64,
+        nonoverlapping: bool,
     ) -> EvalResult<'tcx> {
         // Empty accesses don't need to be valid pointers, but they should still be aligned
         self.check_align(src, src_align)?;
@@ -617,7 +630,7 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
             .collect();
 
         let src_bytes = self.get_bytes_unchecked(src, size, src_align)?.as_ptr();
-        let dest_bytes = self.get_bytes_mut(dest, size, dest_align)?.as_mut_ptr();
+        let dest_bytes = self.get_bytes_mut(dest, size * length, dest_align)?.as_mut_ptr();
 
         // SAFE: The above indexing would have panicked if there weren't at least `size` bytes
         // behind `src` and `dest`. Also, we use the overlapping-safe `ptr::copy` if `src` and
@@ -634,13 +647,18 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
                     ));
                 }
             }
-            ptr::copy(src_bytes, dest_bytes, size.bytes() as usize);
+
+            for i in 0..length {
+                ptr::copy(src_bytes, dest_bytes.offset((size.bytes() * i) as isize), size.bytes() as usize);
+            }
         } else {
-            ptr::copy_nonoverlapping(src_bytes, dest_bytes, size.bytes() as usize);
+            for i in 0..length {
+                ptr::copy_nonoverlapping(src_bytes, dest_bytes.offset((size.bytes() * i) as isize), size.bytes() as usize);
+            }
         }
     }
 
-        self.copy_undef_mask(src, dest, size)?;
+        self.copy_undef_mask(src, dest, size * length)?;
Contributor: While this results in the correct result, it does n^2/2 copies instead of n copies. Inside the function itself we should probably move the `self.get(src.alloc_id)?` out of the loops, too. We can probably improve the nonoverlapping case enormously, too, by not requiring an intermediate allocation.
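One reading of "improve the nonoverlapping case enormously" is to double the already-initialized prefix, which brings the copy count down to O(log n). This is an assumed refinement, not code from this PR:

```rust
// Sketch: repeat-fill by doubling the initialized prefix, so a buffer
// of n elements needs O(log n) bulk copies instead of n - 1.
fn fill_repeat_doubling(dest: &mut [u8], elem: &[u8]) {
    let n = elem.len();
    assert!(n > 0 && dest.len() % n == 0);
    dest[..n].copy_from_slice(elem);
    let mut filled = n;
    while filled < dest.len() {
        // Copy as much of the filled prefix as still fits.
        let step = filled.min(dest.len() - filled);
        dest.copy_within(0..step, filled);
        filled += step;
    }
}

fn main() {
    let mut buf = [0u8; 16];
    fill_repeat_doubling(&mut buf, &[7, 9]);
    assert!(buf.chunks(2).all(|c| c == [7, 9]));
}
```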

         // copy back the relocations
         self.get_mut(dest.alloc_id)?.relocations.insert_presorted(relocations);
oli-obk (Contributor), Jun 27, 2018: I think you need to repeat this, too (and offset the indices).

Try a [&FOO; 500] (for non-ZST FOO) and then access any field but the first (at compile time! at runtime you'll get a segfault). If I'm reading the code correctly, this will tell you about a dangling pointer.

Member (Author): Got it, thanks! Can you double-check my math?
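A minimal compile-time probe along the lines the reviewer describes (hypothetical; not taken from the PR's test suite):

```rust
// Hypothetical probe: every element of ARR carries a relocation
// (a pointer to FOO). If relocations were copied only for the first
// element, const-evaluating ARR[1] would see a dangling pointer.
const FOO: u32 = 42;
const ARR: [&u32; 500] = [&FOO; 500];
const SECOND: u32 = *ARR[1];

fn main() {
    assert_eq!(SECOND, 42);
}
```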


@@ -864,18 +882,18 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
     ) -> EvalResult<'tcx> {
         // The bits have to be saved locally before writing to dest in case src and dest overlap.
Contributor: This comment makes me think that we should not do this commit; otherwise we'll run into trouble in the future (and in miri right now). Can you do an `if` for whether there is overlap and, if there is, just run the old code?

Member (Author): Hmm. I thought I preserved the existing behavior by cloning the source allocation's undef_mask before writing to the destination's. Is that sufficient?

Contributor: Oh right, sorry. I misread the code.

I still think the code isn't doing the right thing: it's only copying once, when it should be copying N - 1 times.

You can try this out by creating an array of types with padding; everything starting at the third element will probably not have undef masks for the padding. (You'll need unions to get at the bits and then attempt to use them for an array length to actually get a compiler error from that.)

Member (Author): I'm afraid I'm not quite following. We do call this function with size * length, so shouldn't it cover all of the repeated copies? Can you provide a sample program that will fail?

Contributor: Yes, you are using the length, but that just means the entire array is copied from 0..N to 1..=N, not that the first element is copied N times.

I'll make a regression test.

Contributor: I'm fairly certain that the following test will compile successfully on your PR: http://play.rust-lang.org/?gist=1d0183fcfb65164d1ca58ccd9614c33c
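The gist's contents aren't reproduced on this page; a hypothetical test in the spirit of the discussion might look like the following (the layout and names are assumptions, and the point is that const eval should *reject* it once undef masks are repeated correctly):

```rust
// Hypothetical probe: Pair has padding between `a` and `b`. Reading a
// padding (undef) byte during const eval is an error, so if the undef
// mask is not repeated for elements past the first, this would
// wrongly be accepted.
#[derive(Clone, Copy)]
struct Pair {
    a: u8,
    // 3 padding bytes here, assuming the obvious layout
    b: u32,
}

union Bytes {
    p: Pair,
    raw: [u8; 8],
}

const ARR: [Pair; 4] = [Pair { a: 1, b: 2 }; 4];
// Expected to be rejected by const eval: raw[1] is a padding byte.
const PAD: u8 = unsafe { Bytes { p: ARR[2] }.raw[1] };

fn main() {
    // Forcing PAD into an array length makes the error surface.
    let _ = [0u8; PAD as usize];
}
```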

         assert_eq!(size.bytes() as usize as u64, size.bytes());
-        let mut v = Vec::with_capacity(size.bytes() as usize);
+
+        let undef_mask = self.get(src.alloc_id)?.undef_mask.clone();
+        let dest_allocation = self.get_mut(dest.alloc_id)?;
+
         for i in 0..size.bytes() {
-            let defined = self.get(src.alloc_id)?.undef_mask.get(src.offset + Size::from_bytes(i));
-            v.push(defined);
-        }
-        for (i, defined) in v.into_iter().enumerate() {
-            self.get_mut(dest.alloc_id)?.undef_mask.set(
-                dest.offset +
-                Size::from_bytes(i as u64),
-                defined,
+            let defined = undef_mask.get(src.offset + Size::from_bytes(i));
oli-obk (Contributor), Jun 30, 2018: If you pass a repeat counter to the function, you should be able to just take `i` modulo the size here and have the for loop go from 0 to size.bytes() * repeat.
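A self-contained model of that suggestion, using a plain `bool` mask instead of the interpreter's `UndefMask` (the names are illustrative):

```rust
// Model of the suggestion: one loop covers the whole destination
// range, and `i % size` re-reads the same source bit for each
// repetition of the element.
fn repeat_mask(src: &[bool], repeat: usize) -> Vec<bool> {
    let size = src.len();
    (0..size * repeat).map(|i| src[i % size]).collect()
}

fn main() {
    let mask = [true, false, true];
    assert_eq!(
        repeat_mask(&mask, 2),
        vec![true, false, true, true, false, true]
    );
}
```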

+            dest_allocation.undef_mask.set(
+                dest.offset + Size::from_bytes(i),
+                defined
             );
         }
 
         Ok(())
     }

13 changes: 13 additions & 0 deletions src/librustc_target/abi/mod.rs

@@ -229,37 +229,44 @@ pub struct Size {
 impl Size {
     pub const ZERO: Size = Self::from_bytes(0);
 
+    #[inline]
     pub fn from_bits(bits: u64) -> Size {
         // Avoid potential overflow from `bits + 7`.
         Size::from_bytes(bits / 8 + ((bits % 8) + 7) / 8)
     }
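The rounding is split this way because the naive `(bits + 7) / 8` can overflow when `bits` is near `u64::MAX`. A standalone check of the arithmetic (`from_bits_to_bytes` is a stand-in, not rustc code):

```rust
// Round a bit count up to bytes without risking `bits + 7` overflow.
fn from_bits_to_bytes(bits: u64) -> u64 {
    bits / 8 + ((bits % 8) + 7) / 8
}

fn main() {
    assert_eq!(from_bits_to_bytes(12), 2); // 12 bits round up to 2 bytes
    assert_eq!(from_bits_to_bytes(16), 2); // exact multiples don't round up
    assert_eq!(from_bits_to_bytes(u64::MAX), u64::MAX / 8 + 1); // no overflow
}
```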

+    #[inline]
     pub const fn from_bytes(bytes: u64) -> Size {
         Size {
             raw: bytes
         }
     }
 
+    #[inline]
     pub fn bytes(self) -> u64 {
         self.raw
     }
 
+    #[inline]
     pub fn bits(self) -> u64 {
         self.bytes().checked_mul(8).unwrap_or_else(|| {
             panic!("Size::bits: {} bytes in bits doesn't fit in u64", self.bytes())
         })
     }
 
+    #[inline]
     pub fn abi_align(self, align: Align) -> Size {
         let mask = align.abi() - 1;
         Size::from_bytes((self.bytes() + mask) & !mask)
     }
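`abi_align` and `is_abi_aligned` both rely on the alignment being a power of two, so `align - 1` works as a bit mask. A standalone sketch of the trick (`align_up` is illustrative, not rustc's API):

```rust
// Round `bytes` up to the next multiple of a power-of-two `align`.
fn align_up(bytes: u64, align: u64) -> u64 {
    debug_assert!(align.is_power_of_two());
    let mask = align - 1;
    (bytes + mask) & !mask
}

fn main() {
    assert_eq!(align_up(5, 4), 8);
    assert_eq!(align_up(8, 4), 8);   // already aligned
    assert_eq!(align_up(0, 16), 0);
    // is_abi_aligned is the same mask test: bytes & (align - 1) == 0
    assert!(8 & (4 - 1) == 0);
}
```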

+    #[inline]
     pub fn is_abi_aligned(self, align: Align) -> bool {
         let mask = align.abi() - 1;
         self.bytes() & mask == 0
     }
 
+    #[inline]
     pub fn checked_add<C: HasDataLayout>(self, offset: Size, cx: C) -> Option<Size> {
         let dl = cx.data_layout();
 
@@ -272,6 +279,7 @@ impl Size {
         }
     }
 
+    #[inline]
     pub fn checked_mul<C: HasDataLayout>(self, count: u64, cx: C) -> Option<Size> {
         let dl = cx.data_layout();
 
@@ -289,6 +297,7 @@ impl Size {
 
 impl Add for Size {
     type Output = Size;
+    #[inline]
     fn add(self, other: Size) -> Size {
         Size::from_bytes(self.bytes().checked_add(other.bytes()).unwrap_or_else(|| {
             panic!("Size::add: {} + {} doesn't fit in u64", self.bytes(), other.bytes())
@@ -298,6 +307,7 @@ impl Add for Size {
 
 impl Sub for Size {
     type Output = Size;
+    #[inline]
     fn sub(self, other: Size) -> Size {
         Size::from_bytes(self.bytes().checked_sub(other.bytes()).unwrap_or_else(|| {
             panic!("Size::sub: {} - {} would result in negative size", self.bytes(), other.bytes())
@@ -307,13 +317,15 @@ impl Sub for Size {
 
 impl Mul<Size> for u64 {
     type Output = Size;
+    #[inline]
     fn mul(self, size: Size) -> Size {
         size * self
     }
 }
 
 impl Mul<u64> for Size {
     type Output = Size;
+    #[inline]
     fn mul(self, count: u64) -> Size {
         match self.bytes().checked_mul(count) {
             Some(bytes) => Size::from_bytes(bytes),
@@ -325,6 +337,7 @@ impl Mul<u64> for Size {
 }
 
 impl AddAssign for Size {
+    #[inline]
     fn add_assign(&mut self, other: Size) {
         *self = *self + other;
     }
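The operator impls above panic on overflow so that call sites stay terse, while `checked_mul` returns an `Option` for callers that can recover. A simplified standalone model of that split (this `Size` is a stand-in, not the rustc type):

```rust
use std::ops::Mul;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Size { raw: u64 }

impl Size {
    fn bytes(self) -> u64 { self.raw }
    // Option-returning form for callers that can recover.
    fn checked_mul(self, count: u64) -> Option<Size> {
        self.raw.checked_mul(count).map(|raw| Size { raw })
    }
}

// Panicking operator form for callers where overflow is a bug.
impl Mul<u64> for Size {
    type Output = Size;
    fn mul(self, count: u64) -> Size {
        match self.bytes().checked_mul(count) {
            Some(raw) => Size { raw },
            None => panic!("Size::mul: {} * {} overflows u64", self.bytes(), count),
        }
    }
}

fn main() {
    let elem = Size { raw: 8 };
    assert_eq!(elem * 4, Size { raw: 32 });
    assert_eq!(Size { raw: u64::MAX }.checked_mul(2), None);
}
```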