incr.comp.: Speed up result hashing and DepNode computation by caching certain stable hashes #47294
Comments
I'd like to take this. Is there a good test case I can use to measure the speedup?
Great, thank you @wesleywiser! You should see speedups in all incremental builds; in debug builds they should be more pronounced. In builds with an empty cache, the absolute compile-time difference should be greater because those builds have to do the most result hashing. In builds with a full cache there should also be a speedup, but it might be small: result hashes are loaded from disk in that case, but the compiler still needs to hash types when constructing DepNode UUIDs.

I also recommend that you just do the implementation and, when it works, open a PR. Then we'll let @bors do a try-build and run that through perf.rlo, which will give us a broader idea of the performance impact.

The two implementations that need updating are: `rust/src/librustc/ich/impls_ty.rs`, lines 23 to 31 in 619ced0, and lines 1475 to 1491 in 619ced0.
The nice thing about these interned data structures is that we can use their memory address as the hashmap key. I suggest you get the address like this:

```rust
impl<'gcx> HashStable<StableHashingContext<'gcx>> for AdtDef {
    fn hash_stable<W: StableHasherResult>(&self,
                                          hcx: &mut StableHashingContext<'gcx>,
                                          hasher: &mut StableHasher<W>) {
        // Interned values are unique and live for the whole compilation
        // session, so their address is a stable identity to key a cache on.
        let cache_key = self as *const AdtDef as usize;
        ...
    }
}
```

The rest should look very similar to the implementation for expansion contexts: lines 357 to 380 in 2e33c89.
Please cache the full `Fingerprint` though, not just half of it as in the implementation above.
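
To make the whole pattern concrete, here is a minimal, self-contained sketch of the lookup-or-compute-and-store shape being suggested. `HashingContext`, `hash_adt_def`, and the tuple-based `Fingerprint` are illustrative stand-ins, not rustc's actual types; in rustc the cache would presumably hang off the stable hashing context and the cached value would be fed into the outer `StableHasher`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Stand-ins for rustc's interned AdtDef and its 128-bit Fingerprint;
// both are deliberate simplifications, not the real rustc types.
struct AdtDef {
    name: String,
}
type Fingerprint = (u64, u64);

// A per-session cache keyed by the value's memory address. Interned
// AdtDefs live for the whole compilation session and are unique per
// value, so the address is a stable, collision-free key.
struct HashingContext {
    adt_def_cache: HashMap<usize, Fingerprint>,
}

impl HashingContext {
    fn hash_adt_def(&mut self, adt: &AdtDef) -> Fingerprint {
        let cache_key = adt as *const AdtDef as usize;
        if let Some(&cached) = self.adt_def_cache.get(&cache_key) {
            // Cache hit: skip re-hashing the whole structure.
            return cached;
        }
        // Cache miss: do the expensive structural hash once...
        let mut hasher = DefaultHasher::new();
        adt.name.hash(&mut hasher);
        // ...and store the *full* fingerprint, as requested above.
        let fingerprint = (hasher.finish(), 0);
        self.adt_def_cache.insert(cache_key, fingerprint);
        fingerprint
    }
}
```
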
Incremental compilation often spends quite a bit of time computing stable hashes of various things. Profiling shows that a large part of this time is spent hashing `AdtDef` and `Substs`/`ty::Slice` values. In both cases it is likely that the same values are hashed over and over again. It's worth investigating whether doing some caching here, as we do for macro expansion contexts, is viable and brings speedups.
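
For context on why a memory address is a sound cache key here: these values are interned, so structurally equal values share a single allocation and therefore a single address. Below is a minimal, self-contained illustration of that property, using a hypothetical toy interner rather than rustc's:

```rust
use std::collections::HashSet;

// A toy interner: hands out one shared allocation per distinct value,
// so pointer equality implies (and is implied by) value equality for
// everything it returns.
struct Interner {
    storage: HashSet<&'static str>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> &'static str {
        if let Some(&existing) = self.storage.get(s) {
            return existing;
        }
        let leaked: &'static str = Box::leak(s.to_owned().into_boxed_str());
        self.storage.insert(leaked);
        leaked
    }
}

fn main() {
    let mut interner = Interner { storage: HashSet::new() };
    let a = interner.intern("Vec<u32>");
    let b = interner.intern("Vec<u32>");
    // Same value => same allocation => same address, so the address is
    // a valid cache key for anything derived from the value, such as
    // its stable hash.
    assert!(std::ptr::eq(a, b));
}
```
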