Cache verification of unchanged methods #799
Conversation
With the above changes the

Bummer. You could try sorting the
I should really start doing these checks locally...
```
@@ -255,7 +255,7 @@ impl<'p, 'v: 'p, 'tcx: 'v> SpecificationEncoder<'p, 'v, 'tcx> {
            self.encode_quantifier_arg(
                *arg,
                arg_ty,
                &format!("{}_{}", vars.spec_id, vars.pre_id),
```
Have you checked whether this can lead to strange name conflicts in the encoded programs that are insanely hard to understand at the source level?
It would be great to have a bunch of tests that check this.
I think that might be possible; I'll take a look at creating some test cases for this. However, in general I think this is very hard to avoid with deterministic names – one could just look at the generated Viper code and name another local variable in a way that clashes. The two solutions I see to avoid this: (A) use a hash of the HIR/MIR of the function as a kind of 'spec_id' for these cases (then if another variable is defined after the fact, the hash will change), or (B) look at all defined variables in scope and pick a name that doesn't clash.
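As a rough illustration of option (A), assuming we just hash some stable textual representation of the function body (the names, types, and hashing scheme below are illustrative, not Prusti's actual API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Option (A), sketched: derive a spec-id-like value from the function body
// itself (a string stands in for the HIR/MIR here). If the body changes, the
// hash changes, so a name copied from older generated Viper code no longer
// lines up with the new encoding.
fn body_spec_id(body_repr: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    body_repr.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let spec_id = body_spec_id("fn foo(x: u32) -> u32 { x + 1 }");
    println!("quantifier variable prefix: _forall_{:x}", spec_id);
}
```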
> (B) look at all defined variables in scope and pick a name that doesn't clash.

We could do this when lowering to vir_legacy – at that point, we have all variable names present.
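A minimal sketch of that idea (standalone, not Prusti code): given the set of names already in scope at lowering time, pick the first variant of a base name that is still free.

```rust
use std::collections::HashSet;

// Option (B), sketched: scan the names already in scope and derive a fresh one.
fn fresh_name(base: &str, in_scope: &HashSet<String>) -> String {
    if !in_scope.contains(base) {
        return base.to_string();
    }
    (1..)
        .map(|i| format!("{}_{}", base, i))
        .find(|candidate| !in_scope.contains(candidate))
        .unwrap()
}

fn main() {
    let in_scope: HashSet<String> =
        ["q".to_string(), "q_1".to_string()].into_iter().collect();
    assert_eq!(fresh_name("q", &in_scope), "q_2");
}
```

Note that freshness depends on what is in scope at encoding time, so by itself this does not give the deterministic, content-stable names that caching wants.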
> one could just look at the generated Viper code and name another local variable in a way which clashes

Would that still be possible if we add a prefix `quantified_` (or just `q_`) to all quantified variables?
Even with that, as Vytautas asked, I wonder what happens to the encoding of `forall x: u32 :: forall x: i32 :: ...`. Will the Viper names clash?
In my WIP branch, quantifier variables are named with `format!("_{}_quant_{}", arg_idx, body_def_id.index.index())`, where `arg_idx` is the position in the quantifier's arguments and `body_def_id` is the `DefId` of the closure containing the quantifier body. It is a local `DefId`, so `index` is the only number we use here.
@Aurel300 Hmm, but using the `DefId` is problematic in this case since it won't be stable if other stuff in the file/crate is added/deleted. I'm not exactly sure what the position in the quantifier's args (`arg_idx`) means (do you mean that for `forall(|a: i32, b: i32| forall(|c: i32| ...))`, `a` is 1, `b` is 2, and `c` is 1?), but could we not use only this, together with a `quantifier_nesting_count` in the name as well?
Yes, it wasn't chosen with caching in mind, and it is still better than the original approach of random UUIDs. A nesting count would work, but I'm not sure it would be easy to introduce: when encoding specifications into Viper expressions we generally work inside out, so it's hard for the inner `forall` to know its nesting depth.
The current `vars.pre_id` seems to do this already (at least judging from the generated Viper). I'm not sure how it's implemented, but maybe we could just copy the ideas from that?
I thought `vars.pre_id` corresponded to another specification UUID? Maybe it's being interned somewhere?
I have one small additional feature request for the verification caching, if time allows :) Would it be possible to introduce a new Prusti configuration flag that disables the caching for a given run? The use case for me is that I'm currently trying to tweak the performance of some parts of Prusti, but if the benchmark I'm running is already in the cache, then I won't be able to see the actual performance difference 😄 I could also add the configuration flag myself in a separate PR, if that is more convenient :) Thanks!
@Pointerbender good suggestion, I'll add such a flag.
Can't wait to use the verification cache on CI :)
`print_hash` skips verification and just prints the hash of the verification request. `disable_cache` disables caching, to make it possible to debug performance. Both should be documented in the manual.
This was to test if the error reporting still works with caching - it works
Caching should work quite nicely now. Saving to disk is done by implementing a destructor here: prusti-dev/viper/src/verification_result.rs, lines 129–139 (commit 0eea62c).
I'm not sure how well that interacts with the Prusti server and IDE plugin? The test I added still fails in some cases, I think. Error reporting seems to work out of the box with cached results. I'm going on holiday for two weeks from tomorrow, so I won't be able to finish merging before I'm back, but I'd be happy to let someone else take over – most of the work should be done (and this branch can be used just fine if needed), and I'll be online if needed.
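For readers unfamiliar with the destructor trick mentioned above, here is a minimal sketch of the idea only – this is not the code in viper/src/verification_result.rs; the map contents, file name, and the serde_json dependency are placeholders:

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufReader, BufWriter};

// Sketch: an in-memory result cache that is loaded from disk on startup and
// written back in its destructor, so it persists across runs even if the
// caller never calls an explicit "save" method.
struct ResultCache {
    path: String,
    entries: HashMap<String, bool>, // request hash -> verification succeeded?
}

impl ResultCache {
    fn load(path: &str) -> Self {
        let entries = File::open(path)
            .ok()
            .and_then(|f| serde_json::from_reader(BufReader::new(f)).ok())
            .unwrap_or_default();
        ResultCache { path: path.to_string(), entries }
    }
}

impl Drop for ResultCache {
    fn drop(&mut self) {
        // Failing to persist the cache should never fail verification itself.
        if let Ok(f) = File::create(&self.path) {
            let _ = serde_json::to_writer(BufWriter::new(f), &self.entries);
        }
    }
}

fn main() {
    let mut cache = ResultCache::load("prusti_cache.json");
    cache.entries.insert("some_request_hash".to_string(), true);
    // `cache` is dropped here, which writes the updated map back to disk.
}
```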
The CI is now failing on #827
The binary file should take up much less space than a JSON file, and can easily use compression in the future if required.
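As a rough illustration of the size argument (assuming the serde_json crate and bincode 1.x are available; the cache contents are made up):

```rust
use std::collections::HashMap;

fn main() {
    let mut cache: HashMap<String, bool> = HashMap::new();
    cache.insert("a1b2c3d4e5f6_request_hash".to_string(), true);

    // JSON spells out keys, quotes, and booleans as text; bincode stores
    // length-prefixed bytes, so the same map is noticeably smaller.
    let json = serde_json::to_vec(&cache).unwrap();
    let binary = bincode::serialize(&cache).unwrap();
    println!("json: {} bytes, bincode: {} bytes", json.len(), binary.len());
}
```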
The reason the cache is not saved when sending a … — see prusti-dev/prusti-launch/src/lib.rs, line 207 (commit be28ed5).
If we do so, we should also remove the code at prusti-dev/prusti-launch/src/lib.rs, line 218 (commit be28ed5).
(A related issue is #754)
I tried replacing the
I've implemented the latter for now as it was easier, but we may want to consider switching in the future (the advantage would be when running prusti-server from the command line rather than from the IDE).
The caching appears to also affect the automatic benchmarks (especially the speedup for Knights_tour.rs is very impressive). Would it maybe be a good idea to set the
Thank you, @tillarnold, for noticing this. @JonasAlaif Could you please fix this?
This caching will be implemented at the `VerificationRequest` level.
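Concretely, the idea is that the cache key is derived from the whole request, so an unchanged method hashes to the same key on the next run and its previous result can be reused. A rough sketch under those assumptions – the struct fields and hashing below are illustrative, not the real `VerificationRequest` type:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for the real VerificationRequest: whatever uniquely
// determines the verification outcome must contribute to the hash.
#[derive(Hash)]
struct VerificationRequest {
    program_name: String,
    viper_program: String,  // placeholder for the encoded Viper program
    backend_config: String, // e.g. which backend and flags were used
}

fn request_key(req: &VerificationRequest) -> u64 {
    let mut h = DefaultHasher::new();
    req.hash(&mut h);
    h.finish()
}

fn main() {
    let mut cache: HashMap<u64, bool> = HashMap::new();
    let req = VerificationRequest {
        program_name: "knights_tour".to_string(),
        viper_program: "method m() { ... }".to_string(),
        backend_config: "silicon, default flags".to_string(),
    };
    let key = request_key(&req);
    if let Some(result) = cache.get(&key) {
        println!("cache hit: verified = {}", result);
    } else {
        // ... run the actual verifier here, then record its outcome ...
        cache.insert(key, true);
    }
}
```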