Solve bit decomposition with negative coefficients. #2554
base: main
Conversation
"\ | ||
S::Y[0] = params[0]; | ||
S::Z[0] = params[1]; | ||
S::X[0] = (-(S::Y[0] + -S::Z[0]) & 0xff); |
This looks kind of correct. The only issue is that we have to ensure that the & 0xff is done on the two's complement representation. Maybe we should have a special operation for that.
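As a side note, a minimal sketch of the concern in plain Rust (the Goldilocks modulus is used here only as an example field; this is not powdr code):

```rust
fn main() {
    // Masking the canonical field representative of -1 is not the same as
    // masking its two's complement representation.
    const GOLDILOCKS: u64 = 0xffff_ffff_0000_0001;
    let canonical = GOLDILOCKS - 1; // -1 as a canonical Goldilocks element
    let twos_complement = (-1i64) as u64; // -1 in 64-bit two's complement
    assert_eq!(canonical & 0xff, 0x00); // not the low byte one would expect
    assert_eq!(twos_complement & 0xff, 0xff); // the intended low byte
}
```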
S::Z[0] = params[1];
S::X[0] = (-(S::Y[0] + -S::Z[0]) & 0xff);
S::carry[0] = -((-(S::Y[0] + -S::Z[0]) & 0x100) // 256);
assert (-(S::Y[0] + -S::Z[0]) & 0xfffffffffffffe00) == 0;
On the other hand, if we use two's complement, then this won't work.
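To spell that out with a minimal sketch (plain Rust, not the generated code): any negative intermediate value has all of its high bits set in two's complement, so an "upper bits are zero" assertion like the one above cannot pass.

```rust
fn main() {
    // Under two's complement, a negative value always has its high bits set,
    // so masking with 0xfffffffffffffe00 cannot yield zero.
    let x: i64 = -1;
    assert_ne!((x as u64) & 0xffff_ffff_ffff_fe00, 0);
}
```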
Branch updated from 1c1ca8e to bf43c73 (compare).
OK, I think this is working. What is still missing:
if r.complete {
    r
} else {
    let negated = -self;
    let r = negated.solve_bit_decomposition()?;
The negated version is now covered already by the non-negated check.
@@ -280,18 +284,20 @@ impl<T: FieldElement, V: Ord + Clone + Display> AffineSymbolicExpression<T, V> {
// We need to assert that the masks cover "-offset",
// otherwise the equation is not solvable.
// We assert -offset & !masks == 0

// TODO can this be done even with negative coefficients?
I think if offset.to_integer() & !covered_bits != 0.into(), then we certainly have a failure, but we still have to do runtime checks (which we do). But please double-check.
What if offset is negative?
For example, if we had the constraint -1 = bit0 - 2 * bit1, then I think the covered bits would be 0b11 (because we ignore the signs), but we'd have offset.to_integer() == 0xffffffff00000000 (for Goldilocks), so the check would fail?
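A numeric sketch of this counterexample (plain Rust; the modulus value and names are only illustrative):

```rust
fn main() {
    // -1 = bit0 - 2 * bit1 is satisfiable (bit0 = 1, bit1 = 1), but the
    // canonical representative of the offset -1 has bits far outside the
    // covered range, so the static check would reject a solvable constraint.
    const GOLDILOCKS: u64 = 0xffff_ffff_0000_0001;
    let offset = GOLDILOCKS - 1; // canonical representative of -1
    let covered_bits: u64 = 0b11; // coefficients 1 and -2, signs ignored
    assert_eq!(1i64 - 2 * 1, -1); // the constraint has a solution...
    assert_ne!(offset & !covered_bits, 0); // ...yet the check fails
}
```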
Yeah that's the point. So I think I would change this check to only handle the old case where there are no negative coefficients. The rest has to be handled at run time.
Looks cool!
I think it's nice to have an effect for these decompositions.
I think bases other than two could make sense for constraint systems that anyway have a higher degree bound: instead of bits, we could have limbs in base 3, for example, range-constrained as limb * (limb - 1) * (limb - 2) = 0. But I think the BitDecomposition effect could be renamed and modified to accommodate that if needed?
I'm still meditating over what exactly it means to have decomposition with negative coefficients. My current understanding is that it is mainly useful to represent negative numbers, i.e. do something like value = byte0 + 2**8 * byte1 - is_negative * 2**16 (where is_negative is bit-constrained). I wonder if we still need to distinguish whether we have negative coefficients, as we currently do when compiling to Rust?
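As a concrete instance of that reading (a sketch with illustrative values, not powdr code): byte0 = 0xff, byte1 = 0xff and is_negative = 1 together represent -1.

```rust
fn main() {
    // Sign-bit reading of the decomposition: 0xff + 0xff00 - 0x10000 = -1.
    let (byte0, byte1, is_negative): (i64, i64, i64) = (0xff, 0xff, 1);
    assert_eq!(byte0 + (1 << 8) * byte1 - is_negative * (1 << 16), -1);
}
```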
/// The exponent of two, which forms the coefficient of the variable.
pub exponent: u64,
/// The bit mask for this component, relative to the value to be decomposed,
/// i.e. already scaled by the coefficient.
I'd find this more intuitive if this was the mask before scaling, and I don't see a disadvantage to that!
You mean if this was the mask of the range constraint of the variable?
Yes!
Hm, I started changing it, but it's quite substantial, because then we need to shift first and apply the bit mask afterwards. This means we need a signed shift, if I see it correctly.
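To make the shift-order point concrete, a sketch with a positive coefficient only (values and names are illustrative): with the mask stored relative to the decomposed value we mask first and shift second; with an unscaled mask the order reverses, and with negative coefficients that shift is where the sign handling would have to happen.

```rust
fn main() {
    // Component with coefficient 2^8 and an 8-bit variable mask.
    let value: u64 = 0xabcd;
    let exponent = 8;
    let scaled_mask = 0xffu64 << exponent; // mask relative to the value (current layout)
    let unscaled_mask = 0xffu64; // mask of the variable itself (proposed layout)
    // Both extract the same component here; only the order of mask and shift differs.
    assert_eq!((value & scaled_mask) >> exponent, (value >> exponent) & unscaled_mask);
}
```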
value: &SymbolicExpression<T, Variable>,
components: &[BitDecompositionComponent<T, Variable>],
) -> String {
    let mut result = format!("let bit_decomp_value = {};\n", format_expression(value));
Hmm, so one effect compiles to 2 + 3 * <num components> Rust statements. I guess the reason that's not a function is so that we can unroll the loop and select the bitand variant at compile time? Then maybe at least we can add a comment in front of this block (like what format_code outputs)?
I mean we don't group things in functions anywhere (except the prover functions, which are functions to begin with...).
Yeah, but I think we should :) That would produce more readable code (both the code generating the Rust code and the generated Rust code) and I think it would be more friendly to the compiler? But this shouldn't block this PR.
let signed = if components.iter().any(|c| c.is_negative) {
    "signed"
} else {
    "unsigned"
};
Wouldn't we be able to always use the signed version?
With this, solving v = byte0 + 256 * byte1 and -v = -byte0 - 256 * byte1 would result in different code, but the constraints are equivalent.
Why do we still need the unsigned version? I think it doesn't matter as long as we don't get near
Yes, but the signed version is much more expensive.
I see! Please add a comment that that's why :) Wasn't obvious to me, but makes sense.
This is just an optimization, the code should be equivalent.
if (-offset).to_integer() & !covered_bits != 0.into() {
    return Err(Error::ConflictingRangeConstraints);
if !components.iter().any(|c| c.is_negative) {
    // If all coefficients are positive and the offset is known, we can check
I mean if the offset is a known number, we can actually determine that in all cases, but the ones with negative coefficients are more complicated...
Oh, if the offset is a known constant, we could just "run" the algorithm and make all the components known constants as well. Let's do that if we need it.
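A sketch of that idea for the simple case only (all coefficients non-negative and the offset a known constant); the Component layout and names below are assumptions for illustration, not the actual powdr types:

```rust
/// Illustrative component: the variable's coefficient is 2^exponent and its
/// range constraint mask (before scaling) is `mask`.
struct Component {
    exponent: u32,
    mask: u64,
}

/// Evaluate the decomposition directly for a known constant offset.
/// Returns None if some bits of the offset are not covered by any component.
fn decompose_constant(offset: u64, components: &[Component]) -> Option<Vec<u64>> {
    let mut remaining = offset;
    let values: Vec<u64> = components
        .iter()
        .map(|c| {
            let value = (remaining >> c.exponent) & c.mask;
            remaining &= !(c.mask << c.exponent);
            value
        })
        .collect();
    (remaining == 0).then_some(values)
}
```

For example, decomposing the constant 0x1234 into two byte components (exponents 0 and 8, masks 0xff) would yield Some([0x34, 0x12]).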