Solve bit decomposition with negative coefficients. #2554

Open · wants to merge 22 commits into base: main
Conversation

chriseth (Member):

No description provided.

"\
S::Y[0] = params[0];
S::Z[0] = params[1];
S::X[0] = (-(S::Y[0] + -S::Z[0]) & 0xff);
chriseth (Member Author):

This looks kind of correct. The only issue is that we have to ensure that the & 0xff is done on the two's complement representation. Maybe we should have a special operation for that.
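A minimal standalone sketch of the issue (Goldilocks assumed; not the PR's actual code): masking the field encoding of a negative value gives a different result than masking its two's complement representation.

```rust
fn main() {
    const P: u64 = 0xFFFF_FFFF_0000_0001; // Goldilocks prime, 2^64 - 2^32 + 1
    let minus_one_field = P - 1; // field encoding of -1
    let minus_one_twos = (-1i64) as u64; // two's complement encoding of -1
    // The low byte of -1 should be 0xff, but the field encoding yields 0x00:
    assert_eq!(minus_one_field & 0xff, 0x00);
    assert_eq!(minus_one_twos & 0xff, 0xff);
}
```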

S::Z[0] = params[1];
S::X[0] = (-(S::Y[0] + -S::Z[0]) & 0xff);
S::carry[0] = -((-(S::Y[0] + -S::Z[0]) & 0x100) // 256);
assert (-(S::Y[0] + -S::Z[0]) & 0xfffffffffffffe00) == 0;
chriseth (Member Author):

On the other hand, if we use two's complement, then this won't work.

@chriseth chriseth force-pushed the solve_bit_decomposition_with_negative_numbers branch from 1c1ca8e to bf43c73 Compare March 17, 2025 14:26
chriseth (Member Author) commented Mar 18, 2025:

OK, I think this is working. What is still missing:

  • interpreter (will do in separate PR)
  • test with negative target value

if r.complete {
r
} else {
let negated = -self;
let r = negated.solve_bit_decomposition()?;
chriseth (Member Author):

The negated version is now covered already by the non-negated check.

@@ -280,18 +284,20 @@ impl<T: FieldElement, V: Ord + Clone + Display> AffineSymbolicExpression<T, V> {
// We need to assert that the masks cover "-offset",
// otherwise the equation is not solvable.
// We assert -offset & !masks == 0

// TODO can this be done even with negative coefficients?
chriseth (Member Author):

I think if offset.to_integer() & !covered_bits != 0.into(), then we certainly have a failure, but we still have to do runtime checks (which we do). But please double-check.

georgwiese (Collaborator):

What if offset is negative?

For example, if we had the constraint -1 = bit0 - 2 * bit1, then I think the covered bits would be 0b11 (because we ignore the signs), but we'd have offset.to_integer() == 0xffff0000 (for Goldilocks), so the check would fail?
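The example can be checked with a small standalone sketch (hypothetical values, Goldilocks assumed; not the solver's actual code). For -1 = bit0 - 2 * bit1, which is solvable with bit0 = bit1 = 1, the affine offset is 1, so -offset is the field encoding of -1 and has high bits set:

```rust
fn main() {
    const P: u64 = 0xFFFF_FFFF_0000_0001; // Goldilocks prime
    let neg_offset = P - 1; // field encoding of -1
    let covered_bits: u64 = 0b11; // masks of bit0 and 2 * bit1, signs ignored
    // The static check "(-offset) & !covered_bits == 0" misfires here:
    let conflict_reported = neg_offset & !covered_bits != 0;
    assert!(conflict_reported); // false positive: the constraint is solvable
}
```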

chriseth (Member Author):

Yeah, that's the point. So I think I would change this check to only handle the old case, where there are no negative coefficients. The rest has to be handled at run time.

georgwiese (Collaborator) left a comment:

Looks cool!

I think it's nice to have an effect for these decompositions.

I think bases other than two could make sense for constraint systems that have a higher degree bound anyway: instead of bits, we could have limbs in base 3, for example, range-constrained as limb * (limb - 1) * (limb - 2) = 0. But I think the BitDecomposition effect could be renamed and modified to accommodate that if needed?
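As a quick standalone illustration of such a base-3 range constraint (a sketch, not project code):

```rust
// limb * (limb - 1) * (limb - 2) = 0 holds exactly for limb in {0, 1, 2}.
fn satisfies_base3_constraint(limb: i64) -> bool {
    limb * (limb - 1) * (limb - 2) == 0
}

fn main() {
    assert!((0..=2).all(satisfies_base3_constraint));
    assert!(!satisfies_base3_constraint(3));
    assert!(!satisfies_base3_constraint(-1));
}
```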

I'm still meditating over what exactly it means to have decomposition with negative coefficients. My current understanding is that it is mainly useful to represent negative numbers, i.e. do something like value = byte0 + 2**8 * byte1 - is_negative * 2**16 (where is_negative is bit-constrained). I wonder if we still need to distinguish whether we have negative coefficients, as we currently do when compiling to Rust?
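One way to picture solving such a decomposition (a hypothetical sketch over plain integers, not the solver's field arithmetic):

```rust
// Solve value = byte0 + 2^8 * byte1 - is_negative * 2^16 for a value in
// [-2^16, 2^16 - 1]: peel off the sign component first, then mask the bytes.
fn decompose(value: i64) -> Option<(u64, u64, u64)> {
    let (is_negative, rest) = if value < 0 {
        (1u64, value + (1 << 16)) // add back the subtracted 2^16
    } else {
        (0u64, value)
    };
    if !(0..1 << 16).contains(&rest) {
        return None; // out of range, no solution
    }
    Some(((rest & 0xff) as u64, ((rest >> 8) & 0xff) as u64, is_negative))
}

fn main() {
    // -1 = 0xff + 2^8 * 0xff - 1 * 2^16
    assert_eq!(decompose(-1), Some((0xff, 0xff, 1)));
    assert_eq!(decompose(0x1234), Some((0x34, 0x12, 0)));
}
```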

/// The exponent of two, which forms the coefficient of the variable.
pub exponent: u64,
/// The bit mask for this component, relative to the value to be decomposed,
/// i.e. already scaled by the coefficient.
georgwiese (Collaborator):

I'd find this more intuitive if this was the mask before scaling, and I don't see a disadvantage to that!

chriseth (Member Author):

You mean if this was the mask of the range constraint of the variable?

georgwiese (Collaborator):

Yes!

chriseth (Member Author):

Hmm, I started changing it, but it's quite a substantial change, because then we would need to shift first and apply the bit mask afterwards. This means we need a signed shift, if I see it correctly.
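The difference in order of operations can be sketched as follows (standalone illustration over plain unsigned integers; the signed-shift complication arises once the value can encode a negative number):

```rust
fn main() {
    let value: u64 = 0xAB12; // value to decompose, component at coefficient 2^8
    let exponent = 8;
    // Mask scaled by the coefficient (current representation): mask, then shift.
    let scaled_mask: u64 = 0xFF00;
    let via_scaled = (value & scaled_mask) >> exponent;
    // Unscaled mask (the variable's own range constraint): shift, then mask.
    // For negative values this first shift would have to be a signed shift.
    let unscaled_mask: u64 = 0xFF;
    let via_unscaled = (value >> exponent) & unscaled_mask;
    assert_eq!(via_scaled, via_unscaled); // both 0xAB
}
```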

value: &SymbolicExpression<T, Variable>,
components: &[BitDecompositionComponent<T, Variable>],
) -> String {
let mut result = format!("let bit_decomp_value = {};\n", format_expression(value));
georgwiese (Collaborator):

Hmm, so one effect compiles to 2 + 3 * <num components> Rust statements. I guess the reason that's not a function is so that we can unroll the loop and select the bitand variant at compile time? Then maybe at least we can add a comment in front of this block (like what format_code outputs)?

chriseth (Member Author):

I mean we don't group things in functions anywhere (except the prover functions, which are functions to begin with...).

georgwiese (Collaborator):

Yeah, but I think we should :) That would produce more readable code (both the code generating the Rust code and the generated Rust code) and I think it would be more friendly to the compiler? But this shouldn't block this PR.

Comment on lines +506 to +510
let signed = if components.iter().any(|c| c.is_negative) {
"signed"
} else {
"unsigned"
};
georgwiese (Collaborator):

Wouldn't we be able to always use the signed version?

With this, solving v = byte0 + 256 * byte1 and -v = -byte0 - 256 * byte1 would result in different code, but the constraints are equivalent.

Why do we still need the unsigned version? I think it doesn't matter as long as we don't get near $(p - 1) / 2$?

chriseth (Member Author):

Yes, but the signed version is much more expensive.
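One plausible reading of the cost difference (a hypothetical sketch, not the generated code): the signed variant first has to map each field element into a two's complement integer before masking, which adds a widening, a comparison, and a branch per decomposition.

```rust
// Map a Goldilocks element to the signed integer it encodes: values above
// p / 2 are interpreted as negative.
fn to_twos_complement(x: u64) -> i128 {
    const P: i128 = 0xFFFF_FFFF_0000_0001;
    let x = x as i128;
    if x > P / 2 { x - P } else { x }
}

fn main() {
    const P: u64 = 0xFFFF_FFFF_0000_0001;
    assert_eq!(to_twos_complement(P - 1), -1);
    assert_eq!(to_twos_complement(5), 5);
    // Only now can "& 0xff" be applied to the two's complement bits.
    assert_eq!((to_twos_complement(P - 1) as u64) & 0xff, 0xff);
}
```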

georgwiese (Collaborator):

I see! Please add a comment that that's why :) Wasn't obvious to me, but makes sense.


chriseth (Member Author):
> I wonder if we still need to distinguish whether we have negative coefficients, as we currently do when compiling to Rust?

This is just an optimization, the code should be equivalent.

if (-offset).to_integer() & !covered_bits != 0.into() {
return Err(Error::ConflictingRangeConstraints);
if !components.iter().any(|c| c.is_negative) {
// If all coefficients are positive and the offset is known, we can check
chriseth (Member Author):

I mean if the offset is a known number, we can actually determine that in all cases, but the ones with negative coefficients are more complicated...

chriseth (Member Author):

Oh, if the offset is a known constant, we could just "run" the algorithm and make all the components known constants as well. Let's do that if we need it.
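A rough sketch of that idea (hypothetical helper over plain integers, unsigned 8-bit components only; not project code):

```rust
// If the offset is a known constant, run the masking algorithm at compile
// time: peel an 8-bit limb at each exponent; a nonzero residue means the
// decomposition fails.
fn fold_constant(value: u64, exponents: &[u32]) -> Option<Vec<u64>> {
    let mut rest = value;
    let mut components = Vec::new();
    for &e in exponents {
        components.push((rest >> e) & 0xff);
        rest &= !(0xffu64 << e);
    }
    (rest == 0).then_some(components)
}

fn main() {
    assert_eq!(fold_constant(0x1234, &[0, 8]), Some(vec![0x34, 0x12]));
    assert_eq!(fold_constant(0x1_0000, &[0, 8]), None); // bit 16 not covered
}
```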
