# linineq.py

`linineq.py` contains functions for solving linear inequalities in integers. It utilizes methods outlined here on pages 80-81, combining lattice reduction and linear programming to first reduce the problem to a simpler one and then solve it.
All of the following solver functions are provided:
- `solve_eq_ineq(M, Mineq, b, bineq)` solves $\mathbf{Mx} = \mathbf{b}$ and $\mathbf{M_{ineq}x} \ge \mathbf{b_{ineq}}$. This is the most general form of the problem.
- `solve_ineq(Mineq, bineq)` solves $\mathbf{M_{ineq}x} \ge \mathbf{b_{ineq}}$.
- `solve_bounded(M, b, lb, ub)` solves $\mathbf{Mx} = \mathbf{b}$ and $\mathbf{lb} \le \mathbf{x} \le \mathbf{ub}$.
- `solve_bounded_mod(M, b, lb, ub, N)` solves $\mathbf{Mx} \equiv \mathbf{b} \pmod{N}$ and $\mathbf{lb} \le \mathbf{x} \le \mathbf{ub}$.
- `solve_bounded_lcg(a, b, m, lb, ub)` solves for $n$ consecutive outputs $\mathbf{s} = (s_0, s_1, \ldots, s_{n-1})$ of the LCG given by $s_{i+1} \equiv a s_i + b \pmod{m}$ where $\mathbf{lb} \le \mathbf{s} \le \mathbf{ub}$.
- `solve_truncated_lcg(a, b, m, ys, ntrunc)` solves for $n$ consecutive outputs $\mathbf{s} = (s_0, s_1, \ldots, s_{n-1})$ of the LCG given by $s_{i+1} \equiv a s_i + b \pmod{m}$ where $(\mathbf{s} \gg \mathrm{ntrunc}) = \mathbf{ys}$.
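For example, a small, entirely made-up instance solved with `solve_bounded` might look like this in a Sage session where `linineq.py` has been loaded (the numbers, and the use of plain lists for the bounds, are just illustrative):

```python
# Made-up toy instance: recover x with M*x = b and 0 <= x_i <= 50
M = matrix(ZZ, [[3, 1, 4],
                [1, 5, 9]])
b = vector(ZZ, [244, 576])       # here b = M * (13, 37, 42)
lb = [0, 0, 0]
ub = [50, 50, 50]

x = solve_bounded(M, b, lb, ub)  # a vector satisfying both constraints
print(x)                         # should recover (13, 37, 42)
```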
Each of these methods has a `_gen` variant (e.g. `solve_bounded_gen`) which returns a generator instead of a single solution. This generator will yield all solutions which are findable for a given `lp_bound` parameter, as sketched below.
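Continuing the made-up instance above, enumerating solutions could look like this (again assuming `linineq.py` is loaded; which solutions are found depends on `lp_bound`, described below):

```python
# Widen the bounds so several solutions exist, then list every solution
# the generator can find with this lp_bound
for x in solve_bounded_gen(M, b, [0, 0, 0], [100, 100, 100], lp_bound=5):
    print(x)
```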
Keyword arguments:
- `solver` (default `ORTOOLS`) is one of `ORTOOLS` or `PPL` and decides which integer programming solver to use. `ortools` is faster and usually preferred, but since it only supports 64-bit numbers it is sometimes insufficient; in those cases `PPL` is used automatically. If you are enumerating solutions using a `_gen` function together with `PPL`, a Python implementation of branch-and-bound will be used with `PPL` as a subroutine. This has no size restrictions but can be relatively slow.
- `lp_bound` (default `100`) is an internal restriction on the size of the unknowns in the linear programming step, and is currently only used by `ortools`. It does not correspond to the size of the solutions you will find, but rather to the size of the coefficients that a lattice-reduced matrix is multiplied by. You can often get by with surprisingly small values of `lp_bound` (e.g. `5`). If you're unsure what to set `lp_bound` to, call `set_verbose(1)` before solving an instance; it will then log which internal coefficients were used to find that solution. These will be `<= lp_bound` and `>= -lp_bound`.
- `reduce` (default `LLL`) is a function which will be used to reduce a lattice basis. If you want to specify parameters you can use `partial` (imported from `functools`): `solve_bounded(..., reduce=partial(BKZ, block_size=20))`. The passed function should accept the same parameters as the predefined ones.
- `cvp` (default `kannan_cvp`) is a function which will be used to solve the (approximate) closest vector problem. If you want to specify parameters you can use `partial` (imported from `functools`): `solve_bounded(..., cvp=partial(fplll_cvp, prec=4096))`. The passed function should accept the same parameters as the predefined ones.
- `kernel_algo` (default `None`) is a string which is passed to the `algorithm` parameter of `Matrix_integer_dense.right_kernel_matrix()`. If `None`, a heuristic is used; if you notice freezing while computing the kernel, try switching between `'pari'` and `'flint'`.
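A combined sketch of overriding several of these keyword arguments at once (the specific values are illustrative rather than recommendations), reusing the made-up `M`, `b`, `lb`, `ub` from above:

```python
from functools import partial

x = solve_bounded(
    M, b, lb, ub,
    solver=PPL,                          # exact-arithmetic solver, no 64-bit limit
    lp_bound=5,                          # small internal coefficient bound
    reduce=partial(BKZ, block_size=20),  # reduce with BKZ instead of the default LLL
    cvp=partial(fplll_cvp, prec=4096),   # higher-precision CVP step
    kernel_algo='pari',                  # backend for right_kernel_matrix()
)
```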
The following lattice reduction functions are also provided:

- `BKZ(M, no_cli=False, block_size=20, fplll_path='fplll', auto_abort=True)` returns the BKZ reduction of $\mathbf{M}$. If `no_cli` is set it will use `M.BKZ()` instead of the fplll CLI (this will require computing the transformation matrix after the fact).
- `flatter(M, path='flatter')` returns the result of running flatter on $\mathbf{M}$. This requires `flatter` to be installed and in `PATH`, or you can specify the path to the executable with the `path` argument.
- `LLL(M)` returns the LLL reduction of $\mathbf{M}$. This is just a wrapper around `M.LLL()` and is only here for consistency.
Each of these accepts the following keyword argument:

- `transformation` (default `False`) is a boolean which indicates that the function should instead return the tuple $(\mathbf{L}, \mathbf{R})$ where $\mathbf{L}$ is the reduced lattice basis and $\mathbf{R M} = \mathbf{L}$.
**Warning:** `flatter` (and `BKZ` if `no_cli=True`) do not provide the transformation matrix themselves; it is calculated after the fact, which can be slow for large matrices. Use `set_verbose(1)` and check whether it gets stuck computing the Smith normal form.
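A sketch of calling the reduction helpers directly on a random, made-up basis in a Sage session with `linineq.py` loaded:

```python
M = random_matrix(ZZ, 6, 6, x=-2**20, y=2**20)  # arbitrary example basis

B1 = LLL(M)                  # wrapper around M.LLL()
B2 = BKZ(M, block_size=20)   # BKZ via the fplll CLI by default
B3 = flatter(M)              # needs the flatter binary in PATH (or pass path=...)

# Requesting the transformation matrix as well; for flatter (and BKZ with
# no_cli=True) this triggers the extra computation mentioned in the warning
L, R = LLL(M, transformation=True)
assert R * M == L
```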
The following CVP functions are provided:

- `kannan_cvp(B, t)` (alias `cvp()`) finds an approximate closest vector to $\mathbf{t}$ in the lattice $\mathbf{B}$ using the Kannan embedding. This uses lattice reduction, so the `reduce` argument is relevant even if `is_reduced=True`.
- `babai_cvp(B, t)` finds an approximate closest vector to $\mathbf{t}$ in the lattice $\mathbf{B}$ using Babai's closest plane algorithm.
- `rounding_cvp(B, t)` finds an approximate closest vector to $\mathbf{t}$ in the lattice $\mathbf{B}$ using Babai's rounding-off algorithm.
- `fplll_cvp(B, t, prec=4096)` finds an approximate closest vector to $\mathbf{t}$ in the lattice $\mathbf{B}$ using `fplll`'s `MatGSO.babai()` method.
Each of these accepts the following keyword arguments:

- `reduce` (default `LLL`) is a function which will be used to reduce the lattice basis, see `reduce` above.
- `is_reduced` (default `False`) indicates that the given lattice basis is already reduced, otherwise it will be reduced internally using `reduce`.
- `coords` (default `False`), if `True` a tuple $(\mathbf{u}, \mathbf{v})$ will instead be returned where $\mathbf{u} = \mathbf{v B}$ and $\mathbf{u}$ is the closest (approximate) vector to $\mathbf{t}$.
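A small sketch of calling the CVP helpers directly on a made-up lattice, assuming `linineq.py` is loaded in a Sage session:

```python
B = matrix(ZZ, [[5, 1, 0],
                [1, 7, 2],
                [0, 3, 11]])
t = vector(ZZ, [10, 10, 10])

u = kannan_cvp(B, t)                  # a lattice vector close to t
u2, v = babai_cvp(B, t, coords=True)  # also return the coordinates of u2
assert u2 == v * B
```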
You will need Sage, and you can optionally `pip install ortools`, which allows for enumerating solutions and is often faster than `ppl` (which comes bundled with Sage).

Then either:

- `pip install git+https://github.com/TheBlupper/linineq.git`
- type `load('https://raw.githubusercontent.com/TheBlupper/linineq/main/linineq.py')` in Sage
- or manually download `linineq.py`