runtime: Refactor staking reward curve to be cached & dynamic #6469
cc @thiolliere
Currently the calculation of the points happens at compile time; the result is:

```rust
const REWARD_CURVE: PiecewiseLinear<'static> = {
    extern crate sp_runtime as _sp_runtime;
    _sp_runtime::curve::PiecewiseLinear::<'static> {
        points: &[
            (_sp_runtime::Perbill::from_parts(0u32),
             _sp_runtime::Perbill::from_parts(25000000u32)),
            // .......
            (_sp_runtime::Perbill::from_parts(1000000000u32),
             _sp_runtime::Perbill::from_parts(25073000u32)),
        ],
        maximum: _sp_runtime::Perbill::from_parts(100000000u32),
    }
};
```

(The reason for the piecewise approximation was to avoid having to compute the exponential at runtime.)

How often would the inputs change?
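For context, that constant is generated by the `pallet_staking_reward_curve::build!` macro; a typical invocation looks roughly like this (parameter values chosen to match the constants above: 2.5% minimum inflation, 10% maximum, 50% ideal stake, 5% falloff; treat the exact argument names and units as an approximation of the macro's interface):

```rust
use sp_runtime::curve::PiecewiseLinear;

pallet_staking_reward_curve::build! {
    const REWARD_CURVE: PiecewiseLinear<'static> = curve!(
        min_inflation: 0_025_000,
        max_inflation: 0_100_000,
        ideal_stake: 0_500_000,
        falloff: 0_050_000,
        max_piece_count: 40,
        test_precision: 0_005_000,
    );
}
```

Everything here is fixed at compile time, which is exactly why `ideal_stake` cannot currently react to on-chain state.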
This needs to be addressed. It's made more urgent by the introduction of gilts, which will also affect the ideal amount. Perhaps there's some way of precomputing the shape of the curve at compile time and doing a linear interpolation at runtime.
No - changes can happen often and we can't upgrade the runtime for each of them at exactly the same time.
This is the easiest way forward for now. We can save the computed points somewhere in storage to make it a bit faster(?) in case the inputs haven't changed since the last computation.
This might be a sensible optimisation.
I'm trying to be careful before moving the compile-time curve generation on-chain, because I'm not 100% sure that the generation is sensible for any input; we should add an integration test asserting that all possible values of x_ideal generate a correct curve.

Another possible implementation: maybe the easiest is to compute the reward curve without going through this approximation generation at all. The function (in desmos: https://www.desmos.com/calculator/zbbwa1arpw) is `i(x) = I0 + (x_ideal * i_ideal - I0) * 2^((x_ideal - x) / d)`. Because the exponent of 2 is close to 0, we can approximate it using a Taylor series; with enough terms the series appears close to the exact result, and each Taylor term can be computed from the previous one. Resulting in this implementation:

```rust
//! All integers are expressed in parts per billion.
const LN2: i128 = 0_693_140_000;
const D: i128 = 0_050_000_000;
const LN2_DIV_D: i128 = LN2 * 1_000_000_000 / D;
const I0: i128 = 0_025_000_000;
const X_IDEAL: i128 = 0_500_000_000;
const I_IDEAL: i128 = 0_200_000_000;

/// Return the next Taylor term in per-billion representation.
fn taylor_term(k: i128, previous: i128, x: i128) -> i128 {
    previous * (x - X_IDEAL) * LN2_DIV_D / 1_000_000_000 / 1_000_000_000 / k
}

/// Compute the desired inflation.
fn compute(x: i128) -> i128 {
    let mut prev_taylor_term = 1_000_000_000;
    let mut taylor_sum = prev_taylor_term;
    for k in 1..40 {
        prev_taylor_term = taylor_term(k, prev_taylor_term, x);
        // Alternating signs: the sum converges to e^(-(x - x_ideal) * ln2 / d),
        // i.e. 2^((x_ideal - x) / d).
        if k % 2 == 0 {
            taylor_sum += prev_taylor_term;
        } else {
            taylor_sum -= prev_taylor_term;
        }
    }
    I0 + (X_IDEAL * I_IDEAL / 1_000_000_000 - I0) * taylor_sum / 1_000_000_000
}

fn main() {
    println!("{}", compute(0_900_000_000) as f32 / 1_000_000_000f32);
}
```

Then we can assert in the staking integration tests (thus using the runtime-configured I0, D, etc., except x_ideal, which should be in storage) that all possible x_ideal and x result in a good approximation.
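A minimal sketch of the kind of assertion described above, reusing `compute` from the snippet and checking it against a floating-point reference of `I0 + (x_ideal * i_ideal - I0) * 2^((x_ideal - x) / d)` over `x >= x_ideal`; the step size and tolerance are illustrative:

```rust
/// Compare the fixed-point Taylor approximation against an f64 reference.
/// Assumes `compute` from the snippet above; step and tolerance are illustrative.
fn check_approximation() {
    let mut x = 500_000_000i128; // start at x_ideal
    while x <= 1_000_000_000 {
        let approx = compute(x) as f64 / 1e9;
        let xf = x as f64 / 1e9;
        // Reference: I0 + (x_ideal * i_ideal - I0) * 2^((x_ideal - x) / d)
        let exact = 0.025 + (0.5 * 0.2 - 0.025) * 2f64.powf((0.5 - xf) / 0.05);
        assert!(
            (approx - exact).abs() < 0.000_01,
            "bad approximation at x = {}: {} vs {}",
            xf,
            approx,
            exact,
        );
        x += 1_000_000;
    }
}
```

In the real integration test the constants would come from the runtime configuration and x_ideal from storage, as described above.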
The staking pallet is now generic over the reward curve, and pallet-staking-reward-fn (#8332) allows computing the inflation on-chain.
Staking reward curve is currently a static `PiecewiseLinear`, calculated once and only once in the block. Instead, it should be a static, lazily cached value, not built unless needed, and re-built in the case that input parameters have changed. The `ideal_stake` should be determined by the amount of parachain slots being bid on: this should be around `(75 - 25.min(slots / 4))%`.

We should probably introduce a new `trait Curve`, which declares the `calculate_for_fraction_times_denominator` function. `PiecewiseLinear` can impl this and the staking pallet can require `RewardCurve` to impl it. We might get away with a simple `RefCell` to manage the cache and ensure that it can be updated as needed. Either way, we must ensure that the pieces are not recalculated on repeated calls.
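To make the proposal concrete, here is a rough sketch. `Curve`, `calculate_for_fraction_times_denominator`, `ideal_stake` and `RefCell` are the names mentioned above; `CurveInput`, `CachedCurve`, `ideal_stake_percent` and the `u128` signature are hypothetical and for illustration only:

```rust
use core::cell::RefCell;

/// Illustrative only: ideal_stake as a function of auctioned parachain slots,
/// following the "(75 - 25.min(slots / 4))%" rule above.
fn ideal_stake_percent(slots: u32) -> u32 {
    75 - 25.min(slots / 4)
}

/// Hypothetical set of inputs that determine the shape of the curve.
#[derive(Clone, PartialEq)]
struct CurveInput {
    ideal_stake_percent: u32,
}

/// The trait proposed above; `PiecewiseLinear` would implement it with its
/// existing `calculate_for_fraction_times_denominator` method.
trait Curve {
    fn calculate_for_fraction_times_denominator(&self, n: u128, d: u128) -> u128;
}

/// Lazily cached curve: built on first use, rebuilt only when the inputs change.
struct CachedCurve<C> {
    cache: RefCell<Option<(CurveInput, C)>>,
    build: fn(&CurveInput) -> C,
}

impl<C: Curve + Clone> CachedCurve<C> {
    fn get(&self, input: &CurveInput) -> C {
        let mut cache = self.cache.borrow_mut();
        // Inputs unchanged since the last call: reuse the cached pieces.
        if let Some((cached, curve)) = &*cache {
            if cached == input {
                return curve.clone();
            }
        }
        // Otherwise rebuild the curve and remember the inputs it was built for.
        let curve = (self.build)(input);
        *cache = Some((input.clone(), curve.clone()));
        curve
    }
}
```

The cache key is just the set of inputs that shape the curve, so repeated calls under the same conditions return the already-built pieces, and a change to e.g. the number of auctioned slots triggers exactly one rebuild.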