nfsd: fix incorrect high limit in clamp() on over-allocation
If over-allocation occurs in nfsd4_get_drc_mem(), total_avail is set
to zero. Consequently:

	clamp_t(unsigned long, avail, slotsize, total_avail/scale_factor);

gives:

	clamp_t(unsigned long, avail, slotsize, 0);

resulting in a clamp() call where the high limit is smaller than the
low limit, which is undefined: the result could be either slotsize or
zero depending on the order of evaluation.

Luckily, the two instructions just below the clamp() mask the
undefined behaviour:

	num = min_t(int, num, avail / slotsize);
	num = max_t(int, num, 1);

If avail = slotsize, the min_t() sets num back to 1. If avail = 0,
the max_t() sets num back to 1. So the undefined behaviour has no
visible effect.

Anyway, remove the undefined behaviour in clamp() by calling it, and
calculating num, only if memory is still available. Otherwise, if
over-allocation occurred, directly set num to 1 as the author
intended.

While at it, apply the checkpatch fix below:

	WARNING: min() should probably be min_t(unsigned long, NFSD_MAX_MEM_PER_SESSION, total_avail)
	#100: FILE: fs/nfsd/nfs4state.c:1954:
	+	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);

Fixes: 7f49fd5 ("nfsd: handle drc over-allocation gracefully.")
Signed-off-by: Vincent Mailhol <[email protected]>
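
For illustration, a minimal user-space sketch of why a clamp() whose
high limit is below its low limit is order dependent; min_ul() and
max_ul() are stand-ins for the kernel's min()/max() macros, and the
values are hypothetical:

	#include <stdio.h>

	static unsigned long min_ul(unsigned long a, unsigned long b)
	{
		return a < b ? a : b;
	}

	static unsigned long max_ul(unsigned long a, unsigned long b)
	{
		return a > b ? a : b;
	}

	int main(void)
	{
		unsigned long avail = 4096;	/* hypothetical value */
		unsigned long slotsize = 2048;	/* low limit */
		unsigned long total_avail = 0;	/* over-allocation: high limit is 0 */

		/* clamp() expanded as min(max(val, lo), hi): yields hi, i.e. 0 */
		printf("%lu\n", min_ul(max_ul(avail, slotsize), total_avail));

		/* the same clamp() expanded as max(min(val, hi), lo): yields lo,
		 * i.e. slotsize
		 */
		printf("%lu\n", max_ul(min_ul(avail, total_avail), slotsize));

		return 0;
	}

Both expansions are a "clamp", yet they disagree whenever hi < lo,
which is why the result in the over-allocation case could be either
zero or slotsize.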
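
A sketch of the reshaped logic described above, assuming the
surrounding context of nfsd4_get_drc_mem() in fs/nfsd/nfs4state.c; it
illustrates the intent rather than reproducing the patch verbatim:

	if (nfsd_drc_max_mem > nfsd_drc_mem_used) {
		/* Memory is still available: size the slot table
		 * from what is left, so clamp() is never called
		 * with total_avail == 0 as its high limit.
		 */
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
		avail = min_t(unsigned long, NFSD_MAX_MEM_PER_SESSION,
			      total_avail);
		scale_factor = max_t(unsigned int, 8,
				     nn->nfsd_serv->sv_nrthreads);
		avail = clamp_t(unsigned long, avail, slotsize,
				total_avail / scale_factor);
		num = min_t(int, num, avail / slotsize);
		num = max_t(int, num, 1);
	} else {
		/* Over-allocated: skip the calculation and hand out
		 * a single slot, as intended by the author.
		 */
		num = 1;
	}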