Commit b9408d7
Fix/simplify chunk_recycle() allocation size computations.
Remove outer CHUNK_CEILING(s2u(...)) from alloc_size computation, since
s2u() may overflow (and return 0), and CHUNK_CEILING() is only needed
around the alignment portion of the computation.

This fixes a regression caused by
5707d6f (Quantize szad trees by size
class.) and first released in 4.0.0.

This resolves jemalloc#497.
jasone committed Nov 12, 2016
1 parent 2cdf07a commit b9408d7
Showing 1 changed file with 4 additions and 1 deletion: src/chunk.c
```diff
@@ -209,15 +209,18 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
 	size_t alloc_size, leadsize, trailsize;
 	bool zeroed, committed;

 	assert(CHUNK_CEILING(size) == size);
 	assert(alignment > 0);
 	assert(new_addr == NULL || alignment == chunksize);
 	assert(CHUNK_ADDR2BASE(new_addr) == new_addr);
 	/*
 	 * Cached chunks use the node linkage embedded in their headers, in
 	 * which case dalloc_node is true, and new_addr is non-NULL because
 	 * we're operating on a specific chunk.
 	 */
 	assert(dalloc_node || new_addr != NULL);

-	alloc_size = CHUNK_CEILING(s2u(size + alignment - chunksize));
+	alloc_size = size + CHUNK_CEILING(alignment) - chunksize;
+	/* Beware size_t wrap-around. */
+	if (alloc_size < size)
+		return (NULL);
```
