Fix adfGetCacheEntry() crashing on bad data #35
Merged
Debian bug 862740: unadf crashes with segmentation fault (core dumped)
There is no length checking of the filename or comment length in `adfGetCacheEntry()` before using `memcpy()` to copy them into a `struct CacheEntry` (which only has space for MAXNAMELEN and MAXCMMTLEN bytes respectively).

Furthermore, while the raw data is read from a `struct bDirCacheBlock`, which declares the cache entries as `uint8_t records[488]`, these raw values are converted to signed integers because `nLen` and `cLen` in `struct CacheEntry` are declared as (signed) `char`. Any value of 0x80 or above is therefore interpreted as negative (-128 to -1), and when passed as the `size_t` count of bytes to copy it is converted to 0xFFFFFFxx or 0xFFFFFFFFFFFFFFxx. Copying this many bytes inevitably crashes the process.

There is a proof-of-concept sample file attached to the Debian bug.
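For illustration only, here is a small stand-alone snippet (not ADFlib code, with a made-up length byte) showing how a value of 0x80 or above becomes a huge `size_t` once it passes through a signed `char`:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t raw = 0xAB;        /* length byte as stored in records[] of the dircache block */
    char nLen = (char) raw;    /* assigned to a (signed) char field: -85 where char is signed */
    size_t count = nLen;       /* sign-extended: 0xFFFFFFFFFFFFFFAB on a 64-bit system */

    printf("raw = 0x%02X, nLen = %d, count = 0x%zX\n", (unsigned) raw, nLen, count);
    /* memcpy(dst, src, count) with a count like this overruns the destination
     * buffer and crashes the process. */
    return 0;
}
```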
The fix:
I have not changed the definition of `cLen` and `nLen` from signed to unsigned `char`, as I haven't looked at all the places they are used, but that could also be done.
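To make the intent concrete, below is a minimal sketch of the kind of bounds check described above: read the length bytes as unsigned values and clamp them to the destination buffer sizes before `memcpy()`. The constants and struct are modeled on ADFlib's headers, but the record offsets and the helper name are hypothetical; this is not the actual patch.

```c
#include <stdint.h>
#include <string.h>

/* Modeled on ADFlib's definitions; the real struct has more fields. */
#define MAXNAMELEN 30
#define MAXCMMTLEN 79

struct CacheEntry {
    char nLen;                     /* still (signed) char, as in ADFlib */
    char cLen;
    char name[MAXNAMELEN + 1];
    char comm[MAXCMMTLEN + 1];
};

/* Hypothetical record layout: [nLen][name bytes...][cLen][comment bytes...].
 * The lengths are read as unsigned bytes and clamped to the destination
 * buffer sizes, so a corrupt dircache record can no longer request a
 * multi-gigabyte copy.  The caller must additionally ensure the offsets
 * stay inside the 488-byte records[] area of the block. */
static void readCacheNames(struct CacheEntry *entry, const uint8_t *record)
{
    unsigned rawNLen = record[0];
    unsigned rawCLen = record[1 + rawNLen];

    unsigned nLen = rawNLen > MAXNAMELEN ? MAXNAMELEN : rawNLen;
    unsigned cLen = rawCLen > MAXCMMTLEN ? MAXCMMTLEN : rawCLen;

    memcpy(entry->name, record + 1, nLen);
    entry->name[nLen] = '\0';
    memcpy(entry->comm, record + 2 + rawNLen, cLen);
    entry->comm[cLen] = '\0';

    entry->nLen = (char) nLen;     /* clamped values fit in a signed char */
    entry->cLen = (char) cLen;
}
```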