Malformed string compression table causes panic/abort in parser #2
Comments
I don't think a very large …
Looks like it's not bitbuffer, it's the compressed data. https://docs.rs/snap/1.0.5/snap/raw/struct.Decoder.html#method.decompress_vec is being used. Maybe instead we should use https://docs.rs/snap/1.0.5/snap/read/struct.FrameDecoder.html and limit the number of bytes we're willing to read to something reasonable? In any case, this will OOM it: oom-6cba3d29efefe05228de10e9e770d6544a8eb1e9.zip

My copy of the fuzzer looks like

```rust
#![no_main]
use libfuzzer_sys::fuzz_target;

pub use tf_demo_parser::{Demo, DemoParser, Parse, ParseError, ParserState, Stream};

fuzz_target!(|data: &[u8]| {
    let demo = Demo::new(&data);
    let parser = DemoParser::new_all(demo.get_stream());
    let _ = parser.parse();
});
```

and the stack trace for the OOM is:
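For illustration, here is a minimal sketch of the "stream and cap the bytes" idea, assuming frame-encoded snappy input (the parser currently uses the raw format via decompress_vec); the function name and the limit constant are made up for the example:

```rust
use std::io::Read;

// Illustrative cap; the real parser would pick whatever it considers sane.
const MAX_DECOMPRESSED_BYTES: u64 = 100 * 1024 * 1024;

fn decompress_capped(compressed: &[u8]) -> std::io::Result<Vec<u8>> {
    // FrameDecoder streams its output, so wrapping it in Read::take bounds how
    // much we are willing to produce instead of trusting the size the demo
    // file claims.
    let mut decoder = snap::read::FrameDecoder::new(compressed).take(MAX_DECOMPRESSED_BYTES);
    let mut out = Vec::new();
    decoder.read_to_end(&mut out)?;
    Ok(out)
}
```

If the data stays in the raw format, a similar effect comes from checking the reported output length against a limit before decompressing, which is roughly the shape of the fix below.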
edit: I did try fuzzing …
Ah yes, I didn't think about the decompression step itself; bb79d26 adds hard limits for the compressed and decompressed sizes (10 MB and 100 MB).
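As an illustration of that kind of guard (not the actual code in bb79d26; the function name and constants are placeholders), a raw-format decompression could be bounded like this:

```rust
use snap::raw::{decompress_len, Decoder};

// Placeholder limits mirroring the sizes mentioned above.
const MAX_COMPRESSED_SIZE: usize = 10 * 1024 * 1024; // 10 MB
const MAX_DECOMPRESSED_SIZE: usize = 100 * 1024 * 1024; // 100 MB

// Returns None when either limit is exceeded or the snappy header is invalid,
// so the caller can turn that into a parse error instead of an OOM/abort.
fn bounded_decompress(compressed: &[u8]) -> Option<Vec<u8>> {
    if compressed.len() > MAX_COMPRESSED_SIZE {
        return None;
    }
    // decompress_len only reads the length header, so this check is cheap.
    let expected = decompress_len(compressed).ok()?;
    if expected > MAX_DECOMPRESSED_SIZE {
        return None;
    }
    Decoder::new().decompress_vec(compressed).ok()
}
```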
The demo file is rather large and I couldn't minify it, but I've attached it anyways.
crash.zip
The crash is at parser/src/demo/message/stringtable.rs, line 57 (commit e2a631c).

If compressed_size is less than 4, it will either panic on wraparound with debug assertions enabled, or try to allocate a huge amount of memory in release mode and abort. (This could probably be used to cause a denial of service by making the parser try to allocate large buffers.)

The fix here IMO is to do a checked subtraction there, but also not to trust the compressed size value we got from the file. That might also need a change in bitbuffer, which should have tests that trying to read huge amounts of data doesn't crash and instead returns an error, which it doesn't seem to be doing.
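A minimal sketch of the checked-subtraction part of that fix; the function and error type are illustrative stand-ins rather than the parser's real API:

```rust
#[derive(Debug)]
struct MalformedStringTable;

// compressed_size comes straight from the demo file, so it cannot be trusted.
fn payload_size(compressed_size: u32) -> Result<u32, MalformedStringTable> {
    // checked_sub returns None instead of wrapping when compressed_size < 4,
    // turning a malformed table into an error rather than a debug-mode panic
    // or a huge allocation in release mode.
    compressed_size
        .checked_sub(4)
        .ok_or(MalformedStringTable)
}
```

Bounding the resulting size against a hard limit before allocating, as in the commit mentioned above, covers the "don't trust the value from the file" half of the suggestion.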