Stop caching Streams in XRef.fetchCompressed
#11370
Conversation
- Change all occurrences of `var` to `let`/`const`.
- Initialize the (temporary) Arrays with the correct sizes upfront (a generic sketch of that pattern follows below).
- Inline the `isCmd` check. Obviously this won't make a huge difference, but given that the check is only relevant for corrupt documents it cannot hurt.
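As a quick illustration of the Array point, here's a generic sketch of the preallocation pattern; the function names and values are made up for this example and are not taken from `fetchCompressed`:

```js
// Growing an Array incrementally may force the engine to resize it repeatedly:
function buildGrowing(count) {
  const entries = [];
  for (let i = 0; i < count; i++) {
    entries.push(i * 2);
  }
  return entries;
}

// When the final size is known upfront, allocate once and assign by index:
function buildPreallocated(count) {
  const entries = new Array(count);
  for (let i = 0; i < count; i++) {
    entries[i] = i * 2;
  }
  return entries;
}
```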
I'm slightly surprised that this hasn't actually caused any (known) bugs, but that may be more luck than anything else, since it fortunately doesn't seem common for Streams to be defined inside of an 'ObjStm'.[1]

Note that in the `XRef.fetchUncompressed` method we're *not* caching Streams, and that for very good reasons too:

- Streams, especially the `DecodeStream` ones, can become *very* large once read. Hence caching them really isn't a good idea, simply because of the (potential) memory impact of doing so.
- Attempting to read from the *same* Stream more than once won't work, unless it's `reset` in between, since using any method such as e.g. `getBytes` always starts at the current data position (see the first sketch below).
- Given that even the `src/core/` code is now fairly asynchronous, see e.g. the `PartialEvaluator`, it's generally impossible to assert that any one Stream isn't being accessed "concurrently" by e.g. different `getOperatorList` calls. Hence `reset`-ing a cached Stream isn't going to work in the general case (see the second sketch below).

All in all, I cannot understand why it'd ever be correct to cache Streams in the `XRef.fetchCompressed` method.

---

[1] One example where that happens is the `issue3115r.pdf` file in the test-suite, where the streams in question are not actually used for anything within the PDF.js code.
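To make the read-position point concrete, here's a minimal sketch of a stream with a moving read position, loosely modeled on (but much simpler than) the `Stream` class in `src/core/stream.js`; `SimpleStream` is a stand-in for this example only:

```js
// Stand-in stream: every read starts at, and advances, the current position.
class SimpleStream {
  constructor(bytes) {
    this.bytes = bytes;
    this.start = 0;
    this.pos = 0;
  }
  getBytes(length) {
    const end = Math.min(this.pos + length, this.bytes.length);
    const chunk = this.bytes.subarray(this.pos, end);
    this.pos = end; // Subsequent reads continue from here.
    return chunk;
  }
  reset() {
    this.pos = this.start;
  }
}

const stream = new SimpleStream(new Uint8Array([10, 20, 30, 40]));
console.log(stream.getBytes(4)); // [10, 20, 30, 40]
console.log(stream.getBytes(4)); // [] -- the data was already consumed.
stream.reset();
console.log(stream.getBytes(4)); // [10, 20, 30, 40] again, only after `reset`.
```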
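And building on that same sketch, here's why `reset`-ing a shared, cached stream doesn't help once the surrounding code is asynchronous: two consumers that each `reset` and then read can still interleave and clobber each other's position (again a contrived example, not actual PDF.js code):

```js
// Two async consumers share one cached stream instance.
async function readInTwoSteps(stream, label) {
  stream.reset();
  const first = stream.getBytes(2);
  await Promise.resolve(); // Yield, as real parsing code does between awaits.
  const second = stream.getBytes(2); // The other consumer may have moved `pos`!
  console.log(label, first, second);
}

const shared = new SimpleStream(new Uint8Array([10, 20, 30, 40]));
Promise.all([
  readInTwoSteps(shared, "A"), // A: [10, 20] then [30, 40]
  readInTwoSteps(shared, "B"), // B: [10, 20] then [] -- position was clobbered.
]);
```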
/botio test
From: Bot.io (Linux m4)
Received: Command cmd_test from @Snuffleupagus received. Current queue size: 0
Live output at: http://54.67.70.0:8877/5662bfe67d2732d/output.txt
From: Bot.io (Windows)
Received: Command cmd_test from @Snuffleupagus received. Current queue size: 0
Live output at: http://54.215.176.217:8877/777751a13e2419c/output.txt
From: Bot.io (Linux m4)
Success: Full output at http://54.67.70.0:8877/5662bfe67d2732d/output.txt
Total script time: 18.73 mins
From: Bot.io (Windows)
Failed: Full output at http://54.215.176.217:8877/777751a13e2419c/output.txt
Total script time: 26.59 mins
Image differences available at: http://54.215.176.217:8877/777751a13e2419c/reftest-analyzer.html#web=eq.log
Thank you! I agree with your analysis, and looking at the original code I think it was an oversight.
/botio-windows makeref
From: Bot.io (Windows)
Received: Command cmd_makeref from @timvandermeij received. Current queue size: 0
Live output at: http://54.215.176.217:8877/aa5d37e496074a0/output.txt
From: Bot.io (Windows)
Success: Full output at http://54.215.176.217:8877/aa5d37e496074a0/output.txt
Total script time: 23.73 mins
Yes, it absolutely looks like nothing more than an oversight, since the original code didn't support parsing of Streams within a compressed XRef entry. That was changed in PR #2341, in order to address issue #2337, which just so happens to be the very first PDF.js bug report I ever submitted :-)