fs: enable chunked reading for large files in readFileHandle #56022
base: main
Conversation
I wonder if I should apply this sorting in addition; there are still places that use
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main   #56022      +/-   ##
==========================================
+ Coverage   88.00%   89.21%    +1.21%
==========================================
  Files         656      663        +7
  Lines      189000   192007     +3007
  Branches    35995    36926      +931
==========================================
+ Hits       166320   171302     +4982
+ Misses      15840    13576     -2264
- Partials     6840     7129      +289
```
I removed the limit test because the limit on reading files larger than 2 GiB with fs.readFile has been removed.
Force-pushed from 19dc0c4 to 6c85d68
Force-pushed from 6c85d68 to ed82387
I added the tsc label to discuss whether we want to allow users to read such big files into memory, or whether it would be better to point users towards streams instead.
lgtm
The error is removed from the promise version, but the same change is missing from the callback readFile implementation. The error itself would no longer be needed there either and also has to be removed.
This has to be addressed before we can land this.
We discussed in the TSC meeting that it's generally not a good idea to read beyond that limit, although it is acceptable in some cases.
We also discussed emitting a warning when reaching that limit instead. We did not reach consensus yet, but we'll discuss it again next week to finalize the decision.
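For illustration only, here is a minimal sketch of what emitting a warning instead of an error could look like, assuming the check is done against the existing kIoMaxLength constant; the helper name and wording are hypothetical, and the exact behaviour is still under discussion:

```js
// Hypothetical sketch: warn (instead of throwing) when a single read would
// pull more than the 2 GiB I/O limit into memory.
const kIoMaxLength = 2 ** 31 - 1;

function maybeWarnAboutLargeRead(size) {
  if (size > kIoMaxLength) {
    process.emitWarning(
      `Reading ${size} bytes into memory; ` +
      'consider using a stream (e.g. fs.createReadStream()) instead.',
    );
  }
}
```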
Just wondering what the next action is here - @BridgeAR
@gireeshpunathil I believe you wanted to think about the warning again. I kept my change request since the implementation should also include the callback version in addition to the warning. That's currently missing :)
Thanks a lot for your comments. Unfortunately I saw this late; I have now pushed a commit for the callback version.
The original reason for the limit was, AFAIK, that some systems on some versions were not able to read files of that size. If that's the case, we'd have to handle chunking on our side. I am just not certain whether that still applies or whether it's a legacy issue.
Please also add the warning instead of the error.
if (!tmpdir.hasEnoughSpace(kIoMaxLength)) {
I believe removing this is actually a mistake :)
// Variable taken from https://github.com/nodejs/node/blob/1377163f3351/lib/internal/fs/promises.js#L5
const kIoMaxLength = 2 ** 31 - 1;

if (!tmpdir.hasEnoughSpace(kIoMaxLength)) {
That should probably be kept, even though we no longer check for the error but for the file actually being read.
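For context, this refers to the usual disk-space guard used by the large-file tests; a sketch of how that guard is typically used, assuming the repository's test helpers (common and tmpdir):

```js
// Skip the test when the temporary directory cannot hold a ~2 GiB file,
// so machines with little free disk space do not fail spuriously.
const common = require('../common');
const tmpdir = require('../common/tmpdir');

const kIoMaxLength = 2 ** 31 - 1;

tmpdir.refresh();
if (!tmpdir.hasEnoughSpace(kIoMaxLength)) {
  common.skip(`Not enough space in ${tmpdir.path}`);
}
```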
I thought we were going to continue the discussion in the TSC on the necessity of the warning, as we didn't converge on that, IIRC.
Added chunked reading support to readFileHandle to handle files larger than 2 GiB, resolving size limitations while preserving existing functionality.
#55864
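To make the idea concrete, here is a rough sketch of chunked reading (not the exact code in this PR): issue multiple reads of at most kIoMaxLength bytes each until the whole file is in memory. The function name is hypothetical, and the runtime's maximum Buffer size still bounds how much can be held at once.

```js
// Illustrative sketch: read a file larger than 2 GiB through a FileHandle
// by splitting the work into reads of at most kIoMaxLength bytes.
const { open } = require('node:fs/promises');

const kIoMaxLength = 2 ** 31 - 1;

async function readWholeFile(path) {
  const handle = await open(path, 'r');
  try {
    const { size } = await handle.stat();
    const result = Buffer.allocUnsafe(size);
    let offset = 0;
    while (offset < size) {
      const length = Math.min(kIoMaxLength, size - offset);
      const { bytesRead } = await handle.read(result, offset, length, offset);
      if (bytesRead === 0) break; // unexpected end of file
      offset += bytesRead;
    }
    return result;
  } finally {
    await handle.close();
  }
}
```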