INT-790: Remove extra await #257
Conversation
@@ -1061,7 +1061,8 @@ export async function readStream(cfg: DecryptConfiguration) {

    const cipher = new AesGcmCipher(cfg.cryptoService);

    await updateChunkQueue(
Hmm. This means if there is a network error in this function, it won't propagate anymore. Instead of throwing, you will have to explicitly call a _reject handler on the chunk.
- In this method: add a corresponding _reject parameter, in addition to the _resolve handler below. This then propagates an error in chunk processing to the stream assembly code here by throwing it out on the await on line 1085:
    if (!chunk.decryptedChunk) {
      await new Promise((resolve, reject) => {
        chunk._resolve = resolve;
        chunk._reject = reject;
      });
    }
- Above, in updateChunkQueue, instead of throwing the error, send it to the offending slice. That is somewhat tricky, since you won't know which slice the error happened on in the current double-looped code, so you'll have to refactor it a bit so the handler runs within the innermost loop, and add another handler for the zip format check which just attaches the error to the first item in the stride. Roughly like the sketch below.
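Something along these lines, purely as a sketch of the handshake (the names and shapes here are illustrative, not the actual tdf.ts code):

    // Sketch only: route a producer-side failure to the consumer awaiting the
    // chunk, instead of throwing inside updateChunkQueue.
    type Chunk = {
      decryptedChunk?: Uint8Array;
      _resolve?: (value?: unknown) => void;
      _reject?: (reason?: unknown) => void;
    };

    async function decryptIntoChunk(
      chunk: Chunk,
      decrypt: () => Promise<Uint8Array> // stands in for the real slice decryption
    ): Promise<void> {
      try {
        chunk.decryptedChunk = await decrypt();
        chunk._resolve?.(chunk.decryptedChunk);
      } catch (e) {
        // Reject the promise the consumer is awaiting, so the error surfaces
        // at the `await` in the stream assembly code instead of being lost.
        chunk._reject?.(e);
      }
    }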
I think it should be hooked up properly, though not sure exactly which zip format check you meant.
A little context:
- Without the await, it runs in parallel with the file writing stream (decrypt and download) and with populating the chunks map with file chunks.
- When they run in parallel, the stream consumes chunks from the map basically as soon as they are populated, so memory never overflows: it is downloading, populating the map, and writing the stream all at once.
- If we await that function, it just populates the map, and only once the map is fully populated with file chunks does it get written to the hard drive.
- That is fine if the file is less than 5 GB, but anything bigger hits the browser's limit on how much it will keep in memory.
Why it didn't fail before: the data was written to the hard drive instantly and removed from browser memory. (A rough sketch of this overlap is below.)
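Illustrative sketch only, not the actual tdf.ts code (fetchChunk and write are stand-ins for download+decrypt and the file writing stream):

    // The producer fills a chunk map while the consumer drains it to disk, so
    // memory stays bounded. Awaiting the producer first would buffer the whole
    // file in memory before anything is written.
    async function pipeline(
      totalChunks: number,
      fetchChunk: (i: number) => Promise<Uint8Array>,
      write: (b: Uint8Array) => Promise<void>
    ): Promise<void> {
      const chunks = new Map<number, Uint8Array>();

      // Producer: started, deliberately not awaited here.
      const producer = (async () => {
        for (let i = 0; i < totalChunks; i++) {
          chunks.set(i, await fetchChunk(i));
        }
      })();

      // Consumer: write each chunk in order and drop it as soon as it is flushed.
      for (let i = 0; i < totalChunks; i++) {
        while (!chunks.has(i)) {
          await new Promise((r) => setTimeout(r, 0)); // yield until the chunk arrives
        }
        await write(chunks.get(i)!);
        chunks.delete(i); // free memory right away, so usage stays near one chunk
      }

      await producer; // surface any producer error at the end
    }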
Force-pushed from d10c301 to 2818e0b
lib/tdf3/src/tdf.ts (outdated)
    const encryptedChunk = new Uint8Array(
      buffer.slice(offset, offset + (encryptedSegmentSize as number))
    );
    return Promise.all(
Unfortunately it won't change anything. Using Promise.all on the decrypts doesn't parallelize the decryption; I tried it and it doesn't give any performance boost.
Here is the work I'm doing for that, which uses workers. But without limiting it, a Promise.all over a lot of chunks can freeze the UI, so I use pLimit, roughly as in the sketch below.
Could you remove the Promise.all for now, since those changes are on the way, and keep just the _reject part?
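For reference, the concurrency-limited pattern would look roughly like this (sketch only; the limit of 4 and decryptChunk are placeholders, not values from this PR):

    import pLimit from 'p-limit';

    // Cap how many decrypts run at once so a large batch doesn't freeze the UI.
    const limit = pLimit(4); // example concurrency, not a tuned value

    async function decryptAll(
      encryptedChunks: Uint8Array[],
      decryptChunk: (c: Uint8Array) => Promise<Uint8Array> // placeholder for the real decrypt
    ): Promise<Uint8Array[]> {
      return Promise.all(
        encryptedChunks.map((chunk) => limit(() => decryptChunk(chunk)))
      );
    }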
Done.
Force-pushed from 2818e0b to 9b22c67
Force-pushed from 907842e to a5a17d4
lib/tdf3/src/tdf.ts (outdated)
@@ -946,7 +947,7 @@ async function updateChunkQueue(
        bufferSize
      );
      if (buffer) {
-       sliceAndDecrypt({
+       await sliceAndDecrypt({
Thanks for the explanation! Updated!
There was still an issue with decrypts, but I was able to test in SS and it seems fine to me so far.
Kudos, SonarCloud Quality Gate passed!
10 GB file successfully decrypted. No memory leak.
You could rewrite it to not use .then, but the logic seems correct as is if you'd rather not.
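Purely to illustrate the two shapes (not the actual code in question; loadChunk is a hypothetical helper):

    // A .then chain and its await equivalent; behavior is the same either way.
    declare function loadChunk(i: number): Promise<Uint8Array>;

    function sizeWithThen(i: number): Promise<number> {
      return loadChunk(i).then((chunk) => chunk.byteLength);
    }

    async function sizeWithAwait(i: number): Promise<number> {
      const chunk = await loadChunk(i);
      return chunk.byteLength;
    }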
This is to fix large file downloads from what I understand from @ivanovSPvirtru 😅