Error when uploading files larger than 1.5 GB: "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed." #3497
Comments
Hey @das-sukriti, thank you for opening this issue. I was not able to reproduce this; I used a 5 GB file. I believe what's happening is that one of the parts is not getting uploaded, which gives out this error, possibly because of network issues (a multipart upload is initiated under the hood for larger files). One option is to use the low-level API and manually retry a part when it is not uploaded properly. You can find the information about the APIs here. Try increasing the … You can also try running a small code snippet which can give you more visibility.
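As an illustration only (not the exact snippet from this thread), a minimal sketch of that kind of diagnostic, assuming the v2 SDK, a readable stream as the body, and placeholder bucket/key names:

```js
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const body = fs.createReadStream('/path/to/large-file.zip'); // placeholder source

const upload = s3.upload(
  { Bucket: 'my-bucket', Key: 'large-file.zip', Body: body },
  { partSize: 10 * 1024 * 1024, queueSize: 4 } // 10 MB parts, 4 parts in flight
);

// Log per-part progress so a failing or stalled part is easier to spot.
upload.on('httpUploadProgress', (progress) => {
  console.log(`uploaded ${progress.loaded} of ${progress.total || 'unknown'} bytes`);
});

upload.send((err, data) => {
  if (err) {
    console.error('upload failed:', err);
    return;
  }
  console.log('upload succeeded:', data.Location);
});
```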
@ajredniwja Hi, thank you for the quick response. I thought s3.upload() was supposed to take care of the multipart retries automatically. In my case, I am streaming the data from a source (which is decided at runtime). I am zipping multiple files into a single zip file and writing it to an S3 location. The size of the zip contents may vary from a few KB to 8 GB. So, as per my understanding, to opt for the low-level multipart API I would need to know the data size in order to decide the part size and part number, and send the multipart requests accordingly. I am refraining from loading the full data into memory to calculate the data size. I am new to the aws-sdk for Node, so I am not sure if there are any tweaks or libraries I could use to perform a multipart upload without knowing the data size. Any example of a possible approach would be very helpful.
The low-level API is used when the size of the data is unknown, as mentioned in the documentation above. For your questions about part size and number of parts, you might want to do something like:

```js
const partSize = 1024 * 1024 * 5; // each part is 5 MB, except the last
const totalParts = Math.ceil(buffer.length / partSize);
```
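To make that concrete, a short follow-up sketch of how those values could be used to cut a Buffer into parts (this assumes the data is already in memory as `buffer`, which the streaming case discussed below avoids):

```js
// Continuing from the two lines above: cut `buffer` into 5 MB slices.
// `buffer` is assumed to be a Node.js Buffer holding the whole object.
const parts = [];
for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
  const start = (partNumber - 1) * partSize;
  const end = Math.min(start + partSize, buffer.length);
  parts.push({ PartNumber: partNumber, Body: buffer.slice(start, end) });
}
// Each entry can then be passed to uploadPart with its PartNumber.
```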
@ajredniwja The question is: how do I read the file asynchronously and upload it in parallel with multipart? How will I get …
You don't actually need to know the size of the object for a multipart upload; you can see this implementation in V3 of the SDK: aws/aws-sdk-js-v3#1547, and examples in other languages: https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html. If you are not able to come up with a solution, I can co-write the code.
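For reference, a rough sketch of that idea in the v2 SDK: buffer the incoming stream into parts of at least 5 MB as they arrive, so the total size never needs to be known up front. It uploads parts sequentially for clarity (the V3 implementation linked above adds concurrency); bucket, key, and error handling are placeholders/minimal.

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const PART_SIZE = 5 * 1024 * 1024; // S3 minimum part size is 5 MB (except the last part)

async function uploadStream(stream, Bucket, Key) {
  const { UploadId } = await s3.createMultipartUpload({ Bucket, Key }).promise();
  const completed = [];
  let partNumber = 1;
  let chunks = [];
  let buffered = 0;

  const uploadPart = async (Body) => {
    const { ETag } = await s3
      .uploadPart({ Bucket, Key, UploadId, PartNumber: partNumber, Body })
      .promise();
    completed.push({ ETag, PartNumber: partNumber });
    partNumber += 1;
  };

  try {
    // Async iteration over a Readable works on Node 10+ (experimental there).
    for await (const chunk of stream) {
      chunks.push(chunk);
      buffered += chunk.length;
      if (buffered >= PART_SIZE) {
        await uploadPart(Buffer.concat(chunks)); // flush once we have >= 5 MB
        chunks = [];
        buffered = 0;
      }
    }
    if (buffered > 0) {
      await uploadPart(Buffer.concat(chunks)); // final (possibly smaller) part
    }
    return s3
      .completeMultipartUpload({
        Bucket,
        Key,
        UploadId,
        MultipartUpload: { Parts: completed },
      })
      .promise();
  } catch (err) {
    // Abort so the incomplete upload does not linger (and incur storage charges).
    await s3.abortMultipartUpload({ Bucket, Key, UploadId }).promise();
    throw err;
  }
}
```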
Hi, I will not be able to share the codebase with you, since this is part of a product belonging to my company. As you mentioned that you tried with 5 GB of data and it worked for you, can you please share the code that you worked on? Maybe we can proceed with that; I need to check if I am doing anything different from you. Also, you mentioned here that we can try the low-level API. Are you saying that if we use low-level APIs like CreateMultipartUpload, we do not need to know the size of the object? Do you have a real code example in JavaScript that I can follow (not documentation)? Sorry, forgot to mention: I have tried the code snippet you provided, and the same error came up.
Apologies for the late reply. I used the same code snippet; can you share what was printed on the console from the code snippet?
This issue has not received a response in 1 week. If you still think there is a problem, please leave a comment to prevent the issue from being closed automatically.
Confirm by changing [ ] to [x] below to ensure that it's a bug:
Describe the bug
Hi,
Specs I am using:
node: v10.22.0
aws-sdk: 2.718.0
archiver: 3.0.0
json2csv: 4.5.1
I am getting "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed." error while uploading bulk data using stream in s3.upload. This does not come for data below 1.5Gb size.
I am trying to upload a zip file to an S3 endpoint using the SDK for JavaScript. The aim is to zip multiple CSV files containing data fetched from a DB.
To do that I am following the approach sketched below. Is there something wrong with the way I am using the combination of s3.upload and archiver?
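This is not my actual product code, just a minimal sketch of the combination described above; the bucket, key, and CSV contents are placeholders.

```js
const AWS = require('aws-sdk');
const archiver = require('archiver');
const { PassThrough } = require('stream');

const s3 = new AWS.S3();

// Zip stream is piped through a PassThrough so s3.upload can consume it.
const archive = archiver('zip', { zlib: { level: 9 } });
const passThrough = new PassThrough();
archive.on('error', (err) => console.error('archiver error:', err));
archive.pipe(passThrough);

const upload = s3.upload({
  Bucket: 'my-bucket',
  Key: 'export/archive.zip',
  Body: passThrough,
});

// CSV content is appended as it is produced (in the real case, fetched from the DB).
archive.append('id,name\n1,example\n', { name: 'file1.csv' });
archive.append('id,name\n2,example\n', { name: 'file2.csv' });
archive.finalize();

upload.send((err, data) => {
  if (err) {
    console.error('upload failed:', err);
    return;
  }
  console.log('uploaded to', data.Location);
});
```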
Is the issue in the browser/Node.js?
Node.js
If on Node.js, are you running this on AWS Lambda?
No
Details of the browser/Node.js version
v10.22.0
SDK version number
v2.718.0
Expected behavior
Expected data.Location to be returned with a value. A value comes back only for data under 1.5 GB.