s3store: Pass Content-Type from filetype metadata field to S3 (#1217)
* feat: pass along the contentType metadata field if supplied

This change causes tusd for Go to match the NodeJS version's behavior, discussed here: https://stackoverflow.com/questions/74148196/how-to-resolve-application-octet-stream-in-s3-using-tus-node-tusd-uppy-or-net

* feat: respect filetype if passed and contentType is not

* chore: remove contentType handling

* Move test into existing test function

* Add documentation

---------

Co-authored-by: Marius Kleidl <[email protected]>
mackinleysmith and Acconut authored Nov 25, 2024
1 parent 9d85248 commit b460d02
Showing 3 changed files with 17 additions and 8 deletions.
2 changes: 2 additions & 0 deletions docs/_storage-backends/aws-s3.md
@@ -89,6 +89,8 @@ If [metadata](https://tus.io/protocols/resumable-upload#upload-metadata) is asso

In addition, the metadata is also stored in the informational object, which can be used to retrieve the original metadata without any characters being replaced.

If the metadata contains a `filetype` key, its value is used to set the `Content-Type` header of the file object. Setting the `Content-Disposition` or `Content-Encoding` headers is not yet supported.
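
For example, a tus client announces the file type through the `filetype` entry of the `Upload-Metadata` header, whose values are base64-encoded as required by the tus protocol. A minimal Go sketch of building such a header (the `filename` entry and the `report.pdf` value are purely illustrative):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// tus Upload-Metadata values are base64-encoded; keys stay plain text.
	fileType := base64.StdEncoding.EncodeToString([]byte("application/pdf"))
	fileName := base64.StdEncoding.EncodeToString([]byte("report.pdf")) // illustrative value

	// Header to send with the upload creation request:
	fmt.Printf("Upload-Metadata: filename %s,filetype %s\n", fileName, fileType)
	// Prints: Upload-Metadata: filename cmVwb3J0LnBkZg==,filetype YXBwbGljYXRpb24vcGRm
}
```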

## Considerations

When receiving a `PATCH` request, parts of its body will be temporarily stored on disk before they can be transferred to S3. This is necessary to meet the minimum part size for an S3 multipart upload enforced by S3 and to allow the AWS SDK to calculate a checksum. Once the part has been uploaded to S3, the temporary file will be removed immediately. Therefore, please ensure that the server running this storage backend has enough disk space available to hold these temporary files.
8 changes: 6 additions & 2 deletions pkg/s3store/s3store.go
@@ -326,11 +326,15 @@ func (store S3Store) NewUpload(ctx context.Context, info handler.FileInfo) (hand

// Create the actual multipart upload
t := time.Now()
res, err := store.Service.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
multipartUploadInput := &s3.CreateMultipartUploadInput{
Bucket: aws.String(store.Bucket),
Key: store.keyWithPrefix(objectId),
Metadata: metadata,
})
}
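// If the upload's metadata contains a filetype entry, forward it to S3 as the object's Content-Type.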
if fileType, found := info.MetaData["filetype"]; found {
multipartUploadInput.ContentType = aws.String(fileType)
}
res, err := store.Service.CreateMultipartUpload(ctx, multipartUploadInput)
store.observeRequestDuration(t, metricCreateMultipartUpload)
if err != nil {
return nil, fmt.Errorf("s3store: unable to create multipart upload:\n%s", err)
15 changes: 9 additions & 6 deletions pkg/s3store/s3store_test.go
@@ -45,26 +45,29 @@ func TestNewUpload(t *testing.T) {
Bucket: aws.String("bucket"),
Key: aws.String("uploadId"),
Metadata: map[string]string{
"foo": "hello",
"bar": "men???hi",
"foo": "hello",
"bar": "men???hi",
"filetype": "application/pdf",
},
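// The filetype metadata entry above is also expected as the object's Content-Type.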
ContentType: aws.String("application/pdf"),
}).Return(&s3.CreateMultipartUploadOutput{
UploadId: aws.String("multipartId"),
}, nil),
s3obj.EXPECT().PutObject(context.Background(), &s3.PutObjectInput{
Bucket: aws.String("bucket"),
Key: aws.String("uploadId.info"),
Body: bytes.NewReader([]byte(`{"ID":"uploadId+multipartId","Size":500,"SizeIsDeferred":false,"Offset":0,"MetaData":{"bar":"menü\r\nhi","foo":"hello"},"IsPartial":false,"IsFinal":false,"PartialUploads":null,"Storage":{"Bucket":"bucket","Key":"uploadId","Type":"s3store"}}`)),
ContentLength: aws.Int64(241),
Body: bytes.NewReader([]byte(`{"ID":"uploadId+multipartId","Size":500,"SizeIsDeferred":false,"Offset":0,"MetaData":{"bar":"menü\r\nhi","filetype":"application/pdf","foo":"hello"},"IsPartial":false,"IsFinal":false,"PartialUploads":null,"Storage":{"Bucket":"bucket","Key":"uploadId","Type":"s3store"}}`)),
ContentLength: aws.Int64(270),
}),
)

info := handler.FileInfo{
ID: "uploadId",
Size: 500,
MetaData: map[string]string{
"foo": "hello",
"bar": "menü\r\nhi",
"foo": "hello",
"bar": "menü\r\nhi",
"filetype": "application/pdf",
},
}

