I tried to use GCS' S3 interoperability to use GCS as a remote registry cache for Docker builds. The idea here is that with minimal changes, you should be able to use GCS in place of S3.
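For context, this is roughly how the setup looks with the AWS Go SDK, which BuildKit's S3 cache backend uses under the hood. The HMAC credentials here are placeholders, and the `BaseEndpoint` option assumes a reasonably recent SDK version:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// GCS HMAC interoperability credentials (placeholder values).
	const hmacAccessID = "GOOG1E_EXAMPLE_ACCESS_ID"
	const hmacSecret = "EXAMPLE_SECRET"

	cfg, err := config.LoadDefaultConfig(context.TODO(),
		config.WithRegion("auto"), // matches the "auto" region in the scope below
		config.WithCredentialsProvider(
			credentials.NewStaticCredentialsProvider(hmacAccessID, hmacSecret, ""),
		),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Point the S3 client at the GCS XML API endpoint instead of AWS.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://storage.googleapis.com")
	})
	_ = client
}
```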
This failed with 403s (even though I tested the GCS HMAC key with Python/boto3 successfully). After digging into the code and turning on the AWS Go SDK's request logging, it looks like Google always adds `gzip(gfe)` to the `Accept-Encoding` header when computing the canonical request hash, whereas the S3 client disables gzip and signs with `identity` only. So the two sides compute different hashes for the canonical request. It looks like this was intentional in aws/aws-sdk-go-v2#748, but that PR also provided a way to add the header back via the `SetHeaderValue` middleware.
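If I'm reading that right, re-adding the header before signing should look something like this sketch with smithy-go's `SetHeaderValue` (I haven't wired this into BuildKit itself):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	smithyhttp "github.com/aws/smithy-go/transport/http"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		// Set the header during the Build step so it is part of the canonical
		// request by the time the Finalize (signing) step runs. The value
		// matches what Google's frontend reports in the error below.
		o.APIOptions = append(o.APIOptions,
			smithyhttp.SetHeaderValue("Accept-Encoding", "identity,gzip(gfe)"),
		)
	})
	_ = client
}
```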
Here's an example response from GCS (with some newlines added for easier reading):
```xml
<?xml version='1.0' encoding='UTF-8'?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message>
  <StringToSign>AWS4-HMAC-SHA256
20230327T070514Z
20230327/auto/s3/aws4_request
63c93f54527f5afb265efb802ce4c44ac731d4c3ea9795d919dd1b506f2286a8</StringToSign>
  <CanonicalRequest>PUT
/manifests/MY_IMAGE_NAME
x-id=PutObject
accept-encoding:identity,gzip(gfe)
amz-sdk-invocation-id:35119a7f-4e72-4f6d-a924-5b73363f12d6
amz-sdk-request:attempt=1; max=3
content-length:183
content-type:application/octet-stream
host:MY_BUCKET_NAME.storage.googleapis.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20230327T070514Z

accept-encoding;amz-sdk-invocation-id;amz-sdk-request;content-length;content-type;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD</CanonicalRequest>
</Error>
```
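For reference, the last line of the `StringToSign` is just the hex SHA-256 of the canonical request, so a single differing `Accept-Encoding` value is enough to change the signature. A quick demonstration, using an abbreviated reconstruction of the canonical request above:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

func hashCanonicalRequest(acceptEncoding string) string {
	// Abbreviated reconstruction of the canonical request from the error
	// above; only the Accept-Encoding header varies between the two sides.
	canonical := strings.Join([]string{
		"PUT",
		"/manifests/MY_IMAGE_NAME",
		"x-id=PutObject",
		"accept-encoding:" + acceptEncoding,
		"host:MY_BUCKET_NAME.storage.googleapis.com",
		"x-amz-content-sha256:UNSIGNED-PAYLOAD",
		"", // blank line separating headers from the signed-header list
		"accept-encoding;host;x-amz-content-sha256",
		"UNSIGNED-PAYLOAD",
	}, "\n")
	sum := sha256.Sum256([]byte(canonical))
	return hex.EncodeToString(sum[:])
}

func main() {
	// The two hashes differ, so the derived signatures differ too.
	fmt.Println("client signs: ", hashCanonicalRequest("identity"))
	fmt.Println("GCS verifies: ", hashCanonicalRequest("identity,gzip(gfe)"))
}
```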
As a test, I commented out this line in the vendored SDK (`vendor/github.com/aws/aws-sdk-go-v2/service/s3/api_client.go`, lines 591 to 593 at 4b4a41f) and returned nil. This made the signature checks pass. The request still failed for another reason (context deadlines with the solver), but that will have to be filed in a separate issue.
I don't know whether always adding gzip back would break compatibility with real S3. So maybe the best course of action here is to add an option to BuildKit's S3 client that conditionally adds gzip to the `Accept-Encoding` header, along the lines of the sketch below.
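A rough illustration of what that option could look like; `withGCSCompat` and the idea of a cache attribute toggling it are hypothetical names, not existing BuildKit code:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	smithyhttp "github.com/aws/smithy-go/transport/http"
)

// withGCSCompat is a hypothetical option that a cache-backend attribute
// (say, "gcs_compat=true") could toggle when the S3 client is built.
// When disabled it does nothing, so real S3 keeps the SDK's defaults.
func withGCSCompat(enabled bool) func(*s3.Options) {
	return func(o *s3.Options) {
		if !enabled {
			return
		}
		o.APIOptions = append(o.APIOptions,
			smithyhttp.SetHeaderValue("Accept-Encoding", "identity,gzip(gfe)"),
		)
	}
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// Enabled only when the user opts in, e.g. via a cache attribute.
	client := s3.NewFromConfig(cfg, withGCSCompat(true))
	_ = client
}
```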
Alternatively, add native support for GCS, but that's a bigger task.