
Presigned url not creating a valid URL for af-south-1 #3015

Open
mangelozzi opened this issue Sep 28, 2021 · 8 comments
Labels
documentation This is a problem with documentation. feature-request This issue requests a feature. p2 This is a standard priority issue s3

Comments

mangelozzi commented Sep 28, 2021

Describe the bug
generate_presigned_url does not create a URL for the region_name specified.

Steps to reproduce

It's worth noting the code I am working on has its own mechanism for storing secrets, so they are retrieved as the variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_STORAGE_BUCKET_NAME.

  1. Create a bucket in the region af-south-1.
  2. Place a file in it.
  3. Try to generate a presigned URL with the following code (I only specify the region_name again when creating the resource because it's not working):
import boto3

session = boto3.Session(
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name='af-south-1',
)
print(session.region_name)  # prints af-south-1
resource = session.resource('s3', region_name='af-south-1')
response = resource.meta.client.generate_presigned_url(
    'get_object',
    Params={'Bucket': AWS_STORAGE_BUCKET_NAME, 'Key': TEST_FILE_NAME},
    ExpiresIn=3600,
)
print(response)
This generates a URL like:

https://bucket-name.s3.amazonaws.com/test1.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA32EU3DKFB3ACLEUV%2F20210928%2Faf-south-1%2Fs3%2Faws4_request&X-Amz-Date=20210928T130419Z&X-Amz-SignedHeaders=host&X-Amz-Expires=1800&X-Amz-Signature=0b684c8f423c9846cac3d437e3444972e2ead4f58ea967bc50c4942534aacf01

Note the global host bucket-name.s3.amazonaws.com, with no region, even though the credential scope in the query string says af-south-1.

Opening that URL gives an error:

<Error>
    <Code>IllegalLocationConstraintException</Code>
    <Message>The af-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.</Message>
    <RequestId>PJJQ0T7PAQHWQK5S</RequestId>
    <HostId>LmcUnFY3JJSd03OMzsGmxYO/tvl/PpYrEO6TtVI80AJiWHPrpge20gllm+RUkKC3ejuGwy8ZpG0=</HostId>
</Error>

Expected behavior
A URL that targets the regional endpoint, with a host like bucket-name.s3.af-south-1.amazonaws.com.

Related to #2098; however, I do set the region name, as shown above.

@mangelozzi mangelozzi added the needs-triage This issue or PR still needs to be triaged. label Sep 28, 2021
mangelozzi commented Sep 28, 2021

I tracked it down to the function _should_use_global_endpoint at botocore/signers.py, line 724:

def _should_use_global_endpoint(client):
    if client.meta.partition != 'aws':
        return False
    s3_config = client.meta.config.s3
    if s3_config:
        if s3_config.get('use_dualstack_endpoint', False):
            return False
        if s3_config.get('us_east_1_regional_endpoint') == 'regional' and \
                client.meta.config.region_name == 'us-east-1':
            return False
    return True

s3_config is None, so the function always returns True. If I change it to return False, the presigned URL works.

Why is the s3 config None when a region name is being specified?

@stobrien89 stobrien89 added the s3 label Sep 28, 2021
@tim-finnigan tim-finnigan self-assigned this Sep 28, 2021
@tim-finnigan tim-finnigan added investigating This issue is being investigated and/or work is in progress to resolve the issue. and removed needs-triage This issue or PR still needs to be triaged. labels Sep 28, 2021
tim-finnigan (Contributor) commented:

Hi @mangelozzi, thanks for reaching out. I was able to reproduce this issue. I found the solution was to include an endpoint_url as mentioned in this issue: #2728. For more context, this comment notes:

For all Regions that launched after March 20, 2019, if a request arrives at the wrong Amazon S3 location, Amazon S3 returns an HTTP 400 Bad Request error. Basically this means the S3 region redirector won't work for regions launched after March 20, 2019. That's why it works when you specify the exact endpoint_url.

I also found an open issue (#2864) requesting a documentation update to cover this scenario. I’m going to create a ticket for the S3 docs team and will update that issue when I hear back from them.

@tim-finnigan tim-finnigan added duplicate This issue is a duplicate. guidance Question that needs advice or information. and removed investigating This issue is being investigated and/or work is in progress to resolve the issue. labels Sep 28, 2021
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to continue the conversation with other community members under this issue, feel free to do so.

@tim-finnigan tim-finnigan reopened this Sep 29, 2021
@tim-finnigan tim-finnigan added needs-review and removed duplicate This issue is a duplicate. guidance Question that needs advice or information. labels Sep 29, 2021
@tim-finnigan tim-finnigan added documentation This is a problem with documentation. feature-request This issue requests a feature. and removed needs-review labels Mar 14, 2022
@aBurmeseDev aBurmeseDev added the p2 This is a standard priority issue label Nov 8, 2022
@tim-finnigan tim-finnigan removed their assignment Nov 18, 2022
eNcacz commented Nov 29, 2022

Possible workaround without hardcoding the URL structure:

# Create a throwaway client just to resolve the regional endpoint,
# then recreate the client with that endpoint pinned explicitly.
s3 = boto3.client('s3', region_name='af-south-1')
endpoint_url = s3.meta.endpoint_url
s3 = boto3.client('s3', endpoint_url=endpoint_url, region_name='af-south-1')

I know it is not pretty, and a fix in boto3 would be much better, but it works and no hardcoding is needed.


jonemo commented Nov 30, 2022

Another workaround is to use an S3 Access Point in place of the bucket name.

For example, let's assume you have a bucket named bucketname in af-south-1 and create an access point named bucketname-ap for the bucket. You can then find the access point's ARN from the AWS Console or using the s3-control list_access_points API and use it in place of the bucket name:

s3 = boto3.client('s3', region_name='af-south-1')
access_point_arn = 'arn:aws:s3:af-south-1:01234567890:accesspoint/bucketname-ap'
url = s3.generate_presigned_url('get_object', Params={'Bucket': access_point_arn, 'Key': 'object.txt'})

This will produce a URL in the correct regional format:

https://[prefix].s3-accesspoint.af-south-1.amazonaws.com/object.txt?[parameters]

In fact, you don't have to specify the region_name during client creation because boto3 will use the region name from the access point ARN.

That said, I agree that a fix in boto3 would be preferable to any workarounds. Further investigation is needed to understand exactly when the current behavior does not work to ensure that any change we make does not also modify currently working inputs.

softwarefactory-project-zuul bot pushed a commit to ansible-collections/community.aws that referenced this issue Jan 23, 2023
aws_ssm - split S3 region/endpoint discovery into dedicated function (#1674)

Depends-On: #1670
SUMMARY
fixes: #1616
Newer AWS regions don't generate valid presigned URLs unless you explicitly pass the endpoint_url for the region (see also boto/boto3#3015)
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
aws_ssm
ADDITIONAL INFORMATION

Reviewed-by: Markus Bergholz <[email protected]>
Reviewed-by: Alina Buzachis <None>
patchback bot and softwarefactory-project-zuul bot pushed backports of the same change (PR #1674, cherry picked from commit 8237ebb, backported to stable-5 as #1677) to ansible-collections/community.aws on Jan 23, 2023.
imtiazmangerah commented Oct 15, 2024

This issue seems to be fixed from version 1.33.8 onwards, provided an addressing_style of virtual is specified, as of this commit:

boto/botocore@4b72854#diff-b0ae51c8153e41a57c73da11fd5c8eb8d42086683ae6e8242e9d2f1979dbc1bbR854

Related issue: boto/botocore#3081

piwawa commented Dec 23, 2024

(quoting imtiazmangerah's comment above)

Can you share a Python demo showing how to fix it?

imtiazmangerah commented, quoting the question above:

Just make sure you are on v1.33.8 or higher, and set up your S3 client as follows:

import boto3
from botocore.client import Config

client = boto3.client("s3", config=Config(signature_version="s3v4", s3={"addressing_style": "virtual"}))
