always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/docs/amazon.cloud.s3_bucket_module.rst b/docs/amazon.cloud.s3_bucket_module.rst
index e21c37f0..5ea94861 100644
--- a/docs/amazon.cloud.s3_bucket_module.rst
+++ b/docs/amazon.cloud.s3_bucket_module.rst
@@ -17,7 +17,7 @@ Version added: 0.1.0
Synopsis
--------
-- Create and manage S3 buckets (list, create, update, describe, delete).
+- Create and manage S3 buckets.
@@ -25,9 +25,10 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
-- boto3 >= 1.17.0
-- botocore >= 1.20.0
-- python >= 3.6
+- python >= 3.9
+- boto3 >= 1.20.0
+- botocore >= 1.23.0
+- jsonpatch
Parameters
@@ -64,7 +65,6 @@ Parameters
string
- / required
@@ -127,7 +127,6 @@ Parameters
string
- / required
|
@@ -160,7 +159,6 @@ Parameters
dictionary
- / required
|
@@ -196,7 +194,6 @@ Parameters
dictionary
- / required
|
@@ -335,7 +332,6 @@ Parameters
string
- / required
|
@@ -353,7 +349,6 @@ Parameters
string
- / required
|
@@ -377,8 +372,7 @@ Parameters
|
AWS access key . If not set then the value of the AWS_ACCESS_KEY_ID , AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_access_key and profile options are mutually exclusive.
aliases: ec2_access_key, access_key
|
@@ -427,8 +421,7 @@ Parameters
AWS secret key . If not set then the value of the AWS_SECRET_ACCESS_KEY , AWS_SECRET_KEY , or EC2_SECRET_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_secret_key and profile options are mutually exclusive.
aliases: ec2_secret_key, secret_key
|
@@ -456,7 +449,6 @@ Parameters
list
/ elements=dictionary
- / required
@@ -522,7 +514,7 @@ Parameters
|
|
- kms_master_keyID can only be used when you set the value of sse_algorithm as aws:kms.
+ KMSMasterKeyID can only be used when you set the value of sse_algorithm as aws:kms.
|
@@ -535,7 +527,6 @@ Parameters
string
- / required
@@ -628,7 +619,6 @@ Parameters
list
/ elements=string
- / required
|
@@ -654,7 +644,6 @@ Parameters
list
/ elements=string
- / required
|
@@ -752,6 +741,26 @@ Parameters
aliases: aws_endpoint_url, endpoint_url
|
+
+
+
+ force
+
+
+ boolean
+
+ |
+
+
+ |
+
+ Cancel IN_PROGRESS and PENDING resource requestes.
+ Because you can only perform a single operation on a given resource at a time, there might be cases where you need to cancel the current resource operation to make the resource available so that another operation may be performed on it.
+ |
+
@@ -776,7 +785,6 @@ Parameters
string
- / required
|
@@ -809,7 +817,6 @@ Parameters
string
- / required
|
@@ -848,7 +855,6 @@ Parameters
string
- / required
|
@@ -866,7 +872,6 @@ Parameters
string
- / required
|
@@ -904,7 +909,6 @@ Parameters
string
- / required
|
@@ -927,7 +931,6 @@ Parameters
integer
- / required
|
@@ -964,7 +967,6 @@ Parameters
dictionary
- / required
|
@@ -1056,7 +1058,6 @@ Parameters
boolean
- / required
|
@@ -1077,7 +1078,6 @@ Parameters
string
- / required
|
@@ -1094,7 +1094,6 @@ Parameters
string
- / required
|
@@ -1162,7 +1161,6 @@ Parameters
string
- / required
|
@@ -1235,7 +1233,6 @@ Parameters
integer
- / required
|
@@ -1260,7 +1257,7 @@ Parameters
|
The date value in ISO 8601 format.
- The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
|
@@ -1333,7 +1330,7 @@ Parameters
Container for the expiration rule that describes when noncurrent objects are expired.
- If your bucket is versioning-enabled (or versioning is suspended), you can set this action to request that Amazon S3 expire noncurrent object versions at a specific period in the objects lifetime
+ If your bucket is versioning-enabled (or versioning is suspended), you can set this action to request that Amazon S3 expire noncurrent object versions at a specific period in the objects lifetime.
|
@@ -1351,7 +1348,7 @@ Parameters
|
- Specified the number of newer noncurrent and current versions that must exists before performing the associated action
+ Specified the number of newer noncurrent and current versions that must exists before performing the associated action.
|
@@ -1364,13 +1361,12 @@ Parameters
integer
- / required
|
- Specified the number of days an object is noncurrent before Amazon S3 can perform the associated action
+ Specified the number of days an object is noncurrent before Amazon S3 can perform the associated action.
|
@@ -1424,7 +1420,7 @@ Parameters
|
- Specified the number of newer noncurrent and current versions that must exists before performing the associated action
+ Specified the number of newer noncurrent and current versions that must exists before performing the associated action.
|
@@ -1437,7 +1433,6 @@ Parameters
string
- / required
@@ -1465,7 +1460,6 @@ Parameters
integer
- / required
|
@@ -1509,7 +1503,7 @@ Parameters
|
|
- Specified the number of newer noncurrent and current versions that must exists before performing the associated action
+ Specified the number of newer noncurrent and current versions that must exists before performing the associated action.
|
@@ -1522,7 +1516,6 @@ Parameters
string
- / required
@@ -1550,7 +1543,6 @@ Parameters
integer
- / required
|
@@ -1620,7 +1612,6 @@ Parameters
string
- / required
|
@@ -1661,7 +1652,6 @@ Parameters
string
- / required
|
@@ -1680,7 +1670,6 @@ Parameters
string
- / required
|
@@ -1704,7 +1693,7 @@ Parameters
|
|
- You must specify at least one of transition_date and transition_in_days
+ You must specify at least one of transition_date and transition_in_days.
|
@@ -1717,7 +1706,6 @@ Parameters
string
- / required
@@ -1751,7 +1739,7 @@ Parameters
|
The date value in ISO 8601 format.
- The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
|
@@ -1788,7 +1776,7 @@ Parameters
|
- You must specify at least one of transition_date and transition_in_days
+ You must specify at least one of transition_date and transition_in_days.
|
@@ -1801,7 +1789,6 @@ Parameters
string
- / required
@@ -1835,7 +1822,7 @@ Parameters
|
The date value in ISO 8601 format.
- The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
|
@@ -1949,7 +1936,6 @@ Parameters
string
- / required
@@ -2000,7 +1986,6 @@ Parameters
string
- / required
|
@@ -2018,7 +2003,6 @@ Parameters
string
- / required
|
@@ -2109,7 +2093,6 @@ Parameters
string
- / required
|
@@ -2145,7 +2128,6 @@ Parameters
dictionary
- / required
|
@@ -2186,7 +2168,6 @@ Parameters
string
- / required
|
@@ -2207,7 +2188,6 @@ Parameters
string
- / required
|
@@ -2228,7 +2208,6 @@ Parameters
string
- / required
|
@@ -2264,7 +2243,6 @@ Parameters
string
- / required
|
@@ -2300,7 +2278,6 @@ Parameters
dictionary
- / required
|
@@ -2341,7 +2318,6 @@ Parameters
string
- / required
|
@@ -2362,7 +2338,6 @@ Parameters
string
- / required
|
@@ -2383,7 +2358,6 @@ Parameters
string
- / required
|
@@ -2419,7 +2393,6 @@ Parameters
string
- / required
|
@@ -2455,7 +2428,6 @@ Parameters
dictionary
- / required
|
@@ -2496,7 +2468,6 @@ Parameters
string
- / required
|
@@ -2517,7 +2488,6 @@ Parameters
string
- / required
|
@@ -2538,7 +2508,6 @@ Parameters
string
- / required
|
@@ -2718,7 +2687,6 @@ Parameters
list
/ elements=dictionary
- / required
|
@@ -2763,8 +2731,7 @@ Parameters
|
|
- Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
- aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+ The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options.
aliases: aws_profile
|
@@ -2801,7 +2768,7 @@ Parameters
Specifies whether Amazon S3 should block public access control lists (ACLs) for this bucket and objects in this bucket.
- Setting this element to True causes the following behavior:
+ Setting this element to True causes the following behavior:.
- PUT Bucket acl and PUT Object acl calls fail if the specified ACL is public.
- PUT Object calls fail if the request includes a public ACL.
Enabling this setting doesnt affect existing policies or ACLs.
@@ -2934,7 +2901,6 @@ Parameters
string
- / required
|
@@ -3009,7 +2975,6 @@ Parameters
dictionary
- / required
|
@@ -3123,7 +3088,6 @@ Parameters
string
- / required
|
@@ -3182,7 +3146,6 @@ Parameters
integer
- / required
|
@@ -3203,7 +3166,6 @@ Parameters
string
- / required
|
@@ -3246,7 +3208,6 @@ Parameters
string
- / required
|
@@ -3270,7 +3231,6 @@ Parameters
dictionary
- / required
|
@@ -3291,7 +3251,6 @@ Parameters
integer
- / required
|
@@ -3417,7 +3376,6 @@ Parameters
string
- / required
|
@@ -3438,7 +3396,6 @@ Parameters
string
- / required
|
@@ -3496,7 +3453,6 @@ Parameters
string
- / required
|
@@ -3516,7 +3472,6 @@ Parameters
string
- / required
|
@@ -3624,7 +3579,6 @@ Parameters
string
- / required
|
@@ -3667,7 +3621,6 @@ Parameters
string
- / required
|
@@ -3691,7 +3644,6 @@ Parameters
string
- / required
|
@@ -3719,8 +3671,7 @@ Parameters
|
AWS STS security token . If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
- If profile is set this parameter is ignored.
- Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The security_token and profile options are mutually exclusive.
Aliases aws_session_token and session_token have been added in version 3.2.0.
aliases: aws_session_token, session_token, aws_security_token, access_token
|
@@ -3930,7 +3881,6 @@ Parameters
string
- / required
@@ -3988,7 +3938,6 @@ Parameters
dictionary
- / required
|
@@ -4093,7 +4042,7 @@ Parameters
|
|
- The specific object key to use in the redirect request.d
+ The specific object key to use in the redirect request.d.
|
@@ -4111,7 +4060,7 @@ Parameters
|
- A container for describing a condition that must be met for the specified redirect to apply.You must specify at least one of http_error_code_returned_equals and key_prefix_equals
+ A container for describing a condition that must be met for the specified redirect to apply.You must specify at least one of http_error_code_returned_equals and key_prefix_equals.
|
@@ -4167,6 +4116,41 @@ Notes
+Examples
+--------
+
+.. code-block:: yaml
+
+ - name: Create S3 bucket
+ amazon.cloud.s3_bucket:
+ bucket_name: '{{ bucket_name }}'
+ state: present
+ register: output
+
+ - name: Describe S3 bucket
+ amazon.cloud.s3_bucket:
+ state: describe
+ bucket_name: '{{ output.result.identifier }}'
+ register: _result
+
+ - name: List S3 buckets
+ amazon.cloud.s3_bucket:
+ state: list
+ register: _result
+
+ - name: Update S3 bucket public access block configuration and tags (diff=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: '{{ output.result.identifier }}'
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ tags:
+ mykey: myval
+ diff: true
+ register: _result
@@ -4193,7 +4177,9 @@ Common return values are documented `here
always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/docs/amazon.cloud.s3_multi_region_access_point_module.rst b/docs/amazon.cloud.s3_multi_region_access_point_module.rst
index 3f691860..4ef01aa7 100644
--- a/docs/amazon.cloud.s3_multi_region_access_point_module.rst
+++ b/docs/amazon.cloud.s3_multi_region_access_point_module.rst
@@ -17,7 +17,7 @@ Version added: 0.1.0
Synopsis
--------
-- Create and manage Amazon S3 Multi-Region Access Points (list, create, update, describe, delete).
+- Create and manage Amazon S3 Multi-Region Access Points.
@@ -25,9 +25,10 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
-- boto3 >= 1.17.0
-- botocore >= 1.20.0
-- python >= 3.6
+- python >= 3.9
+- boto3 >= 1.20.0
+- botocore >= 1.23.0
+- jsonpatch
Parameters
@@ -54,8 +55,7 @@ Parameters
AWS access key . If not set then the value of the AWS_ACCESS_KEY_ID , AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_access_key and profile options are mutually exclusive.
aliases: ec2_access_key, access_key
|
@@ -104,8 +104,7 @@ Parameters
AWS secret key . If not set then the value of the AWS_SECRET_ACCESS_KEY , AWS_SECRET_KEY , or EC2_SECRET_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_secret_key and profile options are mutually exclusive.
aliases: ec2_secret_key, secret_key
|
@@ -144,6 +143,26 @@ Parameters
aliases: aws_endpoint_url, endpoint_url
+
+
+
+ force
+
+
+ boolean
+
+ |
+
+
+ |
+
+ Cancel IN_PROGRESS and PENDING resource requestes.
+ Because you can only perform a single operation on a given resource at a time, there might be cases where you need to cancel the current resource operation to make the resource available so that another operation may be performed on it.
+ |
+
@@ -171,8 +190,7 @@ Parameters
|
|
- Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
- aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+ The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options.
aliases: aws_profile
|
@@ -188,7 +206,7 @@ Parameters
|
- The public_access_block configuration that you want to apply to this Multi Region Access Point.
+ The PublicAccessBlock configuration that you want to apply to this Multi Region Access Point.
You can enable the configuration options in any combination.
|
@@ -211,7 +229,7 @@ Parameters
Specifies whether Amazon S3 should block public access control lists (ACLs) for buckets in this account.
- Setting this element to True causes the following behavior:
+ Setting this element to True causes the following behavior:.
- PUT Bucket acl and PUT Object acl calls fail if the specified ACL is public.
- PUT Object calls fail if the request includes a public ACL.
. - PUT Bucket calls fail if the request includes a public ACL.
@@ -309,7 +327,6 @@ Parameters
list
/ elements=dictionary
- / required
|
@@ -320,13 +337,28 @@ Parameters
|
|
+
+
+ account_id
+
+
+ string
+
+ |
+
+ |
+
+ Not Provived.
+ |
+
+
+ |
bucket
string
- / required
|
@@ -349,8 +381,7 @@ Parameters
|
AWS STS security token . If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
- If profile is set this parameter is ignored.
- Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The security_token and profile options are mutually exclusive.
Aliases aws_session_token and session_token have been added in version 3.2.0.
aliases: aws_session_token, session_token, aws_security_token, access_token
|
@@ -475,7 +506,9 @@ Common return values are documented `here
always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/docs/amazon.cloud.s3_multi_region_access_point_policy_module.rst b/docs/amazon.cloud.s3_multi_region_access_point_policy_module.rst
index 6cf21b22..d2c1bd82 100644
--- a/docs/amazon.cloud.s3_multi_region_access_point_policy_module.rst
+++ b/docs/amazon.cloud.s3_multi_region_access_point_policy_module.rst
@@ -25,9 +25,10 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
-- boto3 >= 1.17.0
-- botocore >= 1.20.0
-- python >= 3.6
+- python >= 3.9
+- boto3 >= 1.20.0
+- botocore >= 1.23.0
+- jsonpatch
Parameters
@@ -54,8 +55,7 @@ Parameters
AWS access key . If not set then the value of the AWS_ACCESS_KEY_ID , AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_access_key and profile options are mutually exclusive.
aliases: ec2_access_key, access_key
|
@@ -104,8 +104,7 @@ Parameters
AWS secret key . If not set then the value of the AWS_SECRET_ACCESS_KEY , AWS_SECRET_KEY , or EC2_SECRET_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_secret_key and profile options are mutually exclusive.
aliases: ec2_secret_key, secret_key
|
@@ -144,6 +143,26 @@ Parameters
aliases: aws_endpoint_url, endpoint_url
+
+
+
+ force
+
+
+ boolean
+
+ |
+
+
+ |
+
+ Cancel IN_PROGRESS and PENDING resource requestes.
+ Because you can only perform a single operation on a given resource at a time, there might be cases where you need to cancel the current resource operation to make the resource available so that another operation may be performed on it.
+ |
+
@@ -151,13 +170,12 @@ Parameters
string
- / required
|
|
- The name of the Multi Region Access Point to apply policy
+ The name of the Multi Region Access Point to apply policy.
|
@@ -167,13 +185,12 @@ Parameters
dictionary
- / required
|
- Policy document to apply to a Multi Region Access Point
+ Policy document to apply to a Multi Region Access Point.
|
@@ -188,8 +205,7 @@ Parameters
|
- Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
- aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+ The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options.
aliases: aws_profile
|
@@ -222,8 +238,7 @@ Parameters
AWS STS security token . If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
- If profile is set this parameter is ignored.
- Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The security_token and profile options are mutually exclusive.
Aliases aws_session_token and session_token have been added in version 3.2.0.
aliases: aws_session_token, session_token, aws_security_token, access_token
|
@@ -348,7 +363,9 @@ Common return values are documented `here
always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/docs/amazon.cloud.s3_object_lambda_access_point_module.rst b/docs/amazon.cloud.s3objectlambda_access_point_module.rst
similarity index 91%
rename from docs/amazon.cloud.s3_object_lambda_access_point_module.rst
rename to docs/amazon.cloud.s3objectlambda_access_point_module.rst
index 041c1d97..9067f47b 100644
--- a/docs/amazon.cloud.s3_object_lambda_access_point_module.rst
+++ b/docs/amazon.cloud.s3objectlambda_access_point_module.rst
@@ -1,9 +1,9 @@
-.. _amazon.cloud.s3_object_lambda_access_point_module:
+.. _amazon.cloud.s3objectlambda_access_point_module:
-******************************************
-amazon.cloud.s3_object_lambda_access_point
-******************************************
+****************************************
+amazon.cloud.s3objectlambda_access_point
+****************************************
**Create and manage Object Lambda Access Points used to access S3 buckets**
@@ -17,7 +17,7 @@ Version added: 0.1.0
Synopsis
--------
-- Create and manage Object Lambda Access Points used to access S3 buckets (list, create, update, describe, delete).
+- Create and manage Object Lambda Access Points used to access S3 buckets.
@@ -25,9 +25,10 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
-- boto3 >= 1.17.0
-- botocore >= 1.20.0
-- python >= 3.6
+- python >= 3.9
+- boto3 >= 1.20.0
+- botocore >= 1.23.0
+- jsonpatch
Parameters
@@ -54,8 +55,7 @@ Parameters
AWS access key . If not set then the value of the AWS_ACCESS_KEY_ID , AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_access_key and profile options are mutually exclusive.
aliases: ec2_access_key, access_key
|
@@ -104,8 +104,7 @@ Parameters
AWS secret key . If not set then the value of the AWS_SECRET_ACCESS_KEY , AWS_SECRET_KEY , or EC2_SECRET_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_secret_key and profile options are mutually exclusive.
aliases: ec2_secret_key, secret_key
|
@@ -144,6 +143,26 @@ Parameters
aliases: aws_endpoint_url, endpoint_url
+
+
+
+ force
+
+
+ boolean
+
+ |
+
+
+ |
+
+ Cancel IN_PROGRESS and PENDING resource requestes.
+ Because you can only perform a single operation on a given resource at a time, there might be cases where you need to cancel the current resource operation to make the resource available so that another operation may be performed on it.
+ |
+
@@ -166,13 +185,12 @@ Parameters
dictionary
- / required
|
|
- The Object lambda Access Point Configuration that configures transformations to be applied on the objects on specified S3 actions_configuration to be applied to this Object lambda Access Point.
+ The Object lambda Access Point Configuration that configures transformations to be applied on the objects on specified S3 ActionsConfiguration to be applied to this Object lambda Access Point.
It specifies Supporting Access Point, Transformation Configurations.
Customers can also set if they like to enable Cloudwatch metrics for accesses to this Object lambda Access Point.
Default setting for Cloudwatch metrics is disable.
@@ -223,7 +241,6 @@ Parameters
string
- / required
|
@@ -259,7 +276,6 @@ Parameters
list
/ elements=string
- / required
|
@@ -277,7 +293,6 @@ Parameters
dictionary
- / required
|
@@ -315,7 +330,6 @@ Parameters
string
- / required
|
@@ -359,8 +373,7 @@ Parameters
|
|
- Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
- aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+ The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options.
aliases: aws_profile
|
@@ -393,8 +406,7 @@ Parameters
AWS STS security token . If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
- If profile is set this parameter is ignored.
- Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The security_token and profile options are mutually exclusive.
Aliases aws_session_token and session_token have been added in version 3.2.0.
aliases: aws_session_token, session_token, aws_security_token, access_token
|
@@ -519,7 +531,9 @@ Common return values are documented `here
always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/docs/amazon.cloud.s3_object_lambda_access_point_policy_module.rst b/docs/amazon.cloud.s3objectlambda_access_point_policy_module.rst
similarity index 88%
rename from docs/amazon.cloud.s3_object_lambda_access_point_policy_module.rst
rename to docs/amazon.cloud.s3objectlambda_access_point_policy_module.rst
index 68c32e5a..2ed090b3 100644
--- a/docs/amazon.cloud.s3_object_lambda_access_point_policy_module.rst
+++ b/docs/amazon.cloud.s3objectlambda_access_point_policy_module.rst
@@ -1,9 +1,9 @@
-.. _amazon.cloud.s3_object_lambda_access_point_policy_module:
+.. _amazon.cloud.s3objectlambda_access_point_policy_module:
-*************************************************
-amazon.cloud.s3_object_lambda_access_point_policy
-*************************************************
+***********************************************
+amazon.cloud.s3objectlambda_access_point_policy
+***********************************************
**Specifies the Object Lambda Access Point resource policy document**
@@ -25,9 +25,10 @@ Requirements
------------
The below requirements are needed on the host that executes this module.
-- boto3 >= 1.17.0
-- botocore >= 1.20.0
-- python >= 3.6
+- python >= 3.9
+- boto3 >= 1.20.0
+- botocore >= 1.23.0
+- jsonpatch
Parameters
@@ -54,8 +55,7 @@ Parameters
AWS access key . If not set then the value of the AWS_ACCESS_KEY_ID , AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_access_key and profile options are mutually exclusive.
aliases: ec2_access_key, access_key
|
@@ -104,8 +104,7 @@ Parameters
AWS secret key . If not set then the value of the AWS_SECRET_ACCESS_KEY , AWS_SECRET_KEY , or EC2_SECRET_KEY environment variable is used.
- If profile is set this parameter is ignored.
- Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The aws_secret_key and profile options are mutually exclusive.
aliases: ec2_secret_key, secret_key
|
@@ -144,6 +143,26 @@ Parameters
aliases: aws_endpoint_url, endpoint_url
+
+
+
+ force
+
+
+ boolean
+
+ |
+
+
+ |
+
+ Cancel IN_PROGRESS and PENDING resource requestes.
+ Because you can only perform a single operation on a given resource at a time, there might be cases where you need to cancel the current resource operation to make the resource available so that another operation may be performed on it.
+ |
+
@@ -151,13 +170,12 @@ Parameters
string
- / required
|
|
- The name of the Amazon S3 object_lambda_access_point to which the policy applies.
+ The name of the Amazon S3 ObjectLambdaAccessPoint to which the policy applies.
|
@@ -167,13 +185,12 @@ Parameters
dictionary
- / required
|
- A policy document containing permissions to add to the specified object_lambda_access_point.
+ A policy document containing permissions to add to the specified ObjectLambdaAccessPoint.
|
@@ -189,8 +206,7 @@ Parameters
|
- Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
- aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+ The profile option is mutually exclusive with the aws_access_key, aws_secret_key and security_token options.
aliases: aws_profile
|
@@ -223,8 +239,7 @@ Parameters
AWS STS security token . If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
- If profile is set this parameter is ignored.
- Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+ The security_token and profile options are mutually exclusive.
Aliases aws_session_token and session_token have been added in version 3.2.0.
aliases: aws_session_token, session_token, aws_security_token, access_token
|
@@ -349,7 +364,9 @@ Common return values are documented `here
always |
- Dictionary containing resource information.
+ When state=list, it is a list containing dictionaries of resource information.
+ Otherwise, it is a dictionary of resource information.
+ When state=absent, it is an empty dictionary.
|
diff --git a/galaxy.yml b/galaxy.yml
index f7279309..2accf8b0 100644
--- a/galaxy.yml
+++ b/galaxy.yml
@@ -2,7 +2,7 @@
namespace: amazon
name: cloud
-version: 0.1.0
+version: 0.2.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)
@@ -10,7 +10,7 @@ description: null
license_file: LICENSE
tags: [amazon, cloud, aws]
dependencies:
- amazon.aws: '>=3.0.0'
+ amazon.aws: '>=4.1.0'
repository: https://github.com/ansible-collections/amazon.cloud
documentation: https://github.com/ansible-collections/amazon.cloud/tree/main/docs
homepage: https://github.com/ansible-collections/amazon.cloud
diff --git a/meta/runtime.yml b/meta/runtime.yml
index e721f896..f755fb96 100644
--- a/meta/runtime.yml
+++ b/meta/runtime.yml
@@ -12,12 +12,39 @@ action_groups:
- logs_log_group
- logs_query_definition
- logs_resource_policy
- - rdsdb_proxy
+ - rds_db_proxy
- redshift_cluster
- redshift_event_subscription
- s3_access_point
- s3_bucket
- s3_multi_region_access_point
- s3_multi_region_access_point_policy
+ - s3objectlambda_access_point
+ - s3objectlambda_access_point_policy
+ - eks_fargate_profile
+ - dynamodb_global_table
+ - eks_addon
+ - iam_server_certificate
+ - kms_alias
+ - kms_replica_key
+ - rds_db_proxy_endpoint
+ - redshift_endpoint_access
+ - redshift_endpoint_authorization
+ - redshift_scheduled_action
+ - route53_dnssec
+ - route53_key_signing_key
+ - cloudtrail_trail
+ - cloudtrail_event_data_store
+ - cloudwatch_composite_alarm
+ - cloudwatch_metric_stream
+ - rdsdb_proxy
- s3_object_lambda_access_point
- s3_object_lambda_access_point_policy
+plugin_routing:
+ modules:
+ rdsdb_proxy:
+ redirect: amazon.cloud.rds_db_proxy
+ s3_object_lambda_access_point:
+ redirect: amazon.cloud.s3objectlambda_access_point
+ s3_object_lambda_access_point_policy:
+ redirect: amazon.cloud.s3objectlambda_access_point_policy
diff --git a/plugins/module_utils/core.py b/plugins/module_utils/core.py
index b5e7151a..25321a8f 100644
--- a/plugins/module_utils/core.py
+++ b/plugins/module_utils/core.py
@@ -35,23 +35,27 @@
import json
+import time
import traceback
from itertools import count
-from typing import Iterable, List, Dict, Optional
+from typing import Iterable, List, Dict, Optional, Union
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
from .utils import (
- JsonPatch,
- make_op,
- op,
normalize_response,
scrub_keys,
to_sync,
to_async,
ansible_dict_to_boto3_tag_list,
snake_dict_to_camel_dict,
+ diff_dicts,
+ snake_to_camel,
+ json_patch,
+ get_patch,
)
+from ansible_collections.amazon.cloud.plugins.module_utils.waiters import get_waiter
+
BOTO3_IMP_ERR = None
try:
import botocore
@@ -69,7 +73,7 @@ def __init__(self, module):
"""
self.module = module
self.client = module.client(
- "cloudcontrol", retry_decorator=AWSRetry.jittered_backoff()
+ "cloudcontrol", retry_decorator=AWSRetry.jittered_backoff(retries=10)
)
@property
@@ -78,20 +82,40 @@ def _waiter_config(self):
max_attempts = self.module.params.get("wait_timeout") // delay
return {"Delay": delay, "MaxAttempts": max_attempts}
- def wait_until_resource_request_success(self, request_token):
+ def wait_until_resource_request_success(self, request_token: str):
try:
- self.client.get_waiter("resource_request_success").wait(
+ # This waiter 'resource_request_success' only waits to reach SUCCESS status. It fails otherwise.
+ # botocore.exceptions.WaiterError: Waiter ResourceRequestSuccess failed: Waiter encountered a terminal failure
+ # state: For expression "ProgressEvent.OperationStatus" we matched expected path: CANCEL_COMPLETE
+ # See https://github.com/boto/botocore/blob/develop/botocore/data/cloudcontrol/2021-09-30/waiters-2.json
+ # We should wait for CANCEL_IN_PROGRESS and reach CANCEL_COMPLETE before updating.
+ # Fall to a custom waiter.
+ #
+ # self.client.get_waiter("resource_request_success").wait(
+ # RequestToken=request_token,
+ # WaiterConfig=self._waiter_config,
+ # )
+ get_waiter(self.client, "resource_request_success").wait(
RequestToken=request_token,
WaiterConfig=self._waiter_config,
)
except botocore.exceptions.WaiterError as e:
self.module.fail_json_aws(
- e,
- msg="An error occurred waiting for the resource request to become successful.",
+ e.last_response["ProgressEvent"]["StatusMessage"],
+ msg="Resource request failed to reach successful state",
+ )
+ except (
+ botocore.exceptions.BotoCoreError,
+ botocore.exceptions.ClientError,
+ ) as e:
+ self.module.fail_json_aws(
+ e, msg="Unable to wait for the resource request to become successful"
)
@to_sync
- async def list_resources(self, type_name: str) -> List:
+ async def list_resources(
+ self, type_name: str, identifiers: Optional[List] = None
+ ) -> List:
"""
An exception occurred during task execution. To see the full traceback, use -vvv.
The error was: botocore.exceptions.OperationNotPageableError: Operation cannot be paginated: list_resources
@@ -105,8 +129,17 @@ async def list_resources(self, type_name: str) -> List:
# https://docs.aws.amazon.com/cloudcontrolapi/latest/APIReference/API_ListResources.html
params = {
# https://docs.aws.amazon.com/cloudcontrolapi/latest/userguide/supported-resources.html
- "TypeName": type_name
+ "TypeName": type_name,
}
+ # When a resource is identified using compound identifiers
+ if identifiers:
+ additional_properties: Dict = {}
+ for id in identifiers:
+ additional_properties[
+ snake_to_camel(id, capitalize_first=True)
+ ] = self.module.params.get(id)
+ params["ResourceModel"] = json.dumps(additional_properties)
+
if i == 0 or "NextToken" in response:
if "NextToken" in response:
params["NextToken"] = response["NextToken"]
@@ -169,13 +202,25 @@ def list_resource_requests(self, params: Iterable) -> List:
def get_resources_async(self, type_name, identifier):
return self.get_resource(type_name, identifier)
- def get_resource(self, type_name: str, primary_identifier: str) -> List:
- # This is the "describe" equivalent for CCAPI
+ def get_resource(
+ self, type_name: str, primary_identifier: Union[str, List, Dict]
+ ) -> List:
+ # This is the "describe" equivalent for AWS Cloud Control API
response: Dict = {}
+ identifier: Dict = {}
+
+ if isinstance(primary_identifier, list):
+ for id in primary_identifier:
+ identifier[
+ snake_to_camel(id, capitalize_first=True)
+ ] = self.module.params.get(id)
+ primary_identifier = json.dumps(identifier)
+ elif isinstance(primary_identifier, dict):
+ primary_identifier = json.dumps(primary_identifier)
try:
response = self.client.get_resource(
- TypeName=type_name, Identifier=primary_identifier
+ TypeName=type_name, Identifier=primary_identifier, aws_retry=True
)
except self.client.exceptions.ResourceNotFoundException:
return response
@@ -186,30 +231,54 @@ def get_resource(self, type_name: str, primary_identifier: str) -> List:
self.module.fail_json_aws(e, msg="Failed to retrieve resource")
result: List = normalize_response(response)
+
return result
def present(
self,
type_name: str,
- identifier: str,
+ primary_identifier: List,
params: Dict,
- create_only_params: Optional[List] = None,
+ create_only_params: List,
) -> bool:
+ results = {"changed": False, "result": {}}
create_only_params = create_only_params or []
+ identifier: Dict = {}
+
+ resource = None
+
+ if self.module.params.get("identifier"):
+ identifier = self.module.params.get("identifier")
+ else:
+ for id in primary_identifier:
+ identifier[
+ snake_to_camel(id, capitalize_first=True)
+ ] = self.module.params.get(id)
+ identifier = json.dumps(identifier)
+
try:
resource = self.client.get_resource(
- TypeName=type_name, Identifier=identifier
+ TypeName=type_name, Identifier=identifier, aws_retry=True
)
- return self.update_resource(resource, params, create_only_params)
+ results = self.update_resource(resource, params, create_only_params)
except self.client.exceptions.ResourceNotFoundException:
- return self.create_resource(type_name, identifier, params)
+ if self.module.params.get("identifier"):
+ self.module.fail_json(
+ f"""You must specify both {*primary_identifier, } to create a new resource.
+ The identifier parameter can only be used to manipulate an existing resource."""
+ )
+ results["changed"] |= self.create_resource(type_name, params)
except (
botocore.exceptions.BotoCoreError,
botocore.exceptions.ClientError,
) as e:
self.module.fail_json_aws(e, msg="Failed to modify resource")
- def create_resource(self, type_name: str, identifier: str, params: Dict) -> bool:
+ results["result"] = self.get_resource(type_name, identifier)
+
+ return results
+
+ def create_resource(self, type_name: str, params: Dict) -> bool:
changed: bool = False
params = json.dumps(params)
@@ -218,24 +287,28 @@ def create_resource(self, type_name: str, identifier: str, params: Dict) -> bool
response = self.client.create_resource(
TypeName=type_name, DesiredState=params
)
- self.wait_until_resource_request_success(
- response["ProgressEvent"]["RequestToken"]
- )
except (
botocore.exceptions.BotoCoreError,
botocore.exceptions.ClientError,
) as e:
self.module.fail_json_aws(e, msg="Failed to create resource")
+
+ self.wait_until_resource_request_success(
+ response["ProgressEvent"]["RequestToken"]
+ )
changed: bool = True
return changed
def check_in_progress_requests(self, type_name: str, identifier: str):
- in_progress_requests = []
+ in_progress_requests: List = []
params = {
"ResourceRequestStatusFilter": {
"Operations": ["CREATE", "DELETE", "UPDATE"],
- "OperationStatuses": ["IN_PROGRESS"],
+ "OperationStatuses": [
+ "IN_PROGRESS",
+ "PENDING",
+ ],
}
}
@@ -249,19 +322,38 @@ def check_in_progress_requests(self, type_name: str, identifier: str):
response,
)
)
+ return in_progress_requests
+
+ def wait_for_in_progress_requests(
+ self, in_progress_requests: List, identifier: str
+ ):
+ # Dont warn if nothing to wait on
+ if in_progress_requests:
+ self.module.warn(
+ f"There is one or more IN PROGRESS operations on {identifier}. Wait until there are no more IN PROGRESS operations before proceding."
+ )
+ [
+ self.wait_until_resource_request_success(e["RequestToken"])
+ for e in in_progress_requests
+ ]
- if in_progress_requests:
- self.module.warn(
- f"There is one or more IN PROGRESS operations on {identifier}. Wait until there are no more IN PROGRESS operations before proceding."
- )
- for e in in_progress_requests:
- self.wait_until_resource_request_success(e["RequestToken"])
-
- def absent(self, type_name: str, identifier: str):
+ def absent(self, type_name: str, primary_identifier: List):
changed: bool = False
+ identifier: Dict = {}
+ response: Dict = {}
+
+ if self.module.params.get("identifier"):
+ identifier = self.module.params.get("identifier")
+ else:
+ for id in primary_identifier:
+ identifier[
+ snake_to_camel(id, capitalize_first=True)
+ ] = self.module.params.get(id)
+ identifier = json.dumps(identifier)
+
try:
response = self.client.get_resource(
- TypeName=type_name, Identifier=identifier
+ TypeName=type_name, Identifier=identifier, aws_retry=True
)
except self.client.exceptions.ResourceNotFoundException:
return changed
@@ -271,29 +363,73 @@ def absent(self, type_name: str, identifier: str):
) as e:
self.module.fail_json_aws(e, msg="Failed to retrieve resource")
else:
- return self.delete_resource(type_name, identifier)
+ return self.delete_resource(
+ type_name, response["ResourceDescription"]["Identifier"]
+ )
def delete_resource(self, type_name: str, identifier: str) -> bool:
changed: bool = True
+ in_progress_requests: List = []
+
+ in_progress_requests = self.check_in_progress_requests(type_name, identifier)
+ # There is already a delete operation IN PROGRESS
+ if any(
+ filter(
+ lambda d: d["Operation"] == "DELETE",
+ in_progress_requests,
+ )
+ ):
+ changed = False
- if not self.module.check_mode:
- try:
- self.check_in_progress_requests(type_name, identifier)
- response = self.client.delete_resource(
- TypeName=type_name, Identifier=identifier
- )
- if self.module.params.get("wait"):
- self.wait_until_resource_request_success(
- response["ProgressEvent"]["RequestToken"]
- )
- except (
- botocore.exceptions.BotoCoreError,
- botocore.exceptions.ClientError,
- ) as e:
- self.module.fail_json_aws(e, msg="Failed to delete resource")
+ if self.module.check_mode:
+ return changed
+
+ self.wait_for_in_progress_requests(in_progress_requests, identifier)
+ try:
+ response = self.client.delete_resource(
+ TypeName=type_name, Identifier=identifier
+ )
+ except self.client.exceptions.ResourceNotFoundException:
+ # If the resource has been deleted by an IN PROGRESS delete operation
+ return changed
+ except (
+ botocore.exceptions.BotoCoreError,
+ botocore.exceptions.ClientError,
+ ) as e:
+ self.module.fail_json_aws(e, msg="Failed to delete resource")
+
+ if self.module.params.get("wait"):
+ self.wait_until_resource_request_success(
+ response["ProgressEvent"]["RequestToken"]
+ )
return changed
+ def ensure_request_status(self, response: Dict) -> bool:
+ # Wait until reource request becomes IN_PROGRESS
+ time_end = time.time() + self.module.params.get("wait_timeout")
+ delay = 15
+
+ while time.time() < time_end:
+ if response and response["ProgressEvent"]["OperationStatus"] == "PENDING":
+ try:
+ response = self.client.get_resource_request_status(
+ RequestToken=response["ProgressEvent"]["RequestToken"]
+ )
+ except (
+ botocore.exceptions.BotoCoreError,
+ botocore.exceptions.ClientError,
+ ) as e:
+ self.module.fail_json_aws(
+ e, msg="Failed to get resource request status"
+ )
+ else:
+ return
+ time.sleep(delay)
+
+ # Timeout occured
+ self.module.fail_json(msg="Timeout occured waiting for resource request")
+
def update_resource(
self,
resource: Dict,
@@ -303,41 +439,78 @@ def update_resource(
identifier = resource["ResourceDescription"]["Identifier"]
type_name = resource["TypeName"]
properties = json.loads(resource["ResourceDescription"]["Properties"])
- changed: bool = False
+ results: Dict = {"changed": False, "result": []}
+ obj = None
# Ignore createOnlyProperties that can be set only during resource creation
- params = scrub_keys(params_to_set, create_only_params)
-
- patch = JsonPatch()
- for k, v in params.items():
- strategy = "merge"
- if v == properties.get(k):
- continue
- if k not in properties:
- patch.append(op("add", k, v))
- else:
- if self.module.params.get("purge_{0}".format(k.lower())):
- strategy = "replace"
- patch.append(make_op(k, properties[k], v, strategy))
+ params = scrub_keys(
+ params_to_set,
+ [
+ snake_to_camel(elem, capitalize_first=True)
+ for elem in create_only_params
+ ],
+ )
- if patch:
- try:
- if not self.module.check_mode:
- self.check_in_progress_requests(type_name, identifier)
+ in_progress_requests = self.check_in_progress_requests(type_name, identifier)
+
+ if not self.module.check_mode:
+ if self.module.params.get("force"):
+ self.module.warn(
+ f"There is one or more IN PROGRESS or PENDING resource requests on {identifier} that will be cancelled."
+ )
+ try:
+ for e in in_progress_requests:
+ self.client.cancel_resource_request(
+ RequestToken=e["RequestToken"]
+ )
+ except (
+ botocore.exceptions.BotoCoreError,
+ botocore.exceptions.ClientError,
+ ) as e:
+ self.module.fail_json_aws(
+ e, msg="Failed to cancel resource request"
+ )
+
+ patch = get_patch(self.module, params, properties)
+ obj, error = json_patch(properties, patch)
+ if error:
+ self.module.fail_json(**error)
+ match, diffs = diff_dicts(properties, obj)
+ if not self.module.check_mode:
+ # To handle idempotency when purge_* params are False (where the patch is always generated with strategy='replace')
+ # call self.client.update_resource() only when there's a difference
+ if diffs:
+ # Wait for IN PROGRESS or PENDING resource requests to avoid concurrency exceptions
+ self.wait_for_in_progress_requests(in_progress_requests, identifier)
+ try:
response = self.client.update_resource(
TypeName=type_name,
Identifier=identifier,
PatchDocument=str(patch),
)
- if self.module.params.get("wait"):
- self.wait_until_resource_request_success(
- response["ProgressEvent"]["RequestToken"]
- )
- changed = True
- except (
- botocore.exceptions.BotoCoreError,
- botocore.exceptions.ClientError,
- ) as e:
- self.module.fail_json_aws(e, msg="Failed to update resource")
- return changed
+ except (
+ botocore.exceptions.BotoCoreError,
+ botocore.exceptions.ClientError,
+ ) as e:
+ self.module.fail_json_aws(e, msg="Failed to update resource")
+
+ # Ensure the request is at least IN_PROGRESS to return updated information
+ # Tag updates hangs on PENDING sometimes and updates are not reflected on the resource at this stage
+ self.ensure_request_status(response)
+
+ if self.module.params.get("wait"):
+ self.wait_until_resource_request_success(
+ response["ProgressEvent"]["RequestToken"]
+ )
+ else:
+ # If there's no update and wait=True
+ # wait for any in_progress resource request to complete
+ if self.module.params.get("wait"):
+ self.wait_for_in_progress_requests(in_progress_requests, identifier)
+
+ results["changed"] = not match
+ if self.module._diff:
+ results["diff"] = diffs
+
+ return results
diff --git a/plugins/module_utils/utils.py b/plugins/module_utils/utils.py
index 715ef28f..7553abb3 100644
--- a/plugins/module_utils/utils.py
+++ b/plugins/module_utils/utils.py
@@ -1,14 +1,27 @@
import re
+import copy
import json
import functools
-from typing import Iterable, List, Dict
+import traceback
+from typing import Iterable, List, Dict, Union
+
+JSON_PATCH_IMPORT_ERR = None
+try:
+ import jsonpatch
+
+ HAS_JSON_PATCH = True
+except ImportError:
+ HAS_JSON_PATCH = False
+ JSON_PATCH_IMPORT_ERR = traceback.format_exc()
from ansible.module_utils.common.dict_transformations import (
camel_dict_to_snake_dict,
snake_dict_to_camel_dict,
+ recursive_diff,
)
from ansible.module_utils._text import to_native
+from ansible.module_utils.basic import missing_required_lib
def to_async(fn):
@@ -44,8 +57,6 @@ def _jsonify(data: Dict) -> Dict:
identifier = data.get("Identifier", None)
# Convert the Resource Properties from a str back to json
properties = json.loads(data.get("Properties", None))
- if properties and "Tags" in properties:
- properties["tags"] = boto3_tag_list_to_ansible_dict(properties["Tags"])
data = {"identifier": identifier, "properties": properties}
return data
@@ -75,6 +86,15 @@ def prepend_underscore_and_lower(m):
return re.sub(all_cap_pattern, r"\1_\2", s2).lower()
+def snake_to_camel(snake, capitalize_first=False):
+ if capitalize_first:
+ return "".join(x.capitalize() or "_" for x in snake.split("_"))
+ else:
+ return snake.split("_")[0] + "".join(
+ x.capitalize() or "_" for x in snake.split("_")[1:]
+ )
+
+
def scrub_keys(a_dict: Dict, list_of_keys_to_remove: List[str]) -> Dict:
"""Filter a_dict by removing unwanted key: values listed in list_of_keys_to_remove"""
if not isinstance(a_dict, dict):
@@ -83,20 +103,23 @@ def scrub_keys(a_dict: Dict, list_of_keys_to_remove: List[str]) -> Dict:
def normalize_response(response: Iterable):
- result: List = []
-
resource_descriptions = response.get("ResourceDescription", {}) or response.get(
"ResourceDescriptions", []
)
+
+ def _normalize_response(resource_description):
+ json_res = _jsonify(resource_description)
+ snaked_res = camel_dict_to_snake_dict(json_res)
+ if "tags" in snaked_res["properties"]:
+ snaked_res["properties"]["tags"] = boto3_tag_list_to_ansible_dict(
+ snaked_res["properties"]["tags"]
+ )
+ return snaked_res
+
if isinstance(resource_descriptions, list):
- res = [_jsonify(r_d) for r_d in resource_descriptions]
- _result = [camel_dict_to_snake_dict(r) for r in res]
- result.append(_result)
+ return [_normalize_response(resource) for resource in resource_descriptions]
else:
- result.append(_jsonify(resource_descriptions))
- result = [camel_dict_to_snake_dict(res) for res in result]
-
- return result
+ return _normalize_response(resource_descriptions)
def ansible_dict_to_boto3_tag_list(
@@ -176,17 +199,58 @@ def boto3_tag_list_to_ansible_dict(
)
+def diff_dicts(existing: Dict, new: Dict) -> Union[bool, Dict]:
+ result: Dict = {}
+
+ diff = recursive_diff(existing, new)
+
+ if not diff:
+ return True, {}
+
+ result["before"] = diff[0]
+ result["after"] = diff[1]
+
+ return False, result
+
+
+def json_patch(existing, patch):
+ if not HAS_JSON_PATCH:
+ error = {
+ "msg": missing_required_lib("jsonpatch"),
+ "exception": JSON_PATCH_IMPORT_ERR,
+ }
+ return None, error
+ try:
+ patch = jsonpatch.JsonPatch(patch)
+ patched = patch.apply(existing)
+ return patched, None
+ except jsonpatch.InvalidJsonPatch as e:
+ error = {"msg": "Invalid JSON patch", "exception": e}
+ return None, error
+ except jsonpatch.JsonPatchConflict as e:
+ error = {"msg": "Patch could not be applied due to a conflict", "exception": e}
+ return None, error
+
+
class JsonPatch(list):
def __str__(self):
return json.dumps(self)
-def list_merge(old, new):
- l = []
- for i in old + new:
- if i not in l:
- l.append(i)
- return l
+def find_tag_by_key(key, tags):
+ for tag in tags:
+ if tag["Key"] == key:
+ return tag
+
+
+def tag_merge(t1, t2):
+ for tag in t2:
+ existing = find_tag_by_key(tag["Key"], t1)
+ if existing:
+ existing["Value"] = tag["Value"]
+ else:
+ t1.append(tag)
+ return t1
def op(operation, path, value):
@@ -194,13 +258,45 @@ def op(operation, path, value):
return {"op": operation, "path": path, "value": value}
-# This is a rather naive implementation. Dictionaries within
-# lists and lists within dictionaries will not be merged.
def make_op(path, old, new, strategy):
+ _new_cpy = copy.deepcopy(new)
+
if isinstance(old, dict):
if strategy == "merge":
- new = dict(old, **new)
+ _new_cpy = dict(old, **new)
elif isinstance(old, list):
if strategy == "merge":
- new = list_merge(old, new)
- return op("replace", path, new)
+ _old_cpy = copy.deepcopy(old)
+ _new_cpy = tag_merge(_old_cpy, new)
+
+ return op("replace", path, _new_cpy)
+
+
+def get_patch(module, params, properties):
+ patch = JsonPatch()
+
+ for k, v_in in params.items():
+ strategy = "merge"
+ if k in properties:
+ v_exisiting = properties.get(k)
+ # Continue loop if both values are equal
+ if v_in == v_exisiting:
+ continue
+ # Compare lists contents, not order (i.e. list of tag dicts)
+ if isinstance(v_in, list) and isinstance(v_exisiting, list):
+ if [tag for tag in v_in if tag not in v_exisiting] == [] and [
+ tag for tag in v_exisiting if tag not in v_in
+ ] == []:
+ continue
+ # If purge, then replace old resource
+ if module.params.get("purge_{0}".format(k.lower())):
+ strategy = "replace"
+ # Add difference to JSON patch
+ patch.append(make_op(k, v_exisiting, v_in, strategy))
+ else:
+ # Add patch if key isnt in properties - dont add tags if tags = {} and no tags on resource
+ if k == "Tags" and v_in == [] and "tags" not in properties:
+ continue
+ patch.append(op("add", k, v_in))
+
+ return patch
diff --git a/plugins/module_utils/waiters.py b/plugins/module_utils/waiters.py
new file mode 100644
index 00000000..60fe082b
--- /dev/null
+++ b/plugins/module_utils/waiters.py
@@ -0,0 +1,108 @@
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+import copy
+
+try:
+ import botocore.waiter as core_waiter
+except ImportError:
+ pass # caught by HAS_BOTO3
+
+from ansible_collections.amazon.aws.plugins.module_utils.modules import (
+ _RetryingBotoClientWrapper,
+)
+
+
+cloudcontrolapi_data = {
+ "version": 2,
+ "waiters": {
+ "ResourceRequestSuccess": {
+ "description": "Wait until resource operation request is successful",
+ "delay": 5,
+ "maxAttempts": 24,
+ "operation": "GetResourceRequestStatus",
+ "acceptors": [
+ {
+ "matcher": "path",
+ "argument": "ProgressEvent.OperationStatus",
+ "state": "success",
+ "expected": "SUCCESS",
+ },
+ {
+ "matcher": "path",
+ "argument": "ProgressEvent.OperationStatus",
+ "state": "failure",
+ "expected": "FAILED",
+ },
+ {
+ "matcher": "path",
+ "argument": "ProgressEvent.OperationStatus",
+ "state": "success",
+ "expected": "CANCEL_COMPLETE",
+ },
+ ],
+ }
+ },
+}
+
+
+def _inject_limit_retries(model):
+
+ extra_retries = [
+ "RequestLimitExceeded",
+ "Unavailable",
+ "ServiceUnavailable",
+ "InternalFailure",
+ "InternalError",
+ "TooManyRequestsException",
+ "Throttling",
+ ]
+
+ acceptors = []
+ for error in extra_retries:
+ acceptors.append({"state": "success", "matcher": "error", "expected": error})
+
+ _model = copy.deepcopy(model)
+
+ for waiter in model["waiters"]:
+ _model["waiters"][waiter]["acceptors"].extend(acceptors)
+
+ return _model
+
+
+def cloudcontrolapi_model(name):
+ cloudcontrolapi_models = core_waiter.WaiterModel(
+ waiter_config=_inject_limit_retries(cloudcontrolapi_data)
+ )
+ return cloudcontrolapi_models.get_waiter(name)
+
+
+waiters_by_name = {
+ (
+ "CloudControlApi",
+ "resource_request_success",
+ ): lambda cloudcontrol: core_waiter.Waiter(
+ "resource_request_success",
+ cloudcontrolapi_model("ResourceRequestSuccess"),
+ core_waiter.NormalizedOperationMethod(cloudcontrol.get_resource_request_status),
+ ),
+}
+
+
+def get_waiter(client, waiter_name):
+ if isinstance(client, _RetryingBotoClientWrapper):
+ return get_waiter(client.client, waiter_name)
+ try:
+ return waiters_by_name[(client.__class__.__name__, waiter_name)](client)
+ except KeyError:
+ raise NotImplementedError(
+ "Waiter {0} could not be found for client {1}. Available waiters: {2}".format(
+ waiter_name,
+ type(client),
+ ", ".join(repr(k) for k in waiters_by_name.keys()),
+ )
+ )
diff --git a/plugins/modules/backup_backup_vault.py b/plugins/modules/backup_backup_vault.py
index a4934636..9391c108 100644
--- a/plugins/modules/backup_backup_vault.py
+++ b/plugins/modules/backup_backup_vault.py
@@ -14,8 +14,8 @@
DOCUMENTATION = r"""
module: backup_backup_vault
short_description: Create and manage logical containers where backups are stored
-description: Creates and manages logical containers where backups are stored (list,
- create, update, describe, delete).
+description:
+- Creates and manages logical containers where backups are stored.
options:
access_policy:
description:
@@ -24,7 +24,6 @@
backup_vault_name:
description:
- Not Provived.
- required: true
type: str
backup_vault_tags:
description:
@@ -34,6 +33,15 @@
description:
- Not Provived.
type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
lock_configuration:
description:
- Not Provived.
@@ -49,7 +57,6 @@
min_retention_days:
description:
- Not Provived.
- required: true
type: int
type: dict
notifications:
@@ -60,19 +67,16 @@
description:
- Not Provived.
elements: str
- required: true
type: list
sns_topic_arn:
description:
- Not Provived.
- required: true
type: str
type: dict
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
state:
choices:
@@ -96,7 +100,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -110,7 +113,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -121,7 +123,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -158,24 +163,20 @@ def main():
)
argument_spec["access_policy"] = {"type": "dict"}
- argument_spec["backup_vault_name"] = {"type": "str", "required": True}
+ argument_spec["backup_vault_name"] = {"type": "str"}
argument_spec["backup_vault_tags"] = {"type": "dict"}
argument_spec["encryption_key_arn"] = {"type": "str"}
argument_spec["notifications"] = {
"type": "dict",
"options": {
- "backup_vault_events": {
- "type": "list",
- "required": True,
- "elements": "str",
- },
- "sns_topic_arn": {"type": "str", "required": True},
+ "backup_vault_events": {"type": "list", "elements": "str"},
+ "sns_topic_arn": {"type": "str"},
},
}
argument_spec["lock_configuration"] = {
"type": "dict",
"options": {
- "min_retention_days": {"type": "int", "required": True},
+ "min_retention_days": {"type": "int"},
"max_retention_days": {"type": "int"},
"changeable_for_days": {"type": "int"},
},
@@ -187,21 +188,22 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
["state", "present", ["backup_vault_name"], True],
["state", "absent", ["backup_vault_name"], True],
["state", "get", ["backup_vault_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -221,7 +223,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -229,22 +231,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["backup_vault_name", "encryption_key_arn"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("backup_vault_name")
+ identifier = ["backup_vault_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/backup_framework.py b/plugins/modules/backup_framework.py
index 82fd8a09..8fac0e7c 100644
--- a/plugins/modules/backup_framework.py
+++ b/plugins/modules/backup_framework.py
@@ -14,19 +14,27 @@
DOCUMENTATION = r"""
module: backup_framework
short_description: Create and manage frameworks with one or more controls
-description: Creates and manages frameworks with one or more controls (list, create,
- update, describe, delete).
+description:
+- Creates and manages frameworks with one or more controls.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
framework_arn:
description:
- - An Amazon Resource Name (ARN) that uniquely identifies Framework as a resource
+ - An Amazon Resource Name (ARN) that uniquely identifies Framework as a resource.
type: str
framework_controls:
description:
- Contains detailed information about all of the controls of a framework.
- Each framework must contain at least one control.
elements: dict
- required: true
suboptions:
control_input_parameters:
description:
@@ -36,19 +44,16 @@
parameter_name:
description:
- Not Provived.
- required: true
type: str
parameter_value:
description:
- Not Provived.
- required: true
type: str
type: list
control_name:
description:
- The name of a control.
- This name is between 1 and 256 characters.
- required: true
type: str
control_scope:
description:
@@ -83,7 +88,6 @@
- 'You can use any of the following characters: the
set of Unicode letters, digits, whitespace, _,
., /, =, +, and -.'
- required: true
type: str
value:
description:
@@ -94,7 +98,6 @@
- 'You can use any of the following characters: the
set of Unicode letters, digits, whitespace, _,
., /, =, +, and -.'
- required: true
type: str
type: list
type: dict
@@ -121,7 +124,6 @@
and cannot be prefixed with aws:.
- 'You can use any of the following characters: the set of Unicode
letters, digits, whitespace, _, ., /, =, +, and -.'
- required: true
type: str
value:
description:
@@ -130,14 +132,12 @@
and cannot be prefixed with aws:.
- 'You can use any of the following characters: the set of Unicode
letters, digits, whitespace, _, ., /, =, +, and -.'
- required: true
type: str
type: list
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
state:
choices:
@@ -161,7 +161,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -175,7 +174,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -186,7 +184,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -229,13 +230,13 @@ def main():
"type": "list",
"elements": "dict",
"options": {
- "control_name": {"type": "str", "required": True},
+ "control_name": {"type": "str"},
"control_input_parameters": {
"type": "list",
"elements": "dict",
"options": {
- "parameter_name": {"type": "str", "required": True},
- "parameter_value": {"type": "str", "required": True},
+ "parameter_name": {"type": "str"},
+ "parameter_value": {"type": "str"},
},
},
"control_scope": {
@@ -246,23 +247,16 @@ def main():
"tags": {
"type": "list",
"elements": "dict",
- "options": {
- "key": {"type": "str", "required": True},
- "value": {"type": "str", "required": True},
- },
+ "options": {"key": {"type": "str"}, "value": {"type": "str"}},
},
},
},
},
- "required": True,
}
argument_spec["framework_tags"] = {
"type": "list",
"elements": "dict",
- "options": {
- "key": {"type": "str", "required": True},
- "value": {"type": "str", "required": True},
- },
+ "options": {"key": {"type": "str"}, "value": {"type": "str"}},
}
argument_spec["state"] = {
"type": "str",
@@ -271,21 +265,22 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
- ["state", "present", ["framework_controls"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ ["state", "present", ["framework_arn", "framework_controls"], True],
+ ["state", "absent", ["framework_arn"], True],
+ ["state", "get", ["framework_arn"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -304,7 +299,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -312,22 +307,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["framework_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("framework_arn")
+ identifier = ["framework_arn"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/backup_report_plan.py b/plugins/modules/backup_report_plan.py
index 65261cb6..0f8b9d63 100644
--- a/plugins/modules/backup_report_plan.py
+++ b/plugins/modules/backup_report_plan.py
@@ -14,20 +14,28 @@
DOCUMENTATION = r"""
module: backup_report_plan
short_description: Create and manage report plans
-description: Creates and manages report plans (list, create, update, describe, delete).
+description:
+- Creates and manages report plans.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
report_delivery_channel:
description:
- A structure that contains information about where and how to deliver your
reports, specifically your Amazon S3 bucket name, S3 key prefix, and the
formats of your reports.
- required: true
suboptions:
formats:
description:
@@ -38,7 +46,6 @@
s3_bucket_name:
description:
- The unique name of the S3 bucket that receives your reports.
- required: true
type: str
s3_key_prefix:
description:
@@ -88,7 +95,6 @@
description:
- Identifies the report template for the report.
- Reports are built using a report template.
- required: true
suboptions:
framework_arns:
description:
@@ -100,8 +106,7 @@
- Identifies the report template for the report.
- Reports are built using a report template.
- 'The report templates are: C(BACKUP_JOB_REPORT) | C(COPY_JOB_REPORT)
- | C(RESTORE_JOB_REPORT)'
- required: true
+ | C(RESTORE_JOB_REPORT).'
type: str
type: dict
state:
@@ -126,7 +131,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -140,7 +144,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -151,7 +154,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -199,18 +205,16 @@ def main():
"type": "dict",
"options": {
"formats": {"type": "list", "elements": "str"},
- "s3_bucket_name": {"type": "str", "required": True},
+ "s3_bucket_name": {"type": "str"},
"s3_key_prefix": {"type": "str"},
},
- "required": True,
}
argument_spec["report_setting"] = {
"type": "dict",
"options": {
- "report_template": {"type": "str", "required": True},
+ "report_template": {"type": "str"},
"framework_arns": {"type": "list", "elements": "str"},
},
- "required": True,
}
argument_spec["state"] = {
"type": "str",
@@ -219,21 +223,27 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
- ["state", "present", ["report_delivery_channel", "report_setting"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ [
+ "state",
+ "present",
+ ["report_plan_arn", "report_setting", "report_delivery_channel"],
+ True,
+ ],
+ ["state", "absent", ["report_plan_arn"], True],
+ ["state", "get", ["report_plan_arn"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -253,7 +263,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -261,22 +271,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["report_plan_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("report_plan_arn")
+ identifier = ["report_plan_arn"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/cloudtrail_event_data_store.py b/plugins/modules/cloudtrail_event_data_store.py
new file mode 100644
index 00000000..8dad241c
--- /dev/null
+++ b/plugins/modules/cloudtrail_event_data_store.py
@@ -0,0 +1,326 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: cloudtrail_event_data_store
+short_description: Creates and manages a new event data store
+description:
+- Creates and manages a new event data store.
+options:
+ advanced_event_selectors:
+ description:
+ - Advanced event selectors let you create fine-grained selectors for the following
+ AWS CloudTrail event record ?elds.
+ - They help you control costs by logging only those events that are important
+ to you.
+ elements: dict
+ suboptions:
+ field_selectors:
+ description:
+ - A single selector statement in an advanced event selector.
+ elements: dict
+ suboptions:
+ ends_with:
+ description:
+ - An operator that includes events that match the last few
+ characters of the event record field specified as the
+ value of Field.
+ elements: str
+ type: list
+ equals:
+ description:
+ - An operator that includes events that match the exact value
+ of the event record field specified as the value of Field.
+ - This is the only valid operator that you can use with the
+ readOnly, eventCategory, and resources.type fields.
+ elements: str
+ type: list
+ field:
+ description:
+ - A field in an event record on which to filter events to
+ be logged.
+ - Supported fields include readOnly, eventCategory, eventSource
+ (for management events), eventName, resources.type, and
+ resources.ARN.
+ type: str
+ not_ends_with:
+ description:
+ - An operator that excludes events that match the last few
+ characters of the event record field specified as the
+ value of Field.
+ elements: str
+ type: list
+ not_equals:
+ description:
+ - An operator that excludes events that match the exact value
+ of the event record field specified as the value of Field.
+ elements: str
+ type: list
+ not_starts_with:
+ description:
+ - An operator that excludes events that match the first few
+ characters of the event record field specified as the
+ value of Field.
+ elements: str
+ type: list
+ starts_with:
+ description:
+ - An operator that includes events that match the first few
+ characters of the event record field specified as the
+ value of Field.
+ elements: str
+ type: list
+ type: list
+ name:
+ description:
+ - An optional, descriptive name for an advanced event selector, such
+ as Log data events for only two S3 buckets.
+ type: str
+ type: list
+ event_data_store_arn:
+ description:
+ - The ARN of the event data store.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ multi_region_enabled:
+ description:
+ - Indicates whether the event data store includes events from all regions,
+ or only from the region in which it was created.
+ type: bool
+ name:
+ description:
+ - The name of the event data store.
+ type: str
+ organization_enabled:
+ description:
+ - Indicates that an event data store is collecting logged events for an organization.
+ type: bool
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ retention_period:
+ description:
+ - The retention period, in days.
+ type: int
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ termination_protection_enabled:
+ description:
+ - Indicates whether the event data store is protected from termination.
+ type: bool
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["advanced_event_selectors"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "name": {"type": "str"},
+ "field_selectors": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "field": {"type": "str"},
+ "equals": {"type": "list", "elements": "str"},
+ "starts_with": {"type": "list", "elements": "str"},
+ "ends_with": {"type": "list", "elements": "str"},
+ "not_equals": {"type": "list", "elements": "str"},
+ "not_starts_with": {"type": "list", "elements": "str"},
+ "not_ends_with": {"type": "list", "elements": "str"},
+ },
+ },
+ },
+ }
+ argument_spec["event_data_store_arn"] = {"type": "str"}
+ argument_spec["multi_region_enabled"] = {"type": "bool"}
+ argument_spec["name"] = {"type": "str"}
+ argument_spec["organization_enabled"] = {"type": "bool"}
+ argument_spec["retention_period"] = {"type": "int"}
+ argument_spec["termination_protection_enabled"] = {"type": "bool"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ ["state", "present", ["event_data_store_arn"], True],
+ ["state", "absent", ["event_data_store_arn"], True],
+ ["state", "get", ["event_data_store_arn"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::CloudTrail::EventDataStore"
+
+ params = {}
+
+ params["advanced_event_selectors"] = module.params.get("advanced_event_selectors")
+ params["event_data_store_arn"] = module.params.get("event_data_store_arn")
+ params["multi_region_enabled"] = module.params.get("multi_region_enabled")
+ params["name"] = module.params.get("name")
+ params["organization_enabled"] = module.params.get("organization_enabled")
+ params["retention_period"] = module.params.get("retention_period")
+ params["tags"] = module.params.get("tags")
+ params["termination_protection_enabled"] = module.params.get(
+ "termination_protection_enabled"
+ )
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = {}
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["event_data_store_arn"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/cloudtrail_trail.py b/plugins/modules/cloudtrail_trail.py
new file mode 100644
index 00000000..323525c6
--- /dev/null
+++ b/plugins/modules/cloudtrail_trail.py
@@ -0,0 +1,398 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: cloudtrail_trail
+short_description: Creates and manages a trail that specifies the settings for delivery
+ of log data to an Amazon S3 bucket.
+description:
+- Creates and manages a trail that specifies the settings for delivery of log data
+ to an Amazon S3 bucket.
+options:
+ cloud_watch_logs_log_group_arn:
+ description:
+ - Specifies a log group name using an Amazon Resource Name (ARN), a unique
+ identifier that represents the log group to which CloudTrail logs will
+ be delivered.
+ - Not required unless you specify CloudWatchLogsRoleArn.
+ type: str
+ cloud_watch_logs_role_arn:
+ description:
+ - Specifies the role for the CloudWatch Logs endpoint to assume to write to
+ a users log group.
+ type: str
+ enable_log_file_validation:
+ description:
+ - Specifies whether log file validation is enabled.
+ - The default is false.
+ type: bool
+ event_selectors:
+ description:
+ - The type of email sending events to publish to the event destination.
+ elements: dict
+ suboptions:
+ data_resources:
+ description:
+ - CloudTrail supports data event logging for Amazon S3 objects and
+ AWS Lambda functions.
+ - You can specify up to 250 resources for an individual event selector,
+ but the total number of data resources cannot exceed 250 across
+ all event selectors in a trail.
+ - This limit does not apply if you configure resource logging for
+ all data events.
+ elements: dict
+ suboptions:
+ type:
+ description:
+ - The resource type in which you want to log data events.
+ - You can specify AWS::S3::Object or AWS::Lambda::Function
+ resources.
+ type: str
+ values:
+ description:
+ - An array of Amazon Resource Name (ARN) strings or partial
+ ARN strings for the specified objects.
+ elements: str
+ type: list
+ type: list
+ exclude_management_event_sources:
+ description:
+ - An optional list of service event sources from which you do not
+ want management events to be logged on your trail.
+ - In this release, the list can be empty (disables the filter), or
+ it can filter out AWS Key Management Service events by containing
+ kms.amazonaws.com.
+ - By default, I(exclude_management_event_sources) is empty, and AWS
+ KMS events are included in events that are logged to your trail.
+ elements: str
+ type: list
+ include_management_events:
+ description:
+ - Specify if you want your event selector to include management events
+ for your trail.
+ type: bool
+ read_write_type:
+ choices:
+ - All
+ - ReadOnly
+ - WriteOnly
+ description:
+ - Specify if you want your trail to log read-only events, write-only
+ events, or all.
+ - For example, the EC2 GetConsoleOutput is a read-only API operation
+ and RunInstances is a write-only API operation.
+ type: str
+ type: list
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ include_global_service_events:
+ description:
+ - Specifies whether the trail is publishing events from global services such
+ as IAM to the log files.
+ type: bool
+ insight_selectors:
+ description:
+ - A string that contains insight types that are logged on a trail.
+ elements: dict
+ suboptions:
+ insight_type:
+ description:
+ - The type of insight to log on a trail.
+ type: str
+ type: list
+ is_logging:
+ description:
+ - Whether the CloudTrail is currently logging AWS API calls.
+ type: bool
+ is_multi_region_trail:
+ description:
+ - Specifies whether the trail applies only to the current region or to all
+ regions.
+ - The default is false.
+ - If the trail exists only in the current region and this value is set to
+ true, shadow trails (replications of the trail) will be created in the
+ other regions.
+ - If the trail exists in all regions and this value is set to false, the trail
+ will remain in the region where it was created, and its shadow trails
+ in other regions will be deleted.
+ - As a best practice, consider using trails that log events in all regions.
+ type: bool
+ is_organization_trail:
+ description:
+ - Specifies whether the trail is created for all accounts in an organization
+ in AWS Organizations, or only for the current AWS account.
+ - The default is false, and cannot be true unless the call is made on behalf
+ of an AWS account that is the master account for an organization in AWS
+ Organizations.
+ type: bool
+ kms_key_id:
+ description:
+ - Specifies the KMS key ID to use to encrypt the logs delivered by CloudTrail.
+ - The value can be an alias name prefixed by alias/, a fully specified ARN
+ to an alias, a fully specified ARN to a key, or a globally unique identifier.
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ s3_bucket_name:
+ description:
+ - Specifies the name of the Amazon S3 bucket designated for publishing log
+ files.
+ - See Amazon S3 Bucket Naming Requirements.
+ type: str
+ s3_key_prefix:
+ description:
+ - Specifies the Amazon S3 key prefix that comes after the name of the bucket
+ you have designated for log file delivery.
+ - For more information, see Finding Your CloudTrail Log Files.
+ - The maximum length is 200 characters.
+ type: str
+ sns_topic_name:
+ description:
+ - Specifies the name of the Amazon SNS topic defined for notification of log
+ file delivery.
+ - The maximum length is 256 characters.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ trail_name:
+ description:
+ - Not Provived.
+ type: str
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["cloud_watch_logs_log_group_arn"] = {"type": "str"}
+ argument_spec["cloud_watch_logs_role_arn"] = {"type": "str"}
+ argument_spec["enable_log_file_validation"] = {"type": "bool"}
+ argument_spec["event_selectors"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "data_resources": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "type": {"type": "str"},
+ "values": {"type": "list", "elements": "str"},
+ },
+ },
+ "include_management_events": {"type": "bool"},
+ "read_write_type": {
+ "type": "str",
+ "choices": ["All", "ReadOnly", "WriteOnly"],
+ },
+ "exclude_management_event_sources": {"type": "list", "elements": "str"},
+ },
+ }
+ argument_spec["include_global_service_events"] = {"type": "bool"}
+ argument_spec["is_logging"] = {"type": "bool"}
+ argument_spec["is_multi_region_trail"] = {"type": "bool"}
+ argument_spec["is_organization_trail"] = {"type": "bool"}
+ argument_spec["kms_key_id"] = {"type": "str"}
+ argument_spec["s3_bucket_name"] = {"type": "str"}
+ argument_spec["s3_key_prefix"] = {"type": "str"}
+ argument_spec["sns_topic_name"] = {"type": "str"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["trail_name"] = {"type": "str"}
+ argument_spec["insight_selectors"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {"insight_type": {"type": "str"}},
+ }
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ ["state", "present", ["trail_name", "is_logging", "s3_bucket_name"], True],
+ ["state", "absent", ["trail_name"], True],
+ ["state", "get", ["trail_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::CloudTrail::Trail"
+
+ params = {}
+
+ params["cloud_watch_logs_log_group_arn"] = module.params.get(
+ "cloud_watch_logs_log_group_arn"
+ )
+ params["cloud_watch_logs_role_arn"] = module.params.get("cloud_watch_logs_role_arn")
+ params["enable_log_file_validation"] = module.params.get(
+ "enable_log_file_validation"
+ )
+ params["event_selectors"] = module.params.get("event_selectors")
+ params["include_global_service_events"] = module.params.get(
+ "include_global_service_events"
+ )
+ params["insight_selectors"] = module.params.get("insight_selectors")
+ params["is_logging"] = module.params.get("is_logging")
+ params["is_multi_region_trail"] = module.params.get("is_multi_region_trail")
+ params["is_organization_trail"] = module.params.get("is_organization_trail")
+ params["kms_key_id"] = module.params.get("kms_key_id")
+ params["s3_bucket_name"] = module.params.get("s3_bucket_name")
+ params["s3_key_prefix"] = module.params.get("s3_key_prefix")
+ params["sns_topic_name"] = module.params.get("sns_topic_name")
+ params["tags"] = module.params.get("tags")
+ params["trail_name"] = module.params.get("trail_name")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["trail_name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["trail_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/cloudwatch_composite_alarm.py b/plugins/modules/cloudwatch_composite_alarm.py
new file mode 100644
index 00000000..9cbbf1c0
--- /dev/null
+++ b/plugins/modules/cloudwatch_composite_alarm.py
@@ -0,0 +1,259 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: cloudwatch_composite_alarm
+short_description: Creates and manages a composite alarm
+description:
+- Creates and manages a composite alarm.
+- When you create a composite alarm, you specify a rule expression for the alarm that
+ takes into account the alarm states of other alarms that you have created.
+- The composite alarm goes into ALARM state only if all conditions of the rule are
+ met.
+options:
+ actions_enabled:
+ description:
+ - Indicates whether actions should be executed during any changes to the alarm
+ state.
+ - The default is C(True).
+ type: bool
+ actions_suppressor:
+ description:
+ - Actions will be suppressed if the suppressor alarm is in the ALARM state.
+ - ActionsSuppressor can be an AlarmName or an Amazon Resource Name (ARN) from
+ an existing alarm.
+ type: str
+ actions_suppressor_extension_period:
+ description:
+ - Actions will be suppressed if WaitPeriod is active.
+ - The length of time that actions are suppressed is in seconds.
+ type: int
+ actions_suppressor_wait_period:
+ description:
+ - Actions will be suppressed if ExtensionPeriod is active.
+ - The length of time that actions are suppressed is in seconds.
+ type: int
+ alarm_actions:
+ description:
+ - Amazon Resource Name (ARN) of the action.
+ elements: str
+ type: list
+ alarm_description:
+ description:
+ - The description of the alarm.
+ type: str
+ alarm_name:
+ description:
+ - The name of the Composite Alarm.
+ type: str
+ alarm_rule:
+ description:
+ - Expression which aggregates the state of other Alarms (Metric or Composite
+ Alarms).
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ insufficient_data_actions:
+ description:
+ - Amazon Resource Name (ARN) of the action.
+ elements: str
+ type: list
+ ok_actions:
+ description:
+ - Amazon Resource Name (ARN) of the action.
+ elements: str
+ type: list
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["alarm_name"] = {"type": "str"}
+ argument_spec["alarm_rule"] = {"type": "str"}
+ argument_spec["alarm_description"] = {"type": "str"}
+ argument_spec["actions_enabled"] = {"type": "bool"}
+ argument_spec["ok_actions"] = {"type": "list", "elements": "str"}
+ argument_spec["alarm_actions"] = {"type": "list", "elements": "str"}
+ argument_spec["insufficient_data_actions"] = {"type": "list", "elements": "str"}
+ argument_spec["actions_suppressor"] = {"type": "str"}
+ argument_spec["actions_suppressor_wait_period"] = {"type": "int"}
+ argument_spec["actions_suppressor_extension_period"] = {"type": "int"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ ["state", "present", ["alarm_name", "alarm_rule"], True],
+ ["state", "absent", ["alarm_name"], True],
+ ["state", "get", ["alarm_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::CloudWatch::CompositeAlarm"
+
+ params = {}
+
+ params["actions_enabled"] = module.params.get("actions_enabled")
+ params["actions_suppressor"] = module.params.get("actions_suppressor")
+ params["actions_suppressor_extension_period"] = module.params.get(
+ "actions_suppressor_extension_period"
+ )
+ params["actions_suppressor_wait_period"] = module.params.get(
+ "actions_suppressor_wait_period"
+ )
+ params["alarm_actions"] = module.params.get("alarm_actions")
+ params["alarm_description"] = module.params.get("alarm_description")
+ params["alarm_name"] = module.params.get("alarm_name")
+ params["alarm_rule"] = module.params.get("alarm_rule")
+ params["insufficient_data_actions"] = module.params.get("insufficient_data_actions")
+ params["ok_actions"] = module.params.get("ok_actions")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["alarm_name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["alarm_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/cloudwatch_metric_stream.py b/plugins/modules/cloudwatch_metric_stream.py
new file mode 100644
index 00000000..332adb3e
--- /dev/null
+++ b/plugins/modules/cloudwatch_metric_stream.py
@@ -0,0 +1,306 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: cloudwatch_metric_stream
+short_description: Creates and manages a metric stream
+description:
+- Creates and manages a metric stream.
+- Metrics streams can automatically stream CloudWatch metrics to AWS destinations
+ including Amazon S3 and to many third-party solutions.
+- For more information, see U(https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html).
+options:
+ exclude_filters:
+ description:
+ - This structure defines the metrics that will be streamed.
+ elements: dict
+ suboptions:
+ namespace:
+ description:
+ - Only metrics with Namespace matching this value will be streamed.
+ type: str
+ type: list
+ firehose_arn:
+ description:
+ - The ARN of the Kinesis Firehose where to stream the data.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ include_filters:
+ description:
+ - This structure defines the metrics that will be streamed.
+ elements: dict
+ suboptions:
+ namespace:
+ description:
+ - Only metrics with Namespace matching this value will be streamed.
+ type: str
+ type: list
+ name:
+ description:
+ - Name of the metric stream.
+ type: str
+ output_format:
+ description:
+ - The output format of the data streamed to the Kinesis Firehose.
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ role_arn:
+ description:
+ - The ARN of the role that provides access to the Kinesis Firehose.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ statistics_configurations:
+ description:
+ - This structure specifies a list of additional statistics to stream, and
+ the metrics to stream those additional statistics for.
+ - All metrics that match the combination of metric name and namespace will
+ be streamed with the extended statistics, no matter their dimensions.
+ elements: dict
+ suboptions:
+ additional_statistics:
+ description:
+ - The additional statistics to stream for the metrics listed in I(include_metrics).
+ elements: str
+ type: list
+ include_metrics:
+ description:
+ - A structure that specifies the metric name and namespace for one
+ metric that is going to have additional statistics included in
+ the stream.
+ elements: dict
+ suboptions:
+ metric_name:
+ description:
+ - The name of the metric.
+ type: str
+ namespace:
+ description:
+ - The namespace of the metric.
+ type: str
+ type: list
+ type: list
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["exclude_filters"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {"namespace": {"type": "str"}},
+ }
+ argument_spec["firehose_arn"] = {"type": "str"}
+ argument_spec["include_filters"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {"namespace": {"type": "str"}},
+ }
+ argument_spec["name"] = {"type": "str"}
+ argument_spec["role_arn"] = {"type": "str"}
+ argument_spec["output_format"] = {"type": "str"}
+ argument_spec["statistics_configurations"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "additional_statistics": {"type": "list", "elements": "str"},
+ "include_metrics": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "metric_name": {"type": "str"},
+ "namespace": {"type": "str"},
+ },
+ },
+ },
+ }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ [
+ "state",
+ "present",
+ ["name", "role_arn", "output_format", "firehose_arn"],
+ True,
+ ],
+ ["state", "absent", ["name"], True],
+ ["state", "get", ["name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::CloudWatch::MetricStream"
+
+ params = {}
+
+ params["exclude_filters"] = module.params.get("exclude_filters")
+ params["firehose_arn"] = module.params.get("firehose_arn")
+ params["include_filters"] = module.params.get("include_filters")
+ params["name"] = module.params.get("name")
+ params["output_format"] = module.params.get("output_format")
+ params["role_arn"] = module.params.get("role_arn")
+ params["statistics_configurations"] = module.params.get("statistics_configurations")
+ params["tags"] = module.params.get("tags")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "update", "delete", "list", "read"]
+
+ state = module.params.get("state")
+ identifier = ["name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/dynamodb_global_table.py b/plugins/modules/dynamodb_global_table.py
new file mode 100644
index 00000000..99588bd0
--- /dev/null
+++ b/plugins/modules/dynamodb_global_table.py
@@ -0,0 +1,800 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: dynamodb_global_table
+short_description: Creates and manages a Version 2019.11.21 global table
+description:
+- Creates and manages a Version 2019.11.21 global table.
+- This resource cannot be used to create or manage a Version 2017.11.29 global table.
+options:
+ attribute_definitions:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ attribute_name:
+ description:
+ - Not Provived.
+ type: str
+ attribute_type:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ billing_mode:
+ description:
+ - Not Provived.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ global_secondary_indexes:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ index_name:
+ description:
+ - Not Provived.
+ type: str
+ key_schema:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ attribute_name:
+ description:
+ - Not Provived.
+ type: str
+ key_type:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ projection:
+ description:
+ - Not Provived.
+ suboptions:
+ non_key_attributes:
+ description:
+ - Not Provived.
+ elements: str
+ type: list
+ projection_type:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ write_provisioned_throughput_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ write_capacity_auto_scaling_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ max_capacity:
+ description:
+ - Not Provived.
+ type: int
+ min_capacity:
+ description:
+ - Not Provived.
+ type: int
+ seed_capacity:
+ description:
+ - Not Provived.
+ type: int
+ target_tracking_scaling_policy_configuration:
+ description:
+ - Not Provived.
+ suboptions:
+ disable_scale_in:
+ description:
+ - Not Provived.
+ type: bool
+ scale_in_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ scale_out_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ target_value:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ type: dict
+ type: dict
+ type: list
+ key_schema:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ attribute_name:
+ description:
+ - Not Provived.
+ type: str
+ key_type:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ local_secondary_indexes:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ index_name:
+ description:
+ - Not Provived.
+ type: str
+ key_schema:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ attribute_name:
+ description:
+ - Not Provived.
+ type: str
+ key_type:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ projection:
+ description:
+ - Not Provived.
+ suboptions:
+ non_key_attributes:
+ description:
+ - Not Provived.
+ elements: str
+ type: list
+ projection_type:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ type: list
+ replicas:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ contributor_insights_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ enabled:
+ description:
+ - Not Provived.
+ type: bool
+ type: dict
+ global_secondary_indexes:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ contributor_insights_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ enabled:
+ description:
+ - Not Provived.
+ type: bool
+ type: dict
+ index_name:
+ description:
+ - Not Provived.
+ type: str
+ read_provisioned_throughput_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ read_capacity_auto_scaling_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ max_capacity:
+ description:
+ - Not Provived.
+ type: int
+ min_capacity:
+ description:
+ - Not Provived.
+ type: int
+ seed_capacity:
+ description:
+ - Not Provived.
+ type: int
+ target_tracking_scaling_policy_configuration:
+ description:
+ - Not Provived.
+ suboptions:
+ disable_scale_in:
+ description:
+ - Not Provived.
+ type: bool
+ scale_in_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ scale_out_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ target_value:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ type: dict
+ read_capacity_units:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ type: list
+ point_in_time_recovery_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ point_in_time_recovery_enabled:
+ description:
+ - Not Provived.
+ type: bool
+ type: dict
+ read_provisioned_throughput_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ read_capacity_auto_scaling_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ max_capacity:
+ description:
+ - Not Provived.
+ type: int
+ min_capacity:
+ description:
+ - Not Provived.
+ type: int
+ seed_capacity:
+ description:
+ - Not Provived.
+ type: int
+ target_tracking_scaling_policy_configuration:
+ description:
+ - Not Provived.
+ suboptions:
+ disable_scale_in:
+ description:
+ - Not Provived.
+ type: bool
+ scale_in_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ scale_out_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ target_value:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ type: dict
+ read_capacity_units:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ region:
+ description:
+ - Not Provived.
+ type: str
+ sse_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ kms_master_key_id:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ table_class:
+ description:
+ - Not Provived.
+ type: str
+ tags:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ key:
+ description:
+ - Not Provived.
+ type: str
+ value:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ type: list
+ sse_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ sse_enabled:
+ description:
+ - Not Provived.
+ type: bool
+ sse_type:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ stream_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ stream_view_type:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ table_name:
+ description:
+ - Not Provived.
+ type: str
+ time_to_live_specification:
+ description:
+ - Not Provived.
+ suboptions:
+ attribute_name:
+ description:
+ - Not Provived.
+ type: str
+ enabled:
+ description:
+ - Not Provived.
+ type: bool
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+ write_provisioned_throughput_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ write_capacity_auto_scaling_settings:
+ description:
+ - Not Provived.
+ suboptions:
+ max_capacity:
+ description:
+ - Not Provived.
+ type: int
+ min_capacity:
+ description:
+ - Not Provived.
+ type: int
+ seed_capacity:
+ description:
+ - Not Provived.
+ type: int
+ target_tracking_scaling_policy_configuration:
+ description:
+ - Not Provived.
+ suboptions:
+ disable_scale_in:
+ description:
+ - Not Provived.
+ type: bool
+ scale_in_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ scale_out_cooldown:
+ description:
+ - Not Provived.
+ type: int
+ target_value:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ type: dict
+ type: dict
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["attribute_definitions"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "attribute_name": {"type": "str"},
+ "attribute_type": {"type": "str"},
+ },
+ }
+ argument_spec["billing_mode"] = {"type": "str"}
+ argument_spec["global_secondary_indexes"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "index_name": {"type": "str"},
+ "key_schema": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "attribute_name": {"type": "str"},
+ "key_type": {"type": "str"},
+ },
+ },
+ "projection": {
+ "type": "dict",
+ "options": {
+ "non_key_attributes": {"type": "list", "elements": "str"},
+ "projection_type": {"type": "str"},
+ },
+ },
+ "write_provisioned_throughput_settings": {
+ "type": "dict",
+ "options": {
+ "write_capacity_auto_scaling_settings": {
+ "type": "dict",
+ "options": {
+ "min_capacity": {"type": "int"},
+ "max_capacity": {"type": "int"},
+ "seed_capacity": {"type": "int"},
+ "target_tracking_scaling_policy_configuration": {
+ "type": "dict",
+ "options": {
+ "disable_scale_in": {"type": "bool"},
+ "scale_in_cooldown": {"type": "int"},
+ "scale_out_cooldown": {"type": "int"},
+ "target_value": {"type": "int"},
+ },
+ },
+ },
+ }
+ },
+ },
+ },
+ }
+ argument_spec["key_schema"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {"attribute_name": {"type": "str"}, "key_type": {"type": "str"}},
+ }
+ argument_spec["local_secondary_indexes"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "index_name": {"type": "str"},
+ "key_schema": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "attribute_name": {"type": "str"},
+ "key_type": {"type": "str"},
+ },
+ },
+ "projection": {
+ "type": "dict",
+ "options": {
+ "non_key_attributes": {"type": "list", "elements": "str"},
+ "projection_type": {"type": "str"},
+ },
+ },
+ },
+ }
+ argument_spec["write_provisioned_throughput_settings"] = {
+ "type": "dict",
+ "options": {
+ "write_capacity_auto_scaling_settings": {
+ "type": "dict",
+ "options": {
+ "min_capacity": {"type": "int"},
+ "max_capacity": {"type": "int"},
+ "seed_capacity": {"type": "int"},
+ "target_tracking_scaling_policy_configuration": {
+ "type": "dict",
+ "options": {
+ "disable_scale_in": {"type": "bool"},
+ "scale_in_cooldown": {"type": "int"},
+ "scale_out_cooldown": {"type": "int"},
+ "target_value": {"type": "int"},
+ },
+ },
+ },
+ }
+ },
+ }
+ argument_spec["replicas"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "region": {"type": "str"},
+ "global_secondary_indexes": {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "index_name": {"type": "str"},
+ "contributor_insights_specification": {
+ "type": "dict",
+ "options": {"enabled": {"type": "bool"}},
+ },
+ "read_provisioned_throughput_settings": {
+ "type": "dict",
+ "options": {
+ "read_capacity_units": {"type": "int"},
+ "read_capacity_auto_scaling_settings": {
+ "type": "dict",
+ "options": {
+ "min_capacity": {"type": "int"},
+ "max_capacity": {"type": "int"},
+ "seed_capacity": {"type": "int"},
+ "target_tracking_scaling_policy_configuration": {
+ "type": "dict",
+ "options": {
+ "disable_scale_in": {"type": "bool"},
+ "scale_in_cooldown": {"type": "int"},
+ "scale_out_cooldown": {"type": "int"},
+ "target_value": {"type": "int"},
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ "contributor_insights_specification": {
+ "type": "dict",
+ "options": {"enabled": {"type": "bool"}},
+ },
+ "point_in_time_recovery_specification": {
+ "type": "dict",
+ "options": {"point_in_time_recovery_enabled": {"type": "bool"}},
+ },
+ "table_class": {"type": "str"},
+ "sse_specification": {
+ "type": "dict",
+ "options": {"kms_master_key_id": {"type": "str"}},
+ },
+ "tags": {
+ "type": "list",
+ "elements": "dict",
+ "options": {"key": {"type": "str"}, "value": {"type": "str"}},
+ },
+ "read_provisioned_throughput_settings": {
+ "type": "dict",
+ "options": {
+ "read_capacity_units": {"type": "int"},
+ "read_capacity_auto_scaling_settings": {
+ "type": "dict",
+ "options": {
+ "min_capacity": {"type": "int"},
+ "max_capacity": {"type": "int"},
+ "seed_capacity": {"type": "int"},
+ "target_tracking_scaling_policy_configuration": {
+ "type": "dict",
+ "options": {
+ "disable_scale_in": {"type": "bool"},
+ "scale_in_cooldown": {"type": "int"},
+ "scale_out_cooldown": {"type": "int"},
+ "target_value": {"type": "int"},
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+ argument_spec["sse_specification"] = {
+ "type": "dict",
+ "options": {"sse_enabled": {"type": "bool"}, "sse_type": {"type": "str"}},
+ }
+ argument_spec["stream_specification"] = {
+ "type": "dict",
+ "options": {"stream_view_type": {"type": "str"}},
+ }
+ argument_spec["table_name"] = {"type": "str"}
+ argument_spec["time_to_live_specification"] = {
+ "type": "dict",
+ "options": {"attribute_name": {"type": "str"}, "enabled": {"type": "bool"}},
+ }
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ [
+ "state",
+ "present",
+ ["key_schema", "table_name", "replicas", "attribute_definitions"],
+ True,
+ ],
+ ["state", "absent", ["table_name"], True],
+ ["state", "get", ["table_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::DynamoDB::GlobalTable"
+
+ params = {}
+
+ params["attribute_definitions"] = module.params.get("attribute_definitions")
+ params["billing_mode"] = module.params.get("billing_mode")
+ params["global_secondary_indexes"] = module.params.get("global_secondary_indexes")
+ params["key_schema"] = module.params.get("key_schema")
+ params["local_secondary_indexes"] = module.params.get("local_secondary_indexes")
+ params["replicas"] = module.params.get("replicas")
+ params["sse_specification"] = module.params.get("sse_specification")
+ params["stream_specification"] = module.params.get("stream_specification")
+ params["table_name"] = module.params.get("table_name")
+ params["time_to_live_specification"] = module.params.get(
+ "time_to_live_specification"
+ )
+ params["write_provisioned_throughput_settings"] = module.params.get(
+ "write_provisioned_throughput_settings"
+ )
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["local_secondary_indexes", "table_name", "key_schema"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["table_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/eks_addon.py b/plugins/modules/eks_addon.py
new file mode 100644
index 00000000..6208975a
--- /dev/null
+++ b/plugins/modules/eks_addon.py
@@ -0,0 +1,252 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: eks_addon
+short_description: Creates and manages Amazon EKS add-ons
+description:
+- Creates and manages Amazon EKS add-ons.
+- Amazon EKS add-ons require clusters running version 1.18 or later because Amazon
+ EKS add-ons rely on the Server-side Apply Kubernetes feature, which is only available
+ in Kubernetes 1.18 and later.
+- For more information see U(https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html).
+options:
+ addon_name:
+ description:
+ - Name of Addon.
+ type: str
+ addon_version:
+ description:
+ - Version of Addon.
+ type: str
+ cluster_name:
+ description:
+ - Name of Cluster.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ identifier:
+ description:
+ - For compound primary identifiers, to specify the primary identifier as a
+ string, list each in the order that they are specified in the identifier
+ list definition, separated by '|'.
+ - For more details, visit U(https://docs.aws.amazon.com/cloudcontrolapi/latest/userguide/resource-identifier.html).
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ resolve_conflicts:
+ choices:
+ - NONE
+ - OVERWRITE
+ description:
+ - Resolve parameter value conflicts.
+ type: str
+ service_account_role_arn:
+ description:
+ - IAM role to bind to the add-ons service account.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["cluster_name"] = {"type": "str"}
+ argument_spec["addon_name"] = {"type": "str"}
+ argument_spec["addon_version"] = {"type": "str"}
+ argument_spec["resolve_conflicts"] = {
+ "type": "str",
+ "choices": ["NONE", "OVERWRITE"],
+ }
+ argument_spec["service_account_role_arn"] = {"type": "str"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+ argument_spec["identifier"] = {"type": "str"}
+
+ required_if = [
+ ["state", "list", ["cluster_name"], True],
+ ["state", "present", ["cluster_name", "addon_name", "identifier"], True],
+ ["state", "absent", ["cluster_name", "addon_name", "identifier"], True],
+ ["state", "get", ["cluster_name", "addon_name", "identifier"], True],
+ ]
+ mutually_exclusive = [[("cluster_name", "addon_name"), "identifier"]]
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::EKS::Addon"
+
+ params = {}
+
+ params["addon_name"] = module.params.get("addon_name")
+ params["addon_version"] = module.params.get("addon_version")
+ params["cluster_name"] = module.params.get("cluster_name")
+ params["identifier"] = module.params.get("identifier")
+ params["resolve_conflicts"] = module.params.get("resolve_conflicts")
+ params["service_account_role_arn"] = module.params.get("service_account_role_arn")
+ params["tags"] = module.params.get("tags")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["cluster_name", "addon_name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "delete", "list", "update"]
+
+ state = module.params.get("state")
+ identifier = ["cluster_name", "addon_name"]
+ if (
+ state in ("present", "absent", "get", "describe")
+ and module.params.get("identifier") is None
+ ):
+ if not module.params.get("cluster_name") or not module.params.get("addon_name"):
+ module.fail_json(f"You must specify both {*identifier, } identifiers.")
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/eks_cluster.py b/plugins/modules/eks_cluster.py
index 21ec1d8d..46d644cf 100644
--- a/plugins/modules/eks_cluster.py
+++ b/plugins/modules/eks_cluster.py
@@ -14,12 +14,12 @@
DOCUMENTATION = r"""
module: eks_cluster
short_description: Create and manages Amazon EKS control planes
-description: Create and manage Amazon EKS control planes (list, create, update, describe,
- delete).
+description:
+- Create and manage Amazon EKS control planes.
options:
encryption_config:
description:
- - The encryption configuration for the cluster
+ - The encryption configuration for the cluster.
elements: dict
suboptions:
provider:
@@ -41,6 +41,15 @@
elements: str
type: list
type: list
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
kubernetes_network_config:
description:
- The Kubernetes network configuration for the cluster.
@@ -52,7 +61,7 @@
description:
- Ipv4 or Ipv6.
- You can only specify ipv6 for 1.21 and later clusters that use version
- 1.10.1 or later of the Amazon VPC CNI add-on
+ 1.10.1 or later of the Amazon VPC CNI add-on.
type: str
service_ipv4_cidr:
description:
@@ -77,7 +86,7 @@
suboptions:
enabled_types:
description:
- - Enabled Logging Type
+ - Enabled Logging Type.
elements: dict
suboptions:
type:
@@ -88,7 +97,7 @@
- controllerManager
- scheduler
description:
- - name of the log type
+ - name of the log type.
type: str
type: list
type: dict
@@ -101,12 +110,10 @@
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
resources_vpc_config:
description:
- An object representing the VPC configuration to use for an Amazon EKS cluster.
- required: true
suboptions:
endpoint_private_access:
description:
@@ -117,7 +124,7 @@
- The default value for this parameter is false, which disables private
access for your Kubernetes API server.
- If you disable private access and you have nodes or AWS Fargate
- pods in the cluster, then ensure that publicI(access_cidrs) includes
+ pods in the cluster, then ensure that publicAccessCidrs includes
the necessary CIDR blocks for communication with the nodes or
Fargate pods.
type: bool
@@ -158,7 +165,6 @@
subnets to allow communication between your nodes and the Kubernetes
control plane.
elements: str
- required: true
type: list
type: dict
role_arn:
@@ -166,7 +172,6 @@
- The Amazon Resource Name (ARN) of the IAM role that provides permissions
for the Kubernetes control plane to make calls to AWS API operations on
your behalf.
- required: true
type: str
state:
choices:
@@ -190,7 +195,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
version:
description:
@@ -210,18 +214,46 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
"""
EXAMPLES = r"""
+- name: Create EKS cluster
+ amazon.cloud.eks_cluster:
+ name: '{{ eks_cluster_name }}'
+ resources_vpc_config:
+ security_group_ids: "{{ _result_create_security_groups.results | map(attribute='group_id') }}"
+ subnet_ids: "{{ _result_create_subnets.results | map(attribute='subnet.id') }}"
+ endpoint_public_access: true
+ endpoint_private_access: false
+ public_access_cidrs:
+ - 0.0.0.0/0
+ role_arn: '{{ _result_create_iam_role.arn }}'
+ tags:
+ Name: '{{ _resource_prefix }}-eks-cluster'
+ wait_timeout: 900
+ register: _result_create_cluster
+
+- name: Describe EKS cluster
+ amazon.cloud.eks_cluster:
+ name: '{{ eks_cluster_name }}'
+ state: describe
+ register: _result_get_cluster
+
+- name: List EKS clusters
+ amazon.cloud.eks_cluster:
+ state: list
+ register: _result_list_clusters
"""
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -257,21 +289,6 @@ def main():
),
)
- argument_spec["encryption_config"] = {
- "type": "list",
- "elements": "dict",
- "options": {
- "provider": {"type": "dict", "options": {"key_arn": {"type": "str"}}},
- "resources": {"type": "list", "elements": "str"},
- },
- }
- argument_spec["kubernetes_network_config"] = {
- "type": "dict",
- "options": {
- "service_ipv4_cidr": {"type": "str"},
- "ip_family": {"type": "str", "choices": ["ipv4", "ipv6"]},
- },
- }
argument_spec["logging"] = {
"type": "dict",
"options": {
@@ -298,25 +315,35 @@ def main():
}
},
}
+ argument_spec["encryption_config"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "resources": {"type": "list", "elements": "str"},
+ "provider": {"type": "dict", "options": {"key_arn": {"type": "str"}}},
+ },
+ }
+ argument_spec["kubernetes_network_config"] = {
+ "type": "dict",
+ "options": {
+ "service_ipv4_cidr": {"type": "str"},
+ "ip_family": {"type": "str", "choices": ["ipv4", "ipv6"]},
+ },
+ }
+ argument_spec["role_arn"] = {"type": "str"}
argument_spec["name"] = {"type": "str"}
+ argument_spec["version"] = {"type": "str"}
argument_spec["resources_vpc_config"] = {
"type": "dict",
"options": {
- "endpoint_private_access": {"type": "bool"},
"endpoint_public_access": {"type": "bool"},
"public_access_cidrs": {"type": "list", "elements": "str"},
+ "endpoint_private_access": {"type": "bool"},
"security_group_ids": {"type": "list", "elements": "str"},
- "subnet_ids": {"type": "list", "required": True, "elements": "str"},
+ "subnet_ids": {"type": "list", "elements": "str"},
},
- "required": True,
- }
- argument_spec["role_arn"] = {"type": "str", "required": True}
- argument_spec["version"] = {"type": "str"}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -324,16 +351,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
- ["state", "present", ["role_arn", "name", "resources_vpc_config"], True],
+ ["state", "present", ["name", "resources_vpc_config", "role_arn"], True],
["state", "absent", ["name"], True],
["state", "get", ["name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -354,7 +386,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -369,22 +401,32 @@ def main():
"security_group_ids",
]
+ # Necessary to handle when module does not support all the states
+ handlers = ["read", "create", "update", "list", "delete"]
+
state = module.params.get("state")
- identifier = module.params.get("name")
+ identifier = ["name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/eks_fargate_profile.py b/plugins/modules/eks_fargate_profile.py
new file mode 100644
index 00000000..966ea5cf
--- /dev/null
+++ b/plugins/modules/eks_fargate_profile.py
@@ -0,0 +1,341 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: eks_fargate_profile
+short_description: Creates and manage AWS Fargate profiles
+description:
+- Creates and manage AWS Fargate profiles for your Amazon EKS cluster.
+- You must have at least one Fargate profile in a cluster to be able to run pods on
+ Fargate.
+options:
+ cluster_name:
+ description:
+ - Name of the Cluster.
+ type: str
+ fargate_profile_name:
+ description:
+ - Name of FargateProfile.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ identifier:
+ description:
+ - For compound primary identifiers, to specify the primary identifier as a
+ string, list each in the order that they are specified in the identifier
+ list definition, separated by '|'.
+ - For more details, visit U(https://docs.aws.amazon.com/cloudcontrolapi/latest/userguide/resource-identifier.html).
+ type: str
+ pod_execution_role_arn:
+ description:
+ - The IAM policy arn for pods.
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ selectors:
+ description:
+ - Not Provived.
+ elements: dict
+ suboptions:
+ labels:
+ description:
+ - A key-value pair to associate with a pod.
+ elements: dict
+ suboptions:
+ key:
+ description:
+ - The key name of the label.
+ type: str
+ value:
+ description:
+ - The value for the label.
+ type: str
+ type: list
+ namespace:
+ description:
+ - Not Provived.
+ type: str
+ type: list
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ subnets:
+ description:
+ - Not Provived.
+ elements: str
+ type: list
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+- name: Create Fargate Profile a with wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: '{{ eks_fargate_profile_name_a }}'
+ state: present
+ cluster_name: '{{ eks_cluster_name }}'
+ pod_execution_role_arn: '{{ _result_create_iam_role_fp.arn }}'
+ subnets: "{{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains', 'private') | map(attribute='subnet.id') }}"
+ selectors: '{{ selectors }}'
+ wait: true
+ tags: '{{ tags }}'
+ register: _result_create_fp
+
+- name: List Fargate Profiles
+ amazon.cloud.eks_fargate_profile:
+ state: list
+ cluster_name: '{{ eks_cluster_name }}'
+ register: _result_list_fp
+
+- name: Update tags in Fargate Profile a with wait (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: '{{ eks_fargate_profile_name_a }}'
+ state: present
+ cluster_name: '{{ eks_cluster_name }}'
+ pod_execution_role_arn: '{{ _result_create_iam_role_fp.arn }}'
+ subnets: "{{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains', 'private') | map(attribute='subnet.id') }}"
+ selectors: '{{ selectors }}'
+ wait: true
+ tags:
+ env: test
+ test: foo
+ check_mode: true
+ register: _result_update_tags_fp
+
+- name: Delete Fargate Profile a
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: '{{ eks_fargate_profile_name_a }}'
+ cluster_name: '{{ eks_cluster_name }}'
+ state: absent
+ wait: true
+ wait_timeout: 900
+ register: _result_delete_fp
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["cluster_name"] = {"type": "str"}
+ argument_spec["fargate_profile_name"] = {"type": "str"}
+ argument_spec["pod_execution_role_arn"] = {"type": "str"}
+ argument_spec["subnets"] = {"type": "list", "elements": "str"}
+ argument_spec["selectors"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {
+ "namespace": {"type": "str"},
+ "labels": {
+ "type": "list",
+ "elements": "dict",
+ "options": {"key": {"type": "str"}, "value": {"type": "str"}},
+ },
+ },
+ }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+ argument_spec["identifier"] = {"type": "str"}
+
+ required_if = [
+ ["state", "list", ["cluster_name"], True],
+ [
+ "state",
+ "present",
+ [
+ "selectors",
+ "fargate_profile_name",
+ "identifier",
+ "cluster_name",
+ "pod_execution_role_arn",
+ ],
+ True,
+ ],
+ [
+ "state",
+ "absent",
+ ["cluster_name", "fargate_profile_name", "identifier"],
+ True,
+ ],
+ ["state", "get", ["cluster_name", "fargate_profile_name", "identifier"], True],
+ ]
+ mutually_exclusive = [[("cluster_name", "fargate_profile_name"), "identifier"]]
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::EKS::FargateProfile"
+
+ params = {}
+
+ params["cluster_name"] = module.params.get("cluster_name")
+ params["fargate_profile_name"] = module.params.get("fargate_profile_name")
+ params["identifier"] = module.params.get("identifier")
+ params["pod_execution_role_arn"] = module.params.get("pod_execution_role_arn")
+ params["selectors"] = module.params.get("selectors")
+ params["subnets"] = module.params.get("subnets")
+ params["tags"] = module.params.get("tags")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = [
+ "cluster_name",
+ "fargate_profile_name",
+ "pod_execution_role_arn",
+ "subnets",
+ "selectors",
+ ]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "delete", "list", "update"]
+
+ state = module.params.get("state")
+ identifier = ["cluster_name", "fargate_profile_name"]
+ if (
+ state in ("present", "absent", "get", "describe")
+ and module.params.get("identifier") is None
+ ):
+ if not module.params.get("cluster_name") or not module.params.get(
+ "fargate_profile_name"
+ ):
+ module.fail_json(f"You must specify both {*identifier, } identifiers.")
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/iam_role.py b/plugins/modules/iam_role.py
index ae63ecac..843bc245 100644
--- a/plugins/modules/iam_role.py
+++ b/plugins/modules/iam_role.py
@@ -14,18 +14,26 @@
DOCUMENTATION = r"""
module: iam_role
short_description: Create and manage roles
-description: Creates and manages new roles for your AWS account (list, create, update,
- describe, delete).
+description:
+- Creates and manages new roles for your AWS account.
options:
assume_role_policy_document:
description:
- The trust policy that is associated with this role.
- required: true
type: dict
description:
description:
- A description of the role that you provide.
type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
managed_policy_arns:
description:
- A list of Amazon Resource Names (ARNs) of the IAM managed policies that
@@ -56,19 +64,16 @@
policy_document:
description:
- The policy document.
- required: true
type: str
policy_name:
description:
- The friendly name (not ARN) identifying the policy.
- required: true
type: str
type: list
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
role_name:
description:
@@ -96,7 +101,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -110,7 +114,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -121,7 +124,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -157,7 +163,7 @@ def main():
),
)
- argument_spec["assume_role_policy_document"] = {"type": "dict", "required": True}
+ argument_spec["assume_role_policy_document"] = {"type": "dict"}
argument_spec["description"] = {"type": "str"}
argument_spec["managed_policy_arns"] = {"type": "list", "elements": "str"}
argument_spec["max_session_duration"] = {"type": "int"}
@@ -166,17 +172,10 @@ def main():
argument_spec["policies"] = {
"type": "list",
"elements": "dict",
- "options": {
- "policy_document": {"type": "str", "required": True},
- "policy_name": {"type": "str", "required": True},
- },
+ "options": {"policy_document": {"type": "str"}, "policy_name": {"type": "str"}},
}
argument_spec["role_name"] = {"type": "str"}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -184,16 +183,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
- ["state", "present", ["assume_role_policy_document", "role_name"], True],
+ ["state", "present", ["role_name", "assume_role_policy_document"], True],
["state", "absent", ["role_name"], True],
["state", "get", ["role_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -217,7 +221,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -225,22 +229,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["path", "role_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("role_name")
+ identifier = ["role_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/iam_server_certificate.py b/plugins/modules/iam_server_certificate.py
new file mode 100644
index 00000000..db27f2de
--- /dev/null
+++ b/plugins/modules/iam_server_certificate.py
@@ -0,0 +1,261 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: iam_server_certificate
+short_description: Uploads and manages a server certificate entity for the AWS account
+description:
+- Uploads and manages a server certificate entity for the AWS account.
+options:
+ certificate_body:
+ description:
+ - Not Provived.
+ type: str
+ certificate_chain:
+ description:
+ - Not Provived.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ path:
+ description:
+ - Not Provived.
+ type: str
+ private_key:
+ description:
+ - Not Provived.
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ server_certificate_name:
+ description:
+ - Not Provived.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+- name: Create Certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ wait: true
+ register: create_cert
+
+- name: Delete certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: absent
+ register: delete_cert
+
+- name: Create Certificate with Chain and path
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ certificate_chain: '{{ chain_cert_data }}'
+ path: /example/
+ register: create_cert
+
+- name: Gather information about a certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: get
+ register: create_info
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["certificate_body"] = {"type": "str"}
+ argument_spec["certificate_chain"] = {"type": "str"}
+ argument_spec["server_certificate_name"] = {"type": "str"}
+ argument_spec["path"] = {"type": "str"}
+ argument_spec["private_key"] = {"type": "str"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ ["state", "present", ["server_certificate_name"], True],
+ ["state", "absent", ["server_certificate_name"], True],
+ ["state", "get", ["server_certificate_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::IAM::ServerCertificate"
+
+ params = {}
+
+ params["certificate_body"] = module.params.get("certificate_body")
+ params["certificate_chain"] = module.params.get("certificate_chain")
+ params["path"] = module.params.get("path")
+ params["private_key"] = module.params.get("private_key")
+ params["server_certificate_name"] = module.params.get("server_certificate_name")
+ params["tags"] = module.params.get("tags")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = [
+ "server_certificate_name",
+ "private_key",
+ "certificate_body",
+ "certificate_chain",
+ ]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["server_certificate_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/kms_alias.py b/plugins/modules/kms_alias.py
new file mode 100644
index 00000000..9a84fc9d
--- /dev/null
+++ b/plugins/modules/kms_alias.py
@@ -0,0 +1,200 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: kms_alias
+short_description: Specifies a display name for a KMS key.
+description:
+- Specifies a display name for a KMS key.
+options:
+ alias_name:
+ description:
+ - Specifies the alias name.
+ - This value must begin with alias/ followed by a name, such as alias/ExampleAlias.
+ - The alias name cannot begin with alias/aws/.
+ - The alias/aws/ prefix is reserved for AWS managed keys.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ target_key_id:
+ description:
+ - Identifies the AWS KMS key to which the alias refers.
+ - Specify the key ID or the Amazon Resource Name (ARN) of the AWS KMS key.
+ - You cannot specify another alias.
+ - For help finding the key ID and ARN, see Finding the Key ID and ARN in the
+ AWS Key Management Service Developer Guide.
+ type: str
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["alias_name"] = {"type": "str"}
+ argument_spec["target_key_id"] = {"type": "str"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ ["state", "present", ["alias_name", "target_key_id"], True],
+ ["state", "absent", ["alias_name"], True],
+ ["state", "get", ["alias_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::KMS::Alias"
+
+ params = {}
+
+ params["alias_name"] = module.params.get("alias_name")
+ params["target_key_id"] = module.params.get("target_key_id")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["alias_name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["alias_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/kms_replica_key.py b/plugins/modules/kms_replica_key.py
new file mode 100644
index 00000000..e9b05be4
--- /dev/null
+++ b/plugins/modules/kms_replica_key.py
@@ -0,0 +1,244 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: kms_replica_key
+short_description: Creates and manages a multi-Region replica key that is based on
+ a multi-Region primary key
+description:
+- Creates andn manages a multi-Region replica key that is based on a multi-Region
+ primary key.
+options:
+ description:
+ description:
+ - A description of the AWS KMS key.
+ - Use a description that helps you to distinguish this AWS KMS key from others
+ in the account, such as its intended use.
+ type: str
+ enabled:
+ description:
+ - Specifies whether the AWS KMS key is enabled.
+ - Disabled AWS KMS keys cannot be used in cryptographic operations.
+ type: bool
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ key_id:
+ description:
+ - Not Provived.
+ type: str
+ key_policy:
+ description:
+ - The key policy that authorizes use of the AWS KMS key.
+ - The key policy must observe the following rules.
+ type: dict
+ pending_window_in_days:
+ description:
+ - Specifies the number of days in the waiting period before AWS KMS deletes
+ an AWS KMS key that has been removed from a CloudFormation stack.
+ - Enter a value between 7 and 30 days.
+ - The default value is 30 days.
+ type: int
+ primary_key_arn:
+ description:
+ - Identifies the primary AWS KMS key to create a replica of.
+ - Specify the Amazon Resource Name (ARN) of the AWS KMS key.
+ - You cannot specify an alias or key ID. For help finding the ARN, see Finding
+ the Key ID and ARN in the AWS Key Management Service Developer Guide.
+ type: str
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["primary_key_arn"] = {"type": "str"}
+ argument_spec["description"] = {"type": "str"}
+ argument_spec["enabled"] = {"type": "bool"}
+ argument_spec["key_policy"] = {"type": "dict"}
+ argument_spec["pending_window_in_days"] = {"type": "int"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["key_id"] = {"type": "str"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ ["state", "present", ["key_policy", "primary_key_arn", "key_id"], True],
+ ["state", "absent", ["key_id"], True],
+ ["state", "get", ["key_id"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::KMS::ReplicaKey"
+
+ params = {}
+
+ params["description"] = module.params.get("description")
+ params["enabled"] = module.params.get("enabled")
+ params["key_id"] = module.params.get("key_id")
+ params["key_policy"] = module.params.get("key_policy")
+ params["pending_window_in_days"] = module.params.get("pending_window_in_days")
+ params["primary_key_arn"] = module.params.get("primary_key_arn")
+ params["tags"] = module.params.get("tags")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["primary_key_arn"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["key_id"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/lambda_code_signing_config.py b/plugins/modules/lambda_code_signing_config.py
index c8fb5833..1f73a10f 100644
--- a/plugins/modules/lambda_code_signing_config.py
+++ b/plugins/modules/lambda_code_signing_config.py
@@ -14,32 +14,30 @@
DOCUMENTATION = r"""
module: lambda_code_signing_config
short_description: Code signing for AWS Lambda
-description: Creates and manage code signing for AWS Lambda (list, create, update,
- describe, delete).
+description:
+- Creates and manage code signing for AWS Lambda.
options:
allowed_publishers:
description:
- - When the I(code_signing_config) is later on attached to a function, the
- function code will be expected to be signed by profiles from this listWhen
- the I(code_signing_config) is later on attached to a function, the function
- code will be expected to be signed by profiles from this list
- required: true
+ - When the CodeSigningConfig is later on attached to a function, the function
+ code will be expected to be signed by profiles from this listWhen the
+ CodeSigningConfig is later on attached to a function, the function code
+ will be expected to be signed by profiles from this list.
suboptions:
signing_profile_version_arns:
description:
- - List of Signing profile version Arns
+ - List of Signing profile version Arns.
elements: str
- required: true
type: list
type: dict
code_signing_config_arn:
description:
- - A unique Arn for I(code_signing_config) resource
+ - A unique Arn for CodeSigningConfig resource.
type: str
code_signing_policies:
description:
- Policies to control how to act if a signature is invalidPolicies to control
- how to act if a signature is invalid
+ how to act if a signature is invalid.
suboptions:
untrusted_artifact_on_deployment:
choices:
@@ -49,13 +47,22 @@
description:
- Indicates how Lambda operations involve updating the code artifact
will operate.
- - Default to Warn if not provided
+ - Default to Warn if not provided.
type: str
type: dict
description:
description:
- - A description of the I(code_signing_config)
+ - A description of the CodeSigningConfig.
type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
state:
choices:
- present
@@ -84,7 +91,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -95,7 +101,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -135,13 +144,8 @@ def main():
argument_spec["allowed_publishers"] = {
"type": "dict",
"options": {
- "signing_profile_version_arns": {
- "type": "list",
- "required": True,
- "elements": "str",
- }
+ "signing_profile_version_arns": {"type": "list", "elements": "str"}
},
- "required": True,
}
argument_spec["code_signing_policies"] = {
"type": "dict",
@@ -161,15 +165,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["allowed_publishers"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ ["state", "present", ["code_signing_config_arn", "allowed_publishers"], True],
+ ["state", "absent", ["code_signing_config_arn"], True],
+ ["state", "get", ["code_signing_config_arn"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -186,30 +195,40 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
# Ignore createOnlyProperties that can be set only during resource creation
- create_only_params = None
+ create_only_params = {}
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
state = module.params.get("state")
- identifier = module.params.get("code_signing_config_arn")
+ identifier = ["code_signing_config_arn"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/lambda_event_source_mapping.py b/plugins/modules/lambda_event_source_mapping.py
index 20816de6..b09150c6 100644
--- a/plugins/modules/lambda_event_source_mapping.py
+++ b/plugins/modules/lambda_event_source_mapping.py
@@ -14,8 +14,20 @@
DOCUMENTATION = r"""
module: lambda_event_source_mapping
short_description: Create a mapping between an event source and an AWS Lambda function
-description: Create a mapping between an event source and an AWS Lambda function.
+description:
+- Create a mapping between an event source and an AWS Lambda function.
options:
+ amazon_managed_kafka_event_source_config:
+ description:
+ - Specific configuration settings for an MSK event source.Specific configuration
+ settings for an MSK event source.
+ suboptions:
+ consumer_group_id:
+ description:
+ - The identifier for the Kafka Consumer Group to join.The identifier
+ for the Kafka Consumer Group to join.
+ type: str
+ type: dict
batch_size:
description:
- The maximum number of items to retrieve in a single batch.
@@ -66,10 +78,18 @@
type: str
type: list
type: dict
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
function_name:
description:
- The name of the Lambda function.
- required: true
type: str
function_response_types:
choices:
@@ -123,9 +143,19 @@
type: list
type: dict
type: dict
+ self_managed_kafka_event_source_config:
+ description:
+ - Specific configuration settings for a Self-Managed Apache Kafka event source.Specific
+ configuration settings for a Self-Managed Apache Kafka event source.
+ suboptions:
+ consumer_group_id:
+ description:
+ - The identifier for the Kafka Consumer Group to join.
+ type: str
+ type: dict
source_access_configurations:
description:
- - The configuration used by AWS Lambda to access event source
+ - The configuration used by AWS Lambda to access event source.
elements: dict
suboptions:
type:
@@ -153,8 +183,8 @@
type: str
starting_position_timestamp:
description:
- - With I(starting_position) set to C(AT_TIMESTAMP), the time from which to
- start reading, in Unix time seconds.
+ - With StartingPosition set to C(AT_TIMESTAMP), the time from which to start
+ reading, in Unix time seconds.
type: int
state:
choices:
@@ -194,7 +224,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -205,7 +234,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -262,7 +294,7 @@ def main():
}
},
}
- argument_spec["function_name"] = {"type": "str", "required": True}
+ argument_spec["function_name"] = {"type": "str"}
argument_spec["maximum_batching_window_in_seconds"] = {"type": "int"}
argument_spec["maximum_record_age_in_seconds"] = {"type": "int"}
argument_spec["maximum_retry_attempts"] = {"type": "int"}
@@ -308,6 +340,14 @@ def main():
}
},
}
+ argument_spec["amazon_managed_kafka_event_source_config"] = {
+ "type": "dict",
+ "options": {"consumer_group_id": {"type": "str"}},
+ }
+ argument_spec["self_managed_kafka_event_source_config"] = {
+ "type": "dict",
+ "options": {"consumer_group_id": {"type": "str"}},
+ }
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -315,15 +355,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["function_name"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ ["state", "present", ["id", "function_name"], True],
+ ["state", "absent", ["id"], True],
+ ["state", "get", ["id"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -331,6 +376,9 @@ def main():
params = {}
+ params["amazon_managed_kafka_event_source_config"] = module.params.get(
+ "amazon_managed_kafka_event_source_config"
+ )
params["batch_size"] = module.params.get("batch_size")
params["bisect_batch_on_function_error"] = module.params.get(
"bisect_batch_on_function_error"
@@ -352,6 +400,9 @@ def main():
params["parallelization_factor"] = module.params.get("parallelization_factor")
params["queues"] = module.params.get("queues")
params["self_managed_event_source"] = module.params.get("self_managed_event_source")
+ params["self_managed_kafka_event_source_config"] = module.params.get(
+ "self_managed_kafka_event_source_config"
+ )
params["source_access_configurations"] = module.params.get(
"source_access_configurations"
)
@@ -368,7 +419,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -379,24 +430,36 @@ def main():
"starting_position",
"starting_position_timestamp",
"self_managed_event_source",
+ "amazon_managed_kafka_event_source_config",
+ "self_managed_kafka_event_source_config",
]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "delete", "list", "read", "update"]
+
state = module.params.get("state")
- identifier = module.params.get("id")
+ identifier = ["id"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/lambda_function.py b/plugins/modules/lambda_function.py
index 8dc3beab..bd494c42 100644
--- a/plugins/modules/lambda_function.py
+++ b/plugins/modules/lambda_function.py
@@ -14,8 +14,8 @@
DOCUMENTATION = r"""
module: lambda_function
short_description: Create and manage Lambda functions
-description: Creates and manage Lambda functions (list, create, update, describe,
- delete).
+description:
+- Creates and manage Lambda functions.
options:
architectures:
choices:
@@ -28,7 +28,6 @@
code:
description:
- The code for the function.
- required: true
suboptions:
image_uri:
description:
@@ -58,7 +57,7 @@
type: dict
code_signing_config_arn:
description:
- - A unique Arn for I(code_signing_config) resource
+ - A unique Arn for CodeSigningConfig resource.
type: str
dead_letter_config:
description:
@@ -93,7 +92,6 @@
size:
description:
- The amount of ephemeral storage that your function has access to.
- required: true
type: int
type: dict
file_system_configs:
@@ -101,18 +99,26 @@
- Connection settings for an Amazon EFS file system.
- To connect a function to a file system, a mount target must be available
in every Availability Zone that your function connects to.
- - If your template contains an AWS::EFS::I(mount_target) resource, you must
- also specify a I(depends_on) attribute to ensure that the mount target
- is created or updated before the function.
+ - If your template contains an AWS::EFS::MountTarget resource, you must also
+ specify a DependsOn attribute to ensure that the mount target is created
+ or updated before the function.
elements: dict
suboptions:
local_mount_path:
description:
- The path where the function can access the file system, starting
with /mnt/.
- required: true
type: str
type: list
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
function_name:
description:
- The name of the Lambda function, up to 64 characters in length.
@@ -123,11 +129,11 @@
- The name of the method within your code that Lambda calls to execute your
function.
- The format includes the file name.
- - It can also include namespaces and other qualifiers, depending on the runtime
+ - It can also include namespaces and other qualifiers, depending on the runtime.
type: str
image_config:
description:
- - I(image_config)
+ - I(image_config).
suboptions:
command:
description:
@@ -167,13 +173,12 @@
- Image
- Zip
description:
- - I(package_type).
+ - PackageType.
type: str
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
reserved_concurrent_executions:
description:
@@ -182,7 +187,6 @@
role:
description:
- The Amazon Resource Name (ARN) of the functions execution role.
- required: true
type: str
runtime:
description:
@@ -210,7 +214,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
timeout:
description:
@@ -266,7 +269,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -277,7 +279,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -313,54 +318,16 @@ def main():
),
)
- argument_spec["code"] = {
+ argument_spec["image_config"] = {
"type": "dict",
"options": {
- "s3_bucket": {"type": "str"},
- "s3_key": {"type": "str"},
- "s3_object_version": {"type": "str"},
- "zip_file": {"type": "str"},
- "image_uri": {"type": "str"},
+ "working_directory": {"type": "str"},
+ "command": {"type": "list", "elements": "str"},
+ "entry_point": {"type": "list", "elements": "str"},
},
- "required": True,
}
- argument_spec["dead_letter_config"] = {
- "type": "dict",
- "options": {"target_arn": {"type": "str"}},
- }
- argument_spec["description"] = {"type": "str"}
- argument_spec["environment"] = {
- "type": "dict",
- "options": {"variables": {"type": "dict"}},
- }
- argument_spec["ephemeral_storage"] = {
- "type": "dict",
- "options": {"size": {"type": "int", "required": True}},
- }
- argument_spec["file_system_configs"] = {
- "type": "list",
- "elements": "dict",
- "options": {"local_mount_path": {"type": "str", "required": True}},
- }
- argument_spec["function_name"] = {"type": "str"}
- argument_spec["handler"] = {"type": "str"}
- argument_spec["architectures"] = {
- "type": "list",
- "elements": "str",
- "choices": ["arm64", "x86_64"],
- }
- argument_spec["kms_key_arn"] = {"type": "str"}
- argument_spec["layers"] = {"type": "list", "elements": "str"}
argument_spec["memory_size"] = {"type": "int"}
- argument_spec["reserved_concurrent_executions"] = {"type": "int"}
- argument_spec["role"] = {"type": "str", "required": True}
- argument_spec["runtime"] = {"type": "str"}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
- argument_spec["timeout"] = {"type": "int"}
+ argument_spec["description"] = {"type": "str"}
argument_spec["tracing_config"] = {
"type": "dict",
"options": {"mode": {"type": "str", "choices": ["Active", "PassThrough"]}},
@@ -372,16 +339,49 @@ def main():
"subnet_ids": {"type": "list", "elements": "str"},
},
}
- argument_spec["code_signing_config_arn"] = {"type": "str"}
- argument_spec["image_config"] = {
+ argument_spec["dead_letter_config"] = {
+ "type": "dict",
+ "options": {"target_arn": {"type": "str"}},
+ }
+ argument_spec["timeout"] = {"type": "int"}
+ argument_spec["handler"] = {"type": "str"}
+ argument_spec["reserved_concurrent_executions"] = {"type": "int"}
+ argument_spec["code"] = {
"type": "dict",
"options": {
- "entry_point": {"type": "list", "elements": "str"},
- "command": {"type": "list", "elements": "str"},
- "working_directory": {"type": "str"},
+ "s3_object_version": {"type": "str"},
+ "s3_bucket": {"type": "str"},
+ "zip_file": {"type": "str"},
+ "s3_key": {"type": "str"},
+ "image_uri": {"type": "str"},
},
}
+ argument_spec["role"] = {"type": "str"}
+ argument_spec["file_system_configs"] = {
+ "type": "list",
+ "elements": "dict",
+ "options": {"local_mount_path": {"type": "str"}},
+ }
+ argument_spec["function_name"] = {"type": "str"}
+ argument_spec["runtime"] = {"type": "str"}
+ argument_spec["kms_key_arn"] = {"type": "str"}
argument_spec["package_type"] = {"type": "str", "choices": ["Image", "Zip"]}
+ argument_spec["code_signing_config_arn"] = {"type": "str"}
+ argument_spec["environment"] = {
+ "type": "dict",
+ "options": {"variables": {"type": "dict"}},
+ }
+ argument_spec["ephemeral_storage"] = {
+ "type": "dict",
+ "options": {"size": {"type": "int"}},
+ }
+ argument_spec["layers"] = {"type": "list", "elements": "str"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["architectures"] = {
+ "type": "list",
+ "elements": "str",
+ "choices": ["arm64", "x86_64"],
+ }
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -389,16 +389,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
- ["state", "present", ["function_name", "role", "code"], True],
+ ["state", "present", ["role", "code", "function_name"], True],
["state", "absent", ["function_name"], True],
["state", "get", ["function_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -435,7 +440,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -443,22 +448,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["function_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["read", "create", "update", "list", "delete"]
+
state = module.params.get("state")
- identifier = module.params.get("function_name")
+ identifier = ["function_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/logs_log_group.py b/plugins/modules/logs_log_group.py
index 91efba78..65a39457 100644
--- a/plugins/modules/logs_log_group.py
+++ b/plugins/modules/logs_log_group.py
@@ -14,8 +14,18 @@
DOCUMENTATION = r"""
module: logs_log_group
short_description: Create and manage log groups
-description: Create and manage log groups (list, create, update, describe, delete).
+description:
+- Create and manage log groups.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
kms_key_id:
description:
- The Amazon Resource Name (ARN) of the CMK to use when encrypting log data.
@@ -30,7 +40,6 @@
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
retention_in_days:
choices:
@@ -78,7 +87,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -92,18 +100,51 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
"""
EXAMPLES = r"""
+- name: Create log group
+ amazon.cloud.logs_log_group:
+ state: present
+ log_group_name: '{{ log_group_name }}'
+ retention_in_days: 7
+ tags:
+ testkey: testvalue
+ wait: true
+ register: output
+
+- name: Describe log group
+ amazon.cloud.logs_log_group:
+ state: describe
+ log_group_name: '{{ log_group_name }}'
+ register: output
+
+- name: Update log group
+ amazon.cloud.logs_log_group:
+ state: present
+ log_group_name: '{{ log_group_name }}'
+ tags:
+ anotherkey: anothervalue
+ purge_tags: false
+ wait: true
+ register: output
+
+- name: Delete log group
+ amazon.cloud.logs_log_group:
+ state: absent
+ log_group_name: '{{ log_group_name }}'
+ register: output
"""
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -163,11 +204,7 @@ def main():
3653,
],
}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -175,16 +212,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
["state", "present", ["log_group_name"], True],
["state", "absent", ["log_group_name"], True],
["state", "get", ["log_group_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -201,7 +243,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -209,22 +251,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["log_group_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("log_group_name")
+ identifier = ["log_group_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/logs_query_definition.py b/plugins/modules/logs_query_definition.py
index 8bd6d034..5006e938 100644
--- a/plugins/modules/logs_query_definition.py
+++ b/plugins/modules/logs_query_definition.py
@@ -14,27 +14,34 @@
DOCUMENTATION = r"""
module: logs_query_definition
short_description: Create and manage query definitions
-description: Creates and manage query definitions for CloudWatch Logs Insights (list,
- create, update, describe, delete).
+description:
+- Creates and manage query definitions for CloudWatch Logs Insights.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
log_group_names:
description:
- - I(log_group) name
+ - I(log_group) name.
elements: str
type: list
name:
description:
- - A name for the saved query definition
- required: true
+ - A name for the saved query definition.
type: str
query_definition_id:
description:
- - Unique identifier of a query definition
+ - Unique identifier of a query definition.
type: str
query_string:
description:
- - The query string to use for this definition
- required: true
+ - The query string to use for this definition.
type: str
state:
choices:
@@ -64,7 +71,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -75,7 +81,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -111,8 +120,8 @@ def main():
),
)
- argument_spec["name"] = {"type": "str", "required": True}
- argument_spec["query_string"] = {"type": "str", "required": True}
+ argument_spec["name"] = {"type": "str"}
+ argument_spec["query_string"] = {"type": "str"}
argument_spec["log_group_names"] = {"type": "list", "elements": "str"}
argument_spec["query_definition_id"] = {"type": "str"}
argument_spec["state"] = {
@@ -122,15 +131,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["name", "query_string"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ ["state", "present", ["query_definition_id", "name", "query_string"], True],
+ ["state", "absent", ["query_definition_id"], True],
+ ["state", "get", ["query_definition_id"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -147,30 +161,40 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
# Ignore createOnlyProperties that can be set only during resource creation
- create_only_params = None
+ create_only_params = {}
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
state = module.params.get("state")
- identifier = module.params.get("query_definition_id")
+ identifier = ["query_definition_id"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/logs_resource_policy.py b/plugins/modules/logs_resource_policy.py
index 0ac6df68..addc9a9c 100644
--- a/plugins/modules/logs_resource_policy.py
+++ b/plugins/modules/logs_resource_policy.py
@@ -14,18 +14,26 @@
DOCUMENTATION = r"""
module: logs_resource_policy
short_description: Create and manage resource policies
-description: Creates and manage resource policies that allows other AWS services to
- put log events to the account (list, create, update, describe, delete).
+description:
+- Creates and manage resource policies that allows other AWS services to put log events
+ to the account.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
policy_document:
description:
- - The policy document
- required: true
+ - The policy document.
type: str
policy_name:
description:
- - A name for resource policy
- required: true
+ - A name for resource policy.
type: str
state:
choices:
@@ -55,7 +63,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -66,7 +73,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -102,8 +112,8 @@ def main():
),
)
- argument_spec["policy_name"] = {"type": "str", "required": True}
- argument_spec["policy_document"] = {"type": "str", "required": True}
+ argument_spec["policy_name"] = {"type": "str"}
+ argument_spec["policy_document"] = {"type": "str"}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -111,15 +121,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
["state", "present", ["policy_document", "policy_name"], True],
["state", "absent", ["policy_name"], True],
["state", "get", ["policy_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -134,7 +149,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -142,22 +157,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["policy_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("policy_name")
+ identifier = ["policy_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/rdsdb_proxy.py b/plugins/modules/rds_db_proxy.py
similarity index 80%
rename from plugins/modules/rdsdb_proxy.py
rename to plugins/modules/rds_db_proxy.py
index c0bd5602..fc7057bf 100644
--- a/plugins/modules/rdsdb_proxy.py
+++ b/plugins/modules/rds_db_proxy.py
@@ -12,15 +12,15 @@
DOCUMENTATION = r"""
-module: rdsdb_proxy
+module: rds_db_proxy
short_description: Create and manage DB proxies
-description: Creates and manage DB proxies (list, create, update, describe, delete).
+description:
+- Creates and manage DB proxies.
options:
auth:
description:
- The authorization mechanism that the proxy uses.
elements: dict
- required: true
suboptions:
auth_scheme:
choices:
@@ -59,7 +59,6 @@
- The identifier for the proxy.
- This name must be unique for all proxies owned by your AWS account in the
specified AWS Region.
- required: true
type: str
debug_logging:
description:
@@ -72,8 +71,16 @@
- POSTGRESQL
description:
- The kinds of databases that the proxy can connect to.
- required: true
type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
idle_client_timeout:
description:
- The number of seconds that a connection to the proxy can be inactive before
@@ -83,7 +90,6 @@
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
require_tls:
description:
@@ -94,7 +100,6 @@
description:
- The Amazon Resource Name (ARN) of the IAM role that the proxy uses to access
secrets in AWS Secrets Manager.
- required: true
type: str
state:
choices:
@@ -118,7 +123,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
vpc_security_group_ids:
description:
@@ -129,7 +133,6 @@
description:
- VPC subnet IDs to associate with the new proxy.
elements: str
- required: true
type: list
wait:
default: false
@@ -143,7 +146,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -154,7 +156,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -200,29 +205,16 @@ def main():
"secret_arn": {"type": "str"},
"user_name": {"type": "str"},
},
- "required": True,
}
- argument_spec["db_proxy_name"] = {"type": "str", "required": True}
+ argument_spec["db_proxy_name"] = {"type": "str"}
argument_spec["debug_logging"] = {"type": "bool"}
- argument_spec["engine_family"] = {
- "type": "str",
- "choices": ["MYSQL", "POSTGRESQL"],
- "required": True,
- }
+ argument_spec["engine_family"] = {"type": "str", "choices": ["MYSQL", "POSTGRESQL"]}
argument_spec["idle_client_timeout"] = {"type": "int"}
argument_spec["require_tls"] = {"type": "bool"}
- argument_spec["role_arn"] = {"type": "str", "required": True}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["role_arn"] = {"type": "str"}
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["vpc_security_group_ids"] = {"type": "list", "elements": "str"}
- argument_spec["vpc_subnet_ids"] = {
- "type": "list",
- "elements": "str",
- "required": True,
- }
+ argument_spec["vpc_subnet_ids"] = {"type": "list", "elements": "str"}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -230,21 +222,26 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
[
"state",
"present",
- ["role_arn", "db_proxy_name", "engine_family", "auth", "vpc_subnet_ids"],
+ ["engine_family", "role_arn", "vpc_subnet_ids", "auth", "db_proxy_name"],
True,
],
["state", "absent", ["db_proxy_name"], True],
["state", "get", ["db_proxy_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -267,7 +264,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -275,22 +272,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["db_proxy_name", "engine_family", "vpc_subnet_ids"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("db_proxy_name")
+ identifier = ["db_proxy_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/rds_db_proxy_endpoint.py b/plugins/modules/rds_db_proxy_endpoint.py
new file mode 100644
index 00000000..974cc5e0
--- /dev/null
+++ b/plugins/modules/rds_db_proxy_endpoint.py
@@ -0,0 +1,251 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: rds_db_proxy_endpoint
+short_description: Creates and manages a DB proxy endpoint
+description:
+- Creates and manages a DB proxy endpoint.
+- You can use custom proxy endpoints to access a proxy through a different VPC than
+ the proxy's default VPC.
+options:
+ db_proxy_endpoint_name:
+ description:
+ - The identifier for the DB proxy endpoint.
+ - This name must be unique for all DB proxy endpoints owned by your AWS account
+ in the specified AWS Region.
+ type: str
+ db_proxy_name:
+ description:
+ - The identifier for the proxy.
+ - This name must be unique for all proxies owned by your AWS account in the
+ specified AWS Region.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ purge_tags:
+ default: true
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ tags:
+ aliases:
+ - resource_tags
+ description:
+ - A dict of tags to apply to the resource.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ target_role:
+ choices:
+ - READ_ONLY
+ - READ_WRITE
+ description:
+ - A value that indicates whether the DB proxy endpoint can be used for read/write
+ or read-only operations.
+ type: str
+ vpc_security_group_ids:
+ description:
+ - VPC security group IDs to associate with the new DB proxy endpoint.
+ elements: str
+ type: list
+ vpc_subnet_ids:
+ description:
+ - VPC subnet IDs to associate with the new DB proxy endpoint.
+ elements: str
+ type: list
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["db_proxy_endpoint_name"] = {"type": "str"}
+ argument_spec["db_proxy_name"] = {"type": "str"}
+ argument_spec["vpc_security_group_ids"] = {"type": "list", "elements": "str"}
+ argument_spec["vpc_subnet_ids"] = {"type": "list", "elements": "str"}
+ argument_spec["target_role"] = {
+ "type": "str",
+ "choices": ["READ_ONLY", "READ_WRITE"],
+ }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
+
+ required_if = [
+ [
+ "state",
+ "present",
+ ["db_proxy_endpoint_name", "vpc_subnet_ids", "db_proxy_name"],
+ True,
+ ],
+ ["state", "absent", ["db_proxy_endpoint_name"], True],
+ ["state", "get", ["db_proxy_endpoint_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::RDS::DBProxyEndpoint"
+
+ params = {}
+
+ params["db_proxy_endpoint_name"] = module.params.get("db_proxy_endpoint_name")
+ params["db_proxy_name"] = module.params.get("db_proxy_name")
+ params["tags"] = module.params.get("tags")
+ params["target_role"] = module.params.get("target_role")
+ params["vpc_security_group_ids"] = module.params.get("vpc_security_group_ids")
+ params["vpc_subnet_ids"] = module.params.get("vpc_subnet_ids")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = [
+ "db_proxy_name",
+ "db_proxy_endpoint_name",
+ "vpc_subnet_ids",
+ "target_role",
+ ]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["db_proxy_endpoint_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/redshift_cluster.py b/plugins/modules/redshift_cluster.py
index 44e8cbef..6a70b581 100644
--- a/plugins/modules/redshift_cluster.py
+++ b/plugins/modules/redshift_cluster.py
@@ -14,13 +14,14 @@
DOCUMENTATION = r"""
module: redshift_cluster
short_description: Create and manage clusters
-description: Creates and manage clusters (list, create, update, describe, delete).
+description:
+- Creates and manage clusters.
options:
allow_version_upgrade:
description:
- Major version upgrades can be applied during the maintenance window to the
Amazon Redshift engine that is running on the cluster.
- - Default value is True
+ - Default value is True.
type: bool
aqua_configuration_status:
description:
@@ -36,14 +37,14 @@
description:
- The number of days that automated snapshots are retained.
- If the value is 0, automated snapshots are disabled.
- - Default value is 1
+ - Default value is 1.
type: int
availability_zone:
description:
- - The C(EC2) Availability Zone (AZ) in which you want Amazon Redshift to provision
+ - The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision
the cluster.
- 'Default: A random, system-chosen Availability Zone in the region that is
- specified by the endpoint'
+ specified by the endpoint.'
type: str
availability_zone_relocation:
description:
@@ -52,7 +53,7 @@
type: bool
availability_zone_relocation_status:
description:
- - The availability zone relocation status of the cluster
+ - The availability zone relocation status of the cluster.
type: str
classic:
description:
@@ -68,7 +69,7 @@
operations such as deleting or modifying.
- All alphabetical characters must be lower case, no hypens at the end, no
two consecutive hyphens.
- - Cluster name should be unique for all clusters within an AWS account
+ - Cluster name should be unique for all clusters within an AWS account.
type: str
cluster_parameter_group_name:
description:
@@ -86,10 +87,8 @@
cluster_type:
description:
- The type of the cluster.
- - When cluster type is specified as single-node, the I(number_of_nodes) parameter
- is not required and if multi-node, the I(number_of_nodes) parameter is
- required
- required: true
+ - When cluster type is specified as single-node, the NumberOfNodes parameter
+ is not required and if multi-node, the NumberOfNodes parameter is required.
type: str
cluster_version:
description:
@@ -101,7 +100,6 @@
- The name of the first database to be created when the cluster is created.
- To create additional databases after the cluster is created, connect to
the cluster with a SQL client and use SQL commands to create a database.
- required: true
type: str
defer_maintenance:
description:
@@ -127,7 +125,7 @@
- The destination AWS Region that you want to copy snapshots to.
- 'Constraints: Must be the name of a valid AWS Region.'
- For more information, see Regions and Endpoints in the Amazon Web Services
- ) General Reference
+ ) General Reference.
type: str
elastic_ip:
description:
@@ -150,12 +148,21 @@
in a VPC. For more information, see Enhanced VPC Routing in the Amazon
Redshift Cluster Management Guide.
- If this option is true , enhanced VPC routing is enabled.
- - 'Default: false'
+ - 'Default: false.'
+ type: bool
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
type: bool
hsm_client_certificate_identifier:
description:
- Specifies the name of the HSM client certificate the Amazon Redshift cluster
- uses to retrieve the data encryption keys stored in an HSM
+ uses to retrieve the data encryption keys stored in an HSM.
type: str
hsm_configuration_identifier:
description:
@@ -167,7 +174,7 @@
- A list of AWS Identity and Access Management (IAM) roles that can be used
by the cluster to access other AWS services.
- You must supply the IAM roles in their Amazon Resource Name (ARN) format.
- - You can supply up to 10 IAM roles in a single request
+ - You can supply up to 10 IAM roles in a single request.
elements: str
type: list
kms_key_id:
@@ -182,7 +189,6 @@
bucket_name:
description:
- Not Provived.
- required: true
type: str
s3_key_prefix:
description:
@@ -193,8 +199,8 @@
description:
- The name for the maintenance track that you want to assign for the cluster.
- This name change is asynchronous.
- - The new track name stays in the I(pending_modified_values) for the cluster
- until the next maintenance window.
+ - The new track name stays in the PendingModifiedValues for the cluster until
+ the next maintenance window.
- When the maintenance track changes, the cluster is switched to the latest
cluster release available for the maintenance track.
- At this point, the maintenance track name is applied.
@@ -213,27 +219,24 @@
- Password must be between 8 and 64 characters in length, should have at least
one uppercase letter.Must contain at least one lowercase letter.Must contain
one number.Can be any printable ASCII character.
- required: true
type: str
master_username:
description:
- The user name associated with the master user account for the cluster that
is being created.
- The user name cant be PUBLIC and first character must be a letter.
- required: true
type: str
node_type:
description:
- 'The node type to be provisioned for the cluster.Valid Values: ds2.xlarge
| ds2.8xlarge | dc1.large | dc1.8xlarge | dc2.large | dc2.8xlarge | ra3.4xlarge
- | ra3.16xlarge'
- required: true
+ | ra3.16xlarge.'
type: str
number_of_nodes:
description:
- The number of compute nodes in the cluster.
- - This parameter is required when the I(cluster_type) parameter is specified
- as multi-node.
+ - This parameter is required when the ClusterType parameter is specified as
+ multi-node.
type: int
owner_account:
description:
@@ -252,17 +255,16 @@
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
resource_action:
description:
- The Redshift operation to be performed.
- - Resource Action supports pause-cluster, resume-cluster I(apis)
+ - Resource Action supports pause-cluster, resume-cluster APIs.
type: str
revision_target:
description:
- The identifier of the database revision.
- - You can retrieve this value from the response to the I(describe_cluster_db_revisions)
+ - You can retrieve this value from the response to the DescribeClusterDbRevisions
request.
type: str
rotate_encryption_key:
@@ -320,7 +322,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
vpc_security_group_ids:
description:
@@ -340,7 +341,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -351,7 +351,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -388,17 +391,17 @@ def main():
)
argument_spec["cluster_identifier"] = {"type": "str"}
- argument_spec["master_username"] = {"type": "str", "required": True}
- argument_spec["master_user_password"] = {"type": "str", "required": True}
- argument_spec["node_type"] = {"type": "str", "required": True}
+ argument_spec["master_username"] = {"type": "str"}
+ argument_spec["master_user_password"] = {"type": "str"}
+ argument_spec["node_type"] = {"type": "str"}
argument_spec["allow_version_upgrade"] = {"type": "bool"}
argument_spec["automated_snapshot_retention_period"] = {"type": "int"}
argument_spec["availability_zone"] = {"type": "str"}
argument_spec["cluster_parameter_group_name"] = {"type": "str"}
- argument_spec["cluster_type"] = {"type": "str", "required": True}
+ argument_spec["cluster_type"] = {"type": "str"}
argument_spec["cluster_version"] = {"type": "str"}
argument_spec["cluster_subnet_group_name"] = {"type": "str"}
- argument_spec["db_name"] = {"type": "str", "required": True}
+ argument_spec["db_name"] = {"type": "str"}
argument_spec["elastic_ip"] = {"type": "str"}
argument_spec["encrypted"] = {"type": "bool"}
argument_spec["hsm_client_certificate_identifier"] = {"type": "str"}
@@ -409,21 +412,14 @@ def main():
argument_spec["publicly_accessible"] = {"type": "bool"}
argument_spec["cluster_security_groups"] = {"type": "list", "elements": "str"}
argument_spec["iam_roles"] = {"type": "list", "elements": "str"}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["vpc_security_group_ids"] = {"type": "list", "elements": "str"}
argument_spec["snapshot_cluster_identifier"] = {"type": "str"}
argument_spec["snapshot_identifier"] = {"type": "str"}
argument_spec["owner_account"] = {"type": "str"}
argument_spec["logging_properties"] = {
"type": "dict",
- "options": {
- "bucket_name": {"type": "str", "required": True},
- "s3_key_prefix": {"type": "str"},
- },
+ "options": {"bucket_name": {"type": "str"}, "s3_key_prefix": {"type": "str"}},
}
argument_spec["endpoint"] = {"type": "dict", "options": {}}
argument_spec["destination_region"] = {"type": "str"}
@@ -451,28 +447,33 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
[
"state",
"present",
[
- "master_user_password",
- "db_name",
"master_username",
- "node_type",
"cluster_identifier",
+ "db_name",
+ "node_type",
"cluster_type",
+ "master_user_password",
],
True,
],
["state", "absent", ["cluster_identifier"], True],
["state", "get", ["cluster_identifier"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -558,7 +559,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -574,22 +575,32 @@ def main():
"master_username",
]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("cluster_identifier")
+ identifier = ["cluster_identifier"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/redshift_endpoint_access.py b/plugins/modules/redshift_endpoint_access.py
new file mode 100644
index 00000000..3156b032
--- /dev/null
+++ b/plugins/modules/redshift_endpoint_access.py
@@ -0,0 +1,232 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: redshift_endpoint_access
+short_description: Creates and manages Redshift-managed VPC endpoint
+description:
+- Creates and manages a Redshift-managed VPC endpoint.
+options:
+ cluster_identifier:
+ description:
+ - A unique identifier for the cluster.
+ - You use this identifier to refer to the cluster for any subsequent cluster
+ operations such as deleting or modifying.
+ - All alphabetical characters must be lower case, no hypens at the end, no
+ two consecutive hyphens.
+ - Cluster name should be unique for all clusters within an AWS account.
+ type: str
+ endpoint_name:
+ description:
+ - The name of the endpoint.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ resource_owner:
+ description:
+ - The AWS account ID of the owner of the cluster.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ subnet_group_name:
+ description:
+ - The subnet group name where Amazon Redshift chooses to deploy the endpoint.
+ type: str
+ vpc_security_group_ids:
+ description:
+ - A list of vpc security group ids to apply to the created endpoint access.
+ elements: str
+ type: list
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["cluster_identifier"] = {"type": "str"}
+ argument_spec["resource_owner"] = {"type": "str"}
+ argument_spec["endpoint_name"] = {"type": "str"}
+ argument_spec["subnet_group_name"] = {"type": "str"}
+ argument_spec["vpc_security_group_ids"] = {"type": "list", "elements": "str"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ [
+ "state",
+ "present",
+ [
+ "vpc_security_group_ids",
+ "subnet_group_name",
+ "endpoint_name",
+ "cluster_identifier",
+ ],
+ True,
+ ],
+ ["state", "absent", ["endpoint_name"], True],
+ ["state", "get", ["endpoint_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::Redshift::EndpointAccess"
+
+ params = {}
+
+ params["cluster_identifier"] = module.params.get("cluster_identifier")
+ params["endpoint_name"] = module.params.get("endpoint_name")
+ params["resource_owner"] = module.params.get("resource_owner")
+ params["subnet_group_name"] = module.params.get("subnet_group_name")
+ params["vpc_security_group_ids"] = module.params.get("vpc_security_group_ids")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = [
+ "endpoint_name",
+ "cluster_identifier",
+ "resource_owner",
+ "subnet_group_name",
+ ]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["endpoint_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/redshift_endpoint_authorization.py b/plugins/modules/redshift_endpoint_authorization.py
new file mode 100644
index 00000000..f340926a
--- /dev/null
+++ b/plugins/modules/redshift_endpoint_authorization.py
@@ -0,0 +1,220 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: redshift_endpoint_authorization
+short_description: Describes an endpoint authorization for authorizing Redshift-managed
+ VPC endpoint access to a cluster across AWS accounts.
+description:
+- Describes an endpoint authorization for authorizing Redshift-managed VPC endpoint
+ access to a cluster across AWS accounts.
+options:
+ account:
+ description:
+ - The target AWS account ID to grant or revoke access for.
+ type: str
+ cluster_identifier:
+ description:
+ - The cluster identifier.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ identifier:
+ description:
+ - For compound primary identifiers, to specify the primary identifier as a
+ string, list each in the order that they are specified in the identifier
+ list definition, separated by '|'.
+ - For more details, visit U(https://docs.aws.amazon.com/cloudcontrolapi/latest/userguide/resource-identifier.html).
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ vpc_ids:
+ description:
+ - The virtual private cloud (VPC) identifiers to grant or revoke access to.
+ elements: str
+ type: list
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["cluster_identifier"] = {"type": "str"}
+ argument_spec["account"] = {"type": "str"}
+ argument_spec["vpc_ids"] = {"type": "list", "elements": "str"}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["identifier"] = {"type": "str"}
+
+ required_if = [
+ ["state", "list", ["cluster_identifier"], True],
+ ["state", "present", ["identifier", "account", "cluster_identifier"], True],
+ ["state", "absent", ["cluster_identifier", "account", "identifier"], True],
+ ["state", "get", ["cluster_identifier", "account", "identifier"], True],
+ ]
+ mutually_exclusive = [[("cluster_identifier", "account"), "identifier"]]
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::Redshift::EndpointAuthorization"
+
+ params = {}
+
+ params["account"] = module.params.get("account")
+ params["cluster_identifier"] = module.params.get("cluster_identifier")
+ params["identifier"] = module.params.get("identifier")
+ params["vpc_ids"] = module.params.get("vpc_ids")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["cluster_identifier", "account"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["cluster_identifier", "account"]
+ if (
+ state in ("present", "absent", "get", "describe")
+ and module.params.get("identifier") is None
+ ):
+ if not module.params.get("cluster_identifier") or not module.params.get(
+ "account"
+ ):
+ module.fail_json(f"You must specify both {*identifier, } identifiers.")
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/redshift_event_subscription.py b/plugins/modules/redshift_event_subscription.py
index c78cefb1..c24760f9 100644
--- a/plugins/modules/redshift_event_subscription.py
+++ b/plugins/modules/redshift_event_subscription.py
@@ -14,8 +14,8 @@
DOCUMENTATION = r"""
module: redshift_event_subscription
short_description: Create and manage Amazon Redshift event notification subscriptions
-description: Creates and manage Amazon Redshift event notification subscriptions (list,
- create, update, describe, delete).
+description:
+- Creates and manage Amazon Redshift event notification subscriptions.
options:
enabled:
description:
@@ -34,11 +34,19 @@
notification subscription.
elements: str
type: list
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
purge_tags:
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
severity:
choices:
@@ -86,8 +94,7 @@
type: str
subscription_name:
description:
- - The name of the Amazon Redshift event notification subscription
- required: true
+ - The name of the Amazon Redshift event notification subscription.
type: str
tags:
aliases:
@@ -95,7 +102,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
wait:
default: false
@@ -109,7 +115,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -120,7 +125,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -156,7 +164,7 @@ def main():
),
)
- argument_spec["subscription_name"] = {"type": "str", "required": True}
+ argument_spec["subscription_name"] = {"type": "str"}
argument_spec["sns_topic_arn"] = {"type": "str"}
argument_spec["source_type"] = {
"type": "str",
@@ -176,11 +184,7 @@ def main():
}
argument_spec["severity"] = {"type": "str", "choices": ["ERROR", "INFO"]}
argument_spec["enabled"] = {"type": "bool"}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -188,16 +192,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
["state", "present", ["subscription_name"], True],
["state", "absent", ["subscription_name"], True],
["state", "get", ["subscription_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -218,7 +227,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -226,22 +235,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["subscription_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("subscription_name")
+ identifier = ["subscription_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/redshift_scheduled_action.py b/plugins/modules/redshift_scheduled_action.py
new file mode 100644
index 00000000..2087e410
--- /dev/null
+++ b/plugins/modules/redshift_scheduled_action.py
@@ -0,0 +1,309 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: redshift_scheduled_action
+short_description: Creates and manages a scheduled action
+description:
+- Creates and manages a scheduled action.
+- A scheduled action contains a schedule and an Amazon Redshift API action.
+- For example, you can create a schedule of when to run the ResizeCluster API operation.
+options:
+ enable:
+ description:
+ - If true, the schedule is enabled.
+ - If false, the scheduled action does not trigger.
+ type: bool
+ end_time:
+ description:
+ - The end time in UTC of the scheduled action.
+ - After this time, the scheduled action does not trigger.
+ type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ iam_role:
+ description:
+ - The IAM role to assume to run the target action.
+ type: str
+ schedule:
+ description:
+ - The schedule in at( ) or cron( ) format.
+ type: str
+ scheduled_action_description:
+ description:
+ - The description of the scheduled action.
+ type: str
+ scheduled_action_name:
+ description:
+ - The name of the scheduled action.
+ - The name must be unique within an account.
+ type: str
+ start_time:
+ description:
+ - The start time in UTC of the scheduled action.
+ - Before this time, the scheduled action does not trigger.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ target_action:
+ description:
+ - A JSON format string of the Amazon Redshift API operation with input parameters.
+ suboptions:
+ pause_cluster:
+ description:
+ - Describes a pause cluster operation.
+ - For example, a scheduled action to run the I(pause_cluster) API
+ operation.
+ suboptions:
+ cluster_identifier:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ resize_cluster:
+ description:
+ - Describes a resize cluster operation.
+ - For example, a scheduled action to run the I(resize_cluster) API
+ operation.
+ suboptions:
+ classic:
+ description:
+ - Not Provived.
+ type: bool
+ cluster_identifier:
+ description:
+ - Not Provived.
+ type: str
+ cluster_type:
+ description:
+ - Not Provived.
+ type: str
+ node_type:
+ description:
+ - Not Provived.
+ type: str
+ number_of_nodes:
+ description:
+ - Not Provived.
+ type: int
+ type: dict
+ resume_cluster:
+ description:
+ - Describes a resume cluster operation.
+ - For example, a scheduled action to run the I(resume_cluster) API
+ operation.
+ suboptions:
+ cluster_identifier:
+ description:
+ - Not Provived.
+ type: str
+ type: dict
+ type: dict
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["scheduled_action_name"] = {"type": "str"}
+ argument_spec["target_action"] = {
+ "type": "dict",
+ "options": {
+ "resize_cluster": {
+ "type": "dict",
+ "options": {
+ "cluster_identifier": {"type": "str"},
+ "cluster_type": {"type": "str"},
+ "node_type": {"type": "str"},
+ "number_of_nodes": {"type": "int"},
+ "classic": {"type": "bool"},
+ },
+ },
+ "pause_cluster": {
+ "type": "dict",
+ "options": {"cluster_identifier": {"type": "str"}},
+ },
+ "resume_cluster": {
+ "type": "dict",
+ "options": {"cluster_identifier": {"type": "str"}},
+ },
+ },
+ }
+ argument_spec["schedule"] = {"type": "str"}
+ argument_spec["iam_role"] = {"type": "str"}
+ argument_spec["scheduled_action_description"] = {"type": "str"}
+ argument_spec["start_time"] = {"type": "str"}
+ argument_spec["end_time"] = {"type": "str"}
+ argument_spec["enable"] = {"type": "bool"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ ["state", "present", ["scheduled_action_name"], True],
+ ["state", "absent", ["scheduled_action_name"], True],
+ ["state", "get", ["scheduled_action_name"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::Redshift::ScheduledAction"
+
+ params = {}
+
+ params["enable"] = module.params.get("enable")
+ params["end_time"] = module.params.get("end_time")
+ params["iam_role"] = module.params.get("iam_role")
+ params["schedule"] = module.params.get("schedule")
+ params["scheduled_action_description"] = module.params.get(
+ "scheduled_action_description"
+ )
+ params["scheduled_action_name"] = module.params.get("scheduled_action_name")
+ params["start_time"] = module.params.get("start_time")
+ params["target_action"] = module.params.get("target_action")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["scheduled_action_name"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["scheduled_action_name"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/route53_dnssec.py b/plugins/modules/route53_dnssec.py
new file mode 100644
index 00000000..84278cf6
--- /dev/null
+++ b/plugins/modules/route53_dnssec.py
@@ -0,0 +1,187 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: route53_dnssec
+short_description: Is used to enable DNSSEC signing in a hosted zone
+description:
+- Is used to enable DNSSEC signing in a hosted zone.
+options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ hosted_zone_id:
+ description:
+ - The unique string (ID) used to identify a hosted zone.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["hosted_zone_id"] = {"type": "str"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+
+ required_if = [
+ ["state", "present", ["hosted_zone_id"], True],
+ ["state", "absent", ["hosted_zone_id"], True],
+ ["state", "get", ["hosted_zone_id"], True],
+ ]
+ mutually_exclusive = []
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::Route53::DNSSEC"
+
+ params = {}
+
+ params["hosted_zone_id"] = module.params.get("hosted_zone_id")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["hosted_zone_id"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["hosted_zone_id"]
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/route53_key_signing_key.py b/plugins/modules/route53_key_signing_key.py
new file mode 100644
index 00000000..25aba7bd
--- /dev/null
+++ b/plugins/modules/route53_key_signing_key.py
@@ -0,0 +1,241 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2022, Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+# template: header.j2
+# This module is autogenerated by amazon_cloud_code_generator.
+# See: https://github.com/ansible-collections/amazon_cloud_code_generator
+
+from __future__ import absolute_import, division, print_function
+
+__metaclass__ = type
+
+
+DOCUMENTATION = r"""
+module: route53_key_signing_key
+short_description: Creates a new key-signing key (KSK) in a hosted zone
+description:
+- Creates a new key-signing key (KSK) in a hosted zone.
+options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
+ hosted_zone_id:
+ description:
+ - The unique string (ID) used to identify a hosted zone.
+ type: str
+ identifier:
+ description:
+ - For compound primary identifiers, to specify the primary identifier as a
+ string, list each in the order that they are specified in the identifier
+ list definition, separated by '|'.
+ - For more details, visit U(https://docs.aws.amazon.com/cloudcontrolapi/latest/userguide/resource-identifier.html).
+ type: str
+ key_management_service_arn:
+ description:
+ - The Amazon resource name (ARN) for a customer managed key (CMK) in AWS Key
+ Management Service (KMS). The KeyManagementServiceArn must be unique for
+ each key signing key (KSK) in a single hosted zone.
+ type: str
+ name:
+ description:
+ - An alphanumeric string used to identify a key signing key (KSK). Name must
+ be unique for each key signing key in the same hosted zone.
+ type: str
+ state:
+ choices:
+ - present
+ - absent
+ - list
+ - describe
+ - get
+ default: present
+ description:
+ - Goal state for resource.
+ - I(state=present) creates the resource if it doesn't exist, or updates to
+ the provided state if the resource already exists.
+ - I(state=absent) ensures an existing instance is deleted.
+ - I(state=list) get all the existing resources.
+ - I(state=describe) or I(state=get) retrieves information on an existing resource.
+ type: str
+ status:
+ choices:
+ - ACTIVE
+ - INACTIVE
+ description:
+ - A string specifying the initial status of the key signing key (KSK). You
+ can set the value to ACTIVE or INACTIVE.
+ type: str
+ wait:
+ default: false
+ description:
+ - Wait for operation to complete before returning.
+ type: bool
+ wait_timeout:
+ default: 320
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ type: int
+author: Ansible Cloud Team (@ansible-collections)
+version_added: 0.2.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+"""
+
+EXAMPLES = r"""
+"""
+
+RETURN = r"""
+result:
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
+ returned: always
+ type: complex
+ contains:
+ identifier:
+ description: The unique identifier of the resource.
+ type: str
+ properties:
+ description: The resource properties.
+ type: dict
+"""
+
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ CloudControlResource,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ snake_dict_to_camel_dict,
+)
+from ansible_collections.amazon.cloud.plugins.module_utils.core import (
+ ansible_dict_to_boto3_tag_list,
+)
+
+
+def main():
+
+ argument_spec = dict(
+ state=dict(
+ type="str",
+ choices=["present", "absent", "list", "describe", "get"],
+ default="present",
+ ),
+ )
+
+ argument_spec["hosted_zone_id"] = {"type": "str"}
+ argument_spec["status"] = {"type": "str", "choices": ["ACTIVE", "INACTIVE"]}
+ argument_spec["name"] = {"type": "str"}
+ argument_spec["key_management_service_arn"] = {"type": "str"}
+ argument_spec["state"] = {
+ "type": "str",
+ "choices": ["present", "absent", "list", "describe", "get"],
+ "default": "present",
+ }
+ argument_spec["wait"] = {"type": "bool", "default": False}
+ argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["identifier"] = {"type": "str"}
+
+ required_if = [
+ ["state", "list", ["hosted_zone_id"], True],
+ [
+ "state",
+ "present",
+ [
+ "name",
+ "identifier",
+ "status",
+ "key_management_service_arn",
+ "hosted_zone_id",
+ ],
+ True,
+ ],
+ ["state", "absent", ["hosted_zone_id", "name", "identifier"], True],
+ ["state", "get", ["hosted_zone_id", "name", "identifier"], True],
+ ]
+ mutually_exclusive = [[("hosted_zone_id", "name"), "identifier"]]
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
+ )
+ cloud = CloudControlResource(module)
+
+ type_name = "AWS::Route53::KeySigningKey"
+
+ params = {}
+
+ params["hosted_zone_id"] = module.params.get("hosted_zone_id")
+ params["identifier"] = module.params.get("identifier")
+ params["key_management_service_arn"] = module.params.get(
+ "key_management_service_arn"
+ )
+ params["name"] = module.params.get("name")
+ params["status"] = module.params.get("status")
+
+ # The DesiredState we pass to AWS must be a JSONArray of non-null values
+ _params_to_set = {k: v for k, v in params.items() if v is not None}
+
+ # Only if resource is taggable
+ if module.params.get("tags") is not None:
+ _params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
+
+ params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
+
+ # Ignore createOnlyProperties that can be set only during resource creation
+ create_only_params = ["hosted_zone_id", "name", "key_management_service_arn"]
+
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
+ state = module.params.get("state")
+ identifier = ["hosted_zone_id", "name"]
+ if (
+ state in ("present", "absent", "get", "describe")
+ and module.params.get("identifier") is None
+ ):
+ if not module.params.get("hosted_zone_id") or not module.params.get("name"):
+ module.fail_json(f"You must specify both {*identifier, } identifiers.")
+
+ results = {"changed": False, "result": {}}
+
+ if state == "list":
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
+
+ if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
+ results["result"] = cloud.get_resource(type_name, identifier)
+
+ if state == "present":
+ results = cloud.present(
+ type_name, identifier, params_to_set, create_only_params
+ )
+
+ if state == "absent":
+ results["changed"] |= cloud.absent(type_name, identifier)
+
+ module.exit_json(**results)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/plugins/modules/s3_access_point.py b/plugins/modules/s3_access_point.py
index 3856b565..222b0bed 100644
--- a/plugins/modules/s3_access_point.py
+++ b/plugins/modules/s3_access_point.py
@@ -14,14 +14,22 @@
DOCUMENTATION = r"""
module: s3_access_point
short_description: Create and manage Amazon S3 access points to use to access S3 buckets
-description: Create and manage Amazon S3 access points to use to access S3 buckets
- (list, create, update, describe, delete).
+description:
+- Create and manage Amazon S3 access points to use to access S3 buckets.
options:
bucket:
description:
- The name of the bucket that you want to associate this Access Point with.
- required: true
type: str
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
name:
description:
- The name you want to assign to this Access Point.
@@ -46,8 +54,8 @@
type: dict
public_access_block_configuration:
description:
- - The I(public_access_block) configuration that you want to apply to this
- Access Point.
+ - The PublicAccessBlock configuration that you want to apply to this Access
+ Point.
- You can enable the configuration options in any combination.
- For more information about when Amazon S3 considers a bucket or object public,
see U(https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html#access-control-block-public-access-policy-status)
@@ -57,7 +65,7 @@
description:
- Specifies whether Amazon S3 should block public access control lists
(ACLs) for buckets in this account.
- - 'Setting this element to C(True) causes the following behavior:'
+ - Setting this element to C(True) causes the following behavior:.
- '- PUT Bucket acl and PUT Object acl calls fail if the specified
ACL is public.'
- '- PUT Object calls fail if the request includes a public ACL.'
@@ -135,7 +143,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -146,7 +153,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -183,7 +193,7 @@ def main():
)
argument_spec["name"] = {"type": "str"}
- argument_spec["bucket"] = {"type": "str", "required": True}
+ argument_spec["bucket"] = {"type": "str"}
argument_spec["vpc_configuration"] = {
"type": "dict",
"options": {"vpc_id": {"type": "str"}},
@@ -209,15 +219,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["bucket"], True],
- ["state", "absent", [], True],
- ["state", "get", [], True],
+ ["state", "present", ["name", "bucket"], True],
+ ["state", "absent", ["name"], True],
+ ["state", "get", ["name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -238,7 +253,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -251,22 +266,32 @@ def main():
"public_access_block_configuration",
]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("name")
+ identifier = ["name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/s3_bucket.py b/plugins/modules/s3_bucket.py
index a3da0b8b..5d916d24 100644
--- a/plugins/modules/s3_bucket.py
+++ b/plugins/modules/s3_bucket.py
@@ -14,7 +14,8 @@
DOCUMENTATION = r"""
module: s3_bucket
short_description: Create and manage S3 buckets
-description: Create and manage S3 buckets (list, create, update, describe, delete).
+description:
+- Create and manage S3 buckets.
options:
accelerate_configuration:
description:
@@ -26,7 +27,6 @@
- Suspended
description:
- Configures the transfer acceleration state for an Amazon S3 bucket.
- required: true
type: str
type: dict
access_control:
@@ -52,7 +52,6 @@
id:
description:
- The ID that identifies the analytics configuration.
- required: true
type: str
prefix:
description:
@@ -64,7 +63,6 @@
- Specifies data related to access patterns to be collected and made
available to analyze the tradeoffs between different storage classes
for an Amazon S3 bucket.
- required: true
suboptions:
data_export:
description:
@@ -76,7 +74,6 @@
- Specifies information about where to publish analysis
or configuration results for an Amazon S3 bucket
and S3 Replication Time Control (S3 RTC).
- required: true
suboptions:
bucket_account_id:
description:
@@ -119,12 +116,10 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: list
@@ -137,7 +132,6 @@
description:
- Specifies the default server-side encryption configuration.
elements: dict
- required: true
suboptions:
bucket_key_enabled:
description:
@@ -158,8 +152,8 @@
suboptions:
kms_master_key_id:
description:
- - I(kms_master_key)ID can only be used when you set
- the value of I(sse_algorithm) as aws:kms.
+ - KMSMasterKeyID can only be used when you set the
+ value of I(sse_algorithm) as aws:kms.
type: str
sse_algorithm:
choices:
@@ -167,7 +161,6 @@
- aws:kms
description:
- Not Provived.
- required: true
type: str
type: dict
type: list
@@ -205,14 +198,12 @@
description:
- An HTTP method that you allow the origin to execute.
elements: str
- required: true
type: list
allowed_origins:
description:
- One or more origins you want customers to be able to access
the bucket from.
elements: str
- required: true
type: list
exposed_headers:
description:
@@ -232,6 +223,15 @@
type: int
type: list
type: dict
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
intelligent_tiering_configurations:
description:
- Specifies the S3 Intelligent-Tiering configuration for an Amazon S3 bucket.
@@ -240,7 +240,6 @@
id:
description:
- The ID used to identify the S3 Intelligent-Tiering configuration.
- required: true
type: str
prefix:
description:
@@ -253,7 +252,6 @@
- Enabled
description:
- Specifies the status of the configuration.
- required: true
type: str
tag_filters:
description:
@@ -263,12 +261,10 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
tierings:
@@ -289,7 +285,6 @@
- See Storage class for automatically optimizing frequently
and infrequently accessed objects for a list of access
tiers in the S3 Intelligent-Tiering storage class.
- required: true
type: str
days:
description:
@@ -300,7 +295,6 @@
tier must be at least 90 days and Deep Archive Access
tier must be at least 180 days.
- The maximum can be up to 2 years (730 days).
- required: true
type: int
type: list
type: list
@@ -314,7 +308,6 @@
- Specifies information about where to publish analysis or configuration
results for an Amazon S3 bucket and S3 Replication Time Control
(S3 RTC).
- required: true
suboptions:
bucket_account_id:
description:
@@ -343,12 +336,10 @@
enabled:
description:
- Specifies whether the inventory is enabled or disabled.
- required: true
type: bool
id:
description:
- The ID used to identify the inventory configuration.
- required: true
type: str
included_object_versions:
choices:
@@ -356,7 +347,6 @@
- Current
description:
- Object versions to include in the inventory list.
- required: true
type: str
optional_fields:
choices:
@@ -387,7 +377,6 @@
- Weekly
description:
- Specifies the schedule for generating inventory results.
- required: true
type: str
type: list
lifecycle_configuration:
@@ -412,13 +401,12 @@
description:
- Specifies the number of days after which Amazon
S3 aborts an incomplete multipart upload.
- required: true
type: int
type: dict
expiration_date:
description:
- The date value in ISO 8601 format.
- - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
type: str
expiration_in_days:
description:
@@ -439,19 +427,18 @@
- If your bucket is versioning-enabled (or versioning is suspended),
you can set this action to request that Amazon S3 expire
noncurrent object versions at a specific period in the
- objects lifetime
+ objects lifetime.
suboptions:
newer_noncurrent_versions:
description:
- Specified the number of newer noncurrent and current
versions that must exists before performing the
- associated action
+ associated action.
type: int
noncurrent_days:
description:
- Specified the number of days an object is noncurrent
- before Amazon S3 can perform the associated action
- required: true
+ before Amazon S3 can perform the associated action.
type: int
type: dict
noncurrent_version_expiration_in_days:
@@ -475,7 +462,7 @@
description:
- Specified the number of newer noncurrent and current
versions that must exists before performing the
- associated action
+ associated action.
type: int
storage_class:
choices:
@@ -488,13 +475,11 @@
- STANDARD_IA
description:
- The class of storage used to store the object.
- required: true
type: str
transition_in_days:
description:
- Specifies the number of days an object is noncurrent
before Amazon S3 can perform the associated action.
- required: true
type: int
type: dict
noncurrent_version_transitions:
@@ -515,7 +500,7 @@
description:
- Specified the number of newer noncurrent and current
versions that must exists before performing the
- associated action
+ associated action.
type: int
storage_class:
choices:
@@ -528,13 +513,11 @@
- STANDARD_IA
description:
- The class of storage used to store the object.
- required: true
type: str
transition_in_days:
description:
- Specifies the number of days an object is noncurrent
before Amazon S3 can perform the associated action.
- required: true
type: int
type: list
object_size_greater_than:
@@ -555,7 +538,6 @@
- Enabled
description:
- Not Provived.
- required: true
type: str
tag_filters:
description:
@@ -566,18 +548,16 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
transition:
description:
- You must specify at least one of I(transition_date) and
- I(transition_in_days)
+ I(transition_in_days).
suboptions:
storage_class:
choices:
@@ -590,12 +570,11 @@
- STANDARD_IA
description:
- Not Provived.
- required: true
type: str
transition_date:
description:
- The date value in ISO 8601 format.
- - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
type: str
transition_in_days:
description:
@@ -605,7 +584,7 @@
transitions:
description:
- You must specify at least one of I(transition_date) and
- I(transition_in_days)
+ I(transition_in_days).
elements: dict
suboptions:
storage_class:
@@ -619,12 +598,11 @@
- STANDARD_IA
description:
- Not Provived.
- required: true
type: str
transition_date:
description:
- The date value in ISO 8601 format.
- - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ)
+ - The timezone is always UTC. (YYYY-MM-DDThh:mm:ssZ).
type: str
transition_in_days:
description:
@@ -663,7 +641,6 @@
id:
description:
- Not Provived.
- required: true
type: str
prefix:
description:
@@ -677,12 +654,10 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: list
@@ -713,7 +688,6 @@
description:
- The Amazon S3 bucket event for which to invoke the AWS Lambda
function.
- required: true
type: str
filter:
description:
@@ -725,7 +699,6 @@
description:
- A container for object key name prefix and suffix
filtering rules.
- required: true
suboptions:
rules:
description:
@@ -737,12 +710,10 @@
name:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: dict
@@ -751,7 +722,6 @@
description:
- The Amazon Resource Name (ARN) of the AWS Lambda function
that Amazon S3 invokes when the specified event type occurs.
- required: true
type: str
type: list
queue_configurations:
@@ -764,7 +734,6 @@
description:
- The Amazon S3 bucket event about which you want to publish
messages to Amazon SQS.
- required: true
type: str
filter:
description:
@@ -775,7 +744,6 @@
description:
- A container for object key name prefix and suffix
filtering rules.
- required: true
suboptions:
rules:
description:
@@ -787,12 +755,10 @@
name:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: dict
@@ -802,7 +768,6 @@
- The Amazon Resource Name (ARN) of the Amazon SQS queue to
which Amazon S3 publishes a message when it detects events
of the specified type.
- required: true
type: str
type: list
topic_configurations:
@@ -814,7 +779,6 @@
event:
description:
- The Amazon S3 bucket event about which to send notifications.
- required: true
type: str
filter:
description:
@@ -825,7 +789,6 @@
description:
- A container for object key name prefix and suffix
filtering rules.
- required: true
suboptions:
rules:
description:
@@ -837,12 +800,10 @@
name:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: dict
@@ -852,7 +813,6 @@
- The Amazon Resource Name (ARN) of the Amazon SNS topic to
which Amazon S3 publishes a message when it detects events
of the specified type.
- required: true
type: str
type: list
type: dict
@@ -904,7 +864,6 @@
description:
- Not Provived.
elements: dict
- required: true
suboptions:
object_ownership:
choices:
@@ -924,7 +883,7 @@
description:
- Specifies whether Amazon S3 should block public access control lists
(ACLs) for this bucket and objects in this bucket.
- - 'Setting this element to C(True) causes the following behavior:'
+ - Setting this element to C(True) causes the following behavior:.
- '- PUT Bucket acl and PUT Object acl calls fail if the specified
ACL is public.'
- '- PUT Object calls fail if the request includes a public ACL.'
@@ -965,7 +924,6 @@
default: true
description:
- Remove tags not listed in I(tags).
- required: false
type: bool
replication_configuration:
description:
@@ -978,7 +936,6 @@
description:
- The Amazon Resource Name (ARN) of the AWS Identity and Access Management
(IAM) role that Amazon S3 assumes when replicating objects.
- required: true
type: str
rules:
description:
@@ -1002,7 +959,6 @@
description:
- Specifies which Amazon S3 bucket to store replicated objects
in and their storage class.
- required: true
suboptions:
access_control_translation:
description:
@@ -1040,7 +996,6 @@
of the customer managed customer master
key (CMK) stored in AWS Key Management
Service (KMS) for the destination bucket.
- required: true
type: str
type: dict
metrics:
@@ -1054,7 +1009,6 @@
minutes:
description:
- Not Provived.
- required: true
type: int
type: dict
status:
@@ -1063,7 +1017,6 @@
- Enabled
description:
- Not Provived.
- required: true
type: str
type: dict
replication_time:
@@ -1076,17 +1029,14 @@
- Enabled
description:
- Not Provived.
- required: true
type: str
time:
description:
- Not Provived.
- required: true
suboptions:
minutes:
description:
- Not Provived.
- required: true
type: int
type: dict
type: dict
@@ -1126,12 +1076,10 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: list
type: dict
@@ -1147,12 +1095,10 @@
key:
description:
- Not Provived.
- required: true
type: str
value:
description:
- Not Provived.
- required: true
type: str
type: dict
type: dict
@@ -1186,7 +1132,6 @@
description:
- Specifies whether Amazon S3 replicates modifications
on replicas.
- required: true
type: str
type: dict
sse_kms_encrypted_objects:
@@ -1205,7 +1150,6 @@
created with server-side encryption using
a customer master key (CMK) stored in
AWS Key Management Service.
- required: true
type: str
type: dict
type: dict
@@ -1215,7 +1159,6 @@
- Enabled
description:
- Specifies whether the rule is enabled.
- required: true
type: str
type: list
type: dict
@@ -1241,7 +1184,6 @@
description:
- A dict of tags to apply to the resource.
- To remove all tags set I(tags={}) and I(purge_tags=true).
- required: false
type: dict
versioning_configuration:
description:
@@ -1286,7 +1228,6 @@
host_name:
description:
- Name of the host where requests are redirected.
- required: true
type: str
protocol:
choices:
@@ -1312,7 +1253,6 @@
code to return.Specifies how requests are redirected.
- In the event of an error, you can specify a different error
code to return.
- required: true
suboptions:
host_name:
description:
@@ -1338,14 +1278,14 @@
type: str
replace_key_with:
description:
- - The specific object key to use in the redirect request.d
+ - The specific object key to use in the redirect request.d.
type: str
type: dict
routing_rule_condition:
description:
- A container for describing a condition that must be met
for the specified redirect to apply.You must specify at
- least one of I(http_error_code_returned_equals) and I(key_prefix_equals)
+ least one of I(http_error_code_returned_equals) and I(key_prefix_equals).
suboptions:
http_error_code_returned_equals:
description:
@@ -1361,18 +1301,50 @@
type: dict
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
"""
EXAMPLES = r"""
+- name: Create S3 bucket
+ amazon.cloud.s3_bucket:
+ bucket_name: '{{ bucket_name }}'
+ state: present
+ register: output
+
+- name: Describe S3 bucket
+ amazon.cloud.s3_bucket:
+ state: describe
+ bucket_name: '{{ output.result.identifier }}'
+ register: _result
+
+- name: List S3 buckets
+ amazon.cloud.s3_bucket:
+ state: list
+ register: _result
+
+- name: Update S3 bucket public access block configuration and tags (diff=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: '{{ output.result.identifier }}'
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ tags:
+ mykey: myval
+ diff: true
+ register: _result
"""
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -1411,11 +1383,7 @@ def main():
argument_spec["accelerate_configuration"] = {
"type": "dict",
"options": {
- "acceleration_status": {
- "type": "str",
- "choices": ["Enabled", "Suspended"],
- "required": True,
- }
+ "acceleration_status": {"type": "str", "choices": ["Enabled", "Suspended"]}
},
}
argument_spec["access_control"] = {
@@ -1438,10 +1406,7 @@ def main():
"tag_filters": {
"type": "list",
"elements": "dict",
- "options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
- },
+ "options": {"value": {"type": "str"}, "key": {"type": "str"}},
},
"storage_class_analysis": {
"type": "dict",
@@ -1451,7 +1416,6 @@ def main():
"options": {
"destination": {
"type": "dict",
- "required": True,
"options": {
"bucket_arn": {"type": "str"},
"bucket_account_id": {"type": "str"},
@@ -1466,9 +1430,8 @@ def main():
},
}
},
- "required": True,
},
- "id": {"type": "str", "required": True},
+ "id": {"type": "str"},
"prefix": {"type": "str"},
},
}
@@ -1477,7 +1440,6 @@ def main():
"options": {
"server_side_encryption_configuration": {
"type": "list",
- "required": True,
"elements": "dict",
"options": {
"bucket_key_enabled": {"type": "bool"},
@@ -1488,7 +1450,6 @@ def main():
"sse_algorithm": {
"type": "str",
"choices": ["AES256", "aws:kms"],
- "required": True,
},
},
},
@@ -1507,15 +1468,10 @@ def main():
"allowed_headers": {"type": "list", "elements": "str"},
"allowed_methods": {
"type": "list",
- "required": True,
"elements": "str",
"choices": ["DELETE", "GET", "HEAD", "POST", "PUT"],
},
- "allowed_origins": {
- "type": "list",
- "required": True,
- "elements": "str",
- },
+ "allowed_origins": {"type": "list", "elements": "str"},
"exposed_headers": {"type": "list", "elements": "str"},
"id": {"type": "str"},
"max_age": {"type": "int"},
@@ -1527,20 +1483,13 @@ def main():
"type": "list",
"elements": "dict",
"options": {
- "id": {"type": "str", "required": True},
+ "id": {"type": "str"},
"prefix": {"type": "str"},
- "status": {
- "type": "str",
- "choices": ["Disabled", "Enabled"],
- "required": True,
- },
+ "status": {"type": "str", "choices": ["Disabled", "Enabled"]},
"tag_filters": {
"type": "list",
"elements": "dict",
- "options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
- },
+ "options": {"value": {"type": "str"}, "key": {"type": "str"}},
},
"tierings": {
"type": "list",
@@ -1549,9 +1498,8 @@ def main():
"access_tier": {
"type": "str",
"choices": ["ARCHIVE_ACCESS", "DEEP_ARCHIVE_ACCESS"],
- "required": True,
},
- "days": {"type": "int", "required": True},
+ "days": {"type": "int"},
},
},
},
@@ -1562,7 +1510,6 @@ def main():
"options": {
"destination": {
"type": "dict",
- "required": True,
"options": {
"bucket_arn": {"type": "str"},
"bucket_account_id": {"type": "str"},
@@ -1570,13 +1517,9 @@ def main():
"prefix": {"type": "str"},
},
},
- "enabled": {"type": "bool", "required": True},
- "id": {"type": "str", "required": True},
- "included_object_versions": {
- "type": "str",
- "choices": ["All", "Current"],
- "required": True,
- },
+ "enabled": {"type": "bool"},
+ "id": {"type": "str"},
+ "included_object_versions": {"type": "str", "choices": ["All", "Current"]},
"optional_fields": {
"type": "list",
"elements": "str",
@@ -1596,11 +1539,7 @@ def main():
],
},
"prefix": {"type": "str"},
- "schedule_frequency": {
- "type": "str",
- "choices": ["Daily", "Weekly"],
- "required": True,
- },
+ "schedule_frequency": {"type": "str", "choices": ["Daily", "Weekly"]},
},
}
argument_spec["lifecycle_configuration"] = {
@@ -1612,9 +1551,7 @@ def main():
"options": {
"abort_incomplete_multipart_upload": {
"type": "dict",
- "options": {
- "days_after_initiation": {"type": "int", "required": True}
- },
+ "options": {"days_after_initiation": {"type": "int"}},
},
"expiration_date": {"type": "str"},
"expiration_in_days": {"type": "int"},
@@ -1624,7 +1561,7 @@ def main():
"noncurrent_version_expiration": {
"type": "dict",
"options": {
- "noncurrent_days": {"type": "int", "required": True},
+ "noncurrent_days": {"type": "int"},
"newer_noncurrent_versions": {"type": "int"},
},
},
@@ -1642,9 +1579,8 @@ def main():
"ONEZONE_IA",
"STANDARD_IA",
],
- "required": True,
},
- "transition_in_days": {"type": "int", "required": True},
+ "transition_in_days": {"type": "int"},
"newer_noncurrent_versions": {"type": "int"},
},
},
@@ -1663,25 +1599,17 @@ def main():
"ONEZONE_IA",
"STANDARD_IA",
],
- "required": True,
},
- "transition_in_days": {"type": "int", "required": True},
+ "transition_in_days": {"type": "int"},
"newer_noncurrent_versions": {"type": "int"},
},
},
"prefix": {"type": "str"},
- "status": {
- "type": "str",
- "choices": ["Disabled", "Enabled"],
- "required": True,
- },
+ "status": {"type": "str", "choices": ["Disabled", "Enabled"]},
"tag_filters": {
"type": "list",
"elements": "dict",
- "options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
- },
+ "options": {"value": {"type": "str"}, "key": {"type": "str"}},
},
"object_size_greater_than": {"type": "str"},
"object_size_less_than": {"type": "str"},
@@ -1699,7 +1627,6 @@ def main():
"ONEZONE_IA",
"STANDARD_IA",
],
- "required": True,
},
"transition_date": {"type": "str"},
"transition_in_days": {"type": "int"},
@@ -1720,7 +1647,6 @@ def main():
"ONEZONE_IA",
"STANDARD_IA",
],
- "required": True,
},
"transition_date": {"type": "str"},
"transition_in_days": {"type": "int"},
@@ -1742,15 +1668,12 @@ def main():
"elements": "dict",
"options": {
"access_point_arn": {"type": "str"},
- "id": {"type": "str", "required": True},
+ "id": {"type": "str"},
"prefix": {"type": "str"},
"tag_filters": {
"type": "list",
"elements": "dict",
- "options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
- },
+ "options": {"value": {"type": "str"}, "key": {"type": "str"}},
},
},
}
@@ -1767,81 +1690,78 @@ def main():
"type": "list",
"elements": "dict",
"options": {
- "event": {"type": "str", "required": True},
+ "event": {"type": "str"},
"filter": {
"type": "dict",
"options": {
"s3_key": {
"type": "dict",
- "required": True,
"options": {
"rules": {
"type": "list",
"elements": "dict",
"options": {
- "name": {"type": "str", "required": True},
- "value": {"type": "str", "required": True},
+ "name": {"type": "str"},
+ "value": {"type": "str"},
},
}
},
}
},
},
- "function": {"type": "str", "required": True},
+ "function": {"type": "str"},
},
},
"queue_configurations": {
"type": "list",
"elements": "dict",
"options": {
- "event": {"type": "str", "required": True},
+ "event": {"type": "str"},
"filter": {
"type": "dict",
"options": {
"s3_key": {
"type": "dict",
- "required": True,
"options": {
"rules": {
"type": "list",
"elements": "dict",
"options": {
- "name": {"type": "str", "required": True},
- "value": {"type": "str", "required": True},
+ "name": {"type": "str"},
+ "value": {"type": "str"},
},
}
},
}
},
},
- "queue": {"type": "str", "required": True},
+ "queue": {"type": "str"},
},
},
"topic_configurations": {
"type": "list",
"elements": "dict",
"options": {
- "event": {"type": "str", "required": True},
+ "event": {"type": "str"},
"filter": {
"type": "dict",
"options": {
"s3_key": {
"type": "dict",
- "required": True,
"options": {
"rules": {
"type": "list",
"elements": "dict",
"options": {
- "name": {"type": "str", "required": True},
- "value": {"type": "str", "required": True},
+ "name": {"type": "str"},
+ "value": {"type": "str"},
},
}
},
}
},
},
- "topic": {"type": "str", "required": True},
+ "topic": {"type": "str"},
},
},
},
@@ -1874,7 +1794,6 @@ def main():
"options": {
"rules": {
"type": "list",
- "required": True,
"elements": "dict",
"options": {
"object_ownership": {
@@ -1901,7 +1820,7 @@ def main():
argument_spec["replication_configuration"] = {
"type": "dict",
"options": {
- "role": {"type": "str", "required": True},
+ "role": {"type": "str"},
"rules": {
"type": "list",
"elements": "dict",
@@ -1917,7 +1836,6 @@ def main():
},
"destination": {
"type": "dict",
- "required": True,
"options": {
"access_control_translation": {
"type": "dict",
@@ -1929,26 +1847,18 @@ def main():
"bucket": {"type": "str"},
"encryption_configuration": {
"type": "dict",
- "options": {
- "replica_kms_key_id": {
- "type": "str",
- "required": True,
- }
- },
+ "options": {"replica_kms_key_id": {"type": "str"}},
},
"metrics": {
"type": "dict",
"options": {
"event_threshold": {
"type": "dict",
- "options": {
- "minutes": {"type": "int", "required": True}
- },
+ "options": {"minutes": {"type": "int"}},
},
"status": {
"type": "str",
"choices": ["Disabled", "Enabled"],
- "required": True,
},
},
},
@@ -1958,14 +1868,10 @@ def main():
"status": {
"type": "str",
"choices": ["Disabled", "Enabled"],
- "required": True,
},
"time": {
"type": "dict",
- "required": True,
- "options": {
- "minutes": {"type": "int", "required": True}
- },
+ "options": {"minutes": {"type": "int"}},
},
},
},
@@ -1995,8 +1901,8 @@ def main():
"type": "list",
"elements": "dict",
"options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
+ "value": {"type": "str"},
+ "key": {"type": "str"},
},
},
},
@@ -2005,8 +1911,8 @@ def main():
"tag_filter": {
"type": "dict",
"options": {
- "value": {"type": "str", "required": True},
- "key": {"type": "str", "required": True},
+ "value": {"type": "str"},
+ "key": {"type": "str"},
},
},
},
@@ -2023,7 +1929,6 @@ def main():
"status": {
"type": "str",
"choices": ["Disabled", "Enabled"],
- "required": True,
}
},
},
@@ -2033,26 +1938,17 @@ def main():
"status": {
"type": "str",
"choices": ["Disabled", "Enabled"],
- "required": True,
}
},
},
},
},
- "status": {
- "type": "str",
- "choices": ["Disabled", "Enabled"],
- "required": True,
- },
+ "status": {"type": "str", "choices": ["Disabled", "Enabled"]},
},
},
},
}
- argument_spec["tags"] = {
- "type": "dict",
- "required": False,
- "aliases": ["resource_tags"],
- }
+ argument_spec["tags"] = {"type": "dict", "aliases": ["resource_tags"]}
argument_spec["versioning_configuration"] = {
"type": "dict",
"options": {
@@ -2081,7 +1977,6 @@ def main():
"replace_key_prefix_with": {"type": "str"},
"replace_key_with": {"type": "str"},
},
- "required": True,
},
"routing_rule_condition": {
"type": "dict",
@@ -2095,7 +1990,7 @@ def main():
"redirect_all_requests_to": {
"type": "dict",
"options": {
- "host_name": {"type": "str", "required": True},
+ "host_name": {"type": "str"},
"protocol": {"type": "str", "choices": ["http", "https"]},
},
},
@@ -2108,16 +2003,21 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
- argument_spec["purge_tags"] = {"type": "bool", "required": False, "default": True}
+ argument_spec["force"] = {"type": "bool", "default": False}
+ argument_spec["purge_tags"] = {"type": "bool", "default": True}
required_if = [
["state", "present", ["bucket_name"], True],
["state", "absent", ["bucket_name"], True],
["state", "get", ["bucket_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -2156,7 +2056,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -2164,22 +2064,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["bucket_name", "object_lock_enabled"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("bucket_name")
+ identifier = ["bucket_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/s3_multi_region_access_point.py b/plugins/modules/s3_multi_region_access_point.py
index 9f18cef4..c39f7f81 100644
--- a/plugins/modules/s3_multi_region_access_point.py
+++ b/plugins/modules/s3_multi_region_access_point.py
@@ -14,17 +14,26 @@
DOCUMENTATION = r"""
module: s3_multi_region_access_point
short_description: Create and manage Amazon S3 Multi-Region Access Points
-description: Create and manage Amazon S3 Multi-Region Access Points (list, create,
- update, describe, delete).
+description:
+- Create and manage Amazon S3 Multi-Region Access Points.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
name:
description:
- The name you want to assign to this Multi Region Access Point.
type: str
public_access_block_configuration:
description:
- - The I(public_access_block) configuration that you want to apply to this
- Multi Region Access Point.
+ - The PublicAccessBlock configuration that you want to apply to this Multi
+ Region Access Point.
- You can enable the configuration options in any combination.
- For more information about when Amazon S3 considers a bucket or object public,
see U(https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html#access-control-block-public-access-policy-status)
@@ -34,7 +43,7 @@
description:
- Specifies whether Amazon S3 should block public access control lists
(ACLs) for buckets in this account.
- - 'Setting this element to C(True) causes the following behavior:'
+ - Setting this element to C(True) causes the following behavior:.
- '- PUT Bucket acl and PUT Object acl calls fail if the specified
ACL is public.'
- '- PUT Object calls fail if the request includes a public ACL.'
@@ -77,12 +86,14 @@
- The name of the bucket that represents of the region belonging to this Multi
Region Access Point.
elements: dict
- required: true
suboptions:
+ account_id:
+ description:
+ - Not Provived.
+ type: str
bucket:
description:
- Not Provived.
- required: true
type: str
type: list
state:
@@ -113,7 +124,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -124,7 +134,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -173,8 +186,7 @@ def main():
argument_spec["regions"] = {
"type": "list",
"elements": "dict",
- "options": {"bucket": {"type": "str", "required": True}},
- "required": True,
+ "options": {"bucket": {"type": "str"}, "account_id": {"type": "str"}},
}
argument_spec["state"] = {
"type": "str",
@@ -183,15 +195,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
["state", "present", ["name", "regions"], True],
["state", "absent", ["name"], True],
["state", "get", ["name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -209,7 +226,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -217,22 +234,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["name", "public_access_block_configuration", "regions"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("name")
+ identifier = ["name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/s3_multi_region_access_point_policy.py b/plugins/modules/s3_multi_region_access_point_policy.py
index bb29746e..a93cada5 100644
--- a/plugins/modules/s3_multi_region_access_point_policy.py
+++ b/plugins/modules/s3_multi_region_access_point_policy.py
@@ -14,18 +14,26 @@
DOCUMENTATION = r"""
module: s3_multi_region_access_point_policy
short_description: Manage Amazon S3 access policies
-description: Applie and manage Amazon S3 access policies to an Amazon S3 Multi-Region
- Access Points.
+description:
+- Applie and manage Amazon S3 access policies to an Amazon S3 Multi-Region Access
+ Points.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
mrap_name:
description:
- - The name of the Multi Region Access Point to apply policy
- required: true
+ - The name of the Multi Region Access Point to apply policy.
type: str
policy:
description:
- - Policy document to apply to a Multi Region Access Point
- required: true
+ - Policy document to apply to a Multi Region Access Point.
type: dict
state:
choices:
@@ -55,7 +63,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -66,7 +73,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -102,8 +112,8 @@ def main():
),
)
- argument_spec["mrap_name"] = {"type": "str", "required": True}
- argument_spec["policy"] = {"type": "dict", "required": True}
+ argument_spec["mrap_name"] = {"type": "str"}
+ argument_spec["policy"] = {"type": "dict"}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -111,15 +121,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["mrap_name", "policy"], True],
+ ["state", "present", ["policy", "mrap_name"], True],
["state", "absent", ["mrap_name"], True],
["state", "get", ["mrap_name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -134,7 +149,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -142,22 +157,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["mrap_name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["update", "read", "list", "delete", "create"]
+
state = module.params.get("state")
- identifier = module.params.get("mrap_name")
+ identifier = ["mrap_name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/s3_object_lambda_access_point.py b/plugins/modules/s3objectlambda_access_point.py
similarity index 79%
rename from plugins/modules/s3_object_lambda_access_point.py
rename to plugins/modules/s3objectlambda_access_point.py
index ccd79a93..78c46fa8 100644
--- a/plugins/modules/s3_object_lambda_access_point.py
+++ b/plugins/modules/s3objectlambda_access_point.py
@@ -12,12 +12,21 @@
DOCUMENTATION = r"""
-module: s3_object_lambda_access_point
+module: s3objectlambda_access_point
short_description: Create and manage Object Lambda Access Points used to access S3
buckets
-description: Create and manage Object Lambda Access Points used to access S3 buckets
- (list, create, update, describe, delete).
+description:
+- Create and manage Object Lambda Access Points used to access S3 buckets.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
name:
description:
- The name you want to assign to this Object lambda Access Point.
@@ -25,13 +34,12 @@
object_lambda_configuration:
description:
- The Object lambda Access Point Configuration that configures transformations
- to be applied on the objects on specified S3 I(actions_configuration)
- to be applied to this Object lambda Access Point.
+ to be applied on the objects on specified S3 ActionsConfiguration to be
+ applied to this Object lambda Access Point.
- It specifies Supporting Access Point, Transformation Configurations.
- Customers can also set if they like to enable Cloudwatch metrics for accesses
to this Object lambda Access Point.
- Default setting for Cloudwatch metrics is disable.
- required: true
suboptions:
allowed_features:
description:
@@ -45,7 +53,6 @@
supporting_access_point:
description:
- Not Provived.
- required: true
type: str
transformation_configurations:
description:
@@ -57,12 +64,10 @@
description:
- Not Provived.
elements: str
- required: true
type: list
content_transformation:
description:
- Not Provived.
- required: true
suboptions:
aws_lambda:
description:
@@ -71,7 +76,6 @@
function_arn:
description:
- Not Provived.
- required: true
type: str
function_payload:
description:
@@ -109,7 +113,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -120,7 +123,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -160,31 +166,29 @@ def main():
argument_spec["object_lambda_configuration"] = {
"type": "dict",
"options": {
- "supporting_access_point": {"type": "str", "required": True},
+ "supporting_access_point": {"type": "str"},
"allowed_features": {"type": "list", "elements": "str"},
"cloud_watch_metrics_enabled": {"type": "bool"},
"transformation_configurations": {
"type": "list",
"elements": "dict",
"options": {
- "actions": {"type": "list", "required": True, "elements": "str"},
+ "actions": {"type": "list", "elements": "str"},
"content_transformation": {
"type": "dict",
"options": {
"aws_lambda": {
"type": "dict",
"options": {
- "function_arn": {"type": "str", "required": True},
+ "function_arn": {"type": "str"},
"function_payload": {"type": "str"},
},
}
},
- "required": True,
},
},
},
},
- "required": True,
}
argument_spec["state"] = {
"type": "str",
@@ -193,15 +197,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
["state", "present", ["name", "object_lambda_configuration"], True],
["state", "absent", ["name"], True],
["state", "get", ["name"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -218,7 +227,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -226,22 +235,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["name"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete", "list"]
+
state = module.params.get("state")
- identifier = module.params.get("name")
+ identifier = ["name"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/plugins/modules/s3_object_lambda_access_point_policy.py b/plugins/modules/s3objectlambda_access_point_policy.py
similarity index 70%
rename from plugins/modules/s3_object_lambda_access_point_policy.py
rename to plugins/modules/s3objectlambda_access_point_policy.py
index 1ed821fe..b6ce9693 100644
--- a/plugins/modules/s3_object_lambda_access_point_policy.py
+++ b/plugins/modules/s3objectlambda_access_point_policy.py
@@ -12,22 +12,29 @@
DOCUMENTATION = r"""
-module: s3_object_lambda_access_point_policy
+module: s3objectlambda_access_point_policy
short_description: Specifies the Object Lambda Access Point resource policy document
-description: Create and manage Object Lambda Access Point resource policy document.
+description:
+- Create and manage Object Lambda Access Point resource policy document.
options:
+ force:
+ default: false
+ description:
+ - Cancel IN_PROGRESS and PENDING resource requestes.
+ - Because you can only perform a single operation on a given resource at a
+ time, there might be cases where you need to cancel the current resource
+ operation to make the resource available so that another operation may
+ be performed on it.
+ type: bool
object_lambda_access_point:
description:
- - The name of the Amazon S3 I(object_lambda_access_point) to which the policy
- applies.
- required: true
+ - The name of the Amazon S3 ObjectLambdaAccessPoint to which the policy applies.
type: str
policy_document:
description:
- - A policy document containing permissions to add to the specified I(object_lambda_access_point).
+ - A policy document containing permissions to add to the specified ObjectLambdaAccessPoint.
- For more information, see Access Policy Language Overview (U(https://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-language-overview.html))
in the Amazon Simple Storage Service Developer Guide.
- required: true
type: dict
state:
choices:
@@ -57,7 +64,6 @@
type: int
author: Ansible Cloud Team (@ansible-collections)
version_added: 0.1.0
-requirements: []
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -68,7 +74,10 @@
RETURN = r"""
result:
- description: Dictionary containing resource information.
+ description:
+ - When I(state=list), it is a list containing dictionaries of resource information.
+ - Otherwise, it is a dictionary of resource information.
+ - When I(state=absent), it is an empty dictionary.
returned: always
type: complex
contains:
@@ -104,8 +113,8 @@ def main():
),
)
- argument_spec["object_lambda_access_point"] = {"type": "str", "required": True}
- argument_spec["policy_document"] = {"type": "dict", "required": True}
+ argument_spec["object_lambda_access_point"] = {"type": "str"}
+ argument_spec["policy_document"] = {"type": "dict"}
argument_spec["state"] = {
"type": "str",
"choices": ["present", "absent", "list", "describe", "get"],
@@ -113,15 +122,20 @@ def main():
}
argument_spec["wait"] = {"type": "bool", "default": False}
argument_spec["wait_timeout"] = {"type": "int", "default": 320}
+ argument_spec["force"] = {"type": "bool", "default": False}
required_if = [
- ["state", "present", ["policy_document", "object_lambda_access_point"], True],
+ ["state", "present", ["object_lambda_access_point", "policy_document"], True],
["state", "absent", ["object_lambda_access_point"], True],
["state", "get", ["object_lambda_access_point"], True],
]
+ mutually_exclusive = []
module = AnsibleAWSModule(
- argument_spec=argument_spec, required_if=required_if, supports_check_mode=True
+ argument_spec=argument_spec,
+ required_if=required_if,
+ mutually_exclusive=mutually_exclusive,
+ supports_check_mode=True,
)
cloud = CloudControlResource(module)
@@ -138,7 +152,7 @@ def main():
_params_to_set = {k: v for k, v in params.items() if v is not None}
# Only if resource is taggable
- if module.params.get("tags", None):
+ if module.params.get("tags") is not None:
_params_to_set["tags"] = ansible_dict_to_boto3_tag_list(module.params["tags"])
params_to_set = snake_dict_to_camel_dict(_params_to_set, capitalize_first=True)
@@ -146,22 +160,32 @@ def main():
# Ignore createOnlyProperties that can be set only during resource creation
create_only_params = ["object_lambda_access_point"]
+ # Necessary to handle when module does not support all the states
+ handlers = ["create", "read", "update", "delete"]
+
state = module.params.get("state")
- identifier = module.params.get("object_lambda_access_point")
+ identifier = ["object_lambda_access_point"]
- results = {"changed": False, "result": []}
+ results = {"changed": False, "result": {}}
if state == "list":
- results["result"] = cloud.list_resources(type_name)
+ if "list" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be listed."
+ )
+ results["result"] = cloud.list_resources(type_name, identifier)
if state in ("describe", "get"):
+ if "read" not in handlers:
+ module.exit_json(
+ **results, msg=f"Resource type {type_name} cannot be read."
+ )
results["result"] = cloud.get_resource(type_name, identifier)
if state == "present":
- results["changed"] |= cloud.present(
+ results = cloud.present(
type_name, identifier, params_to_set, create_only_params
)
- results["result"] = cloud.get_resource(type_name, identifier)
if state == "absent":
results["changed"] |= cloud.absent(type_name, identifier)
diff --git a/tests/config.yml b/tests/config.yml
index 5112f726..19e18bf7 100644
--- a/tests/config.yml
+++ b/tests/config.yml
@@ -1,2 +1,2 @@
modules:
- python_requires: '>=3.6'
+ python_requires: ">=3.9"
diff --git a/tests/integration/targets/eks/aliases b/tests/integration/targets/eks/aliases
new file mode 100644
index 00000000..48931436
--- /dev/null
+++ b/tests/integration/targets/eks/aliases
@@ -0,0 +1,4 @@
+slow
+
+cloud/aws
+zuul/aws/cloud_control
diff --git a/tests/integration/targets/eks/defaults/main.yml b/tests/integration/targets/eks/defaults/main.yml
new file mode 100644
index 00000000..33777f4b
--- /dev/null
+++ b/tests/integration/targets/eks/defaults/main.yml
@@ -0,0 +1,51 @@
+---
+_resource_prefix: "ansible-test-{{ tiny_prefix }}"
+
+eks_cluster_name: "{{ _resource_prefix }}-cluster"
+eks_fargate_profile_name_a: "{{ _resource_prefix }}-fp-a"
+eks_fargate_profile_name_b: "{{ _resource_prefix }}-fp-b"
+
+selectors:
+ - labels:
+ - key: "test"
+ value: "test"
+ namespace: "fp-default"
+
+tags:
+ Foo: foo
+ bar: Bar
+
+eks_subnets:
+ - zone: a
+ cidr: 10.0.1.0/24
+ type: private
+ tag: internal-elb
+ - zone: b
+ cidr: 10.0.2.0/24
+ type: public
+ tag: elb
+
+eks_security_groups:
+ - name: "{{ eks_cluster_name }}-control-plane-sg"
+ description: "EKS Control Plane Security Group"
+ rules:
+ - group_name: "{{ eks_cluster_name }}-workers-sg"
+ group_desc: "EKS Worker Security Group"
+ ports: 443
+ proto: tcp
+ rules_egress:
+ - group_name: "{{ eks_cluster_name }}-workers-sg"
+ group_desc: "EKS Worker Security Group"
+ from_port: 1025
+ to_port: 65535
+ proto: tcp
+ - name: "{{ eks_cluster_name }}-workers-sg"
+ description: "EKS Worker Security Group"
+ rules:
+ - group_name: "{{ eks_cluster_name }}-workers-sg"
+ proto: tcp
+ from_port: 1
+ to_port: 65535
+ - group_name: "{{ eks_cluster_name }}-control-plane-sg"
+ ports: 10250
+ proto: tcp
diff --git a/tests/integration/targets/eks/files/eks_cluster-policy.json b/tests/integration/targets/eks/files/eks_cluster-policy.json
new file mode 100644
index 00000000..85cfb59d
--- /dev/null
+++ b/tests/integration/targets/eks/files/eks_cluster-policy.json
@@ -0,0 +1,12 @@
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "eks.amazonaws.com"
+ },
+ "Action": "sts:AssumeRole"
+ }
+ ]
+}
diff --git a/tests/integration/targets/eks/files/eks_fargate_profile-policy.json b/tests/integration/targets/eks/files/eks_fargate_profile-policy.json
new file mode 100644
index 00000000..084274fd
--- /dev/null
+++ b/tests/integration/targets/eks/files/eks_fargate_profile-policy.json
@@ -0,0 +1,12 @@
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "eks-fargate-pods.amazonaws.com"
+ },
+ "Action": "sts:AssumeRole"
+ }
+ ]
+}
diff --git a/tests/integration/targets/eks/tasks/cleanup.yml b/tests/integration/targets/eks/tasks/cleanup.yml
new file mode 100644
index 00000000..7307ba3d
--- /dev/null
+++ b/tests/integration/targets/eks/tasks/cleanup.yml
@@ -0,0 +1,102 @@
+- name: Delete IAM role
+ community.aws.iam_role:
+ name: "{{ _result_create_iam_role.role_name }}"
+ state: absent
+ ignore_errors: true
+
+- name: Delete IAM role
+ community.aws.iam_role:
+ name: "{{ _result_create_iam_role_fp.role_name }}"
+ state: absent
+ ignore_errors: true
+
+- name: Delete a Fargate Profile b
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+
+- name: Delete a Fargate Profile a
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+
+- name: Remove EKS cluster
+ amazon.cloud.eks_cluster:
+ name: '{{ eks_cluster_name }}'
+ state: absent
+ wait: true
+ wait_timeout: 900
+ ignore_errors: true
+
+- name: Create list of all additional EKS security groups
+ set_fact:
+ additional_eks_sg:
+ - name: '{{ eks_cluster_name }}-workers-sg'
+
+- name: Set all security group rule lists to empty to remove circular dependency
+ ec2_group:
+ name: '{{ item.name }}'
+ description: '{{ item.description }}'
+ state: present
+ rules: []
+ rules_egress: []
+ purge_rules: true
+ purge_rules_egress: true
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ with_items: '{{ eks_security_groups }}'
+ ignore_errors: true
+
+- name: Remove security groups
+ ec2_group:
+ name: '{{ item.name }}'
+ state: absent
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ with_items: '{{ eks_security_groups | reverse | list + additional_eks_sg }}'
+ ignore_errors: true
+
+- name: Remove route tables
+ ec2_vpc_route_table:
+ state: absent
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ route_table_id: '{{ item }}'
+ lookup: id
+ ignore_errors: true
+ with_items:
+ - '{{ _result_create_public_route_table.route_table.route_table_id }}'
+ - '{{ _result_create_nat_route_table.route_table.route_table_id }}'
+
+- name: Remove NAT Gateway
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ nat_gateway_id: '{{ _result_create_nat_gateway.nat_gateway_id}}'
+ release_eip: true
+ wait: true
+ ignore_errors: true
+
+- name: Remove subnets
+ ec2_vpc_subnet:
+ az: '{{ aws_region }}{{ item.zone }}'
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ cidr: '{{ item.cidr}}'
+ state: absent
+ with_items: '{{ eks_subnets }}'
+ ignore_errors: true
+
+- name: Remove Internet Gateway
+ amazon.aws.ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ _result_create_vpc.vpc.id}}'
+ ignore_errors: true
+
+- name: Remove VPC
+ ec2_vpc_net:
+ cidr_block: 10.0.0.0/16
+ state: absent
+ name: "{{ _resource_prefix }}-vpc"
+ ignore_errors: true
diff --git a/tests/integration/targets/eks/tasks/eks_cluster.yml b/tests/integration/targets/eks/tasks/eks_cluster.yml
new file mode 100644
index 00000000..e391e3b4
--- /dev/null
+++ b/tests/integration/targets/eks/tasks/eks_cluster.yml
@@ -0,0 +1,123 @@
+# Create a EKS Cluster to test Fargate Profile
+- name: Ensure IAM instance role exists
+ community.aws.iam_role:
+ name: "{{ _resource_prefix }}-cluster-role"
+ assume_role_policy_document: "{{ lookup('file','eks_cluster-policy.json') }}"
+ state: present
+ create_instance_profile: false
+ managed_policies:
+ - AmazonEKSServicePolicy
+ - AmazonEKSClusterPolicy
+ register: _result_create_iam_role
+
+- name: Create a VPC
+ ec2_vpc_net:
+ cidr_block: 10.0.0.0/16
+ state: present
+ name: "{{ _resource_prefix }}-vpc"
+ resource_tags:
+ Name: "{{ _resource_prefix }}-vpc"
+ register: _result_create_vpc
+
+- name: Create subnets
+ ec2_vpc_subnet:
+ az: "{{ aws_region }}{{ item.zone }}"
+ tags: '{ "Name": "{{ _resource_prefix }}-subnet-{{ item.type }}-{{ item.zone }}", "kubernetes.io/role/{{ item.tag }}": "1" }'
+ vpc_id: "{{ _result_create_vpc.vpc.id }}"
+ cidr: '{{ item.cidr }}'
+ state: present
+ register: _result_create_subnets
+ with_items:
+ - '{{ eks_subnets }}'
+
+- name: Create Internet Gateway
+ amazon.aws.ec2_vpc_igw:
+ vpc_id: "{{ _result_create_vpc.vpc.id }}"
+ state: present
+ tags:
+ Name: "{{ _resource_prefix }}-IGW"
+ register: _result_create_igw
+
+- name: Set up public subnet route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: "{{ _result_create_vpc.vpc.id }}"
+ tags:
+ Name: "Public"
+ subnets: "{{ _result_create_subnets.results | selectattr('subnet.tags.Name', 'contains', 'public') | map(attribute='subnet.id') }}"
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: "{{ _result_create_igw.gateway_id }}"
+ register: _result_create_public_route_table
+
+- name: Create NAT Gateway
+ amazon.aws.ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ state: present
+ subnet_id: "{{ (_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains', 'public') | map(attribute='subnet.id'))[0] }}"
+ wait: true
+ tags:
+ Name: "{{ _resource_prefix }}-NAT"
+ register: _result_create_nat_gateway
+
+- name: Set up NAT-protected route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ tags:
+ Name: Internal
+ subnets: "{{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains', 'private') | map(attribute='subnet.id') }}"
+ routes:
+ - dest: 0.0.0.0/0
+ nat_gateway_id: "{{ _result_create_nat_gateway.nat_gateway_id }}"
+ register: _result_create_nat_route_table
+
+- name: Create security groups to use for EKS cluster
+ ec2_group:
+ name: '{{ item.name }}'
+ description: '{{ item.description }}'
+ state: present
+ rules: '{{ item.rules }}'
+ rules_egress: '{{ item.rules_egress | default(omit) }}'
+ vpc_id: '{{ _result_create_vpc.vpc.id }}'
+ with_items: '{{ eks_security_groups }}'
+ register: _result_create_security_groups
+
+- debug:
+ msg: "{{ _result_create_security_groups }}"
+
+- name: Create EKS cluster
+ amazon.cloud.eks_cluster:
+ name: "{{ eks_cluster_name }}"
+ resources_vpc_config:
+ security_group_ids: "{{ _result_create_security_groups.results | map(attribute='group_id') }}"
+ subnet_ids: "{{ _result_create_subnets.results | map(attribute='subnet.id') }}"
+ endpoint_public_access: true
+ endpoint_private_access: false
+ public_access_cidrs:
+ - 0.0.0.0/0
+ role_arn: "{{ _result_create_iam_role.arn }}"
+ tags:
+ Name: "{{ _resource_prefix }}-eks-cluster"
+ wait_timeout: 900
+ register: _result_create_cluster
+ tags:
+ - docs
+
+- name: Check that EKS cluster was created
+ assert:
+ that:
+ - _result_create_cluster.result.identifier == "{{ eks_cluster_name }}"
+
+- name: Describe EKS cluster
+ amazon.cloud.eks_cluster:
+ name: "{{ eks_cluster_name }}"
+ state: describe
+ register: _result_get_cluster
+ tags:
+ - docs
+
+- name: List EKS clusters
+ amazon.cloud.eks_cluster:
+ state: list
+ register: _result_list_clusters
+ tags:
+ - docs
diff --git a/tests/integration/targets/eks/tasks/eks_fargate_profile.yml b/tests/integration/targets/eks/tasks/eks_fargate_profile.yml
new file mode 100644
index 00000000..536c1bf2
--- /dev/null
+++ b/tests/integration/targets/eks/tasks/eks_fargate_profile.yml
@@ -0,0 +1,514 @@
+# Creating dependencies
+- name: Delete Fargate Profile b (if present)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Delete Fargate Profile b (if present) using identifier option
+ amazon.cloud.eks_fargate_profile:
+ identifier: "{{ eks_cluster_name }}|{{ eks_fargate_profile_name_b }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Delete Fargate Profile a (if present)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Create IAM instance role
+ community.aws.iam_role:
+ name: "{{ _resource_prefix }}-fp-role"
+ assume_role_policy_document: "{{ lookup('file', 'eks_fargate_profile-policy.json') }}"
+ state: present
+ create_instance_profile: false
+ managed_policies:
+ - AmazonEKSFargatePodExecutionRolePolicy
+ register: _result_create_iam_role_fp
+
+- name: Pause a few seconds to ensure IAM role is available to next task
+ pause:
+ seconds: 10
+
+- name: Attempt to create Fargate Profile a in non existent EKS
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: '{{ eks_fargate_profile_name_a }}'
+ state: present
+ cluster_name: fake_cluster
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: '{{ selectors }}'
+ ignore_errors: true
+ register: _result_create_non_existent_cluster
+
+- name: Check that Fargate Profile did nothing
+ assert:
+ that:
+ - _result_create_non_existent_cluster is failed
+ - "'No cluster found for name: fake_cluster.' in _result_create_non_existent_cluster.msg"
+
+- name: Delete an as yet non-existent Fargate Profile
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: fake_profile
+ cluster_name: '{{ eks_cluster_name }}'
+ state: absent
+ pod_execution_role_arn: '{{ _result_create_iam_role_fp.arn }}'
+ subnets: >-
+ {{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: '{{ selectors }}'
+ register: _result_delete_non_existent_fp
+
+- name: Check that delete an as yet non-existent Fargate Profile did nothing
+ assert:
+ that:
+ - _result_delete_non_existent_fp is not changed
+
+- name: Try create a Fargate Profile a with public subnets (expected to fail)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ _result_create_cluster.result.identifier }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains',
+ 'public') | map(attribute='subnet.id') }}
+ selectors: '{{ selectors }}'
+ wait: true
+ ignore_errors: true
+ register: _result_create_fp
+
+- name: Check that create Fargate Profile a with public subnets failed
+ assert:
+ that:
+ - _result_create_fp is failed
+ - "'provided in Fargate Profile is not a private subnet' in _result_create_fp.msg"
+
+- name: Create Fargate Profile a with wait (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ check_mode: true
+ register: _result_create_fp
+
+- name: Assert Fargate Profile a is created (check mode)
+ assert:
+ that:
+ - _result_create_fp.changed
+
+- name: Create Fargate Profile with fargate_profile_name option only (expected to fail)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ register: _result_create_fp
+ ignore_errors: true
+
+- name: Create Fargate Profile with identifier option only (expected to fail)
+ amazon.cloud.eks_fargate_profile:
+ identifier: "{{ eks_cluster_name }}|{{ eks_fargate_profile_name_b }}"
+ state: present
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ register: _result_create_fp
+ ignore_errors: true
+
+- name: Create Fargate Profile a with wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ register: _result_create_fp
+ tags:
+ - docs
+
+- name: Assert Fargate Profile a is created
+ assert:
+ that:
+ - _result_create_fp.changed
+
+- name: Try create same Fargate Profile with wait - idempotency (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ purge_tags: false
+ check_mode: true
+ register: _result_create_fp
+
+- name: Assert result is not changed - idempotency (check mode)
+ assert:
+ that:
+ - not _result_create_fp.changed
+
+- name: Try create same Fargate Profile with wait - idempotency
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags: "{{ tags }}"
+ register: _result_create_fp
+
+- name: Assert result is not changed - idempotency (check mode)
+ assert:
+ that:
+ - not _result_create_fp.changed
+
+- name: List Fargate Profiles
+ amazon.cloud.eks_fargate_profile:
+ state: list
+ cluster_name: "{{ eks_cluster_name }}"
+ register: _result_list_fp
+ tags:
+ - docs
+
+- name: Update tags in Fargate Profile a with wait (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags:
+ env: test
+ test: foo
+ check_mode: True
+ register: _result_update_tags_fp
+ tags:
+ - docs
+
+- name: Assert result is changed (check mode)
+ assert:
+ that:
+ - _result_update_tags_fp.changed
+
+- name: Update tags in Fargate Profile a with wait and identifier option (check mode)
+ amazon.cloud.eks_fargate_profile:
+ identifier: "{{ eks_cluster_name }}|{{ eks_fargate_profile_name_a }}"
+ state: present
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags:
+ env: test
+ test: foo
+ check_mode: True
+ register: _result_update_tags_fp
+
+- name: Assert result is changed (check mode)
+ assert:
+ that:
+ - _result_update_tags_fp.changed
+
+- name: Update tags in Fargate Profile a with wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags:
+ env: test
+ test: foo
+ register: _result_update_tags_fp
+ ignore_errors: true
+
+- name: Assert result is changed
+ assert:
+ that:
+ - _result_update_tags_fp.changed
+
+- name: Try update tags again in Fargate Profile a with wait - idempotency (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results | selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags:
+ env: test
+ test: foo
+ register: _result_update_tags_fp
+ ignore_errors: true
+ check_mode: True
+
+- name: Assert result is not changed - idempotency (check mode)
+ assert:
+ that:
+ - not _result_update_tags_fp.changed
+
+- name: Try update tags again in Fargate Profile a with wait - idempotency
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ wait: true
+ tags:
+ env: test
+ test: foo
+ register: _result_update_tags_fp
+ ignore_errors: true
+
+- name: Assert result is not changed - idempotency
+ assert:
+ that:
+ - not _result_update_tags_fp.changed
+
+- name: Try update tags again in Fargate Profile a without wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ tags:
+ env: test
+ newTag: New Tag
+ register: _result_update_tags_fp
+ ignore_errors: true
+
+- name: Assert result is changed
+ assert:
+ that:
+ - _result_update_tags_fp.changed
+
+- name: Try update tags again in Fargate Profile a without wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ tags:
+ env: test
+ newTag_1: New Tag 1
+ register: _result_update_tags_fp
+ ignore_errors: true
+
+- name: Assert result is not changed
+ assert:
+ that:
+ - _result_update_tags_fp.changed
+
+- name: Create Fargate Profile b without wait (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ ignore_errors: true
+ check_mode: true
+ register: _result_create_fp
+
+- name: Assert Fargate Profile b is created (check mode)
+ assert:
+ that:
+ - _result_create_fp.changed
+
+- name: Create Fargate Profile b without wait
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ state: present
+ cluster_name: "{{ eks_cluster_name }}"
+ pod_execution_role_arn: "{{ _result_create_iam_role_fp.arn }}"
+ subnets: >-
+ {{_result_create_subnets.results|selectattr('subnet.tags.Name', 'contains',
+ 'private') | map(attribute='subnet.id') }}
+ selectors: "{{ selectors }}"
+ register: _result_create_fp
+ ignore_errors: true
+
+- name: Assert Fargate Profile b is created (check mode)
+ assert:
+ that:
+ - _result_create_fp.changed
+
+- name: Delete Fargate Profile a (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ register: _result_delete_fp
+ check_mode: true
+
+- name: Assert Fargate Profile a is deleted (check mode)
+ assert:
+ that:
+ - _result_delete_fp.changed
+
+- name: Delete Fargate Profile a
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ wait_timeout: 900
+ register: _result_delete_fp
+ tags:
+ - docs
+
+- name: Assert Fargate Profile a is deleted
+ assert:
+ that:
+ - _result_delete_fp.changed
+
+- name: Delete Fargate Profile a - idempotency (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ register: _result_delete_fp
+ check_mode: true
+
+- name: Assert result is not changed - idempotency (check mode)
+ assert:
+ that:
+ - not _result_delete_fp.changed
+
+- name: Delete Fargate Profile a (idempotency)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_a }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ register: _result_delete_fp
+
+- name: Assert result is not changed - idempotency
+ assert:
+ that:
+ - not _result_delete_fp.changed
+
+- name: Delete Fargate Profile b (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ register: _result_delete_fp
+ check_mode: true
+
+- name: Assert Fargate Profile b is deleted (check_mode)
+ assert:
+ that:
+ - _result_delete_fp.changed
+
+- name: Delete Fargate Profile b using identifier
+ amazon.cloud.eks_fargate_profile:
+ identifier: "{{ eks_cluster_name }}|{{ eks_fargate_profile_name_b }}"
+ state: absent
+ wait: true
+ wait_timeout: 900
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Assert Fargate Profile b is deleted
+ assert:
+ that:
+ - _result_delete_fp.changed
+
+- name: Delete Fargate Profile b - idempotency (check mode)
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Assert result is not changed - idempotency (check mode)
+ assert:
+ that:
+ - not _result_delete_fp.changed
+
+- name: Delete Fargate Profile b - idempotency
+ amazon.cloud.eks_fargate_profile:
+ fargate_profile_name: "{{ eks_fargate_profile_name_b }}"
+ cluster_name: "{{ eks_cluster_name }}"
+ state: absent
+ wait: true
+ ignore_errors: true
+ register: _result_delete_fp
+
+- name: Assert result is not changed - idempotency
+ assert:
+ that:
+ - not _result_delete_fp.changed
diff --git a/tests/integration/targets/eks/tasks/main.yml b/tests/integration/targets/eks/tasks/main.yml
new file mode 100644
index 00000000..cee07fcf
--- /dev/null
+++ b/tests/integration/targets/eks/tasks/main.yml
@@ -0,0 +1,22 @@
+---
+- name: EKS integration tests
+ collections:
+ - amazon.aws
+ - community.aws
+ - amazon.cloud
+ module_defaults:
+ group/amazon.cloud.aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ block:
+ - include_tasks: eks_cluster.yml
+ - include_tasks: eks_fargate_profile.yml
+ always:
+ - include_tasks: cleanup.yml
diff --git a/tests/integration/targets/iam/aliases b/tests/integration/targets/iam/aliases
new file mode 100644
index 00000000..f1e0d7e3
--- /dev/null
+++ b/tests/integration/targets/iam/aliases
@@ -0,0 +1,2 @@
+cloud/aws
+zuul/aws/cloud_control
diff --git a/tests/integration/targets/iam/defaults/main.yml b/tests/integration/targets/iam/defaults/main.yml
new file mode 100644
index 00000000..1f136642
--- /dev/null
+++ b/tests/integration/targets/iam/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+cert_name: 'ansible-test-{{ tiny_prefix }}'
diff --git a/tests/integration/targets/iam/meta/main.yml b/tests/integration/targets/iam/meta/main.yml
new file mode 100644
index 00000000..1810d4be
--- /dev/null
+++ b/tests/integration/targets/iam/meta/main.yml
@@ -0,0 +1,2 @@
+dependencies:
+ - setup_remote_tmp_dir
diff --git a/tests/integration/targets/iam/tasks/generate_certs.yml b/tests/integration/targets/iam/tasks/generate_certs.yml
new file mode 100644
index 00000000..02d2dac7
--- /dev/null
+++ b/tests/integration/targets/iam/tasks/generate_certs.yml
@@ -0,0 +1,65 @@
+################################################
+# Setup SSL certs to store in IAM
+################################################
+- name: 'Generate SSL Keys'
+ community.crypto.openssl_privatekey:
+ path: '{{ remote_tmp_dir }}/{{ item }}-key.pem'
+ size: 2048
+ loop:
+ - 'ca'
+ - 'cert1'
+ - 'cert2'
+
+- name: 'Generate CSRs'
+ community.crypto.openssl_csr:
+ path: '{{ remote_tmp_dir }}/{{ item }}.csr'
+ privatekey_path: '{{ remote_tmp_dir }}/{{ item }}-key.pem'
+ common_name: '{{ item }}.ansible.test'
+ subject_alt_name: 'DNS:{{ item }}.ansible.test'
+ basic_constraints:
+ - 'CA:TRUE'
+ loop:
+ - 'ca'
+ - 'cert1'
+ - 'cert2'
+
+- name: 'Self-sign the "root"'
+ community.crypto.x509_certificate:
+ provider: selfsigned
+ path: '{{ remote_tmp_dir }}/ca.pem'
+ privatekey_path: '{{ remote_tmp_dir }}/ca-key.pem'
+ csr_path: '{{ remote_tmp_dir }}/ca.csr'
+
+- name: 'Sign the intermediate cert'
+ community.crypto.x509_certificate:
+ provider: ownca
+ path: '{{ remote_tmp_dir }}/cert1.pem'
+ csr_path: '{{ remote_tmp_dir }}/cert1.csr'
+ ownca_path: '{{ remote_tmp_dir }}/ca.pem'
+ ownca_privatekey_path: '{{ remote_tmp_dir }}/ca-key.pem'
+
+- name: 'Sign the end-cert'
+ community.crypto.x509_certificate:
+ provider: ownca
+ path: '{{ remote_tmp_dir }}/cert2.pem'
+ csr_path: '{{ remote_tmp_dir }}/cert2.csr'
+ ownca_path: '{{ remote_tmp_dir }}/cert1.pem'
+ ownca_privatekey_path: '{{ remote_tmp_dir }}/cert1-key.pem'
+
+- name: 'Re-Sign the end-cert'
+ community.crypto.x509_certificate:
+ provider: ownca
+ path: '{{ remote_tmp_dir }}/cert2-new.pem'
+ csr_path: '{{ remote_tmp_dir }}/cert2.csr'
+ ownca_path: '{{ remote_tmp_dir }}/cert1.pem'
+ ownca_privatekey_path: '{{ remote_tmp_dir }}/cert1-key.pem'
+
+- set_fact:
+ path_ca_cert: '{{ remote_tmp_dir }}/ca.pem'
+ path_ca_key: '{{ remote_tmp_dir }}/ca-key.pem'
+ path_intermediate_cert: '{{ remote_tmp_dir }}/cert1.pem'
+ path_intermediate_key: '{{ remote_tmp_dir }}/cert1-key.pem'
+ # Same key, updated cert
+ path_cert_a: '{{ remote_tmp_dir }}/cert2.pem'
+ path_cert_b: '{{ remote_tmp_dir }}/cert2-new.pem'
+ path_cert_key: '{{ remote_tmp_dir }}/cert2-key.pem'
diff --git a/tests/integration/targets/iam/tasks/main.yml b/tests/integration/targets/iam/tasks/main.yml
new file mode 100644
index 00000000..396f0040
--- /dev/null
+++ b/tests/integration/targets/iam/tasks/main.yml
@@ -0,0 +1,233 @@
+- name: IAM integration tests
+ module_defaults:
+ group/amazon.cloud.aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ collections:
+ - amazon.cloud
+ - community.crypto
+
+ block:
+ - include_tasks: ./generate_certs.yml
+
+ - set_fact:
+ cert_a_data: '{{ lookup("file", path_cert_a) }}'
+ cert_b_data: '{{ lookup("file", path_cert_b) }}'
+ chain_cert_data: '{{ lookup("file", path_intermediate_cert) }}'
+
+ - name: Create Certificate - CHECK_MODE
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ register: create_cert
+ check_mode: true
+
+ - name: Check result - Create Certificate - CHECK_MODE
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is changed
+
+ - name: Create Certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ wait: true
+ register: create_cert
+ tags:
+ - docs
+
+ - name: Check result - Create Certificate
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is changed
+ - '"arn" in result'
+ - '"path" in result'
+ - '"server_certificate_name" in result'
+ - result.arn.startswith('arn:aws')
+ - result.arn.endswith(cert_name)
+ - result.server_certificate_name == cert_name
+ - result.path == '/'
+ vars:
+ result: "{{ create_cert.result.properties }}"
+
+ - name: Create Certificate - CHECK_MODE (idempotency)
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ register: create_cert
+ check_mode: true
+
+ - name: Check result - Create Certificate - CHECK_MODE
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is not changed
+
+ - name: Create Certificate (idempotency)
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ register: create_cert
+
+ - name: Check result - Create Certificate
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is not changed
+
+ - name: Update Chaining Certificate (CreateOnlyProperties) - CHECK_MODE
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_chain: '{{ chain_cert_data }}'
+ register: update_cert
+ ignore_errors: true
+
+ - name: Check result - Update Certificate
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is not changed
+
+ - name: Delete certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: absent
+ register: delete_cert
+ tags:
+ - docs
+
+ - name: Check result - Delete certificate
+ assert:
+ that:
+ - delete_cert is successful
+ - delete_cert is changed
+
+ - name: Delete certificate - idempotency - CHECK_MODE
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: absent
+ register: delete_cert
+ check_mode: true
+
+ - name: Check result - Delete certificate - CHECK_MODE
+ assert:
+ that:
+ - delete_cert is successful
+ - delete_cert is not changed
+
+ - name: Delete certificate - idempotency
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: absent
+ register: delete_cert
+
+ - name: check result - Delete certificate
+ assert:
+ that:
+ - delete_cert is successful
+ - delete_cert is not changed
+
+ - name: Create Certificate with Chain and path - CHECK_MODE
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ certificate_chain: '{{ chain_cert_data }}'
+ path: '/example/'
+ register: create_cert
+ check_mode: true
+
+ - name: Check result - Create Certificate with Chain and path - CHECK_MODE
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is changed
+
+ - name: Create Certificate with Chain and path
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ certificate_chain: '{{ chain_cert_data }}'
+ path: '/example/'
+ register: create_cert
+ tags:
+ - docs
+
+ - name: Check result - Create Certificate with Chain and path
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is changed
+ - '"arn" in result'
+ - '"path" in result'
+ - '"server_certificate_name" in result'
+ - result.arn.startswith('arn:aws')
+ - result.arn.endswith(cert_name)
+ - result.server_certificate_name == cert_name
+ - result.path == '/example/'
+ vars:
+ result: "{{ create_cert.result.properties }}"
+
+ - name: Create Certificate with Chain and path - idempotency - CHECK_MODE
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ certificate_chain: '{{ chain_cert_data }}'
+ path: '/example/'
+ register: create_cert
+ check_mode: true
+
+ - name: Check result - Create Certificate with Chain and path - idempotency - CHECK_MODE
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is not changed
+
+ - name: Create Certificate with chain and path - idempotency
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: present
+ certificate_body: '{{ cert_a_data }}'
+ private_key: '{{ lookup("file", path_cert_key) }}'
+ certificate_chain: '{{ chain_cert_data }}'
+ path: '/example/'
+ register: create_cert
+
+ - name: Check result - Create Certificate with Chain and path - idempotency
+ assert:
+ that:
+ - create_cert is successful
+ - create_cert is not changed
+
+ - name: Gather information about a certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: get
+ register: create_info
+ tags:
+ - docs
+
+ always:
+ - name: Delete certificate
+ amazon.cloud.iam_server_certificate:
+ server_certificate_name: '{{ cert_name }}'
+ state: absent
+ ignore_errors: true
diff --git a/tests/integration/targets/logs/aliases b/tests/integration/targets/logs/aliases
index 4ef4b206..f1e0d7e3 100644
--- a/tests/integration/targets/logs/aliases
+++ b/tests/integration/targets/logs/aliases
@@ -1 +1,2 @@
cloud/aws
+zuul/aws/cloud_control
diff --git a/tests/integration/targets/logs/tasks/main.yml b/tests/integration/targets/logs/tasks/main.yml
index f12247ab..08288e1b 100644
--- a/tests/integration/targets/logs/tasks/main.yml
+++ b/tests/integration/targets/logs/tasks/main.yml
@@ -14,7 +14,7 @@
log_group_name: "test-{{ lookup('password', '/dev/null') | hash('md5') }}"
- name: Create log group (check mode)
- logs_log_group: &log_group
+ amazon.cloud.logs_log_group: &log_group
state: present
log_group_name: "{{ log_group_name }}"
retention_in_days: 7
@@ -23,26 +23,36 @@
check_mode: yes
- name: Create log group
- logs_log_group:
+ amazon.cloud.logs_log_group:
<<: *log_group
wait: yes
register: output
+ tags:
+ - docs
- assert:
that:
- - output.result[0].identifier == log_group_name
+ - output.result.identifier == log_group_name
- name: Create log group (idempotence)
- logs_log_group:
+ amazon.cloud.logs_log_group:
*log_group
register: output
- assert:
that:
- output is not changed
+
+ - name: Describe log group
+ amazon.cloud.logs_log_group:
+ state: describe
+ log_group_name: "{{ log_group_name }}"
+ register: output
+ tags:
+ - docs
- name: Update log group (check mode)
- logs_log_group: &log_group_update
+ amazon.cloud.logs_log_group: &log_group_update
state: present
log_group_name: "{{ log_group_name }}"
tags:
@@ -55,20 +65,22 @@
- output is changed
- name: Update log group
- logs_log_group:
+ amazon.cloud.logs_log_group:
<<: *log_group_update
purge_tags: false
wait: yes
register: output
+ tags:
+ - docs
- assert:
that:
- output is changed
- - "'testkey' in output.result[0].properties.tags"
- - "'anotherkey' in output.result[0].properties.tags"
+ - "'testkey' in output.result.properties.tags"
+ - "'anotherkey' in output.result.properties.tags"
- name: Update log group (idempotence)
- logs_log_group:
+ amazon.cloud.logs_log_group:
*log_group_update
register: output
@@ -79,7 +91,7 @@
# - output is not changed
- name: Delete log group (check mode)
- logs_log_group:
+ amazon.cloud.logs_log_group:
state: absent
log_group_name: "{{ log_group_name }}"
check_mode: yes
@@ -90,17 +102,19 @@
- output is changed
- name: Delete log group
- logs_log_group:
+ amazon.cloud.logs_log_group:
state: absent
log_group_name: "{{ log_group_name }}"
register: output
+ tags:
+ - docs
- assert:
that:
- output is changed
- name: Delete log group (idempotence)
- logs_log_group:
+ amazon.cloud.logs_log_group:
state: absent
log_group_name: "{{ log_group_name }}"
register: output
@@ -111,7 +125,7 @@
always:
- name: Cleanup log group
- logs_log_group:
+ amazon.cloud.logs_log_group:
state: absent
log_group_name: "{{ log_group_name }}"
- ignore_errors: yes
+ ignore_errors: true
diff --git a/tests/integration/targets/s3/aliases b/tests/integration/targets/s3/aliases
index 4ef4b206..f1e0d7e3 100644
--- a/tests/integration/targets/s3/aliases
+++ b/tests/integration/targets/s3/aliases
@@ -1 +1,2 @@
cloud/aws
+zuul/aws/cloud_control
diff --git a/tests/integration/targets/s3/tasks/main.yml b/tests/integration/targets/s3/tasks/main.yml
index bafe234d..ebff939c 100644
--- a/tests/integration/targets/s3/tasks/main.yml
+++ b/tests/integration/targets/s3/tasks/main.yml
@@ -1,4 +1,4 @@
-- name: S3 bucket tests
+- name: S3 bucket integration tests
module_defaults:
group/amazon.cloud.aws:
aws_access_key: '{{ aws_access_key }}'
@@ -13,31 +13,270 @@
set_fact:
bucket_name: "{{ lookup('password', '/dev/null') | to_uuid }}"
+ - name: Delete S3 bucket if already present
+ amazon.cloud.s3_bucket:
+ bucket_name: '{{ bucket_name }}'
+ state: absent
+ register: output
+ ignore_errors: true
+
+ - name: Create S3 bucket - check_mode
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ bucket_name }}"
+ state: present
+ check_mode: true
+ register: output
+
+ - assert:
+ that:
+ - output is success
+ - output is changed
+
- name: Create S3 bucket
amazon.cloud.s3_bucket:
bucket_name: "{{ bucket_name }}"
+ state: present
register: output
+ tags:
+ - docs
+
+ - assert:
+ that:
+ - output is success
+ - output is changed
- - name: Get S3 bucket
+ - name: Describe S3 bucket
amazon.cloud.s3_bucket:
state: describe
- bucket_name: "{{ output.result[0].identifier }}"
+ bucket_name: "{{ output.result.identifier }}"
+ register: _result
+ tags:
+ - docs
+
+ - assert:
+ that:
+ - _result is success
+
+ - name: List S3 buckets
+ amazon.cloud.s3_bucket:
+ state: list
+ register: _result
+ tags:
+ - docs
+
+ - assert:
+ that:
+ - _result is success
+
+ - name: Create S3 bucket - idempotence
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
+
+ - name: Create S3 bucket (check_mode) - idempotence
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ check_mode: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
+
+ - name: Update S3 bucket public access block configuration and tags - check_mode (diff=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ bucket_name }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ tags:
+ mykey: "myval"
+ diff: true
+ check_mode: true
+ register: _result
- - name: Modify S3 bucket
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+ - "'diff' in _result"
+
+ - name: Update S3 bucket public access block configuration and tags (diff=true)
amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
state: present
- bucket_name: "{{ output.result[0].identifier }}"
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
tags:
mykey: "myval"
+ diff: true
+ register: _result
+ tags:
+ - docs
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+ - "'diff' in _result"
+ - _result.diff.after != {}
+ - _result.diff.before == {}
+
+ - name: Update S3 bucket public access block configuration and tags - idempotence (diff=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ tags:
+ mykey: "myval"
+ diff: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
+ - "'diff' in _result"
+ - _result.diff == {}
+
+ - name: Update S3 bucket public access block configuration (block_public_policy=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: true
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+
+ - name: Update S3 bucket public access block configuration (block_public_policy=false, force=true) - check_mode
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ force: true
+ check_mode: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+
+ - name: Update S3 bucket public access block configuration (block_public_policy=false, force=true)
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ force: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+
+ - name: Update S3 bucket public access block configuration (block_public_policy=false, force=true) - idempotency
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: present
+ public_access_block_configuration:
+ block_public_acls: false
+ block_public_policy: false
+ ignore_public_acls: false
+ restrict_public_buckets: false
+ force: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
+
+ - include_tasks: tagging.yml
+
+ - name: Delete S3 bucket - check_mode
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: absent
+ check_mode: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
- name: Delete S3 bucket
amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
state: absent
- bucket_name: "{{ output.result[0].identifier }}"
-
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is changed
+
+ - name: Delete S3 bucket - idempotence
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: absent
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
+
+ - name: Delete S3 bucket (check_mode) - idempotence
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+ state: absent
+ check_mode: true
+ register: _result
+
+ - assert:
+ that:
+ - _result is success
+ - _result is not changed
always:
- name: Delete S3 bucket
amazon.cloud.s3_bucket:
state: absent
bucket_name: "{{ bucket_name }}"
- ignore_errors: yes
+ ignore_errors: true
+ tags:
+ - docs
diff --git a/tests/integration/targets/s3/tasks/tagging.yml b/tests/integration/targets/s3/tasks/tagging.yml
new file mode 100644
index 00000000..4f8728ab
--- /dev/null
+++ b/tests/integration/targets/s3/tasks/tagging.yml
@@ -0,0 +1,244 @@
+- name: Tests relating to setting tags
+ vars:
+ first_tags:
+ 'Key with Spaces': Value with spaces
+ CamelCaseKey: CamelCaseValue
+ pascalCaseKey: pascalCaseValue
+ snake_case_key: snake_case_value
+ second_tags:
+ 'New Key with Spaces': Value with spaces
+ NewCamelCaseKey: CamelCaseValue
+ newPascalCaseKey: pascalCaseValue
+ new_snake_case_key: snake_case_value
+ third_tags:
+ 'Key with Spaces': Value with spaces
+ CamelCaseKey: CamelCaseValue
+ pascalCaseKey: pascalCaseValue
+ snake_case_key: snake_case_value
+ 'New Key with Spaces': Updated Value with spaces
+ final_tags:
+ 'Key with Spaces': Value with spaces
+ CamelCaseKey: CamelCaseValue
+ pascalCaseKey: pascalCaseValue
+ snake_case_key: snake_case_value
+ 'New Key with Spaces': Updated Value with spaces
+ NewCamelCaseKey: CamelCaseValue
+ newPascalCaseKey: pascalCaseValue
+ new_snake_case_key: snake_case_value
+
+ # Mandatory settings
+ module_defaults:
+ amazon.cloud.s3_bucket:
+ bucket_name: "{{ output.result.identifier }}"
+
+ block:
+ - name: test adding tags to amazon.cloud.s3_bucket (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ first_tags }}'
+ purge_tags: true
+ check_mode: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is success
+ - _result is changed
+
+ - name: test adding tags to amazon.cloud.s3_bucket
+ amazon.cloud.s3_bucket:
+ tags: '{{ first_tags }}'
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is success
+ - _result is changed
+ - _result.result.properties.tags == first_tags
+
+ - name: test adding tags to amazon.cloud.s3_bucket - idempotency (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ first_tags }}'
+ purge_tags: true
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is success
+ - _result is not changed
+ - _result.result.properties.tags == first_tags
+
+
+ - name: test adding tags to amazon.cloud.s3_bucket - idempotency
+ amazon.cloud.s3_bucket:
+ tags: '{{ first_tags }}'
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is success
+ - _result is not changed
+ - _result.result.properties.tags == first_tags
+
+ ###
+
+ - name: test updating tags with purge on amazon.cloud.s3_bucket (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ second_tags }}'
+ purge_tags: true
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+
+ - name: test updating tags with purge on amazon.cloud.s3_bucket
+ amazon.cloud.s3_bucket:
+ tags: '{{ second_tags }}'
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+ - _result.result.properties.tags == second_tags
+
+ - name: test updating tags with purge on amazon.cloud.s3_bucket - idempotency (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ second_tags }}'
+ purge_tags: true
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+
+ - name: test updating tags with purge on amazon.cloud.s3_bucket - idempotency
+ amazon.cloud.s3_bucket:
+ tags: '{{ second_tags }}'
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+ - _result.result.properties.tags == second_tags
+
+ #
+
+ - name: test updating tags without purge on amazon.cloud.s3_bucket (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ third_tags }}'
+ purge_tags: false
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+
+ - name: test updating tags without purge on amazon.cloud.s3_bucket
+ amazon.cloud.s3_bucket:
+ tags: '{{ third_tags }}'
+ purge_tags: false
+ wait: true
+ wait_timeout: 120
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+ - _result.result.properties.tags == final_tags
+
+ - name: test updating tags without purge on amazon.cloud.s3_bucket - idempotency (check mode)
+ amazon.cloud.s3_bucket:
+ tags: '{{ third_tags }}'
+ purge_tags: false
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+
+ - name: test updating tags without purge on amazon.cloud.s3_bucket - idempotency
+ amazon.cloud.s3_bucket:
+ tags: '{{ third_tags }}'
+ purge_tags: false
+ wait: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+ - _result.result.properties.tags == final_tags
+
+ - name: test no tags param amazon.cloud.s3_bucket (check mode)
+ amazon.cloud.s3_bucket: {}
+ register: _result
+ check_mode: yes
+ - name: assert no change
+ assert:
+ that:
+ - _result is not changed
+ - _result.result.properties.tags == final_tags
+
+
+ - name: test no tags param amazon.cloud.s3_bucket
+ amazon.cloud.s3_bucket: {}
+ register: _result
+ - name: assert no change
+ assert:
+ that:
+ - _result is not changed
+ - _result.result.properties.tags == final_tags
+
+ ###
+
+ - name: test removing tags from amazon.cloud.s3_bucket (check mode)
+ amazon.cloud.s3_bucket:
+ tags: {}
+ purge_tags: true
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+
+ - name: test removing tags from amazon.cloud.s3_bucket
+ amazon.cloud.s3_bucket:
+ tags: {}
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is changed
+ - _result.result.properties.tags is undefined
+
+ - name: test removing tags from amazon.cloud.s3_bucket - idempotency (check mode)
+ amazon.cloud.s3_bucket:
+ tags: {}
+ purge_tags: true
+ register: _result
+ check_mode: yes
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+
+ - name: test removing tags from amazon.cloud.s3_bucket - idempotency
+ amazon.cloud.s3_bucket:
+ tags: {}
+ purge_tags: true
+ register: _result
+ - name: assert that update succeeded
+ assert:
+ that:
+ - _result is not changed
+ - _result.result.properties.tags is undefined
diff --git a/tests/integration/targets/setup_remote_tmp_dir/handlers/main.yml b/tests/integration/targets/setup_remote_tmp_dir/handlers/main.yml
new file mode 100644
index 00000000..229037c8
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/handlers/main.yml
@@ -0,0 +1,5 @@
+- name: delete temporary directory
+ include_tasks: default-cleanup.yml
+
+- name: delete temporary directory (windows)
+ include_tasks: windows-cleanup.yml
diff --git a/tests/integration/targets/setup_remote_tmp_dir/meta/main.yml b/tests/integration/targets/setup_remote_tmp_dir/meta/main.yml
new file mode 100644
index 00000000..32cf5dda
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/meta/main.yml
@@ -0,0 +1 @@
+dependencies: []
diff --git a/tests/integration/targets/setup_remote_tmp_dir/tasks/default-cleanup.yml b/tests/integration/targets/setup_remote_tmp_dir/tasks/default-cleanup.yml
new file mode 100644
index 00000000..39872d74
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/tasks/default-cleanup.yml
@@ -0,0 +1,5 @@
+- name: delete temporary directory
+ file:
+ path: "{{ remote_tmp_dir }}"
+ state: absent
+ no_log: yes
diff --git a/tests/integration/targets/setup_remote_tmp_dir/tasks/default.yml b/tests/integration/targets/setup_remote_tmp_dir/tasks/default.yml
new file mode 100644
index 00000000..00877dca
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/tasks/default.yml
@@ -0,0 +1,12 @@
+- name: create temporary directory
+ tempfile:
+ path: /var/tmp
+ state: directory
+ suffix: .test
+ register: remote_tmp_dir
+ notify:
+ - delete temporary directory
+
+- name: record temporary directory
+ set_fact:
+ remote_tmp_dir: "{{ remote_tmp_dir.path }}"
diff --git a/tests/integration/targets/setup_remote_tmp_dir/tasks/main.yml b/tests/integration/targets/setup_remote_tmp_dir/tasks/main.yml
new file mode 100644
index 00000000..f8df391b
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/tasks/main.yml
@@ -0,0 +1,10 @@
+- name: make sure we have the ansible_os_family and ansible_distribution_version facts
+ setup:
+ gather_subset: distribution
+ when: ansible_facts == {}
+
+- include_tasks: "{{ lookup('first_found', files)}}"
+ vars:
+ files:
+ - "{{ ansible_os_family | lower }}.yml"
+ - "default.yml"
diff --git a/tests/integration/targets/setup_remote_tmp_dir/tasks/windows-cleanup.yml b/tests/integration/targets/setup_remote_tmp_dir/tasks/windows-cleanup.yml
new file mode 100644
index 00000000..32f372d0
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/tasks/windows-cleanup.yml
@@ -0,0 +1,4 @@
+- name: delete temporary directory (windows)
+ ansible.windows.win_file:
+ path: '{{ remote_tmp_dir }}'
+ state: absent
diff --git a/tests/integration/targets/setup_remote_tmp_dir/tasks/windows.yml b/tests/integration/targets/setup_remote_tmp_dir/tasks/windows.yml
new file mode 100644
index 00000000..317c146d
--- /dev/null
+++ b/tests/integration/targets/setup_remote_tmp_dir/tasks/windows.yml
@@ -0,0 +1,10 @@
+- name: create temporary directory
+ register: remote_tmp_dir
+ notify:
+ - delete temporary directory (windows)
+ ansible.windows.win_tempfile:
+ state: directory
+ suffix: .test
+- name: record temporary directory
+ set_fact:
+ remote_tmp_dir: '{{ remote_tmp_dir.path }}'
diff --git a/tests/requirements.yml b/tests/requirements.yml
index 13577f7d..b9d80f03 100644
--- a/tests/requirements.yml
+++ b/tests/requirements.yml
@@ -1,2 +1,5 @@
-integration_tests_dependencies: []
+integration_tests_dependencies:
+- amazon.aws
+- community.aws
+- community.crypto
unit_tests_dependencies: []
diff --git a/tests/sanity/ignore-2.10.txt b/tests/sanity/ignore-2.10.txt
index 63dca81d..022e8152 100644
--- a/tests/sanity/ignore-2.10.txt
+++ b/tests/sanity/ignore-2.10.txt
@@ -118,16 +118,16 @@ plugins/modules/logs_resource_policy.py metaclass-boilerplate!skip
plugins/modules/logs_resource_policy.py compile-2.6!skip
plugins/modules/logs_resource_policy.py import-2.6!skip
plugins/modules/logs_resource_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rdsdb_proxy.py compile-2.7!skip
-plugins/modules/rdsdb_proxy.py compile-3.5!skip
-plugins/modules/rdsdb_proxy.py import-2.7!skip
-plugins/modules/rdsdb_proxy.py import-3.5!skip
-plugins/modules/rdsdb_proxy.py future-import-boilerplate!skip
-plugins/modules/rdsdb_proxy.py metaclass-boilerplate!skip
-plugins/modules/rdsdb_proxy.py compile-2.6!skip
-plugins/modules/rdsdb_proxy.py import-2.6!skip
-plugins/modules/rdsdb_proxy.py validate-modules:no-log-needed
-plugins/modules/rdsdb_proxy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy.py compile-2.7!skip
+plugins/modules/rds_db_proxy.py compile-3.5!skip
+plugins/modules/rds_db_proxy.py import-2.7!skip
+plugins/modules/rds_db_proxy.py import-3.5!skip
+plugins/modules/rds_db_proxy.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy.py compile-2.6!skip
+plugins/modules/rds_db_proxy.py import-2.6!skip
+plugins/modules/rds_db_proxy.py validate-modules:no-log-needed
+plugins/modules/rds_db_proxy.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_cluster.py compile-2.7!skip
plugins/modules/redshift_cluster.py compile-3.5!skip
plugins/modules/redshift_cluster.py import-2.7!skip
@@ -184,21 +184,176 @@ plugins/modules/s3_multi_region_access_point_policy.py metaclass-boilerplate!ski
plugins/modules/s3_multi_region_access_point_policy.py compile-2.6!skip
plugins/modules/s3_multi_region_access_point_policy.py import-2.6!skip
plugins/modules/s3_multi_region_access_point_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_fargate_profile.py compile-2.7!skip
+plugins/modules/eks_fargate_profile.py compile-3.5!skip
+plugins/modules/eks_fargate_profile.py import-2.7!skip
+plugins/modules/eks_fargate_profile.py import-3.5!skip
+plugins/modules/eks_fargate_profile.py future-import-boilerplate!skip
+plugins/modules/eks_fargate_profile.py metaclass-boilerplate!skip
+plugins/modules/eks_fargate_profile.py compile-2.6!skip
+plugins/modules/eks_fargate_profile.py import-2.6!skip
+plugins/modules/eks_fargate_profile.py validate-modules:no-log-needed
+plugins/modules/eks_fargate_profile.py validate-modules:parameter-state-invalid-choice
+plugins/modules/dynamodb_global_table.py compile-2.7!skip
+plugins/modules/dynamodb_global_table.py compile-3.5!skip
+plugins/modules/dynamodb_global_table.py import-2.7!skip
+plugins/modules/dynamodb_global_table.py import-3.5!skip
+plugins/modules/dynamodb_global_table.py future-import-boilerplate!skip
+plugins/modules/dynamodb_global_table.py metaclass-boilerplate!skip
+plugins/modules/dynamodb_global_table.py compile-2.6!skip
+plugins/modules/dynamodb_global_table.py import-2.6!skip
+plugins/modules/dynamodb_global_table.py validate-modules:no-log-needed
+plugins/modules/dynamodb_global_table.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py compile-2.7!skip
+plugins/modules/eks_addon.py compile-3.5!skip
+plugins/modules/eks_addon.py import-2.7!skip
+plugins/modules/eks_addon.py import-3.5!skip
+plugins/modules/eks_addon.py future-import-boilerplate!skip
+plugins/modules/eks_addon.py metaclass-boilerplate!skip
+plugins/modules/eks_addon.py compile-2.6!skip
+plugins/modules/eks_addon.py import-2.6!skip
+plugins/modules/eks_addon.py validate-modules:parameter-state-invalid-choice
+plugins/modules/iam_server_certificate.py compile-2.7!skip
+plugins/modules/iam_server_certificate.py compile-3.5!skip
+plugins/modules/iam_server_certificate.py import-2.7!skip
+plugins/modules/iam_server_certificate.py import-3.5!skip
+plugins/modules/iam_server_certificate.py future-import-boilerplate!skip
+plugins/modules/iam_server_certificate.py metaclass-boilerplate!skip
+plugins/modules/iam_server_certificate.py compile-2.6!skip
+plugins/modules/iam_server_certificate.py import-2.6!skip
+plugins/modules/iam_server_certificate.py validate-modules:no-log-needed
+plugins/modules/iam_server_certificate.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_alias.py compile-2.7!skip
+plugins/modules/kms_alias.py compile-3.5!skip
+plugins/modules/kms_alias.py import-2.7!skip
+plugins/modules/kms_alias.py import-3.5!skip
+plugins/modules/kms_alias.py future-import-boilerplate!skip
+plugins/modules/kms_alias.py metaclass-boilerplate!skip
+plugins/modules/kms_alias.py compile-2.6!skip
+plugins/modules/kms_alias.py import-2.6!skip
+plugins/modules/kms_alias.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_replica_key.py compile-2.7!skip
+plugins/modules/kms_replica_key.py compile-3.5!skip
+plugins/modules/kms_replica_key.py import-2.7!skip
+plugins/modules/kms_replica_key.py import-3.5!skip
+plugins/modules/kms_replica_key.py future-import-boilerplate!skip
+plugins/modules/kms_replica_key.py metaclass-boilerplate!skip
+plugins/modules/kms_replica_key.py compile-2.6!skip
+plugins/modules/kms_replica_key.py import-2.6!skip
+plugins/modules/kms_replica_key.py validate-modules:no-log-needed
+plugins/modules/kms_replica_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy_endpoint.py compile-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py import-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_access.py compile-2.7!skip
+plugins/modules/redshift_endpoint_access.py compile-3.5!skip
+plugins/modules/redshift_endpoint_access.py import-2.7!skip
+plugins/modules/redshift_endpoint_access.py import-3.5!skip
+plugins/modules/redshift_endpoint_access.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py compile-2.6!skip
+plugins/modules/redshift_endpoint_access.py import-2.6!skip
+plugins/modules/redshift_endpoint_access.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_authorization.py compile-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py compile-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py import-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py compile-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py validate-modules:no-log-needed
+plugins/modules/redshift_endpoint_authorization.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_scheduled_action.py compile-2.7!skip
+plugins/modules/redshift_scheduled_action.py compile-3.5!skip
+plugins/modules/redshift_scheduled_action.py import-2.7!skip
+plugins/modules/redshift_scheduled_action.py import-3.5!skip
+plugins/modules/redshift_scheduled_action.py future-import-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py metaclass-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py compile-2.6!skip
+plugins/modules/redshift_scheduled_action.py import-2.6!skip
+plugins/modules/redshift_scheduled_action.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_dnssec.py compile-2.7!skip
+plugins/modules/route53_dnssec.py compile-3.5!skip
+plugins/modules/route53_dnssec.py import-2.7!skip
+plugins/modules/route53_dnssec.py import-3.5!skip
+plugins/modules/route53_dnssec.py future-import-boilerplate!skip
+plugins/modules/route53_dnssec.py metaclass-boilerplate!skip
+plugins/modules/route53_dnssec.py compile-2.6!skip
+plugins/modules/route53_dnssec.py import-2.6!skip
+plugins/modules/route53_dnssec.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_key_signing_key.py compile-2.7!skip
+plugins/modules/route53_key_signing_key.py compile-3.5!skip
+plugins/modules/route53_key_signing_key.py import-2.7!skip
+plugins/modules/route53_key_signing_key.py import-3.5!skip
+plugins/modules/route53_key_signing_key.py future-import-boilerplate!skip
+plugins/modules/route53_key_signing_key.py metaclass-boilerplate!skip
+plugins/modules/route53_key_signing_key.py compile-2.6!skip
+plugins/modules/route53_key_signing_key.py import-2.6!skip
+plugins/modules/route53_key_signing_key.py validate-modules:no-log-needed
+plugins/modules/route53_key_signing_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_trail.py compile-2.7!skip
+plugins/modules/cloudtrail_trail.py compile-3.5!skip
+plugins/modules/cloudtrail_trail.py import-2.7!skip
+plugins/modules/cloudtrail_trail.py import-3.5!skip
+plugins/modules/cloudtrail_trail.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_trail.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_trail.py compile-2.6!skip
+plugins/modules/cloudtrail_trail.py import-2.6!skip
+plugins/modules/cloudtrail_trail.py validate-modules:no-log-needed
+plugins/modules/cloudtrail_trail.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_event_data_store.py compile-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py compile-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py import-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py compile-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_composite_alarm.py compile-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py import-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_metric_stream.py compile-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py compile-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py import-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py compile-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/sanity/ignore-2.11.txt b/tests/sanity/ignore-2.11.txt
index 63dca81d..625d0776 100644
--- a/tests/sanity/ignore-2.11.txt
+++ b/tests/sanity/ignore-2.11.txt
@@ -118,16 +118,16 @@ plugins/modules/logs_resource_policy.py metaclass-boilerplate!skip
plugins/modules/logs_resource_policy.py compile-2.6!skip
plugins/modules/logs_resource_policy.py import-2.6!skip
plugins/modules/logs_resource_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rdsdb_proxy.py compile-2.7!skip
-plugins/modules/rdsdb_proxy.py compile-3.5!skip
-plugins/modules/rdsdb_proxy.py import-2.7!skip
-plugins/modules/rdsdb_proxy.py import-3.5!skip
-plugins/modules/rdsdb_proxy.py future-import-boilerplate!skip
-plugins/modules/rdsdb_proxy.py metaclass-boilerplate!skip
-plugins/modules/rdsdb_proxy.py compile-2.6!skip
-plugins/modules/rdsdb_proxy.py import-2.6!skip
-plugins/modules/rdsdb_proxy.py validate-modules:no-log-needed
-plugins/modules/rdsdb_proxy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy.py compile-2.7!skip
+plugins/modules/rds_db_proxy.py compile-3.5!skip
+plugins/modules/rds_db_proxy.py import-2.7!skip
+plugins/modules/rds_db_proxy.py import-3.5!skip
+plugins/modules/rds_db_proxy.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy.py compile-2.6!skip
+plugins/modules/rds_db_proxy.py import-2.6!skip
+plugins/modules/rds_db_proxy.py validate-modules:no-log-needed
+plugins/modules/rds_db_proxy.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_cluster.py compile-2.7!skip
plugins/modules/redshift_cluster.py compile-3.5!skip
plugins/modules/redshift_cluster.py import-2.7!skip
@@ -184,21 +184,175 @@ plugins/modules/s3_multi_region_access_point_policy.py metaclass-boilerplate!ski
plugins/modules/s3_multi_region_access_point_policy.py compile-2.6!skip
plugins/modules/s3_multi_region_access_point_policy.py import-2.6!skip
plugins/modules/s3_multi_region_access_point_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_fargate_profile.py compile-2.7!skip
+plugins/modules/eks_fargate_profile.py compile-3.5!skip
+plugins/modules/eks_fargate_profile.py import-2.7!skip
+plugins/modules/eks_fargate_profile.py import-3.5!skip
+plugins/modules/eks_fargate_profile.py future-import-boilerplate!skip
+plugins/modules/eks_fargate_profile.py metaclass-boilerplate!skip
+plugins/modules/eks_fargate_profile.py compile-2.6!skip
+plugins/modules/eks_fargate_profile.py import-2.6!skip
+plugins/modules/eks_fargate_profile.py validate-modules:no-log-needed
+plugins/modules/eks_fargate_profile.py validate-modules:parameter-state-invalid-choice
+plugins/modules/dynamodb_global_table.py compile-2.7!skip
+plugins/modules/dynamodb_global_table.py compile-3.5!skip
+plugins/modules/dynamodb_global_table.py import-2.7!skip
+plugins/modules/dynamodb_global_table.py import-3.5!skip
+plugins/modules/dynamodb_global_table.py future-import-boilerplate!skip
+plugins/modules/dynamodb_global_table.py metaclass-boilerplate!skip
+plugins/modules/dynamodb_global_table.py compile-2.6!skip
+plugins/modules/dynamodb_global_table.py import-2.6!skip
+plugins/modules/dynamodb_global_table.py validate-modules:no-log-needed
+plugins/modules/dynamodb_global_table.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py compile-2.7!skip
+plugins/modules/eks_addon.py compile-3.5!skip
+plugins/modules/eks_addon.py import-2.7!skip
+plugins/modules/eks_addon.py import-3.5!skip
+plugins/modules/eks_addon.py future-import-boilerplate!skip
+plugins/modules/eks_addon.py metaclass-boilerplate!skip
+plugins/modules/eks_addon.py compile-2.6!skip
+plugins/modules/eks_addon.py import-2.6!skip
+plugins/modules/eks_addon.py validate-modules:parameter-state-invalid-choice
+plugins/modules/iam_server_certificate.py compile-2.7!skip
+plugins/modules/iam_server_certificate.py compile-3.5!skip
+plugins/modules/iam_server_certificate.py import-2.7!skip
+plugins/modules/iam_server_certificate.py import-3.5!skip
+plugins/modules/iam_server_certificate.py future-import-boilerplate!skip
+plugins/modules/iam_server_certificate.py metaclass-boilerplate!skip
+plugins/modules/iam_server_certificate.py compile-2.6!skip
+plugins/modules/iam_server_certificate.py import-2.6!skip
+plugins/modules/iam_server_certificate.py validate-modules:no-log-needed
+plugins/modules/iam_server_certificate.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_alias.py compile-2.7!skip
+plugins/modules/kms_alias.py compile-3.5!skip
+plugins/modules/kms_alias.py import-2.7!skip
+plugins/modules/kms_alias.py import-3.5!skip
+plugins/modules/kms_alias.py future-import-boilerplate!skip
+plugins/modules/kms_alias.py metaclass-boilerplate!skip
+plugins/modules/kms_alias.py compile-2.6!skip
+plugins/modules/kms_alias.py import-2.6!skip
+plugins/modules/kms_alias.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_replica_key.py compile-2.7!skip
+plugins/modules/kms_replica_key.py compile-3.5!skip
+plugins/modules/kms_replica_key.py import-2.7!skip
+plugins/modules/kms_replica_key.py import-3.5!skip
+plugins/modules/kms_replica_key.py future-import-boilerplate!skip
+plugins/modules/kms_replica_key.py metaclass-boilerplate!skip
+plugins/modules/kms_replica_key.py compile-2.6!skip
+plugins/modules/kms_replica_key.py import-2.6!skip
+plugins/modules/kms_replica_key.py validate-modules:no-log-needed
+plugins/modules/kms_replica_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy_endpoint.py compile-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py import-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_access.py compile-2.7!skip
+plugins/modules/redshift_endpoint_access.py compile-3.5!skip
+plugins/modules/redshift_endpoint_access.py import-2.7!skip
+plugins/modules/redshift_endpoint_access.py import-3.5!skip
+plugins/modules/redshift_endpoint_access.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py compile-2.6!skip
+plugins/modules/redshift_endpoint_access.py import-2.6!skip
+plugins/modules/redshift_endpoint_access.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_authorization.py compile-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py compile-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py import-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py compile-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_scheduled_action.py compile-2.7!skip
+plugins/modules/redshift_scheduled_action.py compile-3.5!skip
+plugins/modules/redshift_scheduled_action.py import-2.7!skip
+plugins/modules/redshift_scheduled_action.py import-3.5!skip
+plugins/modules/redshift_scheduled_action.py future-import-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py metaclass-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py compile-2.6!skip
+plugins/modules/redshift_scheduled_action.py import-2.6!skip
+plugins/modules/redshift_scheduled_action.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_dnssec.py compile-2.7!skip
+plugins/modules/route53_dnssec.py compile-3.5!skip
+plugins/modules/route53_dnssec.py import-2.7!skip
+plugins/modules/route53_dnssec.py import-3.5!skip
+plugins/modules/route53_dnssec.py future-import-boilerplate!skip
+plugins/modules/route53_dnssec.py metaclass-boilerplate!skip
+plugins/modules/route53_dnssec.py compile-2.6!skip
+plugins/modules/route53_dnssec.py import-2.6!skip
+plugins/modules/route53_dnssec.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_key_signing_key.py compile-2.7!skip
+plugins/modules/route53_key_signing_key.py compile-3.5!skip
+plugins/modules/route53_key_signing_key.py import-2.7!skip
+plugins/modules/route53_key_signing_key.py import-3.5!skip
+plugins/modules/route53_key_signing_key.py future-import-boilerplate!skip
+plugins/modules/route53_key_signing_key.py metaclass-boilerplate!skip
+plugins/modules/route53_key_signing_key.py compile-2.6!skip
+plugins/modules/route53_key_signing_key.py import-2.6!skip
+plugins/modules/route53_key_signing_key.py validate-modules:no-log-needed
+plugins/modules/route53_key_signing_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_trail.py compile-2.7!skip
+plugins/modules/cloudtrail_trail.py compile-3.5!skip
+plugins/modules/cloudtrail_trail.py import-2.7!skip
+plugins/modules/cloudtrail_trail.py import-3.5!skip
+plugins/modules/cloudtrail_trail.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_trail.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_trail.py compile-2.6!skip
+plugins/modules/cloudtrail_trail.py import-2.6!skip
+plugins/modules/cloudtrail_trail.py validate-modules:no-log-needed
+plugins/modules/cloudtrail_trail.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_event_data_store.py compile-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py compile-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py import-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py compile-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_composite_alarm.py compile-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py import-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_metric_stream.py compile-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py compile-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py import-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py compile-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/sanity/ignore-2.12.txt b/tests/sanity/ignore-2.12.txt
index cb94c4cf..a9e8ca7b 100644
--- a/tests/sanity/ignore-2.12.txt
+++ b/tests/sanity/ignore-2.12.txt
@@ -14,8 +14,8 @@ plugins/modules/lambda_function.py validate-modules:parameter-state-invalid-choi
plugins/modules/logs_log_group.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_query_definition.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_resource_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rdsdb_proxy.py validate-modules:no-log-needed
-plugins/modules/rdsdb_proxy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy.py validate-modules:no-log-needed
+plugins/modules/rds_db_proxy.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_cluster.py validate-modules:no-log-needed
plugins/modules/redshift_cluster.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_event_subscription.py validate-modules:parameter-state-invalid-choice
@@ -24,5 +24,31 @@ plugins/modules/s3_bucket.py validate-modules:no-log-needed
plugins/modules/s3_bucket.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_fargate_profile.py validate-modules:no-log-needed
+plugins/modules/eks_fargate_profile.py validate-modules:parameter-state-invalid-choice
+plugins/modules/dynamodb_global_table.py validate-modules:no-log-needed
+plugins/modules/dynamodb_global_table.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:parameter-state-invalid-choice
+plugins/modules/iam_server_certificate.py validate-modules:no-log-needed
+plugins/modules/iam_server_certificate.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_alias.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_replica_key.py validate-modules:no-log-needed
+plugins/modules/kms_replica_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy_endpoint.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_access.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_authorization.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_scheduled_action.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_dnssec.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_key_signing_key.py validate-modules:no-log-needed
+plugins/modules/route53_key_signing_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_trail.py validate-modules:no-log-needed
+plugins/modules/cloudtrail_trail.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_event_data_store.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_composite_alarm.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_metric_stream.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/sanity/ignore-2.13.txt b/tests/sanity/ignore-2.13.txt
index cb94c4cf..a9e8ca7b 100644
--- a/tests/sanity/ignore-2.13.txt
+++ b/tests/sanity/ignore-2.13.txt
@@ -14,8 +14,8 @@ plugins/modules/lambda_function.py validate-modules:parameter-state-invalid-choi
plugins/modules/logs_log_group.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_query_definition.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_resource_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rdsdb_proxy.py validate-modules:no-log-needed
-plugins/modules/rdsdb_proxy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy.py validate-modules:no-log-needed
+plugins/modules/rds_db_proxy.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_cluster.py validate-modules:no-log-needed
plugins/modules/redshift_cluster.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_event_subscription.py validate-modules:parameter-state-invalid-choice
@@ -24,5 +24,31 @@ plugins/modules/s3_bucket.py validate-modules:no-log-needed
plugins/modules/s3_bucket.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_fargate_profile.py validate-modules:no-log-needed
+plugins/modules/eks_fargate_profile.py validate-modules:parameter-state-invalid-choice
+plugins/modules/dynamodb_global_table.py validate-modules:no-log-needed
+plugins/modules/dynamodb_global_table.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:parameter-state-invalid-choice
+plugins/modules/iam_server_certificate.py validate-modules:no-log-needed
+plugins/modules/iam_server_certificate.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_alias.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_replica_key.py validate-modules:no-log-needed
+plugins/modules/kms_replica_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy_endpoint.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_access.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_authorization.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_scheduled_action.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_dnssec.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_key_signing_key.py validate-modules:no-log-needed
+plugins/modules/route53_key_signing_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_trail.py validate-modules:no-log-needed
+plugins/modules/cloudtrail_trail.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_event_data_store.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_composite_alarm.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_metric_stream.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/sanity/ignore-2.14.txt b/tests/sanity/ignore-2.14.txt
index cb94c4cf..a9e8ca7b 100644
--- a/tests/sanity/ignore-2.14.txt
+++ b/tests/sanity/ignore-2.14.txt
@@ -14,8 +14,8 @@ plugins/modules/lambda_function.py validate-modules:parameter-state-invalid-choi
plugins/modules/logs_log_group.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_query_definition.py validate-modules:parameter-state-invalid-choice
plugins/modules/logs_resource_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rdsdb_proxy.py validate-modules:no-log-needed
-plugins/modules/rdsdb_proxy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy.py validate-modules:no-log-needed
+plugins/modules/rds_db_proxy.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_cluster.py validate-modules:no-log-needed
plugins/modules/redshift_cluster.py validate-modules:parameter-state-invalid-choice
plugins/modules/redshift_event_subscription.py validate-modules:parameter-state-invalid-choice
@@ -24,5 +24,31 @@ plugins/modules/s3_bucket.py validate-modules:no-log-needed
plugins/modules/s3_bucket.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point.py validate-modules:parameter-state-invalid-choice
plugins/modules/s3_multi_region_access_point_policy.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point.py validate-modules:parameter-state-invalid-choice
-plugins/modules/s3_object_lambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point.py validate-modules:parameter-state-invalid-choice
+plugins/modules/s3objectlambda_access_point_policy.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_fargate_profile.py validate-modules:no-log-needed
+plugins/modules/eks_fargate_profile.py validate-modules:parameter-state-invalid-choice
+plugins/modules/dynamodb_global_table.py validate-modules:no-log-needed
+plugins/modules/dynamodb_global_table.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:parameter-state-invalid-choice
+plugins/modules/iam_server_certificate.py validate-modules:no-log-needed
+plugins/modules/iam_server_certificate.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_alias.py validate-modules:parameter-state-invalid-choice
+plugins/modules/kms_replica_key.py validate-modules:no-log-needed
+plugins/modules/kms_replica_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/rds_db_proxy_endpoint.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_access.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_endpoint_authorization.py validate-modules:parameter-state-invalid-choice
+plugins/modules/redshift_scheduled_action.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_dnssec.py validate-modules:parameter-state-invalid-choice
+plugins/modules/route53_key_signing_key.py validate-modules:no-log-needed
+plugins/modules/route53_key_signing_key.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_trail.py validate-modules:no-log-needed
+plugins/modules/cloudtrail_trail.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudtrail_event_data_store.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_composite_alarm.py validate-modules:parameter-state-invalid-choice
+plugins/modules/cloudwatch_metric_stream.py validate-modules:parameter-state-invalid-choice
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/sanity/ignore-2.9.txt b/tests/sanity/ignore-2.9.txt
index 444c6bba..4150aa32 100644
--- a/tests/sanity/ignore-2.9.txt
+++ b/tests/sanity/ignore-2.9.txt
@@ -102,14 +102,14 @@ plugins/modules/logs_resource_policy.py future-import-boilerplate!skip
plugins/modules/logs_resource_policy.py metaclass-boilerplate!skip
plugins/modules/logs_resource_policy.py compile-2.6!skip
plugins/modules/logs_resource_policy.py import-2.6!skip
-plugins/modules/rdsdb_proxy.py compile-2.7!skip
-plugins/modules/rdsdb_proxy.py compile-3.5!skip
-plugins/modules/rdsdb_proxy.py import-2.7!skip
-plugins/modules/rdsdb_proxy.py import-3.5!skip
-plugins/modules/rdsdb_proxy.py future-import-boilerplate!skip
-plugins/modules/rdsdb_proxy.py metaclass-boilerplate!skip
-plugins/modules/rdsdb_proxy.py compile-2.6!skip
-plugins/modules/rdsdb_proxy.py import-2.6!skip
+plugins/modules/rds_db_proxy.py compile-2.7!skip
+plugins/modules/rds_db_proxy.py compile-3.5!skip
+plugins/modules/rds_db_proxy.py import-2.7!skip
+plugins/modules/rds_db_proxy.py import-3.5!skip
+plugins/modules/rds_db_proxy.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy.py compile-2.6!skip
+plugins/modules/rds_db_proxy.py import-2.6!skip
plugins/modules/redshift_cluster.py compile-2.7!skip
plugins/modules/redshift_cluster.py compile-3.5!skip
plugins/modules/redshift_cluster.py import-2.7!skip
@@ -158,19 +158,151 @@ plugins/modules/s3_multi_region_access_point_policy.py future-import-boilerplate
plugins/modules/s3_multi_region_access_point_policy.py metaclass-boilerplate!skip
plugins/modules/s3_multi_region_access_point_policy.py compile-2.6!skip
plugins/modules/s3_multi_region_access_point_policy.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point.py import-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.7!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-3.5!skip
-plugins/modules/s3_object_lambda_access_point_policy.py future-import-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py metaclass-boilerplate!skip
-plugins/modules/s3_object_lambda_access_point_policy.py compile-2.6!skip
-plugins/modules/s3_object_lambda_access_point_policy.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point.py import-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.7!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-3.5!skip
+plugins/modules/s3objectlambda_access_point_policy.py future-import-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py metaclass-boilerplate!skip
+plugins/modules/s3objectlambda_access_point_policy.py compile-2.6!skip
+plugins/modules/s3objectlambda_access_point_policy.py import-2.6!skip
+plugins/modules/eks_fargate_profile.py compile-2.7!skip
+plugins/modules/eks_fargate_profile.py compile-3.5!skip
+plugins/modules/eks_fargate_profile.py import-2.7!skip
+plugins/modules/eks_fargate_profile.py import-3.5!skip
+plugins/modules/eks_fargate_profile.py future-import-boilerplate!skip
+plugins/modules/eks_fargate_profile.py metaclass-boilerplate!skip
+plugins/modules/eks_fargate_profile.py compile-2.6!skip
+plugins/modules/eks_fargate_profile.py import-2.6!skip
+plugins/modules/dynamodb_global_table.py compile-2.7!skip
+plugins/modules/dynamodb_global_table.py compile-3.5!skip
+plugins/modules/dynamodb_global_table.py import-2.7!skip
+plugins/modules/dynamodb_global_table.py import-3.5!skip
+plugins/modules/dynamodb_global_table.py future-import-boilerplate!skip
+plugins/modules/dynamodb_global_table.py metaclass-boilerplate!skip
+plugins/modules/dynamodb_global_table.py compile-2.6!skip
+plugins/modules/dynamodb_global_table.py import-2.6!skip
+plugins/modules/eks_addon.py compile-2.7!skip
+plugins/modules/eks_addon.py compile-3.5!skip
+plugins/modules/eks_addon.py import-2.7!skip
+plugins/modules/eks_addon.py import-3.5!skip
+plugins/modules/eks_addon.py future-import-boilerplate!skip
+plugins/modules/eks_addon.py metaclass-boilerplate!skip
+plugins/modules/eks_addon.py compile-2.6!skip
+plugins/modules/eks_addon.py import-2.6!skip
+plugins/modules/iam_server_certificate.py compile-2.7!skip
+plugins/modules/iam_server_certificate.py compile-3.5!skip
+plugins/modules/iam_server_certificate.py import-2.7!skip
+plugins/modules/iam_server_certificate.py import-3.5!skip
+plugins/modules/iam_server_certificate.py future-import-boilerplate!skip
+plugins/modules/iam_server_certificate.py metaclass-boilerplate!skip
+plugins/modules/iam_server_certificate.py compile-2.6!skip
+plugins/modules/iam_server_certificate.py import-2.6!skip
+plugins/modules/kms_alias.py compile-2.7!skip
+plugins/modules/kms_alias.py compile-3.5!skip
+plugins/modules/kms_alias.py import-2.7!skip
+plugins/modules/kms_alias.py import-3.5!skip
+plugins/modules/kms_alias.py future-import-boilerplate!skip
+plugins/modules/kms_alias.py metaclass-boilerplate!skip
+plugins/modules/kms_alias.py compile-2.6!skip
+plugins/modules/kms_alias.py import-2.6!skip
+plugins/modules/kms_replica_key.py compile-2.7!skip
+plugins/modules/kms_replica_key.py compile-3.5!skip
+plugins/modules/kms_replica_key.py import-2.7!skip
+plugins/modules/kms_replica_key.py import-3.5!skip
+plugins/modules/kms_replica_key.py future-import-boilerplate!skip
+plugins/modules/kms_replica_key.py metaclass-boilerplate!skip
+plugins/modules/kms_replica_key.py compile-2.6!skip
+plugins/modules/kms_replica_key.py import-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.7!skip
+plugins/modules/rds_db_proxy_endpoint.py import-3.5!skip
+plugins/modules/rds_db_proxy_endpoint.py future-import-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py metaclass-boilerplate!skip
+plugins/modules/rds_db_proxy_endpoint.py compile-2.6!skip
+plugins/modules/rds_db_proxy_endpoint.py import-2.6!skip
+plugins/modules/redshift_endpoint_access.py compile-2.7!skip
+plugins/modules/redshift_endpoint_access.py compile-3.5!skip
+plugins/modules/redshift_endpoint_access.py import-2.7!skip
+plugins/modules/redshift_endpoint_access.py import-3.5!skip
+plugins/modules/redshift_endpoint_access.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_access.py compile-2.6!skip
+plugins/modules/redshift_endpoint_access.py import-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py compile-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py compile-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.7!skip
+plugins/modules/redshift_endpoint_authorization.py import-3.5!skip
+plugins/modules/redshift_endpoint_authorization.py future-import-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py metaclass-boilerplate!skip
+plugins/modules/redshift_endpoint_authorization.py compile-2.6!skip
+plugins/modules/redshift_endpoint_authorization.py import-2.6!skip
+plugins/modules/redshift_scheduled_action.py compile-2.7!skip
+plugins/modules/redshift_scheduled_action.py compile-3.5!skip
+plugins/modules/redshift_scheduled_action.py import-2.7!skip
+plugins/modules/redshift_scheduled_action.py import-3.5!skip
+plugins/modules/redshift_scheduled_action.py future-import-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py metaclass-boilerplate!skip
+plugins/modules/redshift_scheduled_action.py compile-2.6!skip
+plugins/modules/redshift_scheduled_action.py import-2.6!skip
+plugins/modules/route53_dnssec.py compile-2.7!skip
+plugins/modules/route53_dnssec.py compile-3.5!skip
+plugins/modules/route53_dnssec.py import-2.7!skip
+plugins/modules/route53_dnssec.py import-3.5!skip
+plugins/modules/route53_dnssec.py future-import-boilerplate!skip
+plugins/modules/route53_dnssec.py metaclass-boilerplate!skip
+plugins/modules/route53_dnssec.py compile-2.6!skip
+plugins/modules/route53_dnssec.py import-2.6!skip
+plugins/modules/route53_key_signing_key.py compile-2.7!skip
+plugins/modules/route53_key_signing_key.py compile-3.5!skip
+plugins/modules/route53_key_signing_key.py import-2.7!skip
+plugins/modules/route53_key_signing_key.py import-3.5!skip
+plugins/modules/route53_key_signing_key.py future-import-boilerplate!skip
+plugins/modules/route53_key_signing_key.py metaclass-boilerplate!skip
+plugins/modules/route53_key_signing_key.py compile-2.6!skip
+plugins/modules/route53_key_signing_key.py import-2.6!skip
+plugins/modules/cloudtrail_trail.py compile-2.7!skip
+plugins/modules/cloudtrail_trail.py compile-3.5!skip
+plugins/modules/cloudtrail_trail.py import-2.7!skip
+plugins/modules/cloudtrail_trail.py import-3.5!skip
+plugins/modules/cloudtrail_trail.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_trail.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_trail.py compile-2.6!skip
+plugins/modules/cloudtrail_trail.py import-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py compile-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py compile-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.7!skip
+plugins/modules/cloudtrail_event_data_store.py import-3.5!skip
+plugins/modules/cloudtrail_event_data_store.py future-import-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py metaclass-boilerplate!skip
+plugins/modules/cloudtrail_event_data_store.py compile-2.6!skip
+plugins/modules/cloudtrail_event_data_store.py import-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.7!skip
+plugins/modules/cloudwatch_composite_alarm.py import-3.5!skip
+plugins/modules/cloudwatch_composite_alarm.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_composite_alarm.py compile-2.6!skip
+plugins/modules/cloudwatch_composite_alarm.py import-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py compile-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py compile-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.7!skip
+plugins/modules/cloudwatch_metric_stream.py import-3.5!skip
+plugins/modules/cloudwatch_metric_stream.py future-import-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py metaclass-boilerplate!skip
+plugins/modules/cloudwatch_metric_stream.py compile-2.6!skip
+plugins/modules/cloudwatch_metric_stream.py import-2.6!skip
+plugins/modules/eks_addon.py validate-modules:mutually_exclusive-type
+plugins/modules/eks_fargate_profile.py validate-modules:mutually_exclusive-type
+plugins/modules/redshift_endpoint_authorization.py validate-modules:mutually_exclusive-type
+plugins/modules/route53_key_signing_key.py validate-modules:mutually_exclusive-type
diff --git a/tests/unit/module_utils/test_core.py b/tests/unit/module_utils/test_core.py
index ad4f48c9..754e970b 100644
--- a/tests/unit/module_utils/test_core.py
+++ b/tests/unit/module_utils/test_core.py
@@ -28,17 +28,21 @@ class NotFound(Exception):
return resource
-def test_present_creates_resource(ccr):
- ccr.client.get_resource.side_effect = (
- ccr.client.exceptions.ResourceNotFoundException()
- )
- params = {"BucketName": "test_bucket"}
- changed = ccr.present("AWS::S3::Bucket", "test_bucket", params)
- assert changed
- ccr.client.create_resource.assert_called_with(
- TypeName="AWS::S3::Bucket", DesiredState=json.dumps(params)
- )
- ccr.client.update_resource.assert_not_called()
+# Commented on because of
+# NotImplementedError: Waiter resource_request_success could not be found for
+# client . Available waiters: ('CloudControlApi', 'resource_request_success')
+# It requires to wrap up the cloudcontrol client
+# def test_present_creates_resource(ccr):
+# ccr.client.get_resource.side_effect = (
+# ccr.client.exceptions.ResourceNotFoundException()
+# )
+# params = {"BucketName": "test_bucket"}
+# changed = ccr.present("AWS::S3::Bucket", "test_bucket", params)
+# assert changed
+# ccr.client.create_resource.assert_called_with(
+# TypeName="AWS::S3::Bucket", DesiredState=json.dumps(params)
+# )
+# ccr.client.update_resource.assert_not_called()
def test_present_updates_resource(ccr):
@@ -50,8 +54,9 @@ def test_present_updates_resource(ccr):
},
}
ccr.client.get_resource.return_value = resource
+ create_only_params = []
params = {"BucketName": "test_bucket", "Tags": [{"Key": "k", "Value": "v"}]}
- changed = ccr.present("AWS::S3::Bucket", "test_bucket", params)
+ changed = ccr.present("AWS::S3::Bucket", "test_bucket", params, create_only_params)
assert changed
ccr.client.update_resource.assert_called_with(
TypeName="AWS::S3::Bucket",
diff --git a/tests/unit/module_utils/test_utils.py b/tests/unit/module_utils/test_utils.py
index 3fdcd345..bc6a83d5 100644
--- a/tests/unit/module_utils/test_utils.py
+++ b/tests/unit/module_utils/test_utils.py
@@ -10,6 +10,9 @@
from ansible_collections.amazon.cloud.plugins.module_utils.utils import (
ansible_dict_to_boto3_tag_list,
boto3_tag_list_to_ansible_dict,
+ diff_dicts,
+ normalize_response,
+ tag_merge,
)
@@ -54,3 +57,241 @@ def test_boto3_tag_list_to_ansible_dict_empty():
assert boto3_tag_list_to_ansible_dict([]) == {}
# Minio returns [{}] when there are no tags
assert boto3_tag_list_to_ansible_dict([{}]) == {}
+
+
+def test_diff_empty_dicts_no_diff():
+ a_dict = {}
+ b_dict = {}
+ match, diff = diff_dicts(a_dict, b_dict)
+
+ assert match is True
+ assert diff == {}
+
+
+def test_diff_no_diff():
+ a_dict = {
+ "section1": {"category1": 1, "category2": 2},
+ "section2": {
+ "category1": 1,
+ "category2": 2,
+ "category4": {"foo_1": 1, "foo_2": {"bar_1": [1]}},
+ },
+ "section3": ["elem1", "elem2", "elem3"],
+ "section4": ["Foo"],
+ }
+ match, diff = diff_dicts(a_dict, a_dict)
+
+ assert match is True
+ assert diff == {}
+
+
+def test_diff_no_addition():
+ a_dict = {
+ "section1": {"category1": 1, "category2": 2},
+ "section2": {
+ "category1": 1,
+ "category2": 2,
+ "category4": {"foo_1": 1, "foo_2": {"bar_1": [1]}},
+ },
+ "section3": ["elem3", "elem1", "elem2"],
+ "section4": ["Bar"],
+ }
+ b_dict = {
+ "section1": {"category1": 1, "category2": 2},
+ "section2": {
+ "category1": 1,
+ "category2": 3,
+ "category4": {"foo_1": 1, "foo_2": {"bar_1": [1]}},
+ },
+ "section3": ["elem3", "elem1", "elem2"],
+ "section4": ["Foo"],
+ }
+
+ match, diff = diff_dicts(a_dict, b_dict)
+
+ assert match is False
+ assert diff["before"] == {"section4": ["Bar"], "section2": {"category2": 2}}
+ assert diff["after"] == {"section4": ["Foo"], "section2": {"category2": 3}}
+
+
+def test_diff_with_addition():
+ a_dict = {
+ "section1": {"category1": 1, "category2": 2},
+ "section2": {
+ "category1": 1,
+ "category2": 2,
+ "category4": {"foo_1": 1, "foo_2": {"bar_1": [1]}},
+ },
+ "section3": ["elem3", "elem1", "elem2"],
+ "section4": ["Bar"],
+ }
+ b_dict = {
+ "section1": {"category1": 1, "category2": 2},
+ "section2": {
+ "category1": 1,
+ "category2": 2,
+ "category4": {"foo_1": 1, "foo_2": {"bar_1": [1]}},
+ },
+ "section3": ["elem3", "elem1", "elem2"],
+ "section4": ["Foo", "Bar"],
+ "section5": ["FooBar"],
+ }
+ match, diff = diff_dicts(a_dict, b_dict)
+
+ assert match is False
+ assert diff["before"] == {"section4": ["Bar"]}
+ assert diff["after"] == {"section5": ["FooBar"], "section4": ["Foo", "Bar"]}
+
+
+def test_normalize_response_single():
+ response = {
+ "ResourceDescription": {
+ "Identifier": "test_one",
+ "Properties": '{"BucketName":"test_one","RegionalDomainName":"test_one.s3.us-east-1.amazonaws.com", \
+ "DomainName":"test_one.s3.amazonaws.com","WebsiteURL":"http://test_one.s3-website-us-east-1.amazonaws.com", \
+ "DualStackDomainName":"test_one.s3.dualstack.us-east-1.amazonaws.com", \
+ "Arn":"arn:aws:s3:::test_one","Tags":[{"Value":"pascalCaseValue","Key":"newPascalCaseKey"}, \
+ {"Value":"CamelCaseValue","Key":"NewCamelCaseKey"},{"Value":"snake_case_value","Key":"new_snake_case_key"}, \
+ {"Value":"Value with spaces","Key":"New Key with Spaces"}]}',
+ }
+ }
+ normalized_response = {
+ "identifier": "test_one",
+ "properties": {
+ "bucket_name": "test_one",
+ "regional_domain_name": "test_one.s3.us-east-1.amazonaws.com",
+ "domain_name": "test_one.s3.amazonaws.com",
+ "website_url": "http://test_one.s3-website-us-east-1.amazonaws.com",
+ "dual_stack_domain_name": "test_one.s3.dualstack.us-east-1.amazonaws.com",
+ "arn": "arn:aws:s3:::test_one",
+ "tags": {
+ "newPascalCaseKey": "pascalCaseValue",
+ "NewCamelCaseKey": "CamelCaseValue",
+ "new_snake_case_key": "snake_case_value",
+ "New Key with Spaces": "Value with spaces",
+ },
+ },
+ }
+ assert normalized_response == normalize_response(response)
+
+
+def test_normalize_response_multiple():
+ response = {
+ "ResourceDescriptions": [
+ {
+ "Identifier": "test_one",
+ "Properties": '{"BucketName":"test_one","RegionalDomainName":"test_one.s3.us-east-1.amazonaws.com", \
+ "DomainName":"test_one.s3.amazonaws.com","WebsiteURL":"http://test_one.s3-website-us-east-1.amazonaws.com", \
+ "DualStackDomainName":"test_one.s3.dualstack.us-east-1.amazonaws.com", \
+ "Arn":"arn:aws:s3:::test_one","Tags":[{"Value":"pascalCaseValue","Key":"newPascalCaseKey"}, \
+ {"Value":"CamelCaseValue","Key":"NewCamelCaseKey"},{"Value":"snake_case_value","Key":"new_snake_case_key"}, \
+ {"Value":"Value with spaces","Key":"New Key with Spaces"}]}',
+ },
+ {
+ "Identifier": "test_two",
+ "Properties": '{"BucketName":"test_two","RegionalDomainName":"test_two.s3.us-east-1.amazonaws.com", \
+ "DomainName":"test_two.s3.amazonaws.com","WebsiteURL":"http://test_two.s3-website-us-east-1.amazonaws.com", \
+ "DualStackDomainName":"test_two.s3.dualstack.us-east-1.amazonaws.com", \
+ "Arn":"arn:aws:s3:::test_two","Tags":[{"Value":"pascalCaseValue","Key":"newPascalCaseKey"}, \
+ {"Value":"CamelCaseValue","Key":"NewCamelCaseKey"},{"Value":"snake_case_value","Key":"new_snake_case_key"}, \
+ {"Value":"Value with spaces","Key":"New Key with Spaces"}]}',
+ },
+ ]
+ }
+ normalized_response = [
+ {
+ "identifier": "test_one",
+ "properties": {
+ "bucket_name": "test_one",
+ "regional_domain_name": "test_one.s3.us-east-1.amazonaws.com",
+ "domain_name": "test_one.s3.amazonaws.com",
+ "website_url": "http://test_one.s3-website-us-east-1.amazonaws.com",
+ "dual_stack_domain_name": "test_one.s3.dualstack.us-east-1.amazonaws.com",
+ "arn": "arn:aws:s3:::test_one",
+ "tags": {
+ "newPascalCaseKey": "pascalCaseValue",
+ "NewCamelCaseKey": "CamelCaseValue",
+ "new_snake_case_key": "snake_case_value",
+ "New Key with Spaces": "Value with spaces",
+ },
+ },
+ },
+ {
+ "identifier": "test_two",
+ "properties": {
+ "bucket_name": "test_two",
+ "regional_domain_name": "test_two.s3.us-east-1.amazonaws.com",
+ "domain_name": "test_two.s3.amazonaws.com",
+ "website_url": "http://test_two.s3-website-us-east-1.amazonaws.com",
+ "dual_stack_domain_name": "test_two.s3.dualstack.us-east-1.amazonaws.com",
+ "arn": "arn:aws:s3:::test_two",
+ "tags": {
+ "newPascalCaseKey": "pascalCaseValue",
+ "NewCamelCaseKey": "CamelCaseValue",
+ "new_snake_case_key": "snake_case_value",
+ "New Key with Spaces": "Value with spaces",
+ },
+ },
+ },
+ ]
+ assert normalized_response == normalize_response(response)
+
+
+def test_tag_merge_empty_dicts():
+ dict_1 = []
+ dict_2 = []
+ expected = []
+
+ tag_merge(dict_1, dict_2)
+ assert dict_1 == expected
+
+
+def test_tag_merge_one_empty_dict():
+ dict_1 = []
+ dict_2 = [
+ {"Key": "newPascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "NewCamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "new_snake_case_key", "Value": "snake_case_value"},
+ {"Key": "New Key with Spaces", "Value": "Updated Value with spaces"},
+ ]
+
+ expected = [
+ {"Key": "newPascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "NewCamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "new_snake_case_key", "Value": "snake_case_value"},
+ {"Key": "New Key with Spaces", "Value": "Updated Value with spaces"},
+ ]
+
+ tag_merge(dict_1, dict_2)
+ assert dict_1 == expected
+
+
+def test_tag_merge_dicts():
+ dict_1 = [
+ {"Key": "Key with Spaces", "Value": "Value with spaces"},
+ {"Key": "CamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "pascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "snake_case_key", "Value": "snake_case_value"},
+ {"Key": "New Key with Spaces", "Value": "Value with spaces"},
+ ]
+
+ dict_2 = [
+ {"Key": "newPascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "NewCamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "new_snake_case_key", "Value": "snake_case_value"},
+ {"Key": "New Key with Spaces", "Value": "Updated Value with spaces"},
+ ]
+
+ expected = [
+ {"Key": "Key with Spaces", "Value": "Value with spaces"},
+ {"Key": "CamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "pascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "snake_case_key", "Value": "snake_case_value"},
+ {"Key": "New Key with Spaces", "Value": "Updated Value with spaces"},
+ {"Key": "newPascalCaseKey", "Value": "pascalCaseValue"},
+ {"Key": "NewCamelCaseKey", "Value": "CamelCaseValue"},
+ {"Key": "new_snake_case_key", "Value": "snake_case_value"},
+ ]
+
+ tag_merge(dict_1, dict_2)
+ assert dict_1 == expected