diff --git a/CHANGELOG.rst b/CHANGELOG.rst new file mode 100644 index 000000000..24cbe9b47 --- /dev/null +++ b/CHANGELOG.rst @@ -0,0 +1,280 @@ +============================== +ibm.ibm_zos_core Release Notes +============================== + +.. contents:: Topics + + +v1.4.0-beta.1 +============= + +Release Summary +--------------- + +Release Date: '2021-06-23' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + + +Major Changes +------------- + +- zos_copy was updated to support the ansible.builtin.ssh connection options; for further reference refer to the SSH plugin documentation. +- zos_copy was updated to take into account the record length when the source is a USS file and the destination is a data set with a record length. This is done by inspecting the destination data set attributes and using these attributes to create a new data set. +- zos_copy was updated with the capability to define destination data sets from within the zos_copy module. In the case where you are copying to a data set destination that does not exist, you can now do so using the new zos_copy module option destination. +- zos_fetch was updated to support the ansible.builtin.ssh connection options; for further reference refer to the SSH plugin documentation. +- zos_job_output was updated to include the completion code (CC) for each individual job step as part of the ret_code response. +- zos_job_query was updated to handle when an invalid job ID or job name is used with the module and to return a proper response. +- zos_job_query was updated to support a 7-digit job ID for when there are greater than 99,999 jobs in the history. +- zos_job_submit was enhanced to check for 'JCL ERROR' when jobs are submitted and result in a proper module response.
+- zos_job_submit was updated to fail fast when a submitted job fails instead of waiting a predetermined time. +- zos_operator_action_query response messages were improved with more diagnostic information in the event an error is encountered. +- zos_ping was updated to remove the need for the zos_ssh connection plugin dependency. + +Deprecated Features +------------------- + +- zos_copy and zos_fetch option sftp_port has been deprecated. To set the SFTP port, use the supported options in the ansible.builtin.ssh plugin. Refer to the `SSH port `__ option to configure the port used during the module's SFTP transport. +- zos_copy module option model_ds has been removed. The model_ds logic is now automatically managed and data sets are either created based on the src data set or overridden by the new option destination_dataset. +- zos_ssh connection plugin has been removed; it is no longer required. You must remove all playbook references to connection ibm.ibm_zos_core.zos_ssh. + +Bugfixes +-------- + +- zos_job_output was updated to correct possible truncated responses for the ddname content. This would occur for jobs with very large amounts of content from a ddname. +- zos_ssh - connection plugin was updated to correct a bug in Ansible that + would result in playbook task retries overriding the SSH connection + retries. This is resolved by renaming the zos_ssh option + retries to reconnection_retries. The update addresses users of + ansible-core v2.9 which continues to use retries and users of + ansible-core v2.11 or later which uses reconnection_retries. + This also resolves a bug in the connection that referenced a deprecated + constant. (https://github.com/ansible-collections/ibm_zos_core/pull/328) + +New Modules +----------- + +- ibm.ibm_zos_core.zos_mount - Mount a z/OS file system. + +v1.3.4 +====== + +Release Summary +--------------- + +Release Date: '2022-03-06' +This changelog describes all changes made to the modules and plugins included +in this collection.
+For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + + +Bugfixes +-------- + +- zos_ssh - connection plugin was updated to correct a bug in Ansible that + would result in playbook task retries overriding the SSH connection + retries. This is resolved by renaming the zos_ssh option + retries to reconnection_retries. The update addresses users of + ansible-core v2.9 which continues to use retries and users of + ansible-core v2.11 or later which uses reconnection_retries. + This also resolves a bug in the connection that referenced a deprecated + constant. (https://github.com/ansible-collections/ibm_zos_core/pull/328) + +v1.3.3 +====== + +Release Summary +--------------- + +Release Date: '2022-04-26' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + + +Bugfixes +-------- + +- zos_copy was updated to correct deletion of temporary files and unwarranted deletes. - When the module would complete, a cleanup routine did not take into account that other processes had open temporary files and thus would error when trying to remove them. - When the module would copy a directory (source) from USS to another USS directory (destination), any files currently in the destination would be deleted. The module's behavior has changed such that files are no longer deleted unless the force option is set to true. When **force=true**, copying files or a directory to a USS destination will continue if it encounters existing files or directories and overwrite any corresponding files. +- zos_job_query was updated to correct a boolean condition that always evaluated to "CANCELLED". - When querying jobs that are either **CANCELLED** or have **FAILED**, they were always treated as **CANCELLED**.
+ +v1.3.1 +====== + +Release Summary +--------------- + +Release Date: '2022-04-27' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + + +Bugfixes +-------- + +- zos_ping was updated to support Automation Hub documentation generation. +- zos_ssh connection plugin was updated to prioritize the execution of modules written in REXX over other implementations, as is the case for zos_ping. + +Known Issues +------------ + +- When executing programs using zos_mvs_raw, you may encounter errors that originate in the implementation of the programs. Two such known issues are noted below, of which one has been addressed with an APAR. - zos_mvs_raw module execution fails when invoking Database Image Copy 2 Utility or Database Recovery Utility in conjunction with FlashCopy or Fast Replication. - zos_mvs_raw module execution fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is addressed by APAR PH28089. + +v1.3.0 +====== + +Release Summary +--------------- + +Release Date: '2021-04-19' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + +`New Playbooks `__ + - Authorize and synchronize APF authorized libraries on z/OS from a configuration file cloned from GitHub + - Automate program execution with copy, sort and fetch data sets on z/OS playbook. + - Automate user management with add, remove, grant permission, generate + passwords, create zFS, mount zFS and send email notifications when deployed + to Ansible Tower or AWX with the manage z/OS Users Using Ansible playbook.
+ - Use the configure Python and ZOAU Installation playbook to scan the + **z/OS** target to find the latest supported configuration and generate + inventory and a variables configuration. + - Automate software management with SMP/E Playbooks + + +Minor Changes +------------- + +- All modules support relative paths and remove choice case sensitivity. +- zos_data_set added support to allocate and format zFS data sets. +- zos_operator supports new options **wait** and **wait_time_s** such that you can specify that zos_operator wait for the full **wait_time_s** or return as soon as the first operator command executes. + +Bugfixes +-------- + +- Action plugin zos_copy was updated to support Python 2.7. +- Job utility is an internal library used by several modules. It has been updated to use a custom-written parsing routine capable of handling special characters to prevent job-related reading operations from failing when a special character is encountered. +- Module zos_copy was updated to fail gracefully when it encounters a non-zero return code. +- Module zos_copy was updated to support copying data set members that are program objects to a PDSE. Prior to this update, copying data set members would yield an error; - FSUM8976 Error writing to PDSE member +- Module zos_job_submit referenced a non-existent option and was corrected to **wait_time_s**. +- Module zos_job_submit was updated to remove all trailing **\r** from jobs that are submitted from the controller. +- Support was added to module zos_tso_command for command output that contains special characters. +- Playbook zos_operator_basics.yaml has been updated to use end in the WTO reply over the previous use of cancel. Using cancel is not a valid reply and results in an execution error. + +Known Issues +------------ + +- When executing programs using zos_mvs_raw, you may encounter errors that originate in the implementation of the programs.
Two such known issues are noted below, of which one has been addressed with an APAR. - zos_mvs_raw module execution fails when invoking Database Image Copy 2 Utility or Database Recovery Utility in conjunction with FlashCopy or Fast Replication. - zos_mvs_raw module execution fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is addressed by APAR PH28089. + +New Modules +----------- + +- ibm.ibm_zos_core.zos_apf - Add or remove libraries to Authorized Program Facility (APF) +- ibm.ibm_zos_core.zos_backup_restore - Backup and restore data sets and volumes +- ibm.ibm_zos_core.zos_blockinfile - Manage block of multi-line textual data on z/OS +- ibm.ibm_zos_core.zos_data_set - Manage data sets +- ibm.ibm_zos_core.zos_find - Find matching data sets + +v1.2.1 +====== + +Release Summary +--------------- + +Release Date: '2020-10-09' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__. + +Beginning with this release, all playbooks previously included with the collection +will be made available on the `playbook repository `__. + +Minor Changes +------------- + +- Documentation related to configuration has been migrated to the `playbook repository `__ +- Python 2.x support + +Bugfixes +-------- + +- zos_copy - fixed regex support and a dictionary merge operation +- zos_encode - removed TemporaryDirectory usage. +- zos_fetch - fixed quote import + +New Modules +----------- + +- ibm.ibm_zos_core.zos_lineinfile - Manage textual data on z/OS + +v1.1.0 +====== + +Release Summary +--------------- + +Release Date: '2020-01-26' +This changelog describes all changes made to the modules and plugins included +in this collection.
+For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + + +Minor Changes +------------- + +- Documentation updates +- Improved error handling and messages +- New filter that will filter a list of WTOR messages based on message text. + +New Modules +----------- + +- ibm.ibm_zos_core.zos_encode - Perform encoding operations. +- ibm.ibm_zos_core.zos_fetch - Fetch data from z/OS +- ibm.ibm_zos_core.zos_mvs_raw - Run a z/OS program. +- ibm.ibm_zos_core.zos_operator - Execute operator command +- ibm.ibm_zos_core.zos_operator_action_query - Display messages requiring action +- ibm.ibm_zos_core.zos_ping - Ping z/OS and check dependencies. +- ibm.ibm_zos_core.zos_tso_command - Execute TSO commands + +v1.0.0 +====== + +Release Summary +--------------- + +Release Date: '2020-03-18' +This changelog describes all changes made to the modules and plugins included +in this collection. +For additional details, such as required dependencies and availability, review +the collection's `release notes `__ + +Minor Changes +------------- + +- Documentation updates +- Module zos_data_set catalog support added + +Security Fixes +-------------- + +- Improved test, security and injection coverage +- Security vulnerabilities fixed + +New Modules +----------- + +- ibm.ibm_zos_core.zos_copy - Copy data to z/OS +- ibm.ibm_zos_core.zos_job_output - Display job output +- ibm.ibm_zos_core.zos_job_query - Query job status +- ibm.ibm_zos_core.zos_job_submit - Submit JCL diff --git a/changelogs/.plugin-cache.yaml b/changelogs/.plugin-cache.yaml new file mode 100644 index 000000000..9529295f8 --- /dev/null +++ b/changelogs/.plugin-cache.yaml @@ -0,0 +1,107 @@ +objects: + role: {} +plugins: + become: {} + cache: {} + callback: {} + cliconf: {} + connection: {} + httpapi: {} + inventory: {} + lookup: {} + module: + zos_apf: + description: Add or remove libraries to Authorized Program Facility (APF) + name: zos_apf + namespace: '' + version_added:
1.3.0 + zos_backup_restore: + description: Backup and restore data sets and volumes + name: zos_backup_restore + namespace: '' + version_added: 1.3.0 + zos_blockinfile: + description: Manage block of multi-line textual data on z/OS + name: zos_blockinfile + namespace: '' + version_added: 1.3.0 + zos_copy: + description: Copy data to z/OS + name: zos_copy + namespace: '' + version_added: 1.0.0 + zos_data_set: + description: Manage data sets + name: zos_data_set + namespace: '' + version_added: 1.3.0 + zos_encode: + description: Perform encoding operations. + name: zos_encode + namespace: '' + version_added: 1.1.0 + zos_fetch: + description: Fetch data from z/OS + name: zos_fetch + namespace: '' + version_added: 1.1.0 + zos_find: + description: Find matching data sets + name: zos_find + namespace: '' + version_added: 1.3.0 + zos_job_output: + description: Display job output + name: zos_job_output + namespace: '' + version_added: 1.0.0 + zos_job_query: + description: Query job status + name: zos_job_query + namespace: '' + version_added: 1.0.0 + zos_job_submit: + description: Submit JCL + name: zos_job_submit + namespace: '' + version_added: 1.0.0 + zos_lineinfile: + description: Manage textual data on z/OS + name: zos_lineinfile + namespace: '' + version_added: 1.2.1 + zos_mount: + description: Mount a z/OS file system. + name: zos_mount + namespace: '' + version_added: 1.4.0 + zos_mvs_raw: + description: Run a z/OS program. + name: zos_mvs_raw + namespace: '' + version_added: 1.1.0 + zos_operator: + description: Execute operator command + name: zos_operator + namespace: '' + version_added: 1.1.0 + zos_operator_action_query: + description: Display messages requiring action + name: zos_operator_action_query + namespace: '' + version_added: 1.1.0 + zos_ping: + description: Ping z/OS and check dependencies. 
+ name: zos_ping + namespace: '' + version_added: 1.1.0 + zos_tso_command: + description: Execute TSO commands + name: zos_tso_command + namespace: '' + version_added: 1.1.0 + netconf: {} + shell: {} + strategy: {} + vars: {} +version: 1.4.0-beta.1 diff --git a/changelogs/changelog.yaml b/changelogs/changelog.yaml new file mode 100644 index 000000000..c952f30b5 --- /dev/null +++ b/changelogs/changelog.yaml @@ -0,0 +1,320 @@ +ancestor: null +releases: + 1.0.0: + changes: + minor_changes: + - Documentation updates + - Module zos_data_set catalog support added + release_summary: 'Release Date: ''2020-03-18'' + + This changelog describes all changes made to the modules and plugins included + + in this collection. + + For additional details, such as required dependencies and availability, review + + the collection''s `release notes `__ ' + security_fixes: + - Improved test, security and injection coverage + - Security vulnerabilities fixed + fragments: + - v1.0.0_summary.yml + - v1.0.0_summary_minor.yml + - v1.0.0_summary_security.yml + modules: + - description: Copy data to z/OS + name: zos_copy + namespace: '' + - description: Display job output + name: zos_job_output + namespace: '' + - description: Query job status + name: zos_job_query + namespace: '' + - description: Submit JCL + name: zos_job_submit + namespace: '' + release_date: '2022-06-07' + 1.1.0: + changes: + minor_changes: + - Documentation updates + - Improved error handling and messages + - New filter that will filter a list of WTOR messages based on message text. + release_summary: 'Release Date: ''2020-01-26'' + + This changelog describes all changes made to the modules and plugins included + + in this collection. + + For additional details, such as required dependencies and availability, review + + the collection''s `release notes `__ + + ' + fragments: + - v1.1.0_summary.yml + - v1.1.0_summary_minor.yml + modules: + - description: Perform encoding operations.
+ name: zos_encode + namespace: '' + - description: Fetch data from z/OS + name: zos_fetch + namespace: '' + - description: Run a z/OS program. + name: zos_mvs_raw + namespace: '' + - description: Execute operator command + name: zos_operator + namespace: '' + - description: Display messages requiring action + name: zos_operator_action_query + namespace: '' + - description: Ping z/OS and check dependencies. + name: zos_ping + namespace: '' + - description: Execute TSO commands + name: zos_tso_command + namespace: '' + release_date: '2022-06-07' + 1.2.1: + changes: + bugfixes: + - zos_copy - fixed regex support and a dictionary merge operation + - zos_encode - removed TemporaryDirectory usage. + - zos_fetch - fixed quote import + minor_changes: + - Documentation related to configuration has been migrated to the `playbook + repository `__ + - Python 2.x support + release_summary: 'Release Date: ''2020-10-09'' + + This changelog describes all changes made to the modules and plugins included + + in this collection. + + For additional details, such as required dependencies and availability, review + + the collection''s `release notes `__. + + + Beginning with this release, all playbooks previously included with the collection + + will be made available on the `playbook repository `__.' + fragments: + - v1.2.1_summary.yml + - v1.2.1_summary_bugs.yml + - v1.2.1_summary_minor.yml + modules: + - description: Manage textual data on z/OS + name: zos_lineinfile + namespace: '' + release_date: '2022-06-07' + 1.3.0: + changes: + bugfixes: + - Action plugin zos_copy was updated to support Python 2.7. + - Job utility is an internal library used by several modules. It has been updated + to use a custom-written parsing routine capable of handling special characters + to prevent job-related reading operations from failing when a special character + is encountered. + - Module zos_copy was updated to fail gracefully when it encounters a non-zero + return code.
+ - Module zos_copy was updated to support copying data set members that are program + objects to a PDSE. Prior to this update, copying data set members would yield + an error; - FSUM8976 Error writing to PDSE member + - Module zos_job_submit referenced a non-existent option and was corrected to + **wait_time_s**. + - Module zos_job_submit was updated to remove all trailing **\r** from jobs + that are submitted from the controller. + - Support was added to module zos_tso_command for command output that contains + special characters. + - Playbook zos_operator_basics.yaml has been updated to use end in the WTO reply + over the previous use of cancel. Using cancel is not a valid reply and results + in an execution error. + known_issues: + - When executing programs using zos_mvs_raw, you may encounter errors that originate + in the implementation of the programs. Two such known issues are noted below, + of which one has been addressed with an APAR. - zos_mvs_raw module execution + fails when invoking Database Image Copy 2 Utility or Database Recovery Utility + in conjunction with FlashCopy or Fast Replication. - zos_mvs_raw module execution + fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". + This issue is addressed by APAR PH28089. + minor_changes: + - All modules support relative paths and remove choice case sensitivity. + - zos_data_set added support to allocate and format zFS data sets. + - zos_operator supports new options **wait** and **wait_time_s** such that you + can specify that zos_operator wait for the full **wait_time_s** or return as soon + as the first operator command executes.
+ release_summary: "Release Date: '2021-04-19'\nThis changelog describes all + changes made to the modules and plugins included\nin this collection.\nFor + additional details, such as required dependencies and availability, review\nthe + collection's `release notes `__ + \n\n`New Playbooks `__\n + \ - Authorize and synchronize APF authorized libraries on z/OS from a configuration + file cloned from GitHub\n - Automate program execution with copy, sort and + fetch data sets on z/OS playbook.\n - Automate user management with add, + remove, grant permission, generate\n passwords, create zFS, mount zFS and + send email notifications when deployed\n to Ansible Tower or AWX with the + manage z/OS Users Using Ansible playbook.\n - Use the configure Python and + ZOAU Installation playbook to scan the\n **z/OS** target to find the latest + supported configuration and generate\n inventory and a variables configuration.\n + \ - Automate software management with SMP/E Playbooks\n" + fragments: + - v1.3.0_summary.yml + - v1.3.0_summary_bugs.yml + - v1.3.0_summary_known.yml + - v1.3.0_summary_minor.yml + modules: + - description: Add or remove libraries to Authorized Program Facility (APF) + name: zos_apf + namespace: '' + - description: Backup and restore data sets and volumes + name: zos_backup_restore + namespace: '' + - description: Manage block of multi-line textual data on z/OS + name: zos_blockinfile + namespace: '' + - description: Manage data sets + name: zos_data_set + namespace: '' + - description: Find matching data sets + name: zos_find + namespace: '' + release_date: '2022-06-07' + 1.3.1: + changes: + bugfixes: + - zos_ping was updated to support Automation Hub documentation generation. + - zos_ssh connection plugin was updated to prioritize the execution of modules + written in REXX over other implementations, as is the case for zos_ping.
+ known_issues: + - When executing programs using zos_mvs_raw, you may encounter errors that originate + in the implementation of the programs. Two such known issues are noted below, + of which one has been addressed with an APAR. - zos_mvs_raw module execution + fails when invoking Database Image Copy 2 Utility or Database Recovery Utility + in conjunction with FlashCopy or Fast Replication. - zos_mvs_raw module execution + fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". + This issue is addressed by APAR PH28089. + release_summary: "Release Date: '2022-04-27'\nThis changelog describes all + changes made to the modules and plugins included\nin this collection.\nFor + additional details, such as required dependencies and availability, review\nthe + collection's `release notes `__ + \n" + fragments: + - v1.3.1_summary.yml + - v1.3.1_summary_bugs.yml + - v1.3.1_summary_known.yml + release_date: '2022-06-07' + 1.3.3: + changes: + bugfixes: + - zos_copy was updated to correct deletion of temporary files and unwarranted + deletes. - When the module would complete, a cleanup routine did not take + into account that other processes had open temporary files and thus would + error when trying to remove them. - When the module would copy a directory + (source) from USS to another USS directory (destination), any files currently + in the destination would be deleted. The module's behavior has changed such + that files are no longer deleted unless the force option is set to true. When + **force=true**, copying files or a directory to a USS destination will continue + if it encounters existing files or directories and overwrite any corresponding + files. + - zos_job_query was updated to correct a boolean condition that always evaluated + to "CANCELLED". - When querying jobs that are either **CANCELLED** or have + **FAILED**, they were always treated as **CANCELLED**.
+ release_summary: "Release Date: '2022-04-26'\nThis changelog describes all + changes made to the modules and plugins included\nin this collection.\nFor + additional details, such as required dependencies and availability, review\nthe + collection's `release notes `__ + \n" + fragments: + - v1.3.3_summary.yml + - v1.3.3_summary_bugs.yml + release_date: '2022-06-07' + 1.3.4: + changes: + bugfixes: + - "zos_ssh - connection plugin was updated to correct a bug in Ansible that\n + \ would result in playbook task retries overriding the SSH connection\n retries. + This is resolved by renaming the zos_ssh option\n retries to reconnection_retries. + The update addresses users of\n ansible-core v2.9 which continues to use + retries and users of\n ansible-core v2.11 or later which uses reconnection_retries.\n + \ This also resolves a bug in the connection that referenced a deprecated\n + \ constant. (https://github.com/ansible-collections/ibm_zos_core/pull/328)\n" + release_summary: "Release Date: '2022-03-06'\nThis changelog describes all + changes made to the modules and plugins included\nin this collection.\nFor + additional details, such as required dependencies and availability, review\nthe + collection's `release notes `__ + \n" + fragments: + - 328-rename-retries-to-reconnection_retries.yml + - v1.3.4_summary.yml + release_date: '2022-06-07' + 1.4.0-beta.1: + changes: + bugfixes: + - zos_job_output was updated to correct possible truncated responses for the + ddname content. This would occur for jobs with very large amounts of content + from a ddname. + - "zos_ssh - connection plugin was updated to correct a bug in Ansible that\n + \ would result in playbook task retries overriding the SSH connection\n retries. + This is resolved by renaming the zos_ssh option\n retries to reconnection_retries.
+ The update addresses users of\n ansible-core v2.9 which continues to use + retries and users of\n ansible-core v2.11 or later which uses reconnection_retries.\n + \ This also resolves a bug in the connection that referenced a deprecated\n + \ constant. (https://github.com/ansible-collections/ibm_zos_core/pull/328)\n" + deprecated_features: + - zos_copy and zos_fetch option sftp_port has been deprecated. To set the SFTP + port, use the supported options in the ansible.builtin.ssh plugin. Refer to + the `SSH port `__ + option to configure the port used during the module's SFTP transport. + - zos_copy module option model_ds has been removed. The model_ds logic is now + automatically managed and data sets are either created based on the src data + set or overridden by the new option destination_dataset. + - zos_ssh connection plugin has been removed; it is no longer required. You + must remove all playbook references to connection ibm.ibm_zos_core.zos_ssh. + major_changes: + - zos_copy was updated to support the ansible.builtin.ssh connection options; + for further reference refer to the SSH plugin documentation. + - zos_copy was updated to take into account the record length when the source + is a USS file and the destination is a data set with a record length. This + is done by inspecting the destination data set attributes and using these + attributes to create a new data set. + - zos_copy was updated with the capability to define destination data sets + from within the zos_copy module. In the case where you are copying to a data + set destination that does not exist, you can now do so using the new zos_copy + module option destination. + - zos_fetch was updated to support the ansible.builtin.ssh connection options; + for further reference refer to the SSH plugin documentation. + - zos_job_output was updated to include the completion code (CC) for each + individual job step as part of the ret_code response.
+ - zos_job_query was updated to handle when an invalid job ID or job name is + used with the module and to return a proper response. + - zos_job_query was updated to support a 7-digit job ID for when there + are greater than 99,999 jobs in the history. + - zos_job_submit was enhanced to check for 'JCL ERROR' when jobs are submitted + and result in a proper module response. + - zos_job_submit was updated to fail fast when a submitted job fails instead + of waiting a predetermined time. + - zos_operator_action_query response messages were improved with more diagnostic + information in the event an error is encountered. + - zos_ping was updated to remove the need for the zos_ssh connection plugin + dependency. + release_summary: "Release Date: '2021-06-23'\nThis changelog describes all + changes made to the modules and plugins included\nin this collection.\nFor + additional details, such as required dependencies and availability, review\nthe + collection's `release notes `__ + \n" + fragments: + - 328-rename-retries-to-reconnection_retries.yml + - v1.4.0-beta.1_summary.yml + - v1.4.0-beta.1_summary_bugs.yml + - v1.4.0-beta.1_summary_deprecated.yml + - v1.4.0-beta.1_summary_minor.yml + - v1.4.0-beta.1_summary_trivial.yml + modules: + - description: Mount a z/OS file system.
+ name: zos_mount + namespace: '' + release_date: '2022-06-10' diff --git a/changelogs/config.yaml b/changelogs/config.yaml new file mode 100644 index 000000000..dab101642 --- /dev/null +++ b/changelogs/config.yaml @@ -0,0 +1,32 @@ +changelog_filename_template: ../CHANGELOG.rst +changelog_filename_version_depth: 0 +changes_file: changelog.yaml +changes_format: combined +ignore_other_fragment_extensions: true +keep_fragments: true +mention_ancestor: false +new_plugins_after_name: removed_features +notesdir: fragments +prelude_section_name: release_summary +prelude_section_title: Release Summary +sanitize_changelog: true +sections: +- - major_changes + - Major Changes +- - minor_changes + - Minor Changes +- - breaking_changes + - Breaking Changes / Porting Guide +- - deprecated_features + - Deprecated Features +- - removed_features + - Removed Features (previously deprecated) +- - security_fixes + - Security Fixes +- - bugfixes + - Bugfixes +- - known_issues + - Known Issues +title: ibm.ibm_zos_core +trivial_section_name: trivial +use_fqcn: true diff --git a/changelogs/fragments/306-updates-zos-copy-architecture.yml b/changelogs/fragments/306-updates-zos-copy-architecture.yml new file mode 100644 index 000000000..b6ca0e51c --- /dev/null +++ b/changelogs/fragments/306-updates-zos-copy-architecture.yml @@ -0,0 +1,37 @@ +bugfixes: + - > + zos_copy - fixes a bug that did not create a data set on the specified + volume, issue #301. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) + zos_copy - fixes a bug where a number of attributes were not an option + when using `dest_data_set`. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) +minor_changes: + - > + zos_copy - introduced an updated creation policy referred to as precedence + rules such that if `dest_data_set` is set, this will take precedence. If + `dest` is an empty data set, the empty data set will be written with the + expectation that its attributes satisfy the copy.
If no precedence rule has been + exercised, `dest` will be created with the same attributes as `src`. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) + zos_copy - introduced new computation capabilities such that if `dest` is + a nonexistent data set, the attributes assigned will depend on the type of + `src`. If `src` is a USS file, `dest` will have a Fixed Block (FB) record + format and the remaining attributes will be computed. If `src` is binary, + `dest` will have a Fixed Block (FB) record format with a record length of + 80, a block size of 32760, and the remaining attributes will be computed. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) + zos_copy - enhanced the force option such that when `force=true` and the + remote file or data set `dest` is NOT empty, `dest` will be deleted + and recreated with the `src` data set attributes; otherwise it will be + recreated with the `dest` data set attributes. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) + zos_copy - option `dest_dataset` has been deprecated and removed in favor + of the new option `dest_data_set`. + (https://github.com/ansible-collections/ibm_zos_core/pull/306) + zos_copy - fixed a bug such that when a directory is copied from the controller + to the managed node and a mode is set, the mode is now applied to the + directory on the managed node. If the directory being copied contains files + and mode is set, mode will only be applied to the files being copied, not the + pre-existing files.
+ (https://github.com/ansible-collections/ibm_zos_core/pull/306) diff --git a/changelogs/fragments/328-rename-retries-to-reconnection_retries.yml b/changelogs/fragments/328-rename-retries-to-reconnection_retries.yml new file mode 100644 index 000000000..e2a882826 --- /dev/null +++ b/changelogs/fragments/328-rename-retries-to-reconnection_retries.yml @@ -0,0 +1,10 @@ +bugfixes: + - > + zos_ssh - connection plugin was updated to correct a bug in Ansible that + would result in playbook task retries overriding the SSH connection + retries. This is resolved by renaming the zos_ssh option + retries to reconnection_retries. The update addresses users of + ansible-core v2.9 which continues to use retries and users of + ansible-core v2.11 or later which uses reconnection_retries. + This also resolves a bug in the connection that referenced a deprecated + constant. (https://github.com/ansible-collections/ibm_zos_core/pull/328) diff --git a/changelogs/fragments/v1.0.0_summary.yml b/changelogs/fragments/v1.0.0_summary.yml new file mode 100644 index 000000000..32ff8de97 --- /dev/null +++ b/changelogs/fragments/v1.0.0_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2020-18-03' + This changelog describes all changes made to the modules and plugins included + in this collection.
+ For additional details such as required dependencies and availability review + the collections `release notes `__ \ No newline at end of file diff --git a/changelogs/fragments/v1.0.0_summary_minor.yml b/changelogs/fragments/v1.0.0_summary_minor.yml new file mode 100644 index 000000000..0a893d1b7 --- /dev/null +++ b/changelogs/fragments/v1.0.0_summary_minor.yml @@ -0,0 +1,3 @@ +minor_changes: + - Module zos_data_set catalog support added + - Documentation updates diff --git a/changelogs/fragments/v1.0.0_summary_security.yml b/changelogs/fragments/v1.0.0_summary_security.yml new file mode 100644 index 000000000..f4fcee71e --- /dev/null +++ b/changelogs/fragments/v1.0.0_summary_security.yml @@ -0,0 +1,3 @@ +security_fixes: + - Security vulnerabilities fixed + - Improved test, security and injection coverage \ No newline at end of file diff --git a/changelogs/fragments/v1.1.0_summary.yml b/changelogs/fragments/v1.1.0_summary.yml new file mode 100644 index 000000000..55e418593 --- /dev/null +++ b/changelogs/fragments/v1.1.0_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2020-26-01' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/changelogs/fragments/v1.1.0_summary_minor.yml b/changelogs/fragments/v1.1.0_summary_minor.yml new file mode 100644 index 000000000..e0f522473 --- /dev/null +++ b/changelogs/fragments/v1.1.0_summary_minor.yml @@ -0,0 +1,4 @@ +minor_changes: + - New Filter that will filter a list of WTOR messages based on message text.
+ - Improved error handling and messages + - Documentation updates diff --git a/changelogs/fragments/v1.2.1_summary.yml b/changelogs/fragments/v1.2.1_summary.yml new file mode 100644 index 000000000..8c44d3f30 --- /dev/null +++ b/changelogs/fragments/v1.2.1_summary.yml @@ -0,0 +1,9 @@ +release_summary: | + Release Date: '2020-10-09' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__. + + Beginning this release, all playbooks previously included with the collection + will be made available on the `playbook repository `__. \ No newline at end of file diff --git a/changelogs/fragments/v1.2.1_summary_bugs.yml b/changelogs/fragments/v1.2.1_summary_bugs.yml new file mode 100644 index 000000000..8d1bd0c46 --- /dev/null +++ b/changelogs/fragments/v1.2.1_summary_bugs.yml @@ -0,0 +1,4 @@ +bugfixes: + - zos_encode - removed TemporaryDirectory usage. + - zos_copy - fixed regex support, dictionary merge operation fix + - zos_fetch - fix quote import \ No newline at end of file diff --git a/changelogs/fragments/v1.2.1_summary_minor.yml b/changelogs/fragments/v1.2.1_summary_minor.yml new file mode 100644 index 000000000..fada6f3a7 --- /dev/null +++ b/changelogs/fragments/v1.2.1_summary_minor.yml @@ -0,0 +1,4 @@ +minor_changes: + - Python 2.x support + - Documentation related to configuration has been migrated to the + `playbook repository `__ \ No newline at end of file diff --git a/changelogs/fragments/v1.3.0_summary.yml b/changelogs/fragments/v1.3.0_summary.yml new file mode 100644 index 000000000..2243ffd33 --- /dev/null +++ b/changelogs/fragments/v1.3.0_summary.yml @@ -0,0 +1,17 @@ +release_summary: | + Release Date: '2021-19-04' + This changelog describes all changes made to the modules and plugins included + in this collection.
+ For additional details such as required dependencies and availability review + the collections `release notes `__ + + `New Playbooks `__ + - Authorize and synchronize APF authorized libraries on z/OS from a configuration file cloned from GitHub + - Automate program execution with copy, sort and fetch data sets on z/OS playbook. + - Automate user management with add, remove, grant permission, generate + passwords, create zFS, mount zFS and send email notifications when deployed + to Ansible Tower or AWX with the manage z/OS Users Using Ansible playbook. + - Use the configure Python and ZOAU Installation playbook to scan the + **z/OS** target to find the latest supported configuration and generate + inventory and a variables configuration. + - Automate software management with SMP/E Playbooks diff --git a/changelogs/fragments/v1.3.0_summary_bugs.yml b/changelogs/fragments/v1.3.0_summary_bugs.yml new file mode 100644 index 000000000..e79f9dd30 --- /dev/null +++ b/changelogs/fragments/v1.3.0_summary_bugs.yml @@ -0,0 +1,22 @@ +bugfixes: + - Action plugin zos_copy was updated to support Python 2.7. + - Module zos_copy was updated to fail gracefully when it + encounters a non-zero return code. + - Module zos_copy was updated to support copying data set members that + are program objects to a PDSE. Prior to this update, copying data set + members would yield an error; + - FSUM8976 Error writing to PDSE member + + - Job utility is an internal library used by several modules. It has been + updated to use a custom written parsing routine capable of handling + special characters to prevent job related reading operations from failing + when a special character is encountered. + - Module zos_job_submit was updated to remove all trailing **\r** from + jobs that are submitted from the controller. + - Module zos_job_submit referenced a non-existent option and was + corrected to **wait_time_s**.
+ - Module zos_tso_command support was added for when the command output + contained special characters. + - Playbook zos_operator_basics.yaml has been updated to use end in the + WTO reply over the previous use of cancel. Using cancel is not a + valid reply and results in an execution error. \ No newline at end of file diff --git a/changelogs/fragments/v1.3.0_summary_known.yml b/changelogs/fragments/v1.3.0_summary_known.yml new file mode 100644 index 000000000..3b6b4ca3b --- /dev/null +++ b/changelogs/fragments/v1.3.0_summary_known.yml @@ -0,0 +1,10 @@ +known_issues: + - When executing programs using zos_mvs_raw, you may encounter errors + that originate in the implementation of the programs. Two such known issues + are noted below of which one has been addressed with an APAR. + - zos_mvs_raw module execution fails when invoking + Database Image Copy 2 Utility or Database Recovery Utility in conjunction + with FlashCopy or Fast Replication. + - zos_mvs_raw module execution fails when invoking DFSRRC00 with parm + "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is + addressed by APAR PH28089. diff --git a/changelogs/fragments/v1.3.0_summary_minor.yml b/changelogs/fragments/v1.3.0_summary_minor.yml new file mode 100644 index 000000000..ccb99dd3f --- /dev/null +++ b/changelogs/fragments/v1.3.0_summary_minor.yml @@ -0,0 +1,6 @@ +minor_changes: + - zos_data_set added support to allocate and format zFS data sets. + - zos_operator supports new options **wait** and **wait_time_s** such + that you can specify that zos_operator wait the full **wait_time_s** or + return as soon as the first operator command executes. + - All modules support relative paths and remove choice case sensitivity. 
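The zos_operator enhancement above pairs **wait** with **wait_time_s**. A sketch of the two behaviors; the command and the 30-second value are illustrative, not module defaults:

```yaml
# Return as soon as the operator command executes.
- name: Query active jobs without waiting for delayed responses
  ibm.ibm_zos_core.zos_operator:
    cmd: 'd a,all'
    wait: false

# Wait the full interval so slower console responses are captured.
- name: Query active jobs and wait up to 30 seconds for output
  ibm.ibm_zos_core.zos_operator:
    cmd: 'd a,all'
    wait: true
    wait_time_s: 30
```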
diff --git a/changelogs/fragments/v1.3.1_summary.yml b/changelogs/fragments/v1.3.1_summary.yml new file mode 100644 index 000000000..d5ef338f6 --- /dev/null +++ b/changelogs/fragments/v1.3.1_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2022-27-04' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/changelogs/fragments/v1.3.1_summary_bugs.yml b/changelogs/fragments/v1.3.1_summary_bugs.yml new file mode 100644 index 000000000..1a21ea64e --- /dev/null +++ b/changelogs/fragments/v1.3.1_summary_bugs.yml @@ -0,0 +1,6 @@ +bugfixes: + - zos_ssh connection plugin was updated to prioritize the execution of + modules written in REXX over other implementations, as is the case for + zos_ping. + - zos_ping was updated to support Automation Hub documentation + generation. \ No newline at end of file diff --git a/changelogs/fragments/v1.3.1_summary_known.yml b/changelogs/fragments/v1.3.1_summary_known.yml new file mode 100644 index 000000000..3b6b4ca3b --- /dev/null +++ b/changelogs/fragments/v1.3.1_summary_known.yml @@ -0,0 +1,10 @@ +known_issues: + - When executing programs using zos_mvs_raw, you may encounter errors + that originate in the implementation of the programs. Two such known issues + are noted below of which one has been addressed with an APAR. + - zos_mvs_raw module execution fails when invoking + Database Image Copy 2 Utility or Database Recovery Utility in conjunction + with FlashCopy or Fast Replication. + - zos_mvs_raw module execution fails when invoking DFSRRC00 with parm + "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is + addressed by APAR PH28089.
diff --git a/changelogs/fragments/v1.3.3_summary.yml b/changelogs/fragments/v1.3.3_summary.yml new file mode 100644 index 000000000..6b380a486 --- /dev/null +++ b/changelogs/fragments/v1.3.3_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2022-26-04' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/changelogs/fragments/v1.3.3_summary_bugs.yml b/changelogs/fragments/v1.3.3_summary_bugs.yml new file mode 100644 index 000000000..16634b6de --- /dev/null +++ b/changelogs/fragments/v1.3.3_summary_bugs.yml @@ -0,0 +1,18 @@ +bugfixes: + - zos_copy was updated to correct deletion of all temporary files and + unwarranted deletes. + - When the module would complete, a cleanup routine did not take into + account that other processes had open temporary files and thus would + error when trying to remove them. + - When the module would copy a directory (source) from USS to another + USS directory (destination), any files currently in the destination + would be deleted. + The module's behavior has changed such that files are no longer deleted + unless the force option is set to true. When **force=true**, + copying files or a directory to a USS destination will continue if it + encounters existing files or directories and overwrite any + corresponding files. + - zos_job_query was updated to correct a boolean condition that always + evaluated to "CANCELLED". + - When querying jobs that are either **CANCELLED** or have **FAILED**, + they were always treated as **CANCELLED**.
\ No newline at end of file diff --git a/changelogs/fragments/v1.3.4_summary.yml b/changelogs/fragments/v1.3.4_summary.yml new file mode 100644 index 000000000..8eae1a989 --- /dev/null +++ b/changelogs/fragments/v1.3.4_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2022-03-06' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/changelogs/fragments/v1.4.0-beta.1_summary.yml b/changelogs/fragments/v1.4.0-beta.1_summary.yml new file mode 100644 index 000000000..cb1c79d1f --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.1_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2021-06-23' + This changelog describes all changes made to the modules and plugins included + in this collection. + For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/changelogs/fragments/v1.4.0-beta.1_summary_bugs.yml b/changelogs/fragments/v1.4.0-beta.1_summary_bugs.yml new file mode 100644 index 000000000..8b0c92794 --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.1_summary_bugs.yml @@ -0,0 +1,4 @@ +bugfixes: + - zos_job_output was updated to correct possible truncated responses for + the ddname content. This would occur for jobs with very large amounts + of content from a ddname. diff --git a/changelogs/fragments/v1.4.0-beta.1_summary_deprecated.yml b/changelogs/fragments/v1.4.0-beta.1_summary_deprecated.yml new file mode 100644 index 000000000..98d23000f --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.1_summary_deprecated.yml @@ -0,0 +1,10 @@ +deprecated_features: + - zos_ssh connection plugin has been removed, it is no longer required. You + must remove all playbook references to connection ibm.ibm_zos_core.zos_ssh. + - zos_copy module option model_ds has been removed.
The model_ds logic + is now automatically managed and data sets are either created based on the + src data set or overridden by the new option destination_dataset. + - zos_copy and zos_fetch option sftp_port has been deprecated. To + set the SFTP port, use the supported options in the ansible.builtin.ssh + plugin. Refer to the `SSH port `__ + option to configure the port used during the module's SFTP transport. diff --git a/changelogs/fragments/v1.4.0-beta.1_summary_minor.yml b/changelogs/fragments/v1.4.0-beta.1_summary_minor.yml new file mode 100644 index 000000000..d8ecdaaf5 --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.1_summary_minor.yml @@ -0,0 +1,28 @@ +major_changes: + - zos_ping was updated to remove the need for the zos_ssh + connection plugin dependency. + - zos_copy was updated to support the ansible.builtin.ssh connection options; + for further reference refer to the SSH plugin documentation. + - zos_copy was updated to take into account the record length when the + source is a USS file and the destination is a data set with a record + length. This is done by inspecting the destination data set attributes + and using these attributes to create a new data set. + - zos_copy was updated with the capabilities to define destination data sets + from within the zos_copy module. In the case where you are copying to a + data set destination that does not exist, you can now do so using the + new zos_copy module option destination. + - zos_fetch was updated to support the ansible.builtin.ssh + connection options; for further reference refer to the + SSH plugin documentation. + - zos_job_output was updated to include the completion code (CC) for each + individual job step as part of the ret_code response. + - zos_job_query was updated to support a 7 digit job number ID for when there + are greater than 99,999 jobs in the history.
+ - zos_job_query was updated to handle when an invalid job ID or job name is + used with the module and returns a proper response. + - zos_job_submit was updated to fail fast when a submitted job fails instead + of waiting a predetermined time. + - zos_job_submit was enhanced to check for 'JCL ERROR' when jobs are submitted + and result in a proper module response. + - zos_operator_action_query response messages were improved with more + diagnostic information in the event an error is encountered. diff --git a/changelogs/fragments/v1.4.0-beta.1_summary_trivial.yml b/changelogs/fragments/v1.4.0-beta.1_summary_trivial.yml new file mode 100644 index 000000000..eb2bdff58 --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.1_summary_trivial.yml @@ -0,0 +1,7 @@ +trivial: + - Documentation updates were made to + - zos_copy and zos_fetch about Co:Z SFTP support. + - zos_mvs_raw to remove a duplicate example. + - include documentation for all action plugins. + - update hyperlinks embedded in documentation. + - zos_operator to explain how to use single quotes in operator commands. diff --git a/changelogs/fragments/v1.4.0-beta.2_summary.yml b/changelogs/fragments/v1.4.0-beta.2_summary.yml new file mode 100644 index 000000000..85551b2c9 --- /dev/null +++ b/changelogs/fragments/v1.4.0-beta.2_summary.yml @@ -0,0 +1,6 @@ +release_summary: | + Release Date: '2022-09-30' + This changelog describes all changes made to the modules and plugins included + in this collection.
+ For additional details such as required dependencies and availability review + the collections `release notes `__ diff --git a/docs/source/modules/zos_apf.rst b/docs/source/modules/zos_apf.rst index 6f9d54575..7ac8bff39 100644 --- a/docs/source/modules/zos_apf.rst +++ b/docs/source/modules/zos_apf.rst @@ -121,7 +121,7 @@ persistent | **required**: False | **type**: str - | **default**: /* {mark} ANSIBLE MANAGED BLOCK */ + | **default**: /\* {mark} ANSIBLE MANAGED BLOCK \*/ backup diff --git a/docs/source/modules/zos_copy.rst b/docs/source/modules/zos_copy.rst index 6ed4d0980..f5b1954a4 100644 --- a/docs/source/modules/zos_copy.rst +++ b/docs/source/modules/zos_copy.rst @@ -17,7 +17,6 @@ zos_copy -- Copy data to z/OS Synopsis -------- - The :ref:`zos_copy ` module copies a file or data set from a local or a remote machine to a location on the remote machine. -- Use the :ref:`zos_fetch ` module to copy files or data sets from remote locations to the local machine. @@ -28,7 +27,7 @@ Parameters backup - Specifies whether a backup of destination should be created before copying data. + Specifies whether a backup of the destination should be created before copying data. When set to ``true``, the module creates a backup file or data set. @@ -41,13 +40,11 @@ backup backup_name Specify a unique USS file name or data set name for the destination backup. - If the destination (dest) is a USS file or path, the backup_name must be a file or path name, and the USS path or file must be an absolute path name. + If the destination ``dest`` is a USS file or path, the ``backup_name`` must be an absolute path name. - If the destination is an MVS data set, the backup_name must be an MVS data set name. + If the destination is an MVS data set name, the ``backup_name`` provided must meet data set naming conventions of one or more qualifiers, each from one to eight characters long, that are delimited by periods. - If the backup_name is not provided, the default backup_name will be used.
If the destination is a USS file or USS path, the name of the backup file will be the destination file or path name appended with a timestamp, e.g. ``/path/file_name.2020-04-23-08-32-29-bak.tar``. - - If the destination is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. + If the ``backup_name`` is not provided, the default ``backup_name`` will be used. If the ``dest`` is a USS file or USS path, the name of the backup file will be the destination file or path name appended with a timestamp, e.g. ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the ``dest`` is an MVS data set, it will be a data set with a randomly generated name. If ``dest`` is a data set member and ``backup_name`` is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. @@ -60,8 +57,6 @@ content Works only when ``dest`` is a USS file, sequential data set, or a partitioned data set member. - This is for simple values; for anything complex or with formatting, use `ansible.builtin.copy `_ - If ``dest`` is a directory, then content will be copied to ``/path/to/dest/inline_copy``. | **required**: False @@ -69,23 +64,27 @@ content dest - Remote absolute path or data set where the file should be copied to. + The remote absolute path or data set where the content should be copied to. - Destination can be a USS path or an MVS data set name. + ``dest`` can be a USS file, directory or MVS data set name. + + If ``src`` and ``dest`` are files and if the parent directory of ``dest`` does not exist, then the task will fail If ``dest`` is a nonexistent USS file, it will be created. - If ``dest`` is a nonexistent data set, storage management rules will be used to determine the volume where ``dest`` will be allocated. + If ``dest`` is a nonexistent data set, it will be created following the process outlined here and in the ``volume`` option. 
+ + If ``dest`` is a nonexistent data set, the attributes assigned will depend on the type of ``src``. If ``src`` is a USS file, ``dest`` will have a Fixed Block (FB) record format and the remaining attributes will be computed. If ``src`` is binary, ``dest`` will have a Fixed Block (FB) record format with a record length of 80, block size of 32760, and the remaining attributes will be computed. - If ``src`` and ``dest`` are files and if the parent directory of ``dest`` does not exist, then the task will fail. + When ``dest`` is a data set, precedence rules apply. If ``dest_data_set`` is set, this will take precedence over an existing data set. If ``dest`` is an empty data set, the empty data set will be written with the expectation its attributes satisfy the copy. Lastly, if no precedence rule has been exercised, ``dest`` will be created with the same attributes as ``src``. - When the ``dest`` is an existing VSAM (KSDS) or VSAM (ESDS), then source can be ESDS, KSDS or RRDS. The ``dest`` will be deleted and storage management rules will be used to determine the volume where ``dest`` will be allocated. + When the ``dest`` is an existing VSAM (KSDS) or VSAM (ESDS), then source can be an ESDS, a KSDS or an RRDS. The VSAM (KSDS) or VSAM (ESDS) ``dest`` will be deleted and recreated following the process outlined in the ``volume`` option. - When the ``dest`` is an existing VSAM (RRDS), then the source must be RRDS. The ``dest`` will be deleted and storage management rules will be used to determine the volume where ``dest`` will be allocated. + When the ``dest`` is an existing VSAM (RRDS), then the source must be an RRDS. The VSAM (RRDS) will be deleted and recreated following the process outlined in the ``volume`` option. - When ``dest`` is and existing VSAM (LDS), then source must be LDS. The ``dest`` will be deleted and storage management rules will be used to determine the volume where ``dest`` will be allocated.
+ When ``dest`` is an existing VSAM (LDS), then source must be an LDS. The VSAM (LDS) will be deleted and recreated following the process outlined in the ``volume`` option. - When ``dest`` is a data set, you can override storage management rules by specifying both ``volume`` and other optional DS specs (type, space, record size, etc). + When ``dest`` is a data set, you can override storage management rules by specifying ``volume`` if the storage class being used has GUARANTEED_SPACE=YES specified, otherwise, the allocation will fail. See ``volume`` for more volume related processes. | **required**: True | **type**: str @@ -96,8 +95,6 @@ encoding If ``encoding`` is not provided, the module determines which local and remote charsets to convert the data from and to. Note that this is only done for text data and not binary data. - If ``encoding`` is provided and ``src`` is an MVS data set, task will fail. - Only valid if ``is_binary`` is false. | **required**: False | **type**: str @@ -120,7 +117,11 @@ encoding force - If set to ``true``, the remote file or data set will be overwritten. + If set to ``true`` and the remote file or data set ``dest`` is empty, the ``dest`` will be reused. + + If set to ``true`` and the remote file or data set ``dest`` is NOT empty, the ``dest`` will be deleted and recreated with the ``src`` data set attributes, otherwise it will be recreated with the ``dest`` data set attributes. + + To backup data before any deletion, see parameters ``backup`` and ``backup_name``. + If set to ``false``, the file or data set will only be copied if the destination does not exist. @@ -163,7 +164,7 @@ mode The mode may also be specified as a symbolic mode (for example, ``u+rwx`` or ``u=rw,g=r,o=r``) or a special string `preserve`. - ``preserve`` means that the file will be given the same permissions as the source file. + *mode=preserve* means that the file will be given the same permissions as the source file.
| **required**: False | **type**: str @@ -198,11 +199,11 @@ src If ``src`` is a local path or a USS path, it can be absolute or relative. - If ``src`` is a directory, destination must be a partitioned data set or a USS directory. + If ``src`` is a directory, ``dest`` must be a partitioned data set or a USS directory. - If ``src`` is a file and dest ends with "/" or destination is a directory, the file is copied to the directory with the same filename as src. + If ``src`` is a file and ``dest`` ends with "/" or is a directory, the file is copied to the directory with the same filename as ``src``. - If ``src`` is a VSAM data set, destination must also be a VSAM. + If ``src`` is a VSAM data set, ``dest`` must also be a VSAM. Wildcards can be used to copy multiple PDS/PDSE members to another PDS/PDSE. @@ -236,19 +237,18 @@ volume | **type**: str -destination_dataset - These are settings to use when creating the destination data set +dest_data_set + Data set attributes to customize a ``dest`` data set to be copied into. 
| **required**: False | **type**: dict - dd_type + type Organization of the destination - | **required**: False + | **required**: True | **type**: str - | **default**: BASIC | **choices**: KSDS, ESDS, RRDS, LDS, SEQ, PDS, PDSE, MEMBER, BASIC @@ -259,7 +259,6 @@ destination_dataset | **required**: False | **type**: str - | **default**: 5 space_secondary @@ -269,7 +268,6 @@ destination_dataset | **required**: False | **type**: str - | **default**: 3 space_type @@ -279,7 +277,6 @@ destination_dataset | **required**: False | **type**: str - | **default**: M | **choices**: K, M, G, CYL, TRK @@ -290,7 +287,6 @@ destination_dataset | **required**: False | **type**: str - | **default**: FB | **choices**: FB, VB, FBA, VBA, U @@ -303,7 +299,6 @@ destination_dataset | **required**: False | **type**: int - | **default**: 80 block_size @@ -313,6 +308,74 @@ destination_dataset | **type**: int + directory_blocks + The number of directory blocks to allocate to the data set. + + | **required**: False + | **type**: int + + + key_offset + The key offset to use when creating a KSDS data set. + + *key_offset* is required when *type=KSDS*. + + *key_offset* should only be provided when *type=KSDS* + + | **required**: False + | **type**: int + + + key_length + The key length to use when creating a KSDS data set. + + *key_length* is required when *type=KSDS*. + + *key_length* should only be provided when *type=KSDS* + + | **required**: False + | **type**: int + + + sms_storage_class + The storage class for an SMS-managed dataset. + + Required for SMS-managed datasets that do not match an SMS-rule. + + Not valid for datasets that are not SMS-managed. + + Note that all non-linear VSAM datasets are SMS-managed. + + | **required**: False + | **type**: str + + + sms_data_class + The data class for an SMS-managed dataset. + + Optional for SMS-managed datasets that do not match an SMS-rule. + + Not valid for datasets that are not SMS-managed. 
+ + Note that all non-linear VSAM datasets are SMS-managed. + + | **required**: False + | **type**: str + + + sms_management_class + The management class for an SMS-managed dataset. + + Optional for SMS-managed datasets that do not match an SMS-rule. + + Not valid for datasets that are not SMS-managed. + + Note that all non-linear VSAM datasets are SMS-managed. + + | **required**: False + | **type**: str + + @@ -365,7 +428,7 @@ Examples from: UTF-8 to: IBM-037 - - name: Copy a VSAM (KSDS) to a VSAM (KSDS) + - name: Copy a VSAM (KSDS) to a VSAM (KSDS) zos_copy: src: SAMPLE.SRC.VSAM dest: SAMPLE.DEST.VSAM @@ -418,14 +481,16 @@ Examples src: HLQ.SAMPLE.PDSE dest: HLQ.EXISTING.PDSE remote_src: true + force: true - - name: Copy PDS member to a new PDS member. Replace if it already exists. + - name: Copy PDS member to a new PDS member. Replace if it already exists zos_copy: src: HLQ.SAMPLE.PDSE(SRCMEM) dest: HLQ.NEW.PDSE(DESTMEM) remote_src: true + force: true - - name: Copy a USS file to a PDSE member. If PDSE does not exist, allocate it. + - name: Copy a USS file to a PDSE member. If PDSE does not exist, allocate it zos_copy: src: /path/to/uss/src dest: DEST.PDSE.DATA.SET(MEMBER) @@ -443,7 +508,7 @@ Examples dest: /tmp/member remote_src: true - - name: Copy a PDS to a USS directory (/tmp/SRC.PDS). 
+ - name: Copy a PDS to a USS directory (/tmp/SRC.PDS) zos_copy: src: SRC.PDS dest: /tmp @@ -468,6 +533,20 @@ Examples volume: 'VOL033' remote_src: true + - name: Copy a USS file to a fully customized sequential data set + zos_copy: + src: /path/to/uss/src + dest: SOME.SEQ.DEST + remote_src: true + volume: '222222' + dest_data_set: + type: SEQ + space_primary: 10 + space_secondary: 3 + space_type: K + record_format: VB + record_length: 150 + diff --git a/docs/source/modules/zos_fetch.rst b/docs/source/modules/zos_fetch.rst index a8e81dfcd..7c6fdc3d3 100644 --- a/docs/source/modules/zos_fetch.rst +++ b/docs/source/modules/zos_fetch.rst @@ -16,7 +16,7 @@ zos_fetch -- Fetch data from z/OS Synopsis -------- -- This module fetches a UNIX System Services (USS) file, PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, or KSDS (VSAM data set) from a remote z/OS system. +- This module fetches a UNIX System Services (USS) file, PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, or KSDS (VSAM data set) from a remote z/OS system. - When fetching a sequential data set, the destination file name will be the same as the data set name. - When fetching a PDS or PDSE, the destination will be a directory with the same name as the PDS or PDSE. - When fetching a PDS/PDSE member, destination will be a file. @@ -31,7 +31,7 @@ Parameters src - Name of a UNIX System Services (USS) file, PS (sequential data set), PDS, PDSE, member of a PDS, PDSE or KSDS (VSAM data set). + Name of a UNIX System Services (USS) file, PS (sequential data set), PDS, PDSE, member of a PDS, PDSE or KSDS (VSAM data set). USS file paths should be absolute paths. diff --git a/docs/source/release_notes.rst b/docs/source/release_notes.rst index e2c56f5c0..d64143150 100644 --- a/docs/source/release_notes.rst +++ b/docs/source/release_notes.rst @@ -1,11 +1,86 @@ .. ........................................................................... -.. © Copyright IBM Corporation 2020, 2021 . +.. 
© Copyright IBM Corporation 2020, 2021, 2022 . .. ........................................................................... ======== Releases ======== +Version 1.4.0-beta.2 +==================== + +* Bug fixes and enhancements + + * Modules + + * ``zos_copy`` + + * introduced an updated creation policy referred to as precedence rules + such that if `dest_data_set` is set, it will take precedence. If + `dest` is an empty data set, the empty data set will be written with the + expectation its attributes satisfy the copy. If no precedence rule + has been exercised, `dest` will be created with the same attributes as + `src`. + * introduced new computation capabilities such that if `dest` is a nonexistent + data set, the attributes assigned will depend on the type of `src`. If + `src` is a USS file, `dest` will have a Fixed Block (FB) record format + and the remaining attributes will be computed. If `src` is binary, + `dest` will have a Fixed Block (FB) record format with a record length + of 80, block size of 32760, and the remaining attributes will be + computed. + * enhanced the force option such that when `force=true` and the remote file or + data set `dest` is NOT empty, the `dest` will be deleted and recreated + with the `src` data set attributes, otherwise it will be recreated with + the `dest` data set attributes. + * option `dest_dataset` has been deprecated and removed in favor + of the new option `dest_data_set`. + * fixes a bug such that when a directory is copied from the controller to the + managed node and a mode is set, the mode is applied to the directory + on the managed node. If the directory being copied contains files and + mode is set, mode will only be applied to the files being copied, not the + pre-existing files. + * fixes a bug that did not create a data set on the specified volume. + * fixes a bug where a number of attributes were not an option when using + `dest_data_set`.
+ + * Documentation + + * Review :ref:`version 1.4.0-beta.1` release notes for additional content. + +* Deprecated or removed + + * ``zos_copy`` module option **destination_dataset** has been renamed to + **dest_data_set**. + + * Review :ref:`version 1.4.0-beta.1` release notes for additional content. + + +Availability +------------ + +* `Galaxy`_ +* `GitHub`_ + +Reference +--------- + +* Supported by `z/OS V2R3`_ or later +* Supported by the `z/OS® shell`_ +* Supported by `IBM Open Enterprise SDK for Python`_ 3.8.2 or later +* Supported by IBM `Z Open Automation Utilities 1.1.0`_ and + `Z Open Automation Utilities 1.1.1`_ + +Known Issues +------------ + +* Review :ref:`version 1.4.0-beta.1` release notes for additional content. + +Deprecation Notices +------------------- +* Review :ref:`version 1.4.0-beta.1` release notes for additional content. + +.. _my-reference-label: + Version 1.4.0-beta.1 ==================== @@ -163,6 +238,46 @@ release. .. _SSH port: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ssh_connection.html#parameter-port +Version 1.3.4 +============= + +What's New +---------- + +* Bug Fixes + + * Modules + + * ``zos_ssh`` connection plugin was updated to correct a bug in Ansible that + would result in playbook task ``retries`` overriding the SSH connection + ``retries``. This is resolved by renaming the ``zos_ssh`` option + ``retries`` to ``reconnection_retries``. The update addresses users of + ``ansible-core`` v2.9, which continues to use ``retries``, and users of + ``ansible-core`` v2.11 or later, which uses ``reconnection_retries``. This + also resolves a bug in the connection that referenced a deprecated + constant. + * ``zos_job_output`` fixes a bug that returned all ddnames when a specific + ddname was provided. Now a specific ddname can be returned and all others + ignored. + * ``zos_copy`` fixes a bug that would not copy subdirectories. 
If the source + is a directory with subdirectories, all subdirectories will now be copied. + +Availability +------------ + +* `Automation Hub`_ +* `Galaxy`_ +* `GitHub`_ + +Reference +--------- + +* Supported by `z/OS V2R3`_ or later +* Supported by the `z/OS® shell`_ +* Supported by `IBM Open Enterprise SDK for Python`_ 3.8.2 or later +* Supported by IBM `Z Open Automation Utilities 1.1.0`_ and + `Z Open Automation Utilities 1.1.1`_ + Version 1.3.1 ============= diff --git a/galaxy.yml b/galaxy.yml index 10acca0f1..726d276da 100644 --- a/galaxy.yml +++ b/galaxy.yml @@ -1,12 +1,12 @@ # IBM collection namespace namespace: ibm -# IBM z/OS core collection as part of the -# Red Hat Ansible Certified Content for IBM Z offering +# IBM z/OS core collection as part of +# Red Hat Ansible Certified Content for IBM Z name: ibm_zos_core # The collection version -version: 1.4.0-beta.1 +version: 1.4.0-beta.2 # Collection README file readme: README.md @@ -14,11 +14,11 @@ readme: README.md # Contributors authors: - Demetrios Dimatos - - Behnam Al Kajbaf - - Andrew Nguyen - - Radha Varadachari - Rich Parker - Ketan Kelkar + - Ivan Alejandro Moreno Soto + - Oscar Fernando Flores Garcia + - Jenny Huang # Description description: The IBM z/OS core collection includes connection plugins, action plugins, modules, filters and ansible-doc to automate tasks on z/OS. 
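The ansible-core version split described in these notes (``retries`` on v2.9, ``reconnection_retries`` on v2.11 and later) can be sketched as a small helper. This is illustrative only and is not code from the collection:

```python
def connection_retry_option(major, minor):
    """Return the SSH connection retry option name for an ansible-core version.

    Hypothetical helper mirroring the rename described above:
    ansible-core v2.9 keeps 'retries', while v2.11 and later use
    'reconnection_retries'.
    """
    # Tuple comparison orders versions correctly: (2, 9) < (2, 11).
    if (major, minor) >= (2, 11):
        return "reconnection_retries"
    return "retries"
```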
@@ -68,6 +68,16 @@ build_ignore: - .gitignore - .github - '*.tar.gz' - - tests - docs - - collections \ No newline at end of file + - collections + - changelogs + - docs + - tests/__pycache__ + - tests/.pytest_cache + - tests/functional + - tests/helpers + - tests/unit + - tests/*.py + - tests/*.ini + - tests/requirements.txt + - test_config.yml \ No newline at end of file diff --git a/meta/execution-environment.yml b/meta/execution-environment.yml index 3afb1f672..8d34ec479 100644 --- a/meta/execution-environment.yml +++ b/meta/execution-environment.yml @@ -1,10 +1,14 @@ ################################################################################ # © Copyright IBM Corporation 2021 ################################################################################ -# This file creates an alias for file 'requirements-zos-core.txt' which is used +# This file creates an alias for file 'requirements.txt' which is used # to manage any python dependencies needed by this Collection to run effectively # on an Ansible controller. 
################################################################################ + +--- +version: 1 + dependencies: - python: requirements-zos-core.txt -version: 1 \ No newline at end of file + python: requirements.txt + system: bindep.txt \ No newline at end of file diff --git a/meta/ibm_zos_core_meta.yml b/meta/ibm_zos_core_meta.yml index 623bde4ad..b4130b52a 100644 --- a/meta/ibm_zos_core_meta.yml +++ b/meta/ibm_zos_core_meta.yml @@ -1,5 +1,5 @@ name: ibm_zos_core -version: "1.4.0-beta.1" +version: "1.4.0-beta.2" managed_requirements: - name: "IBM Open Enterprise SDK for Python" diff --git a/meta/runtime.yml b/meta/runtime.yml new file mode 100644 index 000000000..a9f8f8047 --- /dev/null +++ b/meta/runtime.yml @@ -0,0 +1,2 @@ +--- +requires_ansible: '>=2.9,<2.12' \ No newline at end of file diff --git a/plugins/action/zos_copy.py b/plugins/action/zos_copy.py index 9d06e323e..3d142b640 100644 --- a/plugins/action/zos_copy.py +++ b/plugins/action/zos_copy.py @@ -26,6 +26,7 @@ from ansible.module_utils.parsing.convert_bool import boolean from ansible.plugins.action import ActionBase from ansible.utils.display import Display +from ansible import cli from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.data_set import ( is_member, @@ -76,7 +77,6 @@ def run(self, tmp=None, task_vars=None): else: is_uss = "/" in dest is_mvs_dest = is_data_set(dest) - copy_member = is_member(dest) else: msg = "Destination is required" return self._exit_action(result, msg, failed=True) @@ -102,6 +102,8 @@ def run(self, tmp=None, task_vars=None): is_src_dir = os.path.isdir(src) is_pds = is_src_dir and is_mvs_dest + copy_member = is_member(dest) + if not src and not content: msg = "'src' or 'content' is required" return self._exit_action(result, msg, failed=True) @@ -123,9 +125,6 @@ def run(self, tmp=None, task_vars=None): # msg = "Invalid port provided for SFTP. Expected an integer between 0 to 65535." 
# return self._exit_action(result, msg, failed=True) - if (not force) and self._dest_exists(src, dest, task_vars): - return self._exit_action(result, "Destination exists. No data was copied.") - if not remote_src: if local_follow and not src: msg = "No path given for local symlink" @@ -153,8 +152,8 @@ def run(self, tmp=None, task_vars=None): else: if is_src_dir: path, dirs, files = next(os.walk(src)) - if dirs: - result["msg"] = "Subdirectory found inside source directory" + if not is_uss and dirs: + result["msg"] = "Cannot copy a source directory with subdirectories to a data set, the destination must be another directory" result.update( dict(src=src, dest=dest, changed=False, failed=True) ) @@ -168,6 +167,7 @@ def run(self, tmp=None, task_vars=None): stat.S_IMODE(os.stat(src).st_mode) ) task_args["size"] = os.stat(src).st_size + display.vvv(u"ibm_zos_copy calculated size: {0}".format(os.stat(src).st_size), host=self._play_context.remote_addr) transfer_res = self._copy_to_remote( src, is_dir=is_src_dir, ignore_stderr=ignore_sftp_stderr ) @@ -175,11 +175,13 @@ def run(self, tmp=None, task_vars=None): temp_path = transfer_res.get("temp_path") if transfer_res.get("msg"): return transfer_res + display.vvv(u"ibm_zos_copy temp path: {0}".format(transfer_res.get("temp_path")), host=self._play_context.remote_addr) task_args.update( dict( is_uss=is_uss, is_pds=is_pds, + is_src_dir=is_src_dir, copy_member=copy_member, src_member=src_member, temp_path=temp_path, @@ -229,40 +231,82 @@ def _copy_to_remote(self, src, is_dir=False, ignore_stderr=False): self._connection.exec_command("mkdir -p {0}/{1}".format(temp_path, base)) _sftp_action += ' -r' # add '-r` to clone the source trees - display.vvv(u"ibm_zos_copy: {0} {1} TO {2}".format(_sftp_action, _src, temp_path), host=self._play_context.remote_addr) - (returncode, stdout, stderr) = self._connection._file_transport_command(_src, temp_path, _sftp_action) - - display.vvv(u"ibm_zos_copy return code: {0}".format(returncode), 
host=self._play_context.remote_addr) - display.vvv(u"ibm_zos_copy stdout: {0}".format(stdout), host=self._play_context.remote_addr) - display.vvv(u"ibm_zos_copy stderr: {0}".format(stderr), host=self._play_context.remote_addr) - display.vvv(u"play context verbosity: {0}".format(self._play_context.verbosity), host=self._play_context.remote_addr) - - err = _detect_sftp_errors(stderr) - - # ************************************************************************* # - # When plugin shh connection member _build_command(..) detects verbosity # - # greater than 3, it constructs a command that includes verbosity like # - # 'EXEC sftp -b - -vvv ...' where this then is returned in the connections # - # stream as 'stderr' and if a user has not set ignore_stderr it will fail # - # the modules execution. So in cases where verbosity # - # (ansible.cfg verbosity = n || CLI -vvv) are collectively summed and # - # amount to greater than 3, ignore_stderr will be set to 'True' so that # - # 'err' which will not be None won't fail the module. 'stderr' does not # - # in our z/OS case actually mean an error happened, it just so happens # - # the verbosity is returned as 'stderr'. 
# - # ************************************************************************* # - - if self._play_context.verbosity > 3: - ignore_stderr = True - - if returncode != 0 or (err and not ignore_stderr): - return dict( - msg="Error transfering source '{0}' to remote z/OS system".format(src), - rc=returncode, - stderr=err, - stderr_lines=err.splitlines(), - failed=True, - ) + # To support multiple Ansible versions we must do some version detection and act accordingly + version_inf = cli.CLI.version_info(False) + version_major = version_inf['major'] + version_minor = version_inf['minor'] + + # Override the Ansible Connection behavior for this module and track users configuration + sftp_transfer_method = "sftp" + user_ssh_transfer_method = None + is_ssh_transfer_method_updated = False + + try: + if version_major == 2 and version_minor >= 11: + user_ssh_transfer_method = self._connection.get_option('ssh_transfer_method') + + if user_ssh_transfer_method != sftp_transfer_method: + self._connection.set_option('ssh_transfer_method', sftp_transfer_method) + is_ssh_transfer_method_updated = True + + elif version_major == 2 and version_minor <= 10: + user_ssh_transfer_method = self._play_context.ssh_transfer_method + + if user_ssh_transfer_method != sftp_transfer_method: + self._play_context.ssh_transfer_method = sftp_transfer_method + is_ssh_transfer_method_updated = True + + if is_ssh_transfer_method_updated: + display.vvv(u"ibm_zos_copy SSH transfer method updated from {0} to {1}.".format(user_ssh_transfer_method, + sftp_transfer_method), host=self._play_context.remote_addr) + + display.vvv(u"ibm_zos_copy: {0} {1} TO {2}".format(_sftp_action, _src, temp_path), host=self._play_context.remote_addr) + (returncode, stdout, stderr) = self._connection._file_transport_command(_src, temp_path, _sftp_action) + + display.vvv(u"ibm_zos_copy return code: {0}".format(returncode), host=self._play_context.remote_addr) + display.vvv(u"ibm_zos_copy stdout: {0}".format(stdout), 
host=self._play_context.remote_addr) + display.vvv(u"ibm_zos_copy stderr: {0}".format(stderr), host=self._play_context.remote_addr) + display.vvv(u"play context verbosity: {0}".format(self._play_context.verbosity), host=self._play_context.remote_addr) + + err = _detect_sftp_errors(stderr) + + # ************************************************************************* # + # When plugin ssh connection member _build_command(..) detects verbosity # + # greater than 3, it constructs a command that includes verbosity like # + # 'EXEC sftp -b - -vvv ...' which is then returned in the connection's # + # stream as 'stderr' and if a user has not set ignore_stderr it will fail # + # the module's execution. So in cases where verbosity # + # (ansible.cfg verbosity = n || CLI -vvv) are collectively summed and # + # amount to greater than 3, ignore_stderr will be set to 'True' so that # + # 'err' which will not be None won't fail the module. 'stderr' does not # + # in our z/OS case actually mean an error happened, it just so happens # + # the verbosity is returned as 'stderr'. 
# + # ************************************************************************* # + + if self._play_context.verbosity > 3: + ignore_stderr = True + + if returncode != 0 or (err and not ignore_stderr): + return dict( + msg="Error transferring source '{0}' to remote z/OS system".format(src), + rc=returncode, + stderr=err, + stderr_lines=err.splitlines(), + failed=True, + ) + + finally: + # Restore the user's defined option `ssh_transfer_method` if it was overridden + + if is_ssh_transfer_method_updated: + if version_major == 2 and version_minor >= 11: + self._connection.set_option('ssh_transfer_method', user_ssh_transfer_method) + + elif version_major == 2 and version_minor <= 10: + self._play_context.ssh_transfer_method = user_ssh_transfer_method + + display.vvv(u"ibm_zos_copy SSH transfer method restored to {0}".format(user_ssh_transfer_method), host=self._play_context.remote_addr) + is_ssh_transfer_method_updated = False return dict(temp_path=temp_path) @@ -286,35 +330,6 @@ def _remote_cleanup(self, dest, dest_exists, task_vars): task_vars=task_vars, ) - def _dest_exists(self, src, dest, task_vars): - """Determine if destination exists on remote z/OS system""" - if "/" in dest: - rc, out, err = self._connection.exec_command("ls -l {0}".format(dest)) - if rc != 0: - return False - if len(to_text(out).split("\n")) == 2: - return True - if "/" in src: - src = src.rstrip("/") if src.endswith("/") else src - dest += "/" + os.path.basename(src) - else: - dest += "/" + extract_member_name(src) if is_member(src) else src - rc, out, err = self._connection.exec_command("ls -l {0}".format(dest)) - if rc != 0: - return False - else: - cmd = "LISTDS '{0}'".format(dest) - tso_cmd = self._execute_module( - module_name="ibm.ibm_zos_core.zos_tso_command", - module_args=dict(commands=[cmd]), - task_vars=task_vars, - ).get("output")[0] - if tso_cmd.get("rc") != 0: - for line in tso_cmd.get("content"): - if "NOT IN CATALOG" in line: - return False - return True - def _exit_action(self, 
result, msg, failed=False): """Exit action plugin with a message""" result.update( diff --git a/plugins/action/zos_fetch.py b/plugins/action/zos_fetch.py index e9426409e..2076261d8 100644 --- a/plugins/action/zos_fetch.py +++ b/plugins/action/zos_fetch.py @@ -24,6 +24,7 @@ from ansible.plugins.action import ActionBase from ansible.errors import AnsibleError from ansible.utils.display import Display +from ansible import cli from ansible_collections.ibm.ibm_zos_core.plugins.module_utils import encode @@ -135,8 +136,8 @@ def run(self, tmp=None, task_vars=None): elif len(src) < 1 or len(dest) < 1: msg = "Source and destination parameters must not be empty" - # elif not isinstance(sftp_port, int) or not 0 < sftp_port <= 65535: - # msg = "Invalid port provided for SFTP. Expected an integer between 0 to 65535." + # # elif not isinstance(sftp_port, int) or not 0 < sftp_port <= 65535: + # # msg = "Invalid port provided for SFTP. Expected an integer between 0 to 65535." if msg: result["msg"] = msg @@ -313,42 +314,85 @@ def _transfer_remote_content( if src_type == "PO": _sftp_action += ' -r' # add '-r` to clone the source trees - display.vvv(u"{0} {1} TO {2}".format(_sftp_action, remote_path, dest), host=self._play_context.remote_addr) - (returncode, stdout, stderr) = self._connection._file_transport_command(remote_path, dest, _sftp_action) - - display.vvv(u"ibm_zos_fetch return code: {0}".format(returncode), host=self._play_context.remote_addr) - display.vvv(u"ibm_zos_fetch stdout: {0}".format(stdout), host=self._play_context.remote_addr) - display.vvv(u"ibm_zos_fetch stderr: {0}".format(stderr), host=self._play_context.remote_addr) - display.vvv(u"play context verbosity: {0}".format(self._play_context.verbosity), host=self._play_context.remote_addr) - - err = _detect_sftp_errors(stderr) - - # ************************************************************************* # - # When plugin shh connection member _build_command(..) 
detects verbosity # - # greater than 3, it constructs a command that includes verbosity like # - # 'EXEC sftp -b - -vvv ...' where this then is returned in the connections # - # stream as 'stderr' and if a user has not set ignore_stderr it will fail # - # the modules execution. So in cases where verbosity # - # (ansible.cfg verbosity = n || CLI -vvv) are collectively summed and # - # amount to greater than 3, ignore_stderr will be set to 'True' so that # - # 'err' which will not be None won't fail the module. 'stderr' does not # - # in our z/OS case actually mean an error happened, it just so happens # - # the verbosity is returned as 'stderr'. # - # ************************************************************************* # - - if self._play_context.verbosity > 3: - ignore_stderr = True - - if re.findall(r"Permission denied", err): - result["msg"] = "Insufficient write permission for destination {0}".format( - dest - ) - elif returncode != 0 or (err and not ignore_stderr): - result["msg"] = "Error transferring remote data from z/OS system" - result["rc"] = returncode - if result.get("msg"): - result["stderr"] = err - result["failed"] = True + # To support multiple Ansible versions we must do some version detection and act accordingly + version_inf = cli.CLI.version_info(False) + version_major = version_inf['major'] + version_minor = version_inf['minor'] + + # Override the Ansible Connection behavior for this module and track users configuration + sftp_transfer_method = "sftp" + user_ssh_transfer_method = None + is_ssh_transfer_method_updated = False + + try: + if version_major == 2 and version_minor >= 11: + user_ssh_transfer_method = self._connection.get_option('ssh_transfer_method') + + if user_ssh_transfer_method != sftp_transfer_method: + self._connection.set_option('ssh_transfer_method', sftp_transfer_method) + is_ssh_transfer_method_updated = True + + elif version_major == 2 and version_minor <= 10: + user_ssh_transfer_method = 
self._play_context.ssh_transfer_method + + if user_ssh_transfer_method != sftp_transfer_method: + self._play_context.ssh_transfer_method = sftp_transfer_method + is_ssh_transfer_method_updated = True + + if is_ssh_transfer_method_updated: + display.vvv(u"ibm_zos_fetch SSH transfer method updated from {0} to {1}.".format(user_ssh_transfer_method, + sftp_transfer_method), host=self._play_context.remote_addr) + + display.vvv(u"{0} {1} TO {2}".format(_sftp_action, remote_path, dest), host=self._play_context.remote_addr) + (returncode, stdout, stderr) = self._connection._file_transport_command(remote_path, dest, _sftp_action) + + display.vvv(u"ibm_zos_fetch return code: {0}".format(returncode), host=self._play_context.remote_addr) + display.vvv(u"ibm_zos_fetch stdout: {0}".format(stdout), host=self._play_context.remote_addr) + display.vvv(u"ibm_zos_fetch stderr: {0}".format(stderr), host=self._play_context.remote_addr) + display.vvv(u"play context verbosity: {0}".format(self._play_context.verbosity), host=self._play_context.remote_addr) + + err = _detect_sftp_errors(stderr) + + # ************************************************************************* # + # When plugin ssh connection member _build_command(..) detects verbosity # + # greater than 3, it constructs a command that includes verbosity like # + # 'EXEC sftp -b - -vvv ...' which is then returned in the connection's # + # stream as 'stderr' and if a user has not set ignore_stderr it will fail # + # the module's execution. So in cases where verbosity # + # (ansible.cfg verbosity = n || CLI -vvv) are collectively summed and # + # amount to greater than 3, ignore_stderr will be set to 'True' so that # + # 'err' which will not be None won't fail the module. 'stderr' does not # + # in our z/OS case actually mean an error happened, it just so happens # + # the verbosity is returned as 'stderr'. 
# + # ************************************************************************* # + + if self._play_context.verbosity > 3: + ignore_stderr = True + + if re.findall(r"Permission denied", err): + result["msg"] = "Insufficient write permission for destination {0}".format( + dest + ) + elif returncode != 0 or (err and not ignore_stderr): + result["msg"] = "Error transferring remote data from z/OS system" + result["rc"] = returncode + if result.get("msg"): + result["stderr"] = err + result["failed"] = True + + finally: + # Restore the users defined option `ssh_transfer_method` if it was overridden + + if is_ssh_transfer_method_updated: + if version_major == 2 and version_minor >= 11: + self._connection.set_option('ssh_transfer_method', user_ssh_transfer_method) + + elif version_major == 2 and version_minor <= 10: + self._play_context.ssh_transfer_method = user_ssh_transfer_method + + display.vvv(u"ibm_zos_fetch SSH transfer method restored to {0}".format(user_ssh_transfer_method), host=self._play_context.remote_addr) + is_ssh_transfer_method_updated = False + return result def _remote_cleanup(self, remote_path, src_type, encoding): diff --git a/plugins/module_utils/backup.py b/plugins/module_utils/backup.py index 9b3af1e2f..5c6226cf6 100644 --- a/plugins/module_utils/backup.py +++ b/plugins/module_utils/backup.py @@ -35,7 +35,7 @@ is_member, extract_dsname, temp_member_name, - is_empty, + DataSet, ) from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.mvs_cmd import iebcopy @@ -197,7 +197,7 @@ def _copy_ds(ds, bk_ds): ds, out, err ) ) - if rc != 0 and is_empty(ds): + if rc != 0 and DataSet.is_empty(ds): rc = 0 return rc diff --git a/plugins/module_utils/data_set.py b/plugins/module_utils/data_set.py index 76acd558b..99f689c67 100644 --- a/plugins/module_utils/data_set.py +++ b/plugins/module_utils/data_set.py @@ -15,7 +15,7 @@ import re import tempfile -from os import path +from os import path, walk from string import ascii_uppercase, digits from random import 
randint from ansible.module_utils._text import to_bytes @@ -92,6 +92,10 @@ class DataSet(object): _VSAM_UNCATALOG_COMMAND = " DELETE '{0}' NOSCRATCH" + MVS_PARTITIONED = frozenset({"PE", "PO", "PDSE", "PDS"}) + MVS_SEQ = frozenset({"PS", "SEQ", "BASIC"}) + MVS_VSAM = frozenset({"KSDS", "ESDS", "RRDS", "LDS", "VSAM"}) + @staticmethod def ensure_present( name, @@ -280,6 +284,54 @@ def ensure_uncataloged(name): return True return False + @staticmethod + def allocate_model_data_set(ds_name, model, vol=None): + """Allocates a data set based on the attributes of a 'model' data set. + Useful when a data set needs to be created identical to another. Supported + model(s) are Physical Sequential (PS), Partitioned Data Sets (PDS/PDSE), + and VSAM data sets. If `ds_name` has a member (i.e., "DATASET(member)"), + it will be shortened to just the partitioned data set name. + + Arguments: + ds_name {str} -- The name of the data set to allocate. If the ds_name + is a partitioned member e.g. hlq.llq.ds(mem), only the data set name + must be used. See extract_dsname(ds_name) in data_set.py + model {str} -- The name of the data set whose allocation parameters + should be used to allocate the new data set 'ds_name' + vol {str} -- The volume where data set should be allocated + + Raise: + NonExistentSourceError: When the model data set does not exist. + MVSCmdExecError: When the call to IKJEFT01 to allocate the + data set fails. + """ + if not DataSet.data_set_exists(model): + raise DatasetNotFoundError(model) + + ds_name = extract_dsname(ds_name) + model_type = DataSet.data_set_type(model) + + # The break lines are absolutely necessary, a JCL code line can't + # be longer than 72 characters. The following JCL is compatible with + # all data set types. + alloc_cmd = """ ALLOC DS('{0}') - + LIKE ('{1}')""".format(ds_name, model) + + # Now adding special parameters for sequential and partitioned + # data sets. 
+ if model_type not in DataSet.MVS_VSAM: + block_size = datasets.listing(model)[0].block_size + alloc_cmd = """{0} - + BLKSIZE({1})""".format(alloc_cmd, block_size) + + if vol: + alloc_cmd = """{0} - + VOLUME({1})""".format(alloc_cmd, vol.upper()) + + rc, out, err = mvs_cmd.ikjeft01(alloc_cmd, authorized=True) + if rc != 0: + raise MVSCmdExecError(rc, out, err) + @staticmethod def data_set_cataloged(name): """Determine if a data set is in catalog. @@ -335,6 +387,234 @@ def data_set_member_exists(name): return False return True + @staticmethod + def data_set_shared_members(src, dest): + """Checks for the existence of members from a source data set in + a destination data set. + + Arguments: + src (str) -- The source data set name. The name can contain a wildcard pattern. + dest (str) -- The destination data set name. + + Returns: + bool -- If at least one of the members in src exists in dest. + """ + src_members = datasets.list_members(src) + + for member in src_members: + if DataSet.data_set_member_exists("{0}({1})".format(dest, member)): + return True + + return False + + @staticmethod + def get_member_name_from_file(file_name): + """Creates a member name for a partitioned data set by taking up to the + first 8 characters from a filename without its file extension + + Arguments: + file_name (str) -- A file name that can include a file extension. + + Returns: + str -- Member name constructed from the file name. + """ + # Removing the file extension. + member_name = path.splitext(file_name)[0] + # Taking the first 8 characters from the file name. + member_name = member_name.replace(".", "")[0:8] + + return member_name + + @staticmethod + def files_in_data_set_members(src, dest): + """Checks for the existence of members corresponding to USS files in a + destination data set. The file names get converted to the form they + would take when copied into a partitioned data set. + + Arguments: + src (str) -- USS path to a file or a directory. 
+ dest (str) -- Name of the destination data set. + + Returns: + bool -- If at least one of the members in src exists in dest. + """ + if path.isfile(src): + files = [path.basename(src)] + else: + dummy_path, dummy_dirs, files = next(walk(src)) + + files = [DataSet.get_member_name_from_file(file) for file in files] + + for file in files: + if DataSet.data_set_member_exists("{0}({1})".format(dest, file)): + return True + + return False + + @staticmethod + def data_set_volume(name): + """Checks the volume where a data set is located. + + Arguments: + name (str) -- The name of the data set. + + Returns: + str -- Name of the volume where the data set is. + + Raises: + DatasetNotFoundError: When data set cannot be found on the system. + DatasetVolumeError: When the function is unable to parse the value + of VOLSER. + """ + data_set_information = datasets.listing(name) + + if len(data_set_information) > 0: + return data_set_information[0].volume + + # If listing failed to return a data set, then it's probably a VSAM. + output = DataSet._get_listcat_data(name) + + if re.findall(r"NOT FOUND|NOT LISTED", output): + raise DatasetNotFoundError(name) + + volser_output = re.findall(r"VOLSER-*[A-Z|0-9]+", output) + + if volser_output: + return volser_output[0].replace("VOLSER", "").replace("-", "") + else: + raise DatasetVolumeError(name) + + @staticmethod + def data_set_type(name, volume=None): + """Checks the type of a data set. + + Arguments: + name (str) -- The name of the data set. + volume (str) -- The volume the data set may reside on. + + Returns: + str -- The type of the data set (one of "PS", "PO", "DA", "KSDS", + "ESDS", "LDS" or "RRDS"). + None -- If the data set does not exist or ZOAU is not able to determine + the type. + """ + if not DataSet.data_set_exists(name, volume): + return None + + data_sets_found = datasets.listing(name) + + # Using the DSORG property when it's a sequential or partitioned + # dataset. VSAMs are not found by datasets.listing. 
+ if len(data_sets_found) > 0: + return data_sets_found[0].dsorg + + # Next, trying to get the DATA information of a VSAM through + # LISTCAT. + output = DataSet._get_listcat_data(name) + + # Filtering all the DATA information to only get the ATTRIBUTES block. + data_set_attributes = re.findall(r"ATTRIBUTES.*STATISTICS", output, re.DOTALL) + if len(data_set_attributes) == 0: + return None + + if re.search(r"\bINDEXED\b", data_set_attributes[0]): + return "KSDS" + elif re.search(r"\bNONINDEXED\b", data_set_attributes[0]): + return "ESDS" + elif re.search(r"\bLINEAR\b", data_set_attributes[0]): + return "LDS" + elif re.search(r"\bNUMBERED\b", data_set_attributes[0]): + return "RRDS" + else: + return None + + @staticmethod + def _get_listcat_data(name): + """Runs IDCAMS to get the DATA information associated with a data set. + + Arguments: + name (str) -- Name of the data set. + + Returns: + str -- Standard output from IDCAMS. + """ + name = name.upper() + module = AnsibleModuleHelper(argument_spec={}) + stdin = " LISTCAT ENT('{0}') DATA ALL".format(name) + rc, stdout, stderr = module.run_command( + "mvscmdauth --pgm=idcams --sysprint=* --sysin=stdin", data=stdin + ) + + if rc != 0: + raise MVSCmdExecError(rc, stdout, stderr) + + return stdout + + @staticmethod + def is_empty(name, volume=None): + """Determines whether a data set is empty. + + Arguments: + name (str) -- The name of the data set. + volume (str) -- The volume where the data set resides. + + Returns: + bool -- Whether the data set is empty or not. + """ + if not DataSet.data_set_exists(name, volume): + raise DatasetNotFoundError(name) + + ds_type = DataSet.data_set_type(name, volume) + + if ds_type in DataSet.MVS_PARTITIONED: + return DataSet._pds_empty(name) + elif ds_type in DataSet.MVS_SEQ: + return len(datasets.read(name, tail=10)) == 0 + elif ds_type in DataSet.MVS_VSAM: + return DataSet._vsam_empty(name) + + @staticmethod + def _pds_empty(name): + """Determines if a partitioned data set is empty. 
+ + Arguments: + name (str) -- The name of the PDS/PDSE. + + Returns: + bool - If PDS/PDSE is empty. + Returns True if it is empty. False otherwise. + """ + module = AnsibleModuleHelper(argument_spec={}) + ls_cmd = "mls {0}".format(name) + rc, out, err = module.run_command(ls_cmd) + # RC 2 for mls means that there aren't any members. + return rc == 2 + + @staticmethod + def _vsam_empty(name): + """Determines if a VSAM data set is empty. + + Arguments: + name (str) -- The name of the VSAM data set. + + Returns: + bool - If VSAM data set is empty. + Returns True if VSAM data set exists and is empty. + False otherwise. + """ + module = AnsibleModuleHelper(argument_spec={}) + empty_cmd = """ PRINT - + INFILE(MYDSET) - + COUNT(1)""" + rc, out, err = module.run_command( + "mvscmdauth --pgm=idcams --sysprint=* --sysin=stdin --mydset={0}".format(name), + data=empty_cmd, + ) + if rc == 4 or "VSAM OPEN RETURN CODE IS 160" in out: + return True + elif rc != 0: + return False + @staticmethod def attempt_catalog_if_necessary(name, volumes): """Attempts to catalog a data set if not already cataloged. 
@@ -576,7 +856,7 @@ def create( raise DatasetCreateError( name, response.rc, response.stdout_response + response.stderr_response ) - return + return response.rc @staticmethod def delete(name): @@ -1017,7 +1297,8 @@ def __init__(self, data_set): data_set {str} -- Name of the input data set """ self.module = AnsibleModuleHelper(argument_spec={}) - self.data_set = data_set + self.data_set = data_set.upper() + self.path = data_set self.is_uss_path = "/" in data_set self.ds_info = dict() if not self.is_uss_path: @@ -1031,7 +1312,7 @@ def exists(self): bool -- If the data set exists """ if self.is_uss_path: - return path.exists(to_bytes(self.data_set)) + return path.exists(to_bytes(self.path)) return self.ds_info.get("exists") def member_exists(self, member): @@ -1237,24 +1518,6 @@ def is_data_set(data_set): return True -def is_empty(data_set): - """Determine whether a given data set is empty - - Arguments: - data_set {str} -- Input source name - - Returns: - {bool} -- Whether the data set is empty - """ - du = DataSetUtils(data_set) - if du.ds_type() == "PO": - return _pds_empty(data_set) - elif du.ds_type() == "PS": - return datasets.read(data_set, tail=10) is None - elif du.ds_type() == "VSAM": - return _vsam_empty(data_set) - - def extract_dsname(data_set): """Extract the actual name of the data set from a given input source @@ -1300,47 +1563,6 @@ def temp_member_name(): return temp_name -def _vsam_empty(ds): - """Determine if a VSAM data set is empty. - - Arguments: - ds {str} -- The name of the VSAM data set. - - Returns: - bool - If VSAM data set is empty. - Returns True if VSAM data set exists and is empty. - False otherwise. 
- """ - module = AnsibleModuleHelper(argument_spec={}) - empty_cmd = """ PRINT - - INFILE(MYDSET) - - COUNT(1)""" - rc, out, err = module.run_command( - "mvscmdauth --pgm=idcams --sysprint=* --sysin=stdin --mydset={0}".format(ds), - data=empty_cmd, - ) - if rc == 4 or "VSAM OPEN RETURN CODE IS 160" in out: - return True - elif rc != 0: - return False - - -def _pds_empty(data_set): - """Determine if a partitioned data set is empty - - Arguments: - data_set {str} -- The name of the PDS/PDSE - - Returns: - bool - If PDS/PDSE is empty. - Returns True if it is empty. False otherwise. - """ - module = AnsibleModuleHelper(argument_spec={}) - ls_cmd = "mls {0}".format(data_set) - rc, out, err = module.run_command(ls_cmd) - return rc == 2 - - class DatasetDeleteError(Exception): def __init__(self, data_set, rc): self.msg = 'An error occurred during deletion of data set "{0}". RC={1}'.format( @@ -1432,6 +1654,14 @@ def __init__(self, rc, out, err): super().__init__(self.msg) +class DatasetVolumeError(Exception): + def __init__(self, data_set): + self.msg = ( + "The data set {0} could not be found on a volume in the system.".format(data_set) + ) + super().__init__(self.msg) + + class DatasetBusyError(Exception): def __init__(self, data_set): self.msg = ( diff --git a/plugins/modules/zos_copy.py b/plugins/modules/zos_copy.py index 3357ac9ca..42ae7578b 100644 --- a/plugins/modules/zos_copy.py +++ b/plugins/modules/zos_copy.py @@ -25,15 +25,14 @@ description: - The M(zos_copy) module copies a file or data set from a local or a remote machine to a location on the remote machine. - - Use the M(zos_fetch) module to copy files or data sets from remote - locations to the local machine. author: - "Asif Mahmud (@asifmahmud)" - "Demetrios Dimatos (@ddimatos)" + - "Ivan Moreno (@rexemin)" options: backup: description: - - Specifies whether a backup of destination should be created before + - Specifies whether a backup of the destination should be created before copying data. 
- When set to C(true), the module creates a backup file or data set. - The backup file name will be returned on either success or failure of @@ -44,18 +43,17 @@ backup_name: description: - Specify a unique USS file name or data set name for the destination backup. - - If the destination (dest) is a USS file or path, the backup_name must - be a file or path name, and the USS path or file must be an absolute - path name. - - If the destination is an MVS data set, the backup_name must be an MVS - data set name. - - If the backup_name is not provided, the default backup_name will - be used. If the destination is a USS file or USS path, the name of the backup + - If the destination C(dest) is a USS file or path, the C(backup_name) must + be an absolute path name. + - If the destination is an MVS data set name, the C(backup_name) provided + must meet data set naming conventions of one or more qualifiers, each + from one to eight characters long, that are delimited by periods. + - If the C(backup_name) is not provided, the default C(backup_name) will + be used. If the C(dest) is a USS file or USS path, the name of the backup file will be the destination file or path name appended with a - timestamp, e.g. C(/path/file_name.2020-04-23-08-32-29-bak.tar). - - If the destination is an MVS data set, it will be a data set with a random - name generated by calling the ZOAU API. The MVS backup data set recovery - can be done by renaming it. + timestamp, e.g. C(/path/file_name.2020-04-23-08-32-29-bak.tar). If the + C(dest) is an MVS data set, it will be a data set with a randomly generated + name. - If C(dest) is a data set member and C(backup_name) is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. @@ -67,34 +65,44 @@ directly to the specified value. - Works only when C(dest) is a USS file, sequential data set, or a partitioned data set member. 
-      - This is for simple values; for anything complex or with formatting, use
-        L(ansible.builtin.copy,https://docs.ansible.com/ansible/latest/modules/copy_module.html)
       - If C(dest) is a directory, then content will be copied to
         C(/path/to/dest/inline_copy).
     type: str
     required: false
   dest:
     description:
-      - Remote absolute path or data set where the file should be copied to.
-      - Destination can be a USS path or an MVS data set name.
-      - If C(dest) is a nonexistent USS file, it will be created.
-      - If C(dest) is a nonexistent data set, storage management rules will be
-        used to determine the volume where C(dest) will be allocated.
+      - The remote absolute path or data set where the content should be copied to.
+      - C(dest) can be a USS file, directory or MVS data set name.
       - If C(src) and C(dest) are files and if the parent directory of C(dest)
         does not exist, then the task will fail.
+      - If C(dest) is a nonexistent USS file, it will be created.
+      - If C(dest) is a nonexistent data set, it will be created following the
+        process outlined here and in the C(volume) option.
+      - If C(dest) is a nonexistent data set, the attributes assigned will depend
+        on the type of C(src). If C(src) is a USS file, C(dest) will have a
+        Fixed Block (FB) record format and the remaining attributes will be computed.
+        If C(src) is binary, C(dest) will have a Fixed Block (FB) record format
+        with a record length of 80, block size of 32760, and the remaining
+        attributes will be computed.
+      - When C(dest) is a data set, precedence rules apply. If C(dest_data_set)
+        is set, this will take precedence over an existing data set. If C(dest)
+        is an empty data set, the empty data set will be written with the
+        expectation that its attributes satisfy the copy. Lastly, if no precedence
+        rule has been exercised, C(dest) will be created with the same attributes
+        as C(src).
       - When the C(dest) is an existing VSAM (KSDS) or VSAM (ESDS), then source
-        can be ESDS, KSDS or RRDS. The C(dest) will be deleted and storage
-        management rules will be used to determine the volume where C(dest) will
-        be allocated.
-      - When the C(dest) is an existing VSAM (RRDS), then the source must be RRDS.
-        The C(dest) will be deleted and storage management rules will be used to
-        determine the volume where C(dest) will be allocated.
-      - When C(dest) is and existing VSAM (LDS), then source must be LDS. The
-        C(dest) will be deleted and storage management rules will be used to
-        determine the volume where C(dest) will be allocated.
+        can be an ESDS, a KSDS or an RRDS. The VSAM (KSDS) or VSAM (ESDS) C(dest) will
+        be deleted and recreated following the process outlined in the C(volume) option.
+      - When the C(dest) is an existing VSAM (RRDS), then the source must be an RRDS.
+        The VSAM (RRDS) will be deleted and recreated following the process outlined
+        in the C(volume) option.
+      - When C(dest) is an existing VSAM (LDS), then the source must be an LDS. The
+        VSAM (LDS) will be deleted and recreated following the process outlined
+        in the C(volume) option.
       - When C(dest) is a data set, you can override storage management rules
-        by specifying both C(volume) and other optional DS specs (type, space,
-        record size, etc).
+        by specifying C(volume) if the storage class being used has
+        GUARANTEED_SPACE=YES specified; otherwise, the allocation will
+        fail. See C(volume) for more volume related processes.
     type: str
     required: true
   encoding:
@@ -104,7 +112,6 @@
       - If C(encoding) is not provided, the module determines which local and
         remote charsets to convert the data from and to. Note that this is
         only done for text data and not binary data.
-      - If C(encoding) is provided and C(src) is an MVS data set, task will fail.
       - Only valid if C(is_binary) is false.
type: dict required: false @@ -121,9 +128,16 @@ type: str force: description: - - If set to C(true), the remote file or data set will be overwritten. - - If set to C(false), the file or data set will only be copied if the destination - does not exist. + - If set to C(true) and the remote file or data set C(dest) is empty, + the C(dest) will be reused. + - If set to C(true) and the remote file or data set C(dest) is NOT empty, + the C(dest) will be deleted and recreated with the C(src) data set + attributes, otherwise it will be recreated with the C(dest) data set + attributes. + - To backup data before any deletion, see parameters C(backup) and + C(backup_name). + - If set to C(false), the file or data set will only be copied if the + destination does not exist. - If set to C(false) and destination exists, the module exits with a note to the user. type: bool @@ -172,7 +186,7 @@ - The mode may also be specified as a symbolic mode (for example, ``u+rwx`` or ``u=rw,g=r,o=r``) or a special string `preserve`. - - C(preserve) means that the file will be given the same permissions as + - I(mode=preserve) means that the file will be given the same permissions as the source file. type: str required: false @@ -202,12 +216,12 @@ - If C(remote_src) is true, then C(src) must be the path to a Unix System Services (USS) file, name of a data set, or data set member. - If C(src) is a local path or a USS path, it can be absolute or relative. - - If C(src) is a directory, destination must be a partitioned data set or + - If C(src) is a directory, C(dest) must be a partitioned data set or a USS directory. - - If C(src) is a file and dest ends with "/" or destination is a + - If C(src) is a file and C(dest) ends with "/" or is a directory, the file is copied to the directory with the same filename as - src. - - If C(src) is a VSAM data set, destination must also be a VSAM. + C(src). + - If C(src) is a VSAM data set, C(dest) must also be a VSAM. 
- Wildcards can be used to copy multiple PDS/PDSE members to another PDS/PDSE. - Required unless using C(content). @@ -235,16 +249,17 @@ unit name has been specified. type: str required: false - destination_dataset: + dest_data_set: description: - - These are settings to use when creating the destination data set + - Data set attributes to customize a C(dest) data set to be copied into. required: false type: dict suboptions: - dd_type: + type: description: - Organization of the destination type: str + required: true choices: - KSDS - ESDS @@ -255,7 +270,6 @@ - PDSE - MEMBER - BASIC - default: BASIC space_primary: description: - If the destination I(dest) data set does not exist , this sets the @@ -263,7 +277,6 @@ - The unit of space used is set using I(space_type). type: str required: false - default: "5" space_secondary: description: - If the destination I(dest) data set does not exist , this sets the @@ -271,7 +284,6 @@ - The unit of space used is set using I(space_type). type: str required: false - default: "3" space_type: description: - If the destination data set does not exist, this sets the unit of @@ -285,7 +297,6 @@ - CYL - TRK required: false - default: M record_format: description: - If the destination data set does not exist, this sets the format of the @@ -298,7 +309,6 @@ - FBA - VBA - U - default: FB type: str record_length: description: @@ -307,12 +317,54 @@ - "Defaults vary depending on format: If FB/FBA 80, if VB/VBA 137, if U 0." type: int required: false - default: 80 block_size: description: - The block size to use for the data set. type: int required: false + directory_blocks: + description: + - The number of directory blocks to allocate to the data set. + type: int + required: false + key_offset: + description: + - The key offset to use when creating a KSDS data set. + - I(key_offset) is required when I(type=KSDS). 
+ - I(key_offset) should only be provided when I(type=KSDS) + type: int + required: false + key_length: + description: + - The key length to use when creating a KSDS data set. + - I(key_length) is required when I(type=KSDS). + - I(key_length) should only be provided when I(type=KSDS) + type: int + required: false + sms_storage_class: + description: + - The storage class for an SMS-managed dataset. + - Required for SMS-managed datasets that do not match an SMS-rule. + - Not valid for datasets that are not SMS-managed. + - Note that all non-linear VSAM datasets are SMS-managed. + type: str + required: false + sms_data_class: + description: + - The data class for an SMS-managed dataset. + - Optional for SMS-managed datasets that do not match an SMS-rule. + - Not valid for datasets that are not SMS-managed. + - Note that all non-linear VSAM datasets are SMS-managed. + type: str + required: false + sms_management_class: + description: + - The management class for an SMS-managed dataset. + - Optional for SMS-managed datasets that do not match an SMS-rule. + - Not valid for datasets that are not SMS-managed. + - Note that all non-linear VSAM datasets are SMS-managed. + type: str + required: false notes: - Destination data sets are assumed to be in catalog. When trying to copy @@ -382,7 +434,7 @@ from: UTF-8 to: IBM-037 -- name: Copy a VSAM (KSDS) to a VSAM (KSDS) +- name: Copy a VSAM (KSDS) to a VSAM (KSDS) zos_copy: src: SAMPLE.SRC.VSAM dest: SAMPLE.DEST.VSAM @@ -435,14 +487,16 @@ src: HLQ.SAMPLE.PDSE dest: HLQ.EXISTING.PDSE remote_src: true + force: true -- name: Copy PDS member to a new PDS member. Replace if it already exists. +- name: Copy PDS member to a new PDS member. Replace if it already exists zos_copy: src: HLQ.SAMPLE.PDSE(SRCMEM) dest: HLQ.NEW.PDSE(DESTMEM) remote_src: true + force: true -- name: Copy a USS file to a PDSE member. If PDSE does not exist, allocate it. +- name: Copy a USS file to a PDSE member. 
If PDSE does not exist, allocate it zos_copy: src: /path/to/uss/src dest: DEST.PDSE.DATA.SET(MEMBER) @@ -460,7 +514,7 @@ dest: /tmp/member remote_src: true -- name: Copy a PDS to a USS directory (/tmp/SRC.PDS). +- name: Copy a PDS to a USS directory (/tmp/SRC.PDS) zos_copy: src: SRC.PDS dest: /tmp @@ -484,6 +538,20 @@ dest: SOME.DEST.PDS volume: 'VOL033' remote_src: true + +- name: Copy a USS file to a fully customized sequential data set + zos_copy: + src: /path/to/uss/src + dest: SOME.SEQ.DEST + remote_src: true + volume: '222222' + dest_data_set: + type: SEQ + space_primary: 10 + space_secondary: 3 + space_type: K + record_format: VB + record_length: 150 """ RETURN = r""" @@ -584,35 +652,32 @@ sample: REPRO INDATASET(SAMPLE.DATA.SET) OUTDATASET(SAMPLE.DEST.DATA.SET) """ -import os -import tempfile -import math -import stat -import shutil -import glob - -from hashlib import sha256 -from re import IGNORECASE -from ansible.module_utils.six import PY3 - -from ansible.module_utils.basic import AnsibleModule -from ansible.module_utils._text import to_bytes - -from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.ansible_module import ( - AnsibleModuleHelper, -) -from ansible_collections.ibm.ibm_zos_core.plugins.module_utils import ( - better_arg_parser, data_set, encode, vtoc, backup, copy +from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.import_handler import ( + MissingZOAUImport, ) - from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.mvs_cmd import ( idcams, iebcopy, ikjeft01 ) - -from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.import_handler import ( - MissingZOAUImport, +from ansible_collections.ibm.ibm_zos_core.plugins.module_utils import ( + better_arg_parser, data_set, encode, vtoc, backup, copy ) +from ansible_collections.ibm.ibm_zos_core.plugins.module_utils.ansible_module import ( + AnsibleModuleHelper, +) +from ansible.module_utils._text import to_bytes +from ansible.module_utils.basic import AnsibleModule 
+from ansible.module_utils.six import PY3
+from re import IGNORECASE
+from hashlib import sha256
+import glob
+import shutil
+import stat
+import math
+import tempfile
+import os
+import time
 
 if PY3:
     from re import fullmatch
@@ -625,15 +690,10 @@
 datasets = MissingZOAUImport()
 
-MVS_PARTITIONED = frozenset({"PE", "PO", "PDSE", "PDS"})
-MVS_SEQ = frozenset({"PS", "SEQ"})
-
-
 class CopyHandler(object):
     def __init__(
         self,
         module,
-        dest_exists,
         is_binary=False,
         backup_name=None
     ):
@@ -642,7 +702,6 @@ def __init__(
         Arguments:
             module {AnsibleModule} -- The AnsibleModule object from currently
                 running module
-            dest_exists {boolean} -- Whether destination already exists
 
         Keyword Arguments:
             is_binary {bool} -- Whether the file or data set to be copied
+ Arguments: src {str} -- Path to USS file or data set name temp_path {str} -- Path to the location where the control node @@ -694,117 +736,43 @@ def copy_to_seq( conv_path {str} -- Path to the converted source file dest {str} -- Name of destination data set src_ds_type {str} -- The type of source - alloc_vol {str} -- The volume where destination should be allocated - type {str} -- Type of data set, if it needs created - space_primary {int} -- Primary allocation, if data set needs created - space_secondary {int} -- Secondary allocation, if data set needs created - space_type {str} - Units of measure (CYL/TRK/K/M) of allocation values - record_format {str} - Format (FB/VB) of data set if data set needs created - record_length {int} - Size of individual records in the record set - block_size {int} - Size of data block """ - new_src = temp_path or conv_path or src - # Pre-clear data set: get destination stats before deleting if incoming info is all default - if self.dest_exists: - if space_primary is None or space_primary == 5: - if record_length is None or record_length == 80: - if block_size is None: - res = datasets.listing(dest) - type = res[0].dsorg - record_format = res[0].recfm - record_length = res[0].lrecl - block_size = res[0].block_size - space_primary = round(res[0].total_space / 1024) - space_type = "K" - space_secondary = 3 - datasets.delete(dest) - self.dest_exists = False + new_src = conv_path or temp_path or src + copy_args = dict() - if src_ds_type == "USS": - # create destination if it doesn't exist - if not self.dest_exists: - parms = dict( - name=dest, - type=type, - primary_space=space_primary, - secondary_space=space_secondary, - space_type=space_type, - record_format=record_format, - record_length=record_length, - block_size=block_size - ) - if alloc_vol: - parms['volume'] = alloc_vol - - datasets._create(**parms) + if self.is_binary: + copy_args["options"] = "-B" - rc, out, err = self.run_command( - "cp {0} {1} \"//'{2}'\"".format( - "-B" if 
self.is_binary else "", new_src, dest - ) + response = datasets._copy(new_src, dest, None, **copy_args) + if response.rc != 0: + raise CopyOperationError( + msg="Unable to copy source {0} to {1}".format(new_src, dest), + rc=response.rc, + stdout=response.stdout_response, + stderr=response.stderr_response ) - if rc != 0: - self.fail_json( - msg="Unable to copy source {0} to {1}".format(src, dest), - rc=rc, - stderr=err, - stdout=out, - ) - else: - rc = datasets.copy(new_src, dest) - # ***************************************************************** - # When Copying a PDSE member to a non-existent sequential data set - # using cp "//'SOME.PDSE.DATA.SET(MEMBER)'" "//'SOME.DEST.SEQ'", - # An I/O abend could be trapped and can be resolved by allocating - # the destination data set before copying. - # ***************************************************************** - if rc != 0: - sz = None - if space_primary is not None: - if space_type in "MK": - sz = str(space_primary) + space_type - - self._allocate_ps(dest, size=sz, alloc_vol=alloc_vol) - response = datasets._copy(new_src, dest) - if response.rc != 0: - self.fail_json( - msg="Unable to copy source {0} to {1}".format(new_src, dest), - rc=response.rc, - stdout=response.stdout_response, - stderr=response.stderr_response - ) - def copy_to_vsam(self, src, dest, alloc_vol): - """ Copy source VSAM to destination VSAM. If source VSAM exists, then - it will be deleted and a new VSAM cluster will be allocated. + def copy_to_vsam(self, src, dest): + """Copy source VSAM to destination VSAM. + + Raises: + CopyOperationError -- When REPRO fails to copy the data set. 
Arguments: src {str} -- The name of the source VSAM dest {str} -- The name of the destination VSAM - alloc_vol {str} -- The volume where the destination should be allocated """ - if self.dest_exists: - response = datasets._delete(dest) - if response.rc != 0: - self.fail_json( - msg="Unable to delete destination data set {0}".format(dest), - rc=response.rc, - stdout=response.stdout_response, - stderr=response.stderr_response - ) - self.allocate_model(dest, src, vol=alloc_vol) - repro_cmd = """ REPRO - - INDATASET({0}) - - OUTDATASET({1})""".format(src, dest) + INDATASET('{0}') - + OUTDATASET('{1}')""".format(src.upper(), dest.upper()) rc, out, err = idcams(repro_cmd, authorized=True) if rc != 0: - self.fail_json( + raise CopyOperationError( msg=("IDCAMS REPRO encountered a problem while " "copying {0} to {1}".format(src, dest)), + rc=rc, stdout=out, stderr=err, - rc=rc, stdout_lines=out.splitlines(), stderr_lines=err.splitlines(), cmd=repro_cmd, @@ -821,7 +789,7 @@ def convert_encoding(self, src, temp_path, encoding): from and to Raises: - EncodingConversionError -- When the encoding of a USS file is not + CopyOperationError -- When the encoding of a USS file is not able to be converted Returns: @@ -839,18 +807,25 @@ def convert_encoding(self, src, temp_path, encoding): temp_path, os.path.basename(os.path.dirname(src)) ) else: - new_src = "{0}/{1}".format(temp_path, os.path.basename(src)) + new_src = "{0}/{1}".format(temp_path, + os.path.basename(src)) try: if not temp_path: temp_dir = tempfile.mkdtemp() - shutil.copytree(new_src, temp_dir) + shutil.copytree(new_src, temp_dir, dirs_exist_ok=True) new_src = temp_dir + self._convert_encoding_dir(new_src, from_code_set, to_code_set) self._tag_file_encoding(new_src, to_code_set, is_dir=True) + except CopyOperationError as err: + if new_src != src: + shutil.rmtree(new_src) + raise err except Exception as err: - shutil.rmtree(new_src) - self.fail_json(msg=str(err)) + if new_src != src: + shutil.rmtree(new_src) + raise 
CopyOperationError(msg=str(err)) else: try: if not temp_path: @@ -873,49 +848,16 @@ def convert_encoding(self, src, temp_path, encoding): ) self._tag_file_encoding(new_src, to_code_set) + except CopyOperationError as err: + if new_src != src: + os.remove(new_src) + raise err except Exception as err: - os.remove(new_src) - self.fail_json(msg=str(err)) + if new_src != src: + os.remove(new_src) + raise CopyOperationError(msg=str(err)) return new_src - def allocate_model(self, ds_name, model, vol=None): - """Use 'model' data sets allocation paramters to allocate the given - data set. - - Arguments: - ds_name {str} -- The name of the data set to allocate - model {str} -- The name of the data set whose allocation parameters - should be used to allocate 'ds_name' - dsntype {str} -- The type of data set to be allocated - vol {str} -- The volume where data set should be allocated - - Returns: - {int} -- The return code of executing the allocation command - """ - blksize = data_set.DataSetUtils(model).blksize() - - alloc_cmd = """ ALLOC DS('{0}') - - LIKE('{1}') - - {2}{3}""".format( - ds_name, - model, - "BLKSIZE({0}) ".format(blksize) if blksize else "", - "VOLUME({0})".format(vol.upper()) if vol else "" - ) - - rc, out, err = ikjeft01(alloc_cmd, authorized=True) - if rc != 0: - self.fail_json( - msg="Unable to allocate destination {0}".format(ds_name), - stdout=out, - stderr=err, - rc=rc, - stdout_lines=out.splitlines(), - stderr_lines=err.splitlines(), - cmd=alloc_cmd, - ) - return rc - def _convert_encoding_dir(self, dir_path, from_code_set, to_code_set): """Convert encoding for all files inside a given directory @@ -945,6 +887,9 @@ def _tag_file_encoding(self, file_path, tag, is_dir=False): If `file_path` is a directory, all of the files and subdirectories will be tagged recursively. + Raises: + CopyOperationError -- When chtag fails. 
+ Arguments: file_path {str} -- Absolute file path tag {str} -- Specifies which code set to tag the file @@ -954,51 +899,19 @@ def _tag_file_encoding(self, file_path, tag, is_dir=False): (Default {False}) """ - tag_cmd = "chtag -{0}c {1} {2}".format("R" if is_dir else "t", tag, file_path) + tag_cmd = "chtag -{0}c {1} {2}".format( + "R" if is_dir else "t", tag, file_path) rc, out, err = self.run_command(tag_cmd) if rc != 0: - self.fail_json( + raise CopyOperationError( msg="Unable to tag the file {0} to {1}".format(file_path, tag), + rc=rc, stdout=out, stderr=err, - rc=rc, stdout_lines=out.splitlines(), stderr_lines=err.splitlines(), ) - def _allocate_ps(self, name, size="5M", alloc_vol=None): - """Allocate a sequential data set - - Arguments: - name {str} -- Name of the data set to allocate - size {str} -- The size to allocate - alloc_vol {str} -- The volume where the data set should be allocated - """ - if size is None: - size = "5M" - elif size == "": - size = "5M" - - parms = dict( - name=name, - type="SEQ", - primary_space=size, - record_format="FB", - record_length=1028, - block_size=6144 - ) - if alloc_vol: - parms['volume'] = alloc_vol - - response = datasets._create(**parms) - if response.rc != 0: - self.fail_json( - msg="Unable to allocate destination data set {0}".format(name), - rc=response.rc, - stdout=response.stdout_response, - stderr=response.stderr_response - ) - def _merge_hash(self, *args): """Combine multiple dictionaries""" result = dict() @@ -1011,7 +924,6 @@ class USSCopyHandler(CopyHandler): def __init__( self, module, - dest_exists, is_binary=False, common_file_args=None, backup_name=None, @@ -1021,7 +933,6 @@ def __init__( Arguments: module {AnsibleModule} -- The AnsibleModule object from currently running module - dest_exists {boolean} -- Whether destination already exists Keyword Arguments: common_file_args {dict} -- mode, group and owner information to be @@ -1031,7 +942,7 @@ def __init__( backup_name {str} -- The USS path or data set 
name of destination backup
         """
         super().__init__(
-            module, dest_exists, is_binary=is_binary, backup_name=backup_name
+            module, is_binary=is_binary, backup_name=backup_name
         )
         self.common_file_args = common_file_args
@@ -1043,7 +954,8 @@ def copy_to_uss(
         self,
         src,
         dest,
         conv_path,
         temp_path,
         src_ds_type,
         src_member,
-        member_name
+        member_name,
+        force
     ):
         """Copy a file or data set to a USS location
@@ -1056,19 +968,21 @@
             src_ds_type {str} -- Type of source
             src_member {bool} -- Whether src is a data set member
             member_name {str} -- The name of the source data set member
+            force {bool} -- Whether to copy files to an already existing directory
 
         Returns:
             {str} -- Destination where the file was copied to
         """
-        if src_ds_type in MVS_SEQ.union(MVS_PARTITIONED):
+        if src_ds_type in data_set.DataSet.MVS_SEQ.union(data_set.DataSet.MVS_PARTITIONED):
             self._mvs_copy_to_uss(
                 src, dest, src_ds_type, src_member, member_name=member_name
             )
         else:
             if os.path.isfile(temp_path or conv_path or src):
                 dest = self._copy_to_file(src, dest, conv_path, temp_path)
+                changed_files = None
             else:
-                dest = self._copy_to_dir(src, dest, conv_path, temp_path)
+                dest, changed_files = self._copy_to_dir(src, dest, conv_path, temp_path, force)
 
             if self.common_file_args is not None:
                 mode = self.common_file_args.get("mode")
@@ -1076,6 +990,10 @@
                 owner = self.common_file_args.get("owner")
                 if mode is not None:
                     self.module.set_mode_if_different(dest, mode, False)
+
+                if changed_files:
+                    for filepath in changed_files:
+                        self.module.set_mode_if_different(os.path.join(dest, filepath), mode, False)
                 if group is not None:
                     self.module.set_group_if_different(dest, group, False)
                 if owner is not None:
@@ -1092,11 +1010,15 @@
             transferred data to
             conv_path {str} -- Path to the converted source file
                 or directory
+
+        Raises:
+            CopyOperationError -- When copying into the file fails.
+ Returns: {str} -- Destination where the file was copied to """ if os.path.isdir(dest): - dest = os.path.join(dest, os.path.basename(src) if src else "inline_copy") + dest = os.path.join(dest, os.path.basename(src) + if src else "inline_copy") new_src = temp_path or conv_path or src try: @@ -1105,48 +1027,124 @@ def _copy_to_file(self, src, dest, conv_path, temp_path): else: shutil.copy(new_src, dest) except OSError as err: - self.fail_json( - msg="Destination {0} is not writable".format(dest), stderr=str(err) + raise CopyOperationError( + msg="Destination {0} is not writable".format(dest), + stderr=str(err) ) except Exception as err: - self.fail_json( + raise CopyOperationError( msg="Unable to copy file {0} to {1}".format(new_src, dest), stderr=str(err), ) return dest - def _copy_to_dir(self, src_dir, dest_dir, conv_path, temp_path): + def _copy_to_dir( + self, + src_dir, + dest_dir, + conv_path, + temp_path, + force + ): """Helper function to copy a USS directory to another USS directory Arguments: src_dir {str} -- USS source directory - dest {str} -- USS dest directory + dest_dir {str} -- USS dest directory temp_path {str} -- Path to the location where the control node transferred data to conv_path {str} -- Path to the converted source directory + force {bool} -- Whether to copy files to an already existing directory + + Raises: + CopyOperationError -- When copying into the directory fails. 
Returns: {str} -- Destination where the directory was copied to """ + if temp_path: + temp_path = "{0}/{1}".format( + temp_path, + os.path.basename(os.path.normpath(src_dir)) + ) new_src_dir = temp_path or conv_path or src_dir - if os.path.exists(dest_dir): - try: - shutil.rmtree(dest_dir) - except Exception as err: - self.fail_json( - msg="Unable to delete pre-existing directory {0}".format(dest_dir), - stdout=str(err), - ) + new_src_dir = os.path.normpath(new_src_dir) + + changed_files, original_permissions = self._get_changed_files(new_src_dir, dest_dir) + try: - shutil.copytree(new_src_dir, dest_dir) + dest = shutil.copytree(new_src_dir, dest_dir, dirs_exist_ok=force) + + # Restoring permissions for preexisting files and subdirectories. + for filepath, permissions in original_permissions: + mode = "0{0:o}".format(stat.S_IMODE(permissions)) + self.module.set_mode_if_different(os.path.join(dest, filepath), mode, False) except Exception as err: - self.fail_json( - msg="Error while copying data to destination directory {0}".format( - dest_dir - ), + raise CopyOperationError( + msg="Error while copying data to destination directory {0}".format(dest_dir), stdout=str(err), ) - return dest_dir + return dest, changed_files + + def _get_changed_files(self, src, dest): + """Traverses a source directory and gets all the paths to files and + subdirectories that got copied into a destination. + + Arguments: + src (str) -- Path to the directory where files are copied from. + dest (str) -- Path to the directory where files are copied into. + + Returns: + tuple -- A list of paths for all new subdirectories and files that + got copied into dest, and a list of the permissions + for the files and directories already present on the + destination. 
+ """ + original_files = self._walk_uss_tree(dest) if os.path.exists(dest) else [] + copied_files = self._walk_uss_tree(src) + + changed_files = [ + relative_path for relative_path in copied_files + if relative_path not in original_files + ] + + # Creating tuples with (filename, permissions). + original_permissions = [ + (filepath, os.stat(os.path.join(dest, filepath)).st_mode) + for filepath in original_files + ] + + return changed_files, original_permissions + + def _walk_uss_tree(self, dir): + """Walks the tree directory for dir and returns all relative paths + found. + Arguments: + dir (str) -- Path to the directory to traverse. + Returns: + list -- List of relative paths to all content inside dir. + """ + original_working_dir = os.getcwd() + # The function gets relative paths, so it changes the current working + # directory to the root of src. + os.chdir(dir) + paths = [] + + for dirpath, subdirs, files in os.walk(".", True): + paths += [ + os.path.join(dirpath, subdir).replace("./", "") + for subdir in subdirs + ] + paths += [ + os.path.join(dirpath, filepath).replace("./", "") + for filepath in files + ] + + # Returning the current working directory to what it was before to not + # interfere with the rest of the module. + os.chdir(original_working_dir) + + return paths def _mvs_copy_to_uss( self, @@ -1164,6 +1162,9 @@ def _mvs_copy_to_uss( src_ds_type -- Type of source src_member {bool} -- Whether src is a data set member + Raises: + CopyOperationError -- When copying the data set into USS fails. + Keyword Arguments: member_name {str} -- The name of the source data set member """ @@ -1172,16 +1173,16 @@ def _mvs_copy_to_uss( # the same name as the member. 
dest = "{0}/{1}".format(dest, member_name or src) - if src_ds_type in MVS_PARTITIONED and not src_member: + if src_ds_type in data_set.DataSet.MVS_PARTITIONED and not src_member: try: os.mkdir(dest) except FileExistsError: pass try: - if src_member or src_ds_type in MVS_SEQ: + if src_member or src_ds_type in data_set.DataSet.MVS_SEQ: response = datasets._copy(src, dest) if response.rc != 0: - self.fail_json( + raise CopyOperationError( msg="Error while copying source {0} to {1}".format(src, dest), rc=response.rc, stdout=response.stdout_response, @@ -1190,14 +1191,13 @@ def _mvs_copy_to_uss( else: copy.copy_pds2uss(src, dest, is_binary=self.is_binary) except Exception as err: - self.fail_json(msg=str(err)) + raise CopyOperationError(msg=str(err)) class PDSECopyHandler(CopyHandler): def __init__( self, module, - dest_exists, is_binary=False, backup_name=None ): @@ -1207,7 +1207,6 @@ def __init__( Arguments: module {AnsibleModule} -- The AnsibleModule object from currently running module - dest_exists {boolean} -- Whether destination already exists Keyword Arguments: is_binary {bool} -- Whether the data set to be copied contains @@ -1216,7 +1215,6 @@ def __init__( """ super().__init__( module, - dest_exists, is_binary=is_binary, backup_name=backup_name ) @@ -1228,10 +1226,14 @@ def copy_to_pdse( conv_path, dest, src_ds_type, - alloc_vol=None + src_member=None, + dest_member=None, ): """Copy source to a PDS/PDSE or PDS/PDSE member. + Raises: + CopyOperationError -- When copying into a member fails. + Arguments: src {str} -- Path to USS file/directory or data set name. temp_path {str} -- Path to the location where the control node @@ -1239,134 +1241,104 @@ def copy_to_pdse( conv_path {str} -- Path to the converted source file/directory dest {str} -- Name of destination data set src_ds_type {str} -- The type of source - alloc_vol {str} -- The volume where the PDSE should be allocated + src_member {bool, optional} -- Member of the source data set to copy. 
+ dest_member {str, optional} -- Name of destination member in data set """ - new_src = temp_path or conv_path or src + new_src = conv_path or temp_path or src + if src_ds_type == "USS": - if self.dest_exists and not data_set.is_empty(dest): - rc = datasets.delete_members(dest + "(*)") - if rc != 0: - self.fail_json( - msg="Unable to delete data set members for data set {0}".format( - dest - ), - rc=rc, - ) - if src.endswith("/"): - new_src = "{0}/{1}".format( - temp_path, os.path.basename(os.path.dirname(src)) - ) + if os.path.isfile(new_src): + path = os.path.dirname(new_src) + files = [os.path.basename(new_src)] + else: + path, dirs, files = next(os.walk(new_src)) - path, dirs, files = next(os.walk(new_src)) for file in files: - member_name = file[: file.rfind(".")] if "." in file else file - full_file_path = path + "/" + file - self.copy_to_member( - full_file_path, - None, - None, - "{0}({1})".format(dest, member_name), - copy_member=True + full_file_path = os.path.normpath(path + "/" + file) + + if dest_member: + dest_copy_name = "{0}({1})".format(dest, dest_member) + else: + dest_copy_name = "{0}({1})".format(dest, data_set.DataSet.get_member_name_from_file(file)) + + result = self.copy_to_member(full_file_path, dest_copy_name) + + if result["rc"] != 0: + msg = "Unable to copy file {0} to data set member {1}".format(file, dest_copy_name) + raise CopyOperationError( + msg=msg, + rc=result["rc"], + stdout=result["out"], + stderr=result["err"] + ) + elif src_ds_type in data_set.DataSet.MVS_SEQ: + dest_copy_name = "{0}({1})".format(dest, dest_member) + result = self.copy_to_member(new_src, dest_copy_name) + + if result["rc"] != 0: + msg = "Unable to copy data set {0} to data set member {1}".format(new_src, dest_copy_name) + raise CopyOperationError( + msg=msg, + rc=result["rc"], + stdout=result["out"], + stderr=result["err"] ) else: - if is_member_wildcard(src): - members = [] - data_set_base = data_set.extract_dsname(src) - try: - members = list(map(str.strip, 
datasets.list_members(src))) - except AttributeError: - self.exit_json( - note="The src {0} is likely empty. No data was copied".format( - data_set_base - ) - ) - for member in members: - self.copy_to_member( - "{0}({1})".format(data_set_base, member), - None, - None, - "{0}({1})".format(dest, member), - ) + members = [] + src_data_set_name = data_set.extract_dsname(new_src) + + if src_member: + members.append(data_set.extract_member_name(new_src)) else: - if self.dest_exists: - rc = datasets.delete(dest) - if rc != 0: - self.fail_json( - msg="Error while removing existing destination {0}".format( - dest - ), - rc=rc, - ) - self.allocate_model(dest, new_src, vol=alloc_vol) - - dds = dict(OUTPUT=dest, INPUT=new_src) - copy_cmd = " COPY OUTDD=OUTPUT,INDD=INPUT" - rc, out, err = iebcopy(copy_cmd, dds=dds) - if rc != 0: - self.fail_json( - msg="IEBCOPY encountered a problem while copying {0} to {1}".format( - new_src, dest - ), - stdout=out, - stderr=err, - rc=rc, - stdout_lines=out.splitlines(), - stderr_lines=err.splitlines(), - cmd=copy_cmd, + members = datasets.list_members(new_src) + + for member in members: + copy_src = "{0}({1})".format(src_data_set_name, member) + if dest_member: + dest_copy_name = "{0}({1})".format(dest, dest_member) + else: + dest_copy_name = "{0}({1})".format(dest, member) + + result = self.copy_to_member(copy_src, dest_copy_name) + + if result["rc"] != 0: + msg = "Unable to copy data set member {0} to data set member {1}".format(new_src, dest_copy_name) + raise CopyOperationError( + msg=msg, + rc=result["rc"], + stdout=result["out"], + stderr=result["err"] ) def copy_to_member( self, src, - temp_path, - conv_path, - dest, - copy_member=False + dest ): """Copy source to a PDS/PDSE member. The only valid sources are: - USS files - Sequential data sets - PDS/PDSE members - - local files Arguments: src {str} -- Path to USS file or data set name. 
- temp_path {str} -- Path to the location where the control node - transferred data to - conv_path {str} -- Path to the converted source file/directory dest {str} -- Name of destination data set - Keyword Arguments: - copy_member {bool} -- Whether destination specifies a member name. - (default {False}) - Returns: - {str} -- Destination where the member was copied to + dict -- Dictionary containing the return code, stdout, and stderr from + the copy command. """ - is_uss_src = temp_path is not None or conv_path is not None or "/" in src - # if constructing, remove periods from member name - if src and is_uss_src and not copy_member: - dest = "{0}({1})".format(dest, os.path.basename(src).replace(".", "")[0:8]) - - new_src = (temp_path or conv_path or src).replace("$", "\\$") - + src = src.replace("$", "\\$") dest = dest.replace("$", "\\$").upper() opts = dict() - # added -B to remove the 'truncated' error when copying uss->fb/pdse style stuff if self.is_binary: opts["options"] = "-B" - response = datasets._copy(new_src, dest, None, **opts) + response = datasets._copy(src, dest, None, **opts) rc, out, err = response.rc, response.stdout_response, response.stderr_response if rc != 0: - msg = "" - if is_uss_src: - msg = "Unable to copy file {0} to data set member {1}".format(src, dest) - else: - msg = "Unable to copy data set member {0} to {1}".format(src, dest) - # ***************************************************************** # An error occurs while attempting to write a data set member to a # PDSE containing program object members, a PDSE cannot contain @@ -1374,130 +1346,176 @@ def copy_to_member( # resolved by copying the program object with a "-X" flag. 
# ***************************************************************** if "FSUM8976" in err and "EDC5091I" in err: - rc, out, err = self.run_command( - "cp -X \"//'{0}'\" \"//'{1}'\"".format(new_src, dest) - ) - if rc != 0: - self.fail_json(msg=msg, rc=rc, stdout=out, stderr=err) - else: - self.fail_json(msg=msg, rc=rc, stdout=out, stderr=err) + opts["options"] = "-X" + response = datasets._copy(src, dest, None, **opts) + rc, out, err = response.rc, response.stdout_response, response.stderr_response + + return dict( + rc=rc, + out=out, + err=err + ) - return dest.replace("\\", "").upper() - def create_pdse( - self, - src, - dest_name, - size, - src_ds_type, - remote_src=False, - src_vol=None, - alloc_vol=None, - ): - """Create a partitioned data set specified by 'dest_name' +def get_file_record_length(file): + """Gets the longest line length from a file. - Arguments: - src {str} -- Name of the source data set - dest_name {str} -- Name of the data set to be created - size {int} -- The size, in bytes, of the source file - dest_ds_type {str} -- Type of the data set to be created - src_ds_type {str} -- Type of source data set - alloc_vol {str} -- The volume to allocate the PDSE to + Arguments: + file (str) -- Path of the file. - Keyword Arguments: - remote_src {bool} -- Whether source is located on remote system. - (Default {False}) - src_vol {str} -- Volume where source data set is stored. (Default {None}) - """ - rc = out = err = None - if remote_src: - if src_ds_type in MVS_PARTITIONED: - rc = self.allocate_model(dest_name, src, vol=alloc_vol) - - elif src_ds_type in MVS_SEQ: - rc = self._allocate_pdse( - dest_name, src_vol=src_vol, src=src, alloc_vol=alloc_vol - ) + Returns: + int -- Length of the longest line in the file. 
+ """ + max_line_length = 0 - elif os.path.isfile(src): - size = os.stat(src).st_size - rc = self._allocate_pdse(dest_name, size=size) + with open(file, "r") as src_file: + current_line = src_file.readline() - elif os.path.isdir(src): - path, dirs, files = next(os.walk(src)) - if dirs: - self.fail_json( - msg="Subdirectory found in source directory {0}".format(src) - ) - size = sum(os.stat(path + "/" + f).st_size for f in files) - rc = self._allocate_pdse(dest_name, size=size) - else: - rc = self._allocate_pdse(dest_name, src=src, size=size, alloc_vol=alloc_vol) + while current_line: + if len(current_line) > max_line_length: + max_line_length = len(current_line) - if rc != 0: - self.fail_json( - msg="Unable to allocate destination data set {0} to receive {1}".format(dest_name, src), - stdout=out, - stderr=err, - rc=rc, - stdout_lines=out.splitlines() if out else None, - stderr_lines=err.splitlines() if err else None, - ) + current_line = src_file.readline() - def _allocate_pdse( - self, - ds_name, - size=None, - src_vol=None, - src=None, - alloc_vol=None - ): - """Allocate a partitioned extended data set. If 'size' - is provided, allocate PDSE using this given size. If size is not - provided, obtain the 'src' data set size from vtoc and allocate using - that information. 
+ if max_line_length == 0: + max_line_length = 80 - Arguments: - ds_name {str} -- The name of the PDSE to allocate + return max_line_length - Keyword Arguments: - size {int} -- The size, in bytes, of the allocated PDSE - src {str} -- The name of the source data set from which to get the size - src_vol {str} -- Volume of the source data set - allc_vol {str} -- The volume where PDSE should be allocated - """ - rc = -1 - recfm = "FB" - lrecl = 80 - alloc_size = size - if not alloc_size: - if src_vol: - vtoc_info = vtoc.get_data_set_entry(src, src_vol) - tracks = int(vtoc_info.get("last_block_pointer").get("track")) - blocks = int(vtoc_info.get("last_block_pointer").get("block")) - blksize = int(vtoc_info.get("block_size")) - bytes_per_trk = 56664 - alloc_size = (tracks * bytes_per_trk) + (blocks * blksize) - recfm = vtoc_info.get("record_format") or recfm - lrecl = int(vtoc_info.get("record_length")) or lrecl - else: - alloc_size = 5242880 # Use the default 5 Megabytes - - alloc_size = "{0}K".format(str(int(math.ceil(alloc_size / 1024)))) - parms = dict( - name=ds_name, - type="PDSE", - primary_space=alloc_size, - record_format=recfm, - record_length=lrecl - ) - if alloc_vol: - parms['volume'] = alloc_vol - response = datasets._create(**parms) - rc = response.rc +def dump_data_set_member_to_file(data_set_member, is_binary): + """Dumps a data set member into a file in USS. + + Arguments: + data_set_member (str) -- Name of the data set member to dump. + is_binary (bool) -- Whether the data set member contains binary data. + + Returns: + str -- Path of the file in USS that contains the dump of the member. + + Raises: + DataSetMemberAttributeError -- When the call to dcp fails.
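The line-by-line scan in `get_file_record_length` above (track the longest line, fall back to a default LRECL of 80 for empty files) can be exercised standalone. A minimal sketch; `longest_line_length` is an illustrative name, not the module's function:

```python
import os
import tempfile

def longest_line_length(path):
    """Length of the longest line in a text file, trailing newline included.

    Mirrors the fallback above: an empty file yields 80, the default LRECL.
    """
    max_len = 0
    with open(path, "r") as src_file:
        for line in src_file:
            if len(line) > max_len:
                max_len = len(line)
    return max_len if max_len > 0 else 80

# Quick demonstration with throwaway files.
_fd, _path = tempfile.mkstemp()
os.close(_fd)
with open(_path, "w") as _f:
    _f.write("abc\nlonger line here\n")
demo_len = longest_line_length(_path)   # 16 characters plus the newline
os.remove(_path)

_fd, _empty = tempfile.mkstemp()
os.close(_fd)
empty_len = longest_line_length(_empty)
os.remove(_empty)
```

As in the module, the length counts the line terminator, so a record length derived this way leaves room for it.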
+ """ + fd, temp_path = tempfile.mkstemp() + os.close(fd) + + copy_args = dict() + if is_binary: + copy_args["options"] = "-B" + + response = datasets._copy(data_set_member, temp_path, None, **copy_args) + if response.rc != 0 or response.stderr_response: + raise DataSetMemberAttributeError(data_set_member) + + return temp_path + + +def get_data_set_attributes( + name, + size, + is_binary, + record_format="VB", + record_length=1028, + type="SEQ", + volume=None +): + """Returns the parameters needed to allocate a new data set by using a mixture + of default values and user-provided ones. + + Binary data sets will always have "VB" and 1028 as their record format and + record length, respectively. Values provided when calling the function will + be overwritten in this case. + + The default values for record format and record length are taken from the + default values that the cp command uses. Primary space is computed based on + the size provided, and secondary space is 10% of the primary one. + + Block sizes are computed following the recommendations on this documentation + page: https://www.ibm.com/docs/en/zos/2.4.0?topic=options-block-size-blksize + + Arguments: + name (str) -- Name of the new sequential data set. + size (int) -- Number of bytes needed for the new data set. + is_binary (bool) -- Whether or not the data set will have binary data. + record_format (str, optional) -- Type of record format. + record_length (int, optional) -- Record length for the data set. + type (str, optional) -- Type of the new data set. + volume (str, optional) -- Volume where the data set should be allocated. + + Returns: + dict -- Parameters that can be passed into data_set.DataSet.ensure_present + """ + # Calculating the size needed to allocate.
+ space_primary = int(math.ceil((size / 1024))) + space_primary = space_primary + int(math.ceil(space_primary * 0.05)) + space_secondary = int(math.ceil(space_primary * 0.10)) + + # Overwriting record_format and record_length when the data set has binary data. + if is_binary: + record_format = "VB" + record_length = 1028 + + max_block_size = 32760 + if record_format == "FB": + # Computing the biggest possible block size that doesn't exceed + # the maximum size allowed. + block_size = math.floor(max_block_size / record_length) * record_length + else: + block_size = max_block_size + + parms = dict( + name=name, + type=type, + space_primary=space_primary, + space_secondary=space_secondary, + record_format=record_format, + record_length=record_length, + block_size=block_size, + space_type="K" + ) + + if volume: + parms['volumes'] = [volume] - return rc + return parms + + +def create_seq_dataset_from_file( + file, + dest, + force, + is_binary, + volume=None +): + """Creates a new sequential data set with attributes suitable to copy the + contents of a file into it. + + Arguments: + file (str) -- Path of the source file. + dest (str) -- Name of the data set. + force (bool) -- Whether to replace an existing data set. + is_binary (bool) -- Whether the file has binary data. + volume (str, optional) -- Volume where the data set should be. + """ + src_size = os.stat(file).st_size + record_format = record_length = None + + # When src is a binary file, the module will use default attributes + # for the data set, such as a record format of "VB".
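The space and block-size arithmetic in `get_data_set_attributes` above can be reproduced as a standalone helper. A sketch for illustration only; the function name and the written-out 32,760-byte constant are this sketch's, while the rules come from the code above:

```python
import math

MAX_BLOCK_SIZE = 32760  # maximum block size on z/OS, as used above

def seq_allocation_numbers(size_bytes, record_format="VB", record_length=1028):
    """Primary/secondary space (in KB) and block size, per the rules above.

    Primary is the byte size rounded up to KB plus a 5% cushion; secondary
    is 10% of primary. For FB records the block size is the largest multiple
    of the record length that still fits in one block.
    """
    space_primary = int(math.ceil(size_bytes / 1024))
    space_primary = space_primary + int(math.ceil(space_primary * 0.05))
    space_secondary = int(math.ceil(space_primary * 0.10))

    if record_format == "FB":
        block_size = math.floor(MAX_BLOCK_SIZE / record_length) * record_length
    else:
        block_size = MAX_BLOCK_SIZE

    return space_primary, space_secondary, block_size
```

For a 100,000-byte text file with LRECL 80, this yields 103 KB primary, 11 KB secondary and a 32,720-byte block size.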
+ if not is_binary: + record_format = "FB" + record_length = get_file_record_length(file) + + dest_params = get_data_set_attributes( + name=dest, + size=src_size, + is_binary=is_binary, + record_format=record_format, + record_length=record_length, + volume=volume + ) + + data_set.DataSet.ensure_present(replace=force, **dest_params) def backup_data(ds_name, ds_type, backup_name): @@ -1515,31 +1533,92 @@ def backup_data(ds_name, ds_type, backup_name): {str} -- The USS path or data set name where data was backed up """ module = AnsibleModuleHelper(argument_spec={}) + try: if ds_type == "USS": return backup.uss_file_backup(ds_name, backup_name=backup_name) return backup.mvs_file_backup(ds_name, backup_name) except Exception as err: - module.fail_json( - msg=str(err.msg), - stdout=err.stdout, - stderr=err.stderr, - rc=err.rc - ) + module.fail_json(msg=repr(err)) + + +def restore_backup(dest, backup, dest_type, use_backup, volume=None): + """Restores a destination file/directory/data set by using a given backup. + Arguments: + dest (str) -- Name of the destination data set or path of the file/directory. + backup (str) -- Name or path of the backup. + dest_type (str) -- Type of the destination. + use_backup (bool) -- Whether the destination actually created a backup, sometimes the user + tries to use an empty data set, and in that case a new data set is allocated instead + of copied. + volume (str, optional) -- Volume where the data set should be. 
+ """ + volumes = [volume] if volume else None + + if use_backup: + if dest_type == "USS": + if os.path.isfile(backup): + os.remove(dest) + shutil.copy(backup, dest) + else: + shutil.rmtree(dest, ignore_errors=True) + shutil.copytree(backup, dest) + else: + data_set.DataSet.ensure_absent(dest, volumes) + + if dest_type in data_set.DataSet.MVS_VSAM: + repro_cmd = """ REPRO - + INDATASET('{0}') - + OUTDATASET('{1}')""".format(backup.upper(), dest.upper()) + idcams(repro_cmd, authorized=True) + else: + datasets.copy(backup, dest) -def is_compatible(src_type, dest_type, copy_member, src_member): + else: + data_set.DataSet.ensure_absent(dest, volumes) + data_set.DataSet.allocate_model_data_set(dest, backup, volume) + + +def erase_backup(backup, dest_type, volume=None): + """Erases a temporary backup from the system. + + Arguments: + backup (str) -- Name or path of the backup. + dest_type (str) -- Type of the destination. + volume (str, optional) -- Volume where the data set should be. + """ + if dest_type == "USS": + if os.path.isfile(backup): + os.remove(backup) + else: + shutil.rmtree(backup, ignore_errors=True) + else: + volumes = [volume] if volume else None + data_set.DataSet.ensure_absent(backup, volumes) + + +def is_compatible( + src_type, + dest_type, + copy_member, + src_member, + is_src_dir, + is_src_inline +): """Determine whether the src and dest are compatible and src can be copied to dest. Arguments: - src_type {str} -- Type of the source (e.g. PDSE, USS) - dest_type {str} -- Type of destination - src_member {bool} -- Whether src is a data set member - copy_member {bool} -- Whether dest is a data set member + src_type {str} -- Type of the source (e.g. PDSE, USS). + dest_type {str} -- Type of destination. + copy_member {bool} -- Whether dest is a data set member. + src_member {bool} -- Whether src is a data set member. + is_src_dir {bool} -- Whether the src is a USS directory. + is_src_inline {bool} -- Whether the src comes from inline content. 
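`restore_backup` above branches on the destination type and on whether a real backup was taken. The decision tree can be restated as a pure function; the action labels below are invented for this sketch (the real code calls shutil, IDCAMS REPRO or `datasets.copy` directly):

```python
def plan_restore(dest_type, use_backup, backup_is_file=False):
    """Label the branch restore_backup would take (labels are illustrative)."""
    if not use_backup:
        # No backup copy exists (e.g. the destination was an empty data set),
        # so the destination is reallocated using the backup as a model.
        return "reallocate-from-model"
    if dest_type == "USS":
        # Files are copied back directly; directories are replaced wholesale.
        return "copy-file" if backup_is_file else "copy-tree"
    if dest_type in ("KSDS", "ESDS", "RRDS", "LDS"):
        # VSAM restores go through IDCAMS REPRO.
        return "idcams-repro"
    # Sequential and partitioned data sets are restored with a plain copy.
    return "dataset-copy"
```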
Returns: - {bool} -- Whether src can be copied to dest + {bool} -- Whether src can be copied to dest. """ # ******************************************************************** # If the destination does not exist, then obviously it will need @@ -1553,9 +1632,9 @@ def is_compatible(src_type, dest_type, copy_member, src_member): # partitioned data set member, other sequential data sets or USS files. # Anything else is incompatible. # ******************************************************************** - if src_type in MVS_SEQ: + if src_type in data_set.DataSet.MVS_SEQ: return not ( - (dest_type in MVS_PARTITIONED and not copy_member) or dest_type == "VSAM" + (dest_type in data_set.DataSet.MVS_PARTITIONED and not copy_member) or dest_type == "VSAM" ) # ******************************************************************** @@ -1570,18 +1649,98 @@ def is_compatible(src_type, dest_type, copy_member, src_member): # In the second case, the possible targets are USS directories and # other PDS/PDSE. Anything else is incompatible. # ******************************************************************** - elif src_type in MVS_PARTITIONED: + elif src_type in data_set.DataSet.MVS_PARTITIONED: if dest_type == "VSAM": return False if not src_member: - return not (copy_member or dest_type in MVS_SEQ) + return not (copy_member or dest_type in data_set.DataSet.MVS_SEQ) return True + # ******************************************************************** + # If source is a USS file, then the destination can be another USS file, + # a directory, a sequential data set or a partitioned data set member. + # When using the content option, the destination should specify + # a member name if copying into a partitioned data set. + # + # If source is instead a directory, the destination has to be another + # directory or a partitioned data set. 
+ # ******************************************************************** elif src_type == "USS": - return dest_type != "VSAM" + if dest_type in data_set.DataSet.MVS_SEQ or copy_member: + return not is_src_dir + elif dest_type in data_set.DataSet.MVS_PARTITIONED and not copy_member and is_src_inline: + return False + elif dest_type in data_set.DataSet.MVS_VSAM: + return False + else: + return True + # ******************************************************************** + # If source is a VSAM data set, we need to check compatibility between + # all the different types of VSAMs, following the documentation for the + # dest parameter. + # ******************************************************************** else: - return dest_type == "VSAM" + if (dest_type == "KSDS" or dest_type == "ESDS"): + return src_type == "ESDS" or src_type == "KSDS" or src_type == "RRDS" + elif dest_type == "RRDS": + return src_type == "RRDS" + elif dest_type == "LDS": + return src_type == "LDS" + else: + return dest_type == "VSAM" + + +def does_destination_allow_copy( + src, + src_type, + dest, + dest_exists, + member_exists, + dest_type, + is_uss, + force, + volume=None +): + """Checks whether or not the module can copy into the destination + specified. + + Arguments: + src {str} -- Name of the source. + src_type {bool} -- Type of the source (SEQ/PARTITIONED/VSAM/USS). + dest {str} -- Name of the destination. + dest_exists {bool} -- Whether or not the destination exists. + member_exists {bool} -- Whether or not a member in a partitioned destination exists. + dest_type {str} -- Type of the destination (SEQ/PARTITIONED/VSAM/USS). + is_uss {bool} -- Whether or not the destination is inside USS. + force {bool} -- Whether or not the module can replace existing destinations. + volume {str, optional} -- Volume where the destination should be. + + Returns: + bool -- If the module has the permissions needed to create, use or replace + the destination. 
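The VSAM branch of `is_compatible` above encodes a small compatibility matrix. Restated standalone (`vsam_compatible` is an illustrative name, not the module's):

```python
def vsam_compatible(src_type, dest_type):
    """VSAM-to-VSAM compatibility, per the rules above.

    KSDS and ESDS destinations accept KSDS, ESDS and RRDS sources; RRDS and
    LDS destinations only accept sources of their own type; a generic VSAM
    destination accepts any VSAM source.
    """
    if dest_type in ("KSDS", "ESDS"):
        return src_type in ("ESDS", "KSDS", "RRDS")
    if dest_type == "RRDS":
        return src_type == "RRDS"
    if dest_type == "LDS":
        return src_type == "LDS"
    return dest_type == "VSAM"
```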
+ """ + # If the destination is inside USS and the module doesn't have permission to replace it, + # it fails. + if is_uss and dest_exists: + if src_type == "USS" and os.path.isdir(dest) and os.path.isdir(src) and not force: + return False + elif os.path.isfile(dest) and not force: + return False + + # If the destination is a sequential or VSAM data set and is empty, the module will try to use it, + # otherwise, force needs to be True to continue and replace it. + if (dest_type in data_set.DataSet.MVS_SEQ or dest_type in data_set.DataSet.MVS_VSAM) and dest_exists: + is_dest_empty = data_set.DataSet.is_empty(dest, volume) + if not (is_dest_empty or force): + return False + + # When the destination is a partitioned data set, the module will have to be able to replace + # existing members inside of it, if needed. + if dest_type in data_set.DataSet.MVS_PARTITIONED and dest_exists and member_exists and not force: + return False + + return True def get_file_checksum(src): @@ -1604,7 +1763,7 @@ def get_file_checksum(src): while block: hash_digest.update(block) block = infile.read(blksize) - except Exception as err: + except Exception: raise return hash_digest.hexdigest() @@ -1622,9 +1781,8 @@ def cleanup(src_list): dir_list = glob.glob(tmp_dir + "/ansible-zos-copy-payload*") conv_list = glob.glob(tmp_dir + "/converted*") tmp_list = glob.glob(tmp_dir + "/{0}*".format(tmp_prefix)) - tmp_ds = glob.glob(tmp_dir + "/*.*.*.*") - for file in (dir_list + conv_list + tmp_list + tmp_ds + src_list): + for file in (dir_list + conv_list + tmp_list + src_list): try: if file and os.path.exists(file): if os.path.isfile(file): @@ -1657,6 +1815,119 @@ def is_member_wildcard(src): ) +def allocate_destination_data_set( + src, + dest, + src_ds_type, + dest_ds_type, + dest_exists, + force, + is_binary, + dest_data_set=None, + volume=None +): + """ + Allocates a new destination data set to copy into, erasing a preexistent one if + needed. 
+ + Arguments: + src (str) -- Name of the source data set, used as a model when appropriate. + dest (str) -- Name of the destination data set. + src_ds_type (str) -- Type of the source data set. + dest_ds_type (str) -- Type of the destination data set. + dest_exists (bool) -- Whether the destination data set already exists. + force (bool) -- Whether to replace an existing data set. + is_binary (bool) -- Whether the data set will contain binary data. + dest_data_set (dict, optional) -- Parameters containing a full definition + of the new data set; they will take precedence over any other allocation logic. + volume (str, optional) -- Volume where the data set should be allocated. + + Returns: + bool -- True if the data set was created, False otherwise. + """ + src_name = data_set.extract_dsname(src) + is_dest_empty = data_set.DataSet.is_empty(dest) if dest_exists else True + + # Only replace an existing data set when it's not empty. We don't know whether an + # empty data set was created for the user by an admin/operator, and the user may not + # have permissions to create new data sets. + # These rules assume that source and destination types are compatible. + if dest_exists and is_dest_empty: + return False + + # Parameters given by the user take precedence over any other allocation logic. + if dest_data_set: + dest_params = dest_data_set + dest_params["name"] = dest + data_set.DataSet.ensure_present(replace=force, **dest_params) + elif dest_ds_type in data_set.DataSet.MVS_SEQ: + volumes = [volume] if volume else None + data_set.DataSet.ensure_absent(dest, volumes=volumes) + + if src_ds_type == "USS": + # Use the temp file when a local file was copied over with sftp.
+ create_seq_dataset_from_file(src, dest, force, is_binary, volume=volume) + elif src_ds_type in data_set.DataSet.MVS_SEQ: + data_set.DataSet.allocate_model_data_set(ds_name=dest, model=src_name, vol=volume) + else: + temp_dump = None + try: + # Dumping the member into a file in USS to compute the record length and + # size for the new data set. + temp_dump = dump_data_set_member_to_file(src, is_binary) + create_seq_dataset_from_file(temp_dump, dest, force, is_binary, volume=volume) + finally: + if temp_dump: + os.remove(temp_dump) + elif dest_ds_type in data_set.DataSet.MVS_PARTITIONED and not dest_exists: + # Taking the src as model if it's also a PDSE. + if src_ds_type in data_set.DataSet.MVS_PARTITIONED: + data_set.DataSet.allocate_model_data_set(ds_name=dest, model=src_name, vol=volume) + elif src_ds_type in data_set.DataSet.MVS_SEQ: + src_attributes = datasets.listing(src_name)[0] + # The size returned by listing is in bytes. + size = int(src_attributes.total_space) + record_format = src_attributes.recfm + record_length = int(src_attributes.lrecl) + + dest_params = get_data_set_attributes(dest, size, is_binary, record_format=record_format, record_length=record_length, type="PDSE", volume=volume) + data_set.DataSet.ensure_present(replace=force, **dest_params) + elif src_ds_type == "USS": + if os.path.isfile(src): + # This is almost the same as allocating a sequential dataset. + size = os.stat(src).st_size + record_format = record_length = None + + if not is_binary: + record_format = "FB" + record_length = get_file_record_length(src) + + dest_params = get_data_set_attributes( + dest, + size, + is_binary, + record_format=record_format, + record_length=record_length, + type="PDSE", + volume=volume + ) + else: + # TODO: decide on whether to compute the longest file record length and use that for the whole PDSE. 
+ size = sum(os.stat("{0}/{1}".format(src, member)).st_size for member in os.listdir(src)) + # This PDSE will be created with record format VB and a record length of 1028. + dest_params = get_data_set_attributes(dest, size, is_binary, type="PDSE", volume=volume) + + data_set.DataSet.ensure_present(replace=force, **dest_params) + elif dest_ds_type in data_set.DataSet.MVS_VSAM: + # If dest_data_set is not available, always create the destination using the src VSAM + # as a model. + volumes = [volume] if volume else None + data_set.DataSet.ensure_absent(dest, volumes=volumes) + data_set.DataSet.allocate_model_data_set(ds_name=dest, model=src_name, vol=volume) + + return True + + def run_module(module, arg_def): # ******************************************************************** # Verify the validity of module args. BetterArgParser raises ValueError @@ -1668,7 +1939,8 @@ def run_module(module, arg_def): except ValueError as err: # Bypass BetterArgParser when src is of the form 'SOME.DATA.SET(*)' if not is_member_wildcard(module.params["src"]): - module.fail_json(msg="Parameter verification failed", stderr=str(err)) + module.fail_json( + msg="Parameter verification failed", stderr=str(err)) # ******************************************************************** # Initialize module variables # ******************************************************************** @@ -1686,20 +1958,18 @@ def run_module(module, arg_def): volume = module.params.get('volume') is_uss = module.params.get('is_uss') is_pds = module.params.get('is_pds') + is_src_dir = module.params.get('is_src_dir') is_mvs_dest = module.params.get('is_mvs_dest') temp_path = module.params.get('temp_path') alloc_size = module.params.get('size') src_member = module.params.get('src_member') copy_member = module.params.get('copy_member') - destination_dataset = module.params.get('destination_dataset') + force = module.params.get('force') - dd_type = destination_dataset.get("dd_type") or "BASIC" - space_primary = 
destination_dataset.get("space_primary") - space_secondary = destination_dataset.get("space_secondary") - space_type = destination_dataset.get("space_type") - record_format = destination_dataset.get("record_format") - record_length = destination_dataset.get("record_length") - block_size = destination_dataset.get("block_size") + dest_data_set = module.params.get('dest_data_set') + if dest_data_set: + if volume: + dest_data_set["volumes"] = [volume] # ******************************************************************** # When copying to and from a data set member, 'dest' or 'src' will be @@ -1711,8 +1981,7 @@ def run_module(module, arg_def): src_name = data_set.extract_dsname(src) if src else None member_name = data_set.extract_member_name(src) if src_member else None - conv_path = None - src_ds_vol = src_ds_type = dest_ds_type = dest_exists = None + conv_path = src_ds_vol = src_ds_type = dest_ds_type = dest_exists = None res_args = dict() # ******************************************************************** @@ -1731,31 +2000,69 @@ def run_module(module, arg_def): mode = "0{0:o}".format(stat.S_IMODE(os.stat(src).st_mode)) # ******************************************************************** - # 1. Use DataSetUtils to determine the src and dest data set type. - # 2. For source data sets, find its volume, which will be used later. + # Use the DataSet class to gather the type and volume of the source + # and destination datasets, if needed. # ******************************************************************** + dest_member_exists = False + src_has_subdirs = False try: - if is_uss: - dest_ds_type = "USS" - dest_exists = os.path.exists(dest) - else: - dest_du = data_set.DataSetUtils(dest_name) - dest_exists = dest_du.exists() - if copy_member: - dest_exists = dest_exists and dest_du.member_exists(dest_member) - dest_ds_type = dest_du.ds_type() + # If temp_path, the plugin has copied a file from the controller to USS. 
if temp_path or "/" in src: src_ds_type = "USS" else: - src_du = data_set.DataSetUtils(src_name) - if src_du.exists(): - if src_member and not src_du.member_exists(member_name): + if data_set.DataSet.data_set_exists(src_name): + if src_member and not data_set.DataSet.data_set_member_exists(src): raise NonExistentSourceError(src) - src_ds_type = src_du.ds_type() - src_ds_vol = src_du.volume() + src_ds_type = data_set.DataSet.data_set_type(src_name) + src_ds_vol = data_set.DataSet.data_set_volume(src_name) + else: raise NonExistentSourceError(src) + # An empty VSAM will throw an error when IDCAMS tries to open it to copy + # the contents. + if src_ds_type in data_set.DataSet.MVS_VSAM and data_set.DataSet.is_empty(src_name): + module.exit_json( + note="The source VSAM {0} is likely empty. No data was copied.".format(src_name), + changed=False, + dest=dest + ) + + if encoding: + module.fail_json( + msg="Encoding conversion is only valid for USS source" + ) + + if is_uss: + dest_ds_type = "USS" + dest_exists = os.path.exists(dest) + + if dest_exists and not os.access(dest, os.W_OK): + module.fail_json(msg="Destination {0} is not writable".format(dest)) + else: + dest_exists = data_set.DataSet.data_set_exists(dest_name, volume) + dest_ds_type = data_set.DataSet.data_set_type(dest_name, volume) + + # dest_data_set.type overrides `dest_ds_type` given precedence rules + if dest_data_set and dest_data_set.get("type"): + dest_ds_type = dest_data_set.get("type") + + if dest_ds_type in data_set.DataSet.MVS_PARTITIONED: + # Checking if the members that would be created from the directory files + # are already present on the system. 
+ if copy_member: + dest_member_exists = dest_exists and data_set.DataSet.data_set_member_exists(dest) + elif src_ds_type == "USS": + if temp_path: + root_dir = "{0}/{1}".format(temp_path, os.path.basename(os.path.normpath(src))) + root_dir = os.path.normpath(root_dir) + else: + root_dir = src + + dest_member_exists = dest_exists and data_set.DataSet.files_in_data_set_members(root_dir, dest) + elif src_ds_type in data_set.DataSet.MVS_PARTITIONED: + dest_member_exists = dest_exists and data_set.DataSet.data_set_shared_members(src, dest) + except Exception as err: module.fail_json(msg=str(err)) @@ -1763,8 +2070,16 @@ def run_module(module, arg_def): # Some src and dest combinations are incompatible. For example, it is # not possible to copy a PDS member to a VSAM data set or a USS file # to a PDS. Perform these sanity checks. + # Note: dest_ds_type can also be passed from dest_data_set.type # ******************************************************************** - if not is_compatible(src_ds_type, dest_ds_type, copy_member, src_member): + if not is_compatible( + src_ds_type, + dest_ds_type, + copy_member, + src_member, + is_src_dir, + (src_ds_type == "USS" and src is None) + ): module.fail_json( msg="Incompatible target type '{0}' for source '{1}'".format( dest_ds_type, src_ds_type @@ -1774,13 +2089,10 @@ def run_module(module, arg_def): # ******************************************************************** # Backup should only be performed if dest is an existing file or # data set. Otherwise ignored. - # - # If destination exists and the 'force' parameter is set to false, - # the module exits with a note to the user. 
# ******************************************************************** if dest_exists: if backup or backup_name: - if dest_ds_type in MVS_PARTITIONED and data_set.is_empty(dest_name): + if dest_ds_type in data_set.DataSet.MVS_PARTITIONED and data_set.DataSet.is_empty(dest_name): # The partitioned data set is empty res_args["note"] = "Destination is empty, backup request ignored" else: @@ -1797,151 +2109,186 @@ def run_module(module, arg_def): # # USS files and sequential data sets are not required to be explicitly # created; they are automatically created by the Python/ZOAU API. + # + # is_pds is determined in the action plugin when src is a directory and destination + # is a data set (is_src_dir and is_mvs_dest are true) # ******************************************************************** else: if not dest_ds_type: if ( is_pds or copy_member - or (src_ds_type in MVS_PARTITIONED and (not src_member) and is_mvs_dest) + or (src_ds_type in data_set.DataSet.MVS_PARTITIONED and (not src_member) and is_mvs_dest) or (src and os.path.isdir(src) and is_mvs_dest) ): dest_ds_type = "PDSE" - pch = PDSECopyHandler(module, dest_exists, backup_name=backup_name) - pch.create_pdse( - src, - dest_name, - alloc_size, - src_ds_type, - remote_src=remote_src, - src_vol=src_ds_vol, - alloc_vol=volume, - ) - elif src_ds_type == "VSAM": - dest_ds_type = "VSAM" + elif src_ds_type in data_set.DataSet.MVS_VSAM: + dest_ds_type = src_ds_type elif not is_uss: dest_ds_type = "SEQ" + # Filling in the type in dest_data_set in case the user didn't specify it in + # the playbook. 
+ if dest_data_set: + dest_data_set["type"] = dest_ds_type + res_args["changed"] = True + if not does_destination_allow_copy( + src, + is_src_dir, + dest_name, + dest_exists, + dest_member_exists, + dest_ds_type, + is_uss, + force, + volume + ): + module.fail_json(msg="{0} already exists on the system, unable to overwrite unless force=True is specified.".format(dest)) + + # Creating an emergency backup or an empty data set to use as a model to + # be able to restore the destination in case the copy fails. + if dest_exists: + if is_uss or not data_set.DataSet.is_empty(dest_name): + use_backup = True + if is_uss: + emergency_backup = tempfile.mkdtemp() + emergency_backup = backup_data(dest, dest_ds_type, emergency_backup) + else: + emergency_backup = backup_data(dest, dest_ds_type, None) + # If dest is an empty data set, instead create a data set to + # use as a model when restoring. + else: + use_backup = False + emergency_backup = data_set.DataSet.temp_name() + data_set.DataSet.allocate_model_data_set(emergency_backup, dest_name) + + try: + if not is_uss: + res_args["changed"] = allocate_destination_data_set( + temp_path or src, + dest_name, src_ds_type, + dest_ds_type, + dest_exists, + force, + is_binary, + dest_data_set=dest_data_set, + volume=volume + ) + except Exception as err: + if dest_exists: + restore_backup(dest_name, emergency_backup, dest_ds_type, use_backup) + erase_backup(emergency_backup, dest_ds_type) + module.fail_json(msg="Unable to allocate destination data set: {0}".format(str(err))) + # ******************************************************************** # Encoding conversion is only valid if the source is a local file, # local directory or a USS file/directory. 
# ******************************************************************** copy_handler = CopyHandler( module, - dest_exists, is_binary=is_binary, backup_name=backup_name ) - if encoding: - if remote_src and src_ds_type != "USS": - copy_handler.fail_json( - msg="Encoding conversion is only valid for USS source" - ) - # 'conv_path' points to the converted src file or directory - if is_mvs_dest: - encoding["to"] = encode.Defaults.DEFAULT_EBCDIC_MVS_CHARSET - conv_path = copy_handler.convert_encoding(src, temp_path, encoding) + try: + if encoding: + # 'conv_path' points to the converted src file or directory + if is_mvs_dest: + encoding["to"] = encode.Defaults.DEFAULT_EBCDIC_MVS_CHARSET - # ------------------------------- o ----------------------------------- - # Copy to USS file or directory - # --------------------------------------------------------------------- - if is_uss: - if dest_exists and not os.access(dest, os.W_OK): - copy_handler.fail_json(msg="Destination {0} is not writable".format(dest)) + conv_path = copy_handler.convert_encoding(src, temp_path, encoding) - uss_copy_handler = USSCopyHandler( - module, - dest_exists, - is_binary=is_binary, - common_file_args=dict(mode=mode, group=group, owner=owner), - backup_name=backup_name, - ) - dest = uss_copy_handler.copy_to_uss( - src, - dest, - conv_path, - temp_path, - src_ds_type, - src_member, - member_name - ) - res_args['size'] = os.stat(dest).st_size - remote_checksum = dest_checksum = None - if validate: - try: - remote_checksum = get_file_checksum(temp_path or src) - dest_checksum = get_file_checksum(dest) - except Exception as err: - copy_handler.fail_json( - msg="Unable to calculate checksum", stderr=str(err) - ) - res_args["checksum"] = remote_checksum - res_args["changed"] = ( - res_args.get("changed") or remote_checksum != dest_checksum + # ------------------------------- o ----------------------------------- + # Copy to USS file or directory + # 
--------------------------------------------------------------------- + if is_uss: + uss_copy_handler = USSCopyHandler( + module, + is_binary=is_binary, + common_file_args=dict(mode=mode, group=group, owner=owner), + backup_name=backup_name, ) - else: + + original_checksum = None + if dest_exists: + original_checksum = get_file_checksum(dest) + + dest = uss_copy_handler.copy_to_uss( + src, + dest, + conv_path, + temp_path, + src_ds_type, + src_member, + member_name, + force + ) + res_args['size'] = os.stat(dest).st_size + remote_checksum = dest_checksum = None + try: remote_checksum = get_file_checksum(temp_path or src) dest_checksum = get_file_checksum(dest) - except Exception as err: - pass - res_args["changed"] = ( - res_args.get("changed") or remote_checksum != dest_checksum - ) - # ------------------------------- o ----------------------------------- - # Copy to sequential data set - # --------------------------------------------------------------------- - elif dest_ds_type in MVS_SEQ: - copy_handler.copy_to_seq( - src, - temp_path, - conv_path, - dest, - src_ds_type, - alloc_vol=volume, - type=dd_type, - space_primary=space_primary, - space_secondary=space_secondary, - space_type=space_type, - record_format=record_format, - record_length=record_length, - block_size=block_size, - ) - dest = dest.upper() + if validate: + res_args["checksum"] = dest_checksum - # ------------------------------- o ----------------------------------- - # Copy to PDS/PDSE - # --------------------------------------------------------------------- - elif dest_ds_type in MVS_PARTITIONED: - if not remote_src and not copy_member and os.path.isdir(temp_path): - temp_path = os.path.join(temp_path, os.path.basename(src)) + if remote_checksum != dest_checksum: + raise CopyOperationError(msg="Validation failed for copied files") - pdse_copy_handler = PDSECopyHandler( - module, dest_exists, is_binary=is_binary, backup_name=backup_name - ) - if copy_member or os.path.isfile(temp_path or src) or 
src_member: - dest = pdse_copy_handler.copy_to_member( + res_args["changed"] = ( + res_args.get("changed") or dest_checksum != original_checksum or os.path.isdir(dest) + ) + except Exception as err: + if validate: + raise CopyOperationError(msg="Unable to calculate checksum", stderr=str(err)) + + # ------------------------------- o ----------------------------------- + # Copy to sequential data set (PS / SEQ) + # --------------------------------------------------------------------- + elif dest_ds_type in data_set.DataSet.MVS_SEQ: + copy_handler.copy_to_seq( src, temp_path, conv_path, dest, - copy_member=copy_member ) - else: + res_args["changed"] = True + dest = dest.upper() + + # --------------------------------------------------------------------- + # Copy to PDS/PDSE + # --------------------------------------------------------------------- + elif dest_ds_type in data_set.DataSet.MVS_PARTITIONED: + if not remote_src and not copy_member and os.path.isdir(temp_path): + temp_path = os.path.join(temp_path, os.path.basename(src)) + + pdse_copy_handler = PDSECopyHandler( + module, is_binary=is_binary, backup_name=backup_name + ) + pdse_copy_handler.copy_to_pdse( - src, temp_path, conv_path, dest, src_ds_type, alloc_vol=volume + src, temp_path, conv_path, dest_name, src_ds_type, src_member=src_member, dest_member=dest_member ) - dest = dest.upper() + res_args["changed"] = True + dest = dest.upper() - # ------------------------------- o ----------------------------------- - # Copy to VSAM data set - # --------------------------------------------------------------------- - else: - copy_handler.copy_to_vsam(src, dest, alloc_vol=volume) + # ------------------------------- o ----------------------------------- + # Copy to VSAM data set + # --------------------------------------------------------------------- + else: + copy_handler.copy_to_vsam(src, dest) + res_args["changed"] = True + + except CopyOperationError as err: + if dest_exists: + restore_backup(dest_name, 
emergency_backup, dest_ds_type, use_backup) + raise err + finally: + if dest_exists: + erase_backup(emergency_backup, dest_ds_type) res_args.update( dict( @@ -1972,36 +2319,43 @@ def main(): ignore_sftp_stderr=dict(type='bool', default=False), validate=dict(type='bool', default=False), volume=dict(type='str', required=False), - destination_dataset=dict( + dest_data_set=dict( type='dict', required=False, options=dict( - dd_type=dict( - arg_type='str', - default='BASIC', - choices=['BASIC', 'KSDS', 'ESDS', 'RRDS', 'LDS', 'SEQ', 'PDS', 'PDSE', 'MEMBER'], - required=False, + type=dict( + type='str', + choices=['BASIC', 'KSDS', 'ESDS', 'RRDS', + 'LDS', 'SEQ', 'PDS', 'PDSE', 'MEMBER'], + required=True, ), - space_primary=dict(arg_type='int', default=5, required=False), - space_secondary=dict(arg_type='int', default=3, required=False), + space_primary=dict( + type='int', required=False), + space_secondary=dict( + type='int', required=False), space_type=dict( - arg_type='str', - default='M', + type='str', choices=['K', 'M', 'G', 'CYL', 'TRK'], required=False, ), record_format=dict( - arg_type='str', - default='FB', + type='str', choices=["FB", "VB", "FBA", "VBA", "U"], required=False ), - record_length=dict(type='int', default=80, required=False), + record_length=dict(type='int', required=False), block_size=dict(type='int', required=False), + directory_blocks=dict(type="int", required=False), + key_offset=dict(type="int", required=False), + key_length=dict(type="int", required=False), + sms_storage_class=dict(type="str", required=False), + sms_data_class=dict(type="str", required=False), + sms_management_class=dict(type="str", required=False), ) ), is_uss=dict(type='bool'), is_pds=dict(type='bool'), + is_src_dir=dict(type='bool'), is_mvs_dest=dict(type='bool'), size=dict(type='int'), temp_path=dict(type='str'), @@ -2016,7 +2370,10 @@ def main(): module.deprecate( msg='Support for configuring sftp_port has been deprecated.' 
'Configuring the SFTP port is now managed through Ansible connection plugins option \'ansible_port\'', - date='2021-08-01', collection_name='ibm.ibm_zos_core') + version='1.5.0', + date='2021-08-01', + collection_name='ibm.ibm_zos_core') + # Date and collection are supported in Ansible 2.9.10 or later  arg_def = dict( src=dict(arg_type='data_set_or_path', required=False), @@ -2032,17 +2389,24 @@ def main(): sftp_port=dict(arg_type='int', required=False), volume=dict(arg_type='str', required=False), - destination_dataset=dict( + dest_data_set=dict( arg_type='dict', required=False, options=dict( - dd_type=dict(arg_type='str', default='BASIC', required=False), - space_primary=dict(arg_type='int', default=5, required=False), - space_secondary=dict(arg_type='int', default=3, required=False), - space_type=dict(arg_type='str', default='TRK', required=False), - record_format=dict(arg_type='str', default='FB', required=False), - record_length=dict(arg_type='int', default=80, required=False), + type=dict(arg_type='str', required=True), + space_primary=dict(arg_type='int', required=False), + space_secondary=dict( + arg_type='int', required=False), + space_type=dict(arg_type='str', required=False), + record_format=dict( + arg_type='str', required=False), block_size=dict(arg_type='int', required=False), + directory_blocks=dict(arg_type="int", required=False), + key_offset=dict(arg_type="int", required=False), + key_length=dict(arg_type="int", required=False), + sms_storage_class=dict(arg_type="str", required=False), + sms_data_class=dict(arg_type="str", required=False), + sms_management_class=dict(arg_type="str", required=False), ) ), ) @@ -2086,6 +2450,9 @@ def main(): try: res_args, temp_path, conv_path = run_module(module, arg_def) module.exit_json(**res_args) + except CopyOperationError as err: + cleanup([]) + module.fail_json(**(err.json_args)) finally: cleanup([temp_path, conv_path]) @@ -2100,9 +2467,38 @@ def __init__(self, src, f_code, t_code): class 
NonExistentSourceError(Exception): def __init__(self, src): - self.msg = "Source {0} does not exist".format(src) + self.msg = "Source data set {0} does not exist".format(src) + super().__init__(self.msg) + + +class DataSetMemberAttributeError(Exception): + def __init__(self, src): + self.msg = "Unable to get size and record length of member {0}".format(src) super().__init__(self.msg) +class CopyOperationError(Exception): + def __init__( + self, + msg, + rc=None, + stdout=None, + stderr=None, + stdout_lines=None, + stderr_lines=None, + cmd=None + ): + self.json_args = dict( + msg=msg, + rc=rc, + stdout=stdout, + stderr=stderr, + stdout_lines=stdout_lines, + stderr_lines=stderr_lines, + cmd=cmd, + ) + super().__init__(msg) + + if __name__ == "__main__": main() diff --git a/plugins/modules/zos_fetch.py b/plugins/modules/zos_fetch.py index 1dce3ec5e..4584890fa 100644 --- a/plugins/modules/zos_fetch.py +++ b/plugins/modules/zos_fetch.py @@ -1,7 +1,7 @@ #!/usr/bin/python # -*- coding: utf-8 -*- -# Copyright (c) IBM Corporation 2019, 2020, 2021 +# Copyright (c) IBM Corporation 2019, 2020, 2021, 2022 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at @@ -24,8 +24,8 @@ short_description: Fetch data from z/OS description: - This module fetches a UNIX System Services (USS) file, - PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, or - KSDS (VSAM data set) from a remote z/OS system. + PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, or + KSDS (VSAM data set) from a remote z/OS system. - When fetching a sequential data set, the destination file name will be the same as the data set name. 
- When fetching a PDS or PDSE, the destination will be a directory with the @@ -39,8 +39,8 @@ options: src: description: - - Name of a UNIX System Services (USS) file, PS (sequential data set), PDS, - PDSE, member of a PDS, PDSE or KSDS (VSAM data set). + - Name of a UNIX System Services (USS) file, PS (sequential data set), PDS, + PDSE, member of a PDS, PDSE or KSDS (VSAM data set). - USS file paths should be absolute paths. required: true type: str @@ -577,7 +577,10 @@ def run_module(): module.deprecate( msg='Support for configuring sftp_port has been deprecated.' 'Configuring the SFTP port is now managed through Ansible connection plugins option \'ansible_port\'', - date='2021-08-01', collection_name='ibm.ibm_zos_core') + version='1.5.0', + date='2021-08-01', + collection_name='ibm.ibm_zos_core') + # Date and collection are supported in Ansible 2.9.10 or later # ********************************************************** # # Verify paramater validity # diff --git a/requirements-zos-core.txt b/requirements.txt similarity index 100% rename from requirements-zos-core.txt rename to requirements.txt diff --git a/tests/dependencyfinder.py b/tests/dependencyfinder.py index 9e77be7c7..be039c918 100755 --- a/tests/dependencyfinder.py +++ b/tests/dependencyfinder.py @@ -1,6 +1,6 @@ #!/usr/bin/env python3 -# Copyright (c) IBM Corporation 2020 +# Copyright (c) IBM Corporation 2020, 2022 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at @@ -23,6 +23,22 @@ class ArtifactManager(object): + """ + Dependency analyzer will review modules and action plugin changes and then + discover which tests should be run. In addition to mapping a test suite, + whether it is functional or unit, it will also see if the module/plugin + is used in test cases. In the event a module is used in another test suite + unrelated to the module's test suite, it will also be returned. 
This ensures + that module changes don't break test suites dependent on a module. + + Usage (minimal) example: + python dependencyfinder.py -p .. -b origin/dev -m + + Note: It is possible that only test cases are modified, without a module + or modules; in that case, without a module pairing, no test cases will be + returned. It's best to run full regression in that case until this can be + updated to support detecting only test cases. + """ artifacts = [] def __init__(self, artifacts=None): @@ -198,6 +214,14 @@ def __init__(self, name, source, path, dependencies=None): if dependencies: self.dependencies = dependencies + def __str__(self): + """ + Print the Artifact class instance variables in a pretty manner. + """ + return "name: {0},\nsource: {1},\npath: {2}\n".format(self.name, + self.source, + self.path) + @classmethod def from_path(cls, path): """Instantiate an Artifact based on provided path. @@ -465,7 +489,7 @@ def get_changed_files(path, branch="origin/dev"): stdout, stderr = get_diff.communicate() stdout = stdout.decode("utf-8") if get_diff.returncode > 0: - raise RuntimeError("Could not acquire change list") + raise RuntimeError("Could not acquire change list, error = [{0}]".format(stderr)) if stdout: changed_files = [ x.split("\t")[-1] for x in stdout.split("\n") if "D" not in x.split("\t")[0] @@ -473,6 +497,61 @@ def get_changed_files(path, branch="origin/dev"): return changed_files +def get_changed_plugins(path, branch="origin/dev"): + """Get a list of changed modules or plugins relative to a specific branch. + + Args: + branch (str, optional): The branch to compare to. Defaults to "origin/dev". + + Raises: + RuntimeError: When git request-pull fails. + + Returns: + list[str]: A list of changed file paths. 
+ """ + changed_plugins_modules = [] + get_diff_pr = subprocess.Popen( + ["git", "request-pull", branch, "./"], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + cwd=path, + ) + + stdout, stderr = get_diff_pr.communicate() + stdout = stdout.decode("utf-8") + + if get_diff_pr.returncode > 0: + raise RuntimeError("Could not acquire change list, error = [{0}]".format(stderr)) + if stdout: + for line in stdout.split("\n"): + path_corrected_line = None + if "plugins/action/" in line: + path_corrected_line = line.split("|", 1)[0].strip() + if "plugins/modules/" in line: + path_corrected_line = line.split("|", 1)[0].strip() + if "functional/modules/" in line: + if re.match('...', line): + line = line.replace("...", "tests") + path_corrected_line = line.split("|", 1)[0].strip() + if "plugins/module_utils/" in line: + path_corrected_line = line.split("|", 1)[0].strip() + if "unit/" in line: + if re.match('...', line): + line = line.replace("...", "tests") + path_corrected_line = line.split("|", 1)[0].strip() + if path_corrected_line is not None: + changed_plugins_modules.append(path_corrected_line) + + # # There can be the case where only test cases are updated, question is + # # should this be default behavior only when no modules are edited + # if not changed_plugins_modules: + # for line in stdout.split("\n"): + # if "tests/functional/modules/" in line: + # changed_plugins_modules.append(line.split("|", 1)[0].strip()) + + return changed_plugins_modules + + def parse_arguments(): """Parse and return command line arguments. @@ -511,6 +590,14 @@ def parse_arguments(): action="store_true", help="Print one test per line to stdout. 
Default behavior prints all tests on same line.", ) + parser.add_argument( + "-m", + "--minimum", + required=False, + action="store_true", + default=False, + help="Detect only the changes from the branch request-pull.", + ) args = parser.parse_args() return args @@ -519,13 +606,15 @@ def parse_arguments(): # TODO: add logic to only grab necessary tests that are impacted by changes args = parse_arguments() - global collection_to_use collection_to_use = args.collection artifacts = build_artifacts_from_collection(args.path) all_artifact_manager = ArtifactManager(artifacts) - changed_files = get_changed_files(args.path, args.branch) + if args.minimum: + changed_files = get_changed_plugins(args.path, args.branch) + else: + changed_files = get_changed_files(args.path, args.branch) changed_artifacts = [] for file in changed_files: diff --git a/tests/functional/modules/test_zos_apf_func.py b/tests/functional/modules/test_zos_apf_func.py index e5b123d3c..da7a885da 100644 --- a/tests/functional/modules/test_zos_apf_func.py +++ b/tests/functional/modules/test_zos_apf_func.py @@ -277,7 +277,8 @@ def test_add_already_present(ansible_zos_module): results = hosts.all.zos_apf(**test_info) pprint(vars(results)) for result in results.contacted.values(): - assert result.get("rc") == 16 + # Return code 16 if ZOAU < 1.2.0 and RC is 8 if ZOAU >= 1.2.0 + assert result.get("rc") == 16 or result.get("rc") == 8 test_info['state'] = 'absent' hosts.all.zos_apf(**test_info) clean_test_env(hosts, test_info) @@ -291,7 +292,8 @@ def test_del_not_present(ansible_zos_module): results = hosts.all.zos_apf(**test_info) pprint(vars(results)) for result in results.contacted.values(): - assert result.get("rc") == 16 + # Return code 16 if ZOAU < 1.2.0 and RC is 8 if ZOAU >= 1.2.0 + assert result.get("rc") == 16 or result.get("rc") == 8 clean_test_env(hosts, test_info) @@ -302,7 +304,8 @@ def test_add_not_found(ansible_zos_module): results = hosts.all.zos_apf(**test_info) pprint(vars(results)) for result 
in results.contacted.values(): - assert result.get("rc") == 16 + # Return code 16 if ZOAU < 1.2.0 and RC is 8 if ZOAU >= 1.2.0 + assert result.get("rc") == 16 or result.get("rc") == 8 def test_add_with_wrong_volume(ansible_zos_module): @@ -314,7 +317,8 @@ def test_add_with_wrong_volume(ansible_zos_module): results = hosts.all.zos_apf(**test_info) pprint(vars(results)) for result in results.contacted.values(): - assert result.get("rc") == 16 + # Return code 16 if ZOAU < 1.2.0 and RC is 8 if ZOAU >= 1.2.0 + assert result.get("rc") == 16 or result.get("rc") == 8 clean_test_env(hosts, test_info) diff --git a/tests/functional/modules/test_zos_copy_func.py b/tests/functional/modules/test_zos_copy_func.py index e593af088..78f0251f9 100644 --- a/tests/functional/modules/test_zos_copy_func.py +++ b/tests/functional/modules/test_zos_copy_func.py @@ -16,8 +16,10 @@ extract_member_name ) +import pytest import os import shutil +import re import tempfile from tempfile import mkstemp @@ -33,6 +35,11 @@ DUMMY DATA ---- LINE 007 ------ """ +VSAM_RECORDS = """00000001A record +00000002A record +00000003A record +""" + # SHELL_EXECUTABLE = "/usr/lpp/rsusr/ported/bin/bash" SHELL_EXECUTABLE = "/bin/sh" TEST_PS = "IMSTESTL.IMS01.DDCHKPT" @@ -50,527 +57,618 @@ def populate_dir(dir_path): infile.write(DUMMY_DATA) -def create_vsam_ksds(ds_name, ansible_zos_module): - hosts = ansible_zos_module - alloc_cmd = """ DEFINE CLUSTER (NAME({0}) - - INDEXED - - RECSZ(80,80) - - TRACKS(1,1) - - KEYS(5,0) - - CISZ(4096) - - VOLUMES(000000) - - FREESPACE(3,3) ) - - DATA (NAME({0}.DATA)) - - INDEX (NAME({0}.INDEX))""".format( - ds_name +def populate_partitioned_data_set(hosts, name, ds_type, members=None): + """Creates a new partitioned data set and inserts records into various + members of it. + + Arguments: + hosts (object) -- Ansible instance(s) that can call modules. + name (str) -- Name of the data set. + ds_type (str) -- Type of the data set (either PDS or PDSE). 
+ members (list, optional) -- List of member names to create. + """ + if not members: + members = ["MEMBER1", "MEMBER2", "MEMBER3"] + ds_list = ["{0}({1})".format(name, member) for member in members] + + hosts.all.zos_data_set(name=name, type=ds_type, state="present") + + for member in ds_list: + hosts.all.shell( + cmd="decho '{0}' '{1}'".format(DUMMY_DATA, member), + executable=SHELL_EXECUTABLE + ) + + +def get_listcat_information(hosts, name, ds_type): + """Runs IDCAMS to get information about a data set. + + Arguments: + hosts (object) -- Ansible instance(s) that can call modules. + name (str) -- Name of the data set. + ds_type (str) -- Type of data set ("SEQ", "PDS", "PDSE", "KSDS"). + """ + if ds_type.upper() == "KSDS": + idcams_input = " LISTCAT ENT('{0}') DATA ALL".format(name) + else: + idcams_input = " LISTCAT ENTRIES('{0}')".format(name) + + return hosts.all.zos_mvs_raw( + program_name="idcams", + auth=True, + dds=[ + dict(dd_output=dict( + dd_name="sysprint", + return_content=dict(type="text") + )), + dict(dd_input=dict( + dd_name="sysin", + content=idcams_input + )) + ] ) - return hosts.all.shell( - cmd="mvscmdauth --pgm=idcams --sysprint=* --sysin=stdin", - executable=SHELL_EXECUTABLE, - stdin=alloc_cmd, + +def create_vsam_data_set(hosts, name, ds_type, add_data=False, key_length=None, key_offset=None): + """Creates a new VSAM data set on the system. + + Arguments: + hosts (object) -- Ansible instance(s) that can call modules. + name (str) -- Name of the VSAM data set. + ds_type (str) -- Type of the VSAM (KSDS, ESDS, RRDS, LDS). + add_data (bool, optional) -- Whether to add records to the VSAM. + key_length (int, optional) -- Key length (only for KSDS data sets). + key_offset (int, optional) -- Key offset (only for KSDS data sets). 
+ """ + params = dict( + name=name, + type=ds_type, + state="present" ) + if ds_type == "KSDS": + params["key_length"] = key_length + params["key_offset"] = key_offset + hosts.all.zos_data_set(**params) -def test_copy_local_file_to_non_existing_uss_file(ansible_zos_module): + if add_data: + record_src = "/tmp/zos_copy_vsam_record" + + hosts.all.zos_copy(content=VSAM_RECORDS, dest=record_src) + hosts.all.zos_encode(src=record_src, dest=name, from_encoding="ISO8859-1", to_encoding="IBM-1047") + hosts.all.file(path=record_src, state="absent") + + +@pytest.mark.uss +@pytest.mark.parametrize("src", [ + dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=False), + dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=False), + dict(src="Example inline content", is_file=False, is_binary=False, is_remote=False), + dict(src="Example inline content", is_file=False, is_binary=True, is_remote=False), + dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=True), + dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=True), +]) +def test_copy_file_to_non_existing_uss_file(ansible_zos_module, src): hosts = ansible_zos_module - dest_path = "/tmp/profile" + dest_path = "/tmp/zos_copy_test_profile" + try: hosts.all.file(path=dest_path, state="absent") - copy_res = hosts.all.zos_copy(src="/etc/profile", dest=dest_path) + + if src["is_file"]: + copy_res = hosts.all.zos_copy(src=src["src"], dest=dest_path, is_binary=src["is_binary"], remote_src=src["is_remote"]) + else: + copy_res = hosts.all.zos_copy(content=src["src"], dest=dest_path, is_binary=src["is_binary"]) + stat_res = hosts.all.stat(path=dest_path) for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest_path + assert result.get("state") == "file" for result in stat_res.contacted.values(): assert result.get("stat").get("exists") is True finally: hosts.all.file(path=dest_path, 
state="absent")


-def test_copy_local_file_to_existing_uss_file(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=True, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=False, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=True, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=True),
+    dict(src="/etc/profile", is_file=True, force=True, is_remote=True),
+])
+def test_copy_file_to_existing_uss_file(ansible_zos_module, src):
     hosts = ansible_zos_module
-    dest_path = "/tmp/profile"
+    dest_path = "/tmp/test_profile"
+
+    try:
+        hosts.all.file(path=dest_path, state="absent")
         hosts.all.file(path=dest_path, state="touch")
         stat_res = list(hosts.all.stat(path=dest_path).contacted.values())
         timestamp = stat_res[0].get("stat").get("atime")
         assert timestamp is not None
-        copy_res = hosts.all.zos_copy(src="/etc/profile", dest=dest_path)
+
+        if src["is_file"]:
+            copy_res = hosts.all.zos_copy(src=src["src"], dest=dest_path, force=src["force"], remote_src=src["is_remote"])
+        else:
+            copy_res = hosts.all.zos_copy(content=src["src"], dest=dest_path, force=src["force"])
+
         stat_res = hosts.all.stat(path=dest_path)
+
         for result in copy_res.contacted.values():
-            assert result.get("msg") is None
+            if src["force"]:
+                assert result.get("msg") is None
+                assert result.get("changed") is True
+                assert result.get("dest") == dest_path
+                assert result.get("state") == "file"
+            else:
+                assert result.get("msg") is not None
+                assert result.get("changed") is False
         for result in stat_res.contacted.values():
             assert result.get("stat").get("exists") is True
     finally:
         hosts.all.file(path=dest_path, state="absent")


-def test_copy_local_file_to_uss_dir(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_binary=False, is_remote=False),
+    dict(src="/etc/profile", is_binary=True, is_remote=False),
+    dict(src="/etc/profile", is_binary=False, is_remote=True),
+    dict(src="/etc/profile", is_binary=True, is_remote=True),
+])
+def test_copy_file_to_uss_dir(ansible_zos_module, src):
     hosts = ansible_zos_module
-    dest_path = "/tmp/testdir"
+    dest = "/tmp"
+    dest_path = "/tmp/profile"
+
+    try:
-        hosts.all.file(path=dest_path, state="directory")
-        copy_res = hosts.all.zos_copy(src="/etc/profile", dest=dest_path)
-        stat_res = hosts.all.stat(path=dest_path + "/" + "profile")
+        hosts.all.file(path=dest_path, state="absent")
+
+        copy_res = hosts.all.zos_copy(src=src["src"], dest=dest, is_binary=src["is_binary"], remote_src=src["is_remote"])
+
+        stat_res = hosts.all.stat(path=dest_path)
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
-        for result in stat_res.contacted.values():
-            assert result.get("stat").get("exists") is True
+            assert result.get("changed") is True
+            assert result.get("dest") == dest_path
+            assert result.get("state") == "file"
+        for st in stat_res.contacted.values():
+            assert st.get("stat").get("exists") is True
     finally:
         hosts.all.file(path=dest_path, state="absent")


-def test_copy_local_file_to_non_existing_sequential_data_set(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_local_symlink_to_uss_file(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.SEQ.FUNCTEST"
-    src_file = "/etc/profile"
+    src_lnk = "/tmp/etclnk"
+    dest_path = "/tmp/profile"
     try:
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest)
+        try:
+            os.symlink("/etc/profile", src_lnk)
+        except FileExistsError:
+            pass
+        copy_res = hosts.all.zos_copy(src=src_lnk, dest=dest_path, local_follow=True)
         verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest),
-            executable=SHELL_EXECUTABLE,
+            cmd="head {0}".format(dest_path), executable=SHELL_EXECUTABLE
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
+        stat_res = hosts.all.stat(path=dest_path)
+        for result in copy_res.contacted.values():
+            assert result.get("msg") is None
+        for result in stat_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+        for result in verify_copy.contacted.values():
+            assert result.get("rc") == 0
+            assert result.get("stdout") != ""
     finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        hosts.all.file(path=dest_path, state="absent")
+        os.remove(src_lnk)


-def test_copy_local_file_to_existing_sequential_data_set(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_local_file_to_uss_file_convert_encoding(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.SEQ.FUNCTEST"
-    src_file = "/etc/profile"
+    dest_path = "/tmp/profile"
     try:
-        hosts.all.zos_data_set(name=dest, type="seq", state="present")
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
+        hosts.all.file(path=dest_path, state="absent")
+        copy_res = hosts.all.zos_copy(
+            src="/etc/profile",
+            dest=dest_path,
+            encoding={"from": "ISO8859-1", "to": "IBM-1047"},
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-            assert v_cp.get("stdout") != ""
+        stat_res = hosts.all.stat(path=dest_path)
+        for result in copy_res.contacted.values():
+            assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest_path
+            assert result.get("state") == "file"
+        for result in stat_res.contacted.values():
+            assert result.get("stat").get("exists") is True
     finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        hosts.all.file(path=dest_path, state="absent")


-def test_copy_local_file_to_existing_pdse_member(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_inline_content_to_uss_dir(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.PDS.FUNCTEST"
-    dest_path = "USER.TEST.PDS.FUNCTEST(DATA)"
-    src_file = "/etc/profile"
+    dest = "/tmp/"
+    dest_path = "/tmp/inline_copy"
+
     try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        hosts.all.zos_data_set(name=dest_path, type="MEMBER", replace="yes")
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path)
-        print(vars(copy_result))
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
+        copy_res = hosts.all.zos_copy(content="Inline content", dest=dest)
+        stat_res = hosts.all.stat(path=dest_path)
+
+        for result in copy_res.contacted.values():
+            assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest_path
+        for result in stat_res.contacted.values():
+            assert result.get("stat").get("exists") is True
     finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        hosts.all.file(path=dest_path, state="absent")


-def test_copy_local_file_to_non_existing_pdse_member(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_dir_to_existing_uss_dir_not_forced(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.PDS.FUNCTEST"
-    src_file = "/etc/profile"
+    dest_new_dir = "/tmp/new_dir"
+    dest_path = "{0}/profile".format(dest_new_dir)
+    dest_parent_dir = "/tmp/test_dir"
+    dest_old_dir = "{0}/old_dir".format(dest_parent_dir)
+
     try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(PROFILE)"),
-            executable=SHELL_EXECUTABLE,
+        hosts.all.file(path=dest_new_dir, state="directory")
+        hosts.all.file(path=dest_path, state="touch")
+        hosts.all.file(path=dest_old_dir, state="directory")
+
+        copy_result = hosts.all.zos_copy(
+            src=dest_new_dir,
+            dest=dest_parent_dir,
+            remote_src=True,
+            force=False
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
+
+        for result in copy_result.contacted.values():
+            assert result.get("msg") is not None
+            assert result.get("changed") is False
+            assert "Error" in result.get("msg")
+            assert "EDC5117I" in result.get("stdout")
     finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        hosts.all.file(path=dest_parent_dir, state="absent")
+        hosts.all.file(path=dest_new_dir, state="absent")


-def test_copy_local_file_to_non_existing_pdse(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_dir_to_existing_uss_dir_forced(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.PDS.FUNCTEST"
-    dest_path = "USER.TEST.PDS.FUNCTEST(PROFILE)"
-    src_file = "/etc/profile"
+    dest_new_dir = "/tmp/new_dir/"
+    dest_path = "{0}profile".format(dest_new_dir)
+    dest_parent_dir = "/tmp/test_dir"
+    dest_old_dir = "{0}/old_dir".format(dest_parent_dir)
+    dest_dir = "{0}/profile".format(dest_parent_dir)
+
     try:
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
+        hosts.all.file(path=dest_new_dir, state="directory")
+        hosts.all.file(path=dest_path, state="touch")
+        hosts.all.file(path=dest_old_dir, state="directory")
+
+        copy_result = hosts.all.zos_copy(
+            src=dest_new_dir,
+            dest=dest_parent_dir,
+            remote_src=True,
+            force=True
        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        stat_old_dir_res = hosts.all.stat(path=dest_old_dir)
+        stat_new_dir_res = hosts.all.stat(path=dest_dir)
+        for result in copy_result.contacted.values():
+            assert result.get("msg") is None
+            assert result.get("changed") is True
+        for result in stat_old_dir_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is True
+        for result in stat_new_dir_res.contacted.values():
+            assert result.get("stat").get("exists") is True


-def test_copy_local_dir_to_existing_pdse(ansible_zos_module):
-    hosts = ansible_zos_module
-    source_path = tempfile.mkdtemp()
-    dest = "USER.TEST.PDS.FUNCTEST"
-    try:
-        populate_dir(source_path)
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pds",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        hosts.all.zos_data_set(name=dest + "(FILE1)", type="MEMBER", replace="yes")
-        copy_result = hosts.all.zos_copy(src=source_path, dest=dest)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(FILE2)"),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
     finally:
-        shutil.rmtree(source_path)
-        hosts.all.zos_data_set(name=dest, state="absent")
+        hosts.all.file(path=dest_parent_dir, state="absent")
+        hosts.all.file(path=dest_new_dir, state="absent")


-def test_copy_local_dir_to_non_existing_pdse(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_local_nested_dir_to_existing_uss_dir_forced(ansible_zos_module):
     hosts = ansible_zos_module
+    dest_path = "/tmp/new_dir"
+
     source_path = tempfile.mkdtemp()
-    dest = "USER.TEST.PDS.FUNCTEST"
+    subdir_a_path = "{0}/subdir_a".format(source_path)
+    subdir_b_path = "{0}/subdir_b".format(source_path)
+
     try:
-        populate_dir(source_path)
-        copy_result = hosts.all.zos_copy(src=source_path, dest=dest)
-        verify_copy = hosts.all.shell(
-            cmd="tsocmd \"LISTDS '{0}'\" > /dev/null 2>/dev/null".format(dest),
-            executable=SHELL_EXECUTABLE,
+        os.mkdir(subdir_a_path)
+        os.mkdir(subdir_b_path)
+        populate_dir(subdir_a_path)
+        populate_dir(subdir_b_path)
+
+        hosts.all.file(path=dest_path, state="directory")
+        copy_result = hosts.all.zos_copy(
+            src=source_path,
+            dest=dest_path,
+            force=True
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
+
+        stat_subdir_a_res = hosts.all.stat(path="{0}/subdir_a".format(dest_path))
+        stat_subdir_b_res = hosts.all.stat(path="{0}/subdir_b".format(dest_path))
+
+        for result in copy_result.contacted.values():
+            assert result.get("msg") is None
+            assert result.get("changed") is True
+        for result in stat_subdir_a_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is True
+        for result in stat_subdir_b_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is True
+    finally:
+        hosts.all.file(path=dest_path, state="absent")
         shutil.rmtree(source_path)
-        hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_local_file_to_uss_binary(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.parametrize("backup", [None, "/tmp/uss_backup"])
+def test_backup_uss_file(ansible_zos_module, backup):
     hosts = ansible_zos_module
-    dest_path = "/tmp/profile"
+    src = "/etc/profile"
+    dest = "/tmp/profile"
+    backup_name = None
+
     try:
-        hosts.all.file(path=dest_path, state="absent")
-        copy_res = hosts.all.zos_copy(
-            src="/etc/profile", dest=dest_path, is_binary=True
-        )
-        stat_res = hosts.all.stat(path=dest_path)
+        hosts.all.file(path=dest, state="touch")
+        if backup:
+            copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True, backup_name=backup)
+        else:
+            copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True)
+
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            backup_name = result.get("backup_name")
+
+            if backup:
+                assert backup_name == backup
+            else:
+                assert backup_name is not None
+
+        stat_res = hosts.all.stat(path=backup_name)
         for result in stat_res.contacted.values():
             assert result.get("stat").get("exists") is True
+
     finally:
-        hosts.all.file(path=dest_path, state="absent")
+        hosts.all.file(path=dest, state="absent")
+        if backup_name:
+            hosts.all.file(path=backup_name, state="absent")


-def test_copy_local_file_to_sequential_data_set_binary(ansible_zos_module):
+@pytest.mark.uss
+def test_copy_file_insufficient_read_permission_fails(ansible_zos_module):
     hosts = ansible_zos_module
-    dest = "USER.TEST.SEQ.FUNCTEST"
-    src_file = "/etc/profile"
+    src_path = "/tmp/testfile"
+    dest = "/tmp"
     try:
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest, is_binary=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
+        open(src_path, "w").close()
+        os.chmod(src_path, 0)
+        copy_res = hosts.all.zos_copy(src=src_path, dest=dest)
+        for result in copy_res.contacted.values():
+            assert result.get("msg") is not None
+            assert "read permission" in result.get("msg")
     finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        if os.path.exists(src_path):
+            os.remove(src_path)


-def test_copy_local_file_to_pds_member_binary(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.parametrize("is_remote", [False, True])
+def test_copy_non_existent_file_fails(ansible_zos_module, is_remote):
     hosts = ansible_zos_module
-    dest = "USER.TEST.PDS.FUNCTEST"
-    dest_path = "USER.TEST.PDS.FUNCTEST(DATA)"
-    fd, src_file = mkstemp()
-    with open(src_file, 'w') as infile:
-        infile.write(DUMMY_DATA)
+    src_path = "/tmp/non_existent_src"
+    dest = "/tmp"
+
+    copy_res = hosts.all.zos_copy(src=src_path, dest=dest, remote_src=is_remote)
+    for result in copy_res.contacted.values():
+        assert result.get("msg") is not None
+        assert "does not exist" in result.get("msg")
+
+
+@pytest.mark.uss
+def test_copy_dir_and_change_mode(ansible_zos_module):
+    hosts = ansible_zos_module
+    dest_path = "/tmp/new_dir"
+
+    source_path = tempfile.mkdtemp()
+    subdir_path = "{0}/subdir".format(source_path)
+    mode = "0755"
     try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pds",
-            space_primary=5,
-            space_type="M",
-            record_format="fb",
-            record_length=255,
-        )
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path, is_binary=True)
-        print(vars(copy_result))
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
-        os.close(fd)
-        os.remove(src_file)
+        os.mkdir(subdir_path)
+        populate_dir(subdir_path)
-
-
-def test_copy_local_file_to_pdse_member_binary(ansible_zos_module):
-    hosts = ansible_zos_module
-    dest = "USER.TEST.PDS.FUNCTEST"
-    dest_path = "USER.TEST.PDS.FUNCTEST(DATA)"
-    fd, src_file = mkstemp()
-    with open(src_file, 'w') as infile:
-        infile.write(DUMMY_DATA)
-    try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fb",
-            record_length=255,
-        )
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path, is_binary=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
+        hosts.all.file(path=dest_path, state="directory")
+        copy_result = hosts.all.zos_copy(
+            src=source_path,
+            dest=dest_path,
+            force=True,
+            mode=mode
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
-        os.close(fd)
-        os.remove(src_file)
-
-
-def test_copy_uss_file_to_uss_file(ansible_zos_module):
-    hosts = ansible_zos_module
-    remote_src = "/etc/profile"
-    dest = "/tmp/test_profile"
-    try:
-        hosts.all.file(path=dest, state="absent")
-        copy_result = hosts.all.zos_copy(src=remote_src, dest=dest, remote_src=True)
-        stat_res = hosts.all.stat(path=dest)
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for st in stat_res.contacted.values():
-            assert st.get("stat").get("exists") is True
-    finally:
-        hosts.all.file(path=dest, state="absent")
+        stat_dir_res = hosts.all.stat(path=dest_path)
+        stat_subdir_res = hosts.all.stat(path="{0}/subdir".format(dest_path))
+        stat_file_res = hosts.all.stat(path="{0}/subdir/file3".format(dest_path))
+        for result in copy_result.contacted.values():
+            assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest_path
+        for result in stat_dir_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is True
+            assert result.get("stat").get("mode") == mode
+        for result in stat_subdir_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is True
+            assert result.get("stat").get("mode") == mode
+        for result in stat_file_res.contacted.values():
+            assert result.get("stat").get("exists") is True
+            assert result.get("stat").get("isdir") is False
+            assert result.get("stat").get("mode") == mode


-def test_copy_uss_file_to_uss_dir(ansible_zos_module):
-    hosts = ansible_zos_module
-    remote_src = "/etc/profile"
-    dest = "/tmp"
-    dest_path = "/tmp/profile"
-    try:
-        hosts.all.file(path=dest_path, state="absent")
-        copy_result = hosts.all.zos_copy(src=remote_src, dest=dest, remote_src=True)
-        stat_res = hosts.all.stat(path=dest_path)
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for st in stat_res.contacted.values():
-            assert st.get("stat").get("exists") is True
     finally:
         hosts.all.file(path=dest_path, state="absent")
+        shutil.rmtree(source_path)
-
-def test_copy_uss_file_to_non_existing_sequential_data_set(ansible_zos_module):
+    for result in copy_result.contacted.values():
+        assert result.get("msg") is None
+        assert result.get("changed") is True
+        assert result.get("dest") == dest_path
+    for result in stat_dir_res.contacted.values():
+        assert result.get("stat").get("exists") is True
+        assert result.get("stat").get("isdir") is True
+        assert result.get("stat").get("mode") == mode
+    for result in stat_subdir_res.contacted.values():
+        assert result.get("stat").get("exists") is True
+        assert result.get("stat").get("isdir") is True
+        assert result.get("stat").get("mode") == mode
+    for result in stat_file_res.contacted.values():
+        assert result.get("stat").get("exists") is True
+        assert result.get("stat").get("isdir") is False
+        assert result.get("stat").get("mode") == mode
+
+
+@pytest.mark.uss
+@pytest.mark.seq
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=False),
+    dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=False),
+    dict(src="Example inline content", is_file=False, is_binary=False, is_remote=False),
+    dict(src="Example inline content", is_file=False, is_binary=True, is_remote=False),
+    dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=True),
+    dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=True),
+])
+def test_copy_file_to_non_existing_sequential_data_set(ansible_zos_module, src):
     hosts = ansible_zos_module
     dest = "USER.TEST.SEQ.FUNCTEST"
-    src_file = "/etc/profile"
+
     try:
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
         hosts.all.zos_data_set(name=dest, state="absent")
+        if src["is_file"]:
+            copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, remote_src=src["is_remote"], is_binary=src["is_binary"])
+        else:
+            copy_result = hosts.all.zos_copy(content=src["src"], dest=dest, remote_src=src["is_remote"], is_binary=src["is_binary"])


-def test_copy_uss_file_to_existing_sequential_data_set(ansible_zos_module):
-    hosts = ansible_zos_module
-    dest = "USER.TEST.SEQ.FUNCTEST"
-    src_file = "/etc/profile"
-    try:
-        hosts.all.zos_data_set(name=dest, type="seq", state="present")
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest, remote_src=True)
         verify_copy = hosts.all.shell(
             cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest),
             executable=SHELL_EXECUTABLE,
         )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
-
-
-def test_copy_uss_file_to_non_existing_pdse_member(ansible_zos_module):
-    hosts = ansible_zos_module
-    dest = "USER.TEST.PDSE.FUNCTEST"
-    dest_path = "USER.TEST.PDSE.FUNCTEST(DATA)"
-    src_file = "/etc/profile"
-    try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
-        )
         for cp_res in copy_result.contacted.values():
             assert cp_res.get("msg") is None
+            assert cp_res.get("changed") is True
+            assert cp_res.get("dest") == dest
+            assert cp_res.get("is_binary") == src["is_binary"]
         for v_cp in verify_copy.contacted.values():
             assert v_cp.get("rc") == 0
     finally:
         hosts.all.zos_data_set(name=dest, state="absent")
-
-
-def test_copy_uss_file_to_existing_pdse_member(ansible_zos_module):
-    hosts = ansible_zos_module
-    dest = "USER.TEST.PDSE.FUNCTEST"
-    dest_path = "USER.TEST.PDSE.FUNCTEST(DATA)"
-    src_file = "/etc/profile"
-    try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        hosts.all.zos_data_set(name=dest_path, type="MEMBER", replace="yes")
-        copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path),
-            executable=SHELL_EXECUTABLE,
-        )
-        for cp_res in copy_result.contacted.values():
-            assert cp_res.get("msg") is None
-        for v_cp in verify_copy.contacted.values():
-            assert v_cp.get("rc") == 0
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        if src["is_file"]:
+            copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, remote_src=src["is_remote"], is_binary=src["is_binary"])
+        else:
+            copy_result = hosts.all.zos_copy(content=src["src"], dest=dest, remote_src=src["is_remote"], is_binary=src["is_binary"])


-def test_copy_uss_dir_to_existing_pdse(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.seq
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_file=True, force=True, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=True, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=True, is_remote=True),
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=False, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=True),
+])
+def test_copy_file_to_empty_sequential_data_set(ansible_zos_module, src):
     hosts = ansible_zos_module
-    src_dir = "/tmp/testdir"
-    dest = "USER.TEST.PDSE.FUNCTEST"
+    dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
-        hosts.all.zos_data_set(
-            name=dest,
-            type="pdse",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=80,
-        )
-        hosts.all.file(path=src_dir, state="directory")
-        for i in range(5):
-            hosts.all.file(path=src_dir + "/" + "file" + str(i), state="touch")
+        hosts.all.zos_data_set(name=dest, type="seq", state="present")
-        copy_res = hosts.all.zos_copy(src=src_dir, dest=dest, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(FILE2)"),
-            executable=SHELL_EXECUTABLE,
-        )
-        for result in copy_res.contacted.values():
+        if src["is_file"]:
+            copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, remote_src=src["is_remote"], force=src["force"])
+        else:
+            copy_result = hosts.all.zos_copy(content=src["src"], dest=dest, remote_src=src["is_remote"], force=src["force"])
+
+        for result in copy_result.contacted.values():
             assert result.get("msg") is None
-        for result in verify_copy.contacted.values():
-            assert result.get("rc") == 0
+            assert result.get("changed") is True
+            assert result.get("dest") == dest
     finally:
-        hosts.all.file(path=src_dir, state="absent")
         hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_uss_dir_to_non_existing_pdse(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.seq
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", force=False, is_remote=False),
+    dict(src="/etc/profile", force=True, is_remote=False),
+    dict(src="/etc/profile", force=False, is_remote=True),
+    dict(src="/etc/profile", force=True, is_remote=True),
+])
+def test_copy_file_to_non_empty_sequential_data_set(ansible_zos_module, src):
     hosts = ansible_zos_module
-    src_dir = "/tmp/testdir"
-    dest = "USER.TEST.PDSE.FUNCTEST"
+    dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
-        hosts.all.file(path=src_dir, state="directory")
-        for i in range(5):
-            hosts.all.file(path=src_dir + "/" + "file" + str(i), state="touch")
+        hosts.all.zos_data_set(name=dest, type="seq", state="absent")
+        hosts.all.zos_copy(content="Inline content", dest=dest)
-        copy_res = hosts.all.zos_copy(src=src_dir, dest=dest, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(FILE2)"),
-            executable=SHELL_EXECUTABLE,
-        )
-        for result in copy_res.contacted.values():
-            assert result.get("msg") is None
-        for result in verify_copy.contacted.values():
-            assert result.get("rc") == 0
+        copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, remote_src=src["is_remote"], force=src["force"])
+
+        for result in copy_result.contacted.values():
+            if src["force"]:
+                assert result.get("msg") is None
+                assert result.get("changed") is True
+                assert result.get("dest") == dest
+            else:
+                assert result.get("msg") is not None
+                assert result.get("changed") is False
     finally:
-        hosts.all.file(path=src_dir, state="absent")
         hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_ps_to_existing_uss_file(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.seq
+def test_copy_ps_to_non_existing_uss_file(ansible_zos_module):
     hosts = ansible_zos_module
     src_ds = TEST_PS
     dest = "/tmp/ddchkpt"
+
     try:
-        hosts.all.file(path=dest, state="touch")
         copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
         stat_res = hosts.all.stat(path=dest)
         verify_copy = hosts.all.shell(
             cmd="cat {0}".format(dest), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest
         for result in stat_res.contacted.values():
             assert result.get("stat").get("exists") is True
         for result in verify_copy.contacted.values():
@@ -580,32 +678,47 @@ def test_copy_ps_to_existing_uss_file(ansible_zos_module):
         hosts.all.file(path=dest, state="absent")


-def test_copy_ps_to_non_existing_uss_file(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.seq
+@pytest.mark.parametrize("force", [False, True])
+def test_copy_ps_to_existing_uss_file(ansible_zos_module, force):
     hosts = ansible_zos_module
     src_ds = TEST_PS
     dest = "/tmp/ddchkpt"
+
     try:
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
+        hosts.all.file(path=dest, state="touch")
+
+        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True, force=force)
         stat_res = hosts.all.stat(path=dest)
         verify_copy = hosts.all.shell(
             cmd="cat {0}".format(dest), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
-            assert result.get("msg") is None
+            if force:
+                assert result.get("msg") is None
+                assert result.get("changed") is True
+                assert result.get("dest") == dest
+            else:
+                assert result.get("msg") is not None
+                assert result.get("changed") is False
         for result in stat_res.contacted.values():
             assert result.get("stat").get("exists") is True
         for result in verify_copy.contacted.values():
             assert result.get("rc") == 0
-            assert result.get("stdout") != ""
     finally:
         hosts.all.file(path=dest, state="absent")


+@pytest.mark.uss
+@pytest.mark.seq
 def test_copy_ps_to_existing_uss_dir(ansible_zos_module):
     hosts = ansible_zos_module
     src_ds = TEST_PS
     dest = "/tmp/ddchkpt"
     dest_path = dest + "/" + TEST_PS
+
     try:
         hosts.all.file(path=dest, state="directory")
         copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
@@ -613,8 +726,10 @@ def test_copy_ps_to_existing_uss_dir(ansible_zos_module):
         verify_copy = hosts.all.shell(
             cmd="cat {0}".format(dest_path), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            assert result.get("changed") is True
         for result in stat_res.contacted.values():
             assert result.get("stat").get("exists") is True
         for result in verify_copy.contacted.values():
@@ -624,17 +739,23 @@ def test_copy_ps_to_existing_uss_dir(ansible_zos_module):
         hosts.all.file(path=dest, state="absent")


+@pytest.mark.seq
 def test_copy_ps_to_non_existing_ps(ansible_zos_module):
     hosts = ansible_zos_module
     src_ds = TEST_PS
     dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
+        hosts.all.zos_data_set(name=dest, state="absent")
         copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
         verify_copy = hosts.all.shell(
             cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest
         for result in verify_copy.contacted.values():
             assert result.get("rc") == 0
             assert result.get("stdout") != ""
@@ -642,18 +763,25 @@ def test_copy_ps_to_non_existing_ps(ansible_zos_module):
         hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_ps_to_existing_ps(ansible_zos_module):
+@pytest.mark.seq
+@pytest.mark.parametrize("force", [False, True])
+def test_copy_ps_to_empty_ps(ansible_zos_module, force):
     hosts = ansible_zos_module
     src_ds = TEST_PS
     dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
         hosts.all.zos_data_set(name=dest, type="seq", state="present")
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
+
+        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True, force=force)
         verify_copy = hosts.all.shell(
             cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest
         for result in verify_copy.contacted.values():
             assert result.get("rc") == 0
             assert result.get("stdout") != ""
@@ -661,1328 +789,1112 @@ def test_copy_ps_to_existing_ps(ansible_zos_module):
         hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_ps_to_existing_pdse_member(ansible_zos_module):
+@pytest.mark.seq
+@pytest.mark.parametrize("force", [False, True])
+def test_copy_ps_to_non_empty_ps(ansible_zos_module, force):
     hosts = ansible_zos_module
     src_ds = TEST_PS
-    dest_ds = "USER.TEST.PDSE.FUNCTEST"
-    dest = dest_ds + "(DATA)"
+    dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
-        hosts.all.zos_data_set(name=dest_ds, type="pdse", state="present")
-        hosts.all.zos_data_set(name=dest, type="MEMBER", replace="yes")
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
+        hosts.all.zos_data_set(name=dest, type="seq", state="absent")
+        hosts.all.zos_copy(content="Inline content", dest=dest)
+
+        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True, force=force)
         verify_copy = hosts.all.shell(
             cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
         )
+
         for result in copy_res.contacted.values():
-            assert result.get("msg") is None
+            if force:
+                assert result.get("msg") is None
+                assert result.get("changed") is True
+                assert result.get("dest") == dest
+            else:
+                assert result.get("msg") is not None
+                assert result.get("changed") is False
         for result in verify_copy.contacted.values():
             assert result.get("rc") == 0
             assert result.get("stdout") != ""
     finally:
-        hosts.all.zos_data_set(name=dest_ds, state="absent")
+        hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_ps_to_non_existing_pdse_member(ansible_zos_module):
+@pytest.mark.seq
+@pytest.mark.parametrize("backup", [None, "USER.TEST.SEQ.FUNCTEST.BACK"])
+def test_backup_sequential_data_set(ansible_zos_module, backup):
     hosts = ansible_zos_module
-    src_ds = TEST_PS
-    dest_ds = "USER.TEST.PDSE.FUNCTEST"
-    dest = dest_ds + "(DATA)"
+    src = "/etc/profile"
+    dest = "USER.TEST.SEQ.FUNCTEST"
+
     try:
-        hosts.all.zos_data_set(name=dest_ds, type="pdse", state="present")
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
-        )
-        for result in copy_res.contacted.values():
-            assert result.get("msg") is None
-        for result in verify_copy.contacted.values():
-            assert result.get("rc") == 0
-            assert result.get("stdout") != ""
-    finally:
-        hosts.all.zos_data_set(name=dest_ds, state="absent")
+        hosts.all.zos_data_set(name=dest, type="seq", state="present")
+        if backup:
+            copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True, backup_name=backup)
+        else:
+            copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True)


-def test_copy_pds_to_non_existing_uss_dir(ansible_zos_module):
-    hosts = ansible_zos_module
-    src_ds = TEST_PDS
-    dest = "/tmp/"
-    dest_path = "/tmp/" + TEST_PDS
-    try:
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
-        stat_res = hosts.all.stat(path=dest_path)
         for result in copy_res.contacted.values():
             assert result.get("msg") is None
+            backup_name = result.get("backup_name")
+            assert backup_name is not None
+
+        stat_res = hosts.all.shell(
+            cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name),
+            executable=SHELL_EXECUTABLE,
+        )
         for result in stat_res.contacted.values():
-            assert result.get("stat").get("exists") is True
-            assert result.get("stat").get("isdir") is True
+            assert result.get("rc") == 0
+            assert "NOT IN CATALOG" not in result.get("stdout")
+            assert "NOT IN CATALOG" not in result.get("stderr")
+
     finally:
-        hosts.all.file(path=dest_path, state="absent")
+        hosts.all.zos_data_set(name=dest, state="absent")
+        if backup_name:
+            hosts.all.zos_data_set(name=backup_name, state="absent")


-def test_copy_pds_to_existing_pds(ansible_zos_module):
+@pytest.mark.uss
+@pytest.mark.pdse
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=False),
+    dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=False),
+    dict(src="Example inline content", is_file=False, is_binary=False, is_remote=False),
+    dict(src="Example inline content", is_file=False, is_binary=True, is_remote=False),
+    dict(src="/etc/profile", is_file=True, is_binary=False, is_remote=True),
+    dict(src="/etc/profile", is_file=True, is_binary=True, is_remote=True),
+])
+def test_copy_file_to_non_existing_member(ansible_zos_module, src):
     hosts = ansible_zos_module
-    src_ds = TEST_PDS
-    dest = "USER.TEST.PDS.FUNCTEST"
+    data_set = "USER.TEST.PDS.FUNCTEST"
+    dest = "{0}(PROFILE)".format(data_set)
+
     try:
         hosts.all.zos_data_set(
-            name=dest,
-            type="pds",
+            name=data_set,
+            type="pdse",
             space_primary=5,
             space_type="M",
             record_format="fba",
-            record_length=25,
+            record_length=80,
+            replace=True
         )
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
+
+        if src["is_file"]:
+            copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, is_binary=src["is_binary"], remote_src=src["is_remote"])
+        else:
+            copy_result = hosts.all.zos_copy(content=src["src"], dest=dest, is_binary=src["is_binary"])
+
         verify_copy = hosts.all.shell(
-            cmd="cat \"//'{0}'\"".format(
-                "{0}({1})".format(dest, extract_member_name(TEST_PDS_MEMBER))
-            ),
+            cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest),
             executable=SHELL_EXECUTABLE,
         )
-        for result in copy_res.contacted.values():
-            assert result.get("msg") is None
-        for result in verify_copy.contacted.values():
-            assert result.get("rc") == 0
-            assert result.get("stdout") != ""
-    finally:
-        hosts.all.zos_data_set(name=dest, state="absent")
+        for cp_res in copy_result.contacted.values():
+            assert cp_res.get("msg") is None
+            assert cp_res.get("changed") is True
+            assert cp_res.get("dest") == dest
+        for v_cp in verify_copy.contacted.values():
+            assert v_cp.get("rc") == 0
+    finally:
+        hosts.all.zos_data_set(name=data_set, state="absent")
+
+
+@pytest.mark.uss
+@pytest.mark.pdse
+@pytest.mark.parametrize("src", [
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=True, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=False, is_remote=False),
+    dict(src="Example inline content", is_file=False, force=True, is_remote=False),
+    dict(src="/etc/profile", is_file=True, force=False, is_remote=True),
+
dict(src="/etc/profile", is_file=True, force=True, is_remote=True) +]) +def test_copy_file_to_existing_member(ansible_zos_module, src): + hosts = ansible_zos_module + data_set = "USER.TEST.PDS.FUNCTEST" + dest = "{0}(PROFILE)".format(data_set) + + try: + hosts.all.zos_data_set( + name=data_set, + type="pdse", + space_primary=5, + space_type="M", + record_format="fba", + record_length=80, + replace=True + ) + hosts.all.zos_data_set(name=dest, type="member", state="present") + + if src["is_file"]: + copy_result = hosts.all.zos_copy(src=src["src"], dest=dest, force=src["force"], remote_src=src["is_remote"]) + else: + copy_result = hosts.all.zos_copy(content=src["src"], dest=dest, force=src["force"]) + + verify_copy = hosts.all.shell( + cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest), + executable=SHELL_EXECUTABLE, + ) + + for cp_res in copy_result.contacted.values(): + if src["force"]: + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest + else: + assert cp_res.get("msg") is not None + assert cp_res.get("changed") is False + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + finally: + hosts.all.zos_data_set(name=data_set, state="absent") + + +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("args", [ + dict(type="seq", is_binary=False), + dict(type="seq", is_binary=True), + dict(type="pds", is_binary=False), + dict(type="pds", is_binary=True), + dict(type="pdse", is_binary=False), + dict(type="pdse", is_binary=True) +]) +def test_copy_data_set_to_non_existing_member(ansible_zos_module, args): + hosts = ansible_zos_module + src_data_set = "USER.TEST.PDS.SOURCE" + src = src_data_set if args["type"] == "seq" else "{0}(TEST)".format(src_data_set) + dest_data_set = "USER.TEST.PDS.FUNCTEST" + dest = "{0}(MEMBER)".format(dest_data_set) + + try: + hosts.all.zos_data_set(name=src_data_set, type=args["type"]) + if args["type"] != "seq": + hosts.all.zos_data_set(name=src, 
type="member") + + hosts.all.shell( + "decho 'Records for test' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + hosts.all.zos_data_set(name=dest_data_set, type="pdse", replace=True) + copy_result = hosts.all.zos_copy(src=src, dest=dest, is_binary=args["is_binary"], remote_src=True) + + verify_copy = hosts.all.shell( + cmd="cat \"//'{0}'\"".format(dest), + executable=SHELL_EXECUTABLE, + ) + + for cp_res in copy_result.contacted.values(): + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + assert v_cp.get("stdout") != "" + finally: + hosts.all.zos_data_set(name=src_data_set, state="absent") + hosts.all.zos_data_set(name=dest_data_set, state="absent") + + +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("args", [ + dict(type="seq", force=False), + dict(type="seq", force=True), + dict(type="pds", force=False), + dict(type="pds", force=True), + dict(type="pdse", force=False), + dict(type="pdse", force=True) +]) +def test_copy_data_set_to_existing_member(ansible_zos_module, args): + hosts = ansible_zos_module + src_data_set = "USER.TEST.PDS.SOURCE" + src = src_data_set if args["type"] == "seq" else "{0}(TEST)".format(src_data_set) + dest_data_set = "USER.TEST.PDS.FUNCTEST" + dest = "{0}(MEMBER)".format(dest_data_set) + + try: + hosts.all.zos_data_set(name=src_data_set, type=args["type"]) + if args["type"] != "seq": + hosts.all.zos_data_set(name=src, type="member") + + hosts.all.shell( + "decho 'Records for test' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + hosts.all.zos_data_set(name=dest_data_set, type="pdse", replace=True) + hosts.all.zos_data_set(name=dest, type="member") + copy_result = hosts.all.zos_copy(src=src, dest=dest, force=args["force"], remote_src=True) + + verify_copy = hosts.all.shell( + cmd="cat \"//'{0}'\"".format(dest), + executable=SHELL_EXECUTABLE, + ) + + for cp_res in 
copy_result.contacted.values(): + if args["force"]: + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest + else: + assert cp_res.get("msg") is not None + assert cp_res.get("changed") is False + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + if args["force"]: + assert v_cp.get("stdout") != "" + finally: + hosts.all.zos_data_set(name=src_data_set, state="absent") + hosts.all.zos_data_set(name=dest_data_set, state="absent") -def test_copy_pds_to_non_existing_pds(ansible_zos_module): + +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("is_remote", [False, True]) +def test_copy_file_to_non_existing_pdse(ansible_zos_module, is_remote): hosts = ansible_zos_module - src_ds = TEST_PDS dest = "USER.TEST.PDS.FUNCTEST" + dest_path = "{0}(PROFILE)".format(dest) + src_file = "/etc/profile" + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + hosts.all.zos_data_set(name=dest, state="absent") + + copy_result = hosts.all.zos_copy(src=src_file, dest=dest_path, remote_src=is_remote) + verify_copy = hosts.all.shell( + cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest_path), + executable=SHELL_EXECUTABLE, + ) + + for cp_res in copy_result.contacted.values(): + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest_path + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + finally: + hosts.all.zos_data_set(name=dest, state="absent") + + +@pytest.mark.uss +@pytest.mark.pdse +def test_copy_dir_to_non_existing_pdse(ansible_zos_module): + hosts = ansible_zos_module + src_dir = "/tmp/testdir" + dest = "USER.TEST.PDSE.FUNCTEST" + + try: + hosts.all.file(path=src_dir, state="directory") + for i in range(5): + hosts.all.file(path=src_dir + "/" + "file" + str(i), state="touch") + + copy_res = hosts.all.zos_copy(src=src_dir, dest=dest, remote_src=True) verify_copy = hosts.all.shell( - 
cmd="cat \"//'{0}'\"".format( - "{0}({1})".format(dest, extract_member_name(TEST_PDS_MEMBER)) - ), + cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(FILE2)"), executable=SHELL_EXECUTABLE, ) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest for result in verify_copy.contacted.values(): assert result.get("rc") == 0 - assert result.get("stdout") != "" finally: + hosts.all.file(path=src_dir, state="absent") hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pds_to_existing_pdse(ansible_zos_module): +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["pds", "pdse"]) +def test_copy_dir_to_existing_pdse(ansible_zos_module, src_type): hosts = ansible_zos_module - src_ds = TEST_PDS - dest = "USER.TEST.PDSE.FUNCTEST" + src_dir = "/tmp/testdir" + dest = "USER.TEST.PDS.FUNCTEST" + try: + hosts.all.file(path=src_dir, state="directory") + for i in range(5): + hosts.all.file(path=src_dir + "/" + "file" + str(i), state="touch") + hosts.all.zos_data_set( name=dest, - type="pdse", + type=src_type, space_primary=5, space_type="M", record_format="fba", - record_length=25, + record_length=80, ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + + copy_result = hosts.all.zos_copy(src=src_dir, dest=dest, remote_src=True) verify_copy = hosts.all.shell( - cmd="cat \"//'{0}'\"".format( - "{0}({1})".format(dest, extract_member_name(TEST_PDS_MEMBER)) - ), + cmd="cat \"//'{0}'\" > /dev/null 2>/dev/null".format(dest + "(FILE2)"), executable=SHELL_EXECUTABLE, ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + + for cp_res in copy_result.contacted.values(): + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest + for v_cp 
in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 finally: + hosts.all.file(path=src_dir, state="absent") hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pdse_to_non_existing_uss_dir(ansible_zos_module): +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["seq", "pds", "pdse"]) +def test_copy_data_set_to_non_existing_pdse(ansible_zos_module, src_type): hosts = ansible_zos_module - src_ds = TEST_PDSE - dest = "/tmp/" - dest_path = "/tmp/" + src_ds + src_data_set = "USER.TEST.PDS.SOURCE" + src = src_data_set if src_type == "seq" else "{0}(TEST)".format(src_data_set) + dest_data_set = "USER.TEST.PDS.FUNCTEST" + dest = "{0}(MEMBER)".format(dest_data_set) + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - assert result.get("stat").get("isdir") is True + hosts.all.zos_data_set(name=src_data_set, type=src_type) + if src_type != "seq": + hosts.all.zos_data_set(name=src, type="member") + + hosts.all.shell( + "decho 'Records for test' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + hosts.all.zos_data_set(name=dest_data_set, state="absent") + copy_result = hosts.all.zos_copy(src=src, dest=dest, remote_src=True) + + verify_copy = hosts.all.shell( + cmd="cat \"//'{0}'\"".format(dest), + executable=SHELL_EXECUTABLE, + ) + + for cp_res in copy_result.contacted.values(): + assert cp_res.get("msg") is None + assert cp_res.get("changed") is True + assert cp_res.get("dest") == dest + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + assert v_cp.get("stdout") != "" finally: - hosts.all.file(path=dest_path, state="absent") + hosts.all.zos_data_set(name=src_data_set, state="absent") + hosts.all.zos_data_set(name=dest_data_set, state="absent") -def 
test_copy_pdse_to_existing_pdse(ansible_zos_module): +@pytest.mark.pdse +@pytest.mark.parametrize("args", [ + dict(src_type="pds", dest_type="pds"), + dict(src_type="pds", dest_type="pdse"), + dict(src_type="pdse", dest_type="pds"), + dict(src_type="pdse", dest_type="pdse"), +]) +def test_copy_pds_to_existing_pds(ansible_zos_module, args): hosts = ansible_zos_module - src_ds = TEST_PDSE - dest = "USER.TEST.PDSE.FUNCTEST" + src = "USER.TEST.PDS.SRC" + dest = "USER.TEST.PDS.DEST" + try: - hosts.all.zos_data_set( - name=dest, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + populate_partitioned_data_set(hosts, src, args["src_type"]) + hosts.all.zos_data_set(name=dest, type=args["dest_type"]) + + copy_res = hosts.all.zos_copy(src=src, dest=dest, remote_src=True) verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format( - "{0}({1})".format(dest, extract_member_name(TEST_PDSE_MEMBER)) - ), - executable=SHELL_EXECUTABLE, + cmd="mls {0}".format(dest), + executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("changed") is True + assert result.get("dest") == dest + + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + stdout = v_cp.get("stdout") + assert stdout is not None + assert len(stdout.splitlines()) == 3 finally: + hosts.all.zos_data_set(name=src, state="absent") hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pdse_to_non_existing_pdse(ansible_zos_module): +@pytest.mark.pdse +def test_copy_multiple_data_set_members(ansible_zos_module): hosts = ansible_zos_module - src_ds = TEST_PDSE - dest = "USER.TEST.PDSE.FUNCTEST" + src = "USER.FUNCTEST.SRC.PDS" + src_wildcard = "{0}(ABC*)".format(src) + + dest = 
"USER.FUNCTEST.DEST.PDS" + member_list = ["MEMBER1", "ABCXYZ", "ABCASD"] + ds_list = ["{0}({1})".format(src, member) for member in member_list] + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format( - "{0}({1})".format(dest, extract_member_name(TEST_PDSE_MEMBER)) - ), - executable=SHELL_EXECUTABLE, - ) + hosts.all.zos_data_set(name=src, type="pds") + hosts.all.zos_data_set(name=dest, type="pds") + + for member in ds_list: + hosts.all.shell( + cmd="decho '{0}' '{1}'".format(DUMMY_DATA, member), + executable=SHELL_EXECUTABLE + ) + + copy_res = hosts.all.zos_copy(src=src_wildcard, dest=dest, remote_src=True) for result in copy_res.contacted.values(): assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("changed") is True + assert result.get("dest") == dest + + verify_copy = hosts.all.shell( + cmd="mls {0}".format(dest), + executable=SHELL_EXECUTABLE + ) + + for v_cp in verify_copy.contacted.values(): + assert v_cp.get("rc") == 0 + stdout = v_cp.get("stdout") + assert stdout is not None + assert len(stdout.splitlines()) == 2 + finally: + hosts.all.zos_data_set(name=src, state="absent") hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pds_member_to_existing_uss_file(ansible_zos_module): +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("ds_type", ["pds", "pdse"]) +def test_copy_member_to_non_existing_uss_file(ansible_zos_module, ds_type): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest = "/tmp/" + extract_member_name(src_ds).lower() + data_set = "USER.TEST.PDSE.SOURCE" + src = "{0}(MEMBER)".format(data_set) + dest = "/tmp/member" + try: - hosts.all.file(path=dest, state="touch") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + hosts.all.file(path=dest, state="absent") + 
hosts.all.zos_data_set(name=data_set, state="present", type=ds_type) + hosts.all.shell( + cmd="decho 'Record for data set' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + copy_res = hosts.all.zos_copy(src=src, dest=dest, remote_src=True) stat_res = hosts.all.stat(path=dest) verify_copy = hosts.all.shell( cmd="head {0}".format(dest), executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest for result in stat_res.contacted.values(): assert result.get("stat").get("exists") is True for result in verify_copy.contacted.values(): assert result.get("rc") == 0 assert result.get("stdout") != "" finally: + hosts.all.zos_data_set(name=data_set, state="absent") hosts.all.file(path=dest, state="absent") -def test_copy_pds_member_to_non_existing_uss_file(ansible_zos_module): +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("args", [ + dict(ds_type="pds", force=False), + dict(ds_type="pds", force=True), + dict(ds_type="pdse", force=False), + dict(ds_type="pdse", force=True) +]) +def test_copy_member_to_existing_uss_file(ansible_zos_module, args): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest = "/tmp/" + extract_member_name(src_ds).lower() + data_set = "USER.TEST.PDSE.SOURCE" + src = "{0}(MEMBER)".format(data_set) + dest = "/tmp/member" + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + hosts.all.file(path=dest, state="touch") + hosts.all.zos_data_set(name=data_set, state="present", type=args["ds_type"]) + hosts.all.shell( + cmd="decho 'Record for data set' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + copy_res = hosts.all.zos_copy(src=src, dest=dest, remote_src=True, force=args["force"]) stat_res = hosts.all.stat(path=dest) verify_copy = hosts.all.shell( cmd="head {0}".format(dest), executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): - assert
result.get("msg") is None + if args["force"]: + assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest + else: + assert result.get("msg") is not None + assert result.get("changed") is False for result in stat_res.contacted.values(): assert result.get("stat").get("exists") is True for result in verify_copy.contacted.values(): assert result.get("rc") == 0 - assert result.get("stdout") != "" + if args["force"]: + assert result.get("stdout") != "" finally: + hosts.all.zos_data_set(name=data_set, state="absent") hosts.all.file(path=dest, state="absent") -def test_copy_pds_member_to_existing_ps(ansible_zos_module): +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["pds", "pdse"]) +def test_copy_pdse_to_uss_dir(ansible_zos_module, src_type): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest = "USER.TEST.SEQ.FUNCTEST" + src_ds = "USER.TEST.FUNCTEST" + dest = "/tmp/" + dest_path = "/tmp/{0}".format(src_ds) + try: - hosts.all.zos_data_set(name=dest, type="seq", state="present") + hosts.all.zos_data_set(name=src_ds, type=src_type, state="present") + members = ["MEMBER1", "MEMBER2", "MEMBER3"] + ds_list = ["{0}({1})".format(src_ds, member) for member in members] + for member in ds_list: + hosts.all.shell( + cmd="decho '{0}' '{1}'".format(DUMMY_DATA, member), + executable=SHELL_EXECUTABLE + ) + + hosts.all.file(path=dest_path, state="directory") + copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + stat_res = hosts.all.stat(path=dest_path) + + for result in copy_res.contacted.values(): + assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest + for result in stat_res.contacted.values(): + assert result.get("stat").get("exists") is True + assert result.get("stat").get("isdir") is True + finally: + hosts.all.zos_data_set(name=src_ds, state="absent") + hosts.all.file(path=dest_path, state="absent") + + +@pytest.mark.uss 
+@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["pds", "pdse"]) +def test_copy_member_to_uss_dir(ansible_zos_module, src_type): + hosts = ansible_zos_module + src_ds = "USER.TEST.FUNCTEST" + src = "{0}(MEMBER)".format(src_ds) + dest = "/tmp/" + dest_path = "/tmp/MEMBER" + + try: + hosts.all.zos_data_set(name=src_ds, type=src_type, state="present") + hosts.all.shell( + cmd="decho '{0}' '{1}'".format(DUMMY_DATA, src), + executable=SHELL_EXECUTABLE + ) + + copy_res = hosts.all.zos_copy(src=src, dest=dest, remote_src=True) + stat_res = hosts.all.stat(path=dest_path) verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE + cmd="head {0}".format(dest_path), + executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest + for result in stat_res.contacted.values(): + assert result.get("stat").get("exists") is True for result in verify_copy.contacted.values(): assert result.get("rc") == 0 assert result.get("stdout") != "" finally: - hosts.all.zos_data_set(name=dest, state="absent") + hosts.all.zos_data_set(name=src_ds, state="absent") + hosts.all.file(path=dest_path, state="absent") -def test_copy_pds_member_to_non_existing_ps(ansible_zos_module): +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["pds", "pdse"]) +def test_copy_member_to_non_existing_seq_data_set(ansible_zos_module, src_type): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER + src_ds = "USER.TEST.PDS.SOURCE" + src = "{0}(MEMBER)".format(src_ds) dest = "USER.TEST.SEQ.FUNCTEST" + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + hosts.all.zos_data_set(name=dest, state="absent") + hosts.all.zos_data_set(name=src_ds, type=src_type, state="present") + hosts.all.shell( + cmd="decho 'A record' '{0}'".format(src), + executable=SHELL_EXECUTABLE + ) + + copy_res = 
hosts.all.zos_copy(src=src, dest=dest, remote_src=True) verify_copy = hosts.all.shell( cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest for result in verify_copy.contacted.values(): assert result.get("rc") == 0 assert result.get("stdout") != "" finally: + hosts.all.zos_data_set(name=src_ds, state="absent") hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pds_member_to_existing_pds_member(ansible_zos_module): +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("args", [ + dict(type="pds", force=False), + dict(type="pds", force=True), + dict(type="pdse", force=False), + dict(type="pdse", force=True), +]) +def test_copy_member_to_existing_seq_data_set(ansible_zos_module, args): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + src_ds = "USER.TEST.PDS.SOURCE" + src = "{0}(MEMBER)".format(src_ds) + dest = "USER.TEST.SEQ.FUNCTEST" + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - hosts.all.zos_data_set(name=dest, type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + hosts.all.zos_data_set(name=dest, type="seq", state="present", replace=True) + hosts.all.zos_data_set(name=src_ds, type=args["type"], state="present") + + for data_set in [src, dest]: + hosts.all.shell( + cmd="decho 'A record' '{0}'".format(data_set), + executable=SHELL_EXECUTABLE + ) + + copy_res = hosts.all.zos_copy(src=src, dest=dest, force=args["force"], remote_src=True) verify_copy = hosts.all.shell( cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE ) + for result in copy_res.contacted.values(): - assert result.get("msg") is None + if args["force"]: + assert 
result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest + else: + assert result.get("msg") is not None + assert result.get("changed") is False for result in verify_copy.contacted.values(): assert result.get("rc") == 0 - assert result.get("stdout") != "" + if args["force"]: + assert result.get("stdout") != "" finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") + hosts.all.zos_data_set(name=src_ds, state="absent") + hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pds_member_to_non_existing_pds_member(ansible_zos_module): +@pytest.mark.uss +@pytest.mark.pdse +@pytest.mark.parametrize("dest_type", ["pds", "pdse"]) +def test_copy_file_to_member_convert_encoding(ansible_zos_module, dest_type): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + src = "/etc/profile" + dest = "USER.TEST.PDS.FUNCTEST" + try: hosts.all.zos_data_set( - name=dest_ds, - type="pds", + name=dest, + type=dest_type, space_primary=5, space_type="M", record_format="fba", record_length=25, ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) + + copy_res = hosts.all.zos_copy( + src=src, + dest=dest, + remote_src=False, + encoding={ + "from": "UTF-8", + "to": "IBM-1047" + }, + ) + verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE + cmd="head \"//'{0}'\"".format(dest + "(PROFILE)"), + executable=SHELL_EXECUTABLE, ) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest for result in verify_copy.contacted.values(): assert result.get("rc") == 0 assert result.get("stdout") != "" finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") + hosts.all.zos_data_set(name=dest, state="absent") -def test_copy_pds_member_to_existing_pds(ansible_zos_module): +@pytest.mark.pdse
+@pytest.mark.parametrize("args", [ + dict(type="pds", backup=None), + dict(type="pds", backup="USER.TEST.PDS.BACKUP"), + dict(type="pdse", backup=None), + dict(type="pdse", backup="USER.TEST.PDSE.BACKUP"), +]) +def test_backup_pds(ansible_zos_module, args): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST({0})".format(extract_member_name(src_ds)) + src = tempfile.mkdtemp() + dest = "USER.TEST.PDS.FUNCTEST" + members = ["FILE1", "FILE2", "FILE3", "FILE4", "FILE5"] + backup_name = None + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE - ) + populate_dir(src) + populate_partitioned_data_set(hosts, dest, args["type"], members) + + if args["backup"]: + copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True, backup_name=args["backup"]) + else: + copy_res = hosts.all.zos_copy(src=src, dest=dest, force=True, backup=True) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest + + backup_name = result.get("backup_name") + assert backup_name is not None + if args["backup"]: + assert backup_name == args["backup"] + + verify_copy = get_listcat_information(hosts, backup_name, args["type"]) + for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") + shutil.rmtree(src) + hosts.all.zos_data_set(name=dest, state="absent") + if 
backup_name: + hosts.all.zos_data_set(name=backup_name, state="absent") -def test_copy_pds_member_to_existing_pdse_member(ansible_zos_module): +@pytest.mark.seq +@pytest.mark.pdse +@pytest.mark.parametrize("src_type", ["seq", "pds", "pdse"]) +def test_copy_data_set_to_volume(ansible_zos_module, src_type): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + source = "USER.TEST.FUNCTEST.SRC" + dest = "USER.TEST.FUNCTEST.DEST" + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, + hosts.all.zos_data_set(name=source, type=src_type, state='present') + copy_res = hosts.all.zos_copy( + src=source, + dest=dest, + remote_src=True, + volume='000000' ) - hosts.all.zos_data_set(name=dest, type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE + + for cp in copy_res.contacted.values(): + assert cp.get('msg') is None + assert cp.get('changed') is True + assert cp.get('dest') == dest + + check_vol = hosts.all.shell( + cmd="tsocmd \"LISTDS '{0}'\"".format(dest), + executable=SHELL_EXECUTABLE, ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + + for cv in check_vol.contacted.values(): + assert cv.get('rc') == 0 + assert "000000" in cv.get('stdout') finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") + hosts.all.zos_data_set(name=source, state='absent') + hosts.all.zos_data_set(name=dest, state='absent') -def test_copy_pds_member_to_non_existing_pdse_member(ansible_zos_module): +@pytest.mark.vsam +def test_copy_ksds_to_non_existing_ksds(ansible_zos_module): hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - 
dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + src_ds = TEST_VSAM_KSDS + dest_ds = "USER.TEST.VSAM.KSDS" + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE - ) + copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, remote_src=True) + verify_copy = get_listcat_information(hosts, dest_ds, "ksds") + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest_ds for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) finally: hosts.all.zos_data_set(name=dest_ds, state="absent") -def test_copy_pdse_member_to_existing_uss_file(ansible_zos_module): +@pytest.mark.vsam +@pytest.mark.parametrize("force", [False, True]) +def test_copy_ksds_to_empty_ksds(ansible_zos_module, force): hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest = "/tmp/" + extract_member_name(src_ds).lower() + src_ds = "USER.TEST.VSAM.SOURCE" + dest_ds = "USER.TEST.VSAM.KSDS" + try: - hosts.all.file(path=dest, state="touch") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - stat_res = hosts.all.stat(path=dest) - verify_copy = hosts.all.shell( - cmd="head {0}".format(dest), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - for result in 
verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.file(path=dest, state="absent") + create_vsam_data_set(hosts, src_ds, "KSDS", add_data=True, key_length=12, key_offset=0) + create_vsam_data_set(hosts, dest_ds, "KSDS", key_length=12, key_offset=0) + copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, remote_src=True, force=force) + verify_copy = get_listcat_information(hosts, dest_ds, "ksds") -def test_copy_pdse_member_to_non_existing_uss_file(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest = "/tmp/" + extract_member_name(src_ds).lower() - try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - stat_res = hosts.all.stat(path=dest) - verify_copy = hosts.all.shell( - cmd="head {0}".format(dest), executable=SHELL_EXECUTABLE - ) for result in copy_res.contacted.values(): assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True + assert result.get("changed") is True + assert result.get("dest") == dest_ds + for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.file(path=dest, state="absent") + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) + finally: + hosts.all.zos_data_set(name=src_ds, state="absent") + hosts.all.zos_data_set(name=dest_ds, state="absent") -def test_copy_pdse_member_to_existing_ps(ansible_zos_module): +@pytest.mark.vsam +@pytest.mark.parametrize("force", [False, True]) +def test_copy_ksds_to_existing_ksds(ansible_zos_module, force): hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest = "USER.TEST.SEQ.FUNCTEST" + src_ds = "USER.TEST.VSAM.SOURCE" + dest_ds = "USER.TEST.VSAM.KSDS" + try: - 
hosts.all.zos_data_set(name=dest, type="seq", state="present") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE - ) + create_vsam_data_set(hosts, src_ds, "KSDS", add_data=True, key_length=12, key_offset=0) + create_vsam_data_set(hosts, dest_ds, "KSDS", add_data=True, key_length=12, key_offset=0) + + copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, remote_src=True, force=force) + verify_copy = get_listcat_information(hosts, dest_ds, "ksds") + for result in copy_res.contacted.values(): - assert result.get("msg") is None + if force: + assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest_ds + else: + assert result.get("msg") is not None + assert result.get("changed") is False + for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.zos_data_set(name=dest, state="absent") + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) + finally: + hosts.all.zos_data_set(name=src_ds, state="absent") + hosts.all.zos_data_set(name=dest_ds, state="absent") -def test_copy_pdse_member_to_non_existing_ps(ansible_zos_module): +@pytest.mark.vsam +@pytest.mark.parametrize("backup", [None, "USER.TEST.VSAM.KSDS.BACK"]) +def test_backup_ksds(ansible_zos_module, backup): hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest = "USER.TEST.SEQ.FUNCTEST" + src = "USER.TEST.VSAM.SOURCE" + dest = "USER.TEST.VSAM.KSDS" + backup_name = None + try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE - ) + create_vsam_data_set(hosts, src, "KSDS", 
add_data=True, key_length=12, key_offset=0) + create_vsam_data_set(hosts, dest, "KSDS", add_data=True, key_length=12, key_offset=0) + + if backup: + copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True, backup_name=backup, remote_src=True, force=True) + else: + copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True, remote_src=True, force=True) + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + backup_name = result.get("backup_name") + assert backup_name is not None + + if backup: + assert backup_name == backup + + verify_copy = get_listcat_information(hosts, dest, "ksds") + verify_backup = get_listcat_information(hosts, backup_name, "ksds") + for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) + for result in verify_backup.contacted.values(): + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) + finally: + hosts.all.zos_data_set(name=src, state="absent") hosts.all.zos_data_set(name=dest, state="absent") + if backup_name: + hosts.all.zos_data_set(name=backup_name, state="absent") -def test_copy_pdse_member_to_existing_pdse_member(ansible_zos_module): +@pytest.mark.vsam +def test_copy_ksds_to_volume(ansible_zos_module): hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + src_ds = TEST_VSAM_KSDS + dest_ds = "USER.TEST.VSAM.KSDS" + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - 
record_length=25, - ) - hosts.all.zos_data_set(name=dest, type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE + copy_res = hosts.all.zos_copy( + src=src_ds, + dest=dest_ds, + remote_src=True, + volume="000000" ) + verify_copy = get_listcat_information(hosts, dest_ds, "ksds") + for result in copy_res.contacted.values(): assert result.get("msg") is None + assert result.get("changed") is True + assert result.get("dest") == dest_ds for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("dd_names") is not None + dd_names = result.get("dd_names") + assert len(dd_names) > 0 + output = "\n".join(dd_names[0]["content"]) + assert "IN-CAT" in output + assert re.search(r"\bINDEXED\b", output) + assert re.search(r"\b000000\b", output) finally: hosts.all.zos_data_set(name=dest_ds, state="absent") -def test_copy_pdse_member_to_non_existing_pdse_member(ansible_zos_module): +def test_dest_data_set_parameters(ansible_zos_module): hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" + src = "/etc/profile" + dest = "USER.TEST.DEST" + volume = "000000" + space_primary = 3 + space_secondary = 2 + space_type = "K" + record_format = "VB" + record_length = 100 + block_size = 21000 + try: - hosts.all.zos_data_set( - name=dest_ds, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, + copy_result = hosts.all.zos_copy( + src=src, + dest=dest, + remote_src=True, + volume=volume, + dest_data_set=dict( + type="SEQ", + space_primary=space_primary, + space_secondary=space_secondary, + space_type=space_type, + record_format=record_format, + record_length=record_length, + block_size=block_size + ) ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, 
remote_src=True)
+
         verify_copy = hosts.all.shell(
-            cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
+            cmd="tsocmd \"LISTDS '{0}'\"".format(dest),
+            executable=SHELL_EXECUTABLE,
         )
-        for result in copy_res.contacted.values():
+
+        for result in copy_result.contacted.values():
             assert result.get("msg") is None
+            assert result.get("changed") is True
+            assert result.get("dest") == dest
         for result in verify_copy.contacted.values():
+            # The tsocmd returns 5 lines like this:
+            # USER.TEST.DEST
+            # --RECFM-LRECL-BLKSIZE-DSORG
+            # VB    100   21000   PS
+            # --VOLUMES--
+            # 000000
             assert result.get("rc") == 0
-            assert result.get("stdout") != ""
+            output_lines = result.get("stdout").split("\n")
+            assert len(output_lines) == 5
+            data_set_attributes = output_lines[2].strip().split()
+            assert len(data_set_attributes) == 4
+            assert data_set_attributes[0] == record_format
+            assert data_set_attributes[1] == str(record_length)
+            assert data_set_attributes[2] == str(block_size)
+            assert data_set_attributes[3] == "PS"
+            assert volume in output_lines[4]
     finally:
-        hosts.all.zos_data_set(name=dest_ds, state="absent")
+        hosts.all.zos_data_set(name=dest, state="absent")


-def test_copy_pdse_member_to_existing_pds_member(ansible_zos_module):
+def test_ensure_tmp_cleanup(ansible_zos_module):
     hosts = ansible_zos_module
-    src_ds = TEST_PDSE_MEMBER
-    dest_ds = "USER.TEST.PDS.FUNCTEST"
-    dest = "USER.TEST.PDS.FUNCTEST(DATA)"
-    try:
-        hosts.all.zos_data_set(
-            name=dest_ds,
-            type="pds",
-            space_primary=5,
-            space_type="M",
-            record_format="fba",
-            record_length=25,
-        )
-        hosts.all.zos_data_set(name=dest, type="MEMBER", replace="yes")
-        copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True)
-        verify_copy = hosts.all.shell(
-            cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE
-        )
-        for result in copy_res.contacted.values():
-            assert result.get("msg") is None
-        for result in verify_copy.contacted.values():
-            assert result.get("rc") == 0
-            assert
result.get("stdout") != "" - finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") + src = "/etc/profile" + dest = "/tmp" + dest_path = "/tmp/profile" + temp_files_patterns = [ + re.compile(r"\bansible-zos-copy-payload"), + re.compile(r"\bconverted"), + re.compile(r"\bansible-zos-copy-data-set-dump") + ] -def test_copy_pdse_member_to_non_existing_pds_member(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest = "USER.TEST.PDS.FUNCTEST(DATA)" try: - hosts.all.zos_data_set( - name=dest_ds, - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE - ) + copy_res = hosts.all.zos_copy(src=src, dest=dest) for result in copy_res.contacted.values(): assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" + assert result.get("changed") is True + + stat_dir = hosts.all.shell( + cmd="ls", + executable=SHELL_EXECUTABLE, + chdir="/tmp/" + ) + + for result in stat_dir.contacted.values(): + tmp_files = result.get("stdout") + for pattern in temp_files_patterns: + assert not pattern.search(tmp_files) + finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_pds_member_to_uss_dir(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_PDS_MEMBER - dest = "/tmp/" - dest_path = "/tmp/" + extract_member_name(src_ds) - try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - stat_res = hosts.all.stat(path=dest_path) - verify_copy = hosts.all.shell( - cmd="head {0}".format(dest_path), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert 
result.get("stat").get("exists") is True - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_pdse_member_to_uss_dir(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_PDSE_MEMBER - dest = "/tmp/" - dest_path = "/tmp/" + extract_member_name(src_ds) - try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest, remote_src=True) - stat_res = hosts.all.stat(path=dest_path) - verify_copy = hosts.all.shell( - cmd="head {0}".format(dest_path), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_vsam_ksds_to_existing_vsam_ksds(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_VSAM_KSDS - dest_ds = "USER.TEST.VSAM.KSDS" - try: - create_vsam_ksds(dest_ds, ansible_zos_module) - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, remote_src=True) - verify_copy = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(dest_ds), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stderr") - assert "NOT IN CATALOG" not in result.get("stdout") - assert "VSAM" in result.get("stdout") - finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_vsam_ksds_to_non_existing_vsam_ksds(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_VSAM_KSDS - dest_ds = "USER.TEST.VSAM.KSDS" - try: - copy_res = hosts.all.zos_copy(src=src_ds, dest=dest_ds, 
remote_src=True) - verify_copy = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(dest_ds), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stderr") - assert "NOT IN CATALOG" not in result.get("stdout") - assert "VSAM" in result.get("stdout") - finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_empty_vsam_fails(ansible_zos_module): - hosts = ansible_zos_module - src_ds = TEST_VSAM - dest_ds = "USER.TEST.VSAM.LDS" - try: - copy_res = hosts.all.zos_copy( - src=src_ds, dest=dest_ds, is_vsam=True, remote_src=True - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is not None - finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_inline_content_to_existing_uss_file(ansible_zos_module): - hosts = ansible_zos_module - dest_path = "/tmp/inline" - try: - hosts.all.file(path=dest_path, state="touch") - copy_res = hosts.all.zos_copy(content="Inline content", dest=dest_path) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_inline_content_to_uss_dir(ansible_zos_module): - hosts = ansible_zos_module - dest = "/tmp/" - dest_path = "/tmp/inline_copy" - try: - copy_res = hosts.all.zos_copy(content="Inline content", dest=dest) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_inline_content_to_ps(ansible_zos_module): - 
hosts = ansible_zos_module - dest_path = "USER.TEST.SEQ.FUNCTEST" - try: - copy_res = hosts.all.zos_copy(content="Inline content", dest=dest_path) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest_path), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.zos_data_set(name=dest_path, state="absent") - - -def test_copy_inline_content_to_pds_member(ansible_zos_module): - hosts = ansible_zos_module - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest_path = "USER.TEST.PDS.FUNCTEST(CONTENT)" - try: - hosts.all.zos_data_set( - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(content="Inline content", dest=dest_path) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest_path), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_inline_content_to_pdse_member(ansible_zos_module): - hosts = ansible_zos_module - dest_ds = "USER.TEST.PDS.FUNCTEST" - dest_path = "USER.TEST.PDS.FUNCTEST(CONTENT)" - try: - hosts.all.zos_data_set( - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy(content="Inline content", dest=dest_path) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest_path), executable=SHELL_EXECUTABLE - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - 
hosts.all.zos_data_set(name=dest_ds, state="absent") - - -def test_copy_to_existing_dest_not_forced(ansible_zos_module): - hosts = ansible_zos_module - dest_path = "/tmp/profile" - try: - hosts.all.file(path=dest_path, state="touch") - copy_res = hosts.all.zos_copy(src="/etc/profile", dest=dest_path, force=False) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - assert result.get("note") is not None - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_local_symlink_to_uss_file(ansible_zos_module): - hosts = ansible_zos_module - src_lnk = "/tmp/etclnk" - dest_path = "/tmp/profile" - try: - try: - os.symlink("/etc/profile", src_lnk) - except FileExistsError: - pass - hosts.all.file(path=dest_path, state="touch") - copy_res = hosts.all.zos_copy(src=src_lnk, dest=dest_path, local_follow=True) - verify_copy = hosts.all.shell( - cmd="head {0}".format(dest_path), executable=SHELL_EXECUTABLE - ) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.file(path=dest_path, state="absent") - os.remove(src_lnk) - - -def test_copy_local_file_to_uss_file_convert_encoding(ansible_zos_module): - hosts = ansible_zos_module - dest_path = "/tmp/profile" - try: - hosts.all.file(path=dest_path, state="absent") - copy_res = hosts.all.zos_copy( - src="/etc/profile", - dest=dest_path, - encoding={"from": "ISO8859-1", "to": "IBM-1047"}, - ) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - finally: - hosts.all.file(path=dest_path, state="absent") - - -def 
test_copy_uss_file_to_uss_file_convert_encoding(ansible_zos_module): - hosts = ansible_zos_module - dest_path = "/tmp/profile" - try: - hosts.all.file(path=dest_path, state="absent") - copy_res = hosts.all.zos_copy( - src="/etc/profile", - dest=dest_path, - encoding={"from": "IBM-1047", "to": "IBM-1047"}, - remote_src=True, - ) - stat_res = hosts.all.stat(path=dest_path) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_copy_uss_file_to_pds_member_convert_encoding(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest_path = "USER.TEST.PDS.FUNCTEST" - try: - hosts.all.zos_data_set( - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=25, - ) - copy_res = hosts.all.zos_copy( - src=src, - dest=dest_path, - remote_src=True, - encoding={"from": "IBM-1047", "to": "IBM-1047"}, - ) - verify_copy = hosts.all.shell( - cmd="head \"//'{0}'\"".format(dest_path + "(PROFILE)"), - executable=SHELL_EXECUTABLE, - ) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - for result in verify_copy.contacted.values(): - assert result.get("rc") == 0 - assert result.get("stdout") != "" - finally: - hosts.all.zos_data_set(name=dest_path, state="absent") - - -def test_ensure_tmp_cleanup(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "/tmp" - dest_path = "/tmp/profile" - try: - stat_dir = hosts.all.shell( - cmd="ls -l", executable=SHELL_EXECUTABLE, chdir="/tmp" - ) - file_count_pre = len(list(stat_dir.contacted.values())[0].get("stdout_lines")) - - copy_res = hosts.all.zos_copy(src=src, dest=dest) - for result in copy_res.contacted.values(): - assert result.get("msg") is None - - stat_dir = hosts.all.shell( - cmd="ls -l", executable=SHELL_EXECUTABLE, chdir="/tmp" - ) - 
file_count_post = len(list(stat_dir.contacted.values())[0].get("stdout_lines")) - assert file_count_post <= file_count_pre - - finally: - hosts.all.file(path=dest_path, state="absent") - - -def test_backup_uss_file_default_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "/tmp/profile" - backup_name = None - try: - hosts.all.file(path=dest, state="touch") - copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - backup_name = result.get("backup_name") - assert backup_name is not None - - stat_res = hosts.all.stat(path=backup_name) - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - - finally: - hosts.all.file(path=dest, state="absent") - if backup_name: - hosts.all.file(path=backup_name, state="absent") - - -def test_backup_sequential_data_set_default_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "USER.TEST.SEQ.FUNCTEST" - backup_name = None - try: - hosts.all.zos_data_set(name=dest, type="seq", state="present") - copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - backup_name = result.get("backup_name") - assert backup_name is not None - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_pds_default_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = tempfile.mkdtemp() - dest = "USER.TEST.PDS.FUNCTEST" - backup_name = None 
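The new `test_ensure_tmp_cleanup` in this diff asserts that none of the module's temporary artifacts survive in `/tmp` by matching filename patterns against an `ls` listing. A minimal standalone sketch of that check (the directory listing below is a made-up example; the patterns are copied from the test):

```python
import re

# Patterns copied from the new test_ensure_tmp_cleanup in this diff.
temp_files_patterns = [
    re.compile(r"\bansible-zos-copy-payload"),
    re.compile(r"\bconverted"),
    re.compile(r"\bansible-zos-copy-data-set-dump"),
]

def leftover_temp_files(listing):
    """Return entries of an `ls` listing that match any temp-file pattern."""
    return [name for name in listing.splitlines()
            if any(p.search(name) for p in temp_files_patterns)]

# Made-up /tmp listing: one leftover payload file among unrelated entries.
listing = "profile\nansible-zos-copy-payload-abc123\nssh-XXXX"
assert leftover_temp_files(listing) == ["ansible-zos-copy-payload-abc123"]
```

The test asserts the inverse, that `leftover_temp_files`-style matching finds nothing after `zos_copy` completes.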
- try: - populate_dir(src) - hosts.all.zos_data_set( - name=dest, - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=80, - ) - hosts.all.zos_data_set(name=dest + "(FILE1)", type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - backup_name = result.get("backup_name") - assert backup_name is not None - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - shutil.rmtree(src) - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_pdse_default_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = tempfile.mkdtemp() - dest = "USER.TEST.PDSE.FUNCTEST" - backup_name = None - try: - populate_dir(src) - hosts.all.zos_data_set( - name=dest, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=80, - ) - hosts.all.zos_data_set(name=dest + "(FILE1)", type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - backup_name = result.get("backup_name") - assert backup_name is not None - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - shutil.rmtree(src) - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - 
hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_vsam_default_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = TEST_VSAM_KSDS - dest = "USER.TEST.VSAM.KSDS" - backup_name = None - try: - create_vsam_ksds(dest, ansible_zos_module) - copy_res = hosts.all.zos_copy(src=src, dest=dest, backup=True, remote_src=True) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - backup_name = result.get("backup_name") - assert backup_name is not None - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_uss_file_user_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "/tmp/profile" - backup_name = "/tmp/uss_backup" - try: - hosts.all.file(path=dest, state="touch") - copy_res = hosts.all.zos_copy( - src=src, dest=dest, backup=True, backup_name=backup_name - ) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - result.get("backup_name") == backup_name - - stat_res = hosts.all.stat(path=backup_name) - for result in stat_res.contacted.values(): - assert result.get("stat").get("exists") is True - - finally: - hosts.all.file(path=dest, state="absent") - if backup_name: - hosts.all.file(path=backup_name, state="absent") - - -def test_backup_sequential_data_set_user_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "USER.TEST.SEQ.FUNCTEST" - backup_name = "USER.TEST.SEQ.FUNCTEST.BACK" - try: - hosts.all.zos_data_set(name=dest, type="seq", state="present") - copy_res = hosts.all.zos_copy( 
- src=src, dest=dest, backup=True, backup_name=backup_name - ) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - result.get("backup_name") == backup_name - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_pds_user_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = tempfile.mkdtemp() - dest = "USER.TEST.PDS.FUNCTEST" - backup_name = "USER.TEST.PDS.FUNCTEST.BACK" - try: - populate_dir(src) - hosts.all.zos_data_set( - name=dest, - type="pds", - space_primary=5, - space_type="M", - record_format="fba", - record_length=80, - ) - hosts.all.zos_data_set(name=dest + "(FILE1)", type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy( - src=src, dest=dest, backup=True, backup_name=backup_name - ) - print(vars(copy_res)) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - result.get("backup_name") == backup_name - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - shutil.rmtree(src) - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_pdse_user_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = tempfile.mkdtemp() - dest = "USER.TEST.PDSE.FUNCTEST" - backup_name = "USER.TEST.PDSE.FUNCTEST.BACK" - try: - 
populate_dir(src) - hosts.all.zos_data_set( - name=dest, - type="pdse", - space_primary=5, - space_type="M", - record_format="fba", - record_length=80, - ) - hosts.all.zos_data_set(name=dest + "(FILE1)", type="MEMBER", replace="yes") - copy_res = hosts.all.zos_copy( - src=src, dest=dest, backup=True, backup_name=backup_name - ) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - result.get("backup_name") == backup_name - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - shutil.rmtree(src) - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_backup_vsam_user_backup_path(ansible_zos_module): - hosts = ansible_zos_module - src = TEST_VSAM_KSDS - dest = "USER.TEST.VSAM.KSDS" - backup_name = "USER.TEST.VSAM.KSDS.BACK" - try: - create_vsam_ksds(dest, ansible_zos_module) - copy_res = hosts.all.zos_copy( - src=src, dest=dest, backup=True, remote_src=True, backup_name=backup_name - ) - print(vars(copy_res)) - - for result in copy_res.contacted.values(): - assert result.get("msg") is None - result.get("backup_name") == backup_name - - stat_res = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(backup_name), - executable=SHELL_EXECUTABLE, - ) - for result in stat_res.contacted.values(): - assert result.get("rc") == 0 - assert "NOT IN CATALOG" not in result.get("stdout") - assert "NOT IN CATALOG" not in result.get("stderr") - - finally: - hosts.all.zos_data_set(name=dest, state="absent") - if backup_name: - hosts.all.zos_data_set(name=backup_name, state="absent") - - -def test_copy_local_file_insufficient_read_permission_fails(ansible_zos_module): - hosts = ansible_zos_module - src_path = 
"/tmp/testfile" - dest = "/tmp" - try: - open(src_path, "w").close() - os.chmod(src_path, 0) - copy_res = hosts.all.zos_copy(src=src_path, dest=dest) - for result in copy_res.contacted.values(): - assert result.get("msg") is not None - assert "read permission" in result.get("msg") - finally: - if os.path.exists(src_path): - os.remove(src_path) - - -def test_copy_non_existent_local_file_fails(ansible_zos_module): - hosts = ansible_zos_module - src_path = "/tmp/non_existent_src" - dest = "/tmp" - - copy_res = hosts.all.zos_copy(src=src_path, dest=dest) - for result in copy_res.contacted.values(): - assert result.get("msg") is not None - assert "does not exist" in result.get("msg") - - -def test_copy_local_file_to_vsam_fails(ansible_zos_module): - hosts = ansible_zos_module - src = "/etc/profile" - dest = "USER.TEST.VSAM.KSDS" - try: - create_vsam_ksds(dest, ansible_zos_module) - copy_res = hosts.all.zos_copy(src=src, dest=dest) - for result in copy_res.contacted.values(): - assert result.get("msg") is not None - assert "Incompatible" in result.get("msg") - finally: - hosts.all.zos_data_set(name=dest, state="absent") - - -def test_copy_sequential_data_set_to_vsam_fails(ansible_zos_module): - hosts = ansible_zos_module - src = TEST_PS - dest = "USER.TEST.VSAM.KSDS" - try: - create_vsam_ksds(dest, ansible_zos_module) - copy_res = hosts.all.zos_copy(src=src, dest=dest, remote_src=True) - for result in copy_res.contacted.values(): - assert result.get("msg") is not None - assert "Incompatible" in result.get("msg") - finally: - hosts.all.zos_data_set(name=dest, state="absent") + hosts.all.file(path=dest_path, state="absent") # Deprecated function for ibm.zos_core.zos_copy in v1.4.0 @@ -1995,65 +1907,3 @@ def test_copy_sequential_data_set_to_vsam_fails(ansible_zos_module): # assert result.get("msg") is not None # finally: # hosts.all.file(path=dest_path, state="absent") - - -def test_copy_multiple_data_set_members(ansible_zos_module): - hosts = ansible_zos_module - src = 
"USER.FUNCTEST.SRC.PDS" - dest = "USER.FUNCTEST.DEST.PDS" - member_list = ["MEMBER1", "ABCXYZ", "ABCASD"] - ds_list = ["{0}({1})".format(src, i) for i in member_list] - try: - hosts.all.zos_data_set(name=src, type="pds") - hosts.all.zos_data_set(name=dest, type="pds") - hosts.all.zos_data_set( - batch=[dict(src=i, type="MEMBER", replace="yes") for i in ds_list] - ) - - for i in ds_list: - hosts.all.zos_copy(content=DUMMY_DATA, dest=i) - - copy_res = hosts.all.zos_copy(src=src + "(ABC*)", dest=dest, remote_src=True) - for res in copy_res.contacted.values(): - assert res.get("msg") is None - - verify_copy = hosts.all.shell( - cmd="mls {0}".format(dest), executable=SHELL_EXECUTABLE - ) - - for v_cp in verify_copy.contacted.values(): - assert v_cp.get("rc") == 0 - stdout = v_cp.get("stdout") - assert stdout is not None - assert(len(stdout.splitlines())) == 2 - - finally: - hosts.all.zos_data_set(name=dest, state="absent") - hosts.all.zos_data_set(name=src, state="absent") - - -def test_copy_pds_to_volume(ansible_zos_module): - hosts = ansible_zos_module - remote_pds = "USER.TEST.FUNCTEST.PDS" - dest_pds = "USER.TEST.FUNCTEST.DEST" - try: - hosts.all.zos_data_set(name=remote_pds, type='pds', state='present') - copy_res = hosts.all.zos_copy( - src=remote_pds, - dest=dest_pds, - remote_src=True, - volume='000000' - ) - for cp in copy_res.contacted.values(): - assert cp.get('msg') is None - - check_vol = hosts.all.shell( - cmd="tsocmd \"LISTDS '{0}'\"".format(dest_pds), - executable=SHELL_EXECUTABLE, - ) - for cv in check_vol.contacted.values(): - assert cv.get('rc') == 0 - assert "000000" in cv.get('stdout') - finally: - hosts.all.zos_data_set(name=remote_pds, state='absent') - hosts.all.zos_data_set(name=dest_pds, state='absent') diff --git a/tests/functional/modules/test_zos_fetch_func.py b/tests/functional/modules/test_zos_fetch_func.py index d1ae46e1b..6fefca435 100644 --- a/tests/functional/modules/test_zos_fetch_func.py +++ 
b/tests/functional/modules/test_zos_fetch_func.py @@ -481,14 +481,15 @@ def test_fetch_flat_create_dirs(ansible_zos_module, z_python_interpreter): shutil.rmtree("/tmp/" + remote_host) -def test_sftp_negative_port_specification_fails(ansible_zos_module): - hosts = ansible_zos_module - params = dict(src="/etc/profile", dest="/tmp/", flat=True, sftp_port=-1) - try: - results = hosts.all.zos_fetch(**params) - dest_path = "/tmp/profile" - for result in results.contacted.values(): - assert result.get("msg") is not None - finally: - if os.path.exists(dest_path): - os.remove(dest_path) +# The sftp_port option has been deprecated, so this test is no longer needed. +# def test_sftp_negative_port_specification_fails(ansible_zos_module): +# hosts = ansible_zos_module +# params = dict(src="/etc/profile", dest="/tmp/", flat=True, sftp_port=-1) +# try: +# results = hosts.all.zos_fetch(**params) +# dest_path = "/tmp/profile" +# for result in results.contacted.values(): +# assert result.get("msg") is not None +# finally: +# if os.path.exists(dest_path): +# os.remove(dest_path) diff --git a/tests/functional/modules/test_zos_mount_func.py b/tests/functional/modules/test_zos_mount_func.py index 5cc46f404..ddd45c679 100644 --- a/tests/functional/modules/test_zos_mount_func.py +++ b/tests/functional/modules/test_zos_mount_func.py @@ -83,7 +83,7 @@ def create_sourcefile(hosts): hosts.all.shell( cmd="zfsadm define -aggregate " + thisfile - + " -volumes 222222 -cylinders 200 1", + + " -volumes 222222 -cylinders 50 1", executable=SHELL_EXECUTABLE, stdin="", ) diff --git a/tests/helpers/zos_blockinfile_helper.py b/tests/helpers/zos_blockinfile_helper.py index dff354205..20ea80446 100644 --- a/tests/helpers/zos_blockinfile_helper.py +++ b/tests/helpers/zos_blockinfile_helper.py @@ -57,7 +57,7 @@ def set_ds_test_env(test_name, hosts, test_env): results = hosts.all.shell(cmd='hlq') for result in results.contacted.values(): hlq = result.get("stdout") - if(len(hlq) > 8): + if len(hlq) > 8: hlq = 
hlq[:8] test_env["DS_NAME"] = hlq + "." + test_name.upper() + "." + test_env["DS_TYPE"] diff --git a/tests/pytest.ini b/tests/pytest.ini index fd7be108f..b6b74c5d6 100644 --- a/tests/pytest.ini +++ b/tests/pytest.ini @@ -4,4 +4,7 @@ python_files = test_*.py python_functions = test_* markers = ds: dataset test cases. - uss: uss test cases. + uss: USS test cases. + seq: sequential data sets test cases. + pdse: partitioned data sets test cases. + vsam: VSAM data sets test cases. diff --git a/tests/sanity/ignore-2.10.txt b/tests/sanity/ignore-2.10.txt index bb3819846..d466b4578 100644 --- a/tests/sanity/ignore-2.10.txt +++ b/tests/sanity/ignore-2.10.txt @@ -1,36 +1,66 @@ +plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_apf.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_apf.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. 
+plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_backup_restore.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_blockinfile.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion +plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_copy.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_data_set.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. plugins/modules/zos_data_set.py validate-modules:doc-default-does-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. -plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:doc-type-does-not-match-spec # Have to use raw here for backwards compatibility with old module args, but would confuse current users if exposed. 
+plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:undocumented-parameter # Keep aliases to match behavior of old module spec, but some aliases were functionally inaccurate, and detailing in docs would only confuse user. -plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_data_set.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_data_set.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_encode.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_fetch.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_find.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py import-2.6!skip # Python 2.6 is 
unsupported plugins/modules/zos_job_output.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_output.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_output.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_query.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. +plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. 
+plugins/modules/zos_job_submit.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_submit.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_lineinfile.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_lineinfile.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_mount.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mount.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_mvs_raw.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mvs_raw.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_operator.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_operator_action_query.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator_action_query.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_ping.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_ping.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_ping.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_tso_command.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # 
Licensed under Apache 2.0 -plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin -plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin -plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion -plugins/modules/zos_copy.py validate-modules:doc-default-does-not-match-spec # Argument 'unsafe_writes' in argument_spec defines default as (False) but documentation defines default as (None), https://github.com/ansible/ansible/pull/67243 -plugins/modules/zos_copy.py pylint:ansible-deprecated-no-version # Version found in call to Display.deprecated or AnsibleModule.deprecate -plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin -plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin -plugins/modules/zos_fetch.py pylint:ansible-deprecated-no-version # Version found in call to Display.deprecated or AnsibleModule.deprecate -plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. 
-plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -docs/source/modules/zos_apf.rst rstcheck!skip #Inline emphasis start-string without end-string -tests/dependencyfinder.py pylint:global-at-module-level # Ignore for test helper -tests/dependencyfinder.py shebang # This is not an ansible module but a test helper so this shebang limitation is not required. +plugins/modules/zos_tso_command.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_tso_command.py import-2.6!skip # Python 2.6 is unsupported diff --git a/tests/sanity/ignore-2.11.txt b/tests/sanity/ignore-2.11.txt index 5e44ff2ec..d466b4578 100644 --- a/tests/sanity/ignore-2.11.txt +++ b/tests/sanity/ignore-2.11.txt @@ -1,34 +1,66 @@ +plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_apf.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_apf.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. 
+plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_backup_restore.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_blockinfile.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion +plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_copy.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_data_set.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. plugins/modules/zos_data_set.py validate-modules:doc-default-does-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. -plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:doc-type-does-not-match-spec # Have to use raw here for backwards compatibility with old module args, but would confuse current users if exposed. 
+plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:undocumented-parameter # Keep aliases to match behavior of old module spec, but some aliases were functionally inaccurate, and detailing in docs would only confuse user. -plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_data_set.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_data_set.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_encode.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_fetch.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_find.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py import-2.6!skip # Python 2.6 is 
unsupported plugins/modules/zos_job_output.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_output.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_output.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_query.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. +plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. 
+plugins/modules/zos_job_submit.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_submit.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_lineinfile.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_lineinfile.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_mount.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mount.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_mvs_raw.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_mvs_raw.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_operator.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_operator_action_query.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_operator_action_query.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_ping.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_ping.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_ping.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_tso_command.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # 
Licensed under Apache 2.0 -plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin -plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin -plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion -plugins/modules/zos_copy.py validate-modules:doc-default-does-not-match-spec # Argument 'unsafe_writes' in argument_spec defines default as (False) but documentation defines default as (None), https://github.com/ansible/ansible/pull/67243 -plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin -plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin -plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. 
-plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -docs/source/modules/zos_apf.rst rstcheck!skip #Inline emphasis start-string without end-string -tests/dependencyfinder.py pylint:global-at-module-level # Ignore for test helper -tests/dependencyfinder.py shebang # This is not an ansible module but a test helper so this shebang limitation is not required. +plugins/modules/zos_tso_command.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_tso_command.py import-2.6!skip # Python 2.6 is unsupported diff --git a/tests/sanity/ignore-2.12.txt b/tests/sanity/ignore-2.12.txt index 5e44ff2ec..d466b4578 100644 --- a/tests/sanity/ignore-2.12.txt +++ b/tests/sanity/ignore-2.12.txt @@ -1,34 +1,66 @@ +plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_apf.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_apf.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. 
+plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_backup_restore.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_backup_restore.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_blockinfile.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_blockinfile.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion +plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_copy.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_copy.py import-2.6!skip # Python 2.6 is unsupported plugins/modules/zos_data_set.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. plugins/modules/zos_data_set.py validate-modules:doc-default-does-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments. -plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:doc-type-does-not-match-spec # Have to use raw here for backwards compatibility with old module args, but would confuse current users if exposed. 
+plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 plugins/modules/zos_data_set.py validate-modules:undocumented-parameter # Keep aliases to match behavior of old module spec, but some aliases were functionally inaccurate, and detailing in docs would only confuse user. -plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. -plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 -plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_data_set.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_data_set.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_encode.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_encode.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin +plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin +plugins/modules/zos_fetch.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_fetch.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_find.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_find.py import-2.6!skip # Python 2.6 is 
unsupported plugins/modules/zos_job_output.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_output.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_output.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_query.py compile-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_query.py import-2.6!skip # Python 2.6 is unsupported +plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0 +plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. +plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin. 
+plugins/modules/zos_job_submit.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_submit.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_lineinfile.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_lineinfile.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mount.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mount.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mvs_raw.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mvs_raw.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator_action_query.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator_action_query.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_ping.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_ping.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_ping.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_tso_command.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
-plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin
-plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion
-plugins/modules/zos_copy.py validate-modules:doc-default-does-not-match-spec # Argument 'unsafe_writes' in argument_spec defines default as (False) but documentation defines default as (None), https://github.com/ansible/ansible/pull/67243
-plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
-plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin
-plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
-plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-docs/source/modules/zos_apf.rst rstcheck!skip #Inline emphasis start-string without end-string
-tests/dependencyfinder.py pylint:global-at-module-level # Ignore for test helper
-tests/dependencyfinder.py shebang # This is not an ansible module but a test helper so this shebang limitation is not required.
+plugins/modules/zos_tso_command.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_tso_command.py import-2.6!skip # Python 2.6 is unsupported
diff --git a/tests/sanity/ignore-2.13.txt b/tests/sanity/ignore-2.13.txt
new file mode 100644
index 000000000..0c1e1d800
--- /dev/null
+++ b/tests/sanity/ignore-2.13.txt
@@ -0,0 +1,35 @@
+plugins/modules/zos_data_set.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
+plugins/modules/zos_data_set.py validate-modules:doc-default-does-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
+plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_data_set.py validate-modules:doc-type-does-not-match-spec # Have to use raw here for backwards compatibility with old module args, but would confuse current users if exposed.
+plugins/modules/zos_data_set.py validate-modules:undocumented-parameter # Keep aliases to match behavior of old module spec, but some aliases were functionally inaccurate, and detailing in docs would only confuse user.
+plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
+plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
+plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_job_output.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_ping.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_tso_command.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
+plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin
+plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion
+plugins/modules/zos_copy.py validate-modules:doc-default-does-not-match-spec # Argument 'unsafe_writes' in argument_spec defines default as (False) but documentation defines default as (None), https://github.com/ansible/ansible/pull/67243
+plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
+plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin
+plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
+plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+docs/source/modules/zos_apf.rst rstcheck!skip #Inline emphasis start-string without end-string
+tests/dependencyfinder.py pylint:global-at-module-level # Ignore for test helper
+tests/dependencyfinder.py shebang # This is not an ansible module but a test helper so this shebang limitation is not required.
diff --git a/tests/sanity/ignore-2.9.txt b/tests/sanity/ignore-2.9.txt
index eeb68a06d..7ae37ee97 100644
--- a/tests/sanity/ignore-2.9.txt
+++ b/tests/sanity/ignore-2.9.txt
@@ -1,35 +1,67 @@
+plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_apf.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_apf.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
+plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_backup_restore.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_backup_restore.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_blockinfile.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_blockinfile.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion
+plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
+plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin
+plugins/modules/zos_copy.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_copy.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_data_set.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
 plugins/modules/zos_data_set.py validate-modules:doc-default-does-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
-plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
 plugins/modules/zos_data_set.py validate-modules:doc-type-does-not-match-spec # Have to use raw here for backwards compatibility with old module args, but would confuse current users if exposed.
+plugins/modules/zos_data_set.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
 plugins/modules/zos_data_set.py validate-modules:undocumented-parameter # Keep aliases to match behavior of old module spec, but some aliases were functionally inaccurate, and detailing in docs would only confuse user.
-plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
-plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
-plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_data_set.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_data_set.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_encode.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_encode.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
+plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin
+plugins/modules/zos_fetch.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_fetch.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_find.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_find.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_job_output.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_job_output.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_output.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_job_query.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_query.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_submit.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_job_submit.py validate-modules:parameter-type-not-in-doc # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
+plugins/modules/zos_job_submit.py validate-modules:undocumented-parameter # The undocumented parameter should be unknown to the user, as it is a temporary file generated by action plugin.
+plugins/modules/zos_job_submit.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_job_submit.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_lineinfile.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_lineinfile.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_mount.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_fetch.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mount.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mount.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_mvs_raw.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_mvs_raw.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator.py import-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_operator_action_query.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_operator_action_query.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_ping.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
+plugins/modules/zos_ping.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_ping.py import-2.6!skip # Python 2.6 is unsupported
 plugins/modules/zos_tso_command.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_operator_action_query.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_operator.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_encode.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_lineinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_copy.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_copy.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
-plugins/modules/zos_copy.py validate-modules:undocumented-parameter # Passing args from action plugin
-plugins/modules/zos_copy.py validate-modules:doc-type-does-not-match-spec # doc type should be str, while spec type is path to allow user path expansion
-plugins/modules/zos_copy.py pylint:ansible-deprecated-no-version # Version found in call to Display.deprecated or AnsibleModule.deprecate
-plugins/modules/zos_mvs_raw.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_blockinfile.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_find.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_fetch.py validate-modules:parameter-type-not-in-doc # Passing args from action plugin
-plugins/modules/zos_fetch.py validate-modules:undocumented-parameter # Passing args from action plugin
-plugins/modules/zos_fetch.py pylint:ansible-deprecated-no-version # Version found in call to Display.deprecated or AnsibleModule.deprecate
-plugins/modules/zos_backup_restore.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-plugins/modules/zos_backup_restore.py validate-modules:doc-choices-do-not-match-spec # We use our own argument parser for advanced conditional and dependent arguments.
-plugins/modules/zos_apf.py validate-modules:missing-gplv3-license # Licensed under Apache 2.0
-docs/source/modules/zos_apf.rst rstcheck!skip #Inline emphasis start-string without end-string
-tests/dependencyfinder.py pylint:global-at-module-level # Ignore for test helper
+plugins/modules/zos_tso_command.py compile-2.6!skip # Python 2.6 is unsupported
+plugins/modules/zos_tso_command.py import-2.6!skip # Python 2.6 is unsupported
 tests/dependencyfinder.py shebang # This is not an ansible module but a test helper so this shebang limitation is not required.