
modulesync 7.3.0 #192

Triggered via pull request February 20, 2024 12:41
@zilchms
synchronize #520
modulesync
Status Cancelled
Total duration 10m 26s
ci.yml

on: pull_request
Puppet / Static validations (17s)
Matrix: Puppet / acceptance
Matrix: Puppet / unit
Puppet / Test suite (0s)

Annotations

22 errors and 3 warnings
Puppet / 7 (Ruby 2.7): spec/classes/corosync_spec.rb#L687
corosync on ubuntu-18.04-x86_64 has the correct pcs version
Failure/Error: is_expected.to contain_class('corosync').with('pcs_version' => corosync_stack(os_facts)[:pcs_version])
expected that the catalogue would contain Class[corosync] with pcs_version set to "0.10.0" but it is set to "0.9.0"
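For readers unfamiliar with the spec style quoted above: this is rspec-puppet, and the expectation compares a class parameter in the compiled catalogue against a value returned by the repository's corosync_stack spec helper. The sketch below only restates that shape; the helper's internals and the surrounding describe block are assumptions, not the module's actual spec code.

    # Minimal sketch of the failing expectation (rspec-puppet); corosync_stack is a
    # spec helper from this repo whose internals are assumed here, not shown.
    describe 'corosync' do
      let(:facts) { os_facts }   # os_facts as provided by rspec-puppet-facts

      it 'has the correct pcs version' do
        # The helper is presumed to return a hash such as { pcs_version: '0.10.0' }.
        is_expected.to contain_class('corosync').with(
          'pcs_version' => corosync_stack(os_facts)[:pcs_version]
        )
      end
    end

The mismatch (expected "0.10.0", catalogue has "0.9.0") means the spec helper and the class disagree about which pcs version Ubuntu 18.04 gets; the log alone does not say which side is correct.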
Puppet / 7 (Ruby 2.7): spec/classes/corosync_spec.rb#L812
corosync on ubuntu-18.04-x86_64 when managing pcsd authorization with a password authorizes all nodes
Failure/Error: is_expected.to contain_exec('authorize_members').with(command: "pcs #{auth_command} node1.test.org node2.test.org node3.test.org -u hacluster -p some-secret-sauce", path: '/sbin:/bin:/usr/sbin:/usr/bin', require: ['Service[pcsd]', 'User[hacluster]'])
expected that the catalogue would contain Exec[authorize_members] with command set to "pcs host auth node1.test.org node2.test.org node3.test.org -u hacluster -p some-secret-sauce" but it is set to "pcs cluster auth node1.test.org node2.test.org node3.test.org -u hacluster -p some-secret-sauce"
Puppet / 7 (Ruby 2.7): spec/classes/corosync_spec.rb#L844
corosync on ubuntu-18.04-x86_64 when managing pcsd authorization using an ip based node list matches ip and auths nodes by member names
Failure/Error: is_expected.to contain_exec('authorize_members').with(command: "pcs #{auth_command} 192.168.0.10 192.168.0.12 192.168.0.13 -u hacluster -p some-secret-sauce", path: '/sbin:/bin:/usr/sbin:/usr/bin', require: ['Service[pcsd]', 'User[hacluster]'])
expected that the catalogue would contain Exec[authorize_members] with command set to "pcs host auth 192.168.0.10 192.168.0.12 192.168.0.13 -u hacluster -p some-secret-sauce" but it is set to "pcs cluster auth 192.168.0.10 192.168.0.12 192.168.0.13 -u hacluster -p some-secret-sauce"
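Both authorize_members failures are the same version split: pcs 0.10 replaced `pcs cluster auth` with `pcs host auth`, and the specs interpolate an auth_command chosen per pcs version while the catalogue still renders the 0.9-style command. A rough sketch of such a selection, with assumed variable names (pcs_version, node_names, password), might look like:

    # Sketch only; the variable names are illustrative assumptions, not module code.
    auth_command =
      if Gem::Version.new(pcs_version) >= Gem::Version.new('0.10.0')
        'host auth'     # pcs >= 0.10 authenticates nodes with `pcs host auth`
      else
        'cluster auth'  # pcs 0.9.x used `pcs cluster auth`
      end
    command = "pcs #{auth_command} #{node_names.join(' ')} -u hacluster -p #{password}"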
Puppet / 7 (Ruby 2.7): spec/classes/corosync_spec.rb#L1020
corosync on ubuntu-18.04-x86_64 when quorum device is configured with all parameters configures a temporary cluster if corosync.conf is missing
Failure/Error: is_expected.to contain_exec('pcs_cluster_temporary').with(command: "pcs cluster setup --force #{cluster_name_arg} cluster_test node1.test.org node2.test.org node3.test.org", path: '/sbin:/bin:/usr/sbin:/usr/bin', onlyif: 'test ! -f /etc/corosync/corosync.conf', require: 'Exec[authorize_members]')
expected that the catalogue would contain Exec[pcs_cluster_temporary] with command set to "pcs cluster setup --force cluster_test node1.test.org node2.test.org node3.test.org" but it is set to "pcs cluster setup --force --name cluster_test node1.test.org node2.test.org node3.test.org"
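The pcs_cluster_temporary failure follows the same pattern for cluster creation: pcs 0.9 takes the cluster name via --name, while pcs 0.10 takes it as the first positional argument, which is presumably what the cluster_name_arg interpolation in the spec toggles. A hedged sketch with assumed names (pcs_version, cluster_name, members):

    # Sketch only; variable names are illustrative assumptions, not module code.
    cluster_name_arg =
      Gem::Version.new(pcs_version) >= Gem::Version.new('0.10.0') ? '' : '--name'
    setup_command =
      "pcs cluster setup --force #{cluster_name_arg} #{cluster_name} #{members.join(' ')}".squeeze(' ')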
Puppet / 7 (Ruby 2.7): spec/classes/corosync_spec.rb#L1029
corosync on ubuntu-18.04-x86_64 when quorum device is configured with all parameters authorizes and adds the quorum device
Failure/Error: is_expected.to contain_exec('authorize_qdevice').with(command: "pcs #{auth_command} quorum1.test.org -u hacluster -p quorum-secret-password", path: '/sbin:/bin:/usr/sbin:/usr/bin', onlyif: 'test 0 -ne $(grep quorum1.test.org /var/lib/pcsd/tokens >/dev/null 2>&1; echo $?)', require: ['Package[corosync-qdevice]', 'Exec[authorize_members]', 'Exec[pcs_cluster_temporary]'])
expected that the catalogue would contain Exec[authorize_qdevice] with command set to "pcs host auth quorum1.test.org -u hacluster -p quorum-secret-password" but it is set to "pcs cluster auth quorum1.test.org -u hacluster -p quorum-secret-password"
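The authorize_qdevice expectation is the same `pcs host auth` vs `pcs cluster auth` mismatch, guarded by an onlyif that only authorizes when the quorum host has no entry in the pcsd token file yet. For reference, the expectation written out as an rspec-puppet example (all values are taken from the log above; only the surrounding it block is boilerplate):

    it 'authorizes and adds the quorum device' do
      is_expected.to contain_exec('authorize_qdevice').with(
        command: "pcs #{auth_command} quorum1.test.org -u hacluster -p quorum-secret-password",
        path: '/sbin:/bin:/usr/sbin:/usr/bin',
        # Only run when quorum1.test.org is not yet present in /var/lib/pcsd/tokens.
        onlyif: 'test 0 -ne $(grep quorum1.test.org /var/lib/pcsd/tokens >/dev/null 2>&1; echo $?)',
        require: ['Package[corosync-qdevice]', 'Exec[authorize_members]', 'Exec[pcs_cluster_temporary]']
      )
    end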
Puppet / 7 (Ruby 2.7)
Process completed with exit code 1.
Puppet / Puppet 7 - Ubuntu 18.04
Canceling since a higher priority waiting request for '520/merge' exists
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_colocation_spec.rb#L73
corosync creates the service resource
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{nginx_service.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_colocation_spec.rb#L88
corosync creates the vip resource
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{nginx_vip.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_commit_spec.rb#L89
corosync creates the service resource in the cib
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{apache2_service.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_commit_spec.rb#L104
corosync creates the vip resource in the cib
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{apache2_vip.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_commit_spec.rb#L141
corosync creates the vip resource in the shadow cib
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{apache2_vip.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status -f /opt/puppetlabs/puppet/cache/shadow.puppet
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
Puppet / Puppet 7 - Ubuntu 18.04: spec/acceptance/cs_commit_spec.rb#L156
corosync creates the service resource in the shadow cib
Failure/Error: shell(command) do |r| expect(r.stdout).to match(%r{apache2_service.*IPaddr2}) end
Beaker::Host::CommandFailure: Host 'ubuntu1804-64-puppet7.example.com' exited with 1 running: pcs resource status -f /opt/puppetlabs/puppet/cache/shadow.puppet
Last 10 lines of output were: Delete the VirtualIP resource. Notes: Starting resources on a cluster is (almost) always done by pacemaker and not directly from pcs. If your resource isn't starting, it's usually due to either a misconfiguration of the resource (which you debug in the system log), or constraints preventing the resource from starting or the resource being disabled. You can use 'pcs resource debug-start' to test resource configuration, but it should *not* normally be used to start resources in a cluster.
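All five Ubuntu 18.04 acceptance failures above share the same shape: a beaker-rspec shell call wrapping `pcs resource status` and a regex match against its stdout. Note that none of them reach the regex; shell raises Beaker::Host::CommandFailure because `pcs resource status` itself exits non-zero on the host. A minimal sketch of the pattern, built from one of the failing examples (the surrounding describe block is an assumption):

    # Sketch of the acceptance-test pattern (beaker-rspec); command and regex are
    # taken from the log above.
    describe 'corosync' do
      it 'creates the vip resource' do
        shell('pcs resource status') do |r|
          expect(r.stdout).to match(%r{nginx_vip.*IPaddr2})
        end
      end
    end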
Puppet / Puppet 7 - Ubuntu 18.04
The operation was canceled.
Puppet / Puppet 7 - CentOS 7
Canceling since a higher priority waiting request for '520/merge' exists
Puppet / Puppet 7 - CentOS 7
The operation was canceled.
Puppet / Puppet 7 - Ubuntu 20.04
Canceling since a higher priority waiting request for '520/merge' exists
Puppet / Puppet 7 - Ubuntu 20.04
The operation was canceled.
Puppet / Puppet 7 - Debian 11
Canceling since a higher priority waiting request for '520/merge' exists
Puppet / Puppet 7 - Debian 11
The operation was canceled.
Puppet / Puppet 7 - Debian 10
Canceling since a higher priority waiting request for '520/merge' exists
Puppet / Puppet 7 - Debian 10
The operation was canceled.
Puppet / 7 (Ruby 2.7): spec/unit/puppet/provider/cs_primitive_crm_spec.rb#L66
Puppet::Type::Cs_primitive::ProviderCrm when getting instances each instance has a primitive_class parameter corresponding to the <primitive>'s class attribute
Failure/Error: expect(instance.primitive_class).to eq('ocf')
NoMethodError: undefined method `primitive_class' for (provider=crm):Puppet::Type::Cs_primitive::ProviderCrm
Puppet / 7 (Ruby 2.7): spec/unit/puppet/provider/cs_primitive_crm_spec.rb#L71
Puppet::Type::Cs_primitive::ProviderCrm when getting instances each instance has a primitive_type parameter corresponding to the <primitive>'s type attribute
Failure/Error: expect(instance.primitive_type).to eq('Xen')
NoMethodError: undefined method `primitive_type' for (provider=crm):Puppet::Type::Cs_primitive::ProviderCrm
Puppet / 7 (Ruby 2.7): spec/unit/puppet/provider/cs_primitive_crm_spec.rb#L76
Puppet::Type::Cs_primitive::ProviderCrm when getting instances each instance has a provided_by parameter corresponding to the <primitive>'s provider attribute
Failure/Error: expect(instance.provided_by).to eq('heartbeat')
NoMethodError: undefined method `provided_by' for (provider=crm):Puppet::Type::Cs_primitive::ProviderCrm
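The three ProviderCrm failures are all NoMethodError on instance attributes: the objects returned by the provider's self.instances do not expose primitive_class, primitive_type or provided_by as readers. One common way providers get those readers is sketched below with illustrative values; whether the crm provider should rely on mk_resource_methods or define explicit accessors is an assumption, not something the log states.

    # Sketch only: mk_resource_methods generates getters/setters backed by
    # @property_hash for every property/parameter of the cs_primitive type.
    Puppet::Type.type(:cs_primitive).provide(:crm) do
      mk_resource_methods

      def self.instances
        # Hash keys must match type attributes for the generated readers to work;
        # the values here are illustrative, mirroring the spec's expectations.
        [new(name: 'example_vm',
             primitive_class: 'ocf',
             provided_by: 'heartbeat',
             primitive_type: 'Xen')]
      end
    end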