
Updates Whonix-based templates 14 -> 15 #358

Merged (9 commits) on Dec 11, 2019
Conversation

@conorsch (Contributor) commented Dec 5, 2019

Status

Ready for review.

Closes #300. Closes #306.

Description

Updates all Whonix-related VMs to use Whonix 15 as their base, rather than the EOL Whonix 14. Leverages the new upgrade-in-place logic introduced in #352.

Test plan

The upgrade-in-place scenario is the most important to test, since we want to ensure that automatic upgrades will handle the transition smoothly.

  • checkout latest master (7c19c63 at time of writing)
  • make clone && make clean && make all
  • checkout this PR branch
  • make clone && make all
  • make test

Confirm no errors.
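
For reference, a hedged sketch of this sequence at the dom0 prompt (assuming the usual split where the repository checkout lives in a development VM and make clone copies it into dom0):

    # With latest master checked out in the dev VM:
    make clone && make clean && make all   # fresh provisioning from master
    # Check out this PR branch in the dev VM, then:
    make clone && make all                 # upgrade in place on top of it
    make test                              # confirm no errors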

@conorsch added the WIP label Dec 5, 2019
Leveraging the script logic by @emkll to update the Whonix-based SDW
components in-place, including shutting them down if necessary in order
to clean up.
@conorsch force-pushed the 300-update-to-whonix-15 branch from d71948f to d1fad9c on December 6, 2019 00:53
Conor Schaefer added 4 commits December 5, 2019 16:58
All SDW components based on Whonix (Workstation & Gateway) should be
based on the 15 series. These changes achieve that. In order to handle
update-in-place (as opposed to a clean install), we'll have to script
that separately.

Includes updates to the config tests to verify the proper base templates
are used.
The Python concurrent package was only necessary for a prior version of
Salt, when running under Python 2. Both SaltStack and Qubes have updated
the relevant dependencies, so we no longer need to ensure this package
is present.
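
As a quick sanity check (not part of this changeset), one could confirm from dom0 that the standard-library module is available; this is only a hedged sketch, with the template name borrowed from later in this thread:

    # Under Python 3, concurrent.futures ships with the standard library,
    # so no separate backport package is needed in the template.
    qvm-run -p securedrop-workstation-buster \
        "python3 -c 'import concurrent.futures' && echo OK"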
Via the `qvm.anon-whonix` SLS file, we pull in all of the following:

  * qvm.anon-whonix
  * qvm.template-whonix-ws
  * qvm.sys-whonix
  * qvm.whonix-ws-dvm
  * qvm.sys-firewall

The value of this dependency chain is that we can concisely ensure that
all Whonix-related machines, both AppVMs and TemplateVMs, are up to
date.

There's an edge case where already-running VMs, such as `sys-whonix`
(which is autostart=True by default), may not have their "template"
config field updated, since changing that value while the qube is
running is forbidden. Still, let's leverage the upstream config
as much as possible, and we'll continue to handle the
shutdown-to-change-template settings in our own scripts, as necessary.
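
For illustration, the shutdown-to-change-template handling in our own scripts amounts to roughly the following (a minimal sketch only; the real logic lives in the securedrop-handle-upgrade script, and the target template name is assumed here):

    if qvm-check --quiet sys-whonix; then
        BASE_TEMPLATE=$(qvm-prefs sys-whonix template)
        if [[ ! "$BASE_TEMPLATE" =~ "15" ]]; then
            # A running qube cannot have its template changed, so halt it first.
            qvm-shutdown --wait sys-whonix
            qvm-prefs sys-whonix template whonix-gw-15
        fi
    fi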
Most of the config tests to date, reasonably, target the SDW-managed
VMs. As we attempt to ensure up-to-date components across the board, we
must also ensure that the Qubes-provided VM configurations are updated.
These include:

  * sys-usb
  * sys-net
  * sys-firewall
  * sys-whonix

If we check that those VMs match our expected definition of "up-to-date"
templates, then we should have rather solid coverage. It's not as
complete as testing that e.g. "any Fedora-based VM should be based on the
latest Fedora template"; ideally we'll get to that point, but for now,
such a check could report failures in components unrelated to the SDW config.
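
A rough command-line analogue of the check described above (the actual tests live in the project's Python test suite; this is only a sketch):

    # Print the base template for each Qubes-provided service VM; the tests
    # expect these to point at current (fedora-30 / whonix-*-15) templates.
    for vm in sys-usb sys-net sys-firewall sys-whonix; do
        printf '%s -> %s\n' "$vm" "$(qvm-prefs "$vm" template)"
    done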
@conorsch removed the WIP label Dec 6, 2019
@conorsch force-pushed the 300-update-to-whonix-15 branch from d1fad9c to d49f1d9 on December 6, 2019 01:00
@conorsch marked this pull request as ready for review December 6, 2019 01:00
@conorsch (Contributor, Author) commented Dec 6, 2019

During pre-review discussion, we weren't certain whether importing the upstream Qubes Salt config for the Whonix VMs would guarantee 15, rather than 14. Judging by the following, however:

It appears that dom0 updates will indeed move this value from 14 -> 15.

@emkll (Contributor) left a comment

Thanks @conorsch! This worked for me in a very specific scenario, but based on my testing on another Qubes workstation, it may not work in all cases (specifically if the whonix-15 templates were not present/configured before):

  1. As part of this upgrade, we will also want to invoke the sls qvm.sys-whonix. This will ensure sys-whonix is up to date (which is important, since it is the updateVM for any Whonix-based templates); qvm.anon-whonix does not perform this change. There are also some other options we could consider here:
  • Replace the updateVM for all Whonix templates with sd-whonix
  • Eliminate sd-whonix and use sys-whonix instead (reducing the number of VMs on each system by one), but this decision depends on assumptions about whether or not the workstation will be used for other purposes, and requires a larger conversation
  2. I have observed (but cannot reproduce) an error that occurred when running make clean on this branch, after provisioning from this branch (while upgrading) on top of a fresh provisioning from master. I suspect it may have been a flake: sd-whonix could not be destroyed because it kept rebooting automatically as it was being killed. This was because sd-proxy was not killed/destroyed, which is strange, because the API returns the domains in alphabetical order. I don't think we should block merge here, but I'm curious to see if you've experienced this issue (see [1] below).
  3. There are two comments inline; those changes were required to get the securedrop-handle-upgrade script to work properly on this branch.

Unrelated to this PR, make clean will fail if fedora-30-dvm doesn't exist. Setting the default DispVM to '' in make clean would resolve this (or searching for a default DispVM template); opened #360 to track.
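
A hedged sketch of the workaround described above (to be confirmed in #360):

    # Clear the global default DispVM before destroying fedora-30-dvm, so that
    # make clean does not fail when the DispVM template is absent.
    qubes-prefs default_dispvm ''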

[1]:

[user@dom0 securedrop-workstation]$ make clean
Deploying Salt config...
local:
    ----------
    sd-workstation.top:
        ----------
        status:
            unchanged
qubes-prefs default_dispvm fedora-30-dvm
./scripts/destroy-vm --all
qvm-remove: error: Domain is not powered off: 'sd-proxy'
Destroying VM 'sd-proxy'... Traceback (most recent call last):
  File "./scripts/destroy-vm", line 78, in <module>
    destroy_all()
  File "./scripts/destroy-vm", line 68, in destroy_all
    destroy_vm(vm)
  File "./scripts/destroy-vm", line 46, in destroy_vm
    subprocess.check_call(["qvm-remove", "-f", vm.name])
  File "/usr/lib64/python3.5/subprocess.py", line 271, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['qvm-remove', '-f', 'sd-proxy']' returned non-zero exit status 1
Makefile:128: recipe for target 'destroy-all' failed
make: *** [destroy-all] Error 1
[user@dom0 securedrop-workstation]$ qvm-shutdown sd-svs
[user@dom0 securedrop-workstation]$ make clean


if qvm-check --quiet sd-whonix; then
BASE_TEMPLATE=$(qvm-prefs sd-whonix template)
if [[ ! $BASE_TEMPLATE =~ "buster" ]]; then
@emkll (Contributor) commented on the diff:

Since the TemplateVM for sd-whonix should be whonix-gw-{14,15}, I think here we would want to compare BASE_TEMPLATE to '14'; otherwise sd-whonix will shut down on every run.

- label: blue
- tags:
- add:
- sd-workstation
- sd-buster
@emkll (Contributor) commented on the diff:

I think we must require the sls sd-upgrade-templates here, to ensure the template update was successful before cloning whonix-ws-15 to sd-proxy-buster-template.

@conorsch (Contributor, Author) commented Dec 9, 2019

Great comments, @emkll. Will take another pass through and ping for re-review when ready.

As part of this upgrade, we will also want to invoke the sls: qvm.sys-whonix . This will ensure sys-whonix is up-to-date (which is important, since it is the updateVM for any whonix-based templates). qvm.anon-whonix does not perform this change

My understanding is that importing the qvm.anon-whonix SLS does indeed result in qvm.sys-whonix being included as well. See the commit message in 15f5914. If those statements aren't accurate, then yes, we should indeed make an update here. Will monitor for specific tasks in the output when re-testing this branch to confirm the behavior described in that commit.
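
One hedged way to confirm this while re-testing (invocation assumed; adjust to however the Makefile actually applies the Salt states):

    # Re-apply the Whonix-related state from dom0 and look for sys-whonix
    # tasks in the output, to confirm they are pulled in via qvm.anon-whonix.
    sudo qubesctl state.sls qvm.anon-whonix 2>&1 | grep -i sys-whonix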

Addressed review comments by @emkll. Specifically:

  * Check for "15" string in Whonix-based template checks,
    since "buster" doesn't occur in those template names
  * Require the upgrade state before configuring sd-proxy
@conorsch (Contributor, Author) commented Dec 9, 2019

@emkll Pushed up a tiny commit to address the changes you pointed out.

Regarding the sys-whonix base, we are including the upstream Qubes-maintained sys-whonix logic, but it won't be sufficient to force updates to template settings. If we want to strongarm e.g. sys-whonix, then we should include logic in the securedrop-upgrade-templates script. I've not done that here, but I'm willing to add it if you feel now's the best time.

Note that the qvm-shutdown --wait sys-whonix approach will fail if an AppVM is running with netvm set to sys-whonix. We can get around that by using qvm-kill, which will disrupt the networking and allow us to proceed with updates in place.

@emkll (Contributor) commented Dec 9, 2019

Regarding the sys-whonix base, we are including the upstream Qubes-maintained sys-whonix logic, but it won't be sufficient to force updates to template settings. If we want to strongarm e.g. sys-whonix, then we should include logic in the securedrop-upgrade-templates script. I've not done that here, but willing to add if you feel now's the best time.

Based on our approach in #331, I think that addressing this issue in this PR would make the most sense:

  1. It's a fairly small change to both the diff and the test plan
  2. It will require another clone + make all for developers; changing this in one pass will help with longer-term testing and reduce discrepancies across developer environments
  3. We might forget/omit/defer updating this one later on

What do you think?

Shuts down sys-whonix to ensure we can update its template
settings. The Qubes-provided Salt logic only enforces template versions
on first configuration of the sys-whonix VM; if it already exists, it will
hang back on an older version of Whonix. That's unacceptable for us, so
we'll add a reference to the "handle-upgrade" logic to shut it down and
then, via a more aggressive Salt task, enforce the new template setting.
@conorsch (Contributor, Author) commented:

Ready for your input again, @emkll. The 14 -> 15 transition is now also handled for sys-whonix and anon-whonix. If you find any problems with the upgrade-in-place scenario, please report them directly here so we can resolve them!

@kushaldas (Contributor) commented:

Testing this one.

@kushaldas (Contributor) commented:

Actually, on master I got the following error in many Buster-based VMs. IIRC this is due to a missing Python package.

sd-svs:
      ----------
      _error:
          Failed to return clean data
      retcode:
          0
      stderr:
          Traceback (most recent call last):
            File "/var/tmp/.root_62a99a_salt/salt-call", line 15, in <module>
              salt_call()
            File "/var/tmp/.root_62a99a_salt/py2/salt/scripts.py", line 395, in salt_call
              import salt.cli.call
            File "/var/tmp/.root_62a99a_salt/py2/salt/cli/call.py", line 5, in <module>
              import salt.utils.parsers
            File "/var/tmp/.root_62a99a_salt/py2/salt/utils/parsers.py", line 27, in <module>
              import salt.config as config
            File "/var/tmp/.root_62a99a_salt/py2/salt/config/__init__.py", line 98, in <module>
              _DFLT_IPC_WBUFFER = _gather_buffer_space() * .5
            File "/var/tmp/.root_62a99a_salt/py2/salt/config/__init__.py", line 88, in _gather_buffer_space
              import salt.grains.core
            File "/var/tmp/.root_62a99a_salt/py2/salt/grains/core.py", line 44, in <module>
              import salt.utils.dns
            File "/var/tmp/.root_62a99a_salt/py2/salt/utils/dns.py", line 32, in <module>
              import salt.modules.cmdmod
            File "/var/tmp/.root_62a99a_salt/py2/salt/modules/cmdmod.py", line 34, in <module>
              import salt.utils.templates
            File "/var/tmp/.root_62a99a_salt/py2/salt/utils/templates.py", line 32, in <module>
              import salt.utils.http
            File "/var/tmp/.root_62a99a_salt/py2/salt/utils/http.py", line 41, in <module>
              import salt.loader
            File "/var/tmp/.root_62a99a_salt/py2/salt/loader.py", line 28, in <module>
              import salt.utils.event
            File "/var/tmp/.root_62a99a_salt/py2/salt/utils/event.py", line 85, in <module>
              import salt.transport.ipc
            File "/var/tmp/.root_62a99a_salt/py2/salt/transport/ipc.py", line 21, in <module>
              from tornado.locks import Semaphore
            File "/var/tmp/.root_62a99a_salt/py2/tornado/locks.py", line 18, in <module>
              from concurrent.futures import CancelledError
          ImportError: No module named concurrent.futures
      stdout:

@emkll (Contributor) commented Dec 10, 2019

@kushaldas if you run make clean and make all on this branch, do you get the same error?

It seems like the error you report is related to #352 (comment); can you confirm you are pulling the latest Buster template and that your dom0 Salt packages are up to date?

"Failed to return clean data" is also a transient error I observe when the machine has no memory left to boot VMs. Is this possible?

@kushaldas (Contributor) commented:

My Qubes is not letting me remove the Buster template (so that I can reinstall it): it says "the following VMs are dependent on it:" followed by nothing, just the next prompt in bash.

[screenshot: let_me_remove_buster]

Also, on this branch I got:

----------
          ID: set-fedora-default-template-version
    Function: cmd.run
        Name: qubes-prefs default_template fedora-30
      Result: False
     Comment: Command "qubes-prefs default_template fedora-30" run
     Started: 19:49:03.367013
    Duration: 103.539 ms
     Changes:   
              ----------
              pid:
                  27652
              retcode:
                  1
              stderr:
                  qubes-prefs: error: Failed to connect to qubesd service: [Errno 2] No such file or directory
              stdout:
----------
          ID: topd-always-passes
    Function: test.succeed_without_changes
        Name: foo
      Result: True
     Comment: Success!
     Started: 19:49:03.474935
    Duration: 0.695 ms
     Changes:   
----------
          ID: sd-svs-template
    Function: qvm.vm
        Name: sd-svs-buster-template
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.7/site-packages/salt/state.py", line 1837, in call
                  **cdata['kwargs'])
                File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1794, in wrapper
                  return f(*args, **kwargs)
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 434, in vm
                  status = globals()[action](name, *_varargs, **keywords)
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 281, in clone
                  exists_status = Status(**_state_action('qvm.check', name, *['exists']))
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 144, in _state_action
                  status = __salt__[_action](*varargs, **kwargs)
                File "/var/cache/salt/minion/extmods/modules/ext_module_qvm.py", line 228, in check
                  args = qvm.parse_args(vmname, *varargs, **kwargs)
                File "/var/cache/salt/minion/extmods/modules/module_utils.py", line 149, in parse_args
                  self.args = self.argparser.parse_salt_args(*varargs, **kwargs)
                File "/var/cache/salt/minion/extmods/modules/module_utils.py", line 328, in parse_salt_args
                  args = self.parse_args(args=arg_info['__argv'])
                File "/var/cache/salt/minion/extmods/modules/module_utils.py", line 312, in parse_args
                  namespace=namespace
                File "/usr/lib64/python2.7/argparse.py", line 1701, in parse_args
                  args, argv = self.parse_known_args(args, namespace)
                File "/var/cache/salt/minion/extmods/modules/module_utils.py", line 323, in parse_known_args
                  namespace=namespace
                File "/usr/lib64/python2.7/argparse.py", line 1733, in parse_known_args
                  namespace, args = self._parse_known_args(args, namespace)
                File "/usr/lib64/python2.7/argparse.py", line 1942, in _parse_known_args
                  stop_index = consume_positionals(start_index)
                File "/usr/lib64/python2.7/argparse.py", line 1898, in consume_positionals
                  take_action(action, args)
                File "/usr/lib64/python2.7/argparse.py", line 1807, in take_action
                  action(self, namespace, argument_values, option_string)
                File "/var/cache/salt/minion/extmods/modules/ext_module_qvm.py", line 109, in __call__
                  namespace.vm = values
                File "/var/cache/salt/minion/extmods/modules/ext_module_qvm.py", line 78, in vm
                  self._vm = app.domains[value]
                File "/usr/lib/python2.7/site-packages/qubesadmin/app.py", line 90, in __getitem__
                  if not self.app.blind_mode and item not in self:
                File "/usr/lib/python2.7/site-packages/qubesadmin/app.py", line 113, in __contains__
                  self.refresh_cache()
                File "/usr/lib/python2.7/site-packages/qubesadmin/app.py", line 62, in refresh_cache
                  'admin.vm.List'
                File "/usr/lib/python2.7/site-packages/qubesadmin/app.py", line 563, in qubesd_call
                  'Failed to connect to qubesd service: %s', str(e))
              QubesDaemonCommunicationError: Failed to connect to qubesd service: [Errno 2] No such file or directory
     Started: 19:49:03.476505
    Duration: 3.878 ms
     Changes:   
----------
          ID: sd-svs
    Function: qvm.vm
      Result: False
     Comment: One or more requisite failed: sd-svs.sd-svs-template
     Changes:   
----------
          ID: sd-svs-template-sync-appmenus
    Function: cmd.run
        Name: qvm-start --skip-if-running sd-svs-buster-template && qvm-sync-appmenus sd-svs-buster-template

      Result: False
     Comment: One or more requisite failed: sd-svs.sd-svs-template
     Changes:   

Summary for local
-------------
Succeeded: 50 (changed=30)
Failed:    12

@emkll (Contributor) left a comment

Thanks for the changes, @conorsch. I have tested in two separate scenarios:

  • a clean install, make all and make test complete successfully, all tests run/pass
  • upgrade install (with sys-whonix based on whonix-14-gw): make all and make test complete successfully, all tests run/pass

Diff looks good to me (see minor comment inline)

if qvm-check --quiet sys-whonix; then
BASE_TEMPLATE=$(qvm-prefs sys-whonix template)
if [[ ! $BASE_TEMPLATE =~ "15" ]]; then
if qvm-check --quiet --running sys-whonix; then
@emkll (Contributor) commented on the diff:

nit: weird indentation for sys-whonix here; it makes it a bit hard to read

@conorsch (Contributor, Author) replied:

@emkll Ack, this is tabs-versus-spaces. Most of the bash scripts in the repo use 4 spaces for indentation, but this one and only this one uses tabs instead; I wasn't careful about forcing the use of tabs when editing the script. Agreed, we should indeed clean it up to avoid large amounts of frustration. 😃

There was a mix of tabs and spaces as a result of collaboration. Since
most other scripts in this repo use spaces, rather than tabs, let's
stick with that to make the decision easy.
@sssoleileraaa (Contributor) commented:

I ran through the test plan and hit an error after checking out the PR branch and running make clone:

tar: securedrop-workstation: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors

Hmmmm... I think I'm forgetting something, but I'm following the test plan :/

@sssoleileraaa (Contributor) commented:

Ah, looks like setting the environment variables (again) fixes it.
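
For anyone else hitting this: the clone target reads the development VM and source directory from environment variables. A hedged example follows (variable names and values assumed; check the project README for the authoritative ones):

    # Assumed variable names; set these in dom0 before running make clone.
    export SECUREDROP_DEV_VM=sd-dev
    export SECUREDROP_DEV_DIR=/home/user/securedrop-workstation
    make clone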

@sssoleileraaa (Contributor) commented:

Test plan

The upgrade-in-place scenario is the most important to test, since we want to ensure that automatic upgrades will handle the transition smoothly.

  • checkout latest master (7c19c63 at time of writing)
  • make clone && make clean && make all
  • checkout this PR branch
  • make clone && make all
  • make test

Confirm no errors.

On the second make all step I see the following error:

----------
          ID: sys-whonix-template-config
    Function: qvm.vm
        Name: sys-whonix
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.7/site-packages/salt/state.py", line 1837, in call
                  **cdata['kwargs'])
                File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1794, in wrapper
                  return f(*args, **kwargs)
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 434, in vm
                  status = globals()[action](name, *_varargs, **keywords)
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 300, in prefs
                  return _state_action('qvm.prefs', name, *varargs, **kwargs)
                File "/var/cache/salt/minion/extmods/states/ext_state_qvm.py", line 144, in _state_action
                  status = __salt__[_action](*varargs, **kwargs)
                File "/var/cache/salt/minion/extmods/modules/ext_module_qvm.py", line 933, in prefs
                  setattr(args.vm, dest, value_new)
                File "/usr/lib/python2.7/site-packages/qubesadmin/base.py", line 281, in __setattr__
                  str(value).encode('utf-8'))
                File "/usr/lib/python2.7/site-packages/qubesadmin/base.py", line 68, in qubesd_call
                  payload_stream)
                File "/usr/lib/python2.7/site-packages/qubesadmin/app.py", line 577, in qubesd_call
                  return self._parse_qubesd_response(return_data)
                File "/usr/lib/python2.7/site-packages/qubesadmin/base.py", line 102, in _parse_qubesd_response
                  raise exc_class(format_string, *args)
              QubesVMNotHaltedError: Cannot change template while qube is running
     Started: 13:59:17.688781
    Duration: 29.324 ms
     Changes:   
----------
          ID: whonix-ws-15-dvm
    Function: qvm.vm
      Result: False
     Comment: One or more requisite failed: sd-sys-whonix-vms.sys-whonix-template-config
     Changes:   
----------
          ID: qvm-appmenus --update whonix-ws-15-dvm
    Function: cmd.run
      Result: False
     Comment: One or more requisite failed: qvm.whonix-ws-dvm.whonix-ws-15-dvm
     Changes:   
----------

And these are the vms that are running:

qvm-ls --running
NAME           STATE    CLASS         LABEL  TEMPLATE   NETVM
dom0           Running  AdminVM       black  -          -
sd-dev         Running  StandaloneVM  black  -          sys-firewall
sd-dev-buster  Running  StandaloneVM  green  -          sys-firewall
sys-firewall   Running  AppVM         green  fedora-30  sys-net
sys-net        Running  AppVM         red    fedora-30  -
sys-usb        Running  AppVM         red    fedora-30  -

@emkll (Contributor) commented Dec 10, 2019

Thanks for the report @creviera, looks like there might be a race condition (a1768ec#diff-f191eb2f59ead8c3cfadd2026f19e62cR72) if sd-gpg is not powered on, and perhaps the kill action doesn't complete in time.

I suspect qvm-kill was used here instead of qvm-shutdown --wait in case sys-whonix was a netvm for some other VM. Does adding sleep 5 on line 73 resolve this for you?
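
Roughly the change being suggested, as a sketch (the actual edit lands in the securedrop-handle-upgrade script):

    if qvm-check --quiet --running sys-whonix; then
        qvm-kill sys-whonix
        # qvm-kill returns before the qube is fully halted; wait briefly so the
        # subsequent template change does not hit QubesVMNotHaltedError.
        sleep 5
    fi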

@sssoleileraaa (Contributor) commented:

Thanks @emkll, I put in the sleep and reran make all, and it completed without error this time. make test also completed without error. 🐧

@sssoleileraaa self-requested a review December 10, 2019 19:52
@sssoleileraaa (Contributor) left a comment

@kushaldas (Contributor) commented:

Finally make all succeeded, but not make test.

======================================================================
ERROR: test_do_not_open_here (test_proxy_vm.SD_Proxy_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_proxy_vm.py", line 15, in test_do_not_open_here
    "sd-proxy/do-not-open-here")
  File "/home/kdas/securedrop-workstation/tests/base.py", line 83, in assertFilesMatch
    remote_content = self._get_file_contents(remote_path)
  File "/home/kdas/securedrop-workstation/tests/base.py", line 69, in _get_file_contents
    contents = subprocess.check_output(cmd).decode("utf-8")
  File "/usr/lib64/python3.5/subprocess.py", line 316, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['qvm-run', '-p', 'sd-proxy', '/bin/cat /usr/bin/do-not-open-here']' returned non-zero exit status 1

======================================================================
ERROR: test_sd_proxy_package_installed (test_proxy_vm.SD_Proxy_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_proxy_vm.py", line 18, in test_sd_proxy_package_installed
    self.assertTrue(self._package_is_installed("securedrop-proxy"))
  File "/home/kdas/securedrop-workstation/tests/base.py", line 79, in _package_is_installed
    "dpkg --verify {}".format(pkg)])
  File "/usr/lib64/python3.5/subprocess.py", line 271, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['qvm-run', '-a', '-q', 'sd-proxy', 'dpkg --verify securedrop-proxy']' returned non-zero exit status 1

======================================================================
ERROR: test_sd_proxy_yaml_config (test_proxy_vm.SD_Proxy_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_proxy_vm.py", line 33, in test_sd_proxy_yaml_config
    self.assertFileHasLine("/etc/sd-proxy.yaml", line)
  File "/home/kdas/securedrop-workstation/tests/base.py", line 93, in assertFileHasLine
    remote_content = self._get_file_contents(remote_path)
  File "/home/kdas/securedrop-workstation/tests/base.py", line 69, in _get_file_contents
    contents = subprocess.check_output(cmd).decode("utf-8")
  File "/usr/lib64/python3.5/subprocess.py", line 316, in check_output
    **kwargs).stdout
  File "/usr/lib64/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['qvm-run', '-p', 'sd-proxy', '/bin/cat /etc/sd-proxy.yaml']' returned non-zero exit status 1

======================================================================
ERROR: test_all_fedora_vms_uptodate (test_vms_platform.SD_VM_Platform_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 132, in test_all_fedora_vms_uptodate
    self._ensure_packages_up_to_date(vm, fedora=True)
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 84, in _ensure_packages_up_to_date
    stdout, stderr = vm.run(cmd)
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 311, in run
    raise e
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 308, in run
    input=self.prepare_input_for_vmshell(command, input), **kwargs)
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 287, in run_service_for_stdio
    raise exc
subprocess.CalledProcessError: Command 'sudo dnf check-update' returned non-zero exit status 100

======================================================================
ERROR: test_all_sd_vm_apt_sources (test_vms_platform.SD_VM_Platform_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 195, in test_all_sd_vm_apt_sources
    self._validate_apt_sources(vm)
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 59, in _validate_apt_sources
    stdout, stderr = vm.run(cmd)
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 311, in run
    raise e
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 308, in run
    input=self.prepare_input_for_vmshell(command, input), **kwargs)
  File "/usr/lib/python3.5/site-packages/qubesadmin/vm/__init__.py", line 287, in run_service_for_stdio
    raise exc
subprocess.CalledProcessError: Command 'cat /etc/apt/sources.list.d/securedrop_workstation.list' returned non-zero exit status 1

======================================================================
FAIL: test_Policies (test_qubes_rpc.SD_Qubes_Rpc_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_qubes_rpc.py", line 25, in test_Policies
    self.assertFalse(fail)
AssertionError: True is not false

======================================================================
FAIL: test_accept_sd_xfer_extracted_file (test_sd_whonix.SD_Whonix_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_sd_whonix.py", line 21, in test_accept_sd_xfer_extracted_file
    self.assertFileHasLine("/usr/local/etc/torrc.d/50_user.conf", line)
  File "/home/kdas/securedrop-workstation/tests/base.py", line 100, in assertFileHasLine
    raise AssertionError(msg)
AssertionError: File /usr/local/etc/torrc.d/50_user.conf does not contain expected line HidServAuth xslb3f2ntwen5kux.onion Q7h5tlkpvd5Z9VsXNW6yiR

======================================================================
FAIL: test_all_sd_vms_uptodate (test_vms_platform.SD_VM_Platform_Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 120, in test_all_sd_vms_uptodate
    self._ensure_packages_up_to_date(vm)
  File "/home/kdas/securedrop-workstation/tests/test_vms_platform.py", line 80, in _ensure_packages_up_to_date
    self.assertEqual(results, "Listing...", fail_msg)
AssertionError: 'Listing...\nanon-apps-config/unknown 3:3.1-1 all [u[13760 chars]0u1]' != 'Listing...'
Diff is 14027 characters long. Set self.maxDiff to None to see it. : Unapplied updates for VM 'sd-whonix'

----------------------------------------------------------------------
Ran 43 tests in 265.938s

FAILED (failures=3, errors=5)
Makefile:86: recipe for target 'test' failed
make: *** [test] Error 1

@kushaldas (Contributor) commented:

My salt dump from dom0:

rpm -qa | grep salt

qubes-mgmt-salt-base-overrides-4.0.2-1.fc25.noarch
qubes-mgmt-salt-dom0-4.0.19-1.fc25.noarch
salt-2017.7.1-1.fc25.noarch
qubes-mgmt-salt-dom0-virtual-machines-4.0.16-1.fc25.noarch
qubes-mgmt-salt-admin-tools-4.0.19-1.fc25.noarch
salt-minion-2017.7.1-1.fc25.noarch
qubes-mgmt-salt-dom0-update-4.0.8-1.fc25.noarch
qubes-mgmt-salt-base-4.0.3-1.fc25.noarch
qubes-mgmt-salt-dom0-qvm-4.0.10-1.fc25.noarch
qubes-mgmt-salt-base-topd-4.0.1-1.fc25.noarch
qubes-mgmt-salt-base-overrides-libs-4.0.2-1.fc25.noarch
qubes-mgmt-salt-config-4.0.19-1.fc25.noarch
qubes-mgmt-salt-base-config-4.0.1-1.fc25.noarch
qubes-mgmt-salt-4.0.19-1.fc25.noarch

The qvm-kill task returns rather quickly, before the target machine is
completely halted. Proceeding on to other tasks when the killed VM is
still coming down will lead to subsequent errors, in this case:

   QubesVMNotHaltedError: Cannot change template while qube is running

Use "sleep" to wait a bit for the machine to come down.
@conorsch (Contributor, Author) commented:

Added the requested "sleep" task based on @creviera's testing. @emkll please take another look!

@emkll (Contributor) commented Dec 11, 2019

It seems like the Salt packages we are running are the same.

In dom0 (sorted, because the output order was different from yours):

~$rpm -qa | grep salt | sort
qubes-mgmt-salt-4.0.19-1.fc25.noarch
qubes-mgmt-salt-admin-tools-4.0.19-1.fc25.noarch
qubes-mgmt-salt-base-4.0.3-1.fc25.noarch
qubes-mgmt-salt-base-config-4.0.1-1.fc25.noarch
qubes-mgmt-salt-base-overrides-4.0.2-1.fc25.noarch
qubes-mgmt-salt-base-overrides-libs-4.0.2-1.fc25.noarch
qubes-mgmt-salt-base-topd-4.0.1-1.fc25.noarch
qubes-mgmt-salt-config-4.0.19-1.fc25.noarch
qubes-mgmt-salt-dom0-4.0.19-1.fc25.noarch
qubes-mgmt-salt-dom0-qvm-4.0.10-1.fc25.noarch
qubes-mgmt-salt-dom0-update-4.0.8-1.fc25.noarch
qubes-mgmt-salt-dom0-virtual-machines-4.0.16-1.fc25.noarch
salt-2017.7.1-1.fc25.noarch
salt-minion-2017.7.1-1.fc25.noarch

in securedrop-workstation-buster:

user@securedrop-workstation-buster:~$ sudo apt list --installed | grep salt

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

qubes-mgmt-salt-vm-connector/unknown,now 4.0.19-1+deb10u1 all [installed,automatic]
salt-common/stable,now 2018.3.4+dfsg1-6 all [installed,automatic]
salt-ssh/stable,now 2018.3.4+dfsg1-6 all [installed,automatic]
user@securedrop-workstation-buster:~$ sudo apt list --installed | grep futures

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

@emkll (Contributor) left a comment

Tested the upgrade scenario locally once again and it works as expected, with all tests passing. Visual diff LGTM. Thanks @conorsch for the changes and @creviera and @kushaldas for reviewing.

I am going to merge this PR since it appears the issue is limited to @kushaldas's environment; I've opened a follow-up ticket to track, investigate, and, if required, address these: #366

@sssoleileraaa self-requested a review December 11, 2019 19:35
@sssoleileraaa (Contributor) left a comment

lgtm


Successfully merging this pull request may close these issues.

  • Switch templates and packages to Debian Buster
  • Whonix VMs approaching EOL
4 participants