Address review comments
Hat tip to @emkll. Specifically:

  * Don't process binary files with grep in dom0
  * Makes destroy-vm subprocess cmd more readable
  * Adds comments to use of "update.qubes-vm" SLS
  * Removes extraneous newlines from RPC policy cleanup
Conor Schaefer committed Dec 2, 2019
1 parent 1fede0e commit 22f4b1d
Showing 5 changed files with 141 additions and 4 deletions.
104 changes: 104 additions & 0 deletions PR.md
@@ -0,0 +1,104 @@




### Overview
We've discussed a few options for how to handle updates:

1. Install them regularly via cron (daily or weekly).
2. Rely on the Qubes Updater GUI and user action to apply updates.
   1. Force `updates-available=1` on SDW VMs, so they're always in the list.
   2. Configure qubes.UpdatesProxy for SDW VMs, so they can poll for updates.

Note that for the present context, we're mostly concerned about packages *inside* the AppVMs,
meaning we'd update the corresponding TemplateVMs, and the new versions would be present
on the next boot of the AppVM (or of the physical workstation). For dom0 packages, particularly the
Salt logic managing VM configuration, such as enforcing net-less config and RPC policy whitelisting,
we'll identify a separate solution.


### Option 1: Install updates regularly via cron
This is the approach we first started with, in #172. At that time, the Qubes Updater GUI
tool did not exist, so it wasn't an option for us. Since #172, we haven't updated the logic at all,
so it's starting to show its age. We should consider sprucing up the logic, e.g. by resolving #339.

Since most team members are familiar with this logic, and the bulk of research has gone to the alternatives,
I'll stop here.

### Option 2i: Force updates-available=1 on SDW VMs
If we manually set `qvm-features sd-svs-template updates-available 1`, then `sd-svs-template`
will *always* show in the Qubes Updater GUI. The result is that even after applying updates as recommended,
the "Updates available" notification persists, listing the same VM that was just updated. Hardly ideal, and downright
dishonest to end users. It _does_ provide a mechanism for users to apply updates to SDW VMs,
but at the cost of clarity about whether updates are actually available and whether they were
successfully applied.
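
For illustration, here's a minimal dom0 sketch of this approach (same VM name as above; the output shown
reflects the behavior described, i.e. the feature stays pinned regardless of the actual update state):

```
# Force the template to always appear in the Qubes Updater GUI
[user@dom0 ~]$ qvm-features sd-svs-template updates-available 1
# Querying the feature shows it pinned, whether or not updates exist
[user@dom0 ~]$ qvm-features sd-svs-template updates-available
1
```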

### Option 2ii: Configure qubes.UpdatesProxy to check for SDW updates
Given the discussion above, this was a promising option. Qubes already provides appropriate tooling
for checking for updates, although it naturally assumes that a network connection is available to do so.
In the case of e.g. `sd-svs`, no network connection is available, so apt polls fail, meaning updates are
never identified and reported back to dom0 to surface notifications to the user.

A potential solution is to enable the qubes.UpdatesProxy service on net-less AppVMs. With just a few changes,
we can permit polling from `sd-svs` to notify about updates to the underlying TemplateVM:

1. Add `$tag:sd-workstation $default allow,target=sys-net` to `/etc/qubes-rpc/policy/qubes.UpdatesProxy` in dom0.
2. Run `qvm-service sd-svs updates-proxy-setup on` in dom0.
3. Reboot/start `sd-svs`.
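
A hedged sketch of those steps in dom0 (illustrative only; RPC policy is evaluated top-down, first match
wins, so the new rule should sit above any catch-all deny):

```
# 1. Insert the policy rule at the top of the UpdatesProxy policy
[user@dom0 ~]$ sudo sed -i '1i $tag:sd-workstation $default allow,target=sys-net' /etc/qubes-rpc/policy/qubes.UpdatesProxy
# 2. Enable the updates-proxy-setup service for the AppVM
[user@dom0 ~]$ qvm-service sd-svs updates-proxy-setup on
# 3. Restart the AppVM so the proxy configuration takes effect
[user@dom0 ~]$ qvm-shutdown --wait sd-svs && qvm-start sd-svs
```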

Thereafter, apt calls made from `sd-svs`, including the Qubes-maintained cron job for checking update availability,
will run successfully. The problem, however, is that we've violated a security control on `sd-svs` by enabling a *global* TCP proxy:

```
[user@dom0 ~]$ qvm-service sd-svs netvm
[user@dom0 ~]$ qvm-service sd-svs updates-proxy-setup
on
user@sd-svs:~$ curl https://ifconfig.co
curl: (6) Could not resolve host: ifconfig.co
user@sd-svs:~$ tail -n2 /etc/apt/apt.conf.d/01qubes-proxy
Acquire::http::Proxy "http://127.0.0.1:8082/";
Acquire::tor::proxy "http://127.0.0.1:8082/";
user@sd-svs:~$ https_proxy=http://127.0.0.1:8082/ curl -s https://ifconfig.co | perl -npE 's/\d/X/g'
XX.XXX.XXX.XXX
```

In the final line, I redacted the output, since it showed my actual public IP, absent all VPNs and firewall settings,
by virtue of the connection running through `sys-net`. If it isn't clear, anything is possible at this stage:

```
user@sd-svs:~$ https_proxy=http://127.0.0.1:8082/ curl -sf https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh | sudo bash -s -- --batch
awk: fatal: cannot open file `bash' for reading (No such file or directory)
awk: fatal: cannot open file `bash' for reading (No such file or directory)
CVE-2017-5753: OK (Mitigation: usercopy/swapgs barriers and __user pointer sanitization (complete, automated))
CVE-2017-5715: OK (Full retpoline + IBPB are mitigating the vulnerability)
CVE-2017-5754: OK (Mitigation: PAX_UDEREF (pgd switching))
CVE-2018-3640: OK (your CPU microcode mitigates the vulnerability)
CVE-2018-3639: OK (Mitigation: Speculative Store Bypass disabled via prctl and seccomp)
CVE-2018-3615: OK (your CPU vendor reported your CPU model as not vulnerable)
CVE-2018-3620: OK (Mitigation: PTE Inversion)
CVE-2018-3646: OK (this system is not running a hypervisor)
CVE-2018-12126: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2018-12130: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2018-12127: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2019-11091: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2019-11135: OK (your CPU vendor reported your CPU model as not vulnerable)
CVE-2018-12207: OK (this system is not running a hypervisor)
```

(Yes, I piped curl to bash to make a point.) Clearly this isn't a strategy we can accept.


### Recommended next steps

Let's shore up the cron job configuration so it's less disruptive to daily users. We can selectively target certain VMs
for updates (#341) in order to minimize the performance impact (and runtime), as well as reduce the frequency from daily to weekly.
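
As a rough sketch only (the target list and qubesctl flags are assumptions to validate), the weekly dom0
cron entry I have in mind would look something like:

```
#!/bin/bash
# Illustrative sketch: /etc/cron.weekly/securedrop-updates (dom0)
# Update only the SDW template(s), rather than every VM on the system.
qubesctl --skip-dom0 --targets sd-svs-template state.sls update.qubes-vm
```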

We should also reconsider the use of discrete TemplateVMs for each SDW component. If we were instead to install all the SDW-related
packages in a single TemplateVM and reuse that template across multiple components, we'd get the benefit of the built-in
Qubes tooling to notify about updates. There may be conflicts between certain deb packages, particularly those managing mimetypes,
but perhaps there's a homedir-based workaround that would fit well with the Qubes isolation.
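
Roughly, re-pointing a component at a consolidated template would look like this in dom0 (VM and template
names below are placeholders; the `migrate.py` stub added in this commit sketches the same flow via the Admin API):

```
# Point an existing AppVM at the consolidated template (names are placeholders)
[user@dom0 ~]$ qvm-shutdown --wait sd-svs
[user@dom0 ~]$ qvm-prefs sd-svs template sd-workstation-template
[user@dom0 ~]$ qvm-start sd-svs
```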

The option remains open that we could establish unused AppVMs with network enabled, strictly to check for updates, and preserve
the isolation of the app-specific data in the net-less VMs. The overhead involved in maintaining and running those dedicated checker
VMs seems comparable with that of simply running the update checks for each TemplateVM, however, and therefore is unlikely to be worth the effort.
9 changes: 8 additions & 1 deletion dom0/fpf-apt-test-repo.sls
@@ -1,6 +1,12 @@
# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :
#

# Import the Qubes-maintained Salt logic for upgrading VM packages.
# Intelligently handles both Debian & Fedora VMs. For reference, see:
#
# dom0:/srv/formulas/base/update-formula/update/qubes-vm.sls
#
include:
- update.qubes-vm

@@ -11,7 +17,8 @@ install-python-apt-for-repo-config:
- pkgs:
- python-apt
- require:
- # Require that the Qubes update state has run first
+ # Require that the Qubes update state has run first. Doing so
+ # will ensure that apt is sufficiently patched prior to installing.
- sls: update.qubes-vm

configure-apt-test-apt-repo:
4 changes: 2 additions & 2 deletions dom0/sd-clean-all.sls
@@ -31,12 +31,12 @@ sd-cleanup-rpc-mgmt-policy:
- repl: ''
- pattern: '^disp-mgmt-sd-\w+\s+sd-\w+\s+allow,user=root'

- {% set sdw_customized_rpc_files = salt['cmd.shell']('grep -rl "BEGIN securedrop-workstation" /etc/qubes-rpc/ | cat').splitlines() %}
+ {% set sdw_customized_rpc_files = salt['cmd.shell']('grep -rIl "BEGIN securedrop-workstation" /etc/qubes-rpc/ | cat').splitlines() %}
{% if sdw_customized_rpc_files|length > 0 %}
sd-cleanup-rpc-policy-grants:
file.replace:
- names: {{ sdw_customized_rpc_files }}
- - pattern: '### BEGIN securedrop-workstation ###.*### END securedrop-workstation ###'
+ - pattern: '### BEGIN securedrop-workstation ###.*### END securedrop-workstation ###\s*'
- flags:
- MULTILINE
- DOTALL
26 changes: 26 additions & 0 deletions migrate.py
@@ -0,0 +1,26 @@
#!/usr/bin/env python3

"""
Re-point SDW AppVMs at a consolidated TemplateVM and re-apply their Salt state.
Work-in-progress sketch; the exact qubesctl invocation below is an assumption.

Usage: migrate.py <template-vm> <app-vm> [<app-vm> ...]
"""

import subprocess
import sys

import qubesadmin

q = qubesadmin.Qubes()

# TemplateVM to enforce and AppVMs to update come from the CLI.
template_name = sys.argv[1]
appvm_names = sys.argv[2:]

# Confirm the TemplateVM exists (raises KeyError if it doesn't).
template = q.domains[template_name]

for name in appvm_names:
    # Look up each target AppVM.
    vm = q.domains[name]
    # If its TemplateVM does not match the desired one:
    if vm.template.name != template.name:
        # Halt the AppVM before switching templates.
        if vm.is_running():
            subprocess.check_call(["qvm-shutdown", "--wait", name])
        # Update the TemplateVM setting.
        vm.template = template
        # Run the Salt highstate against the AppVM so its config is re-enforced.
        subprocess.check_call(
            ["qubesctl", "--skip-dom0", "--targets", name, "state.highstate"]
        )
2 changes: 1 addition & 1 deletion scripts/destroy-vm
@@ -43,7 +43,7 @@ def destroy_vm(vm):
     if vm.is_running():
         vm.kill()
     print("Destroying VM '{}'... ".format(vm.name), end="")
-    subprocess.check_call("qvm-remove -f {}".format(vm.name).split())
+    subprocess.check_call(["qvm-remove", "-f", vm.name])
     print("OK")


