patches and low-level development discussion
* [PATCH nixpkgs 00/16] Inter-guest networking
@ 2021-04-11 11:57 Alyssa Ross
From: Alyssa Ross @ 2021-04-11 11:57 UTC (permalink / raw)
  To: devel

In Spectrum, we want the host kernel to include as few drivers as
possible, to reduce attack surface.  To accomplish this, we need to
move as much hardware interaction as possible into VMs.  This series
introduces proof-of-concept network hardware isolation by passing
through network devices to a VM, and having that VM handle all
interaction with that hardware instead of the host system.


Background
----------

Ideally, the Spectrum host system wouldn't need to support networking
at all.  Network hardware could be handled by a VM, which
would act a bit like a router, and export virtual network devices
directly to other VMs.  The hard part of that ideal is the bit where a
VM exports a virtual device to another VM.  There's work going on in
the Linux virtualisation ecosystem to make that possible, through a
protocol called virtio-vhost-user[1].  But it looks like it's going to
be a long time before virtio-vhost-user is baked enough to be useful
to us, and inter-guest networking is so fundamentally important to
Spectrum that we can't afford to wait that long.  So we need a plan B.

Because we have no way to have a VM provide a virtual device, the host
is going to need to do it.  And this means that some amount of
networking is going to have to happen on the host.  But we can still
get most of what we were going for:

 * Pass hardware network devices through to a dedicated VM.
 * Attach a virtual network device to the same VM.
 * Connect that virtual network device to virtual network devices for
   other VMs on the host.

With this approach, the host is still doing networking, but there's a
lot less code involved, because instead of having loads of drivers for
every kind of Wi-Fi card and so on available on the host, the host
only ever needs to use the driver for the virtual network devices.  If
we connect each network client VM to the router VM using a bridge
(i.e. a virtual Ethernet switch), we don't even need to worry about IP
addresses or routing tables on the host.  (There are some drawbacks to
this approach, which are elaborated later in the series.)
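As a rough illustration, the host-side plumbing might look something
like the sketch below.  The device and bridge names (br0, tap-net,
tap-app) are made up for illustration; in this series the actual setup
is performed by the service manager.

```shell
#!/bin/sh -e
# Sketch of the host-side plumbing: a bridge plus one TAP device per
# VM.  run() only echoes commands by default; set DO= (empty) in the
# environment to actually execute them (requires root).
run() { ${DO-echo} "$@"; }

# A bridge is a virtual Ethernet switch on the host.
run ip link add br0 type bridge
run ip link set br0 up

# One TAP device per VM: tap-net for the router VM, tap-app for a
# client VM.  Attaching both to the bridge connects the VMs at
# layer 2, so the host needs no IP addresses or routing tables.
for tap in tap-net tap-app; do
  run ip tuntap add "$tap" mode tap
  run ip link set "$tap" master br0 up
done
```

Each VMM is then handed its VM's TAP device to back a virtual network
interface in the guest.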

So that's what's implemented here.

[1]: https://wiki.qemu.org/Features/VirtioVhostUser


Implementation
--------------

This series starts with some cherry-picks from upstream Nixpkgs.  (We
should sync with upstream soon, but I didn't want to delay this any
longer by blocking it on a Nixpkgs sync.)  Then there's some important
refactoring to make it possible to fit multiple VM definitions into
spectrumPackages.  (Until now, there's only been one.)  Finally, I
implement two VMs -- one to act as the router and one to be the client
-- as well as a service manager that can set everything up on the host
to be able to run those VMs, and run them.

Don't read too much into the structure of the Nix code for the VMs.  I
just needed _some_ structure, and this was what I came up with.  As we
explore the configuration aspect of Spectrum more, I expect it to
change dramatically.  There's quite a bit of duplication between VM
definitions, but I don't think it's worth spending time getting rid of
that for now when the entire structure could change before it becomes
an issue.  The important thing here is what's going on at runtime on
the host and in the VMs, not the Nix code used to build that.

Everything in the router VM is set up to be able to handle multiple
clients coming and going at runtime (with the exception of a small
issue with clients going away that I'll get into later in the series).
The only reason the client VM is defined up front is because that's
what fits our Nix code.  There's no reason from the networking point
of view that client VMs couldn't be instantiated entirely at runtime.

One thing that might jump out to anybody skimming the series is that
we're using cloud-hypervisor here.  This is because cloud-hypervisor
supports a feature that crosvm doesn't (I'll go into detail about
exactly what later in the series), and it's more expedient to use
both VMMs as required by the needs of the VMs they're running than it
would be to port features from cloud-hypervisor to crosvm or vice
versa.
Obviously using multiple different VMMs on the host is not good in the
long term, but things are so in flux at the moment that it's very
likely that any porting work will no longer be in use by the time we
need to pare down to a single VMM anyway.


Testing
-------

If you want to try this out for yourself (and I encourage you to do so
and reply to me with a Tested-by!), here's what you need to do:

 1. Identify the PCI location of a physical Ethernet device on your
    system.  `lspci -n' should help with this.

 2. Modify pkgs/os-specific/linux/spectrum/testhost/default.nix to
    define PCI_LOCATION as the location of your device.  In future,
    we can either have a configuration option for this sort of thing,
    or try to figure it out at runtime.

 3. Start the service manager:

    	sudo env XDG_RUNTIME_DIR=/run $(nix-build -A spectrumPackages.spectrum-testhost)/bin/spectrum-testhost

    A temporary directory will be created for service manager state.
    Its location will be printed to the terminal.

 4. At another terminal, tell the service manager to start the
    "application VM".  This is the client that will be connected to
    the router VM.

    	s6-rc -u -l /run/spectrum.sy2huQuC3x/s6-rc/live change vm-app

    Remember to substitute in your temporary directory.

    You'll need the s6-rc command available (it's in Nixpkgs).  This
    tells the service manager to start the service named vm-app.  This
    service has a dependency on another VM called vm-net, which is the
    router VM that deals with the network hardware, so both will be
    started when you ask s6-rc to start vm-app.

 5. The terminal running the service manager will be connected to the
    serial console of the client VM.  After a few seconds (because the
    router VM will probably have to sort out DHCP first), you should
    be able to ping hosts on the internet.  Note that the client VM
    doesn't come with DNS set up.

 6. Once you're done, you can tell the service manager to stop the
    VMs:

    	s6-rc -da -l /run/spectrum.sy2huQuC3x/s6-rc/live change

    Again, remember to substitute your temporary directory.

    The service manager itself won't respond to ^C.  I believe this is
    handled better in more recent versions of s6, which we'll get when
    we sync Nixpkgs.

    After this, it's a good idea to reboot, to restore all the network
    devices to their default state.
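For reference, the steps above can be condensed into the following
sketch.  The runtime directory name is a placeholder (substitute the
one the service manager prints), and run() only echoes commands by
default, so nothing privileged is executed unless you set DO= (empty)
in the environment.

```shell
#!/bin/sh -e
# Condensed sketch of the test procedure.  Commands only echo by
# default; set DO= (empty) to actually execute them.
run() { ${DO-echo} "$@"; }

# Step 1: PCI class 0200 is "Ethernet controller", so candidate
# devices to pass through show up with that class in `lspci -n'.
run lspci -n

# Step 4: start the application VM.  vm-net (the router VM) is a
# dependency of vm-app, so s6-rc brings it up first.
live=/run/spectrum.XXXXXXXX/s6-rc/live  # substitute your directory
run s6-rc -u -l "$live" change vm-app

# Step 6: stop all running services again when you're done.
run s6-rc -da -l "$live" change
```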


Alyssa Ross (16):
  linux: enable Xen everywhere it can be
  cloud-hypervisor: 0.8.0 -> 0.14.1
  mdevd: init at 0.1.3.0
  spectrumPackages.linux_vm: fix cloud-hypervisor hotplug
  spectrumPackages.linux_vm: allow config overrides
  crosvm: support setting guest MAC from --tap-fd
  spectrumPackages: export makeRootfs
  spectrumPackages.rootfs: add s6-rc support
  spectrumPackages.rootfs: make /var/lib and /var/run
  spectrumPackages.rootfs: add dbus configuration
  spectrumPackages.rootfs: add connman dbus services
  spectrumPackages.sys-vms.comp: init
  spectrumPackages.makeRootfs: move to default.nix
  spectrumPackages.sys-vms.net: init
  spectrumPackages.sys-vms.app: init
  spectrumPackages.spectrum-testhost: init

 .../cargo-lock-vendor-fix.patch               |  53 ----
 .../cloud-hypervisor/default.nix              |  15 +-
 ...upport-setting-guest-MAC-from-tap-fd.patch | 294 ++++++++++++++++++
 .../linux/chromium-os/crosvm/default.nix      |   1 +
 .../linux/kernel/common-config.nix            |  13 +-
 pkgs/os-specific/linux/kernel/patches.nix     |   9 +
 pkgs/os-specific/linux/mdevd/default.nix      |  28 ++
 pkgs/os-specific/linux/spectrum/default.nix   |   6 +-
 pkgs/os-specific/linux/spectrum/linux/vm.nix  |   7 +-
 .../linux/spectrum/rootfs/default.nix         |  92 +++---
 .../linux/spectrum/rootfs/etc/group           |   1 +
 .../linux/spectrum/rootfs/etc/passwd          |   1 +
 .../linux/spectrum/rootfs/generic.nix         |  48 ---
 .../linux/spectrum/rootfs/rc-services.nix     |  26 ++
 .../linux/spectrum/rootfs/stage1.nix          |  25 +-
 .../linux/spectrum/spectrum-vm/default.nix    |   6 +-
 .../linux/spectrum/testhost/default.nix       | 205 ++++++++++++
 .../linux/spectrum/vm/app/default.nix         |  63 ++++
 .../linux/spectrum/vm/comp/default.nix        |  86 +++++
 .../os-specific/linux/spectrum/vm/default.nix |   9 +
 .../linux/spectrum/vm/net/default.nix         | 165 ++++++++++
 pkgs/top-level/aliases.nix                    |   6 +
 pkgs/top-level/all-packages.nix               |  12 +-
 23 files changed, 976 insertions(+), 195 deletions(-)
 delete mode 100644 pkgs/applications/virtualization/cloud-hypervisor/cargo-lock-vendor-fix.patch
 create mode 100644 pkgs/os-specific/linux/chromium-os/crosvm/0001-crosvm-support-setting-guest-MAC-from-tap-fd.patch
 create mode 100644 pkgs/os-specific/linux/mdevd/default.nix
 delete mode 100644 pkgs/os-specific/linux/spectrum/rootfs/generic.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/rootfs/rc-services.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/testhost/default.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/vm/app/default.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/vm/comp/default.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/vm/default.nix
 create mode 100644 pkgs/os-specific/linux/spectrum/vm/net/default.nix

-- 
2.30.0
