Hi,
I've been running Qubes for a few years now and I'd like to give
Spectrum a try, as I've been having some hardware and performance
problems with Qubes. Is there some up-to-date guide I can follow? I
found https://alyssa.is/using-virtio-wl/#demo and was able to see the
weston terminal. I also tried updating to the latest commit and was
able to get a nested wayfire window with:
nix-build . -A spectrumPackages && ./result-3/bin/spectrum-vm
(I'm fairly new to Nix, so not sure if this is the right way to do things)
I managed to change the keyboard layout, mount a tmpfs for home, and
increase the memory enough to start firefox, but I haven't managed to
get much further. Things I tried so far:
- I tried replacing wayfire with weston-terminal, to avoid the nested
session. But sommelier segfaults when I do that.
- I tried adding `--shared-dir /tmp/ff:ff:type=9p` to share a host
directory. Then `mount -t 9p -o trans=virtio,version=9p2000.L ff /tmp`
in the VM seemed to work, but `ls /tmp` crashed the VM.
- I tried using `-d /dev/mapper/disk` to share an LVM partition, but
`mount -t ext4 /dev/vdb /tmp` refused to mount it.
- I tried enabling networking with `--host_ip 10.0.0.1`, etc., but it
said it couldn't create a tap device. I guess it needs more
privileges.
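For what it's worth, the next thing I was going to try for the tap
problem is pre-creating a tap device on the host, so crosvm doesn't
need the privileges to make one itself (untested; the device name and
address are just placeholders):

  # create a persistent tap owned by my unprivileged user (needs root)
  sudo ip tuntap add dev tap0 mode tap user "$USER"
  sudo ip addr add 10.0.0.1/24 dev tap0
  sudo ip link set tap0 up

I believe crosvm can be handed an already-open tap via --tap-fd instead
of creating one, but I haven't checked that against the crosvm version
spectrum-vm uses.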
Ideally, I'd like to run a VM with each of my old Qubes filesystems,
to get back to where I was with my Qubes setup, before investigating
new Spectrum stuff (e.g. one app per VM). Do you have any advice on
this? I see these lists are a bit quiet - I hope someone is still
working on this because it sounds great :-)
Thanks!
--
talex5 (GitHub/Twitter) http://roscidus.com/blog/
GPG: 5DD5 8D70 899C 454A 966D 6A51 7513 3C8F 94F6 E0CC
Hi,
Now that we've been developing Spectrum ARM (aarch64) support
with iMX8 boards, I'd like to get back to Spectrum HW configuration design.
On x86, a generic image with a kernel supporting most devices as modules
can make sense. On ARM, vendor-specific BSP HW quirks are more common.
As of now, the Spectrum fork for aarch64 just adds another config after
the rpi configs and replaces the default config so the build uses it.
With small changes this could be handled like the rpi configs. In
addition, cloud-hypervisor only accepts a kernel in EFI format for
aarch64[1]. Anyway, this would allow us to build an aarch64 Spectrum
installer, even one with a more generic kernel. That takes us to ARM
vendor/device-specific HW quirks, which would need to be handled anyway.
I'll intentionally leave aside device-specific kernel hardening and
disabling kernel module loading for security reasons for now.
As of now, vendor/device specifics are not supported unless one builds a
device-specific Spectrum image with all configs set at build time and
skips the installer altogether.
The other option I see: we discussed nixos-hardware and device-specific
modules earlier. That would bring NixOS configuration.nix and
installation support scripts to Spectrum, though. Those could be called
from the Spectrum installer, but it would change the installer logic
from writing an image to dynamically configuring the device during
install based on user selections.
Any thoughts on which would be the preferred way? Or maybe some other way?
In the end, HW specifics are also needed on x86, as we saw with NUCs and
different Lenovo laptops in the spring. I'm not convinced one image to
rule them all is realistic or secure.
Finally, this is by no means blocking the hardened iMX8-based Spectrum
development, but we'll keep that work in the Spectrum fork until there's
an agreed path to implement this. Integrating it sooner and making it
more generic would make Spectrum more useful to a wider audience.
Best regards,
-Ville
[1] https://github.com/tiiuae/spectrum/pull/3#issuecomment-1211834302
Recently I've been working on making it possible for us to use crosvm's
implementation of virtio-gpu (which is necessary for multi-VM Wayland).
The approach I was originally planning on was porting crosvm's
vhost-user-gpu frontend to cloud-hypervisor. That would allow us to run
the crosvm implementation of the device unmodified, with just a small
amount of glue code in cloud-hypervisor.
But then I discovered some things that made me decide to investigate
other approaches:
- crosvm does not implement the standard vhost-user-gpu protocol.
It implements it in its own special way. Perhaps ironically, the
crosvm-specific way seems to be closer to how vhost-user works for
other devices (like network and block), which should actually make
it easier to port the frontend to cloud-hypervisor. But it also
changes the potential to upstream that port to cloud-hypervisor
from "a hard sell" to "not going to happen". So if I did that,
it would commit us to carrying a cloud-hypervisor patch indefinitely.
- There's an interesting new protocol called vfio-user that would be
really helpful to us in this situation. Whereas vhost-user requires
the VMM to still have some basic per-device knowledge (the glue code
I was planning to port), vfio-user operates at the PCI level, so the
VMM only needs to know that the device is PCI. So if we could
somehow provide a virtio-gpu device to cloud-hypervisor over
vfio-user, cloud-hypervisor wouldn't need any GPU-specific code at
all. Everything should just work without any changes to
cloud-hypervisor, as it already implements a vfio-user client (see
the sketch after this list).
- The next release of QEMU, 7.1.0, will include support for exporting
any virtual device QEMU can provide over vfio-user.
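To make the vfio-user point concrete, here's roughly what the
cloud-hypervisor side would look like. If I remember the CLI right, its
existing vfio-user client is exposed through --user-device. This
assumes something is already serving a virtio-gpu device on a Unix
socket, which is exactly the part that doesn't exist yet, and the paths
are just placeholders:

  cloud-hypervisor \
      --kernel vmlinux \
      --disk path=rootfs.img \
      --user-device socket=/run/virtio-gpu.sock

The GPU would then show up to cloud-hypervisor as just another PCI
device behind that socket, with no GPU-specific code in the VMM itself.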
So with all this in mind, there are three ways we could try to proceed:
1. Port the crosvm-specific vhost-user-gpu frontend to cloud-hypervisor.
2. Make crosvm speak the standard version of vhost-user-gpu, then use
QEMU to act as a bridge between the crosvm GPU device, and
cloud-hypervisor, translating vhost-user to vfio-user, but not doing
anything else. (So we're not using QEMU to run a VM, just to
translate between these two protocols and handle the PCI stuff.)
3. Implement a vfio-user server in crosvm, so crosvm device backends
can be used directly with cloud-hypervisor.
3 is the clear best option, because it doesn't require adding QEMU into
the system, and it would be entirely upstreamable — I spoke to a crosvm
developer about it in their Matrix channel and they said they'd be
interested in patches for it. But it's also quite complicated to
implement, and beyond my ability, at least if I want results any time
soon. 2 would probably be even more complicated as it would require
coordinating between crosvm and QEMU to get them to agree on how the
protocol should work.
So if we want this working soon, 1 is the only feasible option, at least
if it's me doing the work. But it means committing to carrying a
cloud-hypervisor patch until somebody comes along to implement 3. It
gives me bad vibes because it goes against the upstream-first approach I
try to take with Spectrum development, and once timely package updates
are something we have to take more seriously, the patch no longer
applying would block any sort of automatic updates.
So I'm posting this to solicit thoughts on what to do here. The ideal
scenario is that we are able to find somebody else (with more VMM
implementation experience than me) who is able to do 3, and then I would
be totally comfortable with doing 1 as a stopgap until that can happen.
Otherwise, I can either keep trying to chip away at doing it myself,
however long that takes, or we'd have to just accept the consequences of
having the patch indefinitely, and hope that Google or somebody else
also finds themself wishing crosvm had a vfio-user server and implements
it themself.
Thoughts welcome. :)
If you've been paying close attention recently, you'll have seen
patches coming from a few different Unikie[1] email accounts. In
addition to contributing to Spectrum, Unikie has hired me to work on it.
Unikie is interested in Spectrum for both desktop and embedded use.
They're contracting for the TII Secure Systems Research Center[2],
working on developing a Spectrum-based reference system.
One thing they're currently working on is being able to run Spectrum on
the i.MX8 development board, which means we'll hopefully see patches
adding ARM and cross-compilation support to Spectrum in the near future.
The first big thing I'll be working on for Unikie is finally integrating
crosvm's support for graphical application VMs into Spectrum.
While I'm working for Unikie, I plan to set aside any donations I
receive for my Spectrum work, to be used for project expenses and
funding Spectrum work from other people, rather than being used to pay
for my general living expenses as has been the case up to now.
And just to clarify: Spectrum has not been acquired or anything — I'm
still leading the project, and Unikie has the same rights as any other
contributor (copyright ownership of contributions, mostly).
[1]: https://www.unikie.com/en/
[2]: https://www.tii.ae/secure-systems
I was recently at MCH 2022[1], one of the big European hacker camps. We
had some really good conversations about Spectrum, and I thought I'd
share my takeaways here:
1. We were praised for our recent documentation efforts, both in
implementing Diátaxis[2] and Architecture Decision Records[3].
So big thanks to Ville for spearheading the latter.
2. We talked about the use case of having multiple user data partitions.
This would allow very strict separation of security domains, and
could also be helpful for data portability — you could have one user
data partition in your desktop, and another on a portable disk, for
example. And if, way down the line, we want to do really cool things
like have live migration of VMs between systems, architecting for
multiple user data partitions will be a big help with that too.
This is one of those things where it's not difficult to do, as long
as we plan for doing it that way from the start. But if we didn't do
it that way from the start, and decided we wanted to add it later, I
can see how we'd be in for a world of pain. So I think it's a
sensible change to make. We're unlikely to regret making it, but
reasonably likely to regret not having done it earlier if it becomes
really important later on.
3. Something that can apparently be difficult for Qubes is having every
VM have a unique, human-readable name in a global namespace. This
means that, for example, disposable VMs have to try to generate a
name that isn't already in use. This is especially relevant if we
end up supporting multiple sources of VMs as described above.
So in the short term, we should probably change VMs to be identified
with UUIDs, and have human-readable names be a layer on top. Not
having human-readable unique names in a single global namespace
will help with thinking about VMs in terms of capabilities.
Since points 2 and 3 are architectural changes, I'll write them up and
submit them as proper ADRs when I get the chance.
[1]: https://mch2022.org/
[2]: https://diataxis.fr/
[3]: https://spectrum-os.org/doc/decisions/