[-- Attachment #1: Type: text/plain, Size: 544 bytes --]

I've updated Spectrum's Cloud Hypervisor patches (which add virtio-gpu support) to support the recently released Cloud Hypervisor 38.0.

Also new in this release:

• Support for adding GPU devices using Cloud Hypervisor's optional D-Bus API.
• ch-remote now has an add-gpu subcommand.
• GPU support in the API is now documented in the OpenAPI definition.

More information is available at <https://spectrum-os.org/software/cloud-hypervisor/>.

(I'm still not sure what the right place to announce these updates is…)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1643 bytes --]

Hi all,

When we had an informal compartmentalization dinner at NixCon, one thing that we talked about a lot was how to communicate the ongoing work happening in Spectrum, make it easier for people to get familiar with how the project works, etc. And the suggestion that really seemed to vibe with people there was development streams / videos, which have worked well for projects like SerenityOS and Asahi Linux.

It’s been a while since then, but in the background some collaborators and I have been working on setting everything up to be able to do consistent, high-quality Spectrum development streams. I’ve done three so far this year, and I plan to keep going at that sort of pace. So far, on stream I’ve been working on implementing support for sharing files with VMs using the XDG File Chooser Portal.

Streams are broadcast on https://live.qyliss.net/

And recordings are available at https://diode.zone/c/spectrum/

Currently, there’s no schedule, as it’s usually hard for me to stick to that sort of thing, and I want to get into the rhythm of just streaming frequently when I can before I even attempt that. But I’ll try to do some scheduled streams in future if I can, because I know that would make it easier for people to join in. For now, you can be notified when streams start via browser notifications or ActivityPub.

Thanks to everybody who suggested this and helped me get set up for it, and I hope this and other things I’m doing (like a proper release of the Cloud Hypervisor patches) make the project seem more alive and vibrant in 2024. :)

Alyssa

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1043 bytes --]

Hi,

For the past while, I've been maintaining a patchset for Cloud Hypervisor that makes it possible to use crosvm’s virtio-gpu implementation over vhost-user to provide a virtual GPU device to Cloud Hypervisor guests. It’s likely not upstreamable, especially as-is, but given there’s demand for this[1][2], and some other people have already started using my patches, I figured it was worth releasing them properly.

The patches, along with more information, are available at:

https://spectrum-os.org/software/cloud-hypervisor

I’ve been keeping them up to date with new releases of Cloud Hypervisor, and expect to continue to do that. The patchset is developed and maintained as part of Spectrum[3], a project to create a compartmentalized desktop operating system.

I’m not sure where the best place will be to announce updated versions of the patchset yet. Spectrum Discuss mailing list? Cloud Hypervisor list? A new one? Feedback very welcome.

Get in touch if you have any questions,
Alyssa

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1118 bytes --]

Some time in the next week, the #spectrum:libera.chat Matrix room is going to stop working, as a result of a dispute between Libera.Chat and matrix.org[1]. So, unfortunately, if you want to keep using Matrix to participate in the Spectrum chat, you need to take some action:

1. Leave #spectrum:libera.chat
2. Join #spectrum:fairydust.space

It's very important to do it in that order, because otherwise the bridge gets confused and bad things happen. If something goes wrong, rejoin either channel, ask for help, and we'll figure it out.

On the positive side, the new Matrix room is controlled by me, rather than by the administrators of the Matrix-Libera bridge. This means we can have nice things like room history, and if something like this happens again (needing to use a different bridge, moving IRC network, etc.), I should be able to use my room admin powers to change the setup transparently, without any action being required from individual Matrix users. So this is hopefully the last time I'm going to have to ask Matrix users to join a new room.

[1]: https://libera.chat/news/matrix-deportalling

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1268 bytes --]

An annoying longstanding problem with Spectrum is that I do a lot of very interesting work, but I'm not very good at keeping people following the project informed about it. Over the years I've tried writing status updates, but it's really difficult for me to do them consistently.

The community calls, of which we've had a couple, are a good format for me, especially because they're interactive. But the potential attendees change over time, depending on who's interested in the project at the moment, and so it's important to work with their schedules to make sure enough people come to make it worthwhile. That means that organising the call is yet another task I should do on a schedule, and struggle with.

So, I was wondering if there's somebody out there who's better at sticking to schedules and would be able to help with this part. It really just involves making the scheduling poll, sending out an email about it, and then, when the poll closes, checking the responses to pick the meeting time and sending out another email announcing that time. Ideally, I think we'd have a community call every month.

If that sounds like something that would be easier for you than it is for me, it would be lovely if you could get in touch in any of the normal ways. :)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 722 bytes --]

On Sat, Feb 25, 2023 at 03:17:09PM +0000, Alyssa Ross wrote:
> Hi! I'm organising another community call, trying to make these a
> regular thing. There's not quite enough time left to organise one in
> February taking into account when people are available, so it'll be in
> March.
>
> This is a call, open to anybody who wants to attend, to ask questions
> about Spectrum, talk about any related work they're doing or interested
> in doing, etc.
>
> If you'd like to come, please fill in this poll to indicate when works
> for you:

Okay, we'll have the call on Friday, March 10, 2023 11:00 UTC, on:

https://meet.jit.si/moderated/f20761163cb82478c3cc28dbffc1edcd788d8766232fa96f8dc0642b88a6ee59

Talk to you all then.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 762 bytes --]

On Sat, Feb 25, 2023 at 03:17:09PM +0000, Alyssa Ross wrote:
> Hi! I'm organising another community call, trying to make these a
> regular thing. There's not quite enough time left to organise one in
> February taking into account when people are available, so it'll be in
> March.
>
> This is a call, open to anybody who wants to attend, to ask questions
> about Spectrum, talk about any related work they're doing or interested
> in doing, etc.
>
> If you'd like to come, please fill in this poll to indicate when works
> for you:
>
> https://www.systemli.org/poll/#/poll/tYCYsBFri1/participation?encryptionKey=zZ2ibQqqtQZiebVPjx0nhFTzwLoA5demSwXjvMui
>
> (Note that responses are public.)

Reminder to fill in the linked poll if you want to participate. :)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 607 bytes --]

Hi! I'm organising another community call, trying to make these a regular thing. There's not quite enough time left to organise one in February, taking into account when people are available, so it'll be in March.

This is a call, open to anybody who wants to attend, to ask questions about Spectrum, talk about any related work they're doing or interested in doing, etc.

If you'd like to come, please fill in this poll to indicate when works for you:

https://www.systemli.org/poll/#/poll/tYCYsBFri1/participation?encryptionKey=zZ2ibQqqtQZiebVPjx0nhFTzwLoA5demSwXjvMui

(Note that responses are public.)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 3316 bytes --]

Some deliberately brief updates so I can actually get all this written down finally:

First, December was my last month working at Unikie. Some annoying German bureaucratic quirks unfortunately got in the way and made things difficult for me. My understanding is that they plan to continue working with and contributing to Spectrum. Former Unikie colleagues, I'm looking forward to continuing to work with you. :)

Relatedly, I'd like to organise another community call soon, as the last one was really good. I'd have liked to do one in January too, but too many non-work things getting in the way meant I wasn't able to make that happen. But let's keep them going with a call in February. More on this soon.

This also means that, for the time being, I'm once again funding my work on Spectrum through community donations. (But as previously promised, all the money I raised while I was working for Unikie will remain set aside.) Please consider supporting my work on Spectrum through https://github.com/sponsors/alyssais or https://liberapay.com/qyliss. Hopefully I'll have more to share soon on other sources of funding.

What I've been doing recently: I spent December getting the foundations of virtio-gpu support into upstream rust-vmm[1], and January doing work on Nixpkgs. It would be a big win for development experience if I could get rid of all of Spectrum's modifications in upstream Nixpkgs, so I've been working towards that. I'm taking a bit of a roundabout route, because I need to be able to demonstrate why the changes are useful in ways that aren't specific to Spectrum — as a result, I've actually been working a bit on improving Nixpkgs' FreeBSD support, since it shares some characteristics with Spectrum that most Linux systems wouldn't. (A non-systemd udev implementation, for example.)
[1]: https://github.com/rust-vmm/vm-virtio/commits/c527b45dada0a81d343aca7f06759d5637d6429a?author=alyssais

Upcoming challenges I'm thinking about:

- Way too much of my time at the moment is spent doing QA — making sure new Nixpkgs updates or kernels aren't going to introduce regressions in Spectrum. We do some unusual things, so we can't rely on other people to catch problems before they affect us. I want to get some automated testing against upstreams sorted out, so I can free up more of my time for working on documentation and features, which is what I really want to be doing but am struggling to find time for at the moment. Getting to 0 Nixpkgs patches is part of this.

- I want to improve the experience for other contributors — I know it's lacking in various ways. I think the quickest win here will be to figure out a way to let people join the Spectrum chat through Matrix without being kicked after 30 days of inactivity (which is what the Libera bridge does). I've been told there are various alternative ways we could have this work. Having reliable real-time chat is pretty critical for collaboration, and it's become especially clear after the winter holidays that we don't quite have that at the moment.

Also, I'll be at FOSDEM this weekend. Get in touch on IRC (qyliss on libera) or Matrix (@qyliss:fairydust.space) if you want to say hello.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1.1: Type: text/plain, Size: 764 bytes --]

Tomorrow (Wednesday) at 12:00 UTC I will be holding an experimental Spectrum community call. The idea here is that it gives all participants in the Spectrum community (interested users, developers, etc.) the chance to meet and talk about upcoming development work.

So if you would like to hear or ask questions about where Spectrum is going, or if you're working on Spectrum (or would like to be!) and want to share what you're doing, please join us tomorrow at the following URL:

https://meet.jit.si/moderated/f20761163cb82478c3cc28dbffc1edcd788d8766232fa96f8dc0642b88a6ee59

Since somebody has asked already: I don't plan on recording this call. If it goes well and we make this a regular thing, we can discuss at that point whether calls should be recorded.

[-- Attachment #1.2: invite.ics --]
[-- Type: text/calendar, Size: 605 bytes --]

BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
METHOD:REQUEST
BEGIN:VTIMEZONE
TZID:Etc/UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19700101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20221129T104341Z
LAST-MODIFIED:20221129T104431Z
DTSTAMP:20221129T104431Z
UID:bec6b49d-ae2a-40fa-b44e-fd7291e64efb
SUMMARY:Spectrum community call
ORGANIZER;PARTSTAT=NEEDS-ACTION;ROLE=REQ-PARTICIPANT:mailto:alyssa.ross@unikie.com
DTSTART;TZID=Etc/UTC:20221130T120000
DTEND;TZID=Etc/UTC:20221130T130000
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1769 bytes --]

On Wed, Nov 23, 2022 at 07:41:00AM +0000, Juha Park wrote:
> Hello.
>
> In Spectrum OS, as far as I know, all AppVMs will connect to the
> outside through a NetVM, and each AppVM has a different subnet.
> However, sometimes an app should be able to access the host network by
> bridging. For example, a P2P app needs to send and receive multicast or
> broadcast to find other peers.
>
> I wonder if it (bridging to the host network) is possible in the
> Spectrum OS model, and if possible, I want to know how to do it. And if
> there is no such feature, I want to know the plan or opinion to support
> such apps in Spectrum OS.

Hi, thanks for your question!

First, to clarify: in Spectrum, the goal is to avoid having any networking on the host at all, by passing network adapters through to VMs. That's immaterial to your question about multicast, etc., just something important to be aware of.

Bridged networking is definitely on the agenda. I don't know yet exactly how it will work — networking isn't my area of expertise. As I understand it, one possibility would be to run an NDP proxy in the network VM, so each VM would get its own IPv6 address on the host network — as I recall, that's how Chrome OS does it. But what exactly we end up doing will depend on how people who understand networking better than me (possibly such as yourself) think it should be done.

In general, I'm not too happy with the current state of Spectrum's networking — I did it in the way that was easiest to get basic functionality up and running, especially because a key technology for doing it better (virtio-vhost-user) wasn't mature enough at the time. Revisiting it is definitely on the cards, so it's really useful to hear about use cases like this.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 1913 bytes --]

On Mon, Nov 14, 2022 at 02:02:52PM +0200, Ville Ilvonen wrote:
> Hi,
>
> We've built the wayland demo branch on the aarch64 port and added some
> other apps to it to showcase embedded virtualization with Spectrum OS.
>
> The build configuration for the reference device is documented [1] with
> the out-of-tree build configuration as agreed in our earlier
> discussion[2]. There's also some additional accompanying documentation
> on the binary cache we have used with aarch64 that could be of general
> use. I'd like to see that documentation upstreamed as much as possible.
>
> Would it make sense to link this work from the Spectrum documentation?
> To indicate work-in-progress aarch64 support with known issues on a
> reference device, and avoid forking this documentation?
>
> The practical benefit is that anyone interested in building Spectrum OS
> for an aarch64 device using a vendor BSP would have some reference
> linked from the Spectrum OS documentation - e.g. an initial porting
> guide, as even the x86_64 port does not yet use the build configuration.
> Additionally, I don't want to fragment Spectrum OS ports and their
> documentation further than necessary.

Yeah, I think it would be a great idea to link to that from the build configuration page in the Spectrum documentation, as an example.

To clarify, I don't expect the *generic* x86_64 image to use a build configuration file, nor would I expect a generic aarch64 image to. The build configuration file is there for when it's necessary to override the defaults, for example to use a custom kernel. (But it would also make sense to have a commented *example* build configuration file in the Spectrum repo, to be a starting point for users.)
> Best,
>
> -Ville
>
> [1] https://github.com/tiiuae/spectrum-config-imx8/blob/main/README.md
> [2] https://spectrum-os.org/lists/archives/spectrum-discuss/CAP-nJwHTmROzMbyYNtrTrOdXGV-iJvwPuJ3FSZb3gLy5R3z80Q@mail.gmail.com/

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 557 bytes --]

Hello.

In Spectrum OS, as far as I know, all AppVMs will connect to the outside through a NetVM, and each AppVM has a different subnet. However, sometimes an app should be able to access the host network by bridging. For example, a P2P app needs to send and receive multicast or broadcast to find other peers.

I wonder if it (bridging to the host network) is possible in the Spectrum OS model, and if possible, I want to know how to do it. And if there is no such feature, I want to know the plan or opinion to support such apps in Spectrum OS.

Thanks.

[-- Attachment #2: Type: text/html, Size: 2732 bytes --]
Hi,

We've built the wayland demo branch on the aarch64 port and added some other apps to it to showcase embedded virtualization with Spectrum OS.

The build configuration for the reference device is documented [1] with the out-of-tree build configuration as agreed in our earlier discussion[2]. There's also some additional accompanying documentation on the binary cache we have used with aarch64 that could be of general use. I'd like to see that documentation upstreamed as much as possible.

Would it make sense to link this work from the Spectrum documentation? To indicate work-in-progress aarch64 support with known issues on a reference device, and avoid forking this documentation?

The practical benefit is that anyone interested in building Spectrum OS for an aarch64 device using a vendor BSP would have some reference linked from the Spectrum OS documentation - e.g. an initial porting guide, as even the x86_64 port does not yet use the build configuration. Additionally, I don't want to fragment Spectrum OS ports and their documentation further than necessary.

Best,

-Ville

[1] https://github.com/tiiuae/spectrum-config-imx8/blob/main/README.md
[2] https://spectrum-os.org/lists/archives/spectrum-discuss/CAP-nJwHTmROzMbyYNtrTrOdXGV-iJvwPuJ3FSZb3gLy5R3z80Q@mail.gmail.com/
[-- Attachment #1: Type: text/plain, Size: 2444 bytes --]

Thomas Leonard <talex5@gmail.com> writes:

> On Tue, 9 Aug 2022 at 12:01, Alyssa Ross <hi@alyssa.is> wrote:
>>
>> On Mon, Mar 21, 2022 at 04:05:34PM +0000, Alyssa Ross wrote:
>> > On Mon, Mar 21, 2022 at 12:10:43PM +0000, Thomas Leonard wrote:
>> > > I think perhaps that crosvm is compiled without the "virgl_renderer"
>> > > feature (it's not in the default set), and this is causing it to crash
>> > > because that's also "self.default_component". I don't know how to
>> > > compile crosvm with virgl enabled, though.
>> >
>> > It wasn't easy, but I got it to build[1]. I hope that helps. It adds
>> > both virgl_renderer and virgl_renderer_next. I think virgl_renderer
>> > is on by default with --gpu, and virgl_renderer_next is used with the
>> > --gpu-render-server argument. Hopefully at least one of those does the
>> > right thing — let me know!
>> >
>> > [1]: https://github.com/NixOS/nixpkgs/pull/165128
>>
>> Small update: Nixpkgs unstable's crosvm package is now built with the
>> virgl_renderer and virgl_renderer_next features.
>
> I got this working eventually, but I had to apply a load of patches.
> How are you getting it to run?
>
> My patches are here: https://gitlab.com/talex5/crosvm/-/commits/main
>
> In particular:
> - It failed to start (when using virtio-gpu) because it doesn't have
>   access to /nix/store, and

I haven't hit this one. I've mostly been testing with vhost-user — maybe crosvm devices don't run sandboxed when run standalone for vhost-user, since running that device is the only thing the crosvm process is doing?

> - It failed to send Wayland keymaps because they need to be mapped
>   read-only

The vhost-user-gpu frontend implementation we wrote for cloud-hypervisor[1] will map buffers as read-only if mapping them read-write fails[2]. This is only a workaround for crosvm not setting the flags correctly in the vhost-user message, though. The real fix will be in crosvm.
Also, not all Wayland compositors require keymaps to be mapped read-only. wlroots does, but Weston doesn't. I suspect Chromium doesn't either, hence the bug persisting in crosvm.

[1]: https://spectrum-os.org/lists/archives/spectrum-devel/20220930210906.1696349-8-alyssa.ross@unikie.com/
[2]: https://spectrum-os.org/lists/archives/spectrum-devel/20b1f9da3af/s/?b=pkgs/applications/virtualization/cloud-hypervisor/0003-virtio-devices-add-a-GPU-device.patch#n247

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
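[Editor's note: the map-read-write-then-fall-back-to-read-only behaviour discussed in the message above can be illustrated with a short sketch. This is a hypothetical Python illustration of the approach only — the real implementation is Rust code in the cloud-hypervisor patches, and the function name `map_buffer` is invented for this example.]

```python
import mmap


def map_buffer(fd: int, size: int) -> mmap.mmap:
    """Map a shared buffer read-write, falling back to read-only.

    Wayland compositors may send keymap fds that can only be mapped
    read-only (for example sealed memfds, or fds opened O_RDONLY).
    Mapping such an fd with PROT_WRITE fails with a permission error,
    so we retry with PROT_READ alone.
    """
    try:
        # MAP_SHARED (the default) + PROT_WRITE requires a writable fd.
        return mmap.mmap(fd, size, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    except PermissionError:
        # Fall back to a read-only mapping.
        return mmap.mmap(fd, size, prot=mmap.PROT_READ)
```

For a keymap, the read-only fallback is harmless, since the device only ever needs to read the buffer and forward it to the guest.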
On Tue, 9 Aug 2022 at 12:01, Alyssa Ross <hi@alyssa.is> wrote:
>
> On Mon, Mar 21, 2022 at 04:05:34PM +0000, Alyssa Ross wrote:
> > On Mon, Mar 21, 2022 at 12:10:43PM +0000, Thomas Leonard wrote:
> > > I think perhaps that crosvm is compiled without the "virgl_renderer"
> > > feature (it's not in the default set), and this is causing it to crash
> > > because that's also "self.default_component". I don't know how to
> > > compile crosvm with virgl enabled, though.
> >
> > It wasn't easy, but I got it to build[1]. I hope that helps. It adds
> > both virgl_renderer and virgl_renderer_next. I think virgl_renderer
> > is on by default with --gpu, and virgl_renderer_next is used with the
> > --gpu-render-server argument. Hopefully at least one of those does the
> > right thing — let me know!
> >
> > [1]: https://github.com/NixOS/nixpkgs/pull/165128
>
> Small update: Nixpkgs unstable's crosvm package is now built with the
> virgl_renderer and virgl_renderer_next features.

I got this working eventually, but I had to apply a load of patches. How are you getting it to run?

My patches are here: https://gitlab.com/talex5/crosvm/-/commits/main

In particular:
- It failed to start (when using virtio-gpu) because it doesn't have
  access to /nix/store, and
- It failed to send Wayland keymaps because they need to be mapped
  read-only

-- 
talex5 (GitHub/Twitter)        http://roscidus.com/blog/
[-- Attachment #1: Type: text/plain, Size: 1302 bytes --]

Puck has created a video demonstrating the work she's been doing with the in-development Wayland security-context protocol [1], which allows a Wayland compositor to distinguish between applications running in different sandboxes (e.g. in different VMs).

The video is available at https://diode.zone/w/2n3kKNNjXFkSWUwyjT3hgt

Or alternatively,

magnet:?xt=urn:btih:f340dfd391be0cabbb0638eb8af6659214c5d821&dn=puck%27s%20video%20720p.mp4&tr=https%3A%2F%2Fdiode.zone%2Ftracker%2Fannounce&ws=https%3A%2F%2Fdiode.zone%2Fstatic%2Fstreaming-playlists%2Fhls%2F0b093345-a100-4051-b4c3-37292af48c81%2F176adb94-167a-4cb7-b954-a09b301c4d80-720-fragmented.mp4

As part of this work, she updated the draft wlroots and Sway implementations to support the latest proposed version of the protocol, exposed the security context information to Sway configuration hooks, and created a draft crosvm implementation of exposing security context information to the compositor. There's some more information in Puck's post to the Spectrum development mailing list. [2]

Thanks to NLnet and NGI Zero for funding this project.

[1]: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/68
[2]: https://spectrum-os.org/lists/archives/spectrum-devel/5cf20f6f-9d89-4cf9-9154-6dd3c9310c06@app.fastmail.com/

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
[-- Attachment #1: Type: text/plain, Size: 889 bytes --]

Alyssa Ross <hi@alyssa.is> writes:

> The Qubes OS summit will be held in Berlin from tomorrow (Friday) until
> Sunday. I'll be attending, and Puck will be giving a talk at 15:20
> tomorrow.
>
> Puck's talk is called "Isolating GUIs with the power of Wayland", and
> the abstract is:
>
>     Could Qubes OS replace its custom GUI isolation protocol with
>     Wayland while staying as performant and secure? With the advent
>     of Wayland, many strides have been made in the desktop Linux
>     space, limiting the effects a malicious application can
>     have. Gone are the days of every application being able to snoop
>     every keypress! This presentation will dive into the differences
>     between X and Wayland, and why it makes for a great fit in
>     isolating operating systems like Qubes OS and Spectrum.

https://youtube.com/watch?v=hkWWz3xGqS8

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
[-- Attachment #1: Type: text/plain, Size: 274 bytes --]

On Thu, Sep 08, 2022 at 10:24:04PM +0000, Alyssa Ross wrote:
> The Qubes OS summit will be held in Berlin from tomorrow (Friday) until
> Sunday. I'll be attending, and Puck will be giving a talk at 15:20
> tomorrow.

To clarify, that's 15:20 Berlin time (CEST), 13:20 UTC.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
[-- Attachment #1: Type: text/plain, Size: 966 bytes --]

The Qubes OS summit will be held in Berlin from tomorrow (Friday) until Sunday. I'll be attending, and Puck will be giving a talk at 15:20 tomorrow.

Puck's talk is called "Isolating GUIs with the power of Wayland", and the abstract is:

    Could Qubes OS replace its custom GUI isolation protocol with
    Wayland while staying as performant and secure? With the advent
    of Wayland, many strides have been made in the desktop Linux
    space, limiting the effects a malicious application can
    have. Gone are the days of every application being able to snoop
    every keypress! This presentation will dive into the differences
    between X and Wayland, and why it makes for a great fit in
    isolating operating systems like Qubes OS and Spectrum.

The talk will be live streamed at:

https://www.youtube.com/channel/UC_djHbyjuJvhVjfT18nyqmQ

iCalendar file for the talk:

https://cfp.3mdeb.com/qubes-os-summit-2022/talk/ZY8KHW.ics

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]
On 18.8.2022 13.17, Alyssa Ross wrote:
> Okay. Obviously we can support booting with the filesystem mounted
> read-write and no dm-verity. But the problem with that is that then
> changes have to be manually copied back to the source tree. We could

One way to handle this is for people to git clone to the target and build small incremental changes there.

> try to add some conveniences for that (e.g. a facility to diff the
> modified root filesystem with the original version). It still means
> making changes to the compiled system, rather than working on the
> source, but for the kind of changes you're talking about, maybe doing
> it that way and then integrating the changes into the Spectrum source
> at the end wouldn't be too bad?

When installation of development tools on the target is limited, remotely mounting a filesystem (e.g. sshfs) from the development host works.

> Ideally it would be nice to have something like nixos-rebuild switch,
> that can build a new system and then switch into it. But the problem
> is, you have to figure out how to *automatically* take the runtime state
> of the system (the processes and services that are running), and somehow
> carry that over into the new system, where anything could have changed.
> So I'm not sure how we'd do that. NixOS's implementation of this is
> massive, tightly coupled to both NixOS and systemd, and still it's not
> very difficult to get it into a weird state where you need to reboot
> anyway to get an accurate idea of what the change does (or, if you
> don't realise you need to do that, you just think the change hasn't
> worked). If a kernel module has changed, do you try to unload and
> reload it? That's pretty complicated to implement, because you have to
> unbind the devices from it, unload the module, then rebind them, and
> even then the module might have had important state that's been lost.

Good examples, but there's no one answer to these scenarios.
It really depends on the case and developer preferences. Often restarting services is enough; sometimes a system reboot is required. Also, people doing kernel development are usually fairly seasoned, and consider what makes sense for themselves. E.g. kernel modules run a reset for the hardware in init(), and scripts/tests can be used to handle unbind-unload-rebind in driver development.

Managing state persistence over resets or SW updates is an age-old problem. It's an interesting problem - I studied dynamic software updates, and state migration is one key challenge there. DSU has not gained traction, though. It's tough to update both the code and the runtime state. We have a legacy and culture of restarts. In critical systems it's designed and handled with hardware, even system redundancy - not at the single-component level.

> If the developer is working on that kernel module it's probably okay,
> but what if they're working on something else and just pulled a bunch of
> changes that happened to include a kernel module change? Same with if
> e.g. the Weston service changes, except in that case we have no way to
> restore the previous state. So either we choose not to automatically
> restart Weston, in which case we're not really running the new system,
> but a weird hybrid that may have bugs not present in either the old or
> new system, or we do restart Weston, and risk an unsuspecting user
> losing their whole windowing session.

True. In these scenarios I've learned not to try to cover every possible scenario on behalf of other people. Developers are smart, and they understand and learn the constraints of system changes.

> If there's a way we could make this work well, I'm open to it, but it's
> not at all obvious to me what that would look like.

Given the above, I try to keep this at the high-level design. Two configurations:

a. development - enables development, testing and debugging, with caveats
b. user - immutable, hardened, strong security promises

Now we have b., which makes a. by design impossible *on target*, making development iterations slow. The question is whether we want to generally enable a. and then say - "there it is for anyone who needs it".

Best,
-Ville
[-- Attachment #1: Type: text/plain, Size: 3650 bytes --]

On Thu, Aug 18, 2022 at 12:15:28PM +0300, Ville Ilvonen wrote:
> On Wed, Aug 17, 2022 at 4:39 PM Alyssa Ross <hi@alyssa.is> wrote:
>
> > Yeah, I agree something like this would be good. Especially when
> > testing on hardware as you say. I would like to think more about
> > exactly how this should work. Do you think that, if it were
> > possible to develop Spectrum on Spectrum, it would be acceptable to
> > have to reboot into a new configuration if the host system was
> > changed? (Assume that the process of actually building the new system
> > is fast — the reboot would be the main overhead.)
>
> Option to develop on the target system is something people (not vocal
> here) already expect. I think booting into a new configuration is MCU
> style development process and not a fast enough iteration cycle on
> Linux user space in all scenarios. Even in kernel driver development,
> one may want to unload/load module during development but even disable
> module loading as a security hardening mechanism in
> deployment/production configuration. Another example is security
> policies development (SELinux) - you want to iterate and test them in
> user space during development but you want to deploy them immutable.
> Then again, kernel or device tree changes will require rebooting.

Okay. Obviously we can support booting with the filesystem mounted read-write and no dm-verity. But the problem with that is that then changes have to be manually copied back to the source tree. We could try to add some conveniences for that (e.g. a facility to diff the modified root filesystem with the original version). It still means making changes to the compiled system, rather than working on the source, but for the kind of changes you're talking about, maybe doing it that way and then integrating the changes into the Spectrum source at the end wouldn't be too bad?
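[Editor's note: the "facility to diff the modified root filesystem with the original version" mentioned above could be sketched roughly like this. This is a hypothetical helper for illustration only, not existing Spectrum tooling — `diff_trees` and its output format are invented for this example.]

```python
import filecmp
import os


def diff_trees(original: str, modified: str) -> dict[str, list[str]]:
    """Compare two directory trees, reporting paths that were added,
    removed, or changed in `modified` relative to `original`.

    Uses filecmp.dircmp, which compares files shallowly (stat
    signature); a real tool would probably compare content hashes.
    """
    result: dict[str, list[str]] = {"added": [], "removed": [], "changed": []}

    def walk(cmp: filecmp.dircmp, prefix: str = "") -> None:
        result["added"] += [os.path.join(prefix, n) for n in cmp.right_only]
        result["removed"] += [os.path.join(prefix, n) for n in cmp.left_only]
        result["changed"] += [os.path.join(prefix, n) for n in cmp.diff_files]
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(original, modified))
    return result
```

The output (added / removed / changed paths) would be the starting point for copying changes back from the booted image into the source tree.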
Ideally it would be nice to have something like nixos-rebuild switch, that can build a new system and then switch into it. But the problem is, you have to figure out how to *automatically* take the runtime state of the system (the processes and services that are running), and somehow carry that over into the new system, where anything could have changed. So I'm not sure how we'd do that. NixOS's implementation of this is massive, tightly coupled to both NixOS and systemd, and still it's not very difficult to get it into a weird state where you need to reboot anyway to get an accurate idea of what the change does (or, if you don't realise you need to do that, you just think the change hasn't worked).

If a kernel module has changed, do you try to unload and reload it? That's pretty complicated to implement, because you have to unbind the devices from it, unload the module, then rebind them, and even then the module might have had important state that's been lost. If the developer is working on that kernel module it's probably okay, but what if they're working on something else and just pulled a bunch of changes that happened to include a kernel module change?

It's the same if e.g. the Weston service changes, except in that case we have no way to restore the previous state. So either we choose not to automatically restart Weston, in which case we're not really running the new system, but a weird hybrid that may have bugs not present in either the old or new system, or we do restart Weston, and risk an unsuspecting user losing their whole windowing session.

If there's a way we could make this work well, I'm open to it, but it's not at all obvious to me what that would look like.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
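For reference, the unbind/unload/rebind dance described in the message above looks roughly like this. This is a hedged sketch, not Spectrum tooling: the driver name and PCI address are placeholders, and with DRY_RUN=1 the commands are only printed, never executed.

```shell
# Sketch of reloading a changed kernel module in place. With DRY_RUN=1 the
# commands are only printed; running this for real requires root and the
# correct driver/device names for the machine at hand.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else sh -c "$*"; fi
}

reload_module() {
  driver=$1; shift
  for dev in "$@"; do
    # Detach each device so the module's refcount can drop to zero.
    run "echo $dev > /sys/bus/pci/drivers/$driver/unbind"
  done
  run "modprobe -r $driver"   # unload the old module...
  run "modprobe $driver"      # ...and load the rebuilt one
  for dev in "$@"; do
    # Reattach the devices; any state the old module held is gone.
    run "echo $dev > /sys/bus/pci/drivers/$driver/bind"
  done
}

# Placeholder driver and PCI address, purely for illustration:
reload_module e1000e 0000:00:1f.6
```

Even this simple sequence shows the problem Alyssa raises: the rebuilt module comes back with none of the state the old one held, and nothing here helps when the changed component is a user-space service rather than a driver.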
On Wed, Aug 17, 2022 at 4:39 PM Alyssa Ross <hi@alyssa.is> wrote:
> Yeah, I agree something like this would be good. Especially when testing on hardware as you say. I would like to think more about exactly how this should work. Do you think that, if it were possible to develop Spectrum on Spectrum, it would be acceptable to have to reboot into a new configuration if the host system was changed? (Assume that the process of actually building the new system is fast — the reboot would be the main overhead.)

Option to develop on the target system is something people (not vocal here) already expect. I think booting into a new configuration is an MCU style development process and not a fast enough iteration cycle on Linux user space in all scenarios. Even in kernel driver development, one may want to unload/load a module during development but disable module loading as a security hardening mechanism in the deployment/production configuration. Another example is security policy development (SELinux) - you want to iterate and test policies in user space during development but you want to deploy them immutable. Then again, kernel or device tree changes will require rebooting. Point is - rebooting for everything is not a development-friendly enough mechanism on its own.

> You mean set it to your own, custom version of nixos-hardware that included WIP support for the board you were working on? Yeah, that

Yeah, like having our own git repo as the source for nix-channel to pick the nixos-hardware module from. Ideally, some of that could be easily contributed to nixos-hardware upstream - currently it seems to accept support of varying quality for different HW, so that should not be an issue.

> wouldn't be a problem at all.

Sounds good.

> > > In the medium term, I'd like to decouple nixos-hardware's custom kernel
> > > packages from NixOS configurations. But that would require somebody finding the time to sit down and make the change, and also convince other nixos-hardware users that it's the way to go. I don't think it would be a problem though, especially if it meant nixos-hardware getting more active maintenance, which it's lacking at the moment because it's not too well advertised and so not enough people are using it.
> >
> > Ideally yes and I hope we could contribute to that effort. However, we need to focus on getting Spectrum running on aarch64 with imx8 for now. For that I'm reading that the nixos-hardware approach is preferred.
>
> Yeah, it's definitely the way to go. And if we can make nixos-hardware better in future, that would just be further progress on top of integrating nixos-hardware as described above.

Right. Even generic iMX8 dev board support might be interesting for other projects. I've seen the iMX8M EVK being used at least with Zircon on Fuchsia.

Best,
-Ville
[-- Attachment #1: Type: text/plain, Size: 4967 bytes --]

On Wed, Aug 17, 2022 at 04:25:20PM +0300, Ville Ilvonen wrote:
> > > As of now, the spectrum fork for aarch64 just adds another config after rpi configs and replaces the default config to use that to build. With small changes this could be handled like rpi configs. In addition, cloud-hypervisor accepts kernel only in EFI format for aarch64[1]. Anyway, this would allow us to build an aarch64 Spectrum installer - even make it with a more generic kernel. That takes us to ARM vendor/device specific HW quirks which would need to be handled anyway. I'll intentionally leave device specific kernel hardening and disabling kernel module loading for security reasons aside for now. As of now the vendor/device specifics are not supported unless one builds a device specific Spectrum image with all configs set at build time and skips the installer altogether.
> > >
> > > The other option that I see: we discussed earlier nixos-hardware and device specific modules. That would bring nixos configuration.nix and installation supporting scripts to Spectrum, though. Those could be called from the Spectrum installer but it would change the installer logic from writing an image to dynamically configuring the device during install based on user selections.
> >
> > I don't think the full NixOS module system, with rebuilds, etc. belongs in Spectrum. Being able to treat images as immutable makes it easier to provide various strong security guarantees. But not wanting to
>
> This was and still is one important design decision to build on Spectrum. Regardless, it makes development iterations on target HW more challenging than needed. Conceptually we've had discussions on separating concerns between "development system - writable, easily updatable" and "production system - immutable, updated as image".
> The latter could have more hardening, security policies etc. enabled, which makes development more difficult by design. In practice, some developers have remounted the Spectrum file system as writable to make development iterations easier. In many cases, the development must be done on the target HW, which brings us back to the need for the "development system" configuration. The update image iteration cycle is too slow.

Yeah, I agree something like this would be good. Especially when testing on hardware as you say. I would like to think more about exactly how this should work. Do you think that, if it were possible to develop Spectrum on Spectrum, it would be acceptable to have to reboot into a new configuration if the host system was changed? (Assume that the process of actually building the new system is fast — the reboot would be the main overhead.)

> > integrate the full module system doesn't prevent us taking advantage of nixos-hardware. It's possible to evaluate NixOS modules standalone in a Spectrum build, in fact we already do that to reuse NixOS's list of all redistributable firmware packages[3]. We could do a similar thing to extract the kernel that nixos-hardware configures for a particular device, something like this:
> >
> >     inherit (nixos {
> >       configuration = [ <nixos-hardware/pine64/pinebook-pro/default.nix> ];
> >     }.config.boot.kernelPackages) kernel;
> >
> > And naturally which device that's pulling from should be configurable — we'll want to have a config file somewhere, just not a full NixOS one.
>
> This made me propose nixos-hardware usage more and think if we could have what I called a "development configuration". In essence, nixos-hardware is a NixOS channel and we could have a custom channel to support development as well (e.g. dev git repo(s)).

You mean set it to your own, custom version of nixos-hardware that included WIP support for the board you were working on? Yeah, that wouldn't be a problem at all.
> > In the medium term, I'd like to decouple nixos-hardware's custom kernel packages from NixOS configurations. But that would require somebody finding the time to sit down and make the change, and also convince other nixos-hardware users that it's the way to go. I don't think it would be a problem though, especially if it meant nixos-hardware getting more active maintenance, which it's lacking at the moment because it's not too well advertised and so not enough people are using it.
>
> Ideally yes and I hope we could contribute to that effort. However, we need to focus on getting Spectrum running on aarch64 with imx8 for now. For that I'm reading that the nixos-hardware approach is preferred.

Yeah, it's definitely the way to go. And if we can make nixos-hardware better in future, that would just be further progress on top of integrating nixos-hardware as described above.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
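One note for anyone trying the kernel-extraction snippet quoted in the messages above: in Nix, attribute selection binds tighter than function application, so the `nixos { … }.config` form needs parentheses around the call before selecting `.config`. A standalone sketch under that assumption (same example device as in the thread; this requires `<nixpkgs>` and `<nixos-hardware>` on the search path and is untested here):

```nix
# Sketch: evaluate a nixos-hardware profile standalone and pull out only
# the kernel package, without adopting the full NixOS module system.
let
  nixos = import <nixpkgs/nixos>;
  evaluated = nixos {
    configuration.imports = [ <nixos-hardware/pine64/pinebook-pro/default.nix> ];
  };
in
  evaluated.config.boot.kernelPackages.kernel
```

As discussed above, the device path here would come from a small Spectrum config file rather than being hard-coded.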
On Wed, Aug 17, 2022 at 10:52 AM Alyssa Ross <hi@alyssa.is> wrote:
> On Tue, Aug 16, 2022 at 06:50:48PM +0300, Ville Ilvonen wrote:
> > Hi,
> >
> > Now that we've been developing Spectrum ARM (aarch64) support with iMX8 boards, I'd like to get back to Spectrum HW configuration design.
> >
> > On x86 the generic image with kernel supporting most devices as modules can make sense. On ARM, the vendor specific BSP HW quirks are more common.
>
> What impact will Google's Generic Kernel Image[1] efforts have on this? As I understand it, going forward Android devices won't be allowed to make arbitrary kernel changes, and will be restricted to adding extra modules. Which presumably means that SOCs aiming to be used for Android devices will have to work with the standard Android kernel, which is hopefully getting closer to mainline over time[2].

ARM goes beyond the Google Android ecosystem - including automotive and, increasingly, servers and personal devices other than smartphones. GKI is definitely the right direction but I'm afraid Google can't pull in the BSPs in the other sectors. Which takes us to your point below. Nothing to add there.

> Regardless, I understand that there will always be some cases where a non-upstream kernel is a necessity (even if the Android situation gets better, there will always be a new MacBook that doesn't have upstream drivers yet!) so I am keen to figure out how to support that well in Spectrum.
>
> [1]: https://source.android.com/docs/core/architecture/kernel/generic-kernel-image
> [2]: https://lwn.net/Articles/830979/
>
> > As of now, the spectrum fork for aarch64 just adds another config after rpi configs and replaces the default config to use that to build. With small changes this could be handled like rpi configs. In addition, cloud-hypervisor accepts kernel only in EFI format for aarch64[1].
> > Anyway, this would allow us to build an aarch64 Spectrum installer - even make it with a more generic kernel. That takes us to ARM vendor/device specific HW quirks which would need to be handled anyway. I'll intentionally leave device specific kernel hardening and disabling kernel module loading for security reasons aside for now. As of now the vendor/device specifics are not supported unless one builds a device specific Spectrum image with all configs set at build time and skips the installer altogether.
> >
> > The other option that I see: we discussed earlier nixos-hardware and device specific modules. That would bring nixos configuration.nix and installation supporting scripts to Spectrum, though. Those could be called from the Spectrum installer but it would change the installer logic from writing an image to dynamically configuring the device during install based on user selections.
>
> I don't think the full NixOS module system, with rebuilds, etc. belongs in Spectrum. Being able to treat images as immutable makes it easier to provide various strong security guarantees. But not wanting to

This was and still is one important design decision to build on Spectrum. Regardless, it makes development iterations on target HW more challenging than needed. Conceptually we've had discussions on separating concerns between "development system - writable, easily updatable" and "production system - immutable, updated as image". The latter could have more hardening, security policies etc. enabled, which makes development more difficult by design. In practice, some developers have remounted the Spectrum file system as writable to make development iterations easier. In many cases, the development must be done on the target HW, which brings us back to the need for the "development system" configuration. The update image iteration cycle is too slow.
> integrate the full module system doesn't prevent us taking advantage of nixos-hardware. It's possible to evaluate NixOS modules standalone in a Spectrum build, in fact we already do that to reuse NixOS's list of all redistributable firmware packages[3]. We could do a similar thing to extract the kernel that nixos-hardware configures for a particular device, something like this:
>
>     inherit (nixos {
>       configuration = [ <nixos-hardware/pine64/pinebook-pro/default.nix> ];
>     }.config.boot.kernelPackages) kernel;
>
> And naturally which device that's pulling from should be configurable — we'll want to have a config file somewhere, just not a full NixOS one.

This made me propose nixos-hardware usage more and think if we could have what I called a "development configuration". In essence, nixos-hardware is a NixOS channel and we could have a custom channel to support development as well (e.g. dev git repo(s)).

> In the medium term, I'd like to decouple nixos-hardware's custom kernel packages from NixOS configurations. But that would require somebody finding the time to sit down and make the change, and also convince other nixos-hardware users that it's the way to go. I don't think it would be a problem though, especially if it meant nixos-hardware getting more active maintenance, which it's lacking at the moment because it's not too well advertised and so not enough people are using it.

Ideally yes and I hope we could contribute to that effort. However, we need to focus on getting Spectrum running on aarch64 with imx8 for now. For that I'm reading that the nixos-hardware approach is preferred.

> I am intrigued by the idea of the installer being able to generate images, though. Using Nix with a substituter configured on the installer image would mean that it could download a pre-built image if one exists for that platform, or fall back to generating one if not.
> (And if there was a pre-built image, it would even still be able to properly Secure Boot with a trusted key once we're in a position to do that.) So I'm definitely keen on exploring that idea, but it might be something to do a bit down the road since the work to generate board-specific Spectrum images wouldn't be contingent on it.

Agree, but I'd also move this down the road.

> [3]: https://spectrum-os.org/git/spectrum/tree/host/rootfs/default.nix?id=b01594b2c089ce2434dacddccf9a285af7334d24#n64
>
> > Any thoughts which would be the preferred way? Maybe some other way? In the end, HW specifics are needed also on x86 as we saw with NUCs and different Lenovo laptops in the spring. I'm not convinced one image to rule them all is realistic or secure.
>
> The issues we saw with Lenovo laptops, etc. wouldn't have been solved by device specific images — those devices were broken because of bugs that hadn't affected any other systems I'd tried, but in the end the fixes were applicable everywhere. That itself isn't an indication that we need device-specific images, just more hardware testing.

Fair enough, there were those as well. But there were also device specific quirks. There always will be. HW makers sell HW with their constraints. There will be hacks.

> But as I said above, I'm open to having an officially blessed configuration mechanism to make it possible to build custom images.

Sounds good. Would you also please share your thoughts on this "development configuration"? Like some nix tooling enabled for "development" and not included in the immutable "production" image.

> > Finally, this is by no means blocking the hardened iMX8 based Spectrum development, but we'll keep that work in a Spectrum fork until there's an agreed path to implement this. Integrating this sooner and making it more generic would make Spectrum more useful for a wider audience.
>
> Makes sense — although of course, if any of the work that's been done so far is not i.MX8-specific, but is instead just generic stuff to make Spectrum more ARM-friendly or cross-compilable, I'd be happy to look at those patches already since they'll be relevant regardless of how we do device-specific stuff.

Thanks, I think it's time to start looking for those with generic stuff in mind so that the board bring-up won't diverge too much from the Spectrum mainline.

-Ville