From: Alyssa Ross <hi@alyssa.is>
To: discuss@spectrum-os.org, devel@spectrum-os.org
Subject: This (Last) Week in Spectrum, 2020-W32
Date: Tue, 11 Aug 2020 00:07:17 +0000
Message-ID: <87wo26jb9m.fsf@alyssa.is>
Last week, I'd just finished getting the cloud-hypervisor vhost-user-net
frontend code to build as part of crosvm, and the next step was testing it.


crosvm
------

I wrote some hacky code that replaced the virtio-net device creation in
crosvm with an instance of the ported vhost-user-net code.

When I booted crosvm, there were some of the expected simple oversights of
mine that needed to be addressed, but once those were taken care of, it
still didn't quite work. The VM boots, sees a network interface, and even
communicates with the vhost-user-net backend! But the vhost-user-net code
never realises (or is never told) that it has traffic waiting, and so that
traffic is never processed.

Unsure of what to do about this, I decided to turn to cloud-hypervisor and
look at how the code ran there.


cloud-hypervisor
----------------

I wanted to try running the cloud-hypervisor v-u-n backend I was using for
testing (it's much simpler than DPDK -- it just sends traffic to a TAP
device) with QEMU as the frontend. QEMU is a VMM I'm much more familiar
with than cloud-hypervisor, and I thought it would be useful to have a
working frontend/backend combination to compare against.

I had some problems, though, because apparently nobody had ever wanted to
use QEMU with the cloud-hypervisor vhost-user-net backend before -- or if
they had, they hadn't wanted to enough to make it work. The
cloud-hypervisor backend didn't implement the vhost-user spec correctly in
a few subtle ways, which made it incompatible with QEMU. I won't explain
every subtle issue, but I ended up writing a few patches[1][2] for
cloud-hypervisor and the "vhost" crate it depends on (which is in the
process of being moved under the rust-vmm umbrella).

One issue I will go into a little detail on: the wording in the spec was
unclear, and QEMU interpreted it one way, and cloud-hypervisor the other.
I ended up sending an email[3] to the author of the spec asking for
clarification. He answered my question, and we discussed how the wording
could be improved. He liked my second attempt at improving the wording,
and asked me to send a patch, but preferably not right now, because QEMU
is currently gearing up for a release, scheduled for next week if
everything goes well.

Since I wrote these cloud-hypervisor patches, and had to test them, I
ended up having to learn how to use cloud-hypervisor anyway, to make sure
I hadn't broken it while fixing the backend up to work with QEMU. Oh well.
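For reference, the QEMU side of a test like that looks roughly as follows.
This is only a sketch, not necessarily the exact invocation used here: the
socket path, memory size, guest kernel, and MAC address are placeholders,
and the vhost-user-net backend is assumed to already be listening on the
socket.

    # vhost-user requires the guest RAM to live in memory the backend
    # process can map (share=on), hence the memory-backend-file/-numa pair.
    qemu-system-x86_64 -enable-kvm -m 512M -nographic \
        -object memory-backend-file,id=mem,size=512M,mem-path=/dev/shm,share=on \
        -numa node,memdev=mem \
        -chardev socket,id=char0,path=/tmp/vhost-user-net.sock \
        -netdev type=vhost-user,id=net0,chardev=char0 \
        -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
        -kernel vmlinux -append console=ttyS0

The shared-memory part is the important one: a vhost-user backend
processes the virtqueues by mapping the guest's memory directly, so any
frontend has to place guest RAM in memory it can share with the backend.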
Once this was done, I could use both QEMU and cloud-hypervisor with the
backend, but not crosvm.

But it was a little more complex than that. When I ported the v-u-n code
to crosvm, I ported the first version of it that was added to the
cloud-hypervisor tree, rather than the latest version. The theory here was
that the earlier version would be closer to crosvm, because
cloud-hypervisor would have had less time to diverge. Then, once I had
that working, I could add on the later changes gradually.

What I didn't account for here is that the initial version of the v-u-n
frontend in cloud-hypervisor didn't really work properly, and needed some
time to bake before it did. So, having now had this experience, I think it
might be better to port the latest version, and accept that the porting
might be a bit harder, because the end result is more likely to work.

[1]: https://github.com/cloud-hypervisor/vhost/pull/22
[2]: https://github.com/cloud-hypervisor/cloud-hypervisor/pull/1565
[3]: https://lore.kernel.org/qemu-devel/87sgd1ktx9.fsf@alyssa.is/


libgit2
-------

While bisecting cloud-hypervisor to see if I could figure out when the
v-u-n frontend started working properly, I encountered a large range of
commits that I couldn't build any more, because Cargo couldn't resolve a
git dependency. The dependency was locked to a commit that was no longer
on the branch it had been on at the time the cloud-hypervisor commit was
made. Despite knowing the exact commit it needed, Cargo fetched the branch
the commit used to be on. This is because it is generally not possible to
fetch arbitrary commits with git. Some servers, like GitHub, do allow
this, however, and I wondered why Cargo wouldn't at least fall back to
trying that.

As it turns out, it actually couldn't do that, though! Cargo uses libgit2,
and libgit2 doesn't support fetching arbitrary commits. So I wrote a quick
patch to libgit2 to support this[4]. It's only a partial implementation,
though, because I don't find libgit2 to be a particularly easy codebase to
work in (although it's better than git!). So I'm hoping somebody who knows
more about it than me will help me figure out how to finish it.

[4]: https://github.com/libgit2/libgit2/pull/5603


Next week, I'm hoping that I'll be able to get vhost-user-net in crosvm
working. I think this will probably mean porting the code again, using the
latest version, which is a bit of a shame, but at least I have an idea of
what to do next.

I am, overall, feeling pretty optimistic, though. I'm pretty confident
that we can get some sort of decent (if imperfect) network hardware
isolation even though virtio-vhost-user might not be ready yet, which was
something I was worried about before. I don't really want to go into
detail on that now, because this is already a long email and it's already
a day late (I was tired yesterday), but essentially: we could forward the
network device to a VM that would run the driver, and forward traffic back
to the host over virtio-net. The host could handle this either in kernel
space, or in userspace with DPDK, but the important thing is that the only
network driver it would need to support would be virtio-net. No talking to
hundreds of different Wi-Fi cards and hoping that none of their drivers
have a vulnerability.

So, not perfect compared to proper guest<->guest networking, but a step in
the right direction, and one that should be as simple as possible to
upgrade to virtio-vhost-user once that becomes possible.
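To make that idea a bit more concrete, here is a rough sketch of what such
a network VM could look like with stock QEMU. This is an illustration
only, not something Spectrum does today: the PCI address, interface name,
and guest kernel are placeholders. The physical NIC is passed through to
the guest with VFIO, the guest runs the vendor driver, and the host's end
of the link is an ordinary TAP device behind virtio-net.

    # Host side: create the TAP device that carries traffic back from the
    # network VM. This TAP plus virtio-net is all the host needs to drive.
    ip tuntap add dev netvm0 mode tap

    # Network VM: owns the real NIC via VFIO passthrough and connects it
    # to the virtio-net device backed by the host's TAP.
    qemu-system-x86_64 -enable-kvm -m 256M -nographic \
        -device vfio-pci,host=01:00.0 \
        -netdev tap,id=hostlink,ifname=netvm0,script=no,downscript=no \
        -device virtio-net-pci,netdev=hostlink \
        -kernel vmlinux -append console=ttyS0

Inside the guest, the two interfaces would be bridged or routed together;
on the host, netvm0 is then the only network interface the kernel (or
DPDK) ever has to deal with.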