From: Alyssa Ross <hi@alyssa.is>
To: discuss@spectrum-os.org
CC: devel@spectrum-os.org, edef
Subject: This Week in Spectrum, 2020-W29
Date: Mon, 20 Jul 2020 01:34:08 +0000
Message-ID: <87pn8rezqn.fsf@alyssa.is>

This has been a week of thinking I wanted to do one thing, not being
sure how to do it, and finding out that there was a better way. I'll
write it up in the order it happened.

crosvm
------

Last week, I described how I wanted to implement a virtio proxy, to
allow a kernel in an application VM to use a virtual device in another
VM. I was wondering how to manage virtio buffers, and thought that I
probably wanted an allocator to be able to manage throwing buffers of
different sizes around.

This turned out to be a case of the XY problem[1]. I couldn't find a
good solution, but it turned out that an allocator wasn't what I wanted
anyway. edef pointed out that I could just make the shared memory I
allocated as big as necessary to hold buffers of the maximum size I
wanted to support. The kernel will only actually allocate pages as they
are written to, and I could use fallocate[2] with FALLOC_FL_PUNCH_HOLE
to tell the kernel it can drop pages when I'm done with them. This
would mean that an unusually large buffer would only take up lots of
memory while it was in use; as soon as it was done with, the kernel
could take the memory back. Exactly what I wanted from an allocator,
but with no need for an allocator at all! (There's a rough sketch of
this pattern below.)

This made the implementation much simpler, and by Friday I was able to
get the proxy into a state where it could pass unit tests that
transported messages in both directions through it.

And then it was suggested to me that maybe a virtio proxy is not what I
want after all.

The main disadvantage of a virtio proxy is that it requires context
switching to the host to send data between VMs. This is a trade-off I
was aware of, but a virtio proxy is pretty straightforward to write as
inter-VM communication systems go, and I was not aware of anything else
that would be up to the job. As it turns out, there is something.

vhost-user is a mechanism for connecting a virtio device to, say, a
userspace network stack in a performant way. I was aware of vhost-user,
but what I was not aware of was virtio-vhost-user[3].
virtio-vhost-user is a proposed mechanism that allows a VMM to forward
a vhost-user backend to a VM. This means that two VMs could share
virtqueues directly, with no copy step through the host. It also means
there would be no opportunity for the host to mediate communication
between two guests, but that wasn't really on the cards anyway -- if
it's ever required, a virtio proxy would probably be the way to go. For
all the other cases, virtio-vhost-user would be a faster, cleaner way
of sharing network devices between VMs.

The main problem with virtio-vhost-user is that it's still in its
infancy. There's a patchset[4] implementing it for QEMU that's a couple
of years old, but it has not been accepted upstream. The main blocker
seems to be first standardising it in the Virtio spec[5][6].
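An aside, before going on: here's a rough sketch of the punch-hole
trick described above. This is illustrative C, not the proxy's actual
code, and the region size, names, and offsets are made up:

    /* Sketch: one fixed-size shared memory region, sized for the
     * largest buffer we'll ever need. Pages are only allocated when
     * written to, and FALLOC_FL_PUNCH_HOLE hands them back. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_SIZE (64UL << 20) /* illustrative worst case */

    int main(void)
    {
        /* Reserve the whole region up front. No physical pages are
         * allocated yet, just address space and file size. */
        int fd = memfd_create("virtio-buffers", MFD_CLOEXEC);
        if (fd < 0 || ftruncate(fd, REGION_SIZE) < 0)
            return 1;

        uint8_t *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
        if (base == MAP_FAILED)
            return 1;

        /* Writing to part of the region faults pages in on demand... */
        size_t buf_off = 0, buf_len = 8UL << 20;
        memset(base + buf_off, 0xaa, buf_len);

        /* ...and once a buffer is done with, punching a hole lets the
         * kernel reclaim exactly those pages, while the mapping and
         * the file size stay intact for reuse. */
        fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  buf_off, buf_len);
        return 0;
    }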
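Also worth spelling out, since vhost-user keeps coming up: it's a
control protocol spoken over a Unix socket between a VMM and a backend
process. Every control message starts with a small fixed header, and
the heavyweight resources -- guest memory regions, kick/call
eventfds -- are passed as file descriptors using SCM_RIGHTS. Here's a
sketch of the header layout, going by QEMU's vhost-user spec; the
struct name is mine:

    #include <stdint.h>

    /* Header of every vhost-user control message, per QEMU's
     * docs/interop/vhost-user.rst. Numbers are in native byte order,
     * and `size` bytes of payload follow the header. The data path
     * never touches this socket -- it goes through shared-memory
     * virtqueues. */
    struct vhost_user_header {
        uint32_t request; /* message type, e.g. VHOST_USER_GET_FEATURES */
        uint32_t flags;   /* bits 0-1: protocol version, currently 1 */
        uint32_t size;    /* payload length in bytes */
    };

Conceptually, all virtio-vhost-user does is carry this same
conversation into a guest, so that the backend process can live inside
a VM instead of on the host.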
The good news on the standardisation front is that the process seems
to be progressing actively at the moment. It's being discussed on the
virtio-dev mailing list basically right now, with the most recent
emails dated Friday. (Unfortunately, I don't know of a good web archive
for virtio-dev, but you can find the thread on Gmane if you're
interested but not subscribed to the list.)

More good news: virtio-vhost-user mostly works by composing things that
already exist. There's no kernel work required, because devices are
just exposed by the VMM as regular virtio devices. The frontend VM
(i.e. the one that uses the virtual device, as opposed to the one that
provides it) doesn't need any special virtio-vhost-user support,
because to it everything looks like normal vhost-user. Only the backend
VM needs virtio-vhost-user support, because its VMM has to expose the
vhost-user backend from the host to that VM.

This means that provisionally using virtio-vhost-user in Spectrum
actually looks very feasible, with a couple of compromises. Just for
evaluation purposes, it's not worth writing a virtio-vhost-user device
for crosvm. But the VMs that need that device are the very specialised
ones -- VMs that manage networking or block devices or similar. So for
these VMs, for now, we could use QEMU with the virtio-vhost-user patch.
I investigated what it would take to port the patchset to the most
recent QEMU version, and the answer appears to be "not much at all".
Obviously having two VMMs in the Trusted Computing Base (TCB) isn't
something we'd want in the long term, but it would be fine for, say,
reaching the next funding milestone. If we decide that
virtio-vhost-user is the way to go after all, support in crosvm can be
added then -- in general, adding a new virtio device to crosvm isn't a
huge undertaking.

Earlier, I said that the application side of the communication doesn't
need anything special, because to it this is just regular vhost-user.
This is true, but I glossed over the fact that crosvm doesn't actually
implement vhost-user. Implementing vhost-user in crosvm would probably
be a big deal at this stage, and not something I feel would be a good
use of my time.

BUT! Remember, crosvm has two children: Amazon's Firecracker[7], which
aims at so-called "serverless" computing; and Intel's Cloud
Hypervisor[8], which aims at traditional, full-system server
virtualisation. Both of these children inherited the crosvm device
model from their parent, and Cloud Hypervisor implements
vhost-user[9]. So I _think_ it should be possible to pretty much lift
the vhost-user implementation from Cloud Hypervisor and use it in
crosvm. Pretty neat!

So, the setup I'd like to evaluate is QEMU with the virtio-vhost-user
patch on one side, and crosvm with Cloud Hypervisor's vhost-user
implementation on the other. It might well be that there are
complications here. If there are, I'll probably just finish the proxy
and move on for now, because I want to keep up the pace. I do think
that virtio-vhost-user is probably the way to do interguest networking
in the long term, though.

Another thing I've realised is that I don't need to worry about pulling
bits out of crosvm to run in other VMs. I focused a lot on that towards
the beginning of the year, mostly motivated by Wayland, because the
virtio Wayland implementation in crosvm is the only one there is.
Now that interguest Wayland works in a different way, though, there's
no need to continue down this path, because things like networking can
be done in more normal ways, through virtio and the device VM kernel.

[1]: https://en.wikipedia.org/wiki/XY_problem
[2]: https://man7.org/linux/man-pages/man2/fallocate.2.html
[3]: https://wiki.qemu.org/Features/VirtioVhostUser
[4]: https://github.com/stefanha/qemu/compare/master...virtio-vhost-user
[5]: https://lists.nongnu.org/archive/html/qemu-devel/2019-04/msg03082.html
[6]: https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html
[7]: https://firecracker-microvm.github.io/
[8]: https://github.com/cloud-hypervisor/cloud-hypervisor
[9]: https://github.com/cloud-hypervisor/cloud-hypervisor/blob/b4d04bdff6a7e2c3da39fdb5b1906a228c38223e/docs/device_model.md#vhost-user-devices

Overall, it's been frustrating to try things, discover they're not
going to work (or not going to work as well as some other thing), and
have to make a call on whether to keep going with what I know is the
worse option or switch to the better thing. I have to keep reminding
myself that Spectrum is a research project, and there are always going
to be false starts like this. Lots of what we're doing is either very
unusual (virtio-vhost-user) or brand new (interguest Wayland), after
all.