author:    aszlig <aszlig@redmoonstudios.org>  2017-07-27 10:16:29 +0200
committer: aszlig <aszlig@redmoonstudios.org>  2017-09-13 23:16:33 +0200
commit:    b3162a107491ce306996de591926830b68e9bc69 (patch)
tree:      04222c6f0804ec27f7fce574eaeff53a47e16b81 /nixos/tests/common
parent:    56ea313c29986c4ccbb12396de06efb3c47b81ac (diff)
nixos/tests: Add common modules for letsencrypt
These modules implement a way to test ACME against a test instance of
Letsencrypt's Boulder service. The service implementation lives in
letsencrypt.nix; the second module (resolver.nix) is a support module
for the former, but can also be used for tests not involving ACME.

The second module provides a DNS server hosting a root zone that
contains all the zones and /etc/hosts entries (except loopback) in the
entire test network, so it can also be useful for other modules that
need DNS resolution.
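As a rough sketch (all names and addresses here are illustrative, not taken from a real test run), the fake root zone generated by resolver.nix looks like this: an SOA/NS pair for the root, one glue record plus NS delegation per discovered zone, and plain A/AAAA records for the discovered extraHosts entries:

```
$TTL 3600
.                IN SOA  ns.fakedns. admin.fakedns. ( 1 3h 1h 1w 1d )
.                IN NS   ns.fakedns.
ns.fakedns.      IN A    192.168.1.1
; delegation for a zone served by another test node
ns1.fakedns.     IN A    192.168.1.2
example.com.     IN NS   ns1.fakedns.
; records discovered from networking.extraHosts (loopback skipped)
letsencrypt.org. IN A    192.168.1.3
```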

Originally, I wrote these modules for the Headcounter deployment, but
I've refactored them a bit to be generally useful to NixOS users. The
original implementation can be found here:

https://github.com/headcounter/deployment/tree/89e7feafb/modules/testing

Quoting parts from the commit message of the initial implementation of
the Letsencrypt module in headcounter/deployment@95dfb31110397567534f2:

    This module is going to be used for tests where we need to
    impersonate an ACME service such as the one from Letsencrypt within
    VM tests, which is the reason why this module is a bit ugly (I only
    care whether it's working, not whether it's beautiful).

    While the module isn't used anywhere, it will serve as a pluggable
    module for testing whether ACME works properly to fetch certificates
    and also as a replacement for our snakeoil certificate generator.

Also quoting parts of the commit where I have refactored the same module
in headcounter/deployment@85fa481b3431bbc450e8008fd25adc28ef0c6036:

    Now we have a fully pluggable module which automatically discovers
    in which network it's used via the nodes attribute.

    The test environment of Boulder used "dns-test-srv", which is a fake
    DNS server that's resolving almost everything to 127.0.0.1. On our
    setup this is not useful, so instead we're now running a local BIND
    name server which has a fake root zone and uses the mentioned node
    attribute to automatically discover other zones in the network of
    machines and generate delegations from the root zone to the
    respective zones with the primaryIPAddress of the node.

    ...

    We want to use real letsencrypt.org FQDNs here, so we can't get away
    with the snakeoil test certificates from the upstream project but
    now roll our own.

    This not only has the benefit that we can easily pass the snakeoil
    certificate to other nodes, but we can (and do) also use it for an
    nginx proxy that's now serving HTTPS for the Boulder web front end.
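The snakeoil CA and the per-FQDN certificates mentioned above are generated with plain openssl commands (see snakeOilCa and createAndSignCert in letsencrypt.nix). A minimal standalone sketch of the same flow, using a 2048-bit key here purely to keep the example fast (the module itself uses 4096-bit keys and an extension config with subjectAltName):

```shell
set -e
mkdir -p snakeoil
cd snakeoil

# Self-signed CA, valid for ~100 years, no passphrase.
openssl req -newkey rsa:2048 -x509 -sha256 -days 36500 \
  -subj '/CN=Snakeoil CA' -nodes -out ca.pem -keyout ca.key

# Key and CSR for the FQDN to impersonate, then sign the CSR with the CA.
fqdn=acme-v01.api.letsencrypt.org
openssl genrsa -out snakeoil.key 2048
openssl req -new -key snakeoil.key -subj "/CN=$fqdn" -out snakeoil.csr
openssl x509 -req -in snakeoil.csr -sha256 -set_serial 666 \
  -CA ca.pem -CAkey ca.key -days 36500 -out snakeoil.pem

# The leaf certificate should verify against the CA.
openssl verify -CAfile ca.pem snakeoil.pem
```

The resulting ca.pem is what the module exports as test-support.letsencrypt.caCert, so other nodes can trust it via security.pki.certificateFiles.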

The Headcounter deployment tests simulate a production scenario with
real IPs and nameservers, so they don't need to rely on
networking.extraHosts. In this implementation, however, we don't
necessarily want that, so I've added auto-discovery of
networking.extraHosts to the resolver module.
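The extraHosts auto-discovery boils down to: parse /etc/hosts-style lines, drop loopback entries, and emit an A or AAAA record per hostname. The real logic is pure Nix (recordsFromExtraHosts in resolver.nix); this is only a hypothetical shell/awk rendering of the same idea, with made-up addresses:

```shell
# Turn /etc/hosts-style input on stdin into DNS resource records,
# skipping comments, blank lines and loopback addresses.
hosts_to_records() {
  awk '
    NF == 0 || $1 ~ /^#/ { next }              # skip blanks and comments
    $1 == "127.0.0.1" || $1 == "::1" { next }  # skip loopback entries
    {
      type = ($1 ~ /:/) ? "AAAA" : "A"         # colon in address => IPv6
      for (i = 2; i <= NF; i++)
        printf "%s. IN %s %s\n", $i, type, $1
    }
  '
}

hosts_to_records <<'EOF'
127.0.0.1 localhost
192.168.1.2 acme-v01.api.letsencrypt.org letsencrypt.org
fd00::2 example.com
EOF
```

For the sample input this prints one A record each for the two letsencrypt names and an AAAA record for example.com, while the localhost line is discarded.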

Another change here is that the letsencrypt module now falls back to
using a local resolver; the Headcounter implementation, on the other
hand, always required adding an extra test node to act as the resolver.

I could have squashed both modules into the final ACME test, but that
would have made them much less reusable, which is the main reason I put
them in tests/common.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Diffstat (limited to 'nixos/tests/common')
-rw-r--r--  nixos/tests/common/letsencrypt.nix  444
-rw-r--r--  nixos/tests/common/resolver.nix     141
2 files changed, 585 insertions, 0 deletions
diff --git a/nixos/tests/common/letsencrypt.nix b/nixos/tests/common/letsencrypt.nix
new file mode 100644
index 00000000000..162f50385d8
--- /dev/null
+++ b/nixos/tests/common/letsencrypt.nix
@@ -0,0 +1,444 @@
+# Fully pluggable module to have Letsencrypt's Boulder ACME service running in
+# a test environment.
+#
+# The certificate for the ACME service is exported as:
+#
+#   config.test-support.letsencrypt.caCert
+#
+# This value can be used inside the configuration of other test nodes to inject
+# the snakeoil certificate into security.pki.certificateFiles or into package
+# overlays.
+#
+# Another thing that's needed, if you don't use a custom resolver (see below
+# for notes on that), is to add the letsencrypt node as a nameserver to every
+# node that needs to acquire certificates using ACME, because otherwise the
+# API host for letsencrypt.org can't be resolved.
+#
+# A full example of a node setup using this module looks like this:
+#
+# {
+#   letsencrypt = import ./common/letsencrypt.nix;
+#
+#   example = { nodes, ... }: {
+#     networking.nameservers = [
+#       nodes.letsencrypt.config.networking.primaryIPAddress
+#     ];
+#     security.pki.certificateFiles = [
+#       nodes.letsencrypt.config.test-support.letsencrypt.caCert
+#     ];
+#   };
+# }
+#
+# By default, this module runs a local resolver, generated using resolver.nix
+# from the same directory to automatically discover all zones in the network.
+#
+# If you do not want this and want to use your own resolver, you can just
+# override networking.nameservers like this:
+#
+# {
+#   letsencrypt = { nodes, ... }: {
+#     imports = [ ./common/letsencrypt.nix ];
+#     networking.nameservers = [
+#       nodes.myresolver.config.networking.primaryIPAddress
+#     ];
+#   };
+#
+#   myresolver = ...;
+# }
+#
+# Keep in mind that currently only _one_ resolver is supported: if there is
+# more than one resolver in networking.nameservers, only the first one will
+# be used.
+#
+# Also note that whenever you use a resolver from a different test node, it
+# has to be started _before_ the ACME service.
+{ config, pkgs, lib, ... }:
+
+let
+  softhsm = pkgs.stdenv.mkDerivation rec {
+    name = "softhsm-${version}";
+    version = "1.3.8";
+
+    src = pkgs.fetchurl {
+      url = "https://dist.opendnssec.org/source/${name}.tar.gz";
+      sha256 = "0flmnpkgp65ym7w3qyg78d3fbmvq3aznmi66rgd420n33shf7aif";
+    };
+
+    configureFlags = [ "--with-botan=${pkgs.botan}" ];
+    buildInputs = [ pkgs.sqlite ];
+  };
+
+  pkcs11-proxy = pkgs.stdenv.mkDerivation {
+    name = "pkcs11-proxy";
+
+    src = pkgs.fetchFromGitHub {
+      owner = "SUNET";
+      repo = "pkcs11-proxy";
+      rev = "944684f78bca0c8da6cabe3fa273fed3db44a890";
+      sha256 = "1nxgd29y9wmifm11pjcdpd2y293p0dgi0x5ycis55miy97n0f5zy";
+    };
+
+    postPatch = "patchShebangs mksyscalls.sh";
+
+    nativeBuildInputs = [ pkgs.cmake ];
+    buildInputs = [ pkgs.openssl pkgs.libseccomp ];
+  };
+
+  mkGoDep = { goPackagePath, url ? "https://${goPackagePath}", rev, sha256 }: {
+    inherit goPackagePath;
+    src = pkgs.fetchgit { inherit url rev sha256; };
+  };
+
+  goose = let
+    owner = "liamstask";
+    repo = "goose";
+    rev = "8488cc47d90c8a502b1c41a462a6d9cc8ee0a895";
+    version = "20150116";
+
+  in pkgs.buildGoPackage rec {
+    name = "${repo}-${version}";
+
+    src = pkgs.fetchFromBitbucket {
+      name = "${name}-src";
+      inherit rev owner repo;
+      sha256 = "1jy0pscxjnxjdg3hj111w21g8079rq9ah2ix5ycxxhbbi3f0wdhs";
+    };
+
+    goPackagePath = "bitbucket.org/${owner}/${repo}";
+    subPackages = [ "cmd/goose" ];
+    extraSrcs = map mkGoDep [
+      { goPackagePath = "github.com/go-sql-driver/mysql";
+        rev = "2e00b5cd70399450106cec6431c2e2ce3cae5034";
+        sha256 = "085g48jq9hzmlcxg122n0c4pi41sc1nn2qpx1vrl2jfa8crsppa5";
+      }
+      { goPackagePath = "github.com/kylelemons/go-gypsy";
+        rev = "08cad365cd28a7fba23bb1e57aa43c5e18ad8bb8";
+        sha256 = "1djv7nii3hy451n5jlslk0dblqzb1hia1cbqpdwhnps1g8hqjy8q";
+      }
+      { goPackagePath = "github.com/lib/pq";
+        rev = "ba5d4f7a35561e22fbdf7a39aa0070f4d460cfc0";
+        sha256 = "1mfbqw9g00bk24bfmf53wri5c2wqmgl0qh4sh1qv2da13a7cwwg3";
+      }
+      { goPackagePath = "github.com/mattn/go-sqlite3";
+        rev = "2acfafad5870400156f6fceb12852c281cbba4d5";
+        sha256 = "1rpgil3w4hh1cibidskv1js898hwz83ps06gh0hm3mym7ki8d5h7";
+      }
+      { goPackagePath = "github.com/ziutek/mymysql";
+        rev = "0582bcf675f52c0c2045c027fd135bd726048f45";
+        sha256 = "0bkc9x8sgqbzgdimsmsnhb0qrzlzfv33fgajmmjxl4hcb21qz3rf";
+      }
+      { goPackagePath = "golang.org/x/net";
+        url = "https://go.googlesource.com/net";
+        rev = "10c134ea0df15f7e34d789338c7a2d76cc7a3ab9";
+        sha256 = "14cbr2shl08gyg85n5gj7nbjhrhhgrd52h073qd14j97qcxsakcz";
+      }
+    ];
+  };
+
+  boulder = let
+    owner = "letsencrypt";
+    repo = "boulder";
+    rev = "9866abab8962a591f06db457a4b84c518cc88243";
+    version = "20170510";
+
+  in pkgs.buildGoPackage rec {
+    name = "${repo}-${version}";
+
+    src = pkgs.fetchFromGitHub {
+      name = "${name}-src";
+      inherit rev owner repo;
+      sha256 = "170m5cjngbrm36wi7wschqw8jzs7kxpcyzmshq3pcrmcpigrhna1";
+    };
+
+    postPatch = ''
+      # compat for go < 1.8
+      sed -i -e 's/time\.Until(\([^)]\+\))/\1.Sub(time.Now())/' \
+        test/ocsp/helper/helper.go
+
+      find test -type f -exec sed -i -e '/libpkcs11-proxy.so/ {
+        s,/usr/local,${pkcs11-proxy},
+      }' {} +
+
+      sed -i -r \
+        -e '/^def +install/a \    return True' \
+        -e 's,exec \./bin/,,' \
+        test/startservers.py
+
+      cat "${snakeOilCa}/ca.key" > test/test-ca.key
+      cat "${snakeOilCa}/ca.pem" > test/test-ca.pem
+    '';
+
+    goPackagePath = "github.com/${owner}/${repo}";
+    buildInputs = [ pkgs.libtool ];
+  };
+
+  boulderSource = "${boulder.out}/share/go/src/${boulder.goPackagePath}";
+
+  softHsmConf = pkgs.writeText "softhsm.conf" ''
+    0:/var/lib/softhsm/slot0.db
+    1:/var/lib/softhsm/slot1.db
+  '';
+
+  snakeOilCa = pkgs.runCommand "snakeoil-ca" {
+    buildInputs = [ pkgs.openssl ];
+  } ''
+    mkdir "$out"
+    openssl req -newkey rsa:4096 -x509 -sha256 -days 36500 \
+      -subj '/CN=Snakeoil CA' -nodes \
+      -out "$out/ca.pem" -keyout "$out/ca.key"
+  '';
+
+  createAndSignCert = fqdn: let
+    snakeoilCertConf = pkgs.writeText "snakeoil.cnf" ''
+      [req]
+      default_bits = 4096
+      prompt = no
+      default_md = sha256
+      req_extensions = req_ext
+      distinguished_name = dn
+      [dn]
+      CN = ${fqdn}
+      [req_ext]
+      subjectAltName = DNS:${fqdn}
+    '';
+  in pkgs.runCommand "snakeoil-certs-${fqdn}" {
+    buildInputs = [ pkgs.openssl ];
+  } ''
+    mkdir "$out"
+    openssl genrsa -out "$out/snakeoil.key" 4096
+    openssl req -new -key "$out/snakeoil.key" \
+      -config ${lib.escapeShellArg snakeoilCertConf} \
+      -out snakeoil.csr
+    openssl x509 -req -in snakeoil.csr -sha256 -set_serial 666 \
+      -CA "${snakeOilCa}/ca.pem" -CAkey "${snakeOilCa}/ca.key" \
+      -extfile ${lib.escapeShellArg snakeoilCertConf} \
+      -out "$out/snakeoil.pem" -days 36500
+  '';
+
+  wfeCerts = createAndSignCert wfeDomain;
+  wfeDomain = "acme-v01.api.letsencrypt.org";
+  wfeCertFile = "${wfeCerts}/snakeoil.pem";
+  wfeKeyFile = "${wfeCerts}/snakeoil.key";
+
+  siteCerts = createAndSignCert siteDomain;
+  siteDomain = "letsencrypt.org";
+  siteCertFile = "${siteCerts}/snakeoil.pem";
+  siteKeyFile = "${siteCerts}/snakeoil.key";
+
+  # Retrieved via:
+  # curl -s -I https://acme-v01.api.letsencrypt.org/terms \
+  #   | sed -ne 's/^[Ll]ocation: *//p'
+  tosUrl = "https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf";
+  tosPath = builtins.head (builtins.match "https?://[^/]+(.*)" tosUrl);
+
+  tosFile = pkgs.fetchurl {
+    url = tosUrl;
+    sha256 = "08b2gacdz23mzji2pjr1pwnk82a84rzvr36isif7mmi9kydl6wv3";
+  };
+
+  resolver = let
+    message = "You need to define a resolver for the letsencrypt test module.";
+    firstNS = lib.head config.networking.nameservers;
+  in if config.networking.nameservers == [] then throw message else firstNS;
+
+  cfgDir = pkgs.stdenv.mkDerivation {
+    name = "boulder-config";
+    src = "${boulderSource}/test/config";
+    nativeBuildInputs = [ pkgs.jq ];
+    phases = [ "unpackPhase" "patchPhase" "installPhase" ];
+    postPatch = ''
+      sed -i -e 's/5002/80/' -e 's/5002/443/' va.json
+      sed -i -e '/listenAddress/s/:4000/:80/' wfe.json
+      sed -i -r \
+        -e ${lib.escapeShellArg "s,http://boulder:4000/terms/v1,${tosUrl},g"} \
+        -e 's,http://(boulder|127\.0\.0\.1):4000,https://${wfeDomain},g' \
+        -e '/dnsResolver/s/127\.0\.0\.1:8053/${resolver}:53/' \
+        *.json
+      if grep 4000 *.json; then exit 1; fi
+
+      # Change all ports from 1909X to 909X, because the 1909X range of ports is
+      # allocated by startservers.py in order to intercept gRPC communication.
+      sed -i -e 's/\<1\(909[0-9]\)\>/\1/' *.json
+
+      # Patch out all additional issuer certs
+      jq '. + {ca: (.ca + {Issuers:
+        [.ca.Issuers[] | select(.CertFile == "test/test-ca.pem")]
+      })}' ca.json > tmp
+      mv tmp ca.json
+    '';
+    installPhase = "cp -r . \"$out\"";
+  };
+
+  components = {
+    gsb-test-srv.args = "-apikey my-voice-is-my-passport";
+    gsb-test-srv.waitForPort = 6000;
+    gsb-test-srv.first = true;
+    boulder-sa.args = "--config ${cfgDir}/sa.json";
+    boulder-wfe.args = "--config ${cfgDir}/wfe.json";
+    boulder-ra.args = "--config ${cfgDir}/ra.json";
+    boulder-ca.args = "--config ${cfgDir}/ca.json";
+    boulder-va.args = "--config ${cfgDir}/va.json";
+    boulder-publisher.args = "--config ${cfgDir}/publisher.json";
+    boulder-publisher.waitForPort = 9091;
+    ocsp-updater.args = "--config ${cfgDir}/ocsp-updater.json";
+    ocsp-updater.after = [ "boulder-publisher" ];
+    ocsp-responder.args = "--config ${cfgDir}/ocsp-responder.json";
+    ct-test-srv = {};
+    mail-test-srv.args = "--closeFirst 5";
+  };
+
+  commonPath = [ softhsm pkgs.mariadb goose boulder ];
+
+  mkServices = a: b: with lib; listToAttrs (concatLists (mapAttrsToList a b));
+
+  componentServices = mkServices (name: attrs: let
+    mkSrvName = n: "boulder-${n}.service";
+    firsts = lib.filterAttrs (lib.const (c: c.first or false)) components;
+    firstServices = map mkSrvName (lib.attrNames firsts);
+    firstServicesNoSelf = lib.remove "boulder-${name}.service" firstServices;
+    additionalAfter = firstServicesNoSelf ++ map mkSrvName (attrs.after or []);
+    needsPort = attrs ? waitForPort;
+    inits = map (n: "boulder-init-${n}.service") [ "mysql" "softhsm" ];
+    portWaiter = {
+      name = "boulder-${name}";
+      value = {
+        description = "Wait For Port ${toString attrs.waitForPort} (${name})";
+        after = [ "boulder-real-${name}.service" "bind.service" ];
+        requires = [ "boulder-real-${name}.service" ];
+        requiredBy = [ "boulder.service" ];
+        serviceConfig.Type = "oneshot";
+        serviceConfig.RemainAfterExit = true;
+        script = let
+          netcat = "${pkgs.netcat-openbsd}/bin/nc";
+          portCheck = "${netcat} -z 127.0.0.1 ${toString attrs.waitForPort}";
+        in "while ! ${portCheck}; do :; done";
+      };
+    };
+  in lib.optional needsPort portWaiter ++ lib.singleton {
+    name = if needsPort then "boulder-real-${name}" else "boulder-${name}";
+    value = {
+      description = "Boulder ACME Component (${name})";
+      after = inits ++ additionalAfter;
+      requires = inits;
+      requiredBy = [ "boulder.service" ];
+      path = commonPath;
+      environment.GORACE = "halt_on_error=1";
+      environment.SOFTHSM_CONF = softHsmConf;
+      environment.PKCS11_PROXY_SOCKET = "tcp://127.0.0.1:5657";
+      serviceConfig.WorkingDirectory = boulderSource;
+      serviceConfig.ExecStart = "${boulder}/bin/${name} ${attrs.args or ""}";
+      serviceConfig.Restart = "on-failure";
+    };
+  }) components;
+
+in {
+  imports = [ ./resolver.nix ];
+
+  options.test-support.letsencrypt.caCert = lib.mkOption {
+    type = lib.types.path;
+    description = ''
+      A certificate file to use with the <literal>nodes</literal> attribute to
+      inject the snakeoil CA certificate used in the ACME server into
+      <option>security.pki.certificateFiles</option>.
+    '';
+  };
+
+  config = {
+    test-support = {
+      resolver.enable = let
+        isLocalResolver = config.networking.nameservers == [ "127.0.0.1" ];
+      in lib.mkOverride 900 isLocalResolver;
+      letsencrypt.caCert = "${snakeOilCa}/ca.pem";
+    };
+
+    # This has priority 140, because modules/testing/test-instrumentation.nix
+    # already overrides this with priority 150.
+    networking.nameservers = lib.mkOverride 140 [ "127.0.0.1" ];
+    networking.firewall.enable = false;
+
+    networking.extraHosts = ''
+      127.0.0.1 ${toString [
+        "sa.boulder" "ra.boulder" "wfe.boulder" "ca.boulder" "va.boulder"
+        "publisher.boulder" "ocsp-updater.boulder" "admin-revoker.boulder"
+        "boulder" "boulder-mysql" wfeDomain
+      ]}
+      ${config.networking.primaryIPAddress} ${wfeDomain} ${siteDomain}
+    '';
+
+    services.mysql.enable = true;
+    services.mysql.package = pkgs.mariadb;
+
+    services.nginx.enable = true;
+    services.nginx.recommendedProxySettings = true;
+    services.nginx.virtualHosts.${wfeDomain} = {
+      enableSSL = true;
+      sslCertificate = wfeCertFile;
+      sslCertificateKey = wfeKeyFile;
+      locations."/".proxyPass = "http://127.0.0.1:80";
+    };
+    services.nginx.virtualHosts.${siteDomain} = {
+      enableSSL = true;
+      sslCertificate = siteCertFile;
+      sslCertificateKey = siteKeyFile;
+      locations.${tosPath}.extraConfig = "alias ${tosFile};";
+    };
+
+    systemd.services = {
+      pkcs11-daemon = {
+        description = "PKCS11 Daemon";
+        after = [ "boulder-init-softhsm.service" ];
+        before = map (n: "${n}.service") (lib.attrNames componentServices);
+        wantedBy = [ "multi-user.target" ];
+        environment.SOFTHSM_CONF = softHsmConf;
+        environment.PKCS11_DAEMON_SOCKET = "tcp://127.0.0.1:5657";
+        serviceConfig.ExecStart = let
+          softhsmLib = "${softhsm}/lib/softhsm/libsofthsm.so";
+        in "${pkcs11-proxy}/bin/pkcs11-daemon ${softhsmLib}";
+      };
+
+      boulder-init-mysql = {
+        description = "Boulder ACME Init (MySQL)";
+        after = [ "mysql.service" ];
+        serviceConfig.Type = "oneshot";
+        serviceConfig.RemainAfterExit = true;
+        serviceConfig.WorkingDirectory = boulderSource;
+        path = commonPath;
+        script = "${pkgs.bash}/bin/sh test/create_db.sh";
+      };
+
+      boulder-init-softhsm = {
+        description = "Boulder ACME Init (SoftHSM)";
+        environment.SOFTHSM_CONF = softHsmConf;
+        serviceConfig.Type = "oneshot";
+        serviceConfig.RemainAfterExit = true;
+        serviceConfig.WorkingDirectory = boulderSource;
+        preStart = "mkdir -p /var/lib/softhsm";
+        path = commonPath;
+        script = ''
+          softhsm --slot 0 --init-token \
+            --label intermediate --pin 5678 --so-pin 1234
+          softhsm --slot 0 --import test/test-ca.key \
+            --label intermediate_key --pin 5678 --id FB
+          softhsm --slot 1 --init-token \
+            --label root --pin 5678 --so-pin 1234
+          softhsm --slot 1 --import test/test-root.key \
+            --label root_key --pin 5678 --id FA
+        '';
+      };
+
+      boulder = {
+        description = "Boulder ACME Server";
+        after = map (n: "${n}.service") (lib.attrNames componentServices);
+        wantedBy = [ "multi-user.target" ];
+        serviceConfig.Type = "oneshot";
+        serviceConfig.RemainAfterExit = true;
+        script = let
+          ports = lib.range 8000 8005 ++ lib.singleton 80;
+          netcat = "${pkgs.netcat-openbsd}/bin/nc";
+          mkPortCheck = port: "${netcat} -z 127.0.0.1 ${toString port}";
+          checks = "(${lib.concatMapStringsSep " && " mkPortCheck ports})";
+        in "while ! ${checks}; do :; done";
+      };
+    } // componentServices;
+  };
+}
diff --git a/nixos/tests/common/resolver.nix b/nixos/tests/common/resolver.nix
new file mode 100644
index 00000000000..a1901c5c816
--- /dev/null
+++ b/nixos/tests/common/resolver.nix
@@ -0,0 +1,141 @@
+# This module automatically discovers zones in BIND and NSD NixOS
+# configurations and creates zones for all definitions of networking.extraHosts
+# (except those that point to 127.0.0.1 or ::1) within the current test network
+# and delegates these zones using a fake root zone served by a BIND recursive
+# name server.
+{ config, nodes, pkgs, lib, ... }:
+
+{
+  options.test-support.resolver.enable = lib.mkOption {
+    type = lib.types.bool;
+    default = true;
+    internal = true;
+    description = ''
+      Whether to enable the resolver that automatically discovers zones in the
+      test network.
+
+      This option is <literal>true</literal> by default, because the module
+      defining this option needs to be explicitly imported.
+
+      The reason this option exists is the
+      <filename>nixos/tests/common/letsencrypt.nix</filename> module, which
+      uses this option to disable the resolver once the user has set a
+      custom resolver.
+    '';
+  };
+
+  config = lib.mkIf config.test-support.resolver.enable {
+    networking.firewall.enable = false;
+    services.bind.enable = true;
+    services.bind.cacheNetworks = lib.mkForce [ "any" ];
+    services.bind.forwarders = lib.mkForce [];
+    services.bind.zones = lib.singleton {
+      name = ".";
+      file = let
+        addDot = zone: zone + lib.optionalString (!lib.hasSuffix "." zone) ".";
+        mkNsdZoneNames = zones: map addDot (lib.attrNames zones);
+        mkBindZoneNames = zones: map (zone: addDot zone.name) zones;
+        getZones = cfg: mkNsdZoneNames cfg.services.nsd.zones
+                     ++ mkBindZoneNames cfg.services.bind.zones;
+
+        getZonesForNode = attrs: {
+          ip = attrs.config.networking.primaryIPAddress;
+          zones = lib.filter (zone: zone != ".") (getZones attrs.config);
+        };
+
+        zoneInfo = lib.mapAttrsToList (lib.const getZonesForNode) nodes;
+
+        # A and AAAA resource records for all the definitions of
+        # networking.extraHosts except those for 127.0.0.1 or ::1.
+        #
+        # The result is an attribute set with keys being the host name and the
+        # values are either { ipv4 = ADDR; } or { ipv6 = ADDR; } where ADDR is
+        # the IP address for the corresponding key.
+        recordsFromExtraHosts = let
+          getHostsForNode = lib.const (n: n.config.networking.extraHosts);
+          allHostsList = lib.mapAttrsToList getHostsForNode nodes;
+          allHosts = lib.concatStringsSep "\n" allHostsList;
+
+          reIp = "[a-fA-F0-9.:]+";
+          reHost = "[a-zA-Z0-9.-]+";
+
+          matchAliases = str: let
+            matched = builtins.match "[ \t]+(${reHost})(.*)" str;
+            continue = lib.singleton (lib.head matched)
+                    ++ matchAliases (lib.last matched);
+          in if matched == null then [] else continue;
+
+          matchLine = str: let
+            result = builtins.match "[ \t]*(${reIp})[ \t]+(${reHost})(.*)" str;
+          in if result == null then null else {
+            ipAddr = lib.head result;
+            hosts = lib.singleton (lib.elemAt result 1)
+                 ++ matchAliases (lib.last result);
+          };
+
+          skipLine = str: let
+            rest = builtins.match "[^\n]*\n(.*)" str;
+          in if rest == null then "" else lib.head rest;
+
+          getEntries = str: acc: let
+            result = matchLine str;
+            next = getEntries (skipLine str);
+            newEntry = acc ++ lib.singleton result;
+            continue = if result == null then next acc else next newEntry;
+          in if str == "" then acc else continue;
+
+          isIPv6 = str: builtins.match ".*:.*" str != null;
+          loopbackIps = [ "127.0.0.1" "::1" ];
+          filterLoopback = lib.filter (e: !lib.elem e.ipAddr loopbackIps);
+
+          allEntries = lib.concatMap (entry: map (host: {
+            inherit host;
+            ${if isIPv6 entry.ipAddr then "ipv6" else "ipv4"} = entry.ipAddr;
+          }) entry.hosts) (filterLoopback (getEntries (allHosts + "\n") []));
+
+          mkRecords = entry: let
+            records = lib.optional (entry ? ipv6) "AAAA ${entry.ipv6}"
+                   ++ lib.optional (entry ? ipv4) "A ${entry.ipv4}";
+            mkRecord = typeAndData: "${entry.host}. IN ${typeAndData}";
+          in lib.concatMapStringsSep "\n" mkRecord records;
+
+        in lib.concatMapStringsSep "\n" mkRecords allEntries;
+
+        # All of the zones that are subdomains of existing zones.
+        # For example if there is only "example.com" the following zones would
+        # be 'subZones':
+        #
+        #  * foo.example.com.
+        #  * bar.example.com.
+        #
+        # While the following would *not* be 'subZones':
+        #
+        #  * example.com.
+        #  * com.
+        #
+        subZones = let
+          allZones = lib.concatMap (zi: zi.zones) zoneInfo;
+          isSubZoneOf = z1: z2: lib.hasSuffix z2 z1 && z1 != z2;
+        in lib.filter (z: lib.any (isSubZoneOf z) allZones) allZones;
+
+        # All the zones without 'subZones'.
+        filteredZoneInfo = map (zi: zi // {
+          zones = lib.filter (x: !lib.elem x subZones) zi.zones;
+        }) zoneInfo;
+
+      in pkgs.writeText "fake-root.zone" ''
+        $TTL 3600
+        . IN SOA ns.fakedns. admin.fakedns. ( 1 3h 1h 1w 1d )
+        ns.fakedns. IN A ${config.networking.primaryIPAddress}
+        . IN NS ns.fakedns.
+        ${lib.concatImapStrings (num: { ip, zones }: ''
+          ns${toString num}.fakedns. IN A ${ip}
+          ${lib.concatMapStrings (zone: ''
+          ${zone} IN NS ns${toString num}.fakedns.
+          '') zones}
+        '') (lib.filter (zi: zi.zones != []) filteredZoneInfo)}
+        ${recordsFromExtraHosts}
+      '';
+    };
+  };
+}