Diffstat (limited to 'doc/builders')
-rw-r--r--  doc/builders/fetchers.chapter.md | 110
-rw-r--r--  doc/builders/images.xml | 12
-rw-r--r--  doc/builders/images/appimagetools.section.md | 48
-rw-r--r--  doc/builders/images/dockertools.section.md | 314
-rw-r--r--  doc/builders/images/ocitools.section.md | 37
-rw-r--r--  doc/builders/images/snaptools.section.md | 71
-rw-r--r--  doc/builders/packages/cataclysm-dda.section.md | 129
-rw-r--r--  doc/builders/packages/citrix.section.md | 32
-rw-r--r--  doc/builders/packages/dlib.section.md | 13
-rw-r--r--  doc/builders/packages/eclipse.section.md | 64
-rw-r--r--  doc/builders/packages/elm.section.md | 11
-rw-r--r--  doc/builders/packages/emacs.section.md | 119
-rw-r--r--  doc/builders/packages/etc-files.section.md | 18
-rw-r--r--  doc/builders/packages/firefox.section.md | 52
-rw-r--r--  doc/builders/packages/fish.section.md | 50
-rw-r--r--  doc/builders/packages/fuse.section.md | 45
-rw-r--r--  doc/builders/packages/ibus.section.md | 38
-rw-r--r--  doc/builders/packages/index.xml | 29
-rw-r--r--  doc/builders/packages/kakoune.section.md | 9
-rw-r--r--  doc/builders/packages/linux.section.md | 41
-rw-r--r--  doc/builders/packages/locales.section.md | 5
-rw-r--r--  doc/builders/packages/nginx.section.md | 11
-rw-r--r--  doc/builders/packages/opengl.section.md | 15
-rw-r--r--  doc/builders/packages/shell-helpers.section.md | 12
-rw-r--r--  doc/builders/packages/steam.section.md | 63
-rw-r--r--  doc/builders/packages/unfree.xml | 13
-rw-r--r--  doc/builders/packages/urxvt.section.md | 71
-rw-r--r--  doc/builders/packages/weechat.section.md | 85
-rw-r--r--  doc/builders/packages/xorg.section.md | 34
-rw-r--r--  doc/builders/special.xml | 11
-rw-r--r--  doc/builders/special/fhs-environments.section.md | 49
-rw-r--r--  doc/builders/special/invalidateFetcherByDrvHash.section.md | 31
-rw-r--r--  doc/builders/special/mkshell.section.md | 37
-rw-r--r--  doc/builders/trivial-builders.chapter.md | 223
34 files changed, 1902 insertions, 0 deletions
diff --git a/doc/builders/fetchers.chapter.md b/doc/builders/fetchers.chapter.md
new file mode 100644
index 00000000000..28388ba685d
--- /dev/null
+++ b/doc/builders/fetchers.chapter.md
@@ -0,0 +1,110 @@
+# Fetchers {#chap-pkgs-fetchers}
+
+When using Nix, you will frequently need to download source code and other files from the internet. For this purpose, Nix provides the [_fixed output derivation_](https://nixos.org/manual/nix/stable/#fixed-output-drvs) feature and Nixpkgs provides various functions that implement the actual fetching from various protocols and services.
+
+## Caveats
+
+Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument, so that the new contents are actually fetched.
+
+For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
+
+## `fetchurl` and `fetchzip` {#fetchurl}
+
+Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
+
+```nix
+{ stdenv, fetchurl }:
+
+stdenv.mkDerivation {
+  name = "hello";
+  src = fetchurl {
+    url = "http://www.example.org/hello.tar.gz";
+    sha256 = "1111111111111111111111111111111111111111111111111111";
+  };
+}
+```
+
+The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
+
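+A minimal `fetchzip` call looks much the same; note that its hash refers to the unpacked contents rather than the archive itself (the URL here is only a placeholder):
+
+```nix
+fetchzip {
+  url = "http://www.example.org/hello-1.0.tar.gz";
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```
+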
+`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
+
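+As a sketch, a `fetchpatch` call takes the same two arguments (the URL below is a placeholder):
+
+```nix
+fetchpatch {
+  url = "https://www.example.org/fix-build.patch";
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```
+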
+Most other fetchers return a directory rather than a single file.
+
+## `fetchsvn` {#fetchsvn}
+
+Used with Subversion. Expects a `url` pointing to a Subversion directory, a `rev`, and a `sha256`.
+
+## `fetchgit` {#fetchgit}
+
+Used with Git. Expects a `url` to a Git repo, a `rev`, and a `sha256`. `rev` in this case can be the full Git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`.
+
+Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true`, which means that the `.git` directory of the clone won't be removed after checkout.
+
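+For example, a sketch of a `fetchgit` call that also fetches submodules (the URL is a placeholder):
+
+```nix
+fetchgit {
+  url = "https://www.example.org/some-package.git";
+  rev = "refs/tags/v1.0";
+  fetchSubmodules = true;
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```
+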
+If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:
+
+```nix
+{ stdenv, fetchgit }:
+
+stdenv.mkDerivation {
+  name = "hello";
+  src = fetchgit {
+    url = "https://...";
+    sparseCheckout = ''
+      path/to/be/included
+      another/path
+    '';
+    sha256 = "0000000000000000000000000000000000000000000000000000";
+  };
+}
+```
+
+## `fetchfossil` {#fetchfossil}
+
+Used with Fossil. Expects a `url` pointing to a Fossil archive, a `rev`, and a `sha256`.
+
+## `fetchcvs` {#fetchcvs}
+
+Used with CVS. Expects `cvsRoot`, `tag`, and `sha256`.
+
+## `fetchhg` {#fetchhg}
+
+Used with Mercurial. Expects `url`, `rev`, and `sha256`.
+
+A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
+
+## `fetchFromGitHub` {#fetchfromgithub}
+
+`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
+
+`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
+
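+A sketch of a typical call, with placeholder values for the repository, revision and hash:
+
+```nix
+fetchFromGitHub {
+  owner = "someuser";
+  repo = "somerepo";
+  rev = "v1.0";  # a tag or a full commit hash
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```
+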
+## `fetchFromGitLab` {#fetchfromgitlab}
+
+This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
+
+## `fetchFromGitiles` {#fetchfromgitiles}
+
+This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
+
+## `fetchFromBitbucket` {#fetchfrombitbucket}
+
+This is used with Bitbucket repositories. The arguments expected are very similar to `fetchFromGitHub` above.
+
+## `fetchFromSavannah` {#fetchfromsavannah}
+
+This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
+
+## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
+
+This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
+
+## `fetchFromSourcehut` {#fetchfromsourcehut}
+
+This is used with sourcehut repositories. Similar to `fetchFromGitHub` above,
+it expects `owner`, `repo`, `rev` and `sha256`, but don't forget the tilde (~)
+in front of the username! Expected arguments also include `vc` ("git" (default)
+or "hg"), `domain` and `fetchSubmodules`.
+
+If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
+or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
+respectively. Otherwise the fetcher uses `fetchzip`.
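+
+A sketch of a `fetchFromSourcehut` call, again with placeholder values:
+
+```nix
+fetchFromSourcehut {
+  owner = "~someuser";  # note the leading tilde
+  repo = "somerepo";
+  rev = "v1.0";
+  sha256 = "0000000000000000000000000000000000000000000000000000";
+}
+```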
diff --git a/doc/builders/images.xml b/doc/builders/images.xml
new file mode 100644
index 00000000000..cd10d69a96d
--- /dev/null
+++ b/doc/builders/images.xml
@@ -0,0 +1,12 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         xml:id="chap-images">
+ <title>Images</title>
+ <para>
+  This chapter describes tools for creating various types of images.
+ </para>
+ <xi:include href="images/appimagetools.section.xml" />
+ <xi:include href="images/dockertools.section.xml" />
+ <xi:include href="images/ocitools.section.xml" />
+ <xi:include href="images/snaptools.section.xml" />
+</chapter>
diff --git a/doc/builders/images/appimagetools.section.md b/doc/builders/images/appimagetools.section.md
new file mode 100644
index 00000000000..67e63dc5f61
--- /dev/null
+++ b/doc/builders/images/appimagetools.section.md
@@ -0,0 +1,48 @@
+# pkgs.appimageTools {#sec-pkgs-appimageTools}
+
+`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.
+
+::: {.warning}
+The `appimageTools` API is unstable and may be subject to backwards-incompatible changes in the future.
+:::
+
+## AppImage formats {#ssec-pkgs-appimageTools-formats}
+
+There are different formats for AppImages, see [the specification](https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format) for details.
+
+- Type 1 images are ISO 9660 files that are also ELF executables.
+- Type 2 images are ELF executables with an appended filesystem.
+
+They can be told apart with `file -k`:
+
+```ShellSession
+$ file -k type1.AppImage
+type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
+spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
+
+$ file -k type2.AppImage
+type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
+```
+
+Note how the type 1 AppImage is described as an `ISO 9660 CD-ROM filesystem`, and the type 2 AppImage is not.
+
+## Wrapping {#ssec-pkgs-appimageTools-wrapping}
+
+Depending on the type of AppImage you're wrapping, you'll have to use `wrapType1` or `wrapType2`.
+
+```nix
+appimageTools.wrapType2 { # or wrapType1
+  name = "patchwork";
+  src = fetchurl {
+    url = "https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage";
+    sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
+  };
+  extraPkgs = pkgs: with pkgs; [ ];
+}
+```
+
+- `name` specifies the name of the resulting image.
+- `src` specifies the AppImage file to extract.
+- `extraPkgs` allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:
+  - Looking through the extracted AppImage files, reading its scripts and running `patchelf` and `ldd` on its executables. This can also be done in `appimage-run`, by setting `APPIMAGE_DEBUG_EXEC=bash`.
+  - Running `strace -vfefile` on the wrapped executable, looking for libraries that can't be found.
diff --git a/doc/builders/images/dockertools.section.md b/doc/builders/images/dockertools.section.md
new file mode 100644
index 00000000000..7ff4b2aeb36
--- /dev/null
+++ b/doc/builders/images/dockertools.section.md
@@ -0,0 +1,314 @@
+# pkgs.dockerTools {#sec-pkgs-dockerTools}
+
+`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
+
+## buildImage {#ssec-pkgs-dockerTools-buildImage}
+
+This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.
+
+The parameters of `buildImage` with example values are described below:
+
+[]{#ex-dockerTools-buildImage}
+[]{#ex-dockerTools-buildImage-runAsRoot}
+
+```nix
+buildImage {
+  name = "redis";
+  tag = "latest";
+
+  fromImage = someBaseImage;
+  fromImageName = null;
+  fromImageTag = "latest";
+
+  contents = pkgs.redis;
+  runAsRoot = ''
+    #!${pkgs.runtimeShell}
+    mkdir -p /data
+  '';
+
+  config = {
+    Cmd = [ "/bin/redis-server" ];
+    WorkingDir = "/data";
+    Volumes = { "/data" = { }; };
+  };
+}
+```
+
+The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.
+
+- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.
+
+- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the Nix output hash will be used as the tag.
+
+- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
+
+- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will pick the first image available in the repository.
+
+- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will pick the first tag available for the base image.
+
+- `contents` is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
+
+- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be similarly seen as `RUN ...` in a `Dockerfile`.
+
+> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
+
+- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
+
+After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
+
+At the end of the process, only one new single layer will be produced and added to the resulting image.
+
+The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
+
+It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
+
+> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.
+
+> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.
+
+By default `buildImage` will use a static date of one second past the UNIX Epoch. This allows `buildImage` to produce binary reproducible images. When listing images with `docker images`, the newly created images will be listed like this:
+
+```ShellSession
+$ docker images
+REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
+hello        latest   08c791c7846e   48 years ago   25.2MB
+```
+
+You can break binary reproducibility but have a sorted, meaningful `CREATED` column by setting `created` to `now`.
+
+```nix
+pkgs.dockerTools.buildImage {
+  name = "hello";
+  tag = "latest";
+  created = "now";
+  contents = pkgs.hello;
+
+  config.Cmd = [ "/bin/hello" ];
+}
+```
+
+and now the Docker CLI will display a reasonable date and sort the images as expected:
+
+```ShellSession
+$ docker images
+REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
+hello        latest   de2bf4786de6   About a minute ago   25.2MB
+```
+
+however, the produced images will not be binary reproducible.
+
+## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
+
+Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.
+
+`name`
+
+: The name of the resulting image.
+
+`tag` _optional_
+
+: Tag of the generated image.
+
+    *Default:* the output path's hash
+
+`fromImage` _optional_
+
+: The repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`.
+
+    *Default:* `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
+
+`contents` _optional_
+
+: Top level paths in the container. Either a single derivation, or a list of derivations.
+
+    *Default:* `[]`
+
+`config` _optional_
+
+: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
+
+    *Default:* `{}`
+
+`created` _optional_
+
+: Date and time the layers were created. Follows the same `now` exception supported by `buildImage`.
+
+    *Default:* `1970-01-01T00:00:01Z`
+
+`maxLayers` _optional_
+
+: Maximum number of layers to create.
+
+    *Default:* `100`
+
+    *Maximum:* `125`
+
+`extraCommands` _optional_
+
+: Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so it can create additional directories and files.
+
+`fakeRootCommands` _optional_
+
+: Shell commands to run while creating the archive for the final layer in a fakeroot environment. Unlike `extraCommands`, you can run `chown` to change the owners of the files in the archive, changing fakeroot's state instead of the real filesystem. The latter would require privileges that the build user does not have. Static binaries do not interact with the fakeroot environment. By default all files in the archive will be owned by root.
+
+`enableFakechroot` _optional_
+
+: Whether to run `fakeRootCommands` in `fakechroot`, making programs behave as though `/` is the root of the image being created, while files in the Nix store are available as usual. This allows scripts that perform installation in `/` to work as expected. Considering that `fakechroot` is implemented via the same mechanism as `fakeroot`, the same caveats apply.
+
+    *Default:* `false`
+
+### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}
+
+Each path directly listed in `contents` will have a symlink in the root of the image.
+
+For example:
+
+```nix
+pkgs.dockerTools.buildLayeredImage {
+  name = "hello";
+  contents = [ pkgs.hello ];
+}
+```
+
+will create symlinks for all the paths in the `hello` package:
+
+```ShellSession
+/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
+/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
+/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
+```
+
+### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config}
+
+The closure of `config` is automatically included in the closure of the final image.
+
+This allows you to make very simple Docker images with very little code. This container will start up and run `hello`:
+
+```nix
+pkgs.dockerTools.buildLayeredImage {
+  name = "hello";
+  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
+}
+```
+
+### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers}
+
+Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
+
+Modern Docker installations support up to 128 layers, but older versions support as few as 42.
+
+If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
+
+The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the image configuration.
+
+Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.
+
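+As a sketch, the limit can simply be raised when building a layered image (the package used here is just an example):
+
+```nix
+pkgs.dockerTools.buildLayeredImage {
+  name = "hello";
+  contents = [ pkgs.hello ];
+  maxLayers = 120;
+}
+```
+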
+## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}
+
+Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
+
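+Since it takes the same arguments as `buildLayeredImage`, a minimal sketch looks like this:
+
+```nix
+pkgs.dockerTools.streamLayeredImage {
+  name = "hello";
+  contents = [ pkgs.hello ];
+  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
+}
+```
+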
+The image produced by running the output script can be piped directly into `docker load`, to load it into the local docker daemon:
+
+```ShellSession
+$(nix-build) | docker load
+```
+
+Alternatively, the image can be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
+
+```ShellSession
+$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
+```
+
+## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
+
+This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
+
+Its parameters are described in the example below:
+
+```nix
+pullImage {
+  imageName = "nixos/nix";
+  imageDigest =
+    "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
+  finalImageName = "nix";
+  finalImageTag = "1.11";
+  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
+  os = "linux";
+  arch = "x86_64";
+}
+```
+
+- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.
+
+- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.
+
+- `finalImageName`, if specified, this is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.
+
+- `finalImageTag`, if specified, this is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's `latest`.
+
+- `sha256` is the checksum of the whole fetched image. This argument is required.
+
+- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.
+
+- `arch`, if specified, is the cpu architecture of the fetched image. By default it's `x86_64`.
+
+The `nix-prefetch-docker` command can be used to get the required image parameters:
+
+```ShellSession
+$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
+```
+
+Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
+
+```ShellSession
+$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
+```
+
+The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:
+
+```ShellSession
+$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
+```
+
+## exportImage {#ssec-pkgs-dockerTools-exportImage}
+
+This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with `docker import`.
+
+> **_NOTE:_** Using this function requires the `kvm` device to be available.
+
+The parameters of `exportImage` are the following:
+
+```nix
+exportImage {
+  fromImage = someLayeredImage;
+  fromImageName = null;
+  fromImageTag = null;
+
+  name = someLayeredImage.name;
+}
+```
+
+The parameters relative to the base image have the same synopsis as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.
+
+The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.
+
+## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
+
+This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for being used in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script for cases like in the example below:
+
+```nix
+buildImage {
+  name = "shadow-basic";
+
+  runAsRoot = ''
+    #!${pkgs.runtimeShell}
+    ${shadowSetup}
+    groupadd -r redis
+    useradd -r -g redis redis
+    mkdir /data
+    chown redis:redis /data
+  '';
+}
+```
+
+Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
diff --git a/doc/builders/images/ocitools.section.md b/doc/builders/images/ocitools.section.md
new file mode 100644
index 00000000000..d3dee57ebac
--- /dev/null
+++ b/doc/builders/images/ocitools.section.md
@@ -0,0 +1,37 @@
+# pkgs.ociTools {#sec-pkgs-ociTools}
+
+`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
+
+## buildContainer {#ssec-pkgs-ociTools-buildContainer}
+
+This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The Nix store of the container will contain all referenced dependencies of the given command.
+
+The parameters of `buildContainer` with an example value are described below:
+
+```nix
+buildContainer {
+  args = [
+    (with pkgs;
+      writeScript "run.sh" ''
+        #!${bash}/bin/bash
+        exec ${bash}/bin/bash
+      '').outPath
+  ];
+
+  mounts = {
+    "/data" = {
+      type = "none";
+      source = "/var/lib/mydata";
+      options = [ "bind" ];
+    };
+  };
+
+  readonly = false;
+}
+```
+
+- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
+
+- `mounts` specifies additional mount points chosen by the user. By default only a minimal set of necessary filesystems are mounted into the container (e.g. procfs, cgroupfs).
+
+- `readonly` makes the container's rootfs read-only if it is set to `true`. The default value is `false`.
diff --git a/doc/builders/images/snaptools.section.md b/doc/builders/images/snaptools.section.md
new file mode 100644
index 00000000000..5f710d2de7f
--- /dev/null
+++ b/doc/builders/images/snaptools.section.md
@@ -0,0 +1,71 @@
+# pkgs.snapTools {#sec-pkgs-snapTools}
+
+`pkgs.snapTools` is a set of functions for creating Snapcraft images. Neither Snap nor Snapcraft is used to perform these operations.
+
+## The makeSnap Function {#ssec-pkgs-snapTools-makeSnap-signature}
+
+`makeSnap` takes a single named argument, `meta`. This argument mirrors [the upstream `snap.yaml` format](https://docs.snapcraft.io/snap-format) exactly.
+
+The `base` should not be specified, as `makeSnap` will force-set it.
+
+Currently, `makeSnap` does not support creating GUI stubs.
+
+## Build a Hello World Snap {#ssec-pkgs-snapTools-build-a-snap-hello}
+
+The following expression packages GNU Hello as a Snapcraft snap.
+
+``` {#ex-snapTools-buildSnap-hello .nix}
+let
+  inherit (import <nixpkgs> { }) snapTools hello;
+in snapTools.makeSnap {
+  meta = {
+    name = "hello";
+    summary = hello.meta.description;
+    description = hello.meta.longDescription;
+    architectures = [ "amd64" ];
+    confinement = "strict";
+    apps.hello.command = "${hello}/bin/hello";
+  };
+}
+```
+
+`nix-build` this expression and install it with `snap install ./result --dangerous`. `hello` will now be the Snapcraft version of the package.
+
+## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
+
+Graphical programs require many more integrations with the host. Firefox is used as the example here because it is one of the most complicated programs we could package.
+
+``` {#ex-snapTools-buildSnap-firefox .nix}
+let
+  inherit (import <nixpkgs> { }) snapTools firefox;
+in snapTools.makeSnap {
+  meta = {
+    name = "nix-example-firefox";
+    summary = firefox.meta.description;
+    architectures = [ "amd64" ];
+    apps.nix-example-firefox = {
+      command = "${firefox}/bin/firefox";
+      plugs = [
+        "pulseaudio"
+        "camera"
+        "browser-support"
+        "avahi-observe"
+        "cups-control"
+        "desktop"
+        "desktop-legacy"
+        "gsettings"
+        "home"
+        "network"
+        "mount-observe"
+        "removable-media"
+        "x11"
+      ];
+    };
+    confinement = "strict";
+  };
+}
+```
+
+`nix-build` this expression and install it with `snap install ./result --dangerous`. `nix-example-firefox` will now be the Snapcraft version of the Firefox package.
+
+The specific meaning behind plugs can be looked up in the [Snapcraft interface documentation](https://docs.snapcraft.io/supported-interfaces).
diff --git a/doc/builders/packages/cataclysm-dda.section.md b/doc/builders/packages/cataclysm-dda.section.md
new file mode 100644
index 00000000000..bfeacb47fef
--- /dev/null
+++ b/doc/builders/packages/cataclysm-dda.section.md
@@ -0,0 +1,129 @@
+# Cataclysm: Dark Days Ahead {#cataclysm-dark-days-ahead}
+
+## How to install Cataclysm DDA {#how-to-install-cataclysm-dda}
+
+To install the latest stable release of Cataclysm DDA to your profile, execute
+`nix-env -f "<nixpkgs>" -iA cataclysm-dda`. For the curses build (build
+without tiles), install `cataclysmDDA.stable.curses`. Note: `cataclysm-dda` is
+an alias to `cataclysmDDA.stable.tiles`.
+
+If you would like access to a development build of your favorite Git revision,
+override `cataclysm-dda-git` (or `cataclysmDDA.git.curses` if you prefer the
+curses build):
+
+```nix
+cataclysm-dda-git.override {
+  version = "YYYY-MM-DD";
+  rev = "YOUR_FAVORITE_REVISION";
+  sha256 = "CHECKSUM_OF_THE_REVISION";
+}
+```
+
+The sha256 checksum can be obtained by running:
+
+```sh
+nix-prefetch-url --unpack "https://github.com/CleverRaven/Cataclysm-DDA/archive/${YOUR_FAVORITE_REVISION}.tar.gz"
+```
+
+The default configuration directory is `~/.cataclysm-dda`. If you prefer
+`$XDG_CONFIG_HOME/cataclysm-dda`, override the derivation:
+
+```nix
+cataclysm-dda.override {
+  useXdgDir = true;
+}
+```
+
+## Important note for overriding packages {#important-note-for-overriding-packages}
+
+After applying `overrideAttrs`, you need to fix `passthru.pkgs` and
+`passthru.withMods` attributes either manually or by using `attachPkgs`:
+
+```nix
+let
+  # You enabled parallel building.
+  myCDDA = cataclysm-dda-git.overrideAttrs (_: {
+    enableParallelBuilding = true;
+  });
+
+  # Unfortunately, this refers to the package before overriding and
+  # parallel building is still disabled.
+  badExample = myCDDA.withMods (_: []);
+
+  inherit (cataclysmDDA) attachPkgs pkgs wrapCDDA;
+
+  # You can fix it by hand
+  goodExample1 = myCDDA.overrideAttrs (old: {
+    passthru = old.passthru // {
+      pkgs = pkgs.override { build = goodExample1; };
+      withMods = wrapCDDA goodExample1;
+    };
+  });
+
+  # or by using a helper function `attachPkgs`.
+  goodExample2 = attachPkgs pkgs myCDDA;
+in
+
+# badExample                     # parallel building disabled
+# goodExample1.withMods (_: [])  # parallel building enabled
+goodExample2.withMods (_: [])    # parallel building enabled
+```
+
+## Customizing with mods {#customizing-with-mods}
+
+To install Cataclysm DDA with mods of your choice, you can use the `withMods`
+attribute:
+
+```nix
+cataclysm-dda.withMods (mods: with mods; [
+  tileset.UndeadPeople
+])
+```
+
+All mods, soundpacks, and tilesets available in nixpkgs are found in
+`cataclysmDDA.pkgs`.
+
+Here is an example of modifying existing mods and/or adding mods not available
+in nixpkgs:
+
+```nix
+let
+  customMods = self: super: lib.recursiveUpdate super {
+    # Modify existing mod
+    tileset.UndeadPeople = super.tileset.UndeadPeople.overrideAttrs (old: {
+      # If you like to apply a patch to the tileset for example
+      patches = [ ./path/to/your.patch ];
+    });
+
+    # Add another mod
+    mod.Awesome = cataclysmDDA.buildMod {
+      modName = "Awesome";
+      version = "0.x";
+      src = fetchFromGitHub {
+        owner = "Someone";
+        repo = "AwesomeMod";
+        rev = "...";
+        sha256 = "...";
+      };
+      # Path to be installed in the unpacked source (default: ".")
+      modRoot = "contents/under/this/path/will/be/installed";
+    };
+
+    # Add another soundpack
+    soundpack.Fantastic = cataclysmDDA.buildSoundPack {
+      # ditto
+    };
+
+    # Add another tileset
+    tileset.SuperDuper = cataclysmDDA.buildTileSet {
+      # ditto
+    };
+  };
+in
+cataclysm-dda.withMods (mods: with mods.extend customMods; [
+  tileset.UndeadPeople
+  mod.Awesome
+  soundpack.Fantastic
+  tileset.SuperDuper
+])
+```
diff --git a/doc/builders/packages/citrix.section.md b/doc/builders/packages/citrix.section.md
new file mode 100644
index 00000000000..b25ecb0bdef
--- /dev/null
+++ b/doc/builders/packages/citrix.section.md
@@ -0,0 +1,32 @@
+# Citrix Workspace {#sec-citrix}
+
+The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a remote desktop viewer which provides access to [XenDesktop](https://www.citrix.com/products/xenapp-xendesktop/) installations.
+
+## Basic usage {#sec-citrix-base}
+
+The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
+
+## Citrix Selfservice {#sec-citrix-selfservice}
+
+The [selfservice](https://support.citrix.com/article/CTX200337) is an application for managing Citrix desktops and applications. Please note that this feature only works with `citrix_workspace_20_06_0` and later versions.
+
+In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
+
+```ShellSession
+$ storebrowse -C ~/Downloads/receiverconfig.cr
+$ selfservice
+```
+
+## Custom certificates {#sec-citrix-custom-certs}
+
+The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
+
+```nix
+with import <nixpkgs> { config.allowUnfree = true; };
+let
+  extraCerts = [
+    ./custom-cert-1.pem
+    ./custom-cert-2.pem # ...
+  ];
+in citrix_workspace.override { inherit extraCerts; }
+```
diff --git a/doc/builders/packages/dlib.section.md b/doc/builders/packages/dlib.section.md
new file mode 100644
index 00000000000..8f0aa861018
--- /dev/null
+++ b/doc/builders/packages/dlib.section.md
@@ -0,0 +1,13 @@
+# DLib {#dlib}
+
+[DLib](http://dlib.net/) is a modern, C++-based toolkit which provides several machine learning algorithms.
+
+## Compiling without AVX support {#compiling-without-avx-support}
+
+In particular, older CPUs don't support [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) (Advanced Vector Extensions) instructions, which DLib uses to optimize its algorithms.
+
+On affected hardware, errors like `Illegal instruction` will occur. In those cases AVX support needs to be disabled:
+
+```nix
+self: super: { dlib = super.dlib.override { avxSupport = false; }; }
+```
diff --git a/doc/builders/packages/eclipse.section.md b/doc/builders/packages/eclipse.section.md
new file mode 100644
index 00000000000..faabb188450
--- /dev/null
+++ b/doc/builders/packages/eclipse.section.md
@@ -0,0 +1,64 @@
+# Eclipse {#sec-eclipse}
+
+The Nix expressions related to the Eclipse platform and IDE are in [`pkgs/applications/editors/eclipse`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse).
+
+Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list available Eclipse packages by issuing the command:
+
+```ShellSession
+$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
+```
+
+Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner, by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
+
+If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
+
+```nix
+packageOverrides = pkgs: {
+  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
+    eclipse = eclipse-platform;
+    jvmArgs = [ "-Xmx2048m" ];
+    plugins = [ plugins.color-theme ];
+  };
+}
+```
+
+to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running
+
+```ShellSession
+$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
+```
+
+If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
+
+Expanding the previous example with two plugins using the above functions we have
+
+```nix
+packageOverrides = pkgs: {
+  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
+    eclipse = eclipse-platform;
+    jvmArgs = [ "-Xmx2048m" ];
+    plugins = [
+      plugins.color-theme
+      (plugins.buildEclipsePlugin {
+        name = "myplugin1-1.0";
+        srcFeature = fetchurl {
+          url = "http://…/features/myplugin1.jar";
+          sha256 = "123…";
+        };
+        srcPlugin = fetchurl {
+          url = "http://…/plugins/myplugin1.jar";
+          sha256 = "123…";
+        };
+      });
+      (plugins.buildEclipseUpdateSite {
+        name = "myplugin2-1.0";
+        src = fetchurl {
+          stripRoot = false;
+          url = "http://…/myplugin2.zip";
+          sha256 = "123…";
+        };
+      });
+    ];
+  };
+}
+```
diff --git a/doc/builders/packages/elm.section.md b/doc/builders/packages/elm.section.md
new file mode 100644
index 00000000000..ae223c802da
--- /dev/null
+++ b/doc/builders/packages/elm.section.md
@@ -0,0 +1,11 @@
+# Elm {#sec-elm}
+
+To start a development environment do
+
+```ShellSession
+nix-shell -p elmPackages.elm elmPackages.elm-format
+```
+
+To update the Elm compiler, see `nixpkgs/pkgs/development/compilers/elm/README.md`.
+
+To package Elm applications, [read about elm2nix](https://github.com/hercules-ci/elm2nix#elm2nix).
diff --git a/doc/builders/packages/emacs.section.md b/doc/builders/packages/emacs.section.md
new file mode 100644
index 00000000000..577f1a23ce0
--- /dev/null
+++ b/doc/builders/packages/emacs.section.md
@@ -0,0 +1,119 @@
+# Emacs {#sec-emacs}
+
+## Configuring Emacs {#sec-emacs-config}
+
+The Emacs package comes with some extra helpers to make it easier to configure. `emacs.pkgs.withPackages` allows you to manage packages from ELPA. This means that you will not have to install those packages from within Emacs. For instance, if you wanted to use `company`, `counsel`, `flycheck`, `ivy`, `magit`, `projectile`, and `use-package` you could use this as a `~/.config/nixpkgs/config.nix` override:
+
+```nix
+{
+  packageOverrides = pkgs: with pkgs; {
+    myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
+      company
+      counsel
+      flycheck
+      ivy
+      magit
+      projectile
+      use-package
+    ]));
+  };
+}
+```
+
+You can install it like any other package via `nix-env -iA myEmacs`. However, this will only install those packages. It will not configure them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
+
+```nix
+{
+  packageOverrides = pkgs: with pkgs; rec {
+    myEmacsConfig = writeText "default.el" ''
+      ;; initialize package
+
+      (require 'package)
+      (package-initialize 'noactivate)
+      (eval-when-compile
+        (require 'use-package))
+
+      ;; load some packages
+
+      (use-package company
+        :bind ("<C-tab>" . company-complete)
+        :diminish company-mode
+        :commands (company-mode global-company-mode)
+        :defer 1
+        :config
+        (global-company-mode))
+
+      (use-package counsel
+        :commands (counsel-descbinds)
+        :bind (([remap execute-extended-command] . counsel-M-x)
+               ("C-x C-f" . counsel-find-file)
+               ("C-c g" . counsel-git)
+               ("C-c j" . counsel-git-grep)
+               ("C-c k" . counsel-ag)
+               ("C-x l" . counsel-locate)
+               ("M-y" . counsel-yank-pop)))
+
+      (use-package flycheck
+        :defer 2
+        :config (global-flycheck-mode))
+
+      (use-package ivy
+        :defer 1
+        :bind (("C-c C-r" . ivy-resume)
+               ("C-x C-b" . ivy-switch-buffer)
+               :map ivy-minibuffer-map
+               ("C-j" . ivy-call))
+        :diminish ivy-mode
+        :commands ivy-mode
+        :config
+        (ivy-mode 1))
+
+      (use-package magit
+        :defer
+        :if (executable-find "git")
+        :bind (("C-x g" . magit-status)
+               ("C-x G" . magit-dispatch-popup))
+        :init
+        (setq magit-completing-read-function 'ivy-completing-read))
+
+      (use-package projectile
+        :commands projectile-mode
+        :bind-keymap ("C-c p" . projectile-command-map)
+        :defer 5
+        :config
+        (projectile-global-mode))
+    '';
+
+    myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
+      (runCommand "default.el" {} ''
+         mkdir -p $out/share/emacs/site-lisp
+         cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
+       '')
+      company
+      counsel
+      flycheck
+      ivy
+      magit
+      projectile
+      use-package
+    ]));
+  };
+}
+```
+
+This provides a fairly full Emacs start file. It will be loaded in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
+
+Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
+
+```nix
+overrides = self: super: rec {
+  haskell-mode = self.melpaPackages.haskell-mode;
+  ...
+};
+((emacsPackagesFor emacs).overrideScope' overrides).withPackages
+  (p: with p; [
+    # here both these package will use haskell-mode of our own choice
+    ghc-mod
+    dante
+  ])
+```
diff --git a/doc/builders/packages/etc-files.section.md b/doc/builders/packages/etc-files.section.md
new file mode 100644
index 00000000000..2405a54634d
--- /dev/null
+++ b/doc/builders/packages/etc-files.section.md
@@ -0,0 +1,18 @@
+# /etc files {#etc}
+
+Certain calls in glibc require access to runtime files found in /etc such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
+
+On non-NixOS distributions these files are typically provided by packages (e.g. [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if it relies on these files being present.
+
+If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your _buildInputs_, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a _setup-hook_.
+
+```bash
+> nix-shell -p iana-etc
+
+[nix-shell:~]$ env | grep NIX_ETC
+NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/services
+NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
+```
+
+Nixpkgs's version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not set*, then it will attempt to find the files at the default location within _/etc_.
diff --git a/doc/builders/packages/firefox.section.md b/doc/builders/packages/firefox.section.md
new file mode 100644
index 00000000000..d6426981da7
--- /dev/null
+++ b/doc/builders/packages/firefox.section.md
@@ -0,0 +1,52 @@
+# Firefox {#sec-firefox}
+
+## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
+
+The `wrapFirefox` function allows you to pass policies, preferences and extensions that are made available to Firefox. With the help of `fetchFirefoxAddon` this allows you to build a Firefox version that already comes with addons pre-installed:
+
+```nix
+{
+  # Nix firefox addons only work with the firefox-esr package.
+  myFirefox = wrapFirefox firefox-esr-unwrapped {
+    nixExtensions = [
+      (fetchFirefoxAddon {
+        name = "ublock"; # Has to be unique!
+        url = "https://addons.mozilla.org/firefox/downloads/file/3679754/ublock_origin-1.31.0-an+fx.xpi";
+        sha256 = "1h768ljlh3pi23l27qp961v1hd0nbj2vasgy11bmcrlqp40zgvnr";
+      })
+    ];
+
+    extraPolicies = {
+      CaptivePortal = false;
+      DisableFirefoxStudies = true;
+      DisablePocket = true;
+      DisableTelemetry = true;
+      DisableFirefoxAccounts = true;
+      FirefoxHome = {
+        Pocket = false;
+        Snippets = false;
+      };
+      UserMessaging = {
+        ExtensionRecommendations = false;
+        SkipOnboarding = true;
+      };
+    };
+
+    extraPrefs = ''
+      // Show more ssl cert infos
+      lockPref("security.identityblock.show_extended_validation", true);
+    '';
+  };
+}
+```
+
+If `nixExtensions != null`, then all manually installed addons will be uninstalled from your browser profile.
+To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
+or type into the Firefox URL bar: `about:policies#documentation`.
+Nix-installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksummed and manual addons can't be installed. Also make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an addon from the `nixExtensions` array, rebuild and start Firefox, the removed addon will be completely removed with all of its settings.
+
+## Troubleshooting {#sec-firefox-troubleshooting}
+
+If addons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for addons, so Nix addons get disabled by the normal Firefox binary.
+
+If addons do not appear installed although they have been defined in your Nix configuration file, reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to Nix addon mode and then back to manual mode and then again to Nix addon mode.
+
diff --git a/doc/builders/packages/fish.section.md b/doc/builders/packages/fish.section.md
new file mode 100644
index 00000000000..3086bd68348
--- /dev/null
+++ b/doc/builders/packages/fish.section.md
@@ -0,0 +1,50 @@
+# Fish {#sec-fish}
+
+Fish is a "smart and user-friendly command line shell" with support for plugins.
+
+
+## Vendor Fish scripts {#sec-fish-vendor}
+
+Any package may ship its own Fish completions, configuration snippets, and
+functions. Those should be installed to
+`$out/share/fish/vendor_{completions,conf,functions}.d` respectively.
+
+When the `programs.fish.enable` and
+`programs.fish.vendor.{completions,config,functions}.enable` options from the
+NixOS Fish module are set to true, those paths are symlinked in the current
+system environment and automatically loaded by Fish.
+
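+As a sketch, a derivation might install a completion script into one of these vendor directories in its install phase (the package and file names are hypothetical):
+
+```nix
+stdenv.mkDerivation {
+  pname = "my-fish-snippets";  # hypothetical package
+  version = "0.1";
+  src = ./.;                   # assumed to contain my-command.fish
+  installPhase = ''
+    runHook preInstall
+    install -Dm644 my-command.fish $out/share/fish/vendor_completions.d/my-command.fish
+    runHook postInstall
+  '';
+}
+```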
+
+## Packaging Fish plugins {#sec-fish-plugins-pkg}
+
+While packages providing standalone executables belong to the top level,
+packages which have the sole purpose of extending Fish belong to the
+`fishPlugins` scope and should be registered in
+`pkgs/shells/fish/plugins/default.nix`.
+
+The `buildFishPlugin` utility function can be used to automatically copy Fish
+scripts from `$src/{completions,conf,conf.d,functions}` to the standard vendor
+installation paths. It also sets up the test environment so that the optional
+`checkPhase` is executed in a Fish shell with other already packaged plugins
+and package-local Fish functions specified in `checkPlugins` and
+`checkFunctionDirs` respectively.
+
+See `pkgs/shells/fish/plugins/pure.nix` for an example of Fish plugin package
+using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
+
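+A rough sketch of such a plugin expression, assuming the usual `mkDerivation`-style arguments and placeholder source details:
+
+```nix
+{ buildFishPlugin, fetchFromGitHub }:
+
+buildFishPlugin rec {
+  pname = "my-plugin";  # hypothetical plugin name
+  version = "1.0";
+  src = fetchFromGitHub {
+    owner = "someuser";
+    repo = pname;
+    rev = version;
+    sha256 = "0000000000000000000000000000000000000000000000000000";
+  };
+}
+```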
+
+## Fish wrapper {#sec-fish-wrapper}
+
+The `wrapFish` package is a wrapper around Fish which can be used to create
+Fish shells initialised with some plugins as well as completions, configuration
+snippets and functions sourced from the given paths. This provides a convenient
+way to test Fish plugins and scripts without having to alter the environment.
+
+```nix
+wrapFish {
+  pluginPkgs = with fishPlugins; [ pure foreign-env ];
+  completionDirs = [];
+  functionDirs = [];
+  confDirs = [ "/path/to/some/fish/init/dir/" ];
+}
+```
diff --git a/doc/builders/packages/fuse.section.md b/doc/builders/packages/fuse.section.md
new file mode 100644
index 00000000000..eb0023fcbc3
--- /dev/null
+++ b/doc/builders/packages/fuse.section.md
@@ -0,0 +1,45 @@
+# FUSE {#sec-fuse}
+
+Some packages rely on
+[FUSE](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) to provide
+support for additional filesystems not supported by the kernel.
+
+In general, FUSE software is primarily developed for Linux, but much of it can
+also run on macOS. Nixpkgs supports FUSE packages on macOS, but it requires
+[macFUSE](https://osxfuse.github.io) to be installed outside of Nix. macFUSE
+currently isn't packaged in Nixpkgs mainly because it includes a kernel
+extension, which isn't supported by Nix outside of NixOS.
+
+If a package fails to run on macOS with an error message similar to the
+following, it's a likely sign that you need to have macFUSE installed.
+
+    dyld: Library not loaded: /usr/local/lib/libfuse.2.dylib
+    Referenced from: /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
+    Reason: image not found
+    [1]    92299 abort      /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
+
+Package maintainers may often encounter the following error when building FUSE
+packages on macOS:
+
+    checking for fuse.h... no
+    configure: error: No fuse.h found.
+
+This happens on autoconf-based projects that use `AC_CHECK_HEADERS` or
+`AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
+is included in `buildInputs`. It happens because the libfuse headers throw an
+error on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do
+define `FUSE_USE_VERSION`, but only inside C source files. This results in the
+above error at configure time, because the configure script attempts to compile
+sample FUSE programs without defining `FUSE_USE_VERSION`.
+
+There are two possible solutions for this problem in Nixpkgs:
+
+1. Pass `FUSE_USE_VERSION` to the configure script by adding
+   `CFLAGS=-DFUSE_USE_VERSION=25` in `configureFlags`. The actual value would
+   have to match the definition used in the upstream source code.
+2. Remove `AC_CHECK_HEADERS` / `AC_CHECK_LIBS` for libfuse.
+
+However, a better solution might be to fix the build script upstream to use
+`PKG_CHECK_MODULES` instead. This approach wouldn't suffer from the problem that
+`AC_CHECK_HEADERS`/`AC_CHECK_LIBS` has at the price of introducing a dependency
+on pkg-config.
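+
+If you go with the first workaround, an overlay override might look roughly
+like this (a sketch; the package name and the FUSE API version are placeholders
+and must match the project in question):
+
+```nix
+final: prev: {
+  my-fuse-tool = prev.my-fuse-tool.overrideAttrs (old: {
+    # Define FUSE_USE_VERSION for the configure checks too, matching the
+    # value the project uses in its C sources.
+    configureFlags = (old.configureFlags or [])
+      ++ [ "CFLAGS=-DFUSE_USE_VERSION=25" ];
+  });
+}
+```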
diff --git a/doc/builders/packages/ibus.section.md b/doc/builders/packages/ibus.section.md
new file mode 100644
index 00000000000..2ce85467bb8
--- /dev/null
+++ b/doc/builders/packages/ibus.section.md
@@ -0,0 +1,38 @@
+# ibus-engines.typing-booster {#sec-ibus-typing-booster}
+
+This package is an ibus-based completion method to speed up typing.
+
+## Activating the engine {#sec-ibus-typing-booster-activate}
+
+IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
+
+On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
+
+```nix
+{ pkgs, ... }: {
+  i18n.inputMethod = {
+    enabled = "ibus";
+    ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
+  };
+}
+```
+
+## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
+
+The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
+
+```nix
+ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
+```
+
+_Note: each language passed to `langs` must be an attribute name in `pkgs.hunspellDicts`._
+
+## Built-in emoji picker {#sec-ibus-typing-booster-emoji-picker}
+
+The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed.
+
+On NixOS it can be installed using the following expression:
+
+```nix
+{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }
+```
diff --git a/doc/builders/packages/index.xml b/doc/builders/packages/index.xml
new file mode 100644
index 00000000000..206e1e49f1f
--- /dev/null
+++ b/doc/builders/packages/index.xml
@@ -0,0 +1,29 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         xml:id="chap-packages">
+ <title>Packages</title>
+ <para>
+  This chapter contains information about how to use and maintain the Nix expressions for a number of specific packages, such as the Linux kernel or X.org.
+ </para>
+ <xi:include href="citrix.section.xml" />
+ <xi:include href="dlib.section.xml" />
+ <xi:include href="eclipse.section.xml" />
+ <xi:include href="elm.section.xml" />
+ <xi:include href="emacs.section.xml" />
+ <xi:include href="firefox.section.xml" />
+ <xi:include href="fish.section.xml" />
+ <xi:include href="fuse.section.xml" />
+ <xi:include href="ibus.section.xml" />
+ <xi:include href="kakoune.section.xml" />
+ <xi:include href="linux.section.xml" />
+ <xi:include href="locales.section.xml" />
+ <xi:include href="etc-files.section.xml" />
+ <xi:include href="nginx.section.xml" />
+ <xi:include href="opengl.section.xml" />
+ <xi:include href="shell-helpers.section.xml" />
+ <xi:include href="steam.section.xml" />
+ <xi:include href="cataclysm-dda.section.xml" />
+ <xi:include href="urxvt.section.xml" />
+ <xi:include href="weechat.section.xml" />
+ <xi:include href="xorg.section.xml" />
+</chapter>
diff --git a/doc/builders/packages/kakoune.section.md b/doc/builders/packages/kakoune.section.md
new file mode 100644
index 00000000000..8e054777a75
--- /dev/null
+++ b/doc/builders/packages/kakoune.section.md
@@ -0,0 +1,9 @@
+# Kakoune {#sec-kakoune}
+
+Kakoune can be built to autoload plugins:
+
+```nix
+(kakoune.override {
+  plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
+})
+```
diff --git a/doc/builders/packages/linux.section.md b/doc/builders/packages/linux.section.md
new file mode 100644
index 00000000000..f669c720710
--- /dev/null
+++ b/doc/builders/packages/linux.section.md
@@ -0,0 +1,41 @@
+# Linux kernel {#sec-linux-kernel}
+
+The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/kernel`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel).
+
+The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
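+
+For example, a `kernelPatches` entry could look like this (a sketch; the patch
+file and configuration option are placeholders):
+
+```nix
+kernelPatches = [
+  {
+    name = "my-fix";
+    patch = ./my-fix.patch; # possibly compressed patch file
+    # Lines appended to the kernel .config (without the CONFIG_ prefix):
+    extraConfig = ''
+      DEBUG_INFO y
+    '';
+  }
+];
+```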
+
+The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
+
+```nix
+modulesTree = [kernel]
+  ++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
+  ++ ...;
+```
+
+How to add a new (major) version of the Linux kernel to Nixpkgs:
+
+1.  Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
+
+2.  Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
+
+3.  Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
+
+    1.  Make a copy of the old config (e.g. `config-2.6.21-i686-smp`) and name it after the new version (e.g. `config-2.6.22-i686-smp`).
+
+    2.  Copy the config file for this platform (e.g. `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
+
+    3.  Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e. don’t enable some feature on `i686` and disable it on `x86_64`).
+
+    4.  If needed you can also run `make menuconfig`:
+
+        ```ShellSession
+        $ nix-env -f "<nixpkgs>" -iA ncurses
+        $ export NIX_CFLAGS_LINK=-lncurses
+        $ make menuconfig ARCH=arch
+        ```
+
+    5.  Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`).
+
+4.  Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
+
+5.  It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.
diff --git a/doc/builders/packages/locales.section.md b/doc/builders/packages/locales.section.md
new file mode 100644
index 00000000000..e5a03700481
--- /dev/null
+++ b/doc/builders/packages/locales.section.md
@@ -0,0 +1,5 @@
+# Locales {#locales}
+
+To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable.
+
+On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is to export the `LOCALE_ARCHIVE` variable, pointing it at `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding the `allLocales` and `locales` parameters of the package.
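+
+For example, a trimmed-down locale archive can be built and wired up like this
+(a sketch; the selected locales are only an example):
+
+```nix
+with import <nixpkgs> {};
+
+let
+  myLocales = glibcLocales.override {
+    allLocales = false;
+    locales = [ "en_US.UTF-8/UTF-8" "de_DE.UTF-8/UTF-8" ];
+  };
+in
+mkShell {
+  # String attributes of mkShell become environment variables inside the shell.
+  LOCALE_ARCHIVE = "${myLocales}/lib/locale/locale-archive";
+}
+```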
diff --git a/doc/builders/packages/nginx.section.md b/doc/builders/packages/nginx.section.md
new file mode 100644
index 00000000000..154c21f9b36
--- /dev/null
+++ b/doc/builders/packages/nginx.section.md
@@ -0,0 +1,11 @@
+# Nginx {#sec-nginx}
+
+[Nginx](https://nginx.org) is a reverse proxy and lightweight webserver.
+
+## ETags on static files served from the Nix store {#sec-nginx-etag}
+
+HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
+
+Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
+
+As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.
diff --git a/doc/builders/packages/opengl.section.md b/doc/builders/packages/opengl.section.md
new file mode 100644
index 00000000000..ee7f3af98cf
--- /dev/null
+++ b/doc/builders/packages/opengl.section.md
@@ -0,0 +1,15 @@
+# OpenGL {#sec-opengl}
+
+OpenGL support varies depending on which hardware is used and which drivers are available and loaded.
+
+Broadly, we support both GL vendors: Mesa and NVIDIA.
+
+## NixOS Desktop {#nixos-desktop}
+
+The NixOS desktop or other non-headless configurations are the primary target for OpenGL libraries and applications. The current solution for discovering which drivers are available is based on [libglvnd](https://gitlab.freedesktop.org/glvnd/libglvnd). `libglvnd` performs "vendor-neutral dispatch", trying a variety of techniques to find the system's GL implementation. In practice, this will be either via standard GLX for X11 users or EGL for Wayland users, and supporting either NVIDIA or Mesa extensions.
+
+## Nix on GNU/Linux {#nix-on-gnulinux}
+
+If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
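+
+One way to do this is with a small wrapper script (a sketch, not an official
+mechanism; adapt the library list to your setup):
+
+```nix
+with import <nixpkgs> {};
+
+# Prepend the Nixpkgs GL stack to LD_LIBRARY_PATH, then run the given program.
+writeShellScriptBin "nixgl-run" ''
+  export LD_LIBRARY_PATH=${lib.makeLibraryPath [ libglvnd mesa.drivers ]}''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
+  exec "$@"
+''
+```
+
+The resulting `nixgl-run <program>` then starts the program with the Nixpkgs Mesa stack ahead of the system libraries.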
+
+For proprietary video drivers you might have luck with also adding the corresponding video driver package.
diff --git a/doc/builders/packages/shell-helpers.section.md b/doc/builders/packages/shell-helpers.section.md
new file mode 100644
index 00000000000..57b8619c500
--- /dev/null
+++ b/doc/builders/packages/shell-helpers.section.md
@@ -0,0 +1,12 @@
+# Interactive shell helpers {#sec-shell-helpers}
+
+Some packages provide shell integration to be more useful. But unlike other systems, Nix doesn't have a standard `share` directory location. This is why a number of `PACKAGE-share` scripts are shipped that print the location of the corresponding shared folder. The current list of such packages is as follows:
+
+- `fzf` : `fzf-share`
+
+E.g. `fzf` can then be used in the `.bashrc` like this:
+
+```bash
+source "$(fzf-share)/completion.bash"
+source "$(fzf-share)/key-bindings.bash"
+```
diff --git a/doc/builders/packages/steam.section.md b/doc/builders/packages/steam.section.md
new file mode 100644
index 00000000000..3ce33c9b60e
--- /dev/null
+++ b/doc/builders/packages/steam.section.md
@@ -0,0 +1,63 @@
+# Steam {#sec-steam}
+
+## Steam in Nix {#sec-steam-nix}
+
+Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that on Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the Steam binary, which is also in \$HOME.
+
+Nix problems and constraints:
+
+- We don't have `/bin/bash` and many scripts point there. Similarly for `/usr/bin/python`.
+- We don't have the dynamic loader in `/lib`.
+- The `steam.sh` script in \$HOME cannot be patched, as it is checked and rewritten by Steam.
+- The Steam binary cannot be patched either; it is checked as well.
+
+The current approach to deploy Steam in NixOS is to compose an FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.
+
+## How to play {#sec-steam-play}
+
+Use `programs.steam.enable = true;` if you want to add Steam to `systemPackages` and also enable a few workarounds, as well as Steam controller support and support for other Steam-compatible controllers such as the DualShock 4 or the Nintendo Switch Pro Controller.
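+
+In NixOS configuration this is a one-liner (a minimal sketch):
+
+```nix
+{ ... }: {
+  programs.steam.enable = true;
+}
+```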
+
+## Troubleshooting {#sec-steam-troub}
+
+- **Steam fails to start. What do I do?**
+
+  Try to run
+
+  ```ShellSession
+  strace steam
+  ```
+
+  to see what is causing steam to fail.
+
+- **Using the FOSS Radeon or nouveau (nvidia) drivers**
+
+  - The `newStdcpp` parameter was removed in NixOS 17.09 and should not be needed anymore.
+  - Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
+
+    ```
+    steam.sh: line 713: 7842 Segmentation fault (core dumped)
+    ```
+
+    have a look at [this pull request](https://github.com/NixOS/nixpkgs/pull/20269).
+
+- **Java**
+
+  1. There is no java in steam chrootenv by default. If you get a message like
+
+    ```
+    /home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
+    ```
+
+    you need to add
+
+    ```nix
+    steam.override { withJava = true; };
+    ```
+
+## steam-run {#sec-steam-run}
+
+The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect an FHS environment. To use it, install the `steam-run` package and run the game with
+
+```
+steam-run ./foo
+```
diff --git a/doc/builders/packages/unfree.xml b/doc/builders/packages/unfree.xml
new file mode 100644
index 00000000000..3d4f199f8fb
--- /dev/null
+++ b/doc/builders/packages/unfree.xml
@@ -0,0 +1,13 @@
+<section xmlns="http://docbook.org/ns/docbook"
+         xmlns:xlink="http://www.w3.org/1999/xlink"
+         xml:id="unfree-software">
+ <title>Unfree software</title>
+
+ <para>
+  All users of Nixpkgs are free software users, and many users (and developers) of Nixpkgs want to limit and tightly control their exposure to unfree software. At the same time, many users need (or want) to run some specific pieces of proprietary software. Nixpkgs includes some expressions for unfree software packages. By default unfree software cannot be installed and doesn’t show up in searches. To allow installing unfree software in a single Nix invocation one can export <literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
+ </para>
+
+ <para>
+  Fine-grained control is possible by defining the <literal>allowUnfreePredicate</literal> function in the config; it takes the <literal>mkDerivation</literal> parameter attrset and returns <literal>true</literal> for unfree packages that should be allowed.
+ </para>
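+
+ <para>
+  For example, to allow only a few specific unfree packages (a sketch; the package name listed here is a placeholder and <literal>lib</literal> is assumed to be in scope):
+<programlisting>
+allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
+  "example-unfree-package"
+];
+</programlisting>
+ </para>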
+</section>
diff --git a/doc/builders/packages/urxvt.section.md b/doc/builders/packages/urxvt.section.md
new file mode 100644
index 00000000000..2d1196d9227
--- /dev/null
+++ b/doc/builders/packages/urxvt.section.md
@@ -0,0 +1,71 @@
+# Urxvt {#sec-urxvt}
+
+Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
+
+## Configuring urxvt {#sec-urxvt-conf}
+
+In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as
+
+```nix
+rxvt-unicode.override {
+  configure = { availablePlugins, ... }: {
+    plugins = with availablePlugins; [ perls resize-font vtwheel ];
+  };
+}
+```
+
+If the `configure` function returns an attrset without the `plugins` attribute, `availablePlugins` will be used automatically.
+
+In order to add plugins but also keep all default plugins installed, it is possible to use the following method:
+
+```nix
+rxvt-unicode.override {
+  configure = { availablePlugins, ... }: {
+    plugins = (builtins.attrValues availablePlugins) ++ [ custom-plugin ];
+  };
+}
+```
+
+To get a list of all the plugins available, open the Nix REPL and run
+
+```ShellSession
+$ nix repl
+:l <nixpkgs>
+map (p: p.name) pkgs.rxvt-unicode.plugins
+```
+
+Alternatively, if your shell is bash or zsh and has completion enabled, simply type `nixpkgs.rxvt-unicode.plugins.<tab>`.
+
+In addition to `plugins`, the options `extraDeps` and `perlDeps` can be used to install extra packages. `extraDeps` can be used, for example, to provide `xsel` (a clipboard manager) to the clipboard plugin, without installing it globally:
+
+```nix
+rxvt-unicode.override {
+  configure = { availablePlugins, ... }: {
+    extraDeps = [ xsel ];
+  };
+}
+```
+
+`perlDeps` is a handy way to provide Perl packages to your custom plugins (in `$HOME/.urxvt/ext`). For example, if you need `AnyEvent` you can do:
+
+```nix
+rxvt-unicode.override {
+  configure = { availablePlugins, ... }: {
+    perlDeps = with perlPackages; [ AnyEvent ];
+  };
+}
+```
+
+## Packaging urxvt plugins {#sec-urxvt-pkg}
+
+Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
+
+A plugin can be any kind of derivation; the only requirement is that it installs its perl scripts to `$out/lib/urxvt/perl`. Look at existing plugins for examples.
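+
+A minimal plugin expression might therefore look like this (a sketch; the
+plugin name and source are hypothetical):
+
+```nix
+{ stdenv }:
+
+stdenv.mkDerivation {
+  pname = "urxvt-my-plugin";
+  version = "0.1";
+  src = ./.; # assumed to contain a perl script named `my-plugin`
+  installPhase = ''
+    # The only hard requirement: perl scripts end up in $out/lib/urxvt/perl
+    install -Dm644 my-plugin -t $out/lib/urxvt/perl
+  '';
+}
+```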
+
+If the plugin is itself a perl package that needs to be imported from other plugins or scripts, add the following passthrough:
+
+```nix
+passthru.perlPackages = [ "self" ];
+```
+
+This will make the urxvt wrapper pick up the dependency and set up the perl path accordingly.
diff --git a/doc/builders/packages/weechat.section.md b/doc/builders/packages/weechat.section.md
new file mode 100644
index 00000000000..e4e956b908e
--- /dev/null
+++ b/doc/builders/packages/weechat.section.md
@@ -0,0 +1,85 @@
+# Weechat {#sec-weechat}
+
+Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
+
+```nix
+weechat.override {configure = {availablePlugins, ...}: {
+    plugins = with availablePlugins; [ python perl ];
+  };
+}
+```
+
+If the `configure` function returns an attrset without the `plugins` attribute, `availablePlugins` will be used automatically.
+
+The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
+
+The python and perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
+
+```nix
+weechat.override { configure = {availablePlugins, ...}: {
+    plugins = with availablePlugins; [
+            (python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
+        ];
+    };
+}
+```
+
+In order to also keep all default plugins installed, it is possible to use the following method:
+
+```nix
+weechat.override { configure = { availablePlugins, ... }: {
+  plugins = builtins.attrValues (availablePlugins // {
+    python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
+  });
+}; }
+```
+
+WeeChat allows setting defaults on startup using the `--run-command` option. The `configure` method can be used to pass commands to the program:
+
+```nix
+weechat.override {
+  configure = { availablePlugins, ... }: {
+    init = ''
+      /set foo bar
+      /server add libera irc.libera.chat
+    '';
+  };
+}
+```
+
+Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
+
+Additionally it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
+
+```nix
+weechat.override {
+  configure = { availablePlugins, ... }: {
+    scripts = with pkgs.weechatScripts; [
+      weechat-xmpp weechat-matrix-bridge wee-slack
+    ];
+    init = ''
+      /set plugins.var.python.jabber.key "val"
+    '';
+  };
+}
+```
+
+In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An example derivation looks like this:
+
+```nix
+{ stdenv, fetchurl }:
+
+stdenv.mkDerivation {
+  name = "exemplary-weechat-script";
+  src = fetchurl {
+    url = "https://scripts.tld/your-scripts.tar.gz";
+    sha256 = "...";
+  };
+  passthru.scripts = [ "foo.py" "bar.lua" ];
+  installPhase = ''
+    mkdir -p $out/share
+    cp foo.py $out/share
+    cp bar.lua $out/share
+  '';
+}
+```
diff --git a/doc/builders/packages/xorg.section.md b/doc/builders/packages/xorg.section.md
new file mode 100644
index 00000000000..ae885f92346
--- /dev/null
+++ b/doc/builders/packages/xorg.section.md
@@ -0,0 +1,34 @@
+# X.org {#sec-xorg}
+
+The Nix expressions for the X.org packages reside in `pkgs/servers/x11/xorg/default.nix`. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file `pkgs/servers/x11/xorg/overrides.nix`, in which you can override or add to the derivations produced by the generator.
+
+## Katamari Tarballs {#katamari-tarballs}
+
+X.org upstream releases used to include [katamari](https://en.wiktionary.org/wiki/%E3%81%8B%E3%81%9F%E3%81%BE%E3%82%8A) releases, which included a holistic recommended version for each tarball, up until 7.7. To create a list of tarballs in a katamari release:
+
+```ShellSession
+export release="X11R7.7"
+export url="mirror://xorg/$release/src/everything/"
+cat $(PRINT_PATH=1 nix-prefetch-url $url | tail -n 1) \
+  | perl -e 'while (<>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'url'}$2\n"; }; }' \
+  | sort > "tarballs-$release.list"
+```
+
+## Individual Tarballs {#individual-tarballs}
+
+The upstream release process for [X11R7.8](https://x.org/wiki/Releases/7.8/) does not include a planned katamari. Instead, each component of X.org is released as its own tarball. We maintain `pkgs/servers/x11/xorg/tarballs.list` as a list of tarballs for each individual package. This list includes X.org core libraries and protocol descriptions, extra newer X11 interface libraries, like `xorg.libxcb`, and classic utilities which are largely unused but still available if needed, like `xorg.imake`.
+
+## Generating Nix Expressions {#generating-nix-expressions}
+
+The generator is invoked as follows:
+
+```ShellSession
+cd pkgs/servers/x11/xorg
+<tarballs.list perl ./generate-expr-from-tarballs.pl
+```
+
+For each of the tarballs in the `.list` files, the script downloads it, unpacks it, and searches its `configure.ac` and `*.pc.in` files for dependencies. This information is used to generate `default.nix`. The generator caches downloaded tarballs between runs. Pay close attention to the `NOT FOUND: $NAME` messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
+
+## Overriding the Generator {#overriding-the-generator}
+
+If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, `patches` or a `postInstall` hook), you should modify `pkgs/servers/x11/xorg/overrides.nix`.
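+
+An override typically amends the generated derivation with `overrideAttrs`,
+roughly like this (a sketch; the attribute name, the patch, and the exact shape
+of `overrides.nix` are assumptions):
+
+```nix
+# Inside the attribute set produced by overrides.nix:
+xf86videointel = super.xf86videointel.overrideAttrs (attrs: {
+  patches = (attrs.patches or []) ++ [ ./my-local.patch ];
+});
+```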
diff --git a/doc/builders/special.xml b/doc/builders/special.xml
new file mode 100644
index 00000000000..2f84599cdd4
--- /dev/null
+++ b/doc/builders/special.xml
@@ -0,0 +1,11 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         xml:id="chap-special">
+ <title>Special builders</title>
+ <para>
+  This chapter describes several special builders.
+ </para>
+ <xi:include href="special/fhs-environments.section.xml" />
+ <xi:include href="special/mkshell.section.xml" />
+ <xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
+</chapter>
diff --git a/doc/builders/special/fhs-environments.section.md b/doc/builders/special/fhs-environments.section.md
new file mode 100644
index 00000000000..cacad261e28
--- /dev/null
+++ b/doc/builders/special/fhs-environments.section.md
@@ -0,0 +1,49 @@
+# buildFHSUserEnv {#sec-fhs-environments}
+
+`buildFHSUserEnv` provides a way to build and run FHS-compatible lightweight sandboxes. It creates an isolated root with a bound `/nix/store`, so its disk footprint is quite small. This allows one to run software which is hard or infeasible to patch for NixOS, such as third-party source trees with FHS assumptions, games distributed as tarballs, or software with integrity checking and/or external self-updating binaries. It uses the Linux namespaces feature to create temporary lightweight environments which are destroyed after all child processes exit, without requiring root privileges. Accepted arguments are:
+
+- `name`: Environment name.
+- `targetPkgs`: Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
+- `multiPkgs`: Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
+- `extraBuildCommands`: Additional commands to be executed for finalizing the directory structure.
+- `extraBuildCommandsMulti`: Like `extraBuildCommands`, but executed only on multilib architectures.
+- `extraOutputsToInstall`: Additional derivation outputs to be linked for both target and multi-architecture packages.
+- `extraInstallCommands`: Additional commands to be executed for finalizing the derivation with the runner script.
+- `runScript`: A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to `bash`.
+- `profile`: Optional script for `/etc/profile` within the sandbox.
+
+One can create a simple environment using a `shell.nix` like that:
+
+```nix
+{ pkgs ? import <nixpkgs> {} }:
+
+(pkgs.buildFHSUserEnv {
+  name = "simple-x11-env";
+  targetPkgs = pkgs: (with pkgs;
+    [ udev
+      alsa-lib
+    ]) ++ (with pkgs.xorg;
+    [ libX11
+      libXcursor
+      libXrandr
+    ]);
+  multiPkgs = pkgs: (with pkgs;
+    [ udev
+      alsa-lib
+    ]);
+  runScript = "bash";
+}).env
+```
+
+Running `nix-shell` would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change `runScript` to the application path, e.g. `./bin/start.sh` -- relative paths are supported.
+
+Additionally, the FHS builder links all relocated gsettings-schemas (the glib setup-hook moves them to `share/gsettings-schemas/${name}/glib-2.0/schemas`) to their standard FHS location. This means you don't need to wrap binaries with `wrapGAppsHook`.
diff --git a/doc/builders/special/invalidateFetcherByDrvHash.section.md b/doc/builders/special/invalidateFetcherByDrvHash.section.md
new file mode 100644
index 00000000000..7c2f03a64b7
--- /dev/null
+++ b/doc/builders/special/invalidateFetcherByDrvHash.section.md
@@ -0,0 +1,31 @@
+
+## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
+
+Use the derivation hash to invalidate the output via name, for testing.
+
+Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
+
+Normally, fixed output derivations can and should be cached by their output
+hash only, but for testing we want to re-fetch every time the fetcher changes.
+
+Changes to the fetcher become apparent in the drvPath, which is a hash of
+how to fetch, rather than a fixed store path.
+By inserting this hash into the name, we can make sure to re-run the fetcher
+every time the fetcher changes.
+
+This relies on the assumption that Nix isn't clever enough to reuse its
+database of local store contents to optimize fetching.
+
+You might notice that the "salted" name derives from the normal invocation,
+not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
+function twice: once to get a derivation hash, and again to produce the final
+fixed output derivation.
+
+Example:
+
+```nix
+tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
+  name = "nix-source";
+  url = "https://github.com/NixOS/nix";
+  rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
+  sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
+};
+```
diff --git a/doc/builders/special/mkshell.section.md b/doc/builders/special/mkshell.section.md
new file mode 100644
index 00000000000..73cc57f485b
--- /dev/null
+++ b/doc/builders/special/mkshell.section.md
@@ -0,0 +1,37 @@
+# pkgs.mkShell {#sec-pkgs-mkShell}
+
+`pkgs.mkShell` is a specialized `stdenv.mkDerivation` that removes some
+repetition when using it with `nix-shell` (or `nix develop`).
+
+## Usage {#sec-pkgs-mkShell-usage}
+
+Here is a common usage example:
+
+```nix
+{ pkgs ? import <nixpkgs> {} }:
+pkgs.mkShell {
+  packages = [ pkgs.gnumake ];
+
+  inputsFrom = [ pkgs.hello pkgs.gnutar ];
+
+  shellHook = ''
+    export DEBUG=1
+  '';
+}
+```
+
+## Attributes
+
+* `name` (default: `nix-shell`). Set the name of the derivation.
+* `packages` (default: `[]`). Add executable packages to the `nix-shell` environment.
+* `inputsFrom` (default: `[]`). Add build dependencies of the listed derivations to the `nix-shell` environment.
+* `shellHook` (default: `""`). Bash statements that are executed by `nix-shell`.
+
+... all the attributes of `stdenv.mkDerivation`.
+
+## Building the shell
+
+The derivation's output will contain a text file that references all of its
+build inputs. This is useful in CI, where we want to make sure that every
+derivation and its dependencies build properly, or when creating a GC root so
+that the build dependencies don't get garbage-collected.
diff --git a/doc/builders/trivial-builders.chapter.md b/doc/builders/trivial-builders.chapter.md
new file mode 100644
index 00000000000..779a0a801b4
--- /dev/null
+++ b/doc/builders/trivial-builders.chapter.md
@@ -0,0 +1,223 @@
+# Trivial builders {#chap-trivial-builders}
+
+Nixpkgs provides a couple of functions that help with building derivations. The most important one, `stdenv.mkDerivation`, has already been documented above. The following functions wrap `stdenv.mkDerivation`, making it easier to use in certain cases.
+
+## `runCommand` {#trivial-builder-runCommand}
+
+This takes three arguments, `name`, `env`, and `buildCommand`. `name` is just the name that Nix will append to the store path in the same way that `stdenv.mkDerivation` uses its `name` attribute. `env` is an attribute set specifying environment variables that will be set for this derivation. These attributes are then passed to the wrapped `stdenv.mkDerivation`. `buildCommand` specifies the commands that will be run to create this derivation. Note that you will need to create `$out` for Nix to register the command as successful.
+
+An example of using `runCommand` is provided below.
+
+```nix
+(import <nixpkgs> {}).runCommand "my-example" {} ''
+  echo My example command is running
+
+  mkdir $out
+
+  echo I can write data to the Nix store > $out/message
+
+  echo I can also run basic commands like:
+
+  echo ls
+  ls
+
+  echo whoami
+  whoami
+
+  echo date
+  date
+''
+```
+
+## `runCommandCC` {#trivial-builder-runCommandCC}
+
+This works just like `runCommand`. The only difference is that it also provides a C compiler in `buildCommand`'s environment. To minimize your dependencies, you should only use this if you are sure you will need a C compiler as part of running your command.
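+
+For instance, one could compile a small C program like this (a sketch):
+
+```nix
+(import <nixpkgs> {}).runCommandCC "hello-c" {} ''
+  mkdir -p $out/bin
+  cat > hello.c <<'EOF'
+  #include <stdio.h>
+  int main(void) { printf("hello from runCommandCC\n"); return 0; }
+  EOF
+  # `cc` is the compiler wrapper provided by stdenv
+  cc hello.c -o $out/bin/hello-c
+''
+```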
+
+## `runCommandLocal` {#trivial-builder-runCommandLocal}
+
+Variant of `runCommand` that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
+
+::: {.note}
+This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`.
+:::
+
+## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
+
+These functions write `text` to the Nix store. This is useful for creating scripts from Nix expressions. `writeTextFile` takes an attribute set and expects two arguments, `name` and `text`. `name` corresponds to the name used in the Nix store path. `text` will be the contents of the file. You can also set `executable` to true to make this file have the executable bit set.
+
+Many more commands wrap `writeTextFile` including `writeText`, `writeTextDir`, `writeScript`, and `writeScriptBin`. These are convenience functions over `writeTextFile`.
+
+Here are a few examples:
+```nix
+# Writes my-file to /nix/store/<store path>
+writeTextFile {
+  name = "my-file";
+  text = ''
+    Contents of File
+  '';
+}
+# See also the `writeText` helper function below.
+
+# Writes executable my-file to /nix/store/<store path>/bin/my-file
+writeTextFile {
+  name = "my-file";
+  text = ''
+    Contents of File
+  '';
+  executable = true;
+  destination = "/bin/my-file";
+}
+# Writes contents of file to /nix/store/<store path>
+writeText "my-file"
+  ''
+  Contents of File
+  '';
+# Writes contents of file to /nix/store/<store path>/share/my-file
+writeTextDir "share/my-file"
+  ''
+  Contents of File
+  '';
+# Writes my-file to /nix/store/<store path> and makes executable
+writeScript "my-file"
+  ''
+  Contents of File
+  '';
+# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
+writeScriptBin "my-file"
+  ''
+  Contents of File
+  '';
+# Writes my-file to /nix/store/<store path> and makes executable.
+writeShellScript "my-file"
+  ''
+  Contents of File
+  '';
+# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
+writeShellScriptBin "my-file"
+  ''
+  Contents of File
+  '';
+
+```
+
+## `concatTextFile`, `concatText`, `concatScript` {#trivial-builder-concatText}
+
+These functions concatenate `files` into a single file in the Nix store. This is useful for configuration files structured in lines of text. `concatTextFile` takes an attribute set and expects two arguments, `name` and `files`. `name` corresponds to the name used in the Nix store path. `files` are the files to be concatenated. You can also set `executable` to true to make this file have the executable bit set.
+`concatText` and `concatScript` are simple wrappers over `concatTextFile`.
+
+Here are a few examples:
+```nix
+
+# Writes my-file to /nix/store/<store path>
+concatTextFile {
+  name = "my-file";
+  files = [ drv1 "${drv2}/path/to/file" ];
+}
+# See also the `concatText` helper function below.
+
+# Writes executable my-file to /nix/store/<store path>/bin/my-file
+concatTextFile {
+  name = "my-file";
+  files = [ drv1 "${drv2}/path/to/file" ];
+  executable = true;
+  destination = "/bin/my-file";
+}
+# Writes contents of files to /nix/store/<store path>
+concatText "my-file" [ file1 file2 ]
+
+# Writes contents of files to /nix/store/<store path>
+concatScript "my-file" [ file1 file2 ]
+```
+
+## `writeShellApplication` {#trivial-builder-writeShellApplication}
+
+This can be used to easily produce a shell script that has some dependencies (`runtimeInputs`). It automatically sets the `PATH` of the script to contain all of the listed inputs, sets some sanity shellopts (`errexit`, `nounset`, `pipefail`), and checks the resulting script with [`shellcheck`](https://github.com/koalaman/shellcheck).
+
+For example, look at the following code:
+
+```nix
+writeShellApplication {
+  name = "show-nixos-org";
+
+  runtimeInputs = [ curl w3m ];
+
+  text = ''
+    curl -s 'https://nixos.org' | w3m -dump -T text/html
+  '';
+}
+```
+
+Unlike with a plain `writeShellScriptBin`, there is no need to manually write out `${curl}/bin/curl`;
+setting the `PATH` is handled by `writeShellApplication`. Moreover, the script is checked with
+`shellcheck` for stricter validation.
+
+## `symlinkJoin` {#trivial-builder-symlinkJoin}
+
+This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, `name` and `paths`. `name` is the name used in the Nix store path for the created derivation. `paths` is a list of paths that will be symlinked. These paths can be Nix store derivations or any other subdirectories contained within.
+Here is an example:
+```nix
+# adds symlinks of hello and stack to current build and prints "links added"
+symlinkJoin { name = "myexample"; paths = [ pkgs.hello pkgs.stack ]; postBuild = "echo links added"; }
+```
+This creates a derivation with a directory structure like the following:
+```
+/nix/store/sglsr5g079a5235hy29da3mq3hv8sjmm-myexample
+|-- bin
+|   |-- hello -> /nix/store/qy93dp4a3rqyn2mz63fbxjg228hffwyw-hello-2.10/bin/hello
+|   `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/bin/stack
+`-- share
+    |-- bash-completion
+    |   `-- completions
+    |       `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/bash-completion/completions/stack
+    |-- fish
+    |   `-- vendor_completions.d
+    |       `-- stack.fish -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/fish/vendor_completions.d/stack.fish
+...
+```
+
+## `writeReferencesToFile` {#trivial-builder-writeReferencesToFile}
+
+Writes the closure of transitive dependencies to a file.
+
+This produces the equivalent of `nix-store -q --requisites`.
+
+For example,
+
+```nix
+writeReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')
+```
+
+produces an output path `/nix/store/<hash>-runtime-deps` containing
+
+```
+/nix/store/<hash>-hello-2.10
+/nix/store/<hash>-hi
+/nix/store/<hash>-libidn2-2.3.0
+/nix/store/<hash>-libunistring-0.9.10
+/nix/store/<hash>-glibc-2.32-40
+```
+
+You can see that this includes `hi`, the original input path;
+`hello`, which is a direct reference; and also
+the other paths that are indirectly required to run `hello`.
+
+## `writeDirectReferencesToFile` {#trivial-builder-writeDirectReferencesToFile}
+
+Writes the set of references of the given store path to the output file, that is, its immediate dependencies.
+
+This produces the equivalent of `nix-store -q --references`.
+
+For example,
+
+```nix
+writeDirectReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')
+```
+
+produces an output path `/nix/store/<hash>-runtime-references` containing
+
+```
+/nix/store/<hash>-hello-2.10
+```
+
+but none of `hello`'s dependencies, because those are not referenced directly
+by `hi`'s output.