Diffstat (limited to 'nixos/doc/manual/administration')
-rw-r--r--  nixos/doc/manual/administration/boot-problems.section.md  41
-rw-r--r--  nixos/doc/manual/administration/cleaning-store.chapter.md  62
-rw-r--r--  nixos/doc/manual/administration/container-networking.section.md  44
-rw-r--r--  nixos/doc/manual/administration/containers.chapter.md  28
-rw-r--r--  nixos/doc/manual/administration/control-groups.chapter.md  59
-rw-r--r--  nixos/doc/manual/administration/declarative-containers.section.md  48
-rw-r--r--  nixos/doc/manual/administration/imperative-containers.section.md  115
-rw-r--r--  nixos/doc/manual/administration/logging.chapter.md  38
-rw-r--r--  nixos/doc/manual/administration/maintenance-mode.section.md  11
-rw-r--r--  nixos/doc/manual/administration/network-problems.section.md  21
-rw-r--r--  nixos/doc/manual/administration/rebooting.chapter.md  30
-rw-r--r--  nixos/doc/manual/administration/rollback.section.md  38
-rw-r--r--  nixos/doc/manual/administration/running.xml  21
-rw-r--r--  nixos/doc/manual/administration/service-mgmt.chapter.md  120
-rw-r--r--  nixos/doc/manual/administration/store-corruption.section.md  28
-rw-r--r--  nixos/doc/manual/administration/troubleshooting.chapter.md  12
-rw-r--r--  nixos/doc/manual/administration/user-sessions.chapter.md  43
17 files changed, 759 insertions, 0 deletions
diff --git a/nixos/doc/manual/administration/boot-problems.section.md b/nixos/doc/manual/administration/boot-problems.section.md
new file mode 100644
index 00000000000..bca4fdc3fb3
--- /dev/null
+++ b/nixos/doc/manual/administration/boot-problems.section.md
@@ -0,0 +1,41 @@
+# Boot Problems {#sec-boot-problems}
+
+If NixOS fails to boot, there are a number of kernel command line parameters that may help you to identify or fix the issue. You can add these parameters in the GRUB boot menu by pressing “e” to modify the selected boot entry and editing the line starting with `linux`. The following are some useful kernel command line parameters that are recognised by the NixOS boot scripts or by systemd:
+
+`boot.shell_on_fail`
+
+: Allows the user to start a root shell if something goes wrong in stage 1 of the boot process (the initial ramdisk). This is disabled by default because there is no authentication for the root shell.
+
+`boot.debug1`
+
+: Start an interactive shell in stage 1 before anything useful has been done. That is, no modules have been loaded and no file systems have been mounted, except for `/proc` and `/sys`.
+
+`boot.debug1devices`
+
+: Like `boot.debug1`, but runs stage1 until kernel modules are loaded and device nodes are created. This may help with e.g. making the keyboard work.
+
+`boot.debug1mounts`
+
+: Like `boot.debug1` or `boot.debug1devices`, but runs stage1 until all filesystems that are mounted during initrd are mounted (see [neededForBoot](#opt-fileSystems._name_.neededForBoot)). As a motivating example, this could be useful if you've forgotten to set [neededForBoot](#opt-fileSystems._name_.neededForBoot) on a file system.
+
+`boot.trace`
+
+: Print every shell command executed by the stage 1 and 2 boot scripts.
+
+`single`
+
+: Boot into rescue mode (a.k.a. single user mode). This will cause systemd to start nothing but the unit `rescue.target`, which runs `sulogin` to prompt for the root password and start a root login shell. Exiting the shell causes the system to continue with the normal boot process.
+
+`systemd.log_level=debug` `systemd.log_target=console`
+
+: Make systemd very verbose and send log messages to the console instead of the journal. For more parameters recognised by systemd, see systemd(1).
+
+In addition, these arguments are recognised by the live image only:
+
+`live.nixos.passwd=password`
+
+: Set the password for the `nixos` live user. This can be used for SSH access if there are issues using the terminal.
+
+Notice that for `boot.shell_on_fail`, `boot.debug1`, `boot.debug1devices`, and `boot.debug1mounts`, if you did **not** select "start the new shell as pid 1", and you `exit` from the new shell, boot will proceed normally from the point where it failed, as if you'd chosen "ignore the error and continue".
+
+If no login prompts or X11 login screens appear (e.g. due to hanging dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start rescue mode (described above). (Also note that since most units have a 90-second timeout before systemd gives up on them, the `agetty` login prompts should appear eventually unless something is very wrong.)
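+
+While debugging, it can be handy to make one of these parameters persist across reboots instead of retyping it in the GRUB editor. As a sketch (assuming you can still boot some working configuration), the parameter can be added to the kernel command line of every generation built from your configuration via `boot.kernelParams`:
+
+```nix
+# Hypothetical example: bake a debug parameter into the kernel command
+# line of each generation built from this configuration. Remove it again
+# once the boot problem is resolved.
+boot.kernelParams = [ "boot.shell_on_fail" ];
+```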
diff --git a/nixos/doc/manual/administration/cleaning-store.chapter.md b/nixos/doc/manual/administration/cleaning-store.chapter.md
new file mode 100644
index 00000000000..c9140d0869c
--- /dev/null
+++ b/nixos/doc/manual/administration/cleaning-store.chapter.md
@@ -0,0 +1,62 @@
+# Cleaning the Nix Store {#sec-nix-gc}
+
+Nix has a purely functional model, meaning that packages are never
+upgraded in place. Instead new versions of packages end up in a
+different location in the Nix store (`/nix/store`). You should
+periodically run Nix's *garbage collector* to remove old, unreferenced
+packages. This is easy:
+
+```ShellSession
+$ nix-collect-garbage
+```
+
+Alternatively, you can use a systemd unit that does the same in the
+background:
+
+```ShellSession
+# systemctl start nix-gc.service
+```
+
+You can tell NixOS in `configuration.nix` to run this unit automatically
+at certain points in time, for instance, every night at 03:15:
+
+```nix
+nix.gc.automatic = true;
+nix.gc.dates = "03:15";
+```
+
+The commands above do not remove garbage collector roots, such as old
+system configurations. Thus they do not remove the ability to roll back
+to previous configurations. The following command deletes old roots,
+removing the ability to roll back to them:
+
+```ShellSession
+$ nix-collect-garbage -d
+```
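+
+If you use the automatic garbage collection unit described above, extra flags such as `-d` or `--delete-older-than` can be passed through `nix.gc.options`. A sketch (the 30-day retention period is just an illustration):
+
+```nix
+nix.gc.automatic = true;
+nix.gc.dates = "03:15";
+# Also delete generations older than 30 days; adjust the period to taste.
+nix.gc.options = "--delete-older-than 30d";
+```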
+
+You can also do this for specific profiles, e.g.
+
+```ShellSession
+$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
+```
+
+Note that NixOS system configurations are stored in the profile
+`/nix/var/nix/profiles/system`.
+
+Another way to reclaim disk space (often as much as 40% of the size of
+the Nix store) is to run Nix's store optimiser, which seeks out
+identical files in the store and replaces them with hard links to a
+single copy.
+
+```ShellSession
+$ nix-store --optimise
+```
+
+Since this command needs to read the entire Nix store, it can take quite
+a while to finish.
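+
+Optimisation can also be applied continuously, as paths are added to the store, rather than in one big batch. A sketch, assuming the `nix.autoOptimiseStore` option is available in your NixOS version:
+
+```nix
+# Hard-link identical store files automatically when paths are added,
+# instead of running nix-store --optimise by hand.
+nix.autoOptimiseStore = true;
+```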
+
+## NixOS Boot Entries {#sect-nixos-gc-boot-entries}
+
+If your `/boot` partition runs out of space, then after clearing old
+profiles you must rebuild your system with `nixos-rebuild boot` or
+`nixos-rebuild switch` to update the `/boot` partition and reclaim the
+space.
diff --git a/nixos/doc/manual/administration/container-networking.section.md b/nixos/doc/manual/administration/container-networking.section.md
new file mode 100644
index 00000000000..0873768376c
--- /dev/null
+++ b/nixos/doc/manual/administration/container-networking.section.md
@@ -0,0 +1,44 @@
+# Container Networking {#sec-container-networking}
+
+When you create a container using `nixos-container create`, it gets its
+own private IPv4 address in the range `10.233.0.0/16`. You can get the
+container's IPv4 address as follows:
+
+```ShellSession
+# nixos-container show-ip foo
+10.233.4.2
+
+$ ping -c1 10.233.4.2
+64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
+```
+
+Networking is implemented using a pair of virtual Ethernet devices. The
+network interface in the container is called `eth0`, while the matching
+interface in the host is called `ve-container-name` (e.g., `ve-foo`).
+The container has its own network namespace and the `CAP_NET_ADMIN`
+capability, so it can perform arbitrary network configuration such as
+setting up firewall rules, without affecting or having access to the
+host's network.
+
+By default, containers cannot talk to the outside network. If you want
+that, you should set up Network Address Translation (NAT) rules on the
+host to rewrite container traffic to use your external IP address. This
+can be accomplished using the following configuration on the host:
+
+```nix
+networking.nat.enable = true;
+networking.nat.internalInterfaces = ["ve-+"];
+networking.nat.externalInterface = "eth0";
+```
+
+where `eth0` should be replaced with the desired external interface.
+Note that `ve-+` is a wildcard that matches all container interfaces.
+
+If you are using Network Manager, you need to explicitly prevent it from
+managing container interfaces:
+
+```nix
+networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
+```
+
+You may need to restart your system for the changes to take effect.
diff --git a/nixos/doc/manual/administration/containers.chapter.md b/nixos/doc/manual/administration/containers.chapter.md
new file mode 100644
index 00000000000..ea51f91f698
--- /dev/null
+++ b/nixos/doc/manual/administration/containers.chapter.md
@@ -0,0 +1,28 @@
+# Container Management {#ch-containers}
+
+NixOS allows you to easily run other NixOS instances as *containers*.
+Containers are a light-weight approach to virtualisation that runs
+software in the container at the same speed as in the host system. NixOS
+containers share the Nix store of the host, making container creation
+very efficient.
+
+::: {.warning}
+Currently, NixOS containers are not perfectly isolated from the host
+system. This means that a user with root access to the container can do
+things that affect the host. So you should not give container root
+access to untrusted users.
+:::
+
+NixOS containers can be created in two ways: imperatively, using the
+command `nixos-container`, and declaratively, by specifying them in your
+`configuration.nix`. The declarative approach implies that containers
+get upgraded along with your host system when you run `nixos-rebuild`,
+which is often not what you want. By contrast, in the imperative
+approach, containers are configured and updated independently from the
+host system.
+
+```{=docbook}
+<xi:include href="imperative-containers.section.xml" />
+<xi:include href="declarative-containers.section.xml" />
+<xi:include href="container-networking.section.xml" />
+```
diff --git a/nixos/doc/manual/administration/control-groups.chapter.md b/nixos/doc/manual/administration/control-groups.chapter.md
new file mode 100644
index 00000000000..abe8dd80b5a
--- /dev/null
+++ b/nixos/doc/manual/administration/control-groups.chapter.md
@@ -0,0 +1,59 @@
+# Control Groups {#sec-cgroups}
+
+To keep track of the processes in a running system, systemd uses
+*control groups* (cgroups). A control group is a set of processes used
+to allocate resources such as CPU, memory or I/O bandwidth. There can be
+multiple control group hierarchies, allowing each kind of resource to be
+managed independently.
+
+The command `systemd-cgls` lists all control groups in the `systemd`
+hierarchy, which is what systemd uses to keep track of the processes
+belonging to each service or user session:
+
+```ShellSession
+$ systemd-cgls
+├─user
+│ └─eelco
+│   └─c1
+│     ├─ 2567 -:0
+│     ├─ 2682 kdeinit4: kdeinit4 Running...
+│     ├─ ...
+│     └─10851 sh -c less -R
+└─system
+  ├─httpd.service
+  │ ├─2444 httpd -f /nix/store/3pyacby5cpr55a03qwbnndizpciwq161-httpd.conf -DNO_DETACH
+  │ └─...
+  ├─dhcpcd.service
+  │ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
+  └─ ...
+```
+
+Similarly, `systemd-cgls cpu` shows the cgroups in the CPU hierarchy,
+which allows per-cgroup CPU scheduling priorities. By default, every
+systemd service gets its own CPU cgroup, while all user sessions are in
+the top-level CPU cgroup. This ensures, for instance, that a thousand
+run-away processes in the `httpd.service` cgroup cannot starve the CPU
+for one process in the `postgresql.service` cgroup. (By contrast, if
+they were in the same cgroup, the PostgreSQL process would get only
+1/1001 of the cgroup's CPU time.) You can limit a service's CPU share in
+`configuration.nix`:
+
+```nix
+systemd.services.httpd.serviceConfig.CPUShares = 512;
+```
+
+By default, every cgroup has 1024 CPU shares, so this will halve the CPU
+allocation of the `httpd.service` cgroup.
+
+There also is a `memory` hierarchy that controls memory allocation
+limits; by default, all processes are in the top-level cgroup, so any
+service or session can exhaust all available memory. Per-cgroup memory
+limits can be specified in `configuration.nix`; for instance, to limit
+`httpd.service` to 512 MiB of RAM (excluding swap):
+
+```nix
+systemd.services.httpd.serviceConfig.MemoryLimit = "512M";
+```
+
+The command `systemd-cgtop` shows a continuously updated list of all
+cgroups with their CPU and memory usage.
diff --git a/nixos/doc/manual/administration/declarative-containers.section.md b/nixos/doc/manual/administration/declarative-containers.section.md
new file mode 100644
index 00000000000..0d9d4017ed8
--- /dev/null
+++ b/nixos/doc/manual/administration/declarative-containers.section.md
@@ -0,0 +1,48 @@
+# Declarative Container Specification {#sec-declarative-containers}
+
+You can also specify containers and their configuration in the host's
+`configuration.nix`. For example, the following specifies that there
+shall be a container named `database` running PostgreSQL:
+
+```nix
+containers.database =
+  { config =
+      { config, pkgs, ... }:
+      { services.postgresql.enable = true;
+        services.postgresql.package = pkgs.postgresql_10;
+      };
+  };
+```
+
+If you run `nixos-rebuild switch`, the container will be built. If the
+container was already running, it will be updated in place, without
+rebooting. The container can be configured to start automatically by
+setting `containers.database.autoStart = true` in its configuration.
+
+By default, declarative containers share the network namespace of the
+host, meaning that they can listen on (privileged) ports. However, they
+cannot change the network configuration. You can give a container its
+own network as follows:
+
+```nix
+containers.database = {
+  privateNetwork = true;
+  hostAddress = "192.168.100.10";
+  localAddress = "192.168.100.11";
+};
+```
+
+This gives the container a private virtual Ethernet interface with IP
+address `192.168.100.11`, which is hooked up to a virtual Ethernet
+interface on the host with IP address `192.168.100.10`. (See the next
+section for details on container networking.)
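+
+When a container has a private network, you may still want to expose individual services to the outside. A sketch using the `forwardPorts` option (the port numbers are illustrative):
+
+```nix
+containers.database = {
+  privateNetwork = true;
+  hostAddress = "192.168.100.10";
+  localAddress = "192.168.100.11";
+  # Forward TCP port 5432 on the host to the same port in the container.
+  forwardPorts = [
+    { containerPort = 5432; hostPort = 5432; protocol = "tcp"; }
+  ];
+};
+```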
+
+To disable the container, just remove it from `configuration.nix` and
+run `nixos-rebuild switch`. Note that this will not delete the root
+directory of the container in `/var/lib/containers`. Containers can be
+destroyed using the imperative method: `nixos-container destroy foo`.
+
+Declarative containers can be started and stopped using the
+corresponding systemd service, e.g.
+`systemctl start container@database`.
diff --git a/nixos/doc/manual/administration/imperative-containers.section.md b/nixos/doc/manual/administration/imperative-containers.section.md
new file mode 100644
index 00000000000..05196bf5d81
--- /dev/null
+++ b/nixos/doc/manual/administration/imperative-containers.section.md
@@ -0,0 +1,115 @@
+# Imperative Container Management {#sec-imperative-containers}
+
+We'll cover imperative container management using `nixos-container`
+first. Be aware that container management is currently only possible as
+`root`.
+
+You create a container with identifier `foo` as follows:
+
+```ShellSession
+# nixos-container create foo
+```
+
+This creates the container's root directory in `/var/lib/containers/foo`
+and a small configuration file in `/etc/containers/foo.conf`. It also
+builds the container's initial system configuration and stores it in
+`/nix/var/nix/profiles/per-container/foo/system`. You can modify the
+initial configuration of the container on the command line. For
+instance, to create a container that has `sshd` running, with the given
+public key for `root`:
+
+```ShellSession
+# nixos-container create foo --config '
+  services.openssh.enable = true;
+  users.users.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];
+'
+```
+
+By default, the next free address in the `10.233.0.0/16` subnet will be
+chosen as the container's IP. This behaviour can be altered by setting
+`--host-address` and `--local-address`:
+
+```ShellSession
+# nixos-container create test --config-file test-container.nix \
+    --local-address 10.235.1.2 --host-address 10.235.1.1
+```
+
+Creating a container does not start it. To start the container, run:
+
+```ShellSession
+# nixos-container start foo
+```
+
+This command will return as soon as the container has booted and has
+reached `multi-user.target`. On the host, the container runs within a
+systemd unit called `container@container-name.service`. Thus, if
+something went wrong, you can get status info using `systemctl`:
+
+```ShellSession
+# systemctl status container@foo
+```
+
+If the container has started successfully, you can log in as root using
+the `root-login` operation:
+
+```ShellSession
+# nixos-container root-login foo
+[root@foo:~]#
+```
+
+Note that only root on the host can do this (since there is no
+authentication). You can also get a regular login prompt using the
+`login` operation, which is available to all users on the host:
+
+```ShellSession
+# nixos-container login foo
+foo login: alice
+Password: ***
+```
+
+With `nixos-container run`, you can execute arbitrary commands in the
+container:
+
+```ShellSession
+# nixos-container run foo -- uname -a
+Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
+```
+
+There are several ways to change the configuration of the container.
+First, on the host, you can edit
+`/var/lib/containers/name/etc/nixos/configuration.nix`, and run
+
+```ShellSession
+# nixos-container update foo
+```
+
+This will build and activate the new configuration. You can also specify
+a new configuration on the command line:
+
+```ShellSession
+# nixos-container update foo --config '
+  services.httpd.enable = true;
+  services.httpd.adminAddr = "foo@example.org";
+  networking.firewall.allowedTCPPorts = [ 80 ];
+'
+
+# curl http://$(nixos-container show-ip foo)/
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
+```
+
+However, note that this will overwrite the container's
+`/etc/nixos/configuration.nix`.
+
+Alternatively, you can change the configuration from within the
+container itself by running `nixos-rebuild switch` inside the container.
+Note that the container by default does not have a copy of the NixOS
+channel, so you should run `nix-channel --update` first.
+
+Containers can be stopped and started using `nixos-container stop` and
+`nixos-container start`, respectively, or by using `systemctl` on the
+container's service unit. To destroy a container, including its file
+system, do
+
+```ShellSession
+# nixos-container destroy foo
+```
diff --git a/nixos/doc/manual/administration/logging.chapter.md b/nixos/doc/manual/administration/logging.chapter.md
new file mode 100644
index 00000000000..4ce6f5e9fa7
--- /dev/null
+++ b/nixos/doc/manual/administration/logging.chapter.md
@@ -0,0 +1,38 @@
+# Logging {#sec-logging}
+
+System-wide logging is provided by systemd's *journal*, which subsumes
+traditional logging daemons such as syslogd and klogd. Log entries are
+kept in binary files in `/var/log/journal/`. The command `journalctl`
+allows you to see the contents of the journal. For example,
+
+```ShellSession
+$ journalctl -b
+```
+
+shows all journal entries since the last reboot. (The output of
+`journalctl` is piped into `less` by default.) You can use various
+options and match operators to restrict output to messages of interest.
+For instance, to get all messages from PostgreSQL:
+
+```ShellSession
+$ journalctl -u postgresql.service
+-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
+...
+Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG:  database system is shut down
+-- Reboot --
+Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG:  database system was shut down at 2013-01-07 15:44:14 CET
+Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG:  database system is ready to accept connections
+```
+
+Or to get all messages since the last reboot that have at least a
+"critical" severity level:
+
+```ShellSession
+$ journalctl -b -p crit
+Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
+Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
+```
+
+The system journal is readable by root and by users in the `wheel` and
+`systemd-journal` groups. All users have a private journal that can be
+read using `journalctl`.
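+
+The journal's disk usage can be bounded through journald's own configuration file. A sketch using the `services.journald.extraConfig` option (the size limit is illustrative; see journald.conf(5) for the available settings):
+
+```nix
+# Keep the persistent journal below 1 GiB in total.
+services.journald.extraConfig = ''
+  SystemMaxUse=1G
+'';
+```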
diff --git a/nixos/doc/manual/administration/maintenance-mode.section.md b/nixos/doc/manual/administration/maintenance-mode.section.md
new file mode 100644
index 00000000000..0aec013c0a9
--- /dev/null
+++ b/nixos/doc/manual/administration/maintenance-mode.section.md
@@ -0,0 +1,11 @@
+# Maintenance Mode {#sec-maintenance-mode}
+
+You can enter rescue mode by running:
+
+```ShellSession
+# systemctl rescue
+```
+
+This will eventually give you a single-user root shell. Systemd will
+stop (almost) all system services. To get out of maintenance mode, just
+exit from the rescue shell.
diff --git a/nixos/doc/manual/administration/network-problems.section.md b/nixos/doc/manual/administration/network-problems.section.md
new file mode 100644
index 00000000000..d360120d72d
--- /dev/null
+++ b/nixos/doc/manual/administration/network-problems.section.md
@@ -0,0 +1,21 @@
+# Network Problems {#sec-nix-network-issues}
+
+Nix uses a so-called *binary cache* to optimise building a package from
+source into downloading it as a pre-built binary. That is, whenever a
+command like `nixos-rebuild` needs a path in the Nix store, Nix will try
+to download that path from the Internet rather than build it from
+source. The default binary cache is `https://cache.nixos.org/`. If this
+cache is unreachable, Nix operations may take a long time due to HTTP
+connection timeouts. You can disable the use of the binary cache by
+adding `--option use-binary-caches false`, e.g.
+
+```ShellSession
+# nixos-rebuild switch --option use-binary-caches false
+```
+
+If you have an alternative binary cache at your disposal, you can use it
+instead:
+
+```ShellSession
+# nixos-rebuild switch --option binary-caches http://my-cache.example.org/
+```
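+
+Instead of passing the cache on every invocation, an alternative cache can also be configured permanently in `configuration.nix`. A sketch, assuming the `nix.binaryCaches` option is available in your NixOS version and using an illustrative cache URL:
+
+```nix
+# Use a local cache in addition to, or instead of, the default
+# https://cache.nixos.org/.
+nix.binaryCaches = [ "http://my-cache.example.org/" ];
+```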
diff --git a/nixos/doc/manual/administration/rebooting.chapter.md b/nixos/doc/manual/administration/rebooting.chapter.md
new file mode 100644
index 00000000000..ec4b889b164
--- /dev/null
+++ b/nixos/doc/manual/administration/rebooting.chapter.md
@@ -0,0 +1,30 @@
+# Rebooting and Shutting Down {#sec-rebooting}
+
+The system can be shut down (and automatically powered off) by doing:
+
+```ShellSession
+# shutdown
+```
+
+This is equivalent to running `systemctl poweroff`.
+
+To reboot the system, run
+
+```ShellSession
+# reboot
+```
+
+which is equivalent to `systemctl reboot`. Alternatively, you can
+quickly reboot the system using `kexec`, which bypasses the BIOS by
+directly loading the new kernel into memory:
+
+```ShellSession
+# systemctl kexec
+```
+
+The machine can be suspended to RAM (if supported) using `systemctl suspend`,
+and suspended to disk using `systemctl hibernate`.
+
+These commands can be run by any user who is logged in locally, i.e. on
+a virtual console or in X11; otherwise, the user is asked for
+authentication.
diff --git a/nixos/doc/manual/administration/rollback.section.md b/nixos/doc/manual/administration/rollback.section.md
new file mode 100644
index 00000000000..290d685a2a1
--- /dev/null
+++ b/nixos/doc/manual/administration/rollback.section.md
@@ -0,0 +1,38 @@
+# Rolling Back Configuration Changes {#sec-rollback}
+
+After running `nixos-rebuild` to switch to a new configuration, you may
+find that the new configuration doesn't work very well. In that case,
+there are several ways to return to a previous configuration.
+
+First, the GRUB boot manager allows you to boot into any previous
+configuration that hasn't been garbage-collected. These configurations
+can be found under the GRUB submenu "NixOS - All configurations". This
+is especially useful if the new configuration fails to boot. After the
+system has booted, you can make the selected configuration the default
+for subsequent boots:
+
+```ShellSession
+# /run/current-system/bin/switch-to-configuration boot
+```
+
+Second, you can switch to the previous configuration in a running
+system:
+
+```ShellSession
+# nixos-rebuild switch --rollback
+```
+
+This is equivalent to running:
+
+```ShellSession
+# /nix/var/nix/profiles/system-N-link/bin/switch-to-configuration switch
+```
+
+where `N` is the number of the NixOS system configuration. To get a
+list of the available configurations, do:
+
+```ShellSession
+$ ls -l /nix/var/nix/profiles/system-*-link
+...
+lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055
+```
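+
+The number of configurations kept in the GRUB menu, and hence available for rollback, can also be bounded declaratively. A sketch assuming the `boot.loader.grub.configurationLimit` option:
+
+```nix
+# Show at most 10 generations in the boot menu; older generations can
+# then be removed by the garbage collector.
+boot.loader.grub.configurationLimit = 10;
+```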
diff --git a/nixos/doc/manual/administration/running.xml b/nixos/doc/manual/administration/running.xml
new file mode 100644
index 00000000000..d9fcc1aee26
--- /dev/null
+++ b/nixos/doc/manual/administration/running.xml
@@ -0,0 +1,21 @@
+<part xmlns="http://docbook.org/ns/docbook"
+      xmlns:xlink="http://www.w3.org/1999/xlink"
+      xmlns:xi="http://www.w3.org/2001/XInclude"
+      version="5.0"
+      xml:id="ch-running">
+ <title>Administration</title>
+ <partintro xml:id="ch-running-intro">
+  <para>
+   This chapter describes various aspects of managing a running NixOS system,
+   such as how to use the <command>systemd</command> service manager.
+  </para>
+ </partintro>
+ <xi:include href="../from_md/administration/service-mgmt.chapter.xml" />
+ <xi:include href="../from_md/administration/rebooting.chapter.xml" />
+ <xi:include href="../from_md/administration/user-sessions.chapter.xml" />
+ <xi:include href="../from_md/administration/control-groups.chapter.xml" />
+ <xi:include href="../from_md/administration/logging.chapter.xml" />
+ <xi:include href="../from_md/administration/cleaning-store.chapter.xml" />
+ <xi:include href="../from_md/administration/containers.chapter.xml" />
+ <xi:include href="../from_md/administration/troubleshooting.chapter.xml" />
+</part>
diff --git a/nixos/doc/manual/administration/service-mgmt.chapter.md b/nixos/doc/manual/administration/service-mgmt.chapter.md
new file mode 100644
index 00000000000..bb0f9b62e91
--- /dev/null
+++ b/nixos/doc/manual/administration/service-mgmt.chapter.md
@@ -0,0 +1,120 @@
+# Service Management {#sec-systemctl}
+
+In NixOS, all system services are started and monitored using the
+systemd program. systemd is the "init" process of the system (i.e. PID
+1), the parent of all other processes. It manages a set of so-called
+"units", which can be things like system services (programs), but also
+mount points, swap files, devices, targets (groups of units) and more.
+Units can have complex dependencies; for instance, one unit can require
+that another unit must be successfully started before the first unit can
+be started. When the system boots, it starts a unit named
+`default.target`; the dependencies of this unit cause all system
+services to be started, file systems to be mounted, swap files to be
+activated, and so on.
+
+## Interacting with a running systemd {#sect-nixos-systemd-general}
+
+The command `systemctl` is the main way to interact with `systemd`. The
+following paragraphs demonstrate ways to interact with any OS running
+systemd as its init system; NixOS is no exception. The [next section
+](#sect-nixos-systemd-nixos) explains NixOS-specific things worth
+knowing.
+
+Without any arguments, `systemctl` lists the status of active units:
+
+```ShellSession
+$ systemctl
+-.mount          loaded active mounted   /
+swapfile.swap    loaded active active    /swapfile
+sshd.service     loaded active running   SSH Daemon
+graphical.target loaded active active    Graphical Interface
+...
+```
+
+You can ask for detailed status information about a unit, for instance,
+the PostgreSQL database service:
+
+```ShellSession
+$ systemctl status postgresql.service
+postgresql.service - PostgreSQL Server
+          Loaded: loaded (/nix/store/pn3q73mvh75gsrl8w7fdlfk3fq5qm5mw-unit/postgresql.service)
+          Active: active (running) since Mon, 2013-01-07 15:55:57 CET; 9h ago
+        Main PID: 2390 (postgres)
+          CGroup: name=systemd:/system/postgresql.service
+                  ├─2390 postgres
+                  ├─2418 postgres: writer process
+                  ├─2419 postgres: wal writer process
+                  ├─2420 postgres: autovacuum launcher process
+                  ├─2421 postgres: stats collector process
+                  └─2498 postgres: zabbix zabbix [local] idle
+
+Jan 07 15:55:55 hagbard postgres[2394]: [1-1] LOG:  database system was shut down at 2013-01-07 15:55:05 CET
+Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG:  database system is ready to accept connections
+Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG:  autovacuum launcher started
+Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
+```
+
+Note that this shows the status of the unit (active and running), all
+the processes belonging to the service, as well as the most recent log
+messages from the service.
+
+Units can be stopped, started or restarted:
+
+```ShellSession
+# systemctl stop postgresql.service
+# systemctl start postgresql.service
+# systemctl restart postgresql.service
+```
+
+These operations are synchronous: they wait until the service has
+finished starting or stopping (or has failed). Starting a unit will
+cause the dependencies of that unit to be started as well (if
+necessary).
+
+## systemd in NixOS {#sect-nixos-systemd-nixos}
+
+Packages in Nixpkgs sometimes ship systemd units, usually in
+`#pkg-out#/lib/systemd/`. Putting such a package in
+`environment.systemPackages` doesn't make the service available to
+users or the system.
+
+To enable a systemd *system* service using the unit provided by an
+upstream package, use (e.g.):
+
+```nix
+systemd.packages = [ pkgs.packagekit ];
+```
+
+Usually, NixOS modules written by the community do the above, plus take
+care of other details. If a module exists for the service you are
+interested in, you probably only need to set
+`services.#name#.enable = true;`. These services are defined in
+Nixpkgs' [`nixos/modules/` directory
+](https://github.com/NixOS/nixpkgs/tree/master/nixos/modules). If the
+service is simple enough, the above method should work, and start the
+service on boot.
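+
+If no module exists, a *system* service can also be declared directly in `configuration.nix` through the `systemd.services` options. A minimal sketch with a hypothetical unit name, assuming `pkgs` is in scope as usual in a module:
+
+```nix
+systemd.services.my-hello = {
+  # Hypothetical one-shot service started at boot.
+  description = "Hypothetical example service";
+  wantedBy = [ "multi-user.target" ];
+  serviceConfig = {
+    Type = "oneshot";
+    ExecStart = "${pkgs.hello}/bin/hello";
+  };
+};
+```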
+
+*User* systemd services, on the other hand, should be treated
+differently. Given a package that has a systemd unit file at
+`#pkg-out#/lib/systemd/user/`, using [](#opt-systemd.packages) will
+allow you to start the service via `systemctl --user start`, but it
+won't start automatically on login. However, you can imperatively
+enable it by adding the package's attribute to
+[](#opt-systemd.packages) and then running (e.g.):
+
+```ShellSession
+$ mkdir -p ~/.config/systemd/user/default.target.wants
+$ ln -s /run/current-system/sw/lib/systemd/user/syncthing.service ~/.config/systemd/user/default.target.wants/
+$ systemctl --user daemon-reload
+$ systemctl --user enable syncthing.service
+```
+
+If you are interested in a timer file, use `timers.target.wants` instead
+of `default.target.wants` in the first and second commands.
+
+Using `systemctl --user enable syncthing.service` instead of the above
+will work, but it will use the absolute path of `syncthing.service` for
+the symlink, and this path is in `/nix/store/.../lib/systemd/user/`.
+Hence [garbage collection](#sec-nix-gc) will remove that file, and you
+will wind up with a broken symlink in your systemd configuration, which
+in turn will prevent the service / timer from starting on login.
diff --git a/nixos/doc/manual/administration/store-corruption.section.md b/nixos/doc/manual/administration/store-corruption.section.md
new file mode 100644
index 00000000000..bd8a5772b37
--- /dev/null
+++ b/nixos/doc/manual/administration/store-corruption.section.md
@@ -0,0 +1,28 @@
+# Nix Store Corruption {#sec-nix-store-corruption}
+
+After a system crash, it's possible for files in the Nix store to become
+corrupted. (For instance, the Ext4 file system has the tendency to
+replace un-synced files with zero bytes.) NixOS tries hard to prevent
+this from happening: it performs a `sync` before switching to a new
+configuration, and Nix's database is fully transactional. If corruption
+still occurs, you may be able to fix it automatically.
+
+If the corruption is in a path in the closure of the NixOS system
+configuration, you can fix it by doing
+
+```ShellSession
+# nixos-rebuild switch --repair
+```
+
+This will cause Nix to check every path in the closure, and if its
+cryptographic hash differs from the hash recorded in Nix's database, the
+path is rebuilt or redownloaded.
+
+You can also scan the entire Nix store for corrupt paths:
+
+```ShellSession
+# nix-store --verify --check-contents --repair
+```
+
+Any corrupt paths will be redownloaded if they're available in a binary
+cache; otherwise, they cannot be repaired.
diff --git a/nixos/doc/manual/administration/troubleshooting.chapter.md b/nixos/doc/manual/administration/troubleshooting.chapter.md
new file mode 100644
index 00000000000..548456eaf6d
--- /dev/null
+++ b/nixos/doc/manual/administration/troubleshooting.chapter.md
@@ -0,0 +1,12 @@
+# Troubleshooting {#ch-troubleshooting}
+
+This chapter describes solutions to common problems you might encounter
+when you manage your NixOS system.
+
+```{=docbook}
+<xi:include href="boot-problems.section.xml" />
+<xi:include href="maintenance-mode.section.xml" />
+<xi:include href="rollback.section.xml" />
+<xi:include href="store-corruption.section.xml" />
+<xi:include href="network-problems.section.xml" />
+```
diff --git a/nixos/doc/manual/administration/user-sessions.chapter.md b/nixos/doc/manual/administration/user-sessions.chapter.md
new file mode 100644
index 00000000000..5ff468b3012
--- /dev/null
+++ b/nixos/doc/manual/administration/user-sessions.chapter.md
@@ -0,0 +1,43 @@
+# User Sessions {#sec-user-sessions}
+
+Systemd keeps track of all users who are logged into the system (e.g. on
+a virtual console or remotely via SSH). The command `loginctl` allows
+querying and manipulating user sessions. For instance, to list all user
+sessions:
+
+```ShellSession
+$ loginctl
+   SESSION        UID USER             SEAT
+        c1        500 eelco            seat0
+        c3          0 root             seat0
+        c4        500 alice
+```
+
+This shows that two users are logged in locally, while another is logged
+in remotely. ("Seats" are essentially the combinations of displays and
+input devices attached to the system; usually, there is only one seat.)
+To get information about a session:
+
+```ShellSession
+$ loginctl session-status c3
+c3 - root (0)
+           Since: Tue, 2013-01-08 01:17:56 CET; 4min 42s ago
+          Leader: 2536 (login)
+            Seat: seat0; vc3
+             TTY: /dev/tty3
+         Service: login; type tty; class user
+           State: online
+          CGroup: name=systemd:/user/root/c3
+                  ├─ 2536 /nix/store/10mn4xip9n7y9bxqwnsx7xwx2v2g34xn-shadow-4.1.5.1/bin/login --
+                  ├─10339 -bash
+                  └─10355 w3m nixos.org
+```
+
+This shows that the user is logged in on virtual console 3. It also
+lists the processes belonging to this session. Since systemd keeps track
+of this, you can terminate a session in a way that ensures that all the
+session's processes are gone:
+
+```ShellSession
+# loginctl terminate-session c3
+```