path: root/nixos/doc/manual/configuration/kubernetes.xml
<chapter xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         xmlns:xi="http://www.w3.org/2001/XInclude"
         version="5.0"
         xml:id="sec-kubernetes">
 <title>Kubernetes</title>
 <para>
  The NixOS Kubernetes module is a collective term for a handful of individual
  submodules implementing the Kubernetes cluster components.
 </para>
 <para>
  There are generally two ways of enabling Kubernetes on NixOS. One way is to
  enable and configure cluster components appropriately by hand:
<programlisting>
services.kubernetes = {
  apiserver.enable = true;
  controllerManager.enable = true;
  scheduler.enable = true;
  addonManager.enable = true;
  proxy.enable = true;
  flannel.enable = true;
};
</programlisting>
  Another way is to assign cluster roles ("master" and/or "node") to the host.
  Assigning the master role enables the apiserver, controllerManager,
  scheduler, addonManager, kube-proxy and etcd:
<programlisting>
<xref linkend="opt-services.kubernetes.roles"/> = [ "master" ];
</programlisting>
  Assigning the node role enables only the kubelet and kube-proxy:
<programlisting>
<xref linkend="opt-services.kubernetes.roles"/> = [ "node" ];
</programlisting>
  Assigning both the master and node roles is useful if you want a single node
  Kubernetes cluster for development or testing purposes:
<programlisting>
<xref linkend="opt-services.kubernetes.roles"/> = [ "master" "node" ];
</programlisting>
  Note: Assigning either role will also default both
  <xref linkend="opt-services.kubernetes.flannel.enable"/> and
  <xref linkend="opt-services.kubernetes.easyCerts"/> to true. This sets up
  flannel as the CNI plugin and enables automatic PKI bootstrapping.
 </para>
 <para>
  As of Kubernetes 1.10, opening non-TLS-enabled ports on Kubernetes components
  has been deprecated. Therefore, as of NixOS 19.03, all plain HTTP ports are
  disabled by default. While opening insecure ports is still possible, it is
  recommended not to bind them to interfaces other than loopback. To re-enable
  the insecure port on the apiserver, see the options:
  <xref linkend="opt-services.kubernetes.apiserver.insecurePort"/> and
  <xref linkend="opt-services.kubernetes.apiserver.insecureBindAddress"/>
 </para>
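 <para>
  For example, the insecure apiserver port can be re-enabled while keeping it
  bound to the loopback interface; the port number below is only an
  illustration:
<programlisting>
services.kubernetes.apiserver = {
  insecurePort = 8080;
  insecureBindAddress = "127.0.0.1";
};
</programlisting>
 </para>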
 <note>
  <para>
    As of NixOS 19.03, it is mandatory to configure:
    <xref linkend="opt-services.kubernetes.masterAddress"/>. The masterAddress
    must be resolvable and routable by all cluster nodes. In single node
    clusters, this can be set to <literal>localhost</literal>.
  </para>
 </note>
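 <para>
  For a single node cluster, for example, the master address can simply be set
  to the local host:
<programlisting>
services.kubernetes.masterAddress = "localhost";
</programlisting>
 </para>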
 <para>
  Role-based access control (RBAC) authorization mode is enabled by default.
  This means that anonymous requests to the apiserver secure port will be
  rejected with a permission denied error. All cluster components must
  therefore be configured with x509 certificates for two-way TLS
  communication. The x509 certificate subject section determines the roles and
  permissions granted by the apiserver for clusterwide or namespaced
  operations. See also:
  <link
     xlink:href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">
  Using RBAC Authorization</link>.
 </para>
 <para>
  The NixOS kubernetes module provides an option for automatic certificate
  bootstrapping and configuration,
  <xref linkend="opt-services.kubernetes.easyCerts"/>. The PKI bootstrapping
  process involves setting up a certificate authority (CA) daemon (cfssl) on
  the kubernetes master node. cfssl generates a CA-cert for the cluster, and
  uses the CA-cert for signing subordinate certs issued to each of the cluster
  components. Subsequently, the certmgr daemon monitors active certificates and
  renews them when needed. For single node Kubernetes clusters, setting
  <xref linkend="opt-services.kubernetes.easyCerts"/> = true is sufficient and
  no further action is required. Joining additional nodes to an existing
  cluster, on the other hand, requires establishing initial trust.
 </para>
 <para>
  To add new nodes to the cluster: On any (non-master) cluster node where
  <xref linkend="opt-services.kubernetes.easyCerts"/> is enabled, the helper
  script <literal>nixos-kubernetes-node-join</literal> is available on PATH.
  Given a token on stdin, it will copy the token to the kubernetes secrets
  directory and restart the certmgr service. As requested certificates are
  issued, the script will restart kubernetes cluster components as needed for
  them to pick up new keypairs.
 </para>
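 <para>
  For example, assuming the cluster join token has been transferred to the new
  node in a file (the file name here is only an illustration):
<programlisting>
cat /root/apitoken.secret | nixos-kubernetes-node-join
</programlisting>
 </para>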
 <note>
  <para>
   Multi-master (HA) clusters are not supported by the easyCerts module.
  </para>
 </note>
 <para>
  In order to interact with an RBAC-enabled cluster as an administrator, one
  needs to have cluster-admin privileges. By default, when easyCerts is
  enabled, a cluster-admin kubeconfig file is generated and linked into
  <literal>/etc/kubernetes/cluster-admin.kubeconfig</literal> as determined by
  <xref linkend="opt-services.kubernetes.pki.etcClusterAdminKubeconfig"/>.
  <literal>export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig</literal>
   will make kubectl use this kubeconfig to access and authenticate to the
   cluster.
  The cluster-admin kubeconfig references an auto-generated keypair owned by
  root. Thus, only root on the kubernetes master may obtain cluster-admin
  rights by means of this file.
 </para>
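 <para>
  A session as root on the master could, for example, look like this:
<programlisting>
export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig
kubectl get nodes
</programlisting>
 </para>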
</chapter>